TIP 1: Know the process.
A science test is an instrument we use to probe minds. I use the word "instrument" deliberately, on several levels, but think of a beautifully crafted telescope.
We are looking to make a measurement of knowledge, understanding, skill - let's call that the latent construct.
The Process.
We identify a latent construct we want to measure,
translate this into something to solve, where solving implies they "get" the concept,
wrap this in language[s] (words, and all the graphical language types of science) to create a test item.
The student has to work through the wrapper of our language[s],
decipher the task being asked,
reference their understanding of the concept,
apply their understanding to solve the task,
conceptualise a solution to the task,
encode their solution in language[s].
The marker decodes (if legible) the student's language wrapper,
then infers their level of understanding from the solution.
To illustrate, what mark would you award this student's response?
Have they demonstrated the ability to correctly assign voltage and current to the IV / DV? This might be an interesting opening discussion for your next science department meeting...
>> Some Implications
It takes time to build good test items, because it is hard work to be concise and unambiguous, and to construct a task whose solution genuinely reflects the capacity we are testing.
Use key verbs to make the task clear (to both examiner and student)
Have someone other than the author - someone whose allocated role is to rip the test to bits - test it to destruction and look at ALL the ways it could be misconstrued. OK, so feelings matter in how that feedback gets communicated, but don't skip this step: allocate time in your planning schedule for assessors to evaluate the test.
Marking a test is worthy of an article in itself, and something I'll explore later (subscribe to make sure you don't miss it).
TIP 2: Know Cognitive Load Theory
... and how it applies to building test instruments.
You might think that having a diagram AND descriptive words conveying the same thing would be helpful... that sounds sensible, right? In fact, cognitive load theory is one of the best-evidenced, yet weirdest, findings from recent educational research. If you are not conversant with it, here is a beautiful, short summary: Cognitive load theory: Research that teachers really need to understand
>> Some Implications
Arrange multiple-choice options from short to long, and from low numbers to high.
Choose one language type to convey information (put it all in the diagram, OR text, not split across both or repeated).
Avoid "noise" on the test paper, such as "question 23 continues on page 32". This is extraneous cognitive load (that sentence alone potentially uses 2 of the precious 7±2 slots in short-term memory).
Ensure the picture adds relevant context and is not just pretty decoration. For instance, for a question about a circuit in a caravan, an image of a camping caravan might provide the context in which the task can be better understood (rather than a row of camels, also a caravan).
TIP 3: Start fresh and exploit the neuroscience of learning
Don't start with last year's exam! Why not? Fresh thinking is needed, so don't constrain your imagination in the cage of yesterday.
However, do review the worksheets / tasks / practical investigations that form the common experience of the cohort. Consider co-opting diagrams and artwork they have already seen as the stimulus for new questions.
Several benefits arise from this. Firstly, the cognitive load for students will be lower than with a virgin stimulus. Secondly, it helps revive the memories of the "first learning", which has massive benefits for memory consolidation and thus persistent learning. Thirdly, it rewards those of your students who reviewed the material and put in the effort to prepare for the test, and those who engaged actively with the task the first time. It sends the message: effort (in class and/or revising) was worth the investment.
>> Some Implications
If your school sets a common test across multiple classes, consider mandating some common experiences that are pivotal to the topic.
Review your published "learning outcomes" - do they adequately describe the latent constructs of the topic? There is a constant tussle in schools; too many outcomes seems too much for a student to learn, but too few and they end up being arcane (you need to be an expert in the subject to interpret them).
TIP 4: Great Wall Steps Audit
On the tiny part of the Great Wall of China I walked, where the path went from level ground to up a hill, the risers on the steps were low at first, getting higher as you climbed. It was strange after living in Australia, where the building code expressly prohibits uneven risers in a staircase - but it makes a useful metaphor nevertheless.
Since it takes time and cognitive load to interpret a stimulus, use it like a narrative to ask a series of questions, starting with easy low risers and progressing up in difficulty. Use key verbs like identify early on, progressing to describe, explain, justify, construct, etc.
>> Some Implications
A key part of the assessor's role should be a quick audit. A simple 3-column ledger, roughly arranged against a 3-level Bloom's scale (low : middle : high), can be so enlightening. Aim for roughly 30:50:20 of the marks and you will have a balanced test.
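To make the audit concrete, here is a minimal sketch in Python. Everything in it is hypothetical - the item list, mark values, and level tags are invented for illustration, and in practice the assessor assigns each question's Bloom's level by judgement while reading the paper:

```python
from collections import Counter

# Hypothetical test blueprint: each item is (question_number, marks, bloom_level),
# where bloom_level is "low", "middle", or "high" as judged by the assessor.
items = [
    (1, 2, "low"), (2, 2, "low"), (3, 3, "middle"), (4, 4, "middle"),
    (5, 3, "middle"), (6, 4, "high"), (7, 2, "low"),
]

def bloom_audit(items):
    """Return the percentage of total marks sitting at each Bloom's level."""
    total = sum(marks for _, marks, _ in items)
    ledger = Counter()
    for _, marks, level in items:
        ledger[level] += marks
    return {level: round(100 * m / total) for level, m in ledger.items()}

# Compare the result against the rough 30:50:20 target.
print(bloom_audit(items))  # → {'low': 30, 'middle': 50, 'high': 20}
```

Note the audit weights by marks rather than counting questions: one 8-mark "high" question shifts the balance far more than an extra 1-mark multiple-choice item does.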
TIP 5: Learn from previous tests
When writing the marker's comments, set aside a space for reflections on the test itself. Which questions were misinterpreted by more than one student? If you could have interviewed them, would you have found that this group of students had the capability the question was probing, but struggled somewhere in "the process"?
>> Some Implications
If you are not already, collect question-by-question (and class-by-class) data, as it can be quite revealing.
Create a template for your "solutions / marking guide" that includes a reminder to evaluate the test itself.
Program time in the appropriate department meetings to share those insights - your own "Meet the Markers"! The BEST PD for all test writers! (Yep, this is hard, as it is usually near the end of a term when reports are also due and everyone is hanging for a break - I'm living the same reality :) )
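As a sketch of what question-by-question (by class) data can reveal, here is a small Python example. The class names, question labels, marks, and the 40% flag threshold are all invented for illustration - choose your own cut-off based on how the question was pitched:

```python
# Hypothetical data: scores[class_name][question] = marks each student earned
# on that question, out of max_marks[question].
max_marks = {"Q1": 2, "Q2": 3, "Q3": 4}
scores = {
    "9A": {"Q1": [2, 2, 1, 2], "Q2": [3, 2, 3, 1], "Q3": [0, 1, 0, 1]},
    "9B": {"Q1": [2, 1, 2, 2], "Q2": [2, 3, 2, 3], "Q3": [1, 0, 0, 0]},
}

def facility(marks, maximum):
    """Facility index: mean mark as a fraction of the maximum (0.0 to 1.0)."""
    return sum(marks) / (len(marks) * maximum)

# Flag questions where every class scored below 40% - candidates for the
# "misinterpreted, or genuinely too hard?" discussion at the markers' meeting.
for q, top in max_marks.items():
    per_class = {c: facility(scores[c][q], top) for c in scores}
    if all(f < 0.4 for f in per_class.values()):
        print(f"Review {q}: " + ", ".join(f"{c}={f:.0%}" for c, f in per_class.items()))
```

A question that is weak in every class points at the item itself; one that is weak in a single class points at a gap in that class's common experience of the topic.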