After relationship, assessment is the most crucial tool a teacher has. It is the windscreen of effective education: informing the road ahead while valuing the learning visible in the rearview mirror.
In secondary schools, assessing a paper artefact that students submit, whether at the end of a standard one-size-fits-all test or as a hand-in task, has become dominant. It relies on a student's language skills: decode the question that was encoded in language, work out the solution, and then re-encode it in the unambiguous language the marker "likes". But now everyone has access to a language-processing machine: Large Language Models.
While authenticating a test (ensuring this is the work of that student) is straightforward, the tricks we used to authenticate hand-in tasks no longer work in the age of AI. Back then (all the way back in November 2022, as if that was soooo long ago) we could create tasks that were largely cut-and-paste-from-the-internet-proof by asking students to translate domain knowledge into novel contexts (e.g., have Volta and Galvani debate their different models of electricity in a hypothetical reality TV show called "Shocking Truth"). AI does that translation job brilliantly; paste that scenario into your favourite generative AI chatbot and see for yourself.
Our colleagues in Primary and Early Learning are not so wedded to the paper artefact. My sister-in-law makes daily assessments of her five-year-old students' social abilities by direct observation. Her evaluation of a student's collaboration skills is no less valid because it doesn't include written answers to a written test. However, for all sorts of reasons, many of them good ones, we secondary teachers have relied very heavily on pen meeting paper to gain insights into the internal learning of our students.
Perhaps the challenges that AI has precipitated for educators (right now the focus is on re-capturing non-test assessment) are an opportunity to re-examine assessment and to explore different instruments for measuring learning.
A focus on assessing the student, not the paper.
My teaching colleagues and I implemented interview-as-assessment. Our students in Years 8 and 9 completed an extended student research project (in science). Observational assessments were made along the way by their classroom teacher. At completion, each student had a private 5-6 minute interview with their teacher, to which they were allowed to bring any resource from their project. It is important to emphasise that this was not a presentation, but a conversation. Questions were not scripted, because the goal was a genuine discussion. Many students reported high levels of enjoyment, with some commenting on the novelty of having an adult's undivided attention for a full five minutes - not talking at them, but truly listening to them. Combined with the observational assessments and a highly formatted two-page written outline of the project, these three parts formed a valid and authenticated evaluation of learning. The logistics of freeing up teachers for interviews are challenging, but solvable - as our experience proves.
Full disclosure: this was BAE (Before the AI Era) and was motivated by a desire to assess learning with less emphasis on written language - although authentication was still a major motivator for the experiment. However, such a rethink of ways to validly assess learning is perhaps just what the AI revolution requires. Language, including written language, will always be a vital assessment medium - just as it is in social discourse. But using alternative assessment tools can be a game-changer for many students - especially boys. An instrument like this might be worth adding to your ensemble in this new world of AI in education. What other assessment instruments can we explore?
by Roger Kennett, Learning Forge