
Comparative Tech and Meaningful Assessment

July 2017

Adaptive Comparative Judgement in Assessment

For students, the process of assessment should always be an integral and valuable part of the learning experience. Receiving feedback that is both informative and meaningful enables students to consistently improve their work. Yet for many, the traditional assessment model fails to meet these objectives.

The problem is entrenched in the very language of assessment. Students often struggle to interpret feedback written against an abstract mark scheme in inaccessible ‘marker speak’, which is frequently not directly applicable to the work they have produced. It gives them no clear indication of what specifically needs to be improved and how, or of where they sit in relation to their peers. In short, assessment is often ‘done to’ students rather than something they actively participate in.

None the wiser

Instead of being helped by the process, students are often simply made to realise that their work hasn’t met a certain standard, while remaining none the wiser as to what a ‘good’ piece of work actually looks like. If a student cannot decode the mark scheme and see how it relates to their work, which frankly requires substantial effort on their part, they will never improve. How can they do better next time if they don’t engage with or respond to the feedback given?

The assessment process doesn’t have to be this way. An intuitive, scalable, digitised approach, which incorporates formative assessment software based on Adaptive Comparative Judgement (ACJ), has the power to truly engage learners and transform the entire assessment process. It has also been shown to significantly increase student satisfaction and attainment, the latter by as much as 14%.

Developed by academics from Cambridge University and Goldsmiths, University of London, the ACJ approach is based on the Law of Comparative Judgement, first devised by the psychometrician Louis Thurstone in the 1920s, which holds that people are better at making comparative, paired judgements than absolute ones. For example, it is easy to compare the temperature of two rooms and say which is warmer, but it is much more difficult to say with absolute certainty what the precise temperature is in each.
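To make that idea concrete, the sketch below shows in Python how a pile of ‘A beats B’ judgements can be turned into a quality scale. This is an illustration only, using a simple Bradley–Terry style update (a close relative of Thurstone’s model), not the actual algorithm inside any ACJ product; all names and parameters here are hypothetical.

```python
import math
import random

def fit_scores(items, judgements, lr=0.1, epochs=200):
    """Estimate one quality score per item from pairwise judgements.

    `judgements` is a list of (winner, loser) pairs. The chance that
    item a beats item b is modelled as sigmoid(score_a - score_b),
    and each observed judgement nudges both scores accordingly.
    """
    scores = {item: 0.0 for item in items}
    judgements = list(judgements)  # copy so shuffling stays local
    for _ in range(epochs):
        random.shuffle(judgements)
        for winner, loser in judgements:
            # Predicted probability of the observed outcome.
            p = 1.0 / (1.0 + math.exp(scores[loser] - scores[winner]))
            # Move both scores in the direction of the judgement.
            scores[winner] += lr * (1.0 - p)
            scores[loser] -= lr * (1.0 - p)
    return scores

# Three anonymised pieces of work and four paired judgements.
work = ["A", "B", "C"]
judged = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
ranking = sorted(fit_scores(work, judged).items(), key=lambda kv: -kv[1])
print(ranking)  # A should come out on top, then B, then C
```

No judge ever assigns an absolute mark; a ranking emerges purely from repeated ‘which is better?’ decisions.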

World of comparisons

Applying this same logic to assessment, learners can log into a digital platform, look at two pieces of work side by side and identify for themselves which better meets the desired learning outcomes. The software then generates repeated rounds of comparisons using an iterative, adaptive algorithm, enabling students to rank the work and recognise a spectrum of quality. Instead of receiving their own feedback in isolation, students gain a much clearer picture of exactly how they are performing and what precise improvements they could make. You can hear a student’s perspective on this by watching this short student interview here.
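The ‘adaptive’ element refers to how the next pair is chosen. Real ACJ systems do this with proper statistical modelling (Rasch-style estimation); as a rough, hypothetical sketch of the intuition, one simple heuristic is to pair the two pieces of work whose current estimated scores are closest, since near-ties yield the most informative judgements:

```python
def next_pair(scores, already_seen):
    """Pick the next pair to put in front of a judge: the two
    not-yet-compared items whose current scores are closest
    (a crude stand-in for ACJ's adaptive pairing).
    """
    ranked = sorted(scores, key=scores.get)
    best, best_gap = None, float("inf")
    # Scan adjacent items in the ranking: they are the closest candidates.
    for a, b in zip(ranked, ranked[1:]):
        gap = scores[b] - scores[a]
        if frozenset((a, b)) not in already_seen and gap < best_gap:
            best, best_gap = (a, b), gap
    return best  # None once every adjacent pair has been judged

# Example: with these interim scores, B and A are the nearest rivals.
interim = {"A": 1.2, "B": 0.9, "C": -0.4}
print(next_pair(interim, already_seen={frozenset(("A", "C"))}))  # ('B', 'A')
```

Paired with the earlier `fit_scores` sketch, the loop would be: collect a judgement, re-estimate scores, then offer the next closest pair to a judge.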

As such, educational institutions should seek to embrace anonymous online peer review systems like the one described here, which enable students to participate actively in the assessment process. Engaging students in collaborative critique makes the logic of assessment far more transparent, and a clear understanding of how their work is perceived by others will undoubtedly influence the way they tackle future assignments.

Numerous institutions worldwide are already trailblazing the ACJ approach through this innovative software, but it has yet to be used to its full potential. If the technology exists to make the assessment process clearer and more accessible, and to generate more meaningful feedback, why don’t we embrace its widespread application?

Matt Wingfield, Chief Business Development Officer

A link to the original story can be found here.