Has the exam grading system had its day?

Charles Robertson / Alamy

Erratic marking is damaging the credibility of the exams system and there’s a disturbing upward trend in appeals made – it’s time for a change, says Karim Derrick

It goes without saying that grades have always been a sensitive issue for pupils, educators and the government. This year, Ofqual reported that some 77,400 qualification grades had been changed, up from 54,400 in 2013.
The exams watchdog also said that, in total, schools lodged more than 450,000 appeals against GCSE and A-level results – a 48 per cent rise in just 12 months.

This figure reveals a disturbing upward trend in the sheer number of appeals made. The Department for Education (DfE) has noted this issue is causing a “great deal of concern”; it’s aware that erratic marking is damaging the credibility of the exams system.

Despite this, no serious action has been taken to make assessment more reliable. If nothing changes, the rate of appeals will likely rise further and the credibility of the UK's exams system will continue to be questioned.

The fundamental flaw in today's assessment system is reliability: however well trained assessors are, and however meticulously the system is moderated, it remains prone to bias – the key cause of inaccurate marking.

The Government has tried to solve this by pushing changes within the existing system, though its attempts have only created wider-reaching problems. Recently, for example, we have seen coursework reduced in favour of more traditional exams.

The decision to abandon speaking and listening tests in English GCSEs is just one example of this. Another, more recent, was the decision to no longer count science practicals towards A-level grades.

So, while education is trying to make assessment reliable by simplifying it, the world of business is lamenting a system that churns out pupils without the skills needed for the new global economy.

Compounding both of these issues is the fact that the UK education sector is inherently risk averse. The Government and Ofqual have not prioritised innovation within the sector; as a result, they have ended up trying to improve on what has always been done instead of thinking outside the box.

New approaches and technologies are already available to improve what and how we assess. The UK education technology industry is well-regarded in the global marketplace for such innovations, yet they are being adopted in countries such as Sweden, Singapore and the US much faster than in the UK.

Some UK institutions are thinking outside the box. Agored Cymru, an awarding body and provider of vocational courses, has adopted a web-based approach to assessment. This has helped it not only assess practical skills with greater validity but also take the legwork out of assessment, so that it can assess more frequently.

One institution in particular has pioneered such approaches: the Technology Education Research Unit at Goldsmiths University, headed by Professor Richard Kimbell with support from the Cambridge academic Alastair Pollitt.

The unit first started looking into the issue of assessment in 2002, proposing to replace the increasingly complex marking schemes used today with Adaptive Comparative Judgement (ACJ).

With this approach, assessors are presented with pairs of pieces of student work and asked to judge which is better. By harnessing assessors' own ability to make professional judgements, and by deploying a collaborative online version of ACJ, groups of assessors can review any set of student work – including oral or video-based work as well as project portfolios – on a local or national scale, and with significantly higher reliability.
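The pairwise mechanism can be sketched in a few lines of code. This is a minimal illustration rather than the unit's actual system: real ACJ selects pairs adaptively and fits a statistical (Rasch-style) model to many judges' decisions, whereas this sketch uses a fixed round-robin of comparisons and simple win counts, with hypothetical portfolio names and hidden quality scores standing in for judges' expertise.

```python
def judge(a, b, qualities):
    # A judge picks the better of two pieces of work. Simulated here as a
    # deterministic comparison of hidden quality scores; in reality this
    # is a human assessor's professional judgement.
    return a if qualities[a] > qualities[b] else b

def rank_by_comparative_judgement(scripts, qualities):
    # Round-robin pairing: every piece of work is compared with every other.
    # (Real ACJ chooses informative pairs adaptively instead.)
    wins = {s: 0 for s in scripts}
    for i, a in enumerate(scripts):
        for b in scripts[i + 1:]:
            wins[judge(a, b, qualities)] += 1
    # Work judged better more often ranks higher.
    return sorted(scripts, key=lambda s: wins[s], reverse=True)

# Hypothetical portfolios and quality scores (not from the article).
qualities = {"portfolio_A": 0.9, "portfolio_B": 0.4, "portfolio_C": 0.7}
ranking = rank_by_comparative_judgement(list(qualities), qualities)
print(ranking)  # ['portfolio_A', 'portfolio_C', 'portfolio_B']
```

The appeal of the approach is that no assessor ever assigns an absolute mark against a complex scheme; each decision is a simple "which is better?" comparison, and the ranking emerges from many such decisions combined.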

This approach has been tested on a large scale in a number of varied subject and discipline contexts to great effect, increasing the reliability of assessments significantly.

Further to this, it has even been trialled in the UK by our regulators to standardise grading across exam boards. So our Government and Ofqual can continue as they are, implementing stopgap measures, or they can turn to the UK education technology sector for a reliable way to assess – and provide pupils with an education that equips them to succeed in the modern workplace.

Karim Derrick, Chief Operating Officer