Schools are currently paying around £17 million per year to appeal exam grades, and may see that figure rise as high as £179 million within the next five years.
Those numbers have led Professor Richard Kimbell, founder of the London Education Research Unit at the UCL Institute of Education, to call for urgent reforms to the UK’s assessment system.
The average cost of challenging a grade is £34 per paper, in a year that’s seen historically high appeal numbers, with 572,000 exam grade queries submitted and changes made to some 62,000 GCSE results. The education technology specialist Digital Assess – for whom Professor Kimbell acts as an advisor – predicts that given current trends, the number of GCSE and A-Level grades challenged annually could be on course to top 1,300,000 by 2020.
Professor Kimbell maintains that the existing exam appeal system is an unnecessary and stressful process for pupils, parents and schools alike, caused by poor levels of grading accuracy – but that appropriate reforms could eliminate the problem at its source.
In his view, reliability rates are such that any GCSE grade could actually be a level above or below, due to factors beyond the candidate’s performance in the exam hall – including the identity of the marker. “There’s no guarantee that one teacher marking a paper has the same mindset as another teacher marking a different paper,” Professor Kimbell says. “Exam boards do their best to try and standardise the grades, but the reality is that it’s impossible to take individual teachers from all over the country and expect them to operate to the same standard. They won’t.”
“To reduce the numbers of exams appealed each year we need to fundamentally change the way in which we assess exams. The methods and technologies to fix the problems already exist and are proven in use, but are yet to be adopted in schools.”
In terms of solutions, Professor Kimbell suggests two possibilities. The first, ‘Adapted Comparative Judgement’, would start from the basis that individuals are better at making comparative judgements than they are at making judgements against an absolute scale or set of criteria.
Markers would compare two papers side by side, declaring which they deem the better. Under such a system, exam boards could produce a reliable ranking of all candidates, against which grade boundaries would be applied. Over time, papers from previous years could then be seeded as controls to maintain consistency and guard against grade inflation.
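To illustrate how many pairwise judgements can be aggregated into a single ranking, here is a minimal sketch in Python. It assumes a simple Bradley–Terry model fitted with the classic iterative (Zermelo) update; the function name, the data shape, and the example paper identifiers are illustrative, not part of any exam board’s actual system.

```python
from collections import defaultdict

def rank_papers(judgements, iterations=200):
    """Rank papers from pairwise judgements.

    judgements: list of (winner, loser) pairs, where each element is a
    paper identifier and 'winner' was judged the better script.
    Returns paper ids sorted from strongest to weakest under a
    Bradley-Terry model.
    """
    papers = {p for pair in judgements for p in pair}
    wins = defaultdict(int)    # total comparisons won per paper
    games = defaultdict(int)   # head-to-head comparison counts per pair
    for winner, loser in judgements:
        wins[winner] += 1
        games[frozenset((winner, loser))] += 1

    # Iteratively re-estimate each paper's latent 'quality' score.
    strength = {p: 1.0 for p in papers}
    for _ in range(iterations):
        new = {}
        for p in papers:
            denom = 0.0
            for q in papers:
                if q == p:
                    continue
                n = games.get(frozenset((p, q)), 0)
                if n:
                    denom += n / (strength[p] + strength[q])
            new[p] = wins[p] / denom if denom else strength[p]
        # Normalise so the scores don't drift between iterations.
        total = sum(new.values())
        strength = {p: s * len(new) / total for p, s in new.items()}

    return sorted(papers, key=strength.get, reverse=True)

# Hypothetical judgements: A beats B twice, B beats C twice, A beats C once.
order = rank_papers([("A", "B"), ("A", "B"),
                     ("B", "C"), ("B", "C"),
                     ("A", "C")])
```

Grade boundaries would then be applied to the resulting ranking rather than to raw marks, which is what removes the dependence on any one marker’s interpretation of a mark scheme.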
The second approach would draw on ‘Evidence-Based Assessment’ as an alternative to end-of-course exams, as is currently practised in a range of career paths, including engineering and medicine. According to Professor Kimbell, “It is a more statistically reliable indicator of attainment and allows the individual to demonstrate the best of themselves, without the stress and unfamiliarity of the pen-and-paper, exam hall environment.”
Professor Kimbell’s conclusion is that, “Both these concepts allow for learners’ expertise to be demonstrated in real tasks, and the comparative judgement tool allows multiple teachers/judges to contribute their expertise to the assessment. The role of the Chief Examiner then becomes far more manageable, since it does not involve trying to make all examiners think exactly alike on the minutiae of a mark scheme.”