EDUCATION A new model of exam grading has been developed by a group of scientists that could bring greater reliability and accuracy to grading.
GCSE & A-level results appeals cause stress and financial loss for candidates, parents, schools and awarding bodies every year, and the numbers are still increasing. As it stands, an essay worth a C grade at GCSE could easily find itself in the B or D bracket, with reported grading accuracy currently standing at an average of 60% for essay or portfolio type assessments.
A new model of exam grading has been developed by a group of scientists from technology company Digital Assess, with input from senior academics at Cambridge University and Goldsmiths, University of London. It has demonstrated improvements in grading accuracy from the reported current average of 60% to over 98%.
Adaptive Comparative Judgement (ACJ) is based on the Law of Comparative Judgement, which holds that people are better at making comparative, paired judgements than absolute ones.
Under the current system, expert examiners mark every paper according to a mark scheme (an absolute judgement). Research has shown that asking the same markers instead to compare one paper to another side by side, and declare which is better, produces far more consistent results. Repeated across many pairs, the approach can rank every paper in the country fairly and accurately. 98% accuracy is possible with as few as eight rounds of comparisons, and the use of machine learning is being explored to reduce the number of rounds even further.
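The mechanism can be illustrated with a small simulation. The sketch below is an assumption-laden toy, not Digital Assess's actual algorithm: papers have a latent quality that judges never see directly, each round pairs papers at random (real ACJ pairs them adaptively), and a ranking emerges purely from win counts over noisy pairwise judgements.

```python
import math
import random

random.seed(42)

def simulate_acj(true_quality, rounds, noise=1.0):
    """Rank papers purely from noisy pairwise judgements.

    true_quality: latent quality per paper; it only drives the
    probability that a judge prefers one paper over another.
    Returns paper indices ordered best-first by win count.
    """
    n = len(true_quality)
    wins = [0] * n
    for _ in range(rounds):
        order = list(range(n))
        random.shuffle(order)  # toy version: random pairing each round
        for a, b in zip(order[::2], order[1::2]):
            # A judge is more likely (not certain) to pick the better paper.
            diff = true_quality[a] - true_quality[b]
            p_a_wins = 1 / (1 + math.exp(-diff / noise))
            winner = a if random.random() < p_a_wins else b
            wins[winner] += 1
    return sorted(range(n), key=lambda i: -wins[i])

# 20 papers whose true quality is simply their index: paper 19 is best.
quality = list(range(20))
ranking = simulate_acj(quality, rounds=8)
```

Even with imperfect judges, a handful of rounds is enough for the strongest papers to accumulate far more wins than the weakest, which is the intuition behind the reliability figures quoted above.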
ACJ has demonstrated huge potential in use by awarding bodies and institutions worldwide. So far, pilots delivering significantly improved reliability have taken place in Australia, Sweden, Singapore and the US. The process means GCSE and A-Level appeal figures could be dramatically reduced. After all, if your grade is generated via a collective consensus of expert assessors through ACJ, what is there to appeal against?
There are also many more possibilities for using ACJ across education. One recent project created using ACJ is Classical 100, a free online teaching resource provided by ABRSM containing the top 100 recordings of classical music, plus information on the composers and the stories behind the music. The database was ranked by a panel of music experts using ACJ, which allowed the music to be judged and ordered into a definitive ranking according to its suitability for different classroom scenarios.
Another recent case study saw The University of Edinburgh using ACJ to develop a crowdsourcing-style system of peer review, changing the way formative assessment is delivered. This project used ACJ to empower learners to critique their peers’ work against assessment criteria, giving them deeper insight into what makes a good piece of work, capturing feedback across the cohort, enhancing their learning experience and giving them more opportunities to improve and develop.
Used correctly, the ACJ method could allow every exam paper in the country to be compared against one another for a definitive national ranking. This would give a direct way of comparing performance across different regions, types of school, or any other variable, and so a way of identifying the influence of those factors on exam grades.
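Once such a national ranking exists, comparing groups is a matter of aggregation. The hypothetical helper below (not part of any real exam-board system) computes the mean percentile position of each group's papers; a lower mean percentile means stronger performance.

```python
from collections import defaultdict
from statistics import mean

def mean_percentile_by_group(national_ranking, group_of):
    """Average percentile position of each group's papers.

    national_ranking: paper ids ordered best-first.
    group_of: maps paper id -> region, school type, or any other variable.
    """
    n = len(national_ranking)
    positions = defaultdict(list)
    for pos, paper in enumerate(national_ranking):
        positions[group_of[paper]].append(pos / n)
    return {group: mean(ps) for group, ps in positions.items()}

ranking = ["p1", "p2", "p3", "p4"]  # best first
region = {"p1": "North", "p2": "North", "p3": "South", "p4": "South"}
result = mean_percentile_by_group(ranking, region)
# North papers sit at positions 0 and 1 -> mean percentile 0.125
```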
In the USA, a similar system exists in which each school is ranked, and the school’s ranking is factored in when grading pupils: if a school is low performing and a student still achieves a high grade, that grade counts for more than it would at a high-ranking school. Measures of this kind could be implemented to combat educational inequality.
Ultimately, greater reliability and accuracy in grading could be the answer to many of the education system’s current challenges.
Dan Sandhu, CEO
Link to original article can be found here.
EDUCATION Educationalists would quite rightly rather focus their energies and expertise on teaching than testing, but few would disagree that we still need some means of measuring the outcomes of learning. However, the exams system in this country has not fundamentally changed in over a century.
We know that a three-hour written paper can seldom be the truest, fairest and most robust means of demonstrating the outcomes of years of learning. We need more than a hand-written script to represent the skills, knowledge and competencies required for employment in the 21st Century.
Not only does the technology exist to solve the age-old problems of grade inflation, inaccuracy, appeals, inefficiency and unfairness; it has been in use all around us for years.
Everywhere outside schools, in professions from accounting to medicine, IT to construction, career-shaping decisions are based on scientifically and technologically advanced methods of assessment. These assessments, known as Evidence-Based Assessment (EBA), give a more reliable indicator of attainment and allow the individual to demonstrate the best of themselves without the stress and unfamiliarity of the pen-and-paper, exam hall environment.
The concept of EBA is growing in popularity because it focuses on enhancing the learning experience and eliminates “teaching to the test”. It utilises the power of formative assessment to inform summative decisions – that is, it allows the learner to learn, gather evidence, and test and adapt their theories as they go along, just as they would in the workplace, rather than placing all their hopes on “the big day”.
One of the most vocal of the growing number of advocates for the adoption of EBA in schools is the awarding body OCR, part of Cambridge Assessment. And it should know: it provides A Levels and GCSEs in over 40 subjects, but is simultaneously extremely active in the vocational space, where it is already using these cutting-edge assessment methods with successful results.
It is delivering a new suite of qualifications known as the Cambridge Technicals, at different levels and in fields from Art to Business, and from Health & Social Care to Engineering, using a formative e-portfolio assessment system to inform summative achievement decisions. This gives each learner a portfolio of evidence that can be peer-reviewed as well as teacher-reviewed at every stage of the course. The portfolio and journal can then be taken into the employer selection process or the university admissions process, offering a far more meaningful, rounded and reliable demonstration of the learning journey than a written paper would.
All of this can very easily be adopted in schools and colleges. Teachers would far rather be freed up to support learning than be bogged down in marking papers. The necessary reform would be a big one, but by no means insurmountable.
If the process of capturing evidence is introduced right from the beginning of secondary school, this would give the system at least four years to bed in. Ofsted could act as a quality assurance body to ensure evidence is captured correctly, teachers are intervening at the right moments, and schools are addressing common skills gaps in a systematic, coherent manner (such as improving teacher CPD).
It need not even cost the Government any extra money – we need only make better use of the existing investment. If the next generation are to contribute to our 21st Century economy, their destinies should not depend on a 19th Century examination.
Dan Sandhu, CEO
Link to original article can be found here.
Schools are currently paying around £17 million per year to appeal exam grades, and could see that figure rise as high as £179 million within the next five years.
Those numbers have led Professor Richard Kimbell, founder of the London Education Research Unit at the UCL Institute of Education, to call for urgent reforms to the UK’s assessment system.
The average cost of challenging a grade is £34 per paper, in a year that’s seen historically high appeal numbers, with 572,000 exam grade queries submitted and changes made to some 62,000 GCSE results. The education technology specialist Digital Assess – for whom Professor Kimbell acts as an advisor – predicts that given current trends, the number of GCSE and A-Level grades challenged annually could be on course to top 1,300,000 by 2020.
Professor Kimbell maintains that the existing exam appeal system is an unnecessary and stressful process for pupils, parents and schools alike, caused by poor levels of grading accuracy – but that appropriate reforms could eliminate the problem at its source.
In his view, reliability rates are such that any GCSE grade could actually be a level above or below, due to factors beyond the candidate’s performance in the exam hall – including the identity of the marker. “There’s no guarantee that one teacher marking a paper has the same mindset as another teacher marking a different paper,” Professor Kimbell says. “Exam boards do their best to try and standardise the grades, but the reality is that it’s impossible to take individual teachers from all over the country and expect them to operate to the same standard. They won’t.”
“To reduce the numbers of exams appealed each year we need to fundamentally change the way in which we assess exams. The methods and technologies to fix the problems already exist and are proven in use, but are yet to be adopted in schools.”
In terms of solutions, Professor Kimbell suggests two possibilities. The first, ‘Adaptive Comparative Judgement’, would start from the basis that individuals are better at making comparative judgements than they are at making judgements against an absolute scale or criteria.
The same marker would compare papers side-by-side, declaring which they deem to be better. Under such a system, exam boards could produce a reliable ranking of all candidates, against which grade boundaries would be applied. Over time, papers from previous years could then be seeded as controls to maintain consistency and guard against grade inflation.
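Turning such a ranking into grades is mechanically simple. The sketch below assumes purely illustrative percentile cut-offs (real boundaries are set by awarding bodies) and maps a best-first ranking of candidates to grades.

```python
def apply_grade_boundaries(ranking, boundaries):
    """Map a best-first ranking of candidates to grades by percentile.

    boundaries: (grade, top_fraction) pairs in descending order,
    e.g. ("A", 0.2) means the top 20% receive at least an A.
    Illustrative cut-offs only.
    """
    n = len(ranking)
    grades = {}
    for position, candidate in enumerate(ranking):
        percentile = position / n  # 0.0 = best-ranked candidate
        for grade, top_fraction in boundaries:
            if percentile < top_fraction:
                grades[candidate] = grade
                break
        else:
            grades[candidate] = "U"  # below the last boundary
    return grades

ranking = ["cand%02d" % i for i in range(10)]  # already best-first
boundaries = [("A", 0.2), ("B", 0.5), ("C", 0.8)]
grades = apply_grade_boundaries(ranking, boundaries)
# cand00-01 -> A; cand02-04 -> B; cand05-07 -> C; cand08-09 -> U
```

Seeded control papers from previous years would occupy known positions in the ranking, letting boards anchor these cut-offs year on year and guard against grade inflation.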
The second approach would draw on ‘Evidence-Based Assessment’ as an alternative to end-of-course exams, as is currently practised in a range of career paths, including engineering and medicine. According to Professor Kimbell, “It is a more statistically reliable indicator of attainment and allows the individual to demonstrate the best of themselves, without the stress and unfamiliarity of the pen-and-paper, exam hall environment.”
Professor Kimbell’s conclusion is that, “Both these concepts allow for learners’ expertise to be demonstrated in real tasks, and the comparative judgement tool allows multiple teachers/judges to contribute their expertise to the assessment. The role of the Chief Examiner then becomes far more manageable, since it does not involve trying to make all examiners think exactly alike on the minutiae of a mark scheme.”
Original post can be found here.