Karim Derrick writes in the THES: Chris Rust’s article “Unknown qualities” (Features, 13 November) raises interesting questions about UK degree standards.
Why shouldn’t universities be exposed to the same levels of scrutiny as schools, especially considering the significant amount of money being spent on tuition fees each year? In the schools sector, a great deal of energy has gone into ensuring that the different accreditation bodies have comparable standards. For example, exam papers, questions and grading systems across different awarding bodies are compared using an approach called “comparative judgement”. By harnessing an assessor’s ability to make professional judgements, this approach ensures that a pupil who receives a grade A in maths from Edexcel would receive the same grade from AQA.
Such an approach would need to scale up when it comes to degrees, as each university is its own awarding body. However, assessment technology allows institutions to deploy an adaptive and collaborative web-enabled version, providing universities with a highly reliable and scalable way to support summative judgement across a wide variety of coursework.
With the rise in tuition fees, it seems only reasonable that we begin to see a more comparable degree system. Improving grading standards in degrees will mean that students can make a more informed decision on where to study and that they will receive an accurate grade reflecting their abilities and efforts.
Chief Operating Officer, TAG Assessment
The University of Edinburgh is set to increase student attainment through a new peer review and feedback system. Following a successful trial with the School of Physics, TAG Assessment today announced that it will be supporting the Edinburgh Award with its CrowdAssess platform.
The University will look to get richer feedback through peer assessment; CrowdAssess facilitates student peer review and feedback to help students increase engagement, attainment and grades. The University has signed a new contract to capture and assess students’ employability skills as part of the Edinburgh Award.
Ian Pirie, Assistant Vice-Principal of Learning and Development, University of Edinburgh, said: “The University is constantly exploring ways to improve feedback, as well as increasing assessment consistency, in order to improve the quality of student learning and their academic attainment. We were very impressed with the results from the School of Physics trial, which saw a 14% increase from anticipated to actual grades, and therefore it seemed an obvious choice for further pilot studies at scale in our Edinburgh Award. In these early trials we have found CrowdAssess to be extremely effective and it has already enabled our students to review and anonymously peer-assess each other to a very high degree of reliability. Tutors have also confirmed that this assessment is consistent with the University’s marking criteria. Overall, CrowdAssess has already proved successful with our students and tutors alike, and its use is being scaled up significantly during this academic year.”
The Edinburgh Award is designed to help students develop their professional and study skills by working with other students and peer mentors. The Award encourages students to invest in their personal strengths, personal development and self-awareness to improve their employability prospects and stand out from the crowd in the workplace.
Students can take the Edinburgh Award in up to 32 subject areas across the University’s schools and departments. Tutors from these departments are encouraged to volunteer to provide Edinburgh Award students with feedback and support; however, peer mentoring and reviewing are key in allowing students to develop their soft skills. CrowdAssess will be used in 24 of the 32 Edinburgh Award courses and by 500 students.
CrowdAssess allows students to anonymously peer review each other’s work and give constructive feedback through adaptive comparative judgement (ACJ). ACJ is a reliable alternative to traditional exam script marking in which judges are presented with pairs of student work and asked to choose which is better. Through CrowdAssess, the student assessor views successive pairs of students’ work and makes a professional judgement on which of the two is better. An algorithm that reacts to each judgement assigns a score to every piece of compared work; the pieces are then ranked, and grades can be calculated from this order. Assessors are also asked to leave feedback on their peers’ work suggesting how it could be improved. The system captures all of this constructive feedback and reports it to the appropriate student to inform their future plans and essays.
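The way pairwise judgements can be turned into a ranking may be illustrated with a short sketch. This is not CrowdAssess’s actual algorithm, which has not been published; it is a minimal Elo-style logistic update over pairwise “which is better” decisions, and the function names, script labels and K-factor below are assumptions made purely for illustration.

```python
# Minimal sketch of comparative-judgement scoring: each pairwise
# judgement nudges the winner's score up and the loser's down, and
# a rank order (from which grades could be drawn) emerges.
import math

def expected(score_a: float, score_b: float) -> float:
    """Modelled probability that work A beats work B (logistic model)."""
    return 1.0 / (1.0 + math.exp(score_b - score_a))

def judge(scores: dict, winner: str, loser: str, k: float = 0.5) -> None:
    """Update both scripts' scores after one pairwise judgement."""
    p_win = expected(scores[winner], scores[loser])
    scores[winner] += k * (1.0 - p_win)
    scores[loser] -= k * (1.0 - p_win)

# Five anonymised scripts start with equal scores.
scores = {s: 0.0 for s in ["A", "B", "C", "D", "E"]}

# A stream of assessor judgements: (better, worse) pairs.
judgements = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "E"),
              ("A", "D"), ("B", "E"), ("A", "E"), ("D", "C")]
for better, worse in judgements:
    judge(scores, better, worse)

# Rank order emerges from the accumulated scores, strongest first.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

In an *adaptive* version, the system would also choose which pair each assessor sees next — typically scripts with similar current scores, since those comparisons carry the most information.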