Rapid advancements in artificial intelligence (AI) have sparked excitement and trepidation within the education sector. As AI systems become increasingly sophisticated, educators are now grappling with how best to use these powerful technologies to enhance the assessment process while addressing valid concerns around fairness, equity and the role of human expertise.
Earlier this year, The Edge Foundation and partners hosted the Next Generation Assessment Conference 2024 at the University of Manchester. The event provided a timely platform to discuss these issues.
Among the seasoned experts at the conference were Dr Miriam Firth (Senior Lecturer in Education Management and Leadership, University of Manchester), Dr Andy Kemp (Principal, The National Mathematics and Science College), and Dr Timo Hannay (Managing Director, SchoolDash), who shared their insights on the transformative potential of AI in assessment. They also addressed some of the challenges we must navigate as we implement these technologies responsibly.
Student perceptions of generative AI in assessment
Dr Miriam Firth opened the discussion by exploring a recent national study (led by students from the University of Manchester) into student attitudes towards AI in assessment. Following analysis of the student-collected data by Jisc, the study found that students and teaching staff share many concerns regarding AI.
In particular, teachers fear malpractice and misuse of the technology, while learners fear that AI-generated content may eventually replace expert-created curricula. However, the study found that AI can enhance the assessment process when students are given appropriate opportunities to engage with the technology: allowing learners to experiment with and test these new tools is critical to improving AI-powered assessment.
Dr Firth noted that while there are valid concerns around the use of AI in assessment, it is essential to approach the challenge with nuance and care. ‘Excellent guidance,’ she said, ‘not harsh policy, is needed to support all stakeholders.’ By giving students a voice and role in shaping the future of AI-powered assessment, we can build trust and implement these technologies in ways that benefit rather than hinder learners.
Using AI to support a more humane, authentic assessment experience
As a college principal, Dr Andy Kemp envisioned AI enabling a radical shift towards a more conversational, personalised assessment approach. Reflecting on his own educational experiences – both as a teacher and student – he said that the most authentic assessments are often dialogues between expert teachers and learners.
Dr Kemp suggested that this is true for everything from low-stakes discussions in school classrooms to high-stakes doctoral vivas. While scaling such an approach has historically been a challenge, Dr Kemp said that generative AI models can now make this conversational, adaptive assessment a reality.
‘By putting together many of the things happening in AI right now, it’s not hard to imagine students conversing with a virtual avatar that explores their understanding of a topic,’ Dr Kemp said. ‘This could create fairer, more authentic assessment, which is more humane, even if it has fewer humans directly involved.’
He argued that such an approach could address the limitations of traditional written exams, where a student’s misunderstanding of a question can prevent them from demonstrating their knowledge of a topic. By creating a dialogue, conversational AI could avoid these issues, delivering a more personalised, meaningful evaluation of student progress.
How machine learning is shaping new approaches to AI-powered assessment
Dr Timo Hannay explored some of the ways in which AI is already transforming assessment. He highlighted how a growing shift towards adaptive assessment systems (often used to track student progress in real time) is making traditional summative assessment techniques less dominant than they once were.
In the past, adaptive assessment systems – which measure a broader range of student ability using questions tailored to each learner’s skillset and skill level – were mainly targeted at quantitative, fact-based domains like STEM subjects. However, the emergence of sophisticated machine-learning systems (such as those created by Thiemo Wambsganss and colleagues at Bern University) means we can now also assess qualitative skills like argumentation and essay writing.
‘In principle,’ Hannay said, ‘no knowledge domains are beyond the reach of assessment by computer.’ However, he cautioned that it is essential to maintain human oversight and accountability in AI-powered assessment. Just as human pilots must evaluate their navigation systems, human educators must always be there to ensure the integrity of the assessment process.
Reflecting a theme common across the conference, he also concluded that new assessment approaches should seek to fill gaps in the data we collect, such as the happiness or well-being of children at school. ‘We must be mindful of measuring the things we value, not just the things that are easiest to quantify.’
As the education sector continues to grapple with the implications of this novel technology, the conference provided an excellent forum for exploring the opportunities and challenges ahead. While acknowledging the need for equitable assessment and academic rigour, the panellists agreed that we should not shy away from using AI in assessment.
As long as it is applied appropriately and with a clear understanding of its limitations, AI can benefit learners in terms of immediate assessment, yes – but also by familiarising them (and, indeed, educators) with an emerging technology that will no doubt transform how we live and work.
A summary of the conference and videos of the main panel discussions are available online.
Olly Newton is Executive Director of the Edge Foundation.