Reacting to the use of AI in exam grading: "Can weaken academic integrity"
Students at the University of Bergen express concerns regarding the use of AI in the grading process, claiming it may bias the assessment of their exam papers.
Kaia E. Lehmann, a medical student at the University of Bergen, has raised alarms about the university's use of an AI model to evaluate exam answers. The AI was designed to identify the bottom 25% of submissions, which were then given a more thorough review by a grading committee. The remaining 75% of exams were graded by human assessors but without the same level of scrutiny. This approach has left many students, including Lehmann, apprehensive about its potential impact on academic integrity.
Lehmann, who is currently traveling in Peru with fellow students from her cohort, says the concern is shared across the group. A central argument against the AI's involvement is that it can introduce bias into the grading process: assessors may form preconceived notions about the quality of an exam based on the AI's initial classification. Lehmann argues that such a procedure undermines the objectivity required in academic evaluations, making it harder for assessors to appraise each submission on its own merits.
The introduction of AI into exam grading raises significant questions about fairness and integrity in academia. As technology becomes increasingly integrated into educational processes, it challenges traditional methods of evaluation. Students argue that retaining human oversight is crucial to maintaining impartiality and ensuring that all candidates are assessed on an equal footing. Their concerns may prompt the University of Bergen and other institutions to reevaluate their use of AI in academic assessments in order to preserve the integrity of their programs and bolster student trust.