Feb 11 • 03:27 UTC 🇰🇷 Korea Hankyoreh (KR)

The Ministry of Education cites 'excessive question replacement' as the reason for the failed adjustment of English exam difficulty

The South Korean Ministry of Education attributes its failure to properly calibrate the difficulty of the English section of the 2026 College Scholastic Ability Test to excessive replacement of exam questions; its improvement plans include enhanced oversight and AI assistance.

The South Korean Ministry of Education has revealed that the failure to properly adjust the difficulty of the English section of the 2026 College Scholastic Ability Test (CSAT) was primarily due to an excessive number of question replacements during exam preparation. Nineteen questions were replaced, far more than in other subjects, which disrupted the thorough assessment of the exam's difficulty. The problem was compounded by a lack of proper verification of the qualifications of the randomly selected committee members responsible for writing the questions.

Following a controversial score report, widely criticized by top-performing students and parents after a record low of just 3.11% of test-takers achieved the highest grade, the Ministry launched a field investigation. The controversy prompted the resignation of the head of the Korea Institute for Curriculum and Evaluation, which administers the CSAT. As part of a comprehensive improvement plan, the Ministry intends to establish specialized committees to scrutinize the difficulty of exam questions and to introduce an AI system to assist in generating English reading comprehension materials.

The Ministry's analysis traced the root of the problem to a change in how exam question committees are appointed. In 2024, the Ministry shifted to randomly selecting members from a pool recommended by the Educational Broadcasting System (EBS), a move intended to enhance fairness. However, the random selection process did not adequately assess members' expertise, and the Ministry concluded that better training and evaluation of exam committees are urgently needed to ensure future exams meet the required standards.
