Generative Artificial Intelligence Impact on Education | How ChatGPT and AI Are Reshaping Remote Assessments

The integrity of remote assessment in higher and further education has been a major concern since the launch of ChatGPT in November 2022. Numerous studies have demonstrated that generative AI (GAI) can pass exams in a wide range of subjects.

A group from The Open University (OU) has been examining the robustness of various assessment forms in light of GAI (generative artificial intelligence), with funding from the NCFE Assessment Innovation Fund. The results confirm the difficulties of detecting GAI and offer guidance for designing assessments that are not only more resistant to GAI but also focus more heavily on the human “value-addedness” that teachers truly wish to evaluate.

Problems with Generative AI Detection in Assessments

Complete information about the methodology and results can be found in the research report. Our conclusions from these results are presented in this article.

One unexpected discovery was that GAI could pass almost all of the assessment types, despite the great diversity of assessment formats. Only one of the GAI scripts received a fail (a role-play question with a clear marking rubric). However, as study levels rose, GAI scores dropped dramatically, suggesting that GAI was less able to meet the stronger critical-thinking demands of higher-level study.

Another surprise was that, although training on GAI hallmarks was highly regarded by markers and significantly improved their capacity to identify GAI scripts, the percentage of genuine student scripts mistakenly classified as GAI (false positives) also rose significantly. The research suggests that the recognized “hallmarks” of GAI are better used to guide the design of questions and marking guidance than to detect GAI.

Generative AI Hallmarks in Education


Even though student scripts included the GAI hallmarks, some of them, such as not using the module materials, were more prevalent in the GAI scripts (at the OU, these materials are behind a paywall, reducing exposure to GAI for the time being). Additionally, some characteristics showed up differently for GAI and for students. As one marker noted, “students just fail to put forward a convincing argument throughout, while GAI tends to provide both sides of an argument and then it’s not coming to a conclusion.”

The report identifies these hallmarks as characteristics shared by low-performing learners’ responses and generative AI answers. When assessors find them in student work, whether the cause is poor study skills or misuse of AI, the authors advise providers to focus on offering the learner additional support. While this approach may seem contentious, it provides a workable response to the detection issue and also strengthens students’ study skills.

The question types that stood out as more robust were those that combined the lowest GAI grades with the best detection performance (high GAI detection and low false positives):

  • Role-play questions, which require students to apply what they have learned in the module to real-world situations while critically selecting what to apply.
  • Reflections on work practice, provided they are supported by concrete examples rather than the generalizations offered by GAI.

These kinds of questions fit the commonly used definition of “authentic assessment.” According to the research, remote assessments could be strengthened by requiring students to apply what they have learned, provide evidence for their answers, and draw specific conclusions that are well supported by marking guidelines.

This study builds on earlier research demonstrating that GAI can pass written exams. The new findings present the education sector with two challenges. First, can providers adopt more authentic assessment formats, which help prepare students for the workplace and reduce the likelihood that they will misuse GAI?

Second, is the sector prepared to teach students the proper use of AI in assessment while maintaining the validity and reliability of the resulting evidence as a gauge of a learner’s knowledge or abilities?
