December 7, 2021

Is artificial intelligence a threat to academic integrity?

When it comes to education, artificial intelligence (AI) can be a double-edged sword. On the one hand, AI-powered tools are widely used to support academia in its work, from preventing plagiarism to uphold academic integrity to helping students optimize their writing styles and reach their true potential. On the other hand, as AI develops in its capabilities and becomes ‘smarter’, students will be able to harness these tools to have high-quality essays and articles written on their behalf. But are educational institutions ready to tackle this challenge in the near future?

Emergence of machine-generated human-like text

Over the past two years, the Covid-19 pandemic forced the world of education into a hasty move to online learning and teaching – a move many were ill-prepared to make. At the same time, online education saw a significant increase in cheating and academic dishonesty cases. Researchers and academics believe this increase can be attributed to the easy shortcuts available to students, such as access to ghostwriters. If we now add AI to this mix, academia might have an even bigger challenge on its hands.

The past few years have seen the ability of AI to generate human-like content jump exponentially. Take, for example, GPT-3 (Generative Pre-trained Transformer 3) – a language model that uses machine learning to produce human-like text. GPT-3, created by OpenAI (an organization co-founded by Elon Musk), is one of the largest language models ever trained. Another is the Megatron-Turing Natural Language Generation model (MT-NLG), developed by NVIDIA in partnership with Microsoft. These language models can produce sophisticated essays and memos and answer questions in a human-like manner, making it difficult to distinguish machine-generated text from text actually written by a human.
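At their core, models like GPT-3 and MT-NLG generate text autoregressively: they repeatedly predict a likely next word given the text so far. The toy sketch below illustrates only that basic idea using simple bigram counts; the corpus, function names, and sampling scheme are illustrative inventions for this post, not OpenAI’s or NVIDIA’s actual code, which relies on transformer networks with billions of parameters.

```python
import random
from collections import defaultdict

# Tiny "training corpus" standing in for the web-scale data real models use.
corpus = ("the model writes the essay and the model answers "
          "the question and the model writes the memo").split()

# Build a bigram table: each word maps to the list of words observed after it.
bigrams = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    bigrams[a].append(b)

def generate(start, n_words, seed=0):
    """Autoregressive generation in miniature: starting from `start`,
    repeatedly sample one of the words seen to follow the current word."""
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        options = bigrams.get(out[-1])
        if not options:
            break  # dead end: no continuation was ever observed
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the", 6))
```

Real systems replace the bigram table with a neural network that conditions on the entire preceding context, which is what makes their output fluent over whole paragraphs rather than just word pairs.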

AI – a tool that assists students to write better or a tool that makes cheating easier?

Technology is used extensively in education today – and the recent move to remote learning has further intensified its application. Technological advancements have put tools in the hands of students and academia that make learning and writing easier, helping students improve and hone their skills while helping educational institutions preserve academic integrity.

Advancements in machine-based language generators on the other hand, might help lazy students ‘cheat’ the system, allowing them to outperform other students without actually putting in any effort.

Additionally, if teachers have no way of discerning AI-written text from text actually written by students, current grading systems will lose their rationale and legitimacy. Worse, the system might in practice reward dishonest students by enabling them to outperform their peers.

Detecting machine-generated text also places an extra burden on teachers: because these tools can beat plagiarism detection software, submissions will need more rigorous examination. This takes precious time away from teaching and collaborating with students.

Preserving academic integrity in the face of automation

Given that the educational landscape as we know it is changing rapidly under the influence of technological advancements, it is imperative for academia to stay abreast of these innovations and ahead of the curve.

While we are still some time away from text generators becoming mainstream and easily accessible to everyone including students (currently, OpenAI’s code is only available to select developers via an API), universities and schools should start thinking about developing effective solutions now to be able to tackle the challenge in the coming years.

Although several teams of scientists have been successful in developing algorithms that could help identify machine-written texts, these have their own limitations and it will be a while before a viable solution is developed.
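One idea behind such detection algorithms – used, for example, by the GLTR tool from Harvard NLP and the MIT-IBM Watson AI Lab – is that machine-generated text tends to be built from statistically predictable words, while human writing is more surprising. The sketch below illustrates that principle with a deliberately tiny unigram model; the corpus, scoring function, and example sentences are illustrative assumptions for this post, not a real detector.

```python
import math
from collections import Counter

def avg_log_likelihood(text, model_counts, vocab_size):
    """Score a text by its average per-word log-probability under a
    unigram model with add-one smoothing. Real detectors apply the same
    idea with a large neural language model: text made of consistently
    high-probability words scores high and looks machine-generated."""
    total = sum(model_counts.values())
    words = text.lower().split()
    score = 0.0
    for w in words:
        p = (model_counts[w] + 1) / (total + vocab_size)
        score += math.log(p)
    return score / len(words)

# Toy "language model" built from a tiny reference corpus (illustrative only).
corpus = "the student wrote the essay and the teacher read the essay".split()
counts = Counter(corpus)
vocab_size = len(set(corpus)) + 100  # padded vocabulary penalises unseen words

predictable = "the student wrote the essay"       # stays on familiar words
surprising = "quantum marmots annotate sonnets"   # all out-of-vocabulary words

score_p = avg_log_likelihood(predictable, counts, vocab_size)
score_s = avg_log_likelihood(surprising, counts, vocab_size)
print(score_p > score_s)  # the predictable sentence scores higher
```

The limitation the article alludes to follows directly: a careful human writer can also produce predictable prose, and generated text can be lightly edited to look more surprising, so statistical scores alone cannot give a definitive verdict.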

Existing plagiarism detection tools, used to check the authenticity of documents today, are not equipped to detect computer-generated text. One reason is that when generating text, AI does not really plagiarise: its algorithms produce new text based on patterns learned during training rather than copying from existing sources, so there is no matching original for the software to find.

If left unchecked, this loophole could enable dishonest students to get away with submitting work that is not theirs, and the entire premise on which a university’s grading system – and education in general – is built would be largely compromised. Without a solution, schools and universities may need to rethink how they evaluate their students; or perhaps embracing AI and including it in the classroom, as suggested by Miles Brundage, a research scientist at OpenAI, is the way forward.
