Generative AI presents a definite challenge to educators seeking to uphold academic integrity. Using generative AI to write assignments removes agency from the learner and can lead to unmerited progression or qualification. Plagiarism detection services are of little help because, for all its faults, AI-generated content is usually original (though a tool can occasionally return passages of its training data verbatim).
Learners who use generative AI to create content and present it as their own, without proper attribution or acknowledgment, breach academic integrity. When they use generative AI to produce text, images, or other content, they are essentially relying on a machine to create the work for them. If they do not disclose that the content was AI-generated, or fail to attribute the tool or platform used, this can be seen as dishonest and unethical, much like more traditional forms of plagiarism.
There are three approaches that can be taken:
This is discussed in more detail on the uses of Generative AI page. Teachers - and for that matter programme designers - should look at alternative forms of assessment that de-emphasise the traditional essay. Assessment grading rubrics could be amended to emphasise more AI-proof criteria, such as: teamwork, verbal communication, creativity, originality, expansion beyond the original topic, etc.
Have discussions with learners about generative AI usage: what is acceptable (see the Generative AI for Learners page), the ethical issues around use of generative AI, the limitations of generative AI, and the importance of academic integrity as a whole.
You can use a few different techniques to assess whether something was written by generative AI. None are foolproof, however. See also the Detecting AI-Generated Text Content page in the Evaluating Information Sources section.
There are a variety of online tools that claim to be able to detect AI-written content. See the links and resources page for details. These all use AI to detect AI, so you may want to reflect on how useful such tools actually are. OpenAI withdrew its classifier tool for distinguishing between AI and human-generated content because of "its low rate of accuracy".
AI-generated content often reads like it was written by a robot, and there are some things you can look out for.
Here is an entire site where probably every article has been written by AI. Would you say the articles have a distinctive style?
As described above, there is a difference between out-of-date information and nonsense. Generative AI makes things up. Remember, generative AI is plausible, not necessarily factual. If a learner has to resort to using generative AI in their assignment, there is a good chance they won't know what is true either.
It is often the case that learners who breach academic integrity guidelines make no attempt to cover their tracks. There have been many instances of telltale AI phrases appearing in learner assignments:
If you suspect that a piece has been written by generative AI, search within it for "As a language model..." or its variations.
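A search like this is easy to automate. The sketch below is a minimal, illustrative example: the phrase list is an assumption based on commonly reported AI boilerplate (only "As a language model..." comes from this page), and it is in no way exhaustive or a reliable detector on its own.

```python
import re

# Illustrative (not exhaustive) patterns of AI self-reference boilerplate.
# Only "as a language model" is taken from the guidance above; the rest
# are assumed examples of similar phrasing.
TELLTALE_PATTERNS = [
    r"as an? (?:ai )?language model",
    r"as an ai assistant",
    r"i (?:cannot|can't) (?:browse|access) the internet",
    r"my (?:knowledge|training) cut-?off",
]

def find_telltales(text: str) -> list:
    """Return the patterns that match anywhere in `text` (case-insensitive)."""
    lower = text.lower()
    return [p for p in TELLTALE_PATTERNS if re.search(p, lower)]

if __name__ == "__main__":
    sample = "As a language model, I cannot verify recent events."
    print(find_telltales(sample))
```

A match only flags a passage for human review; absence of these phrases proves nothing, since they are trivial to edit out.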
Image sources:
10 Strategies: @AmandaFoxStem on Twitter
"As a language model..." @RayFleming on Twitter