OpenAI shuts down its AI detection tool

Image: ChatGPT, iStock 2023
AI Classifier was supposed to tell users whether a piece of writing was AI-generated, but it was wrong too often.

GPT-4 creator OpenAI has taken down links to its AI Classifier tool. The AI detection tool was announced at the start of the year, prompted by concerns about plagiarism, academic integrity, and the potential for generative AI to spread misinformation.

OpenAI claimed at the launch of AI Classifier that it could distinguish between human-written and machine-written text, although it acknowledged that the tool was far from perfect: in tests it correctly identified only 26% of AI-written text as "likely AI-written," and incorrectly labelled human-written text as AI-generated 9% of the time. In other words, it missed nearly three-quarters of AI-written text while still flagging some human writing. OpenAI went ahead with AI Classifier in the hope that the tool would learn to become more accurate, and with the view that an imperfect way of distinguishing between human and machine writing was better than none.

OpenAI's announcement of what amounted to the failure of AI Classifier was decidedly low-key. Links to the tool were removed, and the original blog post announcing its release was updated with the following statement:

Image: OpenAI announcement

Like ChatGPT, AI Classifier was free and simple to use: you pasted a chunk of text into a box and the classifier told you the likelihood of that text being AI-generated. One of the more significant issues, which OpenAI flagged at the outset, was that it worked poorly on pieces of text under 1,000 characters. This meant it was more likely to catch out students who'd used ChatGPT to write an essay than to catch an AI-generated social media post built on lies. Even on longer pieces of text, the results were often tagged "unclear."
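AI Classifier itself was only ever a web tool and its internals were never published, but for readers curious what this kind of detection looks like in code, here is a minimal sketch using OpenAI's earlier, openly released GPT-2 output detector hosted on Hugging Face (the model name and labels below come from that detector's model card, not from AI Classifier):

    # A minimal sketch of AI-text detection using OpenAI's earlier,
    # openly released GPT-2 output detector (not AI Classifier itself).
    # Requires the Hugging Face transformers library.
    from transformers import pipeline

    detector = pipeline(
        "text-classification",
        model="openai-community/roberta-base-openai-detector",
    )

    passage = "Paste the text you want to check here."
    result = detector(passage)[0]

    # Per the model card, the detector returns a "Real" or "Fake" label
    # with a confidence score; like AI Classifier, it is unreliable on
    # short passages.
    print(f"{result['label']}: {result['score']:.2%}")

Detectors of this kind share the weaknesses OpenAI described: short or lightly edited passages routinely produce confident but wrong verdicts.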

A common way to use LLMs for creative purposes has been to generate a first draft and then edit it. This joint AI/human approach confused the classifier, which was also "sometimes extremely confident in a wrong prediction," according to OpenAI.

Accurately determining whether text is AI-generated is proving difficult, despite plenty of other tools popping up claiming to do just that. It's a hit-and-miss process riddled with false positives, as this case in the US demonstrates. It's also fairly easy to instruct your chosen LLM to write your essay in a style that evades AI detectors. That doesn't guarantee you won't get found out, but it does improve your chances.

The potential implications of AI tools failing to identify AI-generated content go beyond the compromise of academic standards, important as that is. There is already evidence of the impact that the spread of misinformation and lies can have on democratic processes, and Microsoft, Alphabet, and Meta have all recently pledged to implement measures, such as watermarking AI-generated content, to mitigate the risks that LLMs pose to information integrity.