Students' delight at being able to hand any writing assignment over to AI has turned out to be premature. Researchers at Stanford have created DetectGPT, a tool designed to determine whether the real author of a text is artificial intelligence or a human.
Large language models (LLMs) are in great demand today. Their use surged in particular with the arrival of the "smart" chatbot ChatGPT, built on the GPT-3.5 language model. Once it became clear that ChatGPT could do homework and write term papers so well that its output was almost impossible to distinguish from human writing, demand grew even further. Many teachers, imagining where this kind of "studying" would lead, were horrified, and a service that could identify a text's real author became urgently needed.
Recently, a team of researchers at Stanford University led by Eric Mitchell presented the DetectGPT detector, intended to help educators deal with texts generated by artificial intelligence.
The tool takes a zero-shot approach: it can recognize machine-generated text without first being trained on labeled examples of it.
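The core idea behind this zero-shot check, as described in the Stanford paper, is probability curvature: a passage written by a language model tends to sit near a local peak of that model's own log-probability, so lightly rewriting the passage lowers its likelihood more than it would for human-written text. The sketch below illustrates that statistic only in rough outline; the choice of GPT-2 as the scoring model, distilroberta-base as the perturbation model, the number of edits, and the decision threshold are all illustrative assumptions, not the authors' reference implementation.

```python
import random
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast, pipeline

# Scoring model whose probabilities we inspect (illustrative choice: GPT-2).
tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Masked language model used to lightly rewrite the passage (illustrative choice).
filler = pipeline("fill-mask", model="distilroberta-base")

def avg_log_likelihood(text: str) -> float:
    """Mean per-token log-probability of `text` under the scoring model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood

def perturb(text: str, n_edits: int = 3) -> str:
    """Swap a few random words for masked-LM suggestions, keeping the passage mostly intact."""
    words = text.split()
    for _ in range(n_edits):
        i = random.randrange(len(words))
        original = words[i]
        words[i] = filler.tokenizer.mask_token
        best = filler(" ".join(words), top_k=1)[0]
        words[i] = best["token_str"].strip() or original
    return " ".join(words)

def perturbation_discrepancy(text: str, n_perturbations: int = 10) -> float:
    """How much the log-likelihood drops when the passage is slightly rewritten.
    A large positive value means the text sits near a local probability peak,
    which is the signal used to flag it as machine-generated."""
    base = avg_log_likelihood(text)
    rewritten = [avg_log_likelihood(perturb(text)) for _ in range(n_perturbations)]
    return base - sum(rewritten) / len(rewritten)

if __name__ == "__main__":
    sample = "The city council approved the new transit budget after a lengthy public hearing."
    score = perturbation_discrepancy(sample)
    # The 0.1 threshold is arbitrary here; in practice it would be tuned on validation data.
    print(f"discrepancy = {score:.3f} ->",
          "likely AI-generated" if score > 0.1 else "likely human-written")
```

Note that this criterion needs query access to a scoring model but no labeled training set of AI-written texts, which is what makes the approach zero-shot.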
DetectGPT was tested on a dataset of fictitious news articles. The results showed that it performed considerably better than other methods for identifying AI-generated texts, correctly flagging machine-written text in 14 out of 15 cases. It does not, however, indicate which particular language model produced the text. Even so, its strong performance already makes DetectGPT one of the most promising systems for analyzing texts created by artificial intelligence.