Student Creates App to Detect Essays Written by AI

In response to the text-generating bot ChatGPT, the new tool measures sentence complexity and variation to predict whether an author was human

Teachers have cited concerns about students trying to pass off AI-written essays as their own work. fotostorm via Getty Images

In November, artificial intelligence company OpenAI released a powerful new bot called ChatGPT, a free tool that can generate text about a variety of topics based on a user’s prompts. The AI quickly captivated users across the internet, who asked it to write anything from song lyrics in the style of a particular artist to programming code.

But the technology has also sparked concerns about AI plagiarism among teachers, who have seen students use the app to write their assignments and claim the work as their own. Some professors have shifted their curricula because of ChatGPT, replacing take-home essays with in-class assignments, handwritten papers or oral exams, reports Kalley Huang for the New York Times.

“[ChatGPT] is very much coming up with original content,” Kendall Hartley, a professor of educational technology at the University of Nevada, Las Vegas, tells Scripps News. “So, when I run it through the services that I use for plagiarism detection, it shows up as a zero.” 

Now, a student at Princeton University has created a new tool to combat this form of plagiarism: an app that aims to determine whether text was written by a human or AI. Twenty-two-year-old Edward Tian developed the app, called GPTZero, while on winter break and unveiled it on January 2. Within the first week of its launch, more than 30,000 people used the tool, per NPR’s Emma Bowman. On Twitter, it has garnered more than 7 million views. 

GPTZero uses two variables to determine whether the author of a particular text is human: perplexity, or how complex the writing is, and burstiness, or how variable it is. Text that’s more complex with varied sentence length tends to be human-written, while prose that is more uniform and familiar to GPTZero tends to be written by AI.
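For readers curious how such signals might be measured, the following is a minimal sketch, not Tian's actual implementation: it assumes the Hugging Face transformers library and the public GPT-2 model to estimate a perplexity-style score from per-token predictability, and it approximates "burstiness" as the spread of sentence lengths.

# Rough illustration of the two signals described above; not GPTZero's code.
# Assumes: pip install torch transformers
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """How 'surprised' the language model is by the text (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())


def burstiness(text: str) -> float:
    """Standard deviation of sentence length, a crude proxy for variation in the writing."""
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    return variance ** 0.5


sample = "The quick brown fox jumps over the lazy dog. It was not amused. Then, without warning, everything changed."
print(f"perplexity: {perplexity(sample):.1f}, burstiness: {burstiness(sample):.2f}")

Under this rough heuristic, text with low perplexity and little sentence-length variation would lean toward an AI-written classification, while higher, more erratic scores would lean toward a human author.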

But the app isn’t foolproof. Tian tested it using BBC articles and text generated by AI when prompted with the same headlines. He tells BBC News’ Nadine Yousif that the app distinguished between the two with a false positive rate of less than 2 percent.

“This is at the same time a very useful tool for professors, and on the other hand a very dangerous tool—trusting it too much would lead to exacerbation of the false flags,” writes one GPTZero user, per the Guardian’s Caitlin Cassidy. 

Tian is now working on improving the tool’s accuracy, per NPR. And he’s not alone in his quest to detect plagiarism. OpenAI is also working on ways to make ChatGPT’s text easier to identify. 

“We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else,” a spokesperson for the company tells the Washington Post’s Susan Svrluga in an email. “We’re already developing mitigations to help anyone identify text generated by that system.” One such idea is a watermark, an unnoticeable signal that accompanies text written by a bot.

Tian says he’s not against artificial intelligence, and he’s even excited about its capabilities, per BBC News. But he wants more transparency surrounding when the technology is used. 

“A lot of people are like … ‘You’re trying to shut down a good thing we’ve got going here!’” he tells the Post. “That’s not the case. I am not opposed to students using AI where it makes sense. … It’s just we have to adopt this technology responsibly.”
