AI-Detectors Biased Against Non-Native English Writers | Stanford HAI
Don’t put faith in detectors that are “unreliable and easily gamed,” says scholar.
In the wake of the high-profile launch of ChatGPT, no fewer than seven developers or companies have countered with AI detectors. That is, AI they say is able to tell when content was written by another AI. These new algorithms are pitched to educators, journalists, and others as tools to flag cheating, plagiarism, and mis- or disinformation.
It’s all very meta, but according to a new paper from Stanford scholars, there’s just one (very big) problem: the detectors are not particularly reliable. Worse yet, they are especially unreliable when the real author (a human) is not a native English speaker.
The numbers are grim. While the detectors were “near-perfect” in evaluating essays written by U.S.-born eighth-graders, they classified more than half (61.22%) of essays written by non-native English students for the TOEFL (Test of English as a Foreign Language) as AI-generated.
It gets worse. According to the study, all seven AI detectors unanimously identified 18 of the 91 TOEFL student essays (nearly 20%) as AI-generated, and a remarkable 89 of the 91 (97.8%) were flagged by at least one of the detectors.
Read the full study: GPT Detectors are Biased Against Non-native English Writers.
“It comes down to how detectors detect AI,” says James Zou, a professor of biomedical data science at Stanford University, a Stanford Institute for Human-Centered AI affiliate, and the senior author of the study. “They typically score based on a metric known as ‘perplexity,’ which correlates with the sophistication of the writing — something in which non-native speakers are naturally going to trail their U.S.-born counterparts.”
Zou and co-authors point out that non-native speakers typically score lower on measures that feed into perplexity, such as lexical richness, lexical diversity, syntactic complexity, and grammatical complexity — so their genuine writing looks more “predictable” to a detector, much the way AI-generated text does.
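To make the metric concrete: perplexity is the exponentiated average negative log-probability a language model assigns to a text — low perplexity means the text is “unsurprising” to the model. The detectors in the study rely on large language models, but the intuition can be sketched with a toy add-alpha smoothed unigram model (this is purely illustrative and not the detectors’ actual implementation; the function name and corpus are invented for the example):

```python
import math
from collections import Counter

def unigram_perplexity(train_tokens, test_tokens, alpha=1.0):
    """Perplexity of test_tokens under an add-alpha smoothed unigram
    model fit on train_tokens: exp(-mean log p(w))."""
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total = len(train_tokens) + alpha * len(vocab)
    log_prob = 0.0
    for tok in test_tokens:
        p = (counts[tok] + alpha) / total  # smoothed unigram probability
        log_prob += math.log(p)
    return math.exp(-log_prob / len(test_tokens))

# Text close to the model's training distribution is less "surprising",
# so it scores lower perplexity than unfamiliar wording.
train = "the cat sat on the mat the cat sat".split()
print(unigram_perplexity(train, "the cat sat".split()))      # lower
print(unigram_perplexity(train, "quantum flux sat".split())) # higher
```

A perplexity-based detector flags text whose score falls below some threshold as likely AI-generated — which is exactly why simpler, more predictable (but entirely human) prose from non-native writers gets caught in the net.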
“These numbers pose serious questions about the objectivity of AI detectors and raise the potential that foreign-born students and workers might be unfairly accused of or, worse, penalized for cheating,” Zou says, highlighting the team’s ethical concerns.
Zou also notes that such detectors are easily subverted by what is known as “prompt engineering.” That term of art in the AI field simply means asking generative AI to “rewrite” essays, for example, to include more sophisticated language, Zou says. He provides an example of just how easy bypassing the detectors is. A student wishing to use ChatGPT to cheat might simply plug in the AI-generated text with the prompt: “Elevate the provided text by employing literary language.”
“Current detectors are clearly unreliable and easily gamed, which means we should be very cautious about using them as a solution to the AI cheating problem,” Zou says.
The question then turns to what to do about it. Zou offers a few suggestions. In the immediate future, he says, we need to avoid relying on detectors in educational settings, especially those with high numbers of non-native English speakers. Second, developers must move beyond perplexity as their main metric, pursuing more sophisticated techniques or, perhaps, watermarks in which the generative AI embeds subtle clues about its identity into the content it creates. Finally, they need to make their models less vulnerable to circumvention.
“The detectors are just too unreliable at this time, and the stakes are too high for the students, to put our faith in these technologies without rigorous evaluation and significant refinements,” Zou says.
Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.
Contributor(s)
Andrew Myers