Timon Harz
December 12, 2024
How to identify AI-generated text: 7 ways to tell if content was made by a bot
Unlike images or videos, synthetic text is almost impossible to discern with the human eye.

As AI-generated content becomes increasingly common, you may find yourself asking, "How can I tell if a piece of text is AI-generated?" With advancements in AI, it's becoming harder to distinguish machine-written content from human writing. Fortunately, some forms of content, like images and video, are still relatively easy to identify with the human eye.
In text, however, the subtleties of AI authorship are harder to detect as language models improve. While there are some typical signs—like repetitive phrasing or unusual sentence structures—AI-generated content often lacks the nuanced tone or deep context that human writing provides.
Despite this, identifying AI-generated text isn't entirely impossible. Some tools and methods, including AI detection software and analyzing patterns in the writing, can help you spot these texts. For now, image and video content are simpler to analyze, but as the technology evolves, even these might become more challenging to decode.
How to detect AI-generated text
If you're a teacher or someone who spends a lot of time online, you might be wondering how to spot AI-generated text. The answer is simpler than you think: use your eyes. The key lies in recognizing patterns that differentiate human writing from machine-generated content. Experts, such as Melissa Heikkilä from MIT Technology Review, argue that the "magic" behind AI is often "the illusion of correctness," which makes it harder to distinguish from human-written text.
Much like how people in corporate settings use generic phrases when drafting memos, AI-generated text tends to follow certain patterns. This is why AI text detectors often flag content as "likely AI-generated"—because distinguishing a bland human writing style from a generic AI voice can be nearly impossible.
Here are some tips for spotting AI-generated text:
Common word usage: Watch for overuse of words like "the," "it," and "its."
No typos: AI-generated text is usually too perfect, lacking the typical mistakes that humans make.
Neat conclusions: AI often provides clean, definitive statements that neatly summarize paragraphs.
Verbose or padded writing: Lengthy sentences that add little substance.
Inaccurate or fabricated information: AI sometimes creates false details or invents sources.
Advanced tone: The writing might sound more sophisticated than the writer’s usual style.
Repetitive or overly polished grammar: Look for strange repetition or unnaturally flawless phrasing.
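None of these cues is conclusive on its own, but they can be made concrete. The toy Python sketch below turns a few of them into rough signals; the word list, patterns, and thresholds are my own illustrative guesses, not a validated classifier.

```python
# A toy scorer for a few of the cues above. Purely illustrative: the word
# list and patterns are guesses for reading practice, not a real classifier.
import re

COMMON_WORDS = {"the", "it", "its"}  # cue: overused function words

def checklist_signals(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # Share of very common words (a higher share can suggest generic prose).
        "common_word_ratio": sum(w in COMMON_WORDS for w in words) / max(len(words), 1),
        # Average sentence length in words (long sentences can signal padding).
        "avg_sentence_length": len(words) / max(len(sentences), 1),
        # Tidy wrap-up phrases that neatly summarize a paragraph.
        "neat_conclusion": bool(re.search(r"\b(in conclusion|overall|ultimately)\b", text.lower())),
    }

print(checklist_signals("Overall, the model performs well. It is robust and its results are clean."))
```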
While there are AI text detectors available, in my experience, they’re often less reliable than simply using your own eyes. With practice, you can train yourself to spot the subtle markers of AI-generated writing.
AI text detectors: Why they're not reliable
The rise of AI models like ChatGPT, Gemini, and Claude has fueled a cottage industry focused on AI text detection. Platforms like ZeroGPT, as well as tools initially built for plagiarism detection, such as Grammarly and Copyleaks, have pivoted to address AI-generated content. However, while these detectors claim high accuracy, their reliability is often questionable. No tool currently offers perfect detection, and even those that claim to be 99% accurate can still struggle, particularly as AI-generated text becomes more sophisticated.
As AI models improve, they produce content that increasingly mimics human language, making it harder to distinguish between the two. Junfeng Yang, a professor at Columbia University, notes that the growing fluency of AI models means older detection methods become less effective. AI-generated texts now use vocabulary and sentence structures that are much closer to human writing, posing a real challenge for even advanced detectors.
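The statistic many detectors build on is perplexity: how predictable a passage is to a language model, with suspiciously low perplexity read as a sign of machine authorship. The sketch below shows the idea; it assumes the Hugging Face transformers library and uses GPT-2 purely for illustration. As Yang's point suggests, the gap in this number between human and AI writing keeps shrinking, which is exactly why the approach is losing power.

```python
# A minimal perplexity sketch, assuming the transformers and torch packages.
# GPT-2 is used for illustration; real detectors use their own models.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Labels equal to inputs: the loss measures how well the model
        # predicts each next token in the passage.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Lower perplexity = more predictable = more "AI-like" under this heuristic.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```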
Testing AI text detectors' accuracy
Despite the bold claims of AI detection tools like GPTZero and Hive Moderation, tricking these systems into classifying human-written content as machine-generated is surprisingly easy. These detectors tend to flag plain, explanatory prose, the kind found in summaries, which makes them susceptible to false positives. For instance, when I tested a simple, hastily written summary of *Game of Thrones*, it was flagged as "likely AI-generated" by almost every detector I used. This highlights the risk of relying on current AI text detectors: they can misidentify ordinary human writing as machine-made based on structure rather than substance.
I then took the same *Game of Thrones* summary, input it into ChatGPT, and asked it to enhance the writing by making it more elaborate and less straightforward. First, here's my original summary:
The plot of Game of Thrones follows several families in the fictional continent of Westeros. First, there are the Starks who rule the North. Second are the Lannisters, who rule in the Westerlands. The eldest daughter of the Wastelands is Cersei, who is also the Queen of Westeros and is married to Robert Baratheon. Robert is from the Stormlands and usurped the throne of the Targarayns — a group of dragon riders who took over the land over 1000 years ago but slowly lost their grip on Westeros due to infighting. The series was created by George RR Martin and is currently chronicled in five books — A Game of Thrones, A Clash of Kings, A Storm of Swords, A Feast for Crows, and a Dance with Dragons. There are two more books planned for the series with the Winds of Winter set to release sometime in 2025.
Here's the "improved" ChatGPT version:
Game of Thrones throws viewers into the brutal, blood-soaked chessboard of Westeros, where power is the prize and morality is negotiable. Leading the pack are the Starks, who hold dominion over the icy North. Their stoic honor feels like a relic in a land where backstabbing is a form of currency. In sharp contrast are the Lannisters of the opulent Westerlands, the family everyone loves to hate — with Cersei Lannister perched atop the throne as Queen of Westeros. Her marriage to Robert Baratheon, the hard-drinking, boar-hunting king from the Stormlands, is a political match made in hell, bound together by lies and mutual loathing. But the throne itself carries a bloody history. Robert took it after a rebellion that ousted the Targaryens, the long-ruling dynasty of dragon riders who, once unstoppable, eventually imploded in a whirlwind of paranoia and civil war. Their fire-breathing beasts may have once scorched the skies, but family feuds brought their empire crashing down, leaving only whispers of their former glory. Meanwhile, George R. R. Martin’s sprawling saga continues in five books that are equal parts battlefield carnage, political intrigue, and heartbreaking betrayals. While fans devour every page, they remain on edge, waiting for the Winds of Winter to blow, teased for what feels like eons, now with a tentative promise of landing in 2025. But in Westeros, promises — like allegiances — are made to be broken.
The results of my testing were quite inconsistent. One of the first tools I tried was Scribbr, which flagged my original summary as "100% likely AI-generated." However, when I replaced it with the version enhanced by ChatGPT, Scribbr surprisingly lowered the likelihood to just 48%.


I also tested Winston AI, which flagged the original text as 39% likely to be generated by GPT-4. Interestingly, when I input the AI-enhanced version, it only assigned a 1% likelihood of being AI-generated.


Hive Moderation missed the mark entirely, confidently labeling both summaries as human-written. It was right about my original, but it failed to flag the ChatGPT-enhanced version despite the significant differences in structure and style.

If I ask ChatGPT for a random paragraph on any topic and then paste it into various text detectors, it is almost always flagged as AI-generated immediately. But this actually highlights the problem: without specific instructions, ChatGPT’s default writing style tends to be generic, formulaic, and overly neutral. This lack of flair or unique voice makes it more easily identifiable as machine-written content, as the text lacks the nuances and idiosyncrasies that characterize human writing.
This unremarkable tone is what detectors actually key on, and what triggers false positives on bland human writing, rather than any complex, proprietary technology these tools claim to use. Even when platforms like Originality.ai correctly identified both AI-generated texts, simple sentence rewording completely altered the outcome. By slightly adjusting the phrasing, text previously flagged with "100% confidence" as AI-generated could suddenly be marked as "Likely original."
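As a sketch of how little it takes, here is the kind of mechanical rewording I mean. The specific word swaps below are my own illustrative choices; in testing, hand edits of roughly this scale were enough to flip verdicts.

```python
# A crude rewording pass: swap a handful of phrases while leaving the meaning
# intact. The substitutions are illustrative only; the point is how small a
# change can flip a "100% AI" verdict to "likely original."
SWAPS = {
    "created by": "written by",
    "follows": "tracks",
    "currently": "so far",
    "several": "a number of",
}

def reword(text: str) -> str:
    for old, new in SWAPS.items():
        text = text.replace(old, new)
    return text

summary = "The plot follows several families. The series was created by George RR Martin."
print(reword(summary))
```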
This highlights a key issue: the AI detection tools are easily tricked by small changes, which undermines their accuracy. In testing, I used a mix of AI-written summaries and more flowery academic excerpts from grad school papers to see how these tools would respond. Here's the list of AI text detection tools I tested:
GPTZero
ZeroGPT
Hive Moderation
Scribbr
Copyleaks
Originality.ai
Grammarly
GPT-2 Output Detector
Writefull X
Winston AI
The results show that if your writing lacks personality or sounds overly formulaic, like an 8th-grade report, AI detectors will often flag it as machine-generated. The testing also demonstrates the reverse: avoiding certain structural patterns is enough to slip past the detectors, a real problem for companies that sell detection as a service. Many of these platforms, which offer subscriptions and B2B API solutions, now face the task of improving their detection accuracy to avoid such failures.
While some of these tools remain effective for plagiarism detection, their ability to spot AI-generated content still requires significant improvement. The inconsistency in results is glaring: submitting the same text to different detectors often yields wildly varying outcomes, and what one tool flags as AI-generated another overlooks entirely. Given this lack of reliability, it's difficult to recommend any of these tools with confidence at this time. Further refinement is necessary before they can be trusted for accurate detection across platforms.
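To make that inconsistency concrete, here is a minimal harness for submitting one passage to several services and comparing their scores. The endpoint URLs and the response field below are placeholders for illustration, not the real APIs of the tools listed above; each vendor documents its own interface.

```python
# Hypothetical comparison harness. The URLs and the "ai_probability" field
# are placeholders, not the actual APIs of any detector named in this post.
import requests

DETECTORS = {
    "detector_a": "https://example.com/api/detect",    # placeholder URL
    "detector_b": "https://example.com/api/classify",  # placeholder URL
}

def compare_verdicts(text: str) -> dict:
    scores = {}
    for name, url in DETECTORS.items():
        resp = requests.post(url, json={"text": text}, timeout=30)
        resp.raise_for_status()
        scores[name] = resp.json()["ai_probability"]  # assumed response shape
    return scores

# In my testing, the same passage could score near 100% on one service
# and be waved through as human-written on another.
```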
Why is detecting AI-generated text so difficult?
Human language is notoriously intricate and unpredictable, which is one of the key challenges in detecting AI-generated text. Bamshad Mobasher, IEEE member and chair of the AI program at DePaul University, explains that since AI models are trained on human text, they can easily mimic human conversations. Detection tools typically rely on identifying patterns, such as repetitive phrases or overly consistent grammatical structures. Mobasher notes that while it might be easier for humans to spot text that feels "too perfect," confirming that it is AI-generated remains difficult.
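One such pattern is sentence-level "burstiness": human prose tends to alternate long and short sentences, while default LLM output is often more uniform. Here is a minimal sketch of that measurement; it makes no claim that any particular threshold separates human from machine.

```python
# A minimal burstiness measure: spread of sentence lengths. Human writing
# tends to vary more; this is a suggestive heuristic, not a reliable test.
import re
import statistics

def burstiness(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)  # higher = more varied rhythm
```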
Unlike image generators, which often produce visible mistakes like extra fingers or distorted faces, language models rely on statistical probabilities to generate more fluid text. This makes AI-generated content harder to distinguish from human-written text, as subtle errors or nuanced phrasing are less common, posing challenges for both detection tools and human readers.
This is what makes AI-generated text particularly dangerous. Mobasher warns that it has become easier to produce large-scale misinformation. With LLMs able to create convincing, polished content that mimics credible voices, distinguishing between fact and fiction becomes increasingly difficult for the average person.
Yang adds that AI makes such deceptive tactics much easier to carry out, for example by crafting a fluent phishing email with personalized details about a target's role at a company.
Beyond its potential for misuse, AI-generated text also contributes to a decline in internet quality. Models like OpenAI’s and Anthropic’s scrape publicly available data to train their systems, generating articles that are then published online and scraped again, perpetuating a cycle. This feedback loop leads to more generic, recycled content, making it harder to find authentic, well-crafted writing.
While the rapid growth of AI and its negative impact on content quality may seem inevitable, improving media literacy is one way we can fight back. As Yang advises, "If you see an article or report, don’t just blindly believe it — look for corroborating sources, especially if something seems off."