🔬 AI Fakes in Science: When ChatGPT Writes Papers and Gets Caught
Modern science is hard to imagine without AI. It already helps discover new drugs, analyze medical archives, and sift through telescope data in search of exoplanets.
But AI in science has a dark side—as its use becomes more widespread, research fraud is reaching new heights. Analysts estimate that thousands of AI-generated papers are published in scientific journals every week.
What's truly surprising isn't that so many of their authors are exposed—it's how they're caught, just like students who turn in a last-minute, bot-written essay.
How Are Researchers Getting Busted?
📌 Tortured phrases: Clumsy attempts to rephrase text using AI to hide plagiarism. That's how "breast cancer" turns into "bosom peril," "big data" becomes "huge information," and "neural network" ends up as "neural organization."
📌 AI-generated comments: Some authors unthinkingly copy phrases like "As an AI language model, I..." or "Certainly, here is a possible introduction for your topic" straight out of ChatGPT.
📌 Language patterns: AI tends to favor certain words and expressions much more than humans. For example, overuse of words like "commendable" and "meticulous" is a strong sign that ChatGPT was involved.
📌 AI-generated images: Illustrations or diagrams with anatomical errors or meaningless labels.
📌 Hallucinations: Made-up chemical formulas, incorrect calculations, or nonsensical equations.
🔍 How Are Fake Papers Getting Caught?
Ironically, other AIs are now helping detect generated fakes. Tools like Problematic Paper Screener scan texts for telltale ChatGPT wording and tortured phrases, while Proofig AI verifies images and catches visual inconsistencies.
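The core idea behind this kind of text screening can be shown in a few lines. Below is a toy sketch, not how Problematic Paper Screener or any real tool actually works: it just matches a hardcoded list of the tortured phrases and AI boilerplate snippets mentioned above. Real screeners use far larger phrase databases and statistical methods.

```python
# Known substitutions: tortured phrase -> the established term it mangles.
# These examples come from the article; real tools track thousands of them.
TORTURED_PHRASES = {
    "bosom peril": "breast cancer",
    "huge information": "big data",
    "neural organization": "neural network",
}

# Chatbot boilerplate that authors sometimes paste in by accident.
AI_BOILERPLATE = [
    "as an ai language model",
    "certainly, here is a possible introduction",
]

def screen(text: str) -> list[str]:
    """Return a list of red flags found in the text."""
    lowered = text.lower()
    flags = []
    for phrase, original in TORTURED_PHRASES.items():
        if phrase in lowered:
            flags.append(f"tortured phrase: '{phrase}' (likely '{original}')")
    for marker in AI_BOILERPLATE:
        if marker in lowered:
            flags.append(f"AI boilerplate: '{marker}'")
    return flags

print(screen("We apply huge information methods to bosom peril detection."))
```

Naive substring matching like this produces false positives on legitimate text, which is exactly why flagged papers still need a human investigator before any accusation is made.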
🦥 It's Not Just the Authors Cheating—Reviewers Are Doing It Too
Even those responsible for detecting cheating in science sometimes take shortcuts.
Ecologist Timothée Poisot recently shared that he received a peer review clearly written by AI. The reviewer gave themselves away with a lazy intro: "Here is a revised version of your review with improved clarity and structure."
⚖️ Where's the Middle Ground?
Progress isn't slowing down, and many scientific publishers are starting to draft guidelines for using AI in research. In most cases, authors can use AI to edit and polish their language—but not to write the entire paper. Still, there's no universal standard.
The academic world is at a crossroads: either set clear rules or watch scientific publishing drown in a flood of convincing yet meaningless AI-generated content—at least until the Japanese startup Sakana AI perfects its AI Scientist.
More on the topic:
⚛️ AI Hallucinations Could Help Scientists