AI’s Impact on Scientific Literature

The growing role of AI in generating scientific literature is raising concerns. Recent studies attempt to measure how much of this content is created by AI, revealing a landscape that is changing too quickly to assess with confidence.

Many researchers fear that poor-quality or fake studies made by large language models (LLMs) could flood current systems meant to check for errors. This could harm the trustworthiness of science.

Richard She, a scientist in Singapore, warns of an “escalating arms race” between those abusing AI and those trying to stop it. Maria Antoniak, a computer scientist, says the situation is evolving so quickly that experts are unprepared.

In March, an analysis found AI-generated articles likely outnumbered human-written ones online. While AI can help speed up research, it also poses risks like creating fake or low-quality papers.

To tackle this, researchers use tools designed to detect AI-generated text. These tools aren’t perfect, however: some cannot distinguish between human text lightly edited by AI and text generated entirely by a model. They also define “AI-generated” differently from one another, and they sometimes wrongly flag human-written work as AI-made.

A study that ran detection tools on 7,000 journal submissions found a 42% increase in AI-generated content since November 2022. By early 2026, over 30% of peer-review reports were also found to contain AI-generated text.

Other researchers, including She and Antoniak, are trying to track AI-generated papers online. Their efforts are hampered by the sheer volume of papers published. For example, She analyzed biomedical papers in top journals and found that about 12.5% contained some AI-generated text.

Antoniak’s research on arXiv preprints showed a sharp rise in AI-generated content between 2023 and 2025, especially in computer science.

Experts predict this trend will continue, signaling the start of a new era where AI plays a bigger role in scientific publishing.