Last summer, Peter Degen’s postdoctoral supervisor presented him with an unusual puzzle: one of his papers was being cited at an unprecedented rate. Citations are the backbone of academic recognition, but this surge was anything but ordinary.

Published in 2017, the paper evaluated the accuracy of a specific statistical analysis method applied to epidemiological data. Over the years, it had accumulated a modest few dozen citations. Then, in a matter of months, it was cited hundreds of times, with new references appearing every few days, catapulting it into the ranks of the most cited works of Degen’s career.

While many academics would celebrate such attention, Degen’s supervisor asked him to investigate the cause. The answer, they soon discovered, lay not in a sudden wave of interest from human researchers, but in the rise of artificial intelligence.

AI tools, including large language models, have begun generating academic citations automatically. These systems scour vast repositories of research papers and produce references that mimic those written by humans. The result? A flood of AI-generated references that inflates citation counts, distorting the metrics used to evaluate scientific impact and funding eligibility.

Degen, now a postdoctoral researcher at ETH Zurich, is among the growing number of scientists sounding the alarm. “This is a systemic issue,” he said.

“If AI is generating citations without human oversight, we risk polluting the academic record. Citation counts are supposed to reflect genuine scholarly interest, not algorithmic output.”

The problem extends beyond Degen’s paper. Researchers across disciplines report similar spikes in citations for older, less-cited works. The phenomenon is particularly acute in fields where AI tools are widely used, such as computer science and biomedical research.

Some argue that AI-generated citations could democratize access to citation metrics, helping newer researchers gain visibility. Others warn that the practice undermines the integrity of academic evaluation. “Citation counts are used to make high-stakes decisions,” said Dr. Maria Schmidt, a bibliometrics expert at the University of Amsterdam.

“If these numbers are artificially inflated, it could lead to misallocation of research funding and distort the peer-review process.”

Institutions and publishers are beginning to respond. Some academic journals have updated their submission guidelines to prohibit AI-generated citations, while others are developing detection tools to identify automated references. However, the cat-and-mouse game between AI developers and academic integrity watchdogs is just beginning.

For now, Degen and his colleagues continue to monitor the situation. “We need transparency,” he said.

“If AI is being used to generate citations, it should be disclosed. Otherwise, we’re building a house of cards on top of a flawed foundation.”

Source: The Verge