AI system to detect questionable journals in science publishing
United States: A team of computer scientists led by the University of Colorado Boulder has created a new artificial intelligence platform designed to identify so-called ‘predatory’ scientific journals, a growing problem in global research publishing.
The study highlights how AI can help combat fraudulent practices that undermine scientific credibility. Lead researcher Daniel Acuña, associate professor in CU Boulder’s Department of Computer Science, said he frequently receives spam invitations from unknown journals that offer to publish his work for a fee.
These journals often lack peer review. They lure researchers, particularly those from regions with high publication pressure such as China, India, and Iran, into paying hundreds or thousands of dollars to publish papers without proper vetting.
Acuña explained: “There has been a growing effort among scientists and organizations to vet these journals, but it’s like whack-a-mole. You catch one, and another pops up under a new name.”
The AI platform developed by Acuña’s team automatically evaluates journal websites and related data, screening for red flags such as missing editorial boards, vague peer review policies, excessive grammatical errors, and unusual citation patterns. The system was trained using data from the nonprofit Directory of Open Access Journals (DOAJ), which has flagged suspicious publications since 2003.
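The article does not disclose the team’s actual model, but the red-flag screening it describes can be illustrated with a minimal sketch. Everything below, the `JournalProfile` fields, the thresholds, and the scoring rule, is an illustrative assumption, not the CU Boulder system:

```python
# Hedged sketch of a red-flag prescreener for journal metadata.
# Feature names and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class JournalProfile:
    has_editorial_board: bool
    peer_review_policy_words: int   # length of the stated review policy
    grammar_errors_per_page: float
    self_citation_rate: float       # fraction of citations to the journal itself

def red_flag_score(j: JournalProfile) -> int:
    """Count simple red flags; a higher score means more suspicious."""
    score = 0
    if not j.has_editorial_board:           # missing editorial board
        score += 1
    if j.peer_review_policy_words < 50:     # vague or absent review policy
        score += 1
    if j.grammar_errors_per_page > 5:       # excessive grammatical errors
        score += 1
    if j.self_citation_rate > 0.30:         # unusual citation pattern
        score += 1
    return score

# Journals above a score threshold would be queued for human review,
# mirroring the prescreening-then-expert workflow the article describes.
suspicious = JournalProfile(False, 20, 8.0, 0.45)
print(red_flag_score(suspicious))  # → 4
```

A real system would learn such signals from labeled data (here, the DOAJ flags) rather than hand-set thresholds, but the prescreening idea is the same: cheap automated checks narrow the pool before experts decide.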
Testing the AI on nearly 15,200 open-access journals, the platform initially flagged over 1,400 as potentially problematic. Human reviewers later confirmed that while around 350 were likely misclassified, more than 1,000 journals remained questionable. Acuña emphasized that the tool is meant to function as a prescreening system, with human experts making the final judgment.
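The reported counts are approximate, but they allow a rough back-of-the-envelope estimate of the screen’s precision, that is, the share of flagged journals that human reviewers upheld as questionable:

```python
# Rough precision estimate from the article's approximate figures.
flagged = 1400          # journals the AI flagged as potentially problematic
false_positives = 350   # flags reviewers judged likely misclassified
confirmed = flagged - false_positives   # ≈ 1050 upheld as questionable
precision = confirmed / flagged
print(f"{precision:.0%}")  # → 75%
```

A precision of roughly three in four is consistent with the tool’s stated role as a prescreening system: it trades some false alarms for broad coverage, leaving the final call to human experts.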
Unlike some ‘black box’ AI platforms, the system was designed to be transparent. The researchers discovered that questionable journals often publish unusually high volumes of articles, list authors with multiple affiliations, and display excessive self-citations.
The AI is not yet publicly accessible, but the team hopes to make it available to universities and publishers soon. Acuña describes it as a ‘firewall for science,’ helping safeguard research integrity against fraudulent outlets.
Acuña added: “As in computer science, where new software comes with flaws and bug fixes, we should treat science the same way. We need safeguards to ensure the foundation of research remains strong.”
Co-authors on the study include Han Zhuang of the Eastern Institute of Technology in China and Lizheng Liang of Syracuse University in the United States.