Scientific research. Credit: Artpodres, Pexels
Researchers have screened around 15,000 open-access journals and flagged more than 1,000 of them as problematic, according to a University of Colorado Boulder study published in Science Advances.
The tool looks for red flags such as suspiciously fast publication times, high self-citation rates, opaque fees, and more; notably, some of the flagged titles are owned by large, reputable publishers.
How the AI screens science journals
The system scans journal websites and published articles for patterns associated with suspicious practices. It was trained on best-practice criteria from the Directory of Open Access Journals (DOAJ), the main watchdog and directory of trusted open-access journals.
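The actual model is not reproduced here, but the idea of scoring journals against best-practice criteria can be sketched with a toy heuristic. All feature names, thresholds, and weights below are invented for illustration; they are not the CU Boulder tool's features.

```python
# Illustrative sketch only: feature names, thresholds and weights are
# assumptions for this example, not the real screening tool.

def red_flag_score(journal):
    """Score a journal record on a few heuristic red flags (0 = clean)."""
    score = 0
    # Suspiciously fast turnaround from submission to publication
    if journal.get("median_days_to_publish", 365) < 14:
        score += 2
    # High rate of self-citation across the journal's articles
    if journal.get("self_citation_rate", 0.0) > 0.30:
        score += 2
    # Fees not clearly disclosed on the journal website
    if not journal.get("fees_disclosed", False):
        score += 1
    # No published peer-review policy
    if not journal.get("peer_review_policy", False):
        score += 1
    return score

suspect = {"median_days_to_publish": 5, "self_citation_rate": 0.4,
           "fees_disclosed": False, "peer_review_policy": False}
print(red_flag_score(suspect))  # high score -> queue for human review
```

A real system would learn such weights from labeled examples rather than hard-code them, but the output is the same in spirit: a score that triages journals for human review, not a verdict.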
Daniel Acuña, co-author of the University of Colorado Boulder study, emphasized to Nature that the tool is a prescreener rather than a judge.
But Jennifer Byrne, a cancer researcher at the University of Sydney in Australia, cautioned that the whole class of problematic journals works by appearing normal: unqualified outlets present themselves as legitimate journals.
The University of Colorado Boulder team added that they "sought to make the AI as interpretable as possible," framing it as a "firewall for science."
What exactly is a “peer-reviewed” study?
The phrase "peer-reviewed research" was thrown around a lot during the COVID-19 pandemic. Simply defined, peer review is the evaluation of a manuscript by the author's peers: independent experts who assess whether a study is sound before it is published. Major journals use it to protect reliability and reputation.
However, peer review is not perfect. The various models (single-blind, double-blind, open) each have advantages and disadvantages, and history shows that errors can be missed or the process gamed.
Follow the money – who funds the research?
A comprehensive scoping review in the American Journal of Public Health concluded that "industry-sponsored research tends to be biased toward the sponsor's products" and that "company interests can steer the research agenda away from the questions most relevant to public health."
The same review documented common tactics across industries (tobacco, food, pharmaceuticals): steering funding toward commercially useful topics, prioritizing lines of research that support legal or policy positions, and building credibility through publications and conferences.
How should we evaluate "scientific" claims?
- Check the journal – is it indexed by DOAJ?
- Does the site clearly explain its peer-review policies, fees, and licensing? (The AI flagged journals with exactly these gaps.)
- Look for a peer-review trail: do editors name reviewers or publish reports (transparent/open review), or is the process opaque?
- Follow the funding disclosures: who paid? Are conflicts of interest declared? Funding can shift research agendas and outcomes.
- Beware of speed and spam: super-fast acceptance and mass solicitation emails are red flags.
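The checklist above can be turned into a simple triage routine. The field names below are assumptions made up for this sketch, not a real API or dataset schema.

```python
# Hypothetical helper: encodes the evaluation checklist as a triage report.
# Field names are invented for illustration.

CHECKLIST = [
    ("indexed_in_doaj", "Indexed in DOAJ"),
    ("policies_on_site", "Peer-review policy, fees and licensing stated"),
    ("open_review_trail", "Reviewers named or reports published"),
    ("funding_disclosed", "Funding and conflicts of interest declared"),
    ("no_spam_or_rush", "No mass solicitation or ultra-fast acceptance"),
]

def triage(journal):
    """Return the checklist items a journal fails."""
    return [label for key, label in CHECKLIST if not journal.get(key, False)]

j = {"indexed_in_doaj": True, "policies_on_site": False,
     "open_review_trail": False, "funding_disclosed": True,
     "no_spam_or_rush": True}
print(triage(j))  # lists the failed checks for a human to follow up on
```

As with the screening tool itself, a failed check is a prompt for closer human scrutiny, not proof that a journal is predatory.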
AI can spotlight large-scale anomalies, such as strange citation patterns, suspicious turnaround times, and journals with murky governance. Used routinely, it could become a powerful early-warning system. But even its creators insist on ultimate human judgment: tools don't tell you what's true; they help us decide what is worth our attention.