
A Problem with Open Access Journals

by Steven Novella, Oct 07 2013

In a way the internet is a grand bargain, although one that simply emerged without anyone’s conscious decision. It greatly increases communication, lowers the bar for content creation and distribution, and allows open access to vast and deep databases of information. On the other hand, the traditional barriers of quality control are reduced or even eliminated, leading to a “wild west” of information. As a result it is already a cliché to characterize our current times as the “age of misinformation.”

As a social-media skeptic, I feel the cut of both edges quite deeply. With podcasts, blogs, YouTube videos, and other content, I can create a network of content creation and distribution that can compete with any big media outlet. I can use these outlets to correct misinformation, analyze claims, engage in debates, and debunk fraud and myths.

On the other hand, the fraud, myths, and misinformation are multiplying at frightening rates on the very same platforms. It is difficult to gauge the net effect – perhaps that’s a topic for another post.

For this post I will discuss one of the most disturbing trends emerging from the internet phenomenon – the proliferation of poor-quality science journals, specifically open access journals. The extent of this problem was recently highlighted by a “sting” operation published by Science magazine.

According to the Directory of Open Access Journals (DOAJ):

We define open access journals as journals that use a funding model that does not charge readers or their institutions for access. From the BOAI definition of “open access”, we support the rights of users to “read, download, copy, distribute, print, search, or link to the full texts of these articles” as mandatory for a journal to be included in the directory.

This is great, and open access has many supporters, including me. But all new “funding models” have the potential to create perverse incentives. Under the traditional model of print publishing, money was made through advertising and subscription fees. Subscriptions are driven by quality and impact factor, creating an incentive to maintain rigorous peer review and high overall quality.

Open access journals frequently make their money by charging the author a publication fee. This creates an incentive to publish a lot of papers of any quality. In fact, if you could create a shell of a journal, with minimal staff, and publish many papers online at little cost, that could generate a nice revenue stream. Why not create hundreds of such journals, covering every niche scientific and academic area?

This, of course, is what has happened. We are still in the middle of the explosion of open access journals. At their worst they have been dubbed “predatory” journals for charging hidden fees, exploiting naive academics, and essentially being scams.

John Bohannon decided to run a sting operation to test the peer-review quality of open access journals. I encourage you to read his entire report, but here’s the summary.

He identified 304 open access journals that publish in English. He created a fake scientific paper with blatant fatal flaws that rendered the research uninterpretable and the paper unpublishable. He actually created 304 versions of this paper by simply inserting different variables into the same text, while keeping the science and the data the same. He then submitted a version of the paper to each of the 304 journals under a different fake name from a different fake university (using African names and institutions to make their obscurity seem plausible).
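
To make the method concrete, here is a minimal sketch of that kind of templating in Python. The sting submission was a full manuscript of the form “molecule X from lichen Y inhibits the growth of cancer cell Z”; the one-sentence template and the variable pools below are hypothetical stand-ins, not the actual values Bohannon used.

    import itertools

    # Simplified stand-in for the sting's manuscript template. Only the
    # inserted variables changed between versions, never the (fatally
    # flawed) science or the data.
    TEMPLATE = (
        "We report that {molecule}, extracted from the lichen {lichen}, "
        "inhibits the growth of {cell_line} cells."
    )

    # Hypothetical variable pools (illustrative values, not the real ones).
    molecules = ["compound A", "compound B", "compound C"]
    lichens = ["lichen alpha", "lichen beta"]
    cell_lines = ["cell line 1", "cell line 2"]

    # Every combination yields a superficially distinct paper.
    variants = [
        TEMPLATE.format(molecule=m, lichen=l, cell_line=c)
        for m, l, c in itertools.product(molecules, lichens, cell_lines)
    ]

    print(len(variants), "variants")  # 3 * 2 * 2 = 12 in this toy example
    print(variants[0])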

The result? Over half of the papers were accepted for publication. I think it’s fair to say that any journal that accepted such a paper is fatally flawed and should be considered a bogus journal.

This, of course, is a huge problem. Such journals allow for the flooding of the peer-reviewed literature with poor quality papers that should never be published. This is happening at a time when academia itself is being infiltrated with “alternative” proponents and post-modernist concepts that are anathema to objective standards.

Combine this with the erosion of quality control in science journalism, also thanks to the internet. Much of what passes as science reporting is simply cutting and pasting press releases from journals, including poor-quality open access journals hoping for a little free advertising.

At least this creates plenty of work to keep skeptics busy.

What this means for everyone is that you should be highly wary of any published study, especially if it comes from an obscure journal. The problem highlighted with this sting is not unique to open access journals. There are plenty of “throw-away” print journals as well. And even high impact print journals may be seduced into publishing a sexy article with dubious research. Michael Eisen reminds us about the arsenic DNA paper that Science itself published a few years ago.

You should definitely look closely at the journal in which a paper is published. But also, do not accept the findings of any single paper. Reliable scientific results emerge only after replication and the building of consensus.

Perhaps the Science paper will serve as a sort-of Flexner report for open access journals. In 1910 the Flexner report exposed highly variable quality among US medical schools, resulting in more than half of them shutting down, and much tighter quality control on those that remained open. The Flexner report is often credited with bringing US medical education into the scientific era.

In order to tame the wild west, we need clearinghouses that provide careful review and a stamp of approval for quality control. The DOAJ tries to do this, stating:

For a journal to be included it should exercise quality control on submitted papers through an editor, editorial board and/or a peer-review system.

Clearly such review needs to be more robust. The integrity of the published literature is a vital resource of human civilization. As we learn to deal with the consequences of open access, intended and unintended, we need to develop new institutions of quality control and science-based standards.

14 Responses to “A Problem with Open Access Journals”

  1. Jonathan Jarry says:

    I agree that the open-access model makes it easy and appealing to create “shell” journals with the sole intent to make money. However, I find that Science’s sting was biased against open-access journals and that their findings invite a similar operation to test the supposedly more rigorous peer-review process of traditional journals. They have not proven that open-access journals allow more crap to be published than traditional journals; they have simply shown that the majority of them are willing to publish crap. I want to see this experiment redone in traditional journals so that we can compare the laxities in both systems. I’m not sure I really trust the execution of the traditional peer-review system anymore….

  2. Gerry says:

    Do we not sort of have a QC for these journals? PubMed listing? When I want to check the quality of someone’s CV, I search on PubMed. If the paper does not appear there, it may as well not exist. Not to mention the fact that anyone who is going to look for references for a paper will also search on PubMed. Many (most) of these predatory journals are not listed. This, more than anything else, makes them no more than resume padding and digital garbage. If no one is going to read your paper, why publish it?
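
    A check like this can even be automated against NCBI’s public E-utilities endpoint. Here is a minimal sketch (the journal title is a made-up example, and a PubMed listing is a rough filter, not a guarantee of quality):

        import requests

        # NCBI's public E-utilities search endpoint.
        ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

        def pubmed_article_count(journal: str) -> int:
            """Return how many PubMed records are attributed to a journal."""
            params = {
                "db": "pubmed",
                "term": f'"{journal}"[Journal]',  # search by journal title
                "retmode": "json",
            }
            resp = requests.get(ESEARCH, params=params, timeout=10)
            resp.raise_for_status()
            return int(resp.json()["esearchresult"]["count"])

        # Made-up journal title, purely for illustration.
        journal = "Journal of Implausible Results"
        print(journal, "->", pubmed_article_count(journal), "indexed articles")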

  3. Martin says:

    How does the peer review process actually work? Are the reviewers required to carry out the experiment(s) contained in the paper and analyse the results or do they just read the paper and give it a thumbs up or down?

    • Max says:

      They read the paper and give it a thumbs up or down, sometimes with constructive criticism.

    • Daniel says:

      Someone can correct me if I’m wrong, but I imagine the type and significance of the claim has something to do with the level of scrutiny.

      So if you claim that you’ve mastered cold fusion or a perpetual motion machine, your experiments and data will be reviewed with the finest tooth comb imaginable. If there’s a more mundane conclusion (I suppose 99 percent of them), peer review is probably more lax.

    • Yair says:

      Reviewers do not replicate the experiments, and a thorough analysis of your data by someone else is rare (it takes a lot of time and effort to do a proper analysis).

      Reviewers usually do more than give a simple thumbs up or down, however, by pointing out specific concerns that the paper missed and needs to address. This often requires gathering more data (to account for another variable, say, or to rule out an alternative explanation the reviewer raised), but only rarely does it require redoing the entire experiment.

      Most often, you need to make some minor corrections, such as linking to some literature you overlooked or even just clarifying a few points. In some cases, the reviewers outright reject the work, either because it’s bad science or because it doesn’t fit the journal.

  4. Dave Rockwell says:

    That integrity is more than just a vital resource for civilization; it is an essential component, without which everything will decay back to medievalism in due course.

  5. Yair says:

    It should be noted that many of these journals don’t have a peer review process at all. Still, I believe that many that do have one agreed to publish this spoof paper.

    There is a need to come up with an incentive structure that rewards quality yet allows for open access for the benefit of all. I can’t think of one.

  6. Bill says:

    “Perhaps the Science paper will serve as a sort-of Flexner report for open access journals. In 1910 the Flexner report exposed highly variable quality among US medical schools, resulting in more than half of them shutting down, and much tighter quality control on those that remained open. The Flexner report is often credited with bringing US medical education into the scientific era.”

    A bit ironic, considering how many medical schools are now opening centers for snake-oil (oh, sorry…I meant complementary and alternative (oh, sorry…I meant integrative)) practices.

  7. Max says:

    A similar funding model gives rise to diploma mills, yet people still pay big bucks to earn a diploma from a reputable university.
    Sting operations have exposed diploma mills as well.

  8. Loren Petrich says:

    John Bohannon has conceded that he did not send his paper to traditional subscription journals, because their turnaround times are often weeks to months, meaning that he would not have gotten much by way of results.

    But what he discovered was nevertheless eye-opening, like journals in India pretending to be in the US or Europe, and low-standards journals published by some big names in journal publishing.

  9. MJN says:

    People insist on calling it a sting operation against open access. Well, in a way that’s true – but as many comments have pointed out, without a comparison among traditional-model journals it doesn’t really say much about OA.
    It should really be considered a sting against peer review: how badly it works in many cases, and that it often does work. One of my pet peeves is when people call peer review a gold standard for distinguishing science from dross. It’s not. The gold standard is replication.
    So, see the sting more as an eye-opener to those who naively thought that what is published in peer-reviewed journals is true…