Recently, I ordered a book via interlibrary loan. I entered the bibliographic details into an online form on my university library’s site. My library received the form, a librarian looked up the book in a catalogue and sent a request to a German library. There, another librarian collected the book and sent it to my university library. I got an email when the book arrived and went to the library to collect it. I read the book and scanned the chapters relevant to me so that I had them for later reference. Then I returned the book to my library, which sent it back to Germany.
Why did I tell this lengthy and slightly boring story? Because there are still scholars who think that digital publishing and open access are seriously harming science. Douglas Fields makes many problematic assumptions in this article, and I do not want to address all of them. Jalees Rehman and Björn Brembs have already written very thoughtful replies that address most of the issues. I only want to focus on one statement that is outright wrong: the assumption that in the past, the peer-review system allowed only world-class research to be published, whereas open access journals are now flooding the literature.
This cannot be true, if only because science has been growing exponentially for the last 400 years. Even in 1613, an author named Barnaby Rich decried the increase in literature:
One of the diseases of this age is the multiplicity of books; they doth so overcharge the world that it is not able to digest the abundance of idle matter that is every day hatched and brought forth into the world. (cited after Price 1961)
As we can see, information overload is not a uniquely contemporary problem in science. In fact, the first journals were established because people could not read all the books that were published. Barnaby Rich himself authored 26 books in his lifetime, which is a high output even by today’s standards. This led two information scientists to describe the Barnaby Rich Effect (Braun and Zsindely 1985):
It’s always the other author(s) who publishes too much and “pollutes”, “floods”, “eutrophicates” the literature, never me.
Keeping that enormous output in mind, it is impossible that all pre-web research was world-class. Following Dr. Fields’ rationale, not only would every piece of published research have to be world-class, but there would also always have to be three world-class researchers available to review it. It is easy to see that this cannot be the case. There is a good reason why measures to judge the quality of research output have become so popular: in a world with an ever-increasing number of researchers, journals, and papers, they are used to separate the wheat from the chaff (how well they are doing that is the subject of an ongoing discussion…).
Research into the quality of papers has shown consistently and over time that quality follows a power law. There are only a few publications that publish world-class research. The papers in these publications get cited often and thus have a high impact. The vast majority of papers, however, go by almost unnoticed. Therefore, it is not the fault of open access journals that so much research gets published: it is rather an artefact of the enormous growth of science. Bad research has always been published; in most cases, it just did not get any attention. And in the worst case, world-class research went unnoticed because it was simply not picked up, like Mendel’s lost paper.
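To get an intuition for what such a power-law distribution implies, here is a minimal, purely illustrative simulation (not from the original post, and the parameters are assumptions): it draws citation counts for 10,000 hypothetical papers from a heavy-tailed Pareto distribution, a common stand-in for power-law citation data, and shows how a small elite of papers accounts for a large share of all citations while most papers receive none.

```python
import random

random.seed(42)

# Assumption: citation counts follow a Pareto distribution with shape
# alpha = 1.5; these are illustrative numbers, not empirical data.
N = 10_000
citations = [int(random.paretovariate(1.5)) - 1 for _ in range(N)]

citations.sort(reverse=True)
top_1_percent = sum(citations[: N // 100])  # citations of the top 1% of papers
total = sum(citations)

print(f"Top 1% of papers receive {top_1_percent / total:.0%} of all citations")
print(f"Papers with zero citations: {sum(c == 0 for c in citations) / N:.0%}")
```

Running this, the top 1% of papers collect a disproportionate share of citations while well over half of the papers are never cited at all, which is the qualitative pattern the citation literature describes.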
Taking publications into the digital age
Whatever the world of digital scholarship will look like, I expect that scholars will still judge the quality of papers based on certain metrics and quality indicators, and read only those papers that seem worth reading to them. A whole movement called altmetrics is devoted to adapting scholarly metrics to the digital age. But apart from the issue of filtering, digital publishing offers many opportunities that the existing system does not. With the content of papers being machine-readable, we can start analyzing and linking it in ways that were not possible before. One practical application of these analyses is recommendations, which might help to unearth a valuable piece of research that previously went unnoticed. Another application that I am currently involved in is using digital links between papers to create timely overviews of research fields.
But we can go even further than that: towards a truly open science. By making data and source code available, and linking them to the research results through an open methodology, we can make research more reproducible – and therefore make it easier to weed out bad research.
True, there are certain problems with respect to open access that need to be solved. Fields makes a valid point when he hints at predatory publishers that are only interested in profit and do not provide quality control. But scholars have already started to collect information about questionable open access publishers. He also talks about the loss of blinded reviews in open peer review. Anonymity gives reviewers the possibility to be honest even towards the most renowned researchers and institutions. This issue needs to be addressed, as I wrote in an earlier blog post.
Nevertheless, I believe that digital publishing and open access are such a great opportunity for science that they are worth the risk. The open access version of the story at the beginning of this post is rather short: I look up the book in a search engine and download the PDF. Think about what a difference that would make, not only in my case, but also for people in regions of the world where interlibrary loans do not exist and libraries cannot afford to pay for journal subscriptions. Leaving all of the other potential benefits aside, an open science would mean that billions of people get access to knowledge that was not available to them before.