Photo by Cory Doctorow, Slides by Lora Aroyo

I spent last week at Web Science 2013 in Paris, and what a well-spent time that was. Web Science was without doubt the most diverse conference I have ever attended. One reason for this diversity is that Web Science was co-located with CHI (Human-Computer Interaction) and Hypertext. But most importantly, the Web Science community itself is very diverse: there were more than 300 participants from a wide array of disciplines, and the conference spanned talks from philosophy to computer science (and everything in between), with keynotes by Cory Doctorow and Vint Cerf. This resulted in many insightful discussions, looking at the web from a multitude of angles. I really enjoyed the wide variety of talks.

Nevertheless, some talks failed to resonate with the audience. It seems to me that this was mostly because they were too rooted in a single discipline. Some presenters assumed a common understanding of the problem under discussion and used a lot of domain-specific vocabulary, which made their talks hard to follow. Don’t get me wrong: most presenters tried to appeal to the whole audience, but with some subjects this seemed to be impossible.

To me, this shows that better insight is needed into what Web Science actually is, and that more discussion is needed on what should be researched under this banner. There seems to be some uncertainty about this, which was also reflected in the peer reviews. Hugh Davis, the general chair of WebSci’13, highlighted this in his opening speech.

I think that Web Science is a good example of a field where open peer review could contribute to a common understanding and to better communication among the actors involved. I have been critical of open processes in the past because they take away the benefits of blinding, a point that Mark Bernstein, the program chair, also stressed in a tweet.

Nowadays, however, I think that the potential benefits of open peer review (transparency, increased communication, incentives to write better reviews) outweigh the effects of taking away the anonymity of reviewers. Science will always be influenced by power structures, but with open peer review they are at least visible. Don’t get me wrong: I really like the inclusive approach to Web Science that the organizers have taken. The web cannot be understood through the paradigm of a single discipline, and at this point in time it is very valuable to get input from all sides. In my opinion, open peer review could help facilitate this discussion before and after the conference as well.

Contributions

I made two contributions to this year’s Web Science conference. First, I presented a paper entitled “Towards a Model of Interdisciplinary Teamwork for Web Science: What can Social Theory Contribute?”, written together with Sebastian Dennerlein, in the Social Theory for Web Science workshop. In this position paper, we argue that social scientists and computer scientists do not work together in an interdisciplinary way because of fundamentally different approaches to research, and we sketch a model of interdisciplinary teamwork to overcome this problem. The feedback on this talk was very interesting: on the one hand, participants could relate to the problem; on the other hand, they alerted us to many other influences on interdisciplinary teamwork. For one, there is often disagreement at the very beginning of a research project about what the problem actually is. Furthermore, the disciplines themselves are fragmented and often follow different paradigms. We will consider this feedback when specifying the formal model. You can find the paper here and the slides of my talk below.

In general, the workshop was very well attended and there was a certain sense of common understanding regarding opportunities and challenges of applying social theory in web science. All in all, I think that a community has been established that could produce interesting results in the future.

My second contribution was a poster entitled “Head Start: Improving Academic Literature Search with Overview Visualizations based on Readership Statistics”, which I co-wrote with Kris Jack, Christian Schlögl, Christoph Trattner, and Stefanie Lindstaedt. As you may recall, Head Start is an interactive visualization of the research field of Educational Technology based on co-readership structures. It was received very positively: many participants, among them scientometricians as well as educational technologists, were interested in the idea of using readership statistics for mapping. Many comments concerned how the prototype could be extended. You can find the paper at the end of the post and the poster below.

Head Start

Several participants noted that they would like to adapt and extend the visualization. Clare Hooper, for example, is working on a content-based representation of the field of Web Science, and it would be interesting to combine our approaches. This encouraged me even more to open source the software as soon as possible.
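For readers wondering what “co-readership” means in practice: two papers are considered related when many readers have both in their libraries. Below is a minimal sketch of that idea, with a made-up readership matrix and a plain cosine normalization; it is an illustration of the concept, not the actual Head Start implementation.

```python
import numpy as np

# Toy readers-by-papers matrix: rows are readers, columns are papers,
# a 1 means the reader has that paper in their library.
# The values are invented for illustration; real data would come from
# a reference manager's readership statistics.
readership = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
])

# Co-readership counts: entry (i, j) is the number of readers
# who have both paper i and paper j in their libraries.
co_readership = readership.T @ readership

# Normalize to cosine similarity so that heavily read papers
# do not dominate the map.
norms = np.sqrt(np.diag(co_readership))
similarity = co_readership / np.outer(norms, norms)

print(similarity.round(2))
```

A similarity matrix like this could then be fed into a clustering or layout routine to arrange the papers as an overview map, which is the general shape of what a readership-based visualization does.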

All in all, it was a very enjoyable conference. I also like the way that the organizers innovate in the format every year. The pecha kucha session worked especially well in my opinion, sporting concise and entertaining talks throughout. Thanks to all organizers, speakers and participants for making this conference such a nice event!

Citation
Peter Kraker, Kris Jack, Christian Schlögl, Christoph Trattner, & Stefanie Lindstaedt (2013). Head Start: Improving Academic Literature Search with Overview Visualizations based on Readership Statistics. Web Science 2013.


Open access logo by PLOS

Recently, I ordered a book via interlibrary loan. I entered the bibliographic details into an online form on my university library’s site. My library received the form, a librarian looked up the book in a catalogue and sent a request to a German library. There, another librarian collected the book and sent it to my university library. I got an email when the book arrived, and went to the library to collect it. I read the book and scanned the chapters relevant to me so that I had them for later reference. Then I returned the book to my library which sent it back to Germany.

Why did I tell this lengthy and slightly boring story? Because there are still scholars who think that digital publishing and open access are seriously harming science. Douglas Fields makes many problematic assumptions in this article, and I do not want to address all of them. Jalees Rehman and Björn Brembs have already written well-crafted replies that address most of the issues. I only want to focus on one statement which is outright wrong: the assumption that in the past the peer-review system allowed only world-class research to be published, whereas now open access journals are flooding the literature.

After all, science has been growing exponentially for the last 400 years. Even in 1613, an author called Barnaby Rich decried the increase in literature:

One of the diseases of this age is the multiplicity of books; they doth so overcharge the world that it is not able to digest the abundance of idle matter that is every day hatched and brought forth into the world. (cited after Price 1961)

As we can see, information overload is not a new problem in science. In fact, the first journals were established because people could not read all the books that were published. Barnaby Rich himself authored 26 books in his lifetime, which is a high output even by today’s standards. This led two information scientists to proclaim the Barnaby Rich Effect (Braun and Zsindely 1985):

It’s always the other author(s) who publishes too much and “pollutes”, “floods”, “eutroficates” the literature, never me.

Keeping that enormous output in mind, it is impossible that all research published before the web was world-class. Following Dr. Fields’ rationale, not only would every published piece of research have to be world-class, but there would also always have to be three world-class researchers on hand to review it. It is easy to see that this cannot have been the case. There is a good reason why measures to judge the quality of research output have become so popular: in a world with an ever-increasing number of researchers, journals, and papers, they are used to separate the wheat from the chaff (how well they are doing that is the subject of an ongoing discussion…).

Research into the quality of papers has shown consistently and over time that quality follows a power law. Only a few publications publish world-class research. The papers in these publications get cited often and thus have a high impact. The vast majority of papers, however, go by almost unnoticed. It is therefore not the fault of open access journals that so much research gets published: it is rather an artefact of the enormous growth of science. Bad research has always been published; in most cases, it simply did not get any attention. And in the worst case, world-class research went unnoticed because it was not picked up, like Mendel’s lost paper.
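To make the power-law claim concrete: in such a distribution, the fraction of papers receiving a given number of citations falls off polynomially, so a handful of papers collect most of the attention while the long tail gets very little. A common formulation looks like this (the exponent range is a typical empirical finding from the scientometrics literature, stated here as an assumption rather than a figure from this post):

```latex
% p(c): fraction of papers that receive c citations.
% alpha: scaling exponent; citation distributions are commonly
% reported with alpha roughly between 2 and 3.
\[
  p(c) \propto c^{-\alpha}, \qquad \alpha \approx 2\text{--}3
\]
```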

Taking publications into the digital age

Whatever the world of digital scholarship ends up looking like, I expect that scholars will still judge the quality of papers based on certain metrics and quality indicators, and read only those that are worth reading to them. A whole movement called altmetrics is devoted to adapting scholarly metrics to the digital age. But apart from the issue of filtering, digital publishing offers many opportunities that the existing system does not. With the content of papers being machine-readable, we can start analyzing and linking it in ways that were not possible before. One practical application of these analyses is recommendations, which might help to unearth valuable pieces of research that previously went unnoticed. Another application, one that I am currently involved in, is using digital links between papers to create timely overviews of research fields.

But we can go even further than that, towards a truly open science. By making data and source code available, and linking them to the research results through an open methodology, we can make research more reproducible, and therefore make it easier to weed out bad research.

True, there are certain problems with respect to open access that need to be solved. Fields makes a valid point when he hints at predatory publishers that are only interested in profit and do not provide quality control. But scholars have already started to collect information about questionable open access publishers. He also talks about the loss of blinded reviews in open peer review. Anonymity gives reviewers the possibility to be honest even towards the most renowned researchers and institutions. This issue needs to be addressed, as I wrote in an earlier blogpost.

Nevertheless, I believe that digital publishing and open access are such a great opportunity for science that it is worth taking the risk. The open access version of the story at the beginning of this post is rather short: I look up the book in a search engine and download the PDF. Think about the difference that would make, not only in my case, but also for people in regions of the world where interlibrary loans do not exist and libraries cannot afford journal subscriptions. All other potential benefits aside, an open science would mean that billions of people get access to knowledge that was not available to them before.

There are still some open spots for junior reviewers at the IJTEL Young Researcher Special Issue on “State-of-the-Art in Technology Enhanced Learning”. This is a good opportunity to get to know the work of a referee. Junior reviewers will be asked to review 1-2 papers; furthermore, we require junior reviewers to participate in a workshop on article reviewing. If you want to become a reviewer, please apply here. The full CfR can be found below.

International Journal of Technology Enhanced Learning (IJTEL)

Call for Papers

Young Researcher Special Issue on: “State-of-the-Art in TEL”

Guest Editors:
Peter Kraker, Graz University of Technology, Austria
Moshe Leiba, Tel Aviv University, Israel
Martina Rau, Carnegie Mellon University, USA
Derick Leony and Israel Gutiérrez Rojas, Universidad Carlos III de Madrid, Spain
Dirk Börner, Open Universiteit in the Netherlands, The Netherlands
Antigoni Parmaxi, Cyprus University of Technology, Cyprus
Wolfgang Reinhardt, University of Paderborn, Germany

Introduction

The International Journal of Technology Enhanced Learning (IJTEL) invites paper submissions for a special issue targeting young researchers in the community of Technology Enhanced Learning (TEL). This call for papers encourages a review of the state of the art in TEL topics, complemented by a description of the current and future work carried out by the authors doing research on these topics.

This special issue is directed at all young researchers, such as PhD students, post-graduate students, and post-docs, working on topics related to TEL, both in academia and industry, and coming from the different disciplines of the community (technologists, educationists, psychologists, etc.).

The purpose of this special issue is manifold: (a) to provide a better overview of TEL research lines; (b) to investigate and expand current TEL research themes; (c) to promote international and multidisciplinary collaboration and exchange of ideas among young researchers; (d) to encourage young researchers to formalise their research questions, topics, and methodologies.

The International Journal of Technology Enhanced Learning (IJTEL) recognizes the value and importance of the reviewing process in the overall publication process, both in shaping the individual manuscript and in upholding the reliability and reputation of a journal. Within this framework, the identification and selection of reviewers who have expertise and interest in the topics appropriate to each manuscript are essential to ensuring a productive review process.

Reviewer profiles

We are inviting:

  • Junior reviewers (post-graduate students, PhD students, recent post-docs) working on research related to TEL in academia or industry, who would like to gain experience as reviewers and participate in a workshop on article reviewing;

  • Experienced reviewers working on research related to TEL who would like to participate in the reviewing process for the special issue.

Workshop for Junior Reviewers

We will hold a workshop introducing article reviewing, focused on the subject of the special issue. The workshop will be held online and will take place in late February. The participation of junior reviewers is mandatory, since the workshop is the core of their mentoring process. Experienced reviewers are welcome to join in order to share advice drawn from their experience.

Review process

Contributions to the Young Researcher Special Issue of the International Journal of Technology Enhanced Learning (IJTEL) will undergo a double-blind review process. All submissions will be reviewed by two or three reviewers, including at least one experienced reviewer. Junior reviewers must have participated in the workshop in order to be assigned articles to review.

How to apply

Please use this form to apply as a reviewer: http://bit.ly/zkcTlG

Important dates

Application deadline: 10/02/2012
Review of full papers: 01/04/2012
Publication of the special issue: second half of 2012

In the Research 2.0 workshop at EC-TEL, we had a lengthy discussion about new forms of academic peer review. Among other issues, blinding was one of the topics that came up. Some dismissed blinding in favour of a completely open process, but I am not so sure about that.

I agree that blinding does a bad job of protecting the author(s), especially in close-knit communities. References to earlier work, projects, or simply the focus of the paper often give away the submitters. In combination with a closed process (i.e. the reviews do not get published with the paper), blinding enables biased and ill-founded reviews. This is bad news in a world where it often comes down to a single review whether your paper or project gets accepted.
Still, blinding serves a very important social function: it protects the reviewers and allows them to be honest without fearing repercussions. After all, the author of the paper you just reviewed could well be the reviewer of your next project proposal. Therefore, it plays an important role in academic quality control.
Nevertheless, we all get bad reviews from time to time; reviews which make it apparent that the reviewer just had a bad day, and that it was not the quality of our submission that caused the rejection. The question is: how can we improve this situation, and how can the capabilities of the web help us in this endeavour?

A middle way

You might have noticed the Call for Papers for the IJTEL Young Researcher Special Issue, where we are seeking to attract visionary articles by PhD students in the field of Technology Enhanced Learning. When we put together the CfP, we not only wanted to broaden the scope of articles we would allow for submission; we also wanted to innovate on the reviewing process. One thing we agreed upon very early was to have teams of junior and experienced reviewers, whereby the experienced reviewers will act as mentors for their younger colleagues.
Then I brought some new forms of peer review into the discussion. A completely open, unblinded peer review was off the table pretty fast: as a young researcher, you do not want your vision to be crushed in public, possibly by an authority in the field. Still, there was a common understanding that we were not completely satisfied with going down the road of a completely closed and blinded review. Therefore, after some heated discussions, we settled on a middle way: we will have the authors submit abstracts first for a closed and double-blinded peer review. Then we will put both abstracts and reviews (still blinded) on TEL Europe and open them for comments by the community. This way, we want to make the process more transparent and increase the quality of reviews, while at the same time protecting authors and reviewers.

Thomas posted some links on TEL Europe about the open peer review process of the Semantic Web Journal. They took a similar approach, although they made blinding optional for reviewers and dismissed it completely for authors.

What do you think? Is it enough to make corrections to the traditional peer review process, or do we need a completely new approach in light of the capabilities of the web?
