Archive

Open Science

With “Ich bin Open Science!”, we want to raise public awareness of open science in Austria and beyond. The project, a collaboration between Know-Center and FH Joanneum, has been submitted to netidee 2015. In the video (German only for the moment), we explain the project idea, and you can see the first testimonials from people who lend a face to open science. Why are you committed to openness in science and research?

Note: This is a reblog from the OKFN Science Blog.

It’s hard to believe that it has been over a year since Peter Murray-Rust announced the new Panton Fellows at OKCon 2013. I am immensely proud that I was one of the 2013/14 Panton Fellows and the first non-UK-based fellow. In this post, I will recap my activities during the last year and give an outlook on things to come after the end of the fellowship. At the end of the post, you can find all outputs of my fellowship at a glance. My fellowship had two focal points: the work on open and transparent altmetrics, and the promotion of open science in Austria and beyond.

Open and transparent altmetrics

Peter Kraker on stage at the Open Science Panel Vienna (Photo by FWF/APA-Fotoservice/Thomas Preiss)


The blog post entitled “All metrics are wrong, but some are useful” sums up my views on (alt)metrics: I argue that no single number can determine the worth of an article, a journal, or a researcher. Instead, we have to find those numbers that give us a good picture of the many facets of these entities and put them into context. Openness and transparency are two necessary properties of such an (alt)metrics system, as this is the only sustainable way to uncover inherent biases and to detect attempts at gaming. In my comment on the NISO whitepaper on altmetrics standards, I therefore maintained that openness and transparency should be strongly considered in altmetrics standards.

In another post on “Open and transparent altmetrics for discovery”, I laid out that altmetrics have a largely untapped potential for visualization and discovery that goes beyond rankings of top papers and researchers. In order to help uncover this potential, I released the open source visualization Head Start, which I developed as part of my PhD project. Head Start gives scholars an overview of a research field based on relational information derived from altmetrics. In two blog posts, “New version of open source visualization Head Start released” and “What’s new in Head Start?”, I chronicled the development of a server component, the introduction of the timeline visualization created by Philipp Weißensteiner, and the integration of Head Start with Conference Navigator 3, a nifty conference scheduling system. With Chris Kittel and Fabian Dablander, I took first steps towards automatic visualizations of PLOS papers. Recently, Head Start also became part of the Open Knowledge Labs. In order to make the maps created with Head Start openly available to all, I will set up a server and website for the project in the months to come. The ultimate goal would be an environment where everybody can create their own maps based on open knowledge and share them with the world. If you are interested in contributing to the project, please get in touch with me, or have a look at the open feature requests.


Evolution of the UMAP conference visualized in Head Start. More information in Kraker, P., Weißensteiner, P., & Brusilovsky, P. (2014). Altmetrics-based Visualizations Depicting the Evolution of a Knowledge Domain. 19th International Conference on Science and Technology Indicators (STI 2014), 330-333.

Promotion of open science and open data

Regarding the promotion of open science, I teamed up with Stefan Kasberger and Chris Kittel of openscienceasap.org and the Austrian chapter of Open Knowledge for a series of events that were intended to generate more awareness in the local community. In October 2013, I was a panelist at the openscienceASAP kick-off event at the University of Graz, entitled “The Changing Face of Science: Is Open Science the Future?”. In December, I helped organize an OKFN Open Science Meetup in Vienna on altmetrics. I also gave an introductory talk on this occasion that got more than 1,000 views on Slideshare. In February 2014, I was interviewed for the openscienceASAP podcast about my Panton Fellowship and the need for an inclusive approach to open science.

In June, Panton Fellowship mentors Peter Murray-Rust and Michelle Brook visited Vienna. The three-day visit, made possible by the Austrian Science Fund (FWF), kicked off with a lecture by Peter and Michelle at the FWF. On the next day, the two led a well-attended workshop on content mining at the Institute of Science and Technology Austria. The visit ended with a hackday organized by openscienceASAP, and an OKFN-AT meetup on content mining. Finally, last month, I gave a talk on open data at the “Open Science Panel” on board the MS Wissenschaft in Vienna.

I also became active in the Open Access Network Austria (OANA) of the Austrian Science Fund. Specifically, I am contributing to the working group “Involvement of researchers in open access”. There, I am responsible for a visibility concept for open access researchers. Throughout the year, I have also contributed to a monthly sum-up of open science activities in order to make these activities more visible within the local community. You can find the sum-ups (only available in German) on the openscienceASAP stream.

I also went to a lot of events outside Austria where I argued for more openness and transparency in science: OKCon 2013 in Geneva, SpotOn 2013 in London, and Science Online Together 2014 in Raleigh (NC). At the Open Knowledge Festival in Berlin, I was session facilitator for “Open Data and the Panton Principles for the Humanities. How do we go about that?”. The goal of this session was to devise a set of clear principles describing what we mean by Open Data in the humanities, what these principles should contain, and how to use them. In my role as an advocate for reproducibility, I wrote a blog post on why reproducibility should become a quality criterion in science. The post sparked a lot of discussion, and was widely linked and tweeted.

by Martin Clavey


What’s next?

The Panton Fellowship was a unique opportunity for me to work on open science, to visit open knowledge events around the world, and to meet many new people who are passionate about the topic. Naturally, the end of the fellowship does not mark the end of my involvement with the open science community. In my new role as a scientific project developer for Science 2.0 and open science at Know-Center, I will continue to advocate openness and transparency. As part of my research on altmetrics-driven discovery, I will also pursue my open source work on the Head Start framework. With regard to outreach work, I am currently busy drafting a visibility concept for open access researchers in the Open Access Network Austria (OANA). Furthermore, I am involved in efforts to establish a German-speaking open science group.

I had a great year, and I would like to thank everyone who got involved. Special thanks go to Peter Murray-Rust and Michelle Brook for administering the program and for their continued support. As always, if you are interested in helping out with one or the other project, please get in touch with me. If you have comments or questions, please leave them in the comments field below.

All outputs at a glance

Head Start – open source research overview visualization
Blog Posts
Audio and Video
Slides
Reports
Open Science Sum-Ups (contributions) [German]

Note: This is a reblog from the OKFN Science Blog. As part of my duties as a Panton Fellow, I will be regularly blogging there about my activities concerning open data and open science.


by AG Cann

Altmetrics are a hot topic in the scientific community right now. Classic citation-based indicators such as the impact factor are being complemented by alternative metrics generated from online platforms. Usage statistics (downloads, readership) are often employed, but links, likes, and shares on the web and in social media are considered as well. The altmetrics promise, as laid out in the excellent manifesto, is that they assess impact more quickly and on a broader scale.

The main focus of altmetrics at the moment is the evaluation of scientific output. Examples are the article-level metrics in PLOS journals and the Altmetric donut. ImpactStory has a slightly different focus, as it aims to evaluate the oeuvre of an author rather than an individual paper.

This is all well and good, but in my opinion, altmetrics have a huge potential for discovery that goes beyond rankings of top papers and researchers, a potential that is largely untapped so far.

How so? To answer this question, it is helpful to shed a little light on the history of citation indices.

Pathways through science

In 1955, Eugene Garfield created the Science Citation Index (SCI), which later went on to become the Web of Knowledge. His initial idea – next to measuring impact – was to record citations in a large index to create pathways through science. This makes it possible to link papers that do not share any keywords. It makes a lot of sense: you can talk about the same thing using totally different terminology, especially when you are not in the same field. Furthermore, terminology has proven to be very fluid, even within the same domain (Leydesdorff 1997). In 1973, Small and Marshakova realized – independently of each other – that co-citation is a measure of subject similarity and can therefore be used to map a scientific field.
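To make this concrete, here is a minimal sketch of how co-citation counts could be derived from reference lists. The paper IDs are invented for illustration; a real index would of course operate on millions of records.

```python
from collections import Counter
from itertools import combinations

# Each entry is the reference list of one citing paper (toy data).
reference_lists = [
    ["paper_a", "paper_b", "paper_c"],
    ["paper_a", "paper_b"],
    ["paper_b", "paper_c"],
]

# Two papers are co-cited when they appear in the same reference
# list; the more often that happens, the more similar their
# subjects are assumed to be.
co_citations = Counter()
for refs in reference_lists:
    for pair in combinations(sorted(set(refs)), 2):
        co_citations[pair] += 1

print(co_citations.most_common())
# [(('paper_a', 'paper_b'), 2), (('paper_b', 'paper_c'), 2),
#  (('paper_a', 'paper_c'), 1)]
```

A map of a field is then little more than a layout of the papers in which strongly co-cited pairs are drawn close together.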

Because citations take a considerable time to accumulate, however, co-citation maps are often a look into the past rather than a timely overview of a scientific field.

Altmetrics for discovery

In come altmetrics. Like citations, they can create pathways through science. After all, a citation is nothing else but a link to another paper. With altmetrics, it is not so much which papers are often referenced together, but rather which papers are often accessed, read, or linked together. The main advantage of altmetrics, as with impact measurement, is that they become available much earlier.


Bollen et al. (2009): Clickstream Data Yields High-Resolution Maps of Science. PLOS One. DOI: 10.1371/journal.pone.0004803.

One of the efforts in this direction is the work of Bollen et al. (2009) on clickstreams. Using the sequences of clicks to different journals, they created a map of science (see above).

In my PhD, I looked at the potential of readership statistics for knowledge domain visualizations. It turns out that co-readership is a good indicator for subject similarity. This allowed me to visualize the field of educational technology based on Mendeley readership data (see below). You can find the web visualization called Head Start here and the code here (username: anonymous, leave password blank).
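For illustration, here is a minimal sketch of the co-readership idea, using an invented reader-paper matrix rather than real Mendeley data. Cosine similarity between the paper columns gives a subject-similarity score from which a map can be laid out.

```python
import numpy as np

# Rows are readers, columns are papers; a 1 means the reader has
# the paper in their library (toy data, not real readership counts).
readership = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

# Co-readership: papers are similar if the same people read them.
# Cosine similarity between columns normalizes for popularity.
norms = np.linalg.norm(readership, axis=0)
similarity = (readership.T @ readership) / np.outer(norms, norms)

print(np.round(similarity, 2))
```

The normalization matters: without it, a handful of very popular papers would appear similar to everything else simply because everyone reads them.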

Why we need open and transparent altmetrics

The evaluation of Head Start showed that the overview is indeed more timely than maps based on citations. However, it also provided further evidence that altmetrics are prone to sample biases. In the visualization of educational technology, computer-science-driven areas such as adaptive hypermedia are largely missing. Bollen and Van de Sompel (2008) reported the same problem when they compared rankings based on usage data to rankings based on the impact factor.

It is therefore important that altmetrics are transparent and reproducible, and that the underlying data is openly available. This is the only way to ensure that all possible biases can be understood.

As part of my Panton Fellowship, I will try to find datasets that satisfy these criteria. There are several examples of open bibliometric data, such as the Mendeley API and the figshare API, which have adopted CC BY, but most usage data is not publicly available or cannot be redistributed. In my fellowship, I want to evaluate the goodness of fit of different open altmetrics data. Furthermore, I plan to create more knowledge domain visualizations such as the one above.

So if you know any good datasets, please leave a comment below. Of course, any other comments on the idea are much appreciated as well.

Note: This is a reblog from the OKFN Science Blog. As part of my duties as a Panton Fellow, I will be regularly blogging there about my activities concerning open data and open science.

Peer review is one of the oldest and most respected instruments of quality control in science and research. Peer review means that a paper is evaluated by a number of experts on the topic of the article (the peers). The criteria may vary, but most of the time they include methodological and technical soundness, scientific relevance, and presentation.

“Peer-reviewed” is a widely accepted sign of quality of a scientific paper. Peer review has its problems, but you won’t find many researchers who favour a non-peer-reviewed paper over a peer-reviewed one. As a result, if you want your paper to be scientifically acknowledged, you most likely have to submit it to a peer-reviewed journal, even though it will take more time and effort to get it published than in a non-peer-reviewed outlet.

Peer review helps to weed out bad science and pseudo-science, but it also has serious limitations. One of these limitations is that the primary data and other supplementary material, such as documentation and source code, are usually not available. The results of the paper are thus not reproducible. When I review such a paper, I usually have to trust the authors on a number of issues: that they have described the process of achieving the results as accurately as possible, that they have not left out any crucial pre-processing steps, and so on. When I suspect a certain bias in a survey, for example, I can only note that in the review; I cannot test for that bias in the data myself. When the results of an experiment seem too good to be true, I cannot inspect the data pre-processing to see if the authors left out any important steps.

As a result, later efforts to reproduce research results can lead to devastating outcomes. Wang et al. (2010), for example, found that almost none of the literature on a certain topic in computer science could be reproduced.

“Reproducible”: a new quality criterion

Needless to say, this is not a very desirable state. Therefore, I argue that we should start promoting a new quality criterion: “reproducible”. Reproducible means that the results achieved in the paper can be reproduced by anyone, because all of the necessary supplementary resources have been openly provided along with the paper.
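What could that look like in practice? Below is a minimal sketch of such a supplementary package; the file names are hypothetical, and the exact layout will differ by field and method.

```
paper/
  manuscript.pdf        the paper itself
  data/survey_raw.csv   primary data, under an open license
  code/analysis.py      every pre-processing and analysis step
  README                how to rerun the analysis from start to finish
```

The point is not the particular structure, but that a reviewer or reader can rerun the full chain from raw data to reported results.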

It is easy to see why a peer-reviewed and reproducible paper is of higher quality than a merely peer-reviewed one. You do not have to take the researchers’ word for how they calculated their results; you can reconstruct them yourself. As a welcome side-effect, this would make more datasets and source code openly available. Thus, we could start building on each other’s work and aggregate data from different sources to gain new insights.

In my opinion, reproducible papers could be published alongside non-reproducible papers, just like peer-reviewed articles are usually published alongside editorials, letters, and other non peer-reviewed content. I would think, however, that over time, reproducible would become the overall quality standard of choice – just like peer-reviewed is the preferred standard right now. To help this process, journals and conferences could designate a certain share of their space to reproducible papers. I would imagine that they would not have to do that for too long though. Researchers will aim for a higher quality standard, even if it takes more time and effort.

I do not claim that reproducibility solves all of the problems that we see in science and research right now. For example, it will still be possible to manipulate the data to a certain degree. I do, however, believe that reproducibility as an additional quality criterion would be an important step for open and reproducible science and research.

So that you can say to your colleague one day: “Let’s go with the method described in this paper. It’s not only peer-reviewed, it’s reproducible!”

Note: This is a reblog from the OKFN Science Blog. To my excitement and delight, I was recently awarded a Panton Fellowship. As part of my duties, I will be regularly blogging there about my activities concerning open data and open science.

Peter Kraker at Barcamp Graz 2012. Photo by Rene Kaiser


Hi, my name is Peter Kraker and I am one of the new Panton Fellows. After an exciting week at OKCon, I was asked to introduce myself and what I want to achieve during my fellowship, which I am very happy to do. I am a research assistant at the Know-Center of Graz University of Technology and a late-stage PhD student at the University of Graz. Like many others, I believe that an open approach is essential for science and research to make progress. Open science to me is about reproducibility and comparability of scientific output. Research data should therefore be put into the public domain, as called for in the Panton Principles.

In my PhD, I am concerned with research practices on the web and with how academic literature search can be improved through overview visualizations. I have developed and open-sourced a knowledge domain visualization called Head Start. Head Start is based on altmetrics data rather than citation data. Altmetrics are indicators of scholarly activity and impact on the web. Have a look at the altmetrics manifesto for a thorough introduction.

In my evaluation of Head Start, I noticed that altmetrics are prone to sample biases. It is therefore important that analyses based on altmetrics are transparent and reproducible, and that the underlying data is openly available. Contributing to open and transparent altmetrics will be my first objective as a Panton Fellow. I will establish an altmetrics data repository for the upcoming open access journal European Information Science. This will allow the information science community to analyse the field based on this data, and add an additional data source for the growing altmetrics community. My vision is that in the long run, altmetrics will not only help us to evaluate science, but also to connect researchers around the world.

My second objective as a Panton Fellow is to promote open science based on an inclusive approach. The case of the Bermuda Rules, which state that DNA sequences should be rapidly released into the public domain, has shown that open practices can be established if the community stands together. In my opinion, it is therefore necessary to get as many researchers aboard as possible. From a community perspective, it is the commitment to openness that matters, and the willingness to promote this openness. The inclusive approach puts the researcher, in his or her many roles, at the center of attention. This approach is not intended to replace existing initiatives, but to make researchers aware of these initiatives and to help them choose their own approach to open science. You can find more on that on my blog.

Locally, I will be working with the Austrian chapter of the Open Knowledge Foundation to promote open science based on this inclusive approach. Together with the Austrian Students’ Union, we will hold workshops with students, faculty, and librarians. I will also make the case for open science in the research communities that I am involved in. For the International Journal of Technology Enhanced Learning, for example, I will develop an open data policy.

I am very honored to be selected as a Panton Fellow, and I am excited to get started. If you want to work with me on one or the other objective, please do not hesitate to contact me. You can also follow my work on Twitter and on my blog. Looking forward to furthering the cause of open data and open science with you!

Open Science Logo v2

by gemmerich

Update: There is an OKFN pad devoted to discussing this idea. Please add your comments and critique there!

When Derick Leony, Wolfgang Reinhardt, Günter Beham, and I made the case for an open science in technology-enhanced learning back in late 2011, we discussed how open science could become a reality. We finally concluded that this was first and foremost a matter of consensus in the community:

Open Science is first and foremost a community effort. In fact we are arguing that reproducibility and comparability should become two of the standard criteria that every reviewer has to judge when assessing a paper. [..] These two criteria should be of equal importance as the established criteria, giving incentive to the authors to actually apply the instruments of Open Science.

In addition, journals and conferences ought to make the submission of source code, data, and methodological descriptions together with the paper mandatory for them to be published. Conferences and journals themselves should in turn commit to making the papers openly accessible. The case of the genetic sequence database GenBank, which stores DNA sequences and makes them available to the public, has shown that if publishers and conference organisers adopt new standards, they can be propagated quickly within the community. The huge success of GenBank is due to the fact that many journals adopted the Bermuda principles (Marshall 2001), which state among other things that DNA sequences should be rapidly released into the public domain.

There is a crucial interplay at work between individual researchers and other actors within a field, such as funding agencies, journals, and conferences. On the one hand, individual researchers are often bound by the rules that are made by those institutions, because they depend on them as sources of funding and as publication outlets. On the other hand, the boards and committees steering these institutions are (at least partly) made up of the same researchers. Many researchers are sitting on conference committees, editorial boards, and policy advisory boards. They are thus shaping the community and commonly defining what is shared practice among its participants. In this role, they can advocate open practices and propose rules that help establish an open science.

In my perception, the discourse in open science often runs along the lines of open vs. closed approaches. A lot of effort is put into determining what is truly open and what is actually still closed. In open access, for example, there is a heated debate about whether to choose the green or the gold road, with advocates on both sides ferociously arguing why only one of the two can be considered true open access. While this discussion surely has some merit, most researchers have to worry more about whether their efforts are recognized by the community than about what constitutes true openness. As Antonella Esposito writes in her insightful study on digital research practices:

Nonetheless their digital identities and online activities constituted a ‘parallel’ academic life that developed as a self-legitimating approach within a traditional mode of knowledge production and distribution. These tentative efforts were not acknowledged in their respective communities, struggling to become identifiable open research practices. Indeed, some interviewees called for clear institutional rules enabling sharing practices — especially in teaching and learning — that might slowly produce a general change of attitude and overcome current isolated initiatives by a few pioneers of open scholarship.

Most researchers are neither completely open nor completely closed. There is no black and white, but different shades of grey. Nonetheless, there are many researchers out there who make their publications available or put their source code online. In my opinion, it is necessary to get these researchers aboard, not to drive them away with endless debates about whether their research is “truly” open. Don’t get me wrong: it is important to have discussions about the optimal characteristics of open science, but not at the expense of making open science an elitist club that only a small minority satisfying all criteria can enter. From a community perspective, it is the commitment to openness that matters, and the willingness to promote this openness on editorial boards and program committees.

It seems that such a holistic view is gaining some traction: in a recent Web Science paper, R. Fyson, J. Simon, and L. Carr discuss the interplay between actors with regard to open access publications. Another good example of an inclusive approach is the Open Science Project here in Graz. The Open Science Project is a group of students, led by Stefan Kasberger, who try to do all of their study-related work according to open science practices. This means that they try to use open source software for their homework assignments and make the results publicly available. They go to great lengths in this effort, as they also try to persuade lecturers to follow their example and make their lecture notes openly accessible.

Draft Petition

At a recent open science meeting of the Austrian chapter of the OKFN, we started discussing an inclusive approach to open science. This motivated me to write a first draft of a petition, which you can find below. So my question is: would you sign such a petition? Do you think it is engaging, far-reaching, and well-worded enough? Let me know what you think in the comments, or join us at the OKFN Pad where you can help us collaboratively edit the text:

Science is one of the greatest endeavours of mankind. It has enjoyed enormous growth since its inception more than 400 years ago. Science has not only produced an incredible amount of knowledge, it has also created tools for communication and quality control: journals, conferences, and peer review, to name just a few. Lately, serious shortcomings of these established instruments have surfaced. Scientific results are often irreproducible and lead to ill-guided decisions. Retraction rates are on the rise. There have been many cases of high-profile scientific fraud.

In our view, all of these problems can be addressed by a more open approach to science. We see Open Science as making the scientific process and all of its outcomes openly accessible to the general public. Open Science would benefit science, because it would make results more reproducible and quality control more transparent. Open Science would also benefit society by including more people in the process and by sparking open innovation.

Besides the greater good, Open Science also benefits individual scientists. Research has shown that papers that are openly accessible are cited more often. If you share source code and data, you can get credited for these parts of your research as well. If you talk about your methodology and share it with others, this will bring attention to your work. The internet provides us with the technology to make Open Science possible. In our view, it is time to embrace these possibilities and innovate in the scientific process.

It is very important to note that we see Open Science as a community effort that can only work if we include as many people as possible. We know that it is not possible to open up entire work processes overnight. In our view, this is not necessary to contribute to an Open Science. The idea is to open up everything that you already can, and to work towards establishing open practices in your work and your community. You might already have papers that you are allowed to share in a personal or institutional repository. You might have source code or data that you can easily publish under a permissive license. And you might be sitting on a board or committee where you can bring open practices into the discussion.

If you agree with this point of view, you are encouraged to sign the declaration below.

  • I will open up resources that I have the legal right to open
  • I will work towards establishing open practices in my research
  • I will promote Open Science in my institution and my research community

If you would like to comment on the manifesto, or add your own ideas, please go to this OKFN Pad.


Open access logo by PLOS

Recently, I ordered a book via interlibrary loan. I entered the bibliographic details into an online form on my university library’s site. My library received the form, a librarian looked up the book in a catalogue and sent a request to a German library. There, another librarian collected the book and sent it to my university library. I got an email when the book arrived, and went to the library to collect it. I read the book and scanned the chapters relevant to me so that I had them for later reference. Then I returned the book to my library which sent it back to Germany.

Why did I tell this lengthy and slightly boring story? Because there are still scholars who think that digital publishing and open access are seriously harming science. Douglas Fields makes many problematic assumptions in this article, and I do not want to address all of them. Jalees Rehman and Björn Brembs have already written well-crafted replies that address most of the issues. I only want to focus on one statement which is outright wrong: the assumption that in the past, the peer review system allowed only world-class research to be published, whereas now open access journals are flooding the literature.

After all, science has been growing exponentially for the last 400 years. Even in 1613, an author called Barnaby Rich decried the increase in literature:

One of the diseases of this age is the multiplicity of books; they doth so overcharge the world that it is not able to digest the abundance of idle matter that is every day hatched and brought forth into the world. (cited after Price 1961)

As we can see, information overload is not a contemporary problem in science. In fact, the first journals were established because people could not read all the books that were published. Barnaby Rich himself authored 26 books in his lifetime, which is a high output even by today’s standards. This led two information scientists to declare the Barnaby Rich Effect (Braun and Zsindely 1985):

It’s always the other author(s) who publishes too much and “pollutes”, “floods”, “eutroficates” the literature, never me.

Keeping that enormous output in mind, it is impossible that all research published before the web was world-class research. Following Dr. Fields’ rationale, this would not only require that every piece of research published was world-class, but also that there were always three world-class researchers on hand to review the material. It is quite easy to see that this cannot have been the case. There is a good reason why measures to judge the quality of research output have become so popular. In a world with an ever-increasing number of researchers, journals, and papers, they are being used to separate the wheat from the chaff (how well they are doing that is subject to an ongoing discussion…).

Research into the quality of papers has shown consistently and over time that quality follows a power law. There are only a few publications that publish world-class research. The papers in these publications get cited often and thus have a high impact. The vast majority of papers, however, go by almost unnoticed. Therefore, it is not the fault of open access journals that so much research gets published: it is rather an artefact of the enormous growth of science. Bad research has always been published; in most cases, it just did not get any attention. And in the worst case, world-class research went unnoticed because it was simply not picked up, like Mendel’s lost paper.
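To spell out what a power law means here, a minimal sketch: if N(c) denotes the number of papers that receive c citations, then

```latex
N(c) \propto c^{-\alpha}, \qquad \alpha \approx 3
```

where the exponent is only indicative (values around 2 to 3 are commonly reported for citation distributions). A handful of papers attracts most of the citations, while the bulk receives hardly any.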

Taking publications into the digital age

Whatever the world of digital scholarship will look like, I expect that scholars will still judge the quality of papers based on certain metrics and quality indicators, and read only those that are worth reading to them. A whole movement called altmetrics is devoted to adapting scholarly metrics to the digital age. But apart from the issue of filtering, digital publishing offers many opportunities that the existing system doesn’t. With the content of papers being machine-readable, we can start analyzing and linking it in ways that were not possible before. One practical application of these analyses is recommendations. They might help to unearth that valuable piece of research that previously went unnoticed. Another application that I am currently involved in is using digital links between papers to create timely overviews of research fields.
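As a toy illustration of what machine-readable content makes possible (a sketch with invented abstracts, not the actual system behind any of the services mentioned), a few lines of standard tooling can already relate papers by their vocabulary and recommend the closest match:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented abstracts standing in for machine-readable paper content.
papers = [
    "open access publishing and peer review",
    "altmetrics measure scholarly impact on the web",
    "peer review and quality control in scholarly publishing",
]

# Weight terms by TF-IDF and compare papers by cosine similarity.
tfidf = TfidfVectorizer().fit_transform(papers)
similarity = cosine_similarity(tfidf)

# For each paper, recommend the most similar other paper.
for i, row in enumerate(similarity):
    row[i] = 0.0  # ignore self-similarity
    print(f"paper {i} -> most similar: paper {row.argmax()}")
```

Real recommenders are of course far more sophisticated, but the principle is the same: once the full text is openly machine-readable, links between papers can be computed rather than waiting for citations to accrue.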

But we can go even further than that: towards a truly open science. By making data and source code available, and linking them to the research results through an open methodology, we can make research better reproducible, and therefore weed out bad research more easily.

True, there are certain problems with respect to open access that need to be solved. Fields makes a valid point when he hints at predatory publishers that are only interested in profit and do not provide quality control. But scholars have already started to collect information about questionable open access publishers. He also talks about the loss of blinded reviews in open peer review. Anonymity gives reviewers the possibility to be honest, even towards the most renowned researchers and institutions. This issue needs to be addressed, as I wrote in an earlier blog post.

Nevertheless, I believe that digital publishing and open access are such a great opportunity for science that it is worth taking the risk. The open access version of the story at the beginning of this post is rather short: I look up the book in a search engine and download the PDF. Think about what difference that would make, not only in my case, but also for people in regions of the world where interlibrary loans do not exist and libraries cannot afford to pay for journal subscriptions. All of the other potentials aside, an open science would mean that billions of people get access to knowledge that was not available to them before.

Do you remember this blogpost from March 2011? Probably not. It contains a mindmap on open science in technology enhanced learning. I mentioned back then that we would use it as input for a publication. Almost two years later, I am very happy to announce that this paper has now been published in IJTEL. The postprint of the article is open access and can be found on Mendeley.

An intense process

In September 2010, Günter Beham and I came up with the idea for a visionary article on open science in technology enhanced learning. Flying back from EC-TEL in Barcelona, we discussed our growing concern with the irreproducibility and incomparability of TEL research. A lot has happened since then. In November, I posted a note on TELpedia looking for further collaborators. Soon thereafter, an enthusiastic Derick Leony joined us, and we started working on an abstract. We submitted this abstract in January 2011 and received encouraging feedback and important hints from two anonymous reviewers. After that, we created the mindmap, and I wrote the aforementioned blogpost to include more people in the spirit of Open Science. Wolfgang Reinhardt read the post and was immediately interested; thus, he became the last member of the author collective. We intensified our research and produced several drafts, accompanied by regular Skype calls and flashmeetings. We submitted a first version of our article to Inderscience in May 2011. The manuscript was reviewed by three anonymous referees. The reviewers had various requests for revisions, but we were accepted for publication, conditional on a successful re-review. We started incorporating the changes, broadening our initial focus on reproducibility and comparability to further benefits of Open Science. A final re-review in November 2011 eventually gave the green light for publication.

It was interesting to see how the open process drew people in and how that helped to grow and refine the article. In retrospect, I think that we could have been even more open, discussing our ideas beyond the mindmap in social networks and on Twitter. That might have helped to include people other than the original authors. Well, there is always a next time! Thanks to my co-authors for an awesome collaboration, and to the anonymous reviewers for their helpful comments. We do not see the publication as the end of the process; it is merely the start of a conversation. We want to invite you to download the paper and tell us what you think.

Abstract: In this paper, we make the case for an Open Science in Technology Enhanced Learning (TEL). Open Science means opening up the research process by making all of its outcomes, and the way in which these outcomes were achieved, publicly available on the World Wide Web. In our vision, the adoption of Open Science instruments provides a set of solid and sustainable ways to connect the disjoint communities in TEL. Furthermore, we envision that researchers in TEL would be able to reproduce the results from any paper using the instruments of Open Science. Therefore, we introduce the concept of Open Methodology, which stands for sharing the methodological details of the evaluation provided, and the tools used for data collection and analysis. We discuss the potential benefits, but also the issues of an Open Science, and conclude with a set of recommendations for implementing Open Science in TEL.

Update: Below is a presentation of the paper that I held at the Opencamp in Graz.

Citation
Kraker, P., Leony, D., Reinhardt, W., & Beham, G. (2011). The case for an open science in technology enhanced learning. International Journal of Technology Enhanced Learning, 3(6). DOI: 10.1504/IJTEL.2011.045454
