We officially announced the launch of Open Knowledge Maps, the visual interface to the world’s scientific knowledge, with a tweet and a message to our advisors late on May 11. We had soft-launched the site more than a week before that, and a bare-bones version of the PLOS visualization service had been online since Mozfest. The website was already getting some attention, and people were using the service on a daily basis. One of the highlights was a feature on Storybench in the very early days of the project. The idea behind the announcement was to get broader feedback on the search service and the overall idea behind Open Knowledge Maps. We had come a long way since the Mozfest days, and we thought that the website was ready for a wider audience.

What was to follow though went far beyond my highest expectations. Over the next 48 hours, we saw more than 350,000 hits on openknowledgemaps.org, generated by 12,000 visitors from all over the world.

What had happened? On the morning of the next day (May 12), I noticed that the tweet had gained a lot of traction, which had translated into activity on the site. Lots of people were using the search service, and a new map was created every few seconds. Much to our delight, the feedback was overwhelmingly positive. We started filing all the reactions, as many of them contained useful pointers for future improvements. At this point, our server was still humming along fine. Granted, you had to wait a few extra seconds on a search here and there, but nothing out of the ordinary.

During the day, news about Open Knowledge Maps spread to other channels, and at some point around noon CEST, we hit the front page of Hacker News. I immediately noticed a spike in all our parameters. We went from a map every few seconds to multiple maps being created each second. Search times began to rise, and we started to receive complaints about failed or endless searches. Around 3:30 pm, our server finally gave in. Hundreds of searches were running at the same time, each of them taking minutes to be processed. It was time to take the search service offline and to post our version of the “Fail Whale”. You can still find a version of this screen here.


While we frantically rewrote the search service to handle a larger number of requests (it was back a mere 60 minutes later), the stream of positive feedback continued to roll in. Up until today, Open Knowledge Maps has been mentioned in over 100 tweets, with the announcement tweet alone generating more than 22,000 impressions. You can find my favourite tweets in this collection. But it was not just Twitter – the news was shared on Facebook, blogs, and in discussion forums.

At one point, we were called “the Wikipedia of scientific knowledge”. It is clear that we still have a long way to go to really deserve this tagline, but it is encouraging that people see the potential of the idea. Needless to say, the positive feedback also sparked the ambition of the Open Knowledge Maps team of volunteers. We are currently busy improving the site and the service; the first results will be available in just a few weeks.

It was a fascinating day in the eye of the storm. I’d like to thank my awesome team for their outstanding work and our great advisors for their help in shaping Open Knowledge Maps. And I’d like to thank all of you out there for the love that you have shown for this project. It means a lot to me – Open Knowledge Maps is a project that is very close to my heart. Please continue providing feedback via social media, e-mail, or on our Github repositories. You can also sign up to the newsletter to stay on top of everything #OKMaps.

It is time to change the way we discover research, and we are off to a good start!


On May 1, I submitted an application for a Shuttleworth Foundation Fellowship. Started by Mark Shuttleworth in 2001, the Shuttleworth Foundation has enabled many amazing open knowledge initiatives, including ContentMine (Peter Murray-Rust) and Hypothes.is (Dan Whaley). The foundation has expressed the following vision:

“We would like to live in an open knowledge society
with limitless possibilities for all.” (Shuttleworth Foundation)

This vision aligns strongly with my own goal to enable everyone in society to benefit from scientific knowledge. My belief is that if we turn discovery from a closed, solitary activity into an open and collaborative one, we can bring the fruits of the open content revolution to everyone. To make this change possible, I want to create Open Knowledge Maps: a large-scale, collaborative system of open, interactive and interlinked knowledge maps for every research topic, every field and every discipline. For all the details, please see my application below – or watch the application video. As you would expect, the application was openly developed on Github.


1) Tell us about the world as you see it

A description of the status quo and context in which you will be working

Currently, the fruits of the open content revolution are unequally distributed. In the recent past, humanity has started to open up large amounts of scientific knowledge. Today, we can read over 90 million scientific articles on the web. But the tools for exploring and discovering this massive amount of content are seriously lacking. Most people rely on search engines, where they have to examine articles and their relationships by hand in order to get to the knowledge that they need. If you want to gain an overview of a research field, it will take weeks if not months to process all the necessary information, scattered over thousands of scholarly articles. There are more powerful tools that guide you through the literature – but they are proprietary and hugely expensive.

This is a problem for researchers, who spend a lot of time and effort on gaining and keeping an overview of scientific fields. But researchers have a community of peers that supports them in this task. People outside academia are usually on their own, and therefore often lost. Take the example of patients who would like to learn about the newest research on their illness. In the worst case, they don’t discover a lifesaving treatment, because the paper describing it was buried far down the results list.

There is a huge demand for better exploration and discovery tools, inside and outside of academia, but there are no large-scale attempts to provide these tools in an open manner. I am set to change that.

2) What change do you want to make in the world?

A description of what you want to change about the status quo, in the world, your personal vision for this area

To create a visual interface to the world’s scientific knowledge that can be used by anyone in order to revolutionize the way we discover research.

The basis for this visual interface is formed by so-called knowledge maps, a powerful tool for exploring a research field. Knowledge maps show the main areas of the field at a glance, along with the papers related to each area. By overlaying further connections between papers, e.g. references, we can also highlight relationships between areas. This makes it possible to infer connections between research results that may otherwise have remained unknown. Knowledge maps thus enable the exploration of existing knowledge, and the discovery of new knowledge.

My goal is to provide Open Knowledge Maps: a large-scale, web-based system of open, interactive and interlinked knowledge maps for every research topic, every field and every discipline. Around these maps, I want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery: researchers, students, journalists, librarians, practitioners and citizens. I want to enable people to guide each other in getting to the knowledge that they need, by collaboratively annotating and modifying the automatically created maps. I also want to enable users to create and contribute their own maps – achieving layered overviews of the world’s scientific knowledge including the perspectives of different epistemic cultures, geographic regions etc.

3) What has prevented this change from happening?

Describe the innovations or questions you would like to explore during the fellowship year

I want to explore how to automatically create knowledge maps on a massive scale and how to design an inclusive and sustainable space for collective knowledge mapping that brings together the individuals and communities involved in exploration and discovery.

In the recent past, open access has dramatically grown with up to 50% of new articles being published open access. Even the situation regarding legacy content is changing, with the ContentMine liberating millions of facts from closed sources. In my PhD, I created an open source, web-based knowledge mapping software called Head Start that builds on top of this open content (we further developed it during my subsequent Panton Fellowship). Head Start is capable of automatically producing knowledge maps from a variety of data, including text, metadata and references. The approach has received a lot of positive feedback from users and experts alike, and multiple awards.

Many people are currently tackling exploration and discovery of scientific knowledge on their own. The results of their efforts are usually not shared; they become visible only later as references in a publication or as reading lists. I want to explore how to bring different individuals and communities together, for example how to best connect patients, researchers and medical librarians to collaboratively map the newest research on a certain disease and how to enable them to openly share their efforts for the benefit of others affected by this disease.

4) What are you going to do to get there?

A description of what you actually plan to do during the year

Further develop the existing mapping software: In January, I published a Call for Collaborators that brought together the Open Knowledge Maps team. Jointly, we created an open roadmap on how to develop Head Start into a system of living, crowd-sourced guides to research fields. We will connect Head Start to over 90 million scholarly articles to create overviews of all fields of research. The maps will be enriched with facts extracted from full text and made available on openknowledgemaps.org. There, they can be interactively explored, collaboratively annotated and modified in a Wikipedia-style editing process.

Create and implement a community strategy: My approach has always been to involve users at every step of the process, taking usability and cognition into careful consideration. I have therefore initiated an advisor programme to guide the development of Open Knowledge Maps in a human-centered design process. We will also review social factors that prevent people from using open knowledge systems, and explore ways to address these concerns. Another concrete action is to establish mapping parties (similar to those in the OpenStreetMap project), where people get together to jointly map an underrepresented research field, for example a neglected disease.

Formulate a long-term plan: My goal is to develop Open Knowledge Maps into a building block of the open knowledge society. Therefore, I will address points such as a legal entity and a sustainable funding stream.

5) What challenges or uncertainties do you expect to face?

If we build it, will they come? This is a challenge for any socio-technical system, which I will address by following best practices as detailed above: human-centered design and the development of a community strategy. The cold start problem will not be an issue as a massive amount of maps will be pre-computed and ready for exploration.

Establishing a strong and diverse community: I will face this key challenge by leveraging and expanding the existing advisor community, relying on experience that I have gained as one of the founders of Barcamp Graz, and as a coordinator of the open science WG of Open Knowledge AT. In both cases, we have established strong communities based on openness and inclusiveness.

Technical challenges related to building a large-scale system: In this respect, I will draw on my long experience in software engineering (15 years, thereof 7 years as a project manager), and the experience of my team. It will be crucial to address scalability from the start and build it into the core architecture. We will use a distributed agile process and adopt strategies of successful open source projects.

Launching a self-sustainable non-profit organization: My involvement with non-profit organizations in the past 7 years – running a smaller organization, Knowledge Management Forum Graz, for 2 years – has made me conscious of the challenges that are connected to that. As with the other challenges, I plan to gather advice from the Shuttleworth community.

6) What part does openness play in your idea?

Openness is at the very core of my idea. Open Knowledge Maps strives to be a building block of the open knowledge society by openly sharing data, source code, and content that is being created. The code will be made available on Github under the license of the existing project (LGPL v3). The visualizations will be released under CC BY – with the exception of the contained content, which of course retains its original license. The underlying knowledge structures will be mapped to Wikidata concepts and can be exported in various open formats under CC0, so that they can be easily re-used.

We partner with existing open initiatives, including ContentMine, Hypothes.is, rOpenSci, and the Internet Archive Labs. We will actively involve our partners, advisors and users to seek feedback, input, and pointers for further collaboration throughout the project. My goal is to reuse as much of the existing ecosystem as possible. To achieve this, the project progress is openly shared with the world, starting with this proposal which is hosted on Github. The development will also take place on Github. The concrete targets for developing the system will be published as issues in our repositories.

Openness will also play an important role in all social activities, which will be organized in the spirit of other open knowledge events. Mapping parties, for example, will be free of charge and they will be open to everyone interested in collaborative knowledge discovery.

A little more than a month ago, I posted an Open Call for Collaborators for an Open Science Prize Proposal on Discovery on this blog and to various open science mailing lists. The call has been very fruitful, and I am happy to announce that we have submitted a proposal. In the spirit of open science, you can find the full proposal and the supplementary materials on Github. See below for the executive summary and our video.

Team Open Discovery: Peter Kraker, Mike Skaug, Scott Chamberlain, Maxi Schramm, Michael Karpeles, Omiros Metaxas, Asura Enkhbayar & Björn Brembs

Executive Summary: Discovery is an essential task for every researcher, especially in dynamic research fields such as biomedicine. Currently, however, there are very few discovery tools that can be used by a mainstream audience, the most notable being search engines. The problem with search engines is that they present resources in a linear, one-dimensional way, making it necessary to sift through every item in a list. Another problem is that the results of the traditional discovery process are usually closed. Therefore, the discovery process is repeated over and over again by different researchers, taking away valuable time and resources from the actual research. To solve these challenges and bring the discovery process into the open science era, we propose BLAZE, the comprehensive open science discovery tool. BLAZE will leverage the existing open science ecosystem to provide multi-dimensional topical maps of research fields, involving not only publications, but also datasets, presentations, source code and media files. BLAZE will provide a single, intuitive interface for researchers to explore, edit and share maps. The edit history of a map will be preserved to allow Wikipedia-style collaboration. The maps themselves will be open, so users can embed them on their own websites and export the structure into other open science tools. Opening the discovery process will enable researchers to reuse maps, saving valuable time and effort because they can build on top of each other’s work. Furthermore, they will be able to identify collaborators long before the research is usually communicated. There is an existing, early-stage prototype for BLAZE, and with the Open Science Prize, we plan to develop this prototype into a comprehensive tool. BLAZE will show the enormous potential of open science for innovation in scholarly communication by providing a structured, open and multi-dimensional approach to discovery.

I am currently preparing a proposal for the Open Science Prize in the field of open discovery, and I am looking for motivated collaborators who want to join the project and change the way we do discovery. Here is the current summary:

Discovery is an essential task for every researcher, especially in dynamic research fields such as biomedicine. Currently, however, there are only a limited number of tools that can be used by a mainstream audience. We propose BLAZE, an open discovery tool that goes far beyond the functionality of search engines and social reading lists. The tool builds on PubMed Central and other open content sources and will provide topical maps for a given list of papers, e.g. a search result or a journal volume. The maps are created automatically using full texts to calculate similarities and derive topical structures among papers. Furthermore, they will be enriched with features that are extracted from the papers (e.g. all papers with the same species are highlighted). BLAZE will enable users to do their discovery in a single interface. Users can interact with the maps, explore different topical areas, filter and read individual papers in the same interface. An edit mode will be provided for users to make changes to the maps and to introduce new papers and topical areas. Users can openly share maps with others and export the structure in various open formats. BLAZE will be based on the existing open source visualization Head Start, and will make extensive use of the digital open science ecosystem, including, but not limited to, open content, content mining services, open source solutions, and open metrics data. With this tool, we want to show the potential of open science for innovation in scholarly communication and discovery. In addition, we believe that this tool will increase the visibility of and awareness for open content and open science in general.
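The similarity calculation at the heart of the mapping step can be illustrated with a small sketch. This is hypothetical Python for illustration only (the actual backend builds on R content mining packages such as rOpenSci and tm); it uses plain term-frequency vectors and cosine similarity, whereas a real pipeline would add TF-IDF weighting, stemming and stop-word removal.

```python
# Sketch: topical similarity between papers, computed from their full texts
# with bag-of-words vectors and cosine similarity.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

fulltexts = [
    "open discovery tools for exploring biomedical research papers",
    "discovery tools help researchers explore biomedical papers",
    "cooking pasta with tomato sauce and fresh basil",
]
vectors = [Counter(text.split()) for text in fulltexts]

# Papers on related topics score higher than unrelated ones,
# which is what the topical layout of the map is derived from.
sim_related = cosine(vectors[0], vectors[1])
sim_unrelated = cosine(vectors[0], vectors[2])
```

Pairs of papers with high similarity end up close together on the map, so clusters of mutually similar papers become the topical areas.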

A first draft is also available.

I am looking for backend and frontend web developers who code in JavaScript and/or PHP and R. We will be extending an existing tool for creating web-based knowledge domain visualizations that uses D3.js on the frontend, and R content mining packages on the backend, in particular rOpenSci and tm, so you should have experience with at least one of these libraries. A background in biomed would be nice but it’s not mandatory.

Everything about this project will be open: we will prepare the proposal in the open, the development will take place on a public Github repository, and all project outputs will be published under an open license.

So if you want to join the project and create an awesome open science tool together with me, please send an e-mail to opendiscovery@gmx.at outlining which part of the project interests you most, what you’d be able to contribute and how many hours you could devote to the project over the coming months. Please also include a link to your Github repository. It would be great if you could let me know whether you are a citizen of, or permanent resident in, the United States (US), as we will need to have at least one team member who satisfies this criterion. I am looking forward to your messages!

With “Ich bin Open Science!”, we want to raise public awareness for open science in Austria and beyond. The project, a collaboration between Know-Center and FH Joanneum, has been submitted to netidee 2015. In the video (German only for the moment) we explain the project idea, and you can see first testimonials who lend a face to open science. Why are you committed to openness in science and research?

Note: This is a reblog from the RDA Europe website.

From March 8 to 11, I spent several insightful days in Southern California – at the 5th Plenary of the Research Data Alliance in San Diego, to be precise. The RDA, for those of you not familiar with the organization, is a global consortium of individuals and organizations with a common goal: building the social and technical bridges that enable open sharing of data in research. Its vision is: “Researchers and innovators openly sharing data across technologies, disciplines, and countries to address the grand challenges of society.” The organization has high-level backing: among its supporters are funding heavyweights NSF, NIH, and the European Commission.

My stay in San Diego was made possible by the generous financial support provided by RDA Europe’s Early Career Programme. I applied as I was especially interested in the Data Citation working group headed by fellow Austrian Andreas Rauber and his co-chairs Ari Asmi and Dieter van Uytvanck. I was closely following the activities of this group throughout the meeting and acted as a scribe for the group in the working group meeting, their presentation in the plenary and the Data Publishing interest group. The Data Citation WG has come up with a way to make dynamic and highly volatile datasets and parts thereof citable. Data citations of this kind are very important for the reproducibility of science, and they are not supported by current solutions. I was very impressed with the results of the working group – and by the pilots and workshops that are being carried out by NERC, ESIP, CLARIN, and NASA. If I have sparked your interest, I’d encourage you to check out the website of the WG and join the group.

In a way, the Data Citation WG embodies the RDA’s spirit: solution-oriented, focused and implementation-driven. Nevertheless, there was also plenty of room for high-level talk at the meeting. I was impressed by the keynote by Stephen Friend of SAGE Bionetworks (check out the recording of his and other talks here). He provided a look into a data-driven future in biomedical research, illustrated by a number of projects that have turned heads beyond the research community. These include Accelerating Medicines Partnership in Alzheimer’s Disease (AMP-AD) and Apple’s ResearchKit.

Bibliometrics and altmetrics, which are two of my main research foci, were also discussed in the course of the Plenary; most notably during the Publishing Data Bibliometrics WG of course, but also in the Publishing Data Interest Group. There, I presented two recent studies that I had been part of, dealing with the distribution of data citations and altmetrics. More information can be found in the accompanying slides.

I also contributed to the event by presenting a poster on the overview visualization of scholarly materials that I have developed in my PhD. More information on that in the poster below and in this blogpost. Discovery was also the main topic in the Data Description Registry Interoperability (DDRI) session. Amir Aryani presented the Research Data Switchboard, which connects datasets over repositories using semantic relations. Can’t wait to try this one out myself!

The RDA meeting was a unique experience. I got to meet many fascinating people, and it was awesome to see just how many of them are working towards promoting and enabling the open sharing of research data. I will certainly follow the work of the working groups that I participated in and I will try to contribute as much as I can – and I would encourage everyone interested in open research data to do the same!


Note: This post is a reblog from the LSE Impact Blog.

In Douglas Adams’s famous novel The Hitchhiker’s Guide to the Galaxy, an unsuspecting man called Arthur Dent is lifted onto a spaceship just before earth is demolished by intergalactic bureaucrats. Together with a group of interstellar travellers (including, amongst others, the President of the galaxy), he then embarks on a journey through the universe to unravel the events that led to the destruction of earth. To help Arthur better understand the new surroundings he is thrown into, he is handed a copy of The Hitchhiker’s Guide to the Galaxy, a multimedia guidebook that offers wisdom and advice on all topics of interest in the universe.

Starting out in a new scientific field can feel very similar: you are faced with a new world that you have to make sense of. Unfortunately, the knowledge needed to understand this new world is not readily structured and summarized in one handy guide, but scattered over millions of scientific articles. To make matters worse, you have no idea which articles belong to the field that you are interested in and which of them are actually important. For many researchers, the starting point in their quest to conquer an unfamiliar knowledge domain is to turn to their personal favourite search engine, type in the name of the field of interest and start reading at the top of the list. Once you have read through the first few articles (usually highly cited review articles), and followed relevant references, you develop an idea of important journals and authors in the field and adapt your search strategy accordingly. With time and patience, a researcher can thus build a mental model of a field.

The problem with this strategy is that it can take weeks, if not months before this mental model emerges. Indeed, in many PhD programs, the first year is devoted to catching up with the state-of-the-art. There is also a lot of reading and summarizing involved, but searching for relevant literature usually accounts for a large chunk of the time. And even with the most thorough search strategy, the probability that you are going to miss out on an important piece of prior work is rather high.

Knowledge domain visualizations are another means of getting an overview of a research field. An example of such a visualization is given above. Knowledge domain visualizations show the main areas in a field, and assign relevant articles to these main areas. Hence, an interested researcher can see the intellectual structure of a field at a glance, without performing countless searches with all different sorts of queries. An additional characteristic of knowledge domain visualizations is that areas of a similar subject are positioned closer to each other than areas of an unrelated subject. In the example, “Pedagogical models” is subject-wise closer to “Virtual learning environments” than to “Psychological theories”. Thus it is easy to find areas related to one’s own interests. Granted, even with a knowledge domain visualization in hand, you would still need to do the reading. But it would certainly save you a lot of time that you would otherwise spend on searching, indexing and structuring.

Image credit: Maxi Schramm. Public domain.


Knowledge domain visualizations can not only be created on the level of the individual research article. Below you can see a visualization by Bollen et al. (2009) of all of science. The nodes in the network represent research journals, and the different colors designate different disciplines. Even though the idea of knowledge domain visualizations has been around for quite some time, and despite their obvious usefulness, they are not yet widely available. Part of the reason may be that in the past, the data needed to construct these visualizations was only available from a few rather expensive sources. Another part of the reason may be that there has been an emphasis on all-encompassing overviews. While these provide valuable insights into the structure of science as a whole, they are usually not interactive and provide little value in day-to-day work, where you want to be able to zoom into specific publications. There are several applications out there that can be used to create one’s own overview, but they can usually only be operated by users who are information visualization specialists.

Image credit: Bollen J, Van de Sompel H, Hagberg A, Bettencourt L, Chute R, et al. (2009) Clickstream Data Yields High-Resolution Maps of Science. PLoS ONE 4(3): e4803. Creative Commons Attribution 3.0 Unported.


In our work, we therefore aimed at creating an interactive visualization that can be used by anyone. As a first case, we chose to visualize the field of educational technology, as it represents a highly dynamic and interdisciplinary research field. As described in a recently published paper in the Journal of Informetrics (Kraker et al. 2015), the visualization is based on a novel data source – the online reference management software Mendeley. The articles for the visualization were selected from Mendeley’s research catalog, which is crowd-sourced from over 2.5 million users from around the world and offers structured access to more than 100 million papers.

One of the most important steps when creating a knowledge domain visualization is to decide which measure defines the similarity between two articles. The measure determines where an article gets placed on the map and how it is related to other articles. Again, we used Mendeley data to tackle this issue. Specifically, we used co-readership information. “So what is this co-readership exactly?” you may ask. Mendeley enables users to store their references in a personal library and share them with other people. The number of times an article has been added to user libraries is commonly referred to as the number of readers, or in short readership. In analogy to that, we talk about the co-readership of documents when they are added to the same user library. When Alice adds Paper 1 and Paper 2 to her user library, the co-readership of these two documents is 1. When Bill adds the same two papers, the co-readership count goes up to 2, and so on. Our assumption was that the higher the co-readership of two documents, the more likely they are of the same or a similar subject. It’s not unlike two books that are often borrowed together from a library – there is a good chance that they address related topics. And indeed, our first analyses indicate that our assumption is valid.
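The Alice-and-Bill example above can be turned into a tiny counting procedure. The following is a simplified, hypothetical Python sketch (not the production code): it tallies, for every pair of papers, how many user libraries contain both.

```python
# Sketch: compute co-readership counts from user libraries.
from itertools import combinations
from collections import Counter

def co_readership(libraries):
    """Count how often each pair of papers appears in the same user library.

    `libraries` maps a user to the set of papers in their library.
    """
    counts = Counter()
    for papers in libraries.values():
        # Every unordered pair of papers in one library co-occurs once.
        for pair in combinations(sorted(papers), 2):
            counts[pair] += 1
    return counts

# The example from the text: Alice and Bill both add Paper 1 and Paper 2.
libraries = {
    "Alice": {"Paper 1", "Paper 2"},
    "Bill": {"Paper 1", "Paper 2", "Paper 3"},
}
counts = co_readership(libraries)
print(counts[("Paper 1", "Paper 2")])  # 2
```

The resulting pairwise counts are exactly the kind of similarity matrix from which a map layout can then be computed.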

The cool thing is that once you have settled on a similarity measure, the process of creating the map can be highly automated. We adapted procedures for assigning papers to research areas and for situating them on the map. We also put a heuristic in place that tries to guess a name for each area, using the web-based text mining services OpenCalais and Zemanta.
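Purely as an illustration of these two automated steps (the actual Head Start pipeline is written in R and uses its own procedures), the idea can be sketched with off-the-shelf ordination and clustering: multidimensional scaling situates the papers on a 2-D map so that similar papers end up close together, and a clustering algorithm then groups them into research areas. All names and numbers below are invented toy data.

```python
# Illustrative sketch, not the actual Head Start code:
# 1) place papers on a 2-D map from pairwise dissimilarities via
#    multidimensional scaling (MDS),
# 2) group them into "research areas" with k-means clustering.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# Toy co-readership counts for four papers: 0/1 and 2/3 are read together.
co_read = np.array([
    [0, 5, 0, 0],
    [5, 0, 0, 0],
    [0, 0, 0, 5],
    [0, 0, 5, 0],
])

# Turn similarity counts into dissimilarities (zero on the diagonal).
dissim = 1.0 / (1.0 + co_read)
np.fill_diagonal(dissim, 0.0)

# Step 1: a 2-D layout that preserves the dissimilarities as well as possible.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Step 2: cluster the layout into research areas.
areas = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
```

With this toy input, papers 0 and 1 land in one area and papers 2 and 3 in the other; in the real system, a naming heuristic would then label each area.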

The resulting knowledge domain visualization can be seen below. The blue bubbles represent the main areas in the field. The size of the bubbles signifies the number of readers of publications in that area. The closer two areas are in the visualization, the closer they are subject-wise. An interactive version is also available; once you click on a bubble, you are presented with popular papers in that area. The dropdown on the right displays the same data in list form. Just go to Mendeley Labs (http://labs.mendeley.com/headstart) and try it for yourself! The source code is available on github: http://github.com/pkraker/Headstart


Apart from the fact that you can get a quick overview of a field, there are many other interesting things that you can learn about a domain from such a visualization. Fisichella and his colleagues even argue that mappings like the one above might help to overcome the fragmentation in educational technology by building awareness among researchers of the different sub-communities. There may be some truth to this assumption: when I evaluated the map with researchers from computer science, they discovered research areas that they did not know existed. One example is Technological Pedagogical Content Knowledge, which is a conceptual framework emanating from the educational part of the research community.

Another interesting possibility is to study the development of fields over time [1]. When I compared the map to similar maps based on older literature (e.g. Cho et al. 2012), I learned a lot about the development of the field. Whereas learning environments played an important role in the 2000s, issues relating to them have later split up into different areas (e.g. Personal Learning Environments, Game-based Learning). You can find further examples in the paper describing the full details of the evaluation, which is still under review. You can find a pre-print on arXiv.

Given the enormous amount of new knowledge that is produced each and every day, the need for better ways of gaining – and keeping – an overview is becoming more and more apparent. I think that visualizations based on co-readership structures could provide this overview and serve as universal up-to-date guides to knowledge domains. There are still several things that need fixing – the automated procedure for example is not perfect and still requires manual interventions. Furthermore, the characteristics of the users have a certain influence on the result, and we need to figure out a way to make users aware of this inherent bias. Therefore, we are currently working on improving automatization techniques. Algorithms, however, will never be correct 100% of the time, which is why we are also experimenting with collaborative models to refine and extend the visualizations. After all, an automated overview can never be the end product, but rather a starting point to discovery.

Kraker, P., Schlögl, C., Jack, K., & Lindstaedt, S. (2015). Visualization of co-readership patterns from an online reference management system Journal of Informetrics, 9 (1), 169-182 DOI: 10.1016/j.joi.2014.12.003. Preprint: http://arxiv.org/abs/1409.0348

[1] Educational technology experts will notice that some of the newest developments in the field, such as MOOCs or learning analytics, are missing from the overview. That is due to the fact that the data for this prototype was sourced in August 2012 and is therefore almost 2.5 years old. The evaluation was conducted in the first half of 2013.
