Web 2.0

Recently I joined the TEAM Project, which focuses on research networks on the web. The project deals with issues like recommendation, text disambiguation, and metadata validation. My part will be a bit different: I will take a look at how a research field is represented in such research networks.

So far, academic fields have been analyzed using the metadata that comes with published articles. That includes, among others, co-authorship, categories, keywords, and, most importantly, citations. With this kind of metadata, it is possible to map out a research field from the position of the authors. Now, employing user-generated data from research networks, it is possible to look at a field from a whole new viewpoint: that of the reader.

You might ask: why is that interesting? Well, metadata from articles only ever gives you part of the story. Co-citation and co-authorship analysis are surely sound ways to look at a field; but what if there are two groups of authors in different fields working on the same topic that just never publish together and never cite each other? In that case you will not get the connection between them. Most probably they will be using different language, so text analysis won’t help either. In come the readers: they might have identified that the authors are working on the same topic despite all the issues mentioned above. Furthermore, they might have grouped them together or used the same tags to describe their articles. If we analyze these groups and tags, we can find the connection, thus extending the field beyond its original borders.
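To make the idea a bit more concrete, here is a minimal sketch of how reader-generated tags could link two otherwise unconnected authors. The data, author names, and similarity measure (Jaccard overlap of tag sets) are my own illustrative assumptions, not part of the project:

```python
# Hypothetical reader-generated data: the tags readers attached
# to articles by each author (authors and tags are made up).
tags_by_author = {
    "author_a": {"sensemaking", "visual analytics", "tagging"},
    "author_b": {"sensemaking", "visual analytics", "hci"},
    "author_c": {"compilers", "type systems"},
}

def tag_overlap(a, b):
    """Jaccard similarity of the tag sets of two authors."""
    ta, tb = tags_by_author[a], tags_by_author[b]
    return len(ta & tb) / len(ta | tb)

# Authors A and B never cite each other, but readers tagged them alike:
print(tag_overlap("author_a", "author_b"))  # 2 shared tags out of 4 -> 0.5
print(tag_overlap("author_a", "author_c"))  # no overlap -> 0.0
```

Even such a crude measure would surface a reader-perceived connection between authors A and B that citation or co-authorship data alone would miss.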

That is not all; other interesting questions include: How are articles shared among researchers, and what does that say about interdisciplinarity in a field? Are the articles that are often read the ones that are often cited? As you can see, I am pretty enthusiastic about this. I could go on about why I think readership analysis is a good idea, but I am more inclined to get some early feedback: What other issues would be interesting to look at? What problems do you see with this kind of analysis?

I have previously talked about the IJTEL Young Researcher Special Issue on Ground-breaking Ideas in TEL here. This special issue is directed at junior researchers in TEL, encouraging authors to write about the vision behind their research.

Now, I would like to alert you to two important updates:

  1. An ideas clinic has been created on TELpedia. Authors can submit an outline of their contribution capturing the main idea in a brief article.
  2. In addition to that, a flashmeeting will be organized on 15/11/2010 to provide feedback on these initial ideas, as well as to offer a platform for cross-fertilization among authors. Please join our group on TELeurope to get all the relevant details for this event.

Please also note that the Call for Reviewers ends on November 20. Looking forward to connecting with you!

Our contribution to the Research 2.0 Workshop at EC-TEL 2010. Get a pre-print of the paper here.

Feeding TEL: Building an Ecosystem Around BuRST to Convey Publication Metadata
Peter Kraker, Angela Fessl, Patrick Hoefler and Stefanie Lindstaedt.

Abstract. In this paper we present an ecosystem for the lightweight exchange of publication metadata based on the principles of Web 2.0. At the heart of this ecosystem, semantically enriched RSS feeds are used for dissemination. These feeds are complemented by services for creation and aggregation, as well as widgets for retrieval and visualization of publication metadata. In two scenarios, we show how these publication feeds can benefit institutions, researchers, and the TEL community. We then present the formats, services, and widgets developed for the bootstrapping of the ecosystem. We conclude with an outline of the integration of publication feeds with the STELLAR Network of Excellence and an outlook on future developments.
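To give a feel for what a consumer of such a publication feed might do, here is a small sketch that parses one item of a semantically enriched RSS feed. The feed snippet is an illustrative mock-up using Dublin Core elements, not necessarily the actual BuRST format described in the paper:

```python
import xml.etree.ElementTree as ET

# Illustrative mock-up of a semantically enriched publication feed;
# the dc:* elements follow Dublin Core, not the actual BuRST schema.
FEED = """<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Publications of Example Institute</title>
    <item>
      <title>Feeding TEL: Building an Ecosystem Around BuRST</title>
      <dc:creator>Peter Kraker</dc:creator>
      <dc:creator>Angela Fessl</dc:creator>
      <dc:date>2010</dc:date>
    </item>
  </channel>
</rss>"""

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

root = ET.fromstring(FEED)
for item in root.iter("item"):
    title = item.findtext("title")
    authors = [creator.text for creator in item.findall(DC + "creator")]
    print(title, "-", ", ".join(authors))
```

An aggregation service or visualization widget, as sketched in the paper, would sit on top of exactly this kind of extraction step.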

If you have followed me on Twitter lately, you could not help but notice that I am conducting a series of focus groups on Web 2.0 practices among researchers. In the last group, we did not get to discuss a very interesting topic – career planning, that is – and participants were quite eager to talk about it in a follow-up. Since a second face-to-face meeting was not feasible, I decided to set up a Google Wave for this purpose. I hadn’t used Wave much before, but I considered it especially suitable in this case, primarily for three reasons:

  • Wave allows you to have a structured discussion with different “threads” (indented replies). Thus, I could ask multiple initial questions without losing overview and confusing participants.
  • Wave bridges the gap between synchronous and asynchronous communication. You can have IM-like chat, because you see everything a person types, but the posts are all persisted in one place, and participants are notified of new content in the wave.
  • Wave has a number of extensions which add a lot of functionality that is interesting for a group discussion. There is a voting extension, a mindmapping tool, and many others. (Unfortunately, I was not able to use extensions – see below for more details.)

Getting people into Wave

Since most of the participants had not used Wave before, I had to invite them. A superfluous step since last week, I know, but it was quite challenging: I did not get a notification when someone accepted my invitation, and not everyone who accepted was shown in my contacts. Moreover, I had trouble finding people on Google Wave without their exact Wave or Gmail address. These issues may have been fixed by now, though.

Once I got everyone onto the wave, however, the problems stopped and the discussion started to flow. Wave is pretty much self-explanatory. I had posted all the necessary visualizations from the discussion (mostly flipcharts), plus three blips (Google lingo for posts) with initial questions, to which I asked people to provide indented replies. An interesting side note: you can make an indented reply to every blip but the last one in a thread, by hovering over its bottom border and clicking “Insert reply here”. On the last blip, though, the same procedure only gives you “Continue this thread” (or “Click here to reply”), which generates a reply on the same level as the preceding blip – unless, of course, you choose “Indented reply” from a dropdown menu within the blip. This confused all participants; even the more experienced ones consistently failed to provide indented replies to the last blip in a thread, which somewhat disrupted the structure of the discussion.

Even though the wave did not grow exceptionally large (about thirty blips, containing mostly text), the application would get unbearably slow on rare occasions. Apart from that, the discussion went pretty well; posts ranged from longer contributions of a few paragraphs to chat-like comments such as “I agree”. Notifications about new posts arrived rather reliably, listing the wave(s) that had been updated together with a (supposed) excerpt from the newest post. Most of the time, however, this excerpt contained text from a post that I had already read.

Extensions and data in the wave

One of the main drawbacks for me was that I could not use bots and extensions in the wave. Why? The reason is simple: I had promised participants that all their contributions would remain with me, and that I would only release anonymised transcripts to third parties with their explicit consent. If you add gadgets or bots to a wave, however, they can read the entire content of the wave (as detailed in Google’s privacy policy for Wave). Since most of the extensions are not developed by Google, I would have had to check every provider’s privacy policy, which (1) does not exist most of the time, and (2) even if it does, I would still have to decide whether I can trust that provider.

This is also true for the “Ferry” extension, which exports waves to Google Docs. Consequently, I had to manually copy and paste the content into a Word file and restore all the formatting that was lost, e.g. bullet points and indentations. This was still a lot faster than transcribing the same content from video, but a Wave-native export that overcomes this problem would have been appreciated.


All in all, I would say the experience was quite enjoyable. I would use Wave again any day for a group discussion of this size, but I would be more wary of using it for a whole focus group, or for discussions with more participants and longer threads. There, the issue of indented replies and the missing native export would matter more.
