Posted by: Montola | May 26, 2009


I recently conducted some double-blind peer reviews for a Central Conference and a Quality Journal. Here’s my breakdown on the anonymity of the five full papers on pervasive games:

  1. I have worked with the authors, so they were obvious to me at first glance. Also recognizable to someone who has read earlier publications on that prototype.
  2. The first author is obvious. The paper talks about a way of thinking about pervasive games. I started by checking which earlier thinkers they refer to, and found anonymized references. References to domestic games only, which indicates nationality.
  3. I know two people who could have written this paper, probably together. I had a hunch early on, but photographs were the real giveaway. Not all people would have recognized the game from them, but I did.
  4. I could not identify the authors, but their research environment is clear: they mention some little-discussed prototypes that are known only to a few people.
  5. I do not know the authors. I do know their nationality, but that’s all. This paper did not refer to finished prototypes, but only to papers discussing them, hinting that they are new to the field.

I declined to review paper #1. In cases #2 and #3 I discussed my connection to the authors with the editor/chair, and since I had not collaborated with those people directly, I was asked to review anyway.

Why do I blog this?

It is an open secret that anonymity has problems in prototype studies. If you had to review a paper on Momentum, the first thing you would do is check how the paper connects to earlier papers by people like myself. By the time you have figured that out, you know whether the text is written by insiders with direct access to the prototype data.

Is this pretence of belief in double-blindness functional? We should think more about how to conduct peer reviews in a field where authors can be identified more often than not. This problem probably runs through the entire HCI community.

Someone should also write a guide on good anonymization practices. I’ve seen it all: People simply deleting author references from reference lists, leaving giant holes in alphabetical lists. People redacting text with black highlights in Acrobat in an easily reversible manner. People leaving comments in the papers with their names in them. Photographs with authors shown. Author names in document information. Acknowledgements left visible. Giveaway references to “my earlier Momentum research [anon]”.
