If you build it, will they come?

Last week, Tim Gowers published another post in the publishing debate, entitled "Abstract thoughts about online review systems".

There are, as usual, interesting thoughts both in the post and in the comments (including some very nice elitist ones), enough to keep the discussion going for another few weeks.

And yet I can't shake the feeling that this discussion will not lead to the creation of an online review platform -- no matter how much I wish otherwise.

If you build it, they won't come

Let me start by recalling an earlier experience with introducing technology to academia. While I was working on my PhD in Berlin, the university introduced a major change to its course-management technology. I was strongly in favor of this update and ended up having repeated arguments with a colleague who was not. This colleague pointed out that the new system relied heavily on a fundamental cultural change among faculty, while the university made no effort to facilitate that change. I vividly remember how angry I was about all this, since the problems were "clearly" the fault of all those old, lazy professors who didn't care enough about their students to adopt a new tool.

Long afterwards, I came to appreciate my colleague's argument for what it was -- precisely on point. (I don't know the current state of affairs, but while I was there, the system failed miserably for exactly this cultural reason.)

Coming back to the debate, I can see the appeal of designing an online review platform in the abstract. I don't doubt that the current discussion will lead to many interesting models for reviewing platforms -- that's what mathematicians are very good at. But this won't be enough.

Where is our culture of review?

The problem isn't that we lack a great system. The problem is that nobody reviews other people's work publicly (nor do we have other professionals to do it for us, as in the case of science journalism). I don't think that progress can be made with technology alone -- we must improve the culture of review.

It's not that we don't read other people's work, mind you. We do it all the time! And when we're serious about a paper, we essentially rewrite the damn thing: simplifying, expanding, hoping to get a new result out of the process.

And yet, you will find very few traces of this activity on the net. We waste our efforts by not speaking publicly about the reviewing we do day in and day out. From what I can see via mathblogging.org, one or two prominent researcher-bloggers do it (not Tim Gowers, I should add), as well as a few not-at-all prominent bloggers. In other words, public reviewing hasn't entered our collective culture at all; it remains a matter of isolated cases.

When Nature built it, nobody came

When it comes to post-publication peer review, the locus classicus is Nature's failed experiment in 2006. They tried an open reviewing process because everybody said they'd like to have more reviews to identify good and interesting work. As it turned out, nobody wanted to actually do the work, and Nature stopped the experiment after five months.

What is particularly strange about this failure is that many science bloggers were already spending considerable time writing about peer-reviewed papers.

If people do it, you can build it

In the summer of 2007, researchblogging.org went online. If you have ever visited a science blog, you might have come across its well-known badge (I finally got around to adding it here).

It's a very simple service, really. When you blog about a peer-reviewed work, you grab a bit of JavaScript to display a citation. In return, your post is aggregated at researchblogging.org, reaching a much wider audience. Conversely, people interested only in blog posts about peer-reviewed work can find all of these posts in one central location.
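To make the mechanism concrete, here is a purely hypothetical sketch (in TypeScript) of what such an embedded citation snippet could look like. The function name, markup, and marker class are invented for illustration; this is not researchblogging.org's actual code, only the general idea of embedding a machine-readable citation that an aggregator could pick up.

```typescript
// Purely illustrative sketch -- NOT the actual researchblogging.org snippet.
// The idea: a blogger embeds a small script that renders a formatted citation
// for the paper under discussion; an aggregator can then recognize and index it.

interface Citation {
  authors: string;
  title: string;
  journal: string;
  year: number;
  doi: string;
}

function renderCitationBadge(containerId: string, citation: Citation): void {
  const container = document.getElementById(containerId);
  if (container === null) {
    return; // nothing to render into
  }

  // Link the formatted citation to the paper via its DOI.
  const link = document.createElement("a");
  link.href = `https://doi.org/${citation.doi}`;
  link.textContent =
    `${citation.authors} (${citation.year}). ${citation.title}. ${citation.journal}.`;

  // Wrap it in a badge element that an aggregator could recognize by class name.
  const badge = document.createElement("div");
  badge.className = "peer-reviewed-citation"; // hypothetical marker class
  badge.appendChild(link);
  container.appendChild(badge);
}

// Example usage: somewhere in the post there is <div id="citation"></div>.
renderCitationBadge("citation", {
  authors: "A. Author and B. Blogger",
  title: "A result worth writing about",
  journal: "Annals of Examples",
  year: 2006,
  doi: "10.1000/xyz123",
});
```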

In short, it's post-publication peer review at its best -- grassroots,
decentralized, open, supporting the independence of authors and scientists, using the crowd for checks and balances.

(Even though I don't know the precise history, it looks to me as if f1000 has successfully taken post-publication peer review back to the commercial level. I'm not sure what this development will mean, since f1000 is relatively young.)

If you build it, and some come, is that enough?

Maybe I'm wrong. Maybe Tim Gowers will lead us, like a Greek hero, to a brave new reviewing platform. And maybe some top researchers will start writing reviews -- finally. And then, I wonder, what will happen?

Will it end like the Tricki, with a few high-quality items from our heroes that no mere mortal could match? Or will it follow MathOverflow, in need of a lower-level clone like math.SE?

tl;dr

I guess what I'm trying to say is: the longer this debate revolves around the perfect model, the longer it will take us to actually get our hands dirty and write reviews.

The experiments with platforms will continue for much, much longer -- the web is too young for us to pretend we will find the solution this decade (or even this century).

Let's focus on the real task: creating a culture of writing about each other's work online. An inclusive culture, a sustainable culture, a democratic culture.

I thank Sam for editorial feedback.