The recent publishing debate -- Nisan's posts
The original post was getting longer and longer so I split it up.
To refresh your memory,
- Nov 2 Algorithmic Game-Theory/Economics // Noam Nisan: The problem with journals
- Nov 3 Algorithmic Game-Theory/Economics // Noam Nisan: The good things about journals
I read these posts only after Gowers's post and a few others, but I found them so interesting that I decided to write about them next.
Noam Nisan's posts
The first is a short post replying to a "nothing is wrong" comment on Gowers's first post. Nisan makes an excellent point about the major functions that journals are supposed to fulfil: dissemination, verification and attention allocation. These used to be their major selling points, but I think he's right when he points out that journals are currently failing at all three.
Let me point out two details: first, Nisan mentions the increase in "worthless" papers, which I think is a critical issue. Second, just like Tim Gowers, he mentions the lack of recognition for books, surveys and other "non-papers", as he calls them. If you read my first post, you won't be surprised that this resonates strongly with me.
Nisan's second post is equally interesting, pointing out what he thinks is good about journals: the community behind them, the idea of peers reviewing peers, and the hierarchy.
Another detail I'd like to highlight: Nisan points out that there is no incentive to referee well in the current system, since refereeing is not valued -- it's something I'd add to the list of "non-papers" that we need to recognize (in fact, I already have).
These two posts were the reason to scrap my first draft -- many things I wanted to say, Noam Nisan wrote up quickly and elegantly.
The first post is strong and clear: journals are failing at the very core of their mission. Just like Gowers, Nisan makes a small point in passing that I would highlight, especially since he seems to abandon it later: the biggest trouble with our current system is that we value nothing but "new" results. I believe this development is damaging mathematics.
There's pressure to publish as many papers as you can, no matter the quality. That is, it's not the journal system that is broken, but the function the mathematical research community has assigned to it: evaluating researchers on the academic job market. Because it is the only measure we accept, we are stuck in an intellectual monoculture. And this monoculture leads to a mass of poorly written, "get it past the referee" papers, designed only to game the hiring system.
I think that if we want to reduce the negative aspects of the publishing system, we must find a way to reduce this pressure to publish as many "new" results as possible. We need more alternatives, outside both the "new" scope and the "paper" scope.
The second post really confused me. I'm fully on board with the first two points Nisan makes: any future project should include as many researchers as possible, and the best way to verify results is to have as many researchers as possible look at them.
But then comes the weirdest thing: a celebration of the oligarchical structure of scientific communities.
The journal system is an oligarchical one: small editorial boards are in tight control of the most prestigious journals. They decide what is worthy of publication and thereby who will be a respected and successful researcher. There are no checks or balances in this system.
When it comes to organizing our community we seem stuck. I understand that until 20 years ago communication was too difficult to allow a more democratic approach to the community's organization. But this has changed! Now we are a fully connected community thanks to the internet.
I think many people fear that average researchers will suddenly have the same chance to make an impression as their prize-winning colleagues. More people seem to worry about cranks and everyday, mediocre research "diluting" top research. However, the people who worry about this are the very ones who could easily prevent it: I have never met a mathematician who couldn't tell a crank from a real mathematician, or a good result from a better one. We know these things and we feel strongly about them. So why are we so worried?
Consider the following quotes from Nisan's second post.
When one thinks of web-based reputation systems, one shudders at the thought of papers being evaluated by popular vote rather than by trusted experts. [...] any “decisions”, rankings, or scores must take into account the identity of the recommender/voter/referee/commenter, giving more weight to those with higher reputation.
Besides reminding me of this, MathOverflow has already shown this to be a straw man argument: quality has prevailed. It is precisely the top researchers who easily gain a high reputation. If you want to say something negative, it's perhaps that they can get away with inappropriate questions and that they have "fans" voting up anything they write -- but that's the same kind of bias we see in the journal system.
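The weighting Nisan describes in the quote above -- scores that account for who voted, not just how many did -- can be sketched in a few lines. Everything here (the names, the weights, the linear sum) is an illustrative assumption, not a description of MathOverflow or any real system:

```python
def weighted_score(votes, reputation):
    """Score a paper from a list of (voter, +1/-1) votes,
    weighting each vote by the voter's reputation.
    Unknown voters get a default weight of 1.0."""
    return sum(direction * reputation.get(voter, 1.0)
               for voter, direction in votes)

# Hypothetical reputations: an expert's vote counts far more
# than a student's, and a crank's barely registers.
reputation = {"expert": 50.0, "student": 2.0, "crank": 0.5}
votes = [("expert", +1), ("student", +1), ("crank", -1)]

print(weighted_score(votes, reputation))  # 50 + 2 - 0.5 = 51.5
```

The point of the sketch: under such a scheme a hundred crank downvotes cannot drown out one expert endorsement, which is exactly why "evaluation by popular vote" is a straw man.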
If you want to see a real problem with MO, here's one: if you're a logician, you hardly stand a chance of ever posting an answer, given how active Joel, Andreas and François are. And indeed, you find people like Andres and Carl more active on math.SE than on MO these days. That's why we need more alternatives everywhere.
The real issue I see is laziness: journals (and MO) filter for us; they take away the burden of being informed, active members of our community. A lot of people seem reluctant to leave this self-imposed immaturity. After all, filtering is hard, forming an opinion is hard, organizing a community is hard. But I think it is vital for our community to make the effort: there's too much to be gained if we do, and too much to be lost if we don't.
Let me finish on a positive note. Imagine all mathematical content (all of it, every single paper, note and drawing) were available on an open social network.
Now watch the TED talk by Deb Roy from 11:00 onwards (HT to Igor Carron).
If you're worried about MO's reputation points, imagine such a data analysis for the actual content. Are you still worried we won't be able to identify exceptional contributors?