Peter Krautzberger · on the web

The recent publishing debate -- The economic power of publishers

I have been trying to find the time to continue my posts on the publishing debate, discussing the other posts from the original timeline. Suffice it to say, I've not yet given up getting around to it... But there were two posts in the last couple of days that made me want to write something.

If you follow me on twitter, you might have seen my remarks on the IMU's strange attempt at entering the blogosphere. A couple of weeks ago, they started a Blog on Mathematical Journals. It's not a blog about writing, but about commenting. The first post (which, horror horribilis, was named BLOG on Mathematical Journals) was aimed at getting comments on the IMU's development of a journal ranking to battle the dominance of impact factors. I'm under the impression it's not really working out -- 28 comments in three weeks from the mathematical community of the entire world(!) just seems a little unimpressive. Then again, the only way I heard of it was via the EMS news blog (of course through mathblogging.org), go figure.

But this week, the IMU surprised me with a follow-up post which, as promised, opened up the discussion of a specific related topic. It is titled "What might be done about high prices of journals?".

I started writing a comment over there but, for several reasons, decided not to post it. Since the topic is close to what I see as the center of the publishing debate, I parked it in the ongoing draft here. So here it is:

It seems to me that the pricing issue cannot be solved as long as hiring decisions are almost exclusively based on the publication track record in traditional journals (and, even worse, on the metrics of those journals).

In other words, when it comes down to it, we only value researchers by the metrics of the traditional journals they've published in. This gives the publishers incredible power over the development of our community and there's no conceivable reason why for-profit publishers wouldn't use this power to maximize their profits.

I think the mathematical community needs to spend time and money on finding ways to evaluate other activities of researchers.

For example, Wikipedia has a de facto peer review system, especially for its "Good Articles". It's not as easy to evaluate a Wikipedia author, but an active author offers a lot of activity to evaluate them by, and there is precedent for considering this a research activity.

Similarly for MathOverflow and other research-level Q&A sites. The so-called reputation points are a poor metric per se, but they nevertheless indicate an activity that is worth evaluating; a high "reputation" indicates at the very least a high level of activity, and hence an amount of data that could be used to evaluate a researcher.

The best thing about these new platforms of academic activity is: we can get started right away! On the one hand, mathematicians can (and should) invest time in making their efforts outside of traditional journal publishing more apparent, in particular on their professional homepage. On the other hand, the professional societies should start taking the web seriously, support and invest in new ways of doing research online and, above all, investigate the development of standards for evaluating such activities.

So why am I writing this now? Well, Noam Nisan mentioned a powerful rant by Danah Boyd with many interesting comments (if only because people there aren't shocked that (social) media activity might be worthwhile for researchers).

So I just left a comment at Noam Nisan's g+ post (repeating some of the above).

I very much agree with your comment there "... but the current system will stay unless we develop an alternative to this grading, an alternative to which there is an incremental transition path"

However, many alternatives already exist, I think -- from MathOverflow to Wikipedia to blogging to video lectures.

What seems to be missing is

a) ourselves considering our own non-standard research activities as research activities (e.g. http://blog.wikimedia.org/2011/04/06/tenure-awarded-based-in-part-on-wikipedia-contributions/ )

b) tools to analyze the data generated by non-standard activities (e.g. MO reputation points are a poor metric, but the actual questions and answers of a user are an incredibly rich source)

I also think that we should try not to replace the monoculture of journal publications with a different monoculture (say MO or Gowers's arxivMO-idea) -- it's too easy to game one system.

Instead, valuing many activities will allow people to find more activities they excel in without the pressure to excel in all of them.

Any thoughts on this?


Addendum Dec 22: John Baez encouraged me to post my comments on the IMU's blog after all (and I saw this morning that he also encouraged the n-Category Café crowd). I extended the comment a bit in light of the comments already visible there. It's not yet through the moderation process... So, here it is:

It seems to me that the pricing issue cannot be solved as long as hiring decisions are almost exclusively based on the publication track record in traditional journals (and, even worse, on the metrics of those journals).

In other words, when it comes down to it, we only value researchers by the metrics of the traditional journals they’ve published in. This gives the publishers incredible power over the development of our community and there’s no conceivable reason why for-profit publishers wouldn’t use this power to maximize their profits.

The pressure to publish is immense, and getting published has turned into a game rather than an effort of communication -- with many adverse effects on a functioning community.

None of the comments so far have addressed the issue that the monoculture of "publishing papers" is limiting the way a scientific community can develop. Journals used to be a necessary evil to enable a minimal degree of communication within a community that is spread around the globe. But now the community is fully connected, in real time, and there is no difficulty staying in touch with any researcher as long as they use the internet to some degree. In turn, we can now communicate every detail of our academic work effortlessly; not just individual papers, but refereeing, student interaction, Q&As, video lectures, expository writing, research exchanges, live-broadcasting of talks and seminars, etc.

I think the key problem is that we need to find ways to reduce the pressure to publish the traditional way. The only way this is possible is if we find a way to publish less.

Therefore we need to spend time (and money) on finding ways to evaluate other activities of researchers.

For example, Wikipedia has a de facto peer review system, especially for its "Good Articles". It's not as easy to evaluate a Wikipedia author, but an active author offers a lot of activity to evaluate them by, and there is precedent for considering this a research activity.

Similarly for MathOverflow and other research-level Q&A sites: The so-called reputation points are a poor metric per se, but they nevertheless indicate an activity that is worth evaluating; a high "reputation" indicates at the very least a high level of activity, and hence an amount of data that could be used to evaluate a researcher.

Fortunately, these new platforms of academic activity allow us to get started right away.

On the one hand, mathematicians can (and should) invest time in making their efforts outside of traditional journal publishing more visible, in particular on their professional homepage and their CVs (in the research section, that is!).

On the other hand, the professional societies should start taking the activity of mathematicians on the web seriously. They could support and invest in new ways of doing research online and, above all, investigate the development of standards for evaluating such activities.

===
I thank John Baez for encouraging me to make this comment.

