
11 dreams for the publishing debate, the complete version

This is a double post of sorts. The reason is that it took me a very long time to write this (in fact, the first draft is marked April 18). In the process, quite a number of versions were stored by WordPress. As you can see, I ended up splitting the original post into 11 individual ones. (Actually, I ended up extending each of the individual posts, adding another ~1000 words, a whopping 4000+ words in total.) Anyway, the point is, I wouldn't want to delete this original draft, since its history should be quite interesting if I can ever get myself to make all revisions public (there's some rude ranting in there). So please excuse this double post.
As always, check out the first post for more context.

These are dreams. Some are realistic---perhaps just around the corner; others are way out there---basically crazy. Some will apply to everyone, others only to some. But all have diversity in mind, diversity in our expectations of who researchers are and what they do.

1. write fewer original-research papers

I know what you're going to say. But hear me out. This one is at the core of it all: enabling researchers to publish fewer "new result"-papers.

I believe all major problems brought up in the debate are, at heart, caused by the immense increase in publications -- not the global increase, but the personal one. You have to publish far too much (and too big) these days to get a half-decent position or grant. Increasing publication numbers did not increase the quality of research or, for that matter, the "quality of life" in our communities.

Instead, the massive inflation is killing us, devaluing everything we do as researchers. More papers mean that each of our papers is worth less. Having to publish more papers means we produce more research of questionable quality (unintentionally and otherwise). Young researchers especially have to publish for metrics instead of quality. Worst of all, evaluating researchers only by this steam-punkesque output means that the jobs go to people with this one singular skill -- writing the right kind of papers to please the editorial boards of the right kind of journals -- leading to an intellectual monoculture instead of diverse, stable, rich communities. In particular, the pressure works against women and minorities, who often start with and continue to face disadvantages in their careers that make it harder to produce the desired "output" in the desired time frame.

(If you're wondering why I'm not bashing "evil publishers": I don't think they are the problem -- we are. If you are happy with the inflation of papers-in-journals, then big publishers are what you need.)

2. get real credit for surveys, reviews and exposition

Surveys, reviews and expositions are research -- nothing more, nothing less. We live in a time when it is actually more important to write expository work. Why? Because we've optimized our production pipeline so well that there is no shortage of new(-looking) results. Yes, you can argue about "relevance" (I won't, but I can't stop you). But if you're serious about all that "research for research's sake" talk, then you might notice that we've figured out how to educate people by the tens of thousands every year who do nothing but churn out result after result.

As Felix aptly wrote: we need to move beyond "new" results. And this means stepping back from them and, well, reviewing. Surveys are the glue that holds our fields together, holds our communities together. We do it all the time -- if you read a paper thoroughly, you'll most likely write a review anyway. As we work on larger projects or grant proposals, these reviews aggregate and easily become surveys and expositions. We need to make all of these public so that the community and the original author(!) can benefit from this enormous creative output, which otherwise might only show up in a reading seminar or a journal club.

Mathematicians usually do even more. If we read a paper, we create our own version of the results. Mathematics needs these just like music needs different interpretations. We need to give people who are "just" re-writing proofs the credit they deserve: the credit for keeping results alive, accessible and understandable to a wider audience than the original author and referee.

And by credit, I don't mean vouchers for some bookstore. Surveys must be first-class citizens! Surveying and reviewing other people's work belongs on your CV; it is no less original than your "original research", and every department should have people who are exceptional at it, who are better at surveys, reviews or exposition than at "new" research and who add to the diversity of a department.

3. get real credit for refereeing

Refereeing is research, just like surveys and "new results". This might seem redundant after dream #2 (and in many ways I would say that's the goal), but given the importance of refereeing right now and the way we do it, it deserves its own dream.

Currently, good refereeing seems nearly impossible. With the increase in publications we can't even keep up with what's happening in our own narrowly focused interests -- how, then, can we expect to referee properly? Sam tells his own funny but sad story of refereeing honestly, but the problem runs deeper. Good referees are hard to find, and when you find them, they could be writing a paper instead of refereeing -- and why should they not?

There are many ways we could improve refereeing. We can (and should) split up the different stages of refereeing (reference checking, correctness checking on different levels, opinion gathering, etc.), we should open up pre- and post-publication peer review (and in-between peer review), and we should use alternative methods to collect these reviews (instead of review journals that have at most one opinion per paper).

One key part of refereeing is forming an opinion -- and voicing it. That's very hard, in particular for young and disadvantaged researchers who fear significant repercussions. But we don't need fewer people sharing their opinions on other researchers' output, we need more (whether they agree or not); it's a responsibility both to our own research communities and to society at large.

But all of these are very unlikely to work if we don't find a way to give referees the credit (and criticism) they deserve. Before we can praise referees, we need tools to evaluate refereeing, both in its current and its future form. Above all, we need to be able to put refereeing on the CV; it's part of the research qualities every strong department should strive for.

4. get real credit for communicating

Communicating other people's work through surveys, exposition and reviews is important. Then there's communicating to students, a.k.a. teaching. This is an especially sore point for mathematics, where undergraduate teaching has become a blunt tool to weed out students for other disciplines -- our self-respect seems greatly lacking. And then, of course, there's spreading the word to the wider public.

Have you ever noticed that many of the great researchers are excellent communicators (and teachers and surveyors and referees)? I would go as far as to say that a truly great researcher will be great at at least one of these forms of communication. Without that, you're only a great something-else. We should cherish not just one ability of our truly great minds, but all of them. Right now we promote researchers according to how their "new result"-output compares to the "new result"-output of the truly great ones. Why are we so one-dimensional?

Also, communication goes both ways, so we must listen. Can you imagine a graduate-student-run, "Law Review"-like journal for mathematics? Graduate students are perfect for forcing you to reflect on your research -- we should encourage them to make their thoughts public (including anonymously and pseudonymously). But we mathematicians need to go further. When Bob Moses spoke at Michigan earlier this year, he argued that history will judge us as math literacy becomes a civil rights issue in the 21st century. Are we listening?

Without communication, we risk the longevity of our own research areas because we won't be understood by the next generation, by other areas and by society as a whole. But this means something's gotta give, and we need to accept that by giving real credit for communicating.

5. sharing all our work every way we can

One battle that most scientists are still fighting -- full self-archival rights -- mathematics has long won. We need to make this happen for everyone. We must use the arXiv, our homepages or (if you must) walled gardens such as academia.edu and ResearchGate. But we should also embrace more recent alternatives like figshare and GitHub to post all our research publicly -- preprints, notes, lecture notes, expository notes, simply everything. Open notebook science is the key, but it's in its infancy. We need to find ways (many different ones) that work for a larger part of our communities, so that it becomes easier for people to experiment with it and to make it something even better. It might not be for everyone, but it's something everybody will benefit from.

However, the truth is that even among mathematicians a large group doesn't use the arXiv, let alone keep professional homepages deserving of the name. I know that especially older researchers often hesitate because technical issues "are more trouble than they're worth". This is a challenge: we must argue against this attitude and, more importantly, help sort out the problems -- within departments, within small research communities etc. -- to overcome these obstacles. Approach people, ask them why their papers aren't available, and help them put them properly online. In all other cases: Don't be a Grothendieck.

6. publishing in smaller increments

Have you ever read a paper that seemed to hide its true goal because the author wasn't finished but had to publish something under the pressure of otherwise not having a job? Have you ever read a paper that made a small but reasonable result look like much more than it really was, just so that it would make the least publishable unit? Have you ever read a paper that was so badly written that you couldn't make sense of it?

Paradoxically, one way to publish fewer "new results" papers might be to publish more, but differently. For scientists this might seem easier, publishing as the data comes in. But even in mathematics we have all those little results -- the small gems, the one-line proof, the clever counterexample, the good reformulation, the useful shortcut -- all of these could be published quickly and openly instead of waiting until we find enough to "make it a paper". Just like data, they could be reviewed publicly much more easily, and we should get proper credit for doing so (both author and reviewer).

Longer results could (depending on their nature) be published incrementally, with multiple public revisions. Take the preprint idea one step further and make your writing process public. Use a revision control system like GitHub to expose the process. Allow for outside input in your writing process, in your working process. The internet makes us an intricately connected community, and we can work together in one big virtual seminar. There are already excellent examples in this area. In mathematics in particular, we have the Polymath project and Ryan O'Donnell's Analysis of Boolean Functions, a textbook he's developing as a blog.

But, as I've argued, Polymath doesn't work for very many people; so, again, we need many, many more projects like these so that more people have the opportunity to find a way that works for them.

There's of course a risk -- this could create a lot of noise as incrementally published results implode when data turns out to be flawed, proofs disintegrate and general anarchy rears its head! But I think it's worth the risk. Search technology is constantly improving, and good scientific standards should ensure that failed research is marked accordingly. And we have so much to gain! We might finally be able to give credit for failing productively -- the main thing researchers do anyway; we fail and fail and fail until we understand what's going on. Sometimes we have to give up, but why shouldn't somebody else continue for us?

Even if your research implodes, you should get credit and, much, much more importantly, you will help others not to repeat your mistakes. Fred once worked on a nice old problem which, after a few months, clearly wasn't going anywhere. But he realized that all his attacks had probably been attempted before, so he wrote a preprint on all the ways to fail and published his code along with it, so that people in the future might benefit from his failure. Or, as Doron Zeilberger wrote: if only John von Neumann's maid had saved all the notes he'd thrown away each day!

Or to put it differently: the most exciting result in 2011 was having no result about Con(PA).

7. an affordable open access model

Research publications should be free, for both authors and readers. When it comes to traditional journals, there are already some in mathematics (e.g. NYJM) that offer open access without any fees. I believe we can move to a journal system that is entirely open access and without publishing fees, but we're not there yet. Mathematical journals are said to have profit margins of 90%, so we should be the first to get there. Gold open access is already realistic and, more importantly, can be made affordable right now. With PeerJ, this is already around the corner on a much larger scale.

On the other hand, it seems natural to me to return academic publishing to universities and academic societies. For journals, this could simply be done PLoS ONE-style (checking correctness, not "importance"), and our institutions could certainly make such journals open access, non-profit and actually free (just have each department produce one journal). But new ways of doing and sharing research will hardly fit into the journal model. These methods will be much more user-centric; they will be about people, not publishers. And the natural place to store information about people is their professional homepage, as a repository of their work. It seems natural that universities or academic societies could play a much better role in this than proprietary social networks.

However, one problem we'd be facing is that journal publishing is the cash cow of many societies, and this money is often used to cross-finance important services. This is not helpful in a time when publishing is a button [Wayback machine]. We'll have to find another way to finance the things that need financing, and we'll have to talk about that, too. Additionally, if we move scientific "publishing" to the next level, there will be new costs: costs for research into doing it, costs for experiments, costs for failures. We need to talk about those, too.

8. a cultural change of doing research (and metrics for it)

If we publish fewer "new results", some structural problems might just disappear. Fewer papers means fewer journals means fewer subscriptions means less refereeing. A smaller workload and smaller costs. But this can only work if we have tools to nevertheless show what happens in between writing-down-a-decent-result-which-takes-years-dammit. Thanks to the internet, we can actually hope to do this. But the internet will also change everything (again and again) and it will change our communities (again and again). We need to invest resources right now to be able to benefit from this change, to find a way to evolve into a work mode that is more appropriate for what is yet to come than for what was in the past. We need to get into a constant-change mentality, and we need to make this worth everybody's while.

Our funding bodies, academic societies, universities and other institutions must fund more experiments in doing research differently, and the metrics needed for them. At the very least, we need more incubators like Macmillan's Digital Science, but also non-profit ones along the lines of MathJax's business model. Conversely, our communities must value anybody's effort to join new experimental platforms such as MO/math.SE, blogs, citizen science projects, polymath-esque projects, Wikipedia and all those platforms that are still only dreams. The altmetrics people are already setting many good examples, and platforms such as Stack Exchange and Wikipedia are working hard to develop the reputation technology that will allow us to measure people's activity in these new scientific environments.

This, of course, means breaking out of the monoculture of "papers in journals" -- a rough cultural change. Nobody talks about this drastic change, which, I believe, makes it the biggest problem in this entire debate. If we change the way we do (and evaluate) research, then we ask an incredible amount of the people who are working well in the current model, who are good at (only?) writing the right kind of papers for the right kind of journals. It would be a revolution if people were hired because their non-traditional research activities outweigh the traditional paper count of other applicants -- in other words, if hiring happened strategically, with that kind of diversity in mind. Case in point: even Michael Nielsen overlooks this problem completely in his wonderful book.

This change is even more important for smaller research areas (and must be made to work for them), which can't play the impact factor game. Failing to adapt might, in their case, even mean extinction. But the potential is equally great, as more diverse ways of doing research can also mean a better chance to work across fields and to improve collaboration and the visibility of small fields.

9. propagating the Shelah Model -- encouraging bad writers to seek out good writers

This may seem a very math-specific dream, possibly extending to the humanities, but it does apply to the sciences in more than one way.

Not only do we have too many "new result" papers, we also have too many horribly written papers. Although there's certainly a talent to being a great writer of research publications, we're facing the problem that our communities just don't care enough to enforce even mediocre standards of writing. This is particularly hard with leading researchers, who will find their manuscripts barely scrutinized.

Much worse, however, young researchers have an incentive not to care about the quality of their writing and the work necessary for it. Instead the motto is: "Just get it past the referee and be done with it! You could produce a new result while you waste your time revising this one!" Referees and editors in turn have little incentive to spend their unpaid time improving a paper, so it's easier to simply dismiss a paper or ignore its communicative shortcomings (after all, the referee understood the damn thing...).

As Dror Bar-Natan said:

Papers are written so that their author(s) can forget their content and move on to other things.

The lack of quality control in academic writing endangers the long-term accessibility of our research as much as data supplements in proprietary formats do. In the long run, it makes papers less trustworthy and compromises the greatest strength of research: building on earlier work. Simply put, archival is rather pointless if nobody can comprehend the content.

Of course, some people are more skilled as writers, some less. So why not pair them up? Saharon Shelah has published over 1,000 papers with 220 co-authors. Not only does the number of co-authors allow Shelah to produce more papers, it also allows the community to understand them better. Yet some of his co-authors are mocked as "Shelah's secretaries", supposedly not publishing enough "alone". Besides being pathetic ad hominem attacks, this completely overlooks the fact that in many cases only these co-authors make Shelah's incredible output accessible to the research community.

Let's have editors and referees tell bad writers to find a capable co-author instead. It should be a win-win situation -- good writers will get to be on more papers and researchers lacking in writing skills get their thoughts polished so that other people can actually build on their work. As a bonus, the papers get some pre-publication peer-review and editors and referees shoulder a smaller workload. All we have to do is give up the notion that only "new results" are acceptable research currency. It seems a small price to pay.

And don't call them secretaries when they really are smiths. They might not mine the ore, but without them it's all a pretty useless lump of rock.

10. getting from the come-to-me mentality to the go-out-and-find-them mentality

Sometimes in my dreams, somebody screams "I need journal rankings and impact factors because that's the only way to weed out 600 applicants" and I awake sweating, panicking, thinking: you're doing it wrong!

It's quite simple, really. If you have a job that attracts 600 serious applicants, you should be headhunting to find the best fit -- not passively waiting for applications to pile up on a search committee's desk. Of course, the current system does not allow such behavior. But without a doubt this is a better strategy for hiring a candidate, a strategy with an actual chance of finding a good fit for your department, for all aspects of research.

This simple idea is far-fetched given the established system. There would be many challenges in changing our culture in this direction, not the least of which is keeping nepotism in check. I don't think it will really happen, actually, even if I dream that it would.

But using impact factors instead is simply lazy. The fact that we rely on them is actually damaging to our community, as they are a metric that can be badly gamed and is heavily biased towards mainstream research. Needless to say, serving on hiring committees is yet another research activity that deserves much more recognition. Only if this work gets proper acknowledgement is there any chance of spending more resources on a productive strategy for one of the highest-impact activities of any researcher -- hiring fellow faculty members.

11. a democratization of the communities

Here's my biggest dream: a democratically organized scientific community. By this I'm not talking about the challenges of representing the interests of scientific communities within a democratic society. Nor am I talking about the democratic aspects of citizen science. (Both of these are extremely important, of course.)

Instead, I'm baffled by the aristocratic and often oligarchic structures of scientific communities. Can you imagine editorial boards or grant committees being appointed through a democratic process, say through a parliament of researchers? And can you then imagine this parliament being elected by all researchers of a specific field -- the faculty of small colleges, the researchers in industry, the grad students, the postdocs, all getting the same vote as prize-winning researchers?

Noam Nisan once wrote something that (quoted brutally out of context) reminds me of 18th-century aristocracy:

[...] one shudders at the thought of papers being evaluated by popular vote rather than by trusted experts.

It seems true enough until you ask: who decides which experts are considered trusted?

One of the biggest problems of academia today is that positions of power and responsibility are assigned by what is currently considered academic merit -- basically, the "new results" count, impact factors, blah blah. This often leads to problems, since researchers who are exceptional at producing new results are often poor at managing our communities. But there's something more fundamental at play: meritocracy only seems fine until you notice there's no such thing. Academia is not about merit, it is about reputation -- and reputation is currently determined exclusively by the journals you publish in, and hence by the editorial boards that think your research is "interesting". Yet appointments to these editorial boards are essentially oligarchic. Could it be different? After all, it's not as if nothing in our society is organized by popular vote -- on the contrary, we organize everything so that trust is connected to popular vote.

Democracy is the best of the worst -- in academia as elsewhere. But in academia, democracy hasn't really been possible until the advent of the internet. It used to be that we were all so disconnected that each prof had to be the lone ruler of a small academic duchy. But the net brings us so close together that we can constantly work in much larger groups; our collaborations span continents, our communication is instantaneous and worldwide. We are so connected that democratic decisions are not only possible, they are inevitable.

This may sound like a revolution but it isn't. It really isn't. If we had an election today for Grand Nagus of Mathematics -- how would Tim Gowers not win? That is to say, if we had elections, we'd most likely vote for the same people who are in charge now.

But at least we could hold them accountable. You see, what still comes back to haunt me is this quote from Tim Gowers:

I have often had to referee papers that seem completely uninteresting. But these papers will often have a string of references and will state that they are answering questions from those references. In short, there are little communities of mathematicians out there who are carrying out research that has no impact at all on what one might call the big story of mathematics. Nevertheless, it is good that these researchers exist. To draw an analogy with sport, if you want your country to produce grand slam winners at tennis, you want to have a big infrastructure that includes people competing at lower levels, people who play for fun, and so on.

As true as the comment appears, it comes across as terribly elitist. The huddled masses are tolerated by the elite only because their uninteresting efforts (is there a greater insult?) are needed for justification. This is, of course, completely upside down. It is the large body of hard working "average" researchers that ensures the future of our community and allows a few talented minds to go to extremes. The average researchers are the ones giving them the enormous privilege of pursuing pure research idly.

Maybe we could use a new constitution for our scientific communities. And then let's have elections. And then let's have transparency and accountability.

Oh well, it's only a dream.

