
A comment at Gobbledygook on evaluating science

This morning I commented on a post at one of the PLoS blogs, Gobbledygook. Since my comment got rather lengthy, I thought it was worth posting here as well. You should go and read the original post. Martin Fenner makes some interesting points on the problem of evaluating research output and suggests ways to change the system so that less time is spent writing grant applications and reports. I commented the following.

These ideas sound very good and I agree with them, but I think this is only half the problem. The change has to go both ways. Researchers and their institutions have to become much more professional when it comes to acquiring grants and evaluating projects. We need better standards, but they will be wasted if we, the researchers (and our institutions), do not invest in using them efficiently — which includes money and time for training.

Research institutions often do not offer any kind of support or training for their own researchers when it comes to grant writing — on any level. Neither do they offer support for a later evaluation. Why is that so? Why don’t we require a certain amount of overhead to be spent on developing such support? Why don’t we employ people who specialize in giving this kind of support? After all, even standards should be revised regularly — and as a researcher I don’t want to be the one who has to keep track of them all the time.

As an example, I could imagine that continuous reporting and evaluating could be less draining than the big reports that nobody reads. It should be possible to require researchers to keep a blog-like log for a project. After all, your lab notes are already there, so why not keep them digitally, with some additional reflection once in a while? Writing a ‘report’ the length of a blog post, or just a good long email at the end of the week, seems like much less work than a huge report at the end of 6 months or a year.

But again, this would have to be developed professionally — just as writing in general is a skill that requires training, so is this kind of reporting and evaluating. Do we have or develop tools for that? Do we share the tools that we have? Do we train our students to use them? Do we have methods that allow trusted outsiders to evaluate such ‘status updates’? While the project is still running?

Such transparency could even shift the focus back to “successfully failing” instead of blowing up every small result to reach the smallest publishable unit. After all, science is about failing, again and again, until we have revised our ideas enough to make progress in understanding. Imagine if our failures became our successes again.