Monday, October 10, 2011

Minimum Publishable Unit

What the hell, I'll post again this year.

When is the right time to publish? Is it when you have an interesting result? Is it when your result is "complete", zipped up, lock-tight, slam-dunk? Is it when the result seems more likely true than not? Is it when you want to recruit others to the cause? Is it when you think it's good enough to impress people, and impress them in the amount that you need them to be impressed for your career to advance?

Sadly, I think the answer is often that last one on my list. Publications are the currency of science, and the only hard proof of productivity on the public stage. Science is a big establishment, and our individual spheres are often tightly circumscribed, though they may seem vast from the inside. If I am applying for a job, and I have a Nature paper or a Cell paper (not happening any time soon, but let's just imagine for a moment), then I can put that on the desk and point to it. Say my competitor has a paper in PLoS Genetics. Still good, still the same paper, but a different brand. I'd carry more weight with the Nature paper.

That brand-seeking behavior is nothing new, and it's not actually the problem in itself. The problem is when the brand takes on a life of its own. If everyone buys Nike shoes because they are flashy and Nike, and not because the Nike brand speaks to quality, then where can you expect the quality to go? Similarly, if Nature papers are sought because they are flashy and hard to get, where does Nature go from there? Will Nature continue to speak to the quality of the underlying science?

My fundamental problem with the big-name paper is that the results are often big and sweeping. They are rarely circumscribed, however elegant and well thought out they may be. Imagine a demonstration that a single class of proteins performs some specific catalytic or signaling function. This in itself is worthy of being shared with the scientific community as an informative work. But to get the Nature paper you need to show the protein is relevant in disease X, and that if you inhibit the catalytic activity you cure warts, cancer, and heart disease.

Of course, inhibition of the protein won't turn out to actually cure warts, cancer, and heart disease. Somewhere along the line a wrinkle will have been ignored. Some control, obvious in hindsight, won't have been done. This is inevitable in the sort of expansive manuscripts that top journals demand.

I argue we should be more focused on rapid dissemination of our research, and on broad feedback from the community early in validating a result. I might run a smart, genome-wide screen, but you might have a better idea of how to interpret it. Someone else might recognize an important statistical error. Most importantly, someone else can *replicate* the experimental results early, before I truck on down the road with faulty assumptions.

The problem is not that the "minimum publishable unit" is too small; it's that it's too big, and that we focus on the papers and the journals rather than on the results themselves. In the end, a Nature paper that's wrong is worth less than a small-time journal article that's right. The Nature paper that's wrong may actually have negative value, having led labs down the wrong road. Next time you hear someone say that they have a Cell paper under their belt, ask whether the result was replicated. Ask whether the result has, in fact, been important. Remember to value not the scientific articles, but the science itself.

4 comments:

  1. Well written. Does the brand matter, though, on repeat? Once you've had one high-profile paper and demonstrated the capacity for quality (or at least eye-catching) work... I've heard more emphasis then on total papers, or total impact, and other strange measures. And from clinical academic centers, just as important is often administrative roles - managing collaborative grants, or training programs, etc. In the end your point is well taken, and we're confusing the means with the end.

    ReplyDelete
  2. That's a great point. It's sort of like the SATs, MCATs, Boards, etc. Once you've done well on it once, it doesn't really matter ever again.

    Like a standardized test, publication in journals offers a test of caliber that is sort of objective (actually less objective than those tests...) and sort of correlated with future work quality. However, it is *really* easy to use, since your list of the top 10 journals would probably mostly agree with mine. The ease of use overwhelms the problems of the *sort ofs*. Fortunately, I'd argue that institutions that are smart enough to take the time to balance that "first big publication" against actual work quality will win out in the end by attracting high-quality, overlooked candidates.

    ReplyDelete
  3. At this point I think they'll recruit individuals who can bring in lots of money regardless of what and how they publish...

    ReplyDelete
  4. True. We're all beholden to the study sections.

    ReplyDelete