Sunday, October 31, 2010

The Hazard of High Throughput

We live in a high-throughput age. Science is no exception. Microarrays, high-throughput sequencing, and spectrophotometers generate data on a scale and scope that would have taken years, decades, or centuries with the old generation of technology. We are generating data on a scale that could not have been conceived of a decade ago.

Great power... let's see what comes next....

The challenge, of course, is with the analysis. The data now generated are so massive that they cannot be visualized or processed whole in their raw form. Nor can plain t-tests get you where you want to go: with thousands of comparisons being made, some will dip below p < 0.05 by chance alone.

Enter the world of the Bonferroni correction and the Benjamini-Hochberg false discovery rate. These statistical methods let us sift through such enormous data sets and focus on results that differ significantly from random expectation.
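To make the contrast concrete, here is a minimal sketch in Python. The gene counts and p-value distributions are invented for illustration, and both corrections are hand-rolled rather than taken from any particular library:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject where p < alpha / m; controls the family-wise error rate."""
    pvals = np.asarray(pvals)
    return pvals < alpha / len(pvals)

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure; controls the false discovery
    rate. Find the largest rank k with p_(k) <= (k/m) * alpha, then
    reject the hypotheses with the k smallest p-values."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()  # largest rank passing its threshold
        reject[order[:k + 1]] = True
    return reject

# Toy data: 10,000 "genes", of which 100 carry a real effect.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=9_900),       # true nulls: uniform p-values
                    rng.beta(0.5, 20, size=100)])  # real effects: p-values near 0

print("uncorrected p < 0.05:", (p < 0.05).sum())  # ~500 nulls sneak through
print("Bonferroni:          ", bonferroni(p).sum())
print("Benjamini-Hochberg:  ", benjamini_hochberg(p).sum())
```

Bonferroni guards against even a single false positive and so discards real effects; BH tolerates a controlled fraction of false discoveries and keeps far more of them. Which trade-off is right depends on the experiment, which is exactly why the choice can't be made on autopilot.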

The hazard comes with these methods' complexity and their somewhat obscure statistical assumptions. Many scientists are very well versed in the hypotheses of their discipline, but less so in the mathematics. There are so many ways to go wrong when applying these methods in a cookie-cutter way that it boggles the mind. Alongside "correlation does not imply causation" there are other such gems as "difference in significance does not imply significant difference".
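To see that last fallacy concretely, here is a toy calculation (the effect sizes and sample sizes are invented): drug A clears p < 0.05 against control and drug B just misses, yet A and B are statistically indistinguishable from one another.

```python
from scipy.stats import ttest_ind_from_stats

# Hypothetical summary statistics: drugs A and B vs. the same control,
# n = 30 per arm, shared standard deviation of 1.0, slightly different means.
n, sd = 30, 1.0
p_a  = ttest_ind_from_stats(0.55, sd, n, 0.0, sd, n).pvalue   # A vs. control
p_b  = ttest_ind_from_stats(0.45, sd, n, 0.0, sd, n).pvalue   # B vs. control
p_ab = ttest_ind_from_stats(0.55, sd, n, 0.45, sd, n).pvalue  # A vs. B

print(f"A vs. control: p = {p_a:.3f}")   # ~0.04: "significant"
print(f"B vs. control: p = {p_b:.3f}")   # ~0.09: "not significant"
print(f"A vs. B:       p = {p_ab:.3f}")  # ~0.70: no evidence A beats B
```

Declaring A a winner and B a loser from the first two p-values alone is the mistake; the only legitimate way to compare A and B is to test them against each other directly.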

That last mistake featured prominently in the first stage of a statistical analysis in a manuscript from a good lab that recently crossed my boss's desk. This lab has produced prominent publications in the past, and I was surprised to see this in their analysis.

What most surprised me was that the statistical method, buried deep within the methods section at the end of the manuscript, did not arouse the ire of my boss or the other lab member who read the paper. It was viewed as a good enough answer to a tough problem. That, plus the prominence of the last author, led to a minor note somewhere in the review.

Reviews don't have dissenting opinions, but let me put one here. Statistical methods are important. So important that, in a paper built on a high-throughput method, they often form the backbone of every follow-up experiment. They should not be relegated to a footnote in the back, and wherever possible they should be declared before the data are even generated, to avoid the nefarious problems of overfitting.

Papers that use statistics need statistically minded reviewers. If we aren't careful, we'll be fooled by randomness.

Sunday, October 17, 2010

Formulary of Ten

Here's a quick question: if you had to trim a hospital's formulary down to just 10 adult meds, what would they be? I'm sure a careful and informed individual could make a comprehensive choice based on quality-adjusted life year (QALY) data and some epidemiology of the area. Here's what I am thinking:

1) Morphine
2) Aspirin
3-4) One or two antibiotics (guess who doesn't remember micro well enough to know which ones? I'm guessing Cipro/levo and vancomycin)
5) Insulin
6) A beta-blocker
7) A statin
8) Doxorubicin (what my boss called the one chemo you'd bring to a desert island)
9) Warfarin
10) Lidocaine

Thoughts? It might be a little over-covered for heart disease and under-covered for diabetes. Also, is it even worth it to have a chemo agent, from a utilitarian perspective? I know that's heretical from someone who is interested in cancer research, but how much life are you really adding with one therapy alone?

Also: do morphine and lidocaine deserve a place? Pain management is important, and I didn't want to leave it out. But when other diseases are being left uncovered, should it be a top priority?

Anyone care to offer an opinion? It's an interesting way of thinking about priorities in U.S. medicine, and of pondering how much we take for granted. Which medicines are the ones we can't make do without?

Thursday, October 14, 2010

Peer Review

If you've spent some time in graduate school, you might have learned the open secret of peer review: it's not entirely done by peers, and it's not always that thorough a review.

Like it or not, grad students often find these reviews on their desks. I've heard of PIs leaving the job to grad students entirely, though this does not occur in my lab. It's understandable that it might happen, though. PIs have huge demands on their time, and as invaluable as their opinions are, there is no way some PIs can field every manuscript. Apparently they have their own concerns about peer review (some of which involve colorful dinos).

But what of the reviews performed by graduate students? There are two ways of looking at the duty of peer review.

Looked at one way, a peer review is a learning opportunity. Often the material details developments at the edge of some field that touches at least tangentially on the student's research. It's a chance to see another lab's raw, early-draft manuscript, to learn what merits publication in high-level journals and what does not, and to contribute to the body of science as a whole.

But let's step out of the shiny world of gumdrops and candy canes for a moment. Peer review can be an inane chore. While students provide some value to the journal and the author of the manuscript, it's harder to see where the review process benefits their own progress up the ladder. Their role in the review is effectively anonymous, and comes with no honors, distinctions, gold stars, pats on the head, brownie points, or first-author publications.

Stated simply, there is no clear match between incentives and the quality of the peer review. Mistakes are often buried somewhere deep in the unending, jargon-filled paragraphs of the methods section. Whether a grad student makes the effort to check those methods, line by line, comes down to how many hours of sleep (if any) they would prefer to have that evening.

Where's the incentive to dig in? Some grad students seem to possess a deep personal drive to throw other scientists under the bus, but it's probably a minority at most institutions. We can't rely on pure sadism to drive the scientific engine. There must be a way to reward careful and well considered reviews, particularly where they find obscure errors and tenuous methods.

I don't claim to know what the reward should be or how it could be structured. I've considered the possibility that confidential peer review is a mistake, and that publications should instead be edited by the journal seeking to publish them. If Nature wants the value added of an expert opinion in the field, let them pay for it! They certainly demand payment for their subscriptions, so why should the expertise behind their product be provided for free?

At the risk of shifting to a seemingly radical alternative, perhaps an open-access, open-comment system is the way to go. Take all comers that pass a basic editorial spot check, and let insightful, observant comments come from the community. Those comments can then be tied to the reputations of those who make them. Great insights can be noticed, and unnecessary bickering (I'm looking at you, reviewer #3) can be ignored. Online systems for grading and sorting comments by reputation already exist in many forms. Perhaps it's time to turn them loose on science.
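As a toy illustration of what such a system might look like (the weighting scheme, names, and numbers here are entirely my own invention), imagine each vote on a comment being weighted by the voter's standing:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    votes: list  # (voter_reputation, is_upvote) pairs

def score(c: Comment) -> float:
    """Net score in which each vote counts in proportion to the voter's reputation."""
    return sum(rep if up else -rep for rep, up in c.votes)

comments = [
    Comment("reviewer3", "Reject. Also, cite my 2004 paper.",
            votes=[(0.2, True), (4.0, False)]),
    Comment("stats_postdoc", "Fig. 2 compares p-values where it should compare effects.",
            votes=[(4.0, True), (1.5, True)]),
]

# Useful criticism floats to the top; low-reputation bickering sinks.
for c in sorted(comments, key=score, reverse=True):
    print(f"{score(c):+.1f}  {c.author}: {c.text}")
```

The hard parts, of course, are the ones this sketch waves away: where the reputation numbers come from, and how to keep them from being gamed.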

Concurrently, give labs the hard task of determining their own publication threshold. Perhaps more self-review will go on in-house if authors know that they can publish whatever they please, and that they'd better get it right the first time. Some systems could even allow papers to be 'versioned', updated (to a point) to reflect public comment.

I don't claim that pay-to-review or open-comment systems solve the problems inherent in the current publication regime, but I think they deserve consideration. Let us at least recognize that good work deserves good incentives, and that adding motivation to peer review can only improve our scientific rigor.

Grad Year 1, Tiny Learnings

Tiny learnings from grad year 1, stated briefly, for my own purposes. Trite? Maybe. Suck it up.

Mentorship is everywhere; always take the opportunity to meet people doing different things in different labs. The guy down the hall may solve the problem that's been killing you all month. That actually happened to me today, and it was great.

Look out for #1, a little bit. The goal is science, but remember that not everyone has your interests at heart. Example: you are very cheap labor for your PI, and he/she isn't itching to see you leave. This cuts both ways: remember that you might not be respecting someone else's need to look out for #1. Example: your PI has duties besides being on call for you 24/7.
Related fact: when your PI says it's easy, it's probably hard. If your PI says it's doable, it'll probably consume all of your attention for as long as you choose to work on it. Remember that you are missing years of experience relative to the person you're talking to.

Write up your research goals early and often, and set some timelines. I almost never hit my own deadlines, and I almost always discover goals I had forgotten. But you can get a long way by taking a step back and seeing the big picture you're swimming in, rather than the one method that hasn't been working for a month. Stupid library preps...

All that is gold does not glitter, and all that glitters is not gold. Corollary: Not all new technology is what it claims to be. Stronger claim: It never is. Stay frosty.

When you are on a roll, rock it. There is no scarcer resource than genuine excitement. Push it.

Tons of papers are wrong. Downright wrong. Sometimes scandalously wrong. It's embarrassing, and some of it represents systemic problems with peer review and that whole mess. You can moan about it for a long time (I did). Not sure if that's worth the bellyache, but I'll keep you posted.

Don't make second-hand goals that stake your success on the assessment of the Nature/Science/NEJM intelligentsia, or the ivy pillars of the academe. Those gals and guys have their own little world, and it should come as no surprise that the people in that clubhouse aren't always all that fun to play with anyway.

Say something, or suck it up. I love to whine. It's a problem that I have. I'm still waiting for an example of when it has helped me. If you have problems with something that's going on in the lab, nip it in the bud and have a conversation. You might learn something, you might stop the problem, or you might at least feel better for having it off your chest. Otherwise suck it up.

The goal is knowledge. The goal is not a first author publication. Find the pleasure in solving the puzzles and exploring the science; it is its own joy. The other rewards may find their way to you somehow or another. Remember what you are building, and why you are building it. Remember your ideal.