1. priors for the phenomenon imply many small correlates, and past results in this area have failed to replicate
2. low statistical power for the claimed results, from both a planned & a post hoc perspective
3. small-sample biases: can't estimate accurately (without relatively exotic techniques) when p > n, i.e. more candidate variables than subjects
4. measurement error in diagnosing depression: the investigators chose a lower-reliability self-report measure even though they had data from a more reliable structured interview
5. substantial & likely biased attrition of subjects
6. the correlational result is confounded by antidepressant usage, smoking, etc. (many things correlate with being depressed, so any blood test may be picking up those instead), and with such a small sample the results may not generalize
7. failure of the final results to survive multiple-comparison correction (see the sketch after this list)
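To make points 1, 2, and 7 concrete, here's a minimal simulation sketch. It is not the paper's actual analysis, and the subject and marker counts are invented for illustration; it just shows that testing many candidate markers on a small sample, in a world where no marker truly matters, reliably produces uncorrected "significant" hits that a Bonferroni correction then wipes out:

    # A minimal sketch, not the paper's analysis: subject and marker
    # counts below are made up for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_subjects = 32    # small sample (points 2-3)
    n_markers = 500    # many candidate transcripts (point 1)

    depressed = np.arange(n_subjects) % 2 == 0   # 16 cases, 16 controls
    markers = rng.normal(size=(n_subjects, n_markers))  # pure noise

    pvals = np.array([
        stats.ttest_ind(markers[depressed, i], markers[~depressed, i]).pvalue
        for i in range(n_markers)
    ])

    # Without correction, ~5% of null markers come up "significant" (point 7).
    print("uncorrected hits (p < 0.05):", int((pvals < 0.05).sum()))
    # Bonferroni: demand p < 0.05/n_markers; the spurious hits vanish.
    print("Bonferroni hits:", int((pvals < 0.05 / n_markers).sum()))

On a typical run you get roughly 25 uncorrected "hits" and zero corrected ones, which is why point 7 alone is close to fatal.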
It may not be clear from the above summary that the "How to critique" link discusses this blood test and why it's probably wrong. I really recommend reading it - it's pretty convincing.
One other thing: looking at the original paper, the affected RNAs look pretty random. I'd find it more convincing if these RNAs suggested some plausible connection to depression, e.g. if they coded for brain receptors rather than spleen genes or generic metabolism genes. Of course, the body is complicated and anything could conceivably affect depression...
Let me guess: the funding for this research originated from a pharma company? I notice, despite all the optimism in the press, that the human race hasn't "cured" a whole lot of medical conditions. It's easier, and more profitable, to fund a bunch of trials until one of them shows a statistically significant improvement, and then find a way to categorize as large a portion of the population as possible into the affected category and "treat" them. "Well sir, you don't have diabetes, but you do have pre-diabetes, and we need you to start taking these $20 tablets immediately" or "Ms. Smith, your child's restlessness is a sign of ADHD; we can't cure it, but we can treat it with these $300 amphetamine pills"
If a medication consistently shows a statistically significant improvement, then what exactly is wrong with using it to improve someone's life? Perhaps I'm missing something in your post...
If you run enough trials, and scuttle the ones you don't like, you are likely to get a result, eventually, that shows a statistically significant improvement (albeit due to chance). Which wouldn't be so bad if drugs cost nothing, had no side effects, and physicians were aware of exactly how much of an improvement the medication is alleged to deliver (a tiny improvement in some malady may not be worth the hassle and cost for some patients). Having said that, I'm not an expert on medicine; I'm just someone who read this interesting book: http://en.wikipedia.org/wiki/Bad_Pharma
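As a back-of-the-envelope sketch of that point (assuming each trial independently tests a truly ineffective drug at the conventional 5% significance threshold):

    # chance that at least one of k independent null trials comes up
    # "statistically significant" at alpha = 0.05
    for k in (1, 5, 10, 20):
        print(k, "trials:", round(1 - (1 - 0.05) ** k, 2))
    # -> 0.05, 0.23, 0.40, 0.64

So a sponsor willing to bury 19 failed trials has roughly a 64% chance of ending up with a "positive" result to publish.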
You added the word "consistently". However, once something is approved, it is assumed to work as intended and is rarely evaluated again until there's a lawsuit over the side effects (e.g. Vioxx) or the patents expire.
Consistently showing improvement is not actually a requirement.
It's not a smoking gun, but the phrase "secretary of Biochem" doesn't allay my suspicions:
"Mr. Davee was a business executive for companies that developed medical products. He was chairman of Surgitek and DS Medical and secretary of Biochem International Inc. The companies' primary products are vital-sign monitors."
Having posted this link, I feel bad for calling attention to a weak study when I'd only read the press release. Yet I'm glad that doing so resulted in someone (Gwern) providing a link to this excellent analytical rebuttal that I probably wouldn't have encountered otherwise.
Being rebutted by gwern on a topic of blood chemistry and/or mind-altering drugs isn't anything to be ashamed of. If you were unaware, this is but one of many outstanding "articles" available on his site at no charge:
> Having posted this link, I feel bad for calling attention to a weak study when I'd only read the press release.
There's no need to feel bad about this. People here are pretty sophisticated about this topic and we welcome discussions of dubious science -- it tunes our skepticism. :)