1. priors for the phenomenon imply many small correlates, and past results have failed to replicate
2. low statistical power for claimed results, from both planned & post hoc perspectives
3. small-sample biases: can't estimate accurately (without relatively exotic techniques) when p > n
4. measurement error in diagnosing depression: investigators chose a lower-reliability self-report over the more reliable interview data they already had
5. substantial & likely biased attrition of subjects
6. correlational result is confounded by antidepressant usage, smoking, etc. (many things correlate with being depressed, so any blood test may be picking those up instead), and given the small sample size, results may not generalize
7. failing multiple-comparison correction of final results (see the sketch below)
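To make point 7 concrete, here's a minimal simulation of the multiple-comparisons problem. This is my own illustration, not from the critique itself: the per-group size matches the thread's "N = 32" figure, and the transcript count is a hypothetical stand-in.

```python
# Sketch: screening thousands of candidate markers at p < 0.05 with no
# correction yields plenty of "significant" hits on pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group = 32   # per the thread's "N = 32" (assumption)
n_markers = 20000  # hypothetical number of candidate transcripts

# Both groups are drawn from the same distribution: no real effect exists.
depressed = rng.normal(size=(n_per_group, n_markers))
controls = rng.normal(size=(n_per_group, n_markers))

_, p = stats.ttest_ind(depressed, controls, axis=0)
print("uncorrected hits at p<0.05:", int((p < 0.05).sum()))   # ~1000 false positives
print("Bonferroni hits:", int((p < 0.05 / n_markers).sum()))  # almost always 0
```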
It may not be clear from the above summary that the "How to critique" link discusses this blood test and why it's probably wrong. I really recommend reading it - it's pretty convincing.
One other thing, looking at the original paper, the affected RNAs look pretty random. I'd find it more convincing if these RNAs suggested some possible connection to depression, e.g. if they were for brain receptors rather than spleen genes or generic metabolism genes. Of course the body is complicated and anything could possibly affect depression...
Let me guess, the funding for this research originated from a pharma company? I notice, despite all the optimism in the press, that the human race hasn't "cured" a whole lot of medical conditions. It's easier, and more profitable, to fund a bunch of trials until one of them shows a statistically significant improvement, and then find a way to categorize as large a portion of the population as possible into the affected category and "treat" them. "Well sir, you don't have diabetes, but you do have pre-diabetes, and we need you to start taking these $20 tablets immediately" or "Ms. Smith, your child's restlessness is a sign of OCD; we can't cure it, but we can treat it with these $300 amphetamine pills."
If a medication consistently shows a statistically significant improvement, then what exactly is wrong with using it to improve someone's life? Perhaps I'm missing something in your post...
If you run enough trials, and scuttle the ones you don't like, you are likely to get a result, eventually, that shows a statistically significant improvement (albeit due to error). Which wouldn't be so bad if drugs cost nothing, had no side effects, and physicians were aware of exactly how much of an improvement the medication is alleged to have (a tiny improvement of some malady may not be worth the hassle and cost for some patients). Having said that, I'm not an expert on medicine here, I'm just someone who read this interesting book: http://en.wikipedia.org/wiki/Bad_Pharma
You added the word "consistently". However, once something is approved, it is assumed to work as intended and is rarely evaluated again until there's a lawsuit over the side effects (e.g. Vioxx) or the patents expire.
Consistently showing improvement is not actually a requirement.
It's not a smoking gun, but the phrase "secretary of Biochem" doesn't allay my suspicions:
"Mr. Davee was a business executive for companies that developed medical products. He was chairman of Surgitek and DS Medical and secretary of Biochem International Inc. The companies' primary products are vital-sign monitors."
Having posted this link, I feel bad for calling attention to a weak study when I'd only read the press release. Yet I'm glad that doing so resulted in someone (Gwern) providing a link to this excellent analytical rebuttal that I probably wouldn't have encountered otherwise.
Being rebutted by gwern on a topic of blood chemistry and/or mind-altering drugs isn't anything to be ashamed of. If you were unaware, this is but one of many outstanding "articles" available on his site at no charge:
> Having posted this link, I feel bad for calling attention to a weak study when I'd only read the press release.
There's no need to feel bad about this. People here are pretty sophisticated about this topic and we welcome discussions of dubious science -- it tunes our skepticism. :)
The article presents a rather stark claim that nine RNA "markers" in a blood sample "were able to diagnose depression". Unfortunately, the article is shy on the details necessary to evaluate the merit of the claim.
Since depression is a quite heterogeneous condition, it's unlikely that any particular test will give unambiguous results in every subset of depressed patients. There's bound to be "fuzziness" in a substantial portion of cases.
ATM I don't have time to discuss the subject in any depth, but I do think the problem is going to be the media reporting this as absolute fact, which is not only terribly misleading to the public but, even worse, undermines the researchers' good efforts.
I was wondering about the accuracy of a test against the fuzziness of the diagnoses as well.
There really is an absolutely huge gamut in severity and type of mental disorder and it's so, so painful. I was sincerely hoping this test setup would at least start to help alleviate the "let's try this antidepressant until you feel better, and if that doesn't work hit you with another one" style of random debugging on people's brains.
> I do think the problem is going to be the media reporting this as an absolute fact, not only terribly misleading to the public, but even worse, undermining the researchers' good efforts.
This tends to apply to a very large fraction of all science-related news.
I think the fuzziness is a problem for assessing treatment, and such a test would help sort out the different clinical realities behind that admittedly vague behavioural label. Right now, the choice between the several treatment options (diet, exercise, talk therapy, drugs; electroshock and neurosurgery for the worst cases) is based more on patient consent than on any scientific distinction between possible issues with the underlying mechanism. It is great to give patients that choice: agency certainly helps for early onsets. However, it's not enough.
Depression is the result of many factors: life events, recreational drug use, bad habits, genetic conditions -- but just considering those is still too vague. Biological tests (imaging, blood samples, genetics) would support a classification that could be a great help. It certainly was with cancer, a single word that covers entirely different diseases.
I don't think depression is well enough defined for an unambiguous test to even be possible. Google defines depression as "severe despondency and dejection, typically felt over a period of time and accompanied by feelings of hopelessness and inadequacy" which is a reasonable definition, but it really just means "very very sad". When we say someone is suffering from depression does that imply that the cause is medical in nature (not because something horrible happened to them)? How sad does someone have to be before they reach the threshold of depression?
You cannot really blame the media reporting when there is a video with one of the researchers making the same claim (to have a blood test for depression).
Free link to the published paper, "Blood transcriptomic biomarkers in adult primary care patients with major depressive disorder undergoing cognitive behavioral therapy":
In short - an unexpected result, based on a very small sample, that tries to draw even further implications (about the effectiveness of cognitive behavioral therapy).
Plausibility aside, one thing that bothers me here is how quickly (and arrogantly) they tie the viability of cognitive therapy to RNA markers. Cognitive therapy works and doesn't work for so many reasons, and none of those reasons has anything even remotely to do with RNA.
I am saddened and angry that, in the end, the researchers will justify the experiment with results based on statistics. Which medicines and therapies work or don't work will be left to chance, random trial and error, and complicated numbers. The statistics will fail to acknowledge the situations and mental states of the patients at the time such-and-such therapy was or wasn't possible.
I'm curious what happens if the markers are removed. For example, if one of the markers is that you've got too much X in your system, and you take a drug that binds to X and then is flushed out of your system - do you stay depressed?
Almost certainly not. Biological systems are deeply complicated, intertangled nests of cause and effect; when you measure a correlation between two variables (such as being depressed and lower/higher levels of RNA X), it's not going to be a direct causal link; you're much more likely to have picked up on a distant indirect relationship. (I have an essay on this topic which you might have seen on HN before: http://gwern.net/Causality )
In this case, if you skim the depression literature, you'll find depression linked with all sorts of things - too much sleep, too little sleep, weight changes, poorer cognitive performance, high inflammation levels, bad eating habits, stimulants, etc etc. Any of these could be a good predictor of depression but changing them will not help or may harm. (If a depressed person finds modafinil helps them get through the day, a naive investigator may find modafinil correlates with depression - and take it away. Not going to help.)
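A toy numerical version of that point (my illustration, not from the linked essay): let a hidden factor drive both depression and a blood marker, then "intervene" on the marker and watch depression not move.

```python
# Toy common-cause model: "inflammation" drives both a blood marker and
# depression. The marker predicts depression, but clamping it changes nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
inflammation = rng.normal(size=n)                      # hidden common cause
marker = inflammation + rng.normal(scale=0.5, size=n)  # observed correlate
depression = inflammation + rng.normal(scale=0.5, size=n)

print(np.corrcoef(marker, depression)[0, 1])  # ~0.8: marker "predicts" depression

marker[:] = 0.0  # the intervention: a drug flushes the marker from the blood
print(depression.mean(), depression.std())  # unchanged: ~0 and ~1.12 either way
```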
I doubt it would help, as those markers are more likely an innocuous byproduct than a cause of the problem. RNA is something that does its job inside cells, and I'm actually surprised that they can get useful markers from blood.
Not likely. Many of these indicators are tangentially related to depression; they're not causative agents -- or we don't know enough to be able to make that claim.
This may seem to be a trite example, but many diseases are accompanied by a fever. But treating the fever doesn't treat the disease, and in some cases you're better off letting the fever abate on its own.
As someone who's suffered from depression for a very long time (and it totally messed up my adolescence), this is a very good thing.
It is _very_ difficult to accurately diagnose these disorders, and then half the time people don't believe you anyway. It's a very difficult disorder to convey to other people because it just doesn't resonate unless you've gone through it or have been clinically diagnosed (there's a huge, huge difference between being down for a few weeks and having no joy in anything for years, seemingly out of nowhere).
> It is _very_ difficult to accurately diagnose these disorders, and then half the time people don't believe you anyway.
Fair enough. Which do you think is worse -- a false positive or a false negative? I ask because this diagnostic category has plenty of both.
I would want to be very sure about a depression diagnosis before assigning it, especially to a young person with little life experience. These questionable diagnoses can change the direction of a person's life and possibly cause him to think of himself as permanently handicapped, brain-damaged, after a spell of the blues followed by a superficial diagnosis from a "professional".
Because of how little we know about depression, most diagnoses are in a gray area. It's not like there's a blood test -- and if you've read the posts in this thread, you know by now that there isn't a blood test, there's only a press release.
I'm not really gonna go too deep into it, but my personal feeling is that a false positive would be infinitely worse.
Those drugs they give you do shit to you. I've been permanently altered by taking them in my adolescence (I can remember feeling _significantly_ different pre- and post-drugs, and I've been off them for many years). There's a reason I'm not very keen on the "keep trying them till something sticks" approach that we take with this crap. Especially with developing individuals.
Isn't this the kind of thing that diagnoses itself, in that someone super unhappy will seek the test out? Do they need a blood test to determine if they're seriously depressed or not?
Why would someone that isn't depressed ever get this test?
If it becomes powerful enough, you could be checked for whether or not you're predisposed to depression above and beyond what we currently have (family history, mostly), allowing you to take the steps needed and learn the skills necessary to cope with it.
CBT allowed me to beat my depression, but learning it while depressed was nigh on impossible. If I'd known that I suffered from depression or could in the future, I might have had the chance to learn it prior to my first episode, setting me up to be able to live with it far easier and with far less drama than I ended up going through.
We really need to say more than just "N = 32". If you're tossing coins and the treatment group gets 16 heads, that's a significant result. I'm not saying that's the case for this particular study, but the general point is that N = 32 is not enough to dismiss a study out of hand.
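As a quick check of that coin-toss arithmetic (my addition, not from the study):

```python
# If the 16-subject treatment group comes up heads 16 times against a fair
# coin, the exact binomial test is already far below the usual 0.05 cutoff.
from scipy.stats import binomtest

print(binomtest(16, n=16, p=0.5).pvalue)  # ~3.1e-05
```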
You'd have a stronger point if this study was an experimental intervention with binary outcomes. However, what this study was actually doing was fishing for correlations between various blood factors and depression. Thus the small sample size greatly increases the chance that some factor will "just happen" to be more likely in the 32 depressed patients than in the 32 non-depressed ones.
So I stand by "N = 32" as a valid critique of the study.
Quote: " ... a breakthrough approach that provides the first objective, scientific diagnosis for depression."
Wait ... an "objective, scientific diagnosis" that doesn't identify the cause of the ailment? If I visited a doctor and she said, "You're running a fever, and your blood has cooties," I wouldn't think of that as the height of medical expertise, and it's certainly not science.
In science, we establish causes and then measure effects. This test measures effects. No one knows what causes depression, therapy is indistinguishable from the placebo effect, and existing medications have a terrible reputation.
This claim is way ahead of the evidence, evidence that in principle would identify the source of depression, offer an explanation to replace these many descriptions, and (at long last) craft a meaningful treatment.
EDIT: Okay, before you people downvote my completely accurate post, read this:
Quote: "Scientific theories are testable and make falsifiable predictions. They describe the causal elements responsible for a particular natural phenomenon, and are used to explain and predict aspects of the physical universe or specific areas of inquiry (e.g. electricity, chemistry, astronomy)."
Feel free to drop by the source and downvote it too, while you're banging your mouse in ignorance.
> In science, we establish causes and then measure effects.
No, in science, we observe and measure phenomena, then form hypotheses about relations between them, then attempt to falsify those hypotheses, and then use the hypothesized relations which we have not yet falsified as a basis for inferring likely causes from observed effects and inferring likely effects from observed causes and inferring likely co-occurrences that are the results of common (and potentially unknown) causes from the other co-occurring phenomena.
"Establish causes and then measure effects" is not only wrong, its non-sense. To even be able to establish the likelihood of a causal relation you must first measure effects and the phenomenon hypothesized as the cause.
> "Establish causes and then measure effects" is not only wrong, its non-sense.
All right, so you don't understand science. It's not fatal, and you're hardly alone.
From Aristotle, through Francis Bacon, to the present, the centerpiece of science has been the establishment of causes, explanations, not merely observations and descriptions as you're suggesting.
Observations often lead to the shaping of a theory, then confirmation of that theory -- that explanation -- through the prediction of phenomena not yet observed, and observations that confirm the predictions (or that falsify the theory).
Quote: "Scientific theories are testable and make falsifiable predictions. They describe the causal elements responsible for a particular natural phenomenon, and are used to explain and predict aspects of the physical universe or specific areas of inquiry (e.g. electricity, chemistry, astronomy)."
I could provide hundreds of quotes similar to that above, but I can't repair your education in a short post. You must accept responsibility for those things you don't understand, but that you feel comfortable reciting to other people in a public show of narcissism.
> Wait ... an "objective, scientific diagnosis" that doesn't identify the cause of the ailment? If I visited a doctor and she said, "You're running a fever, and your blood has cooties," I wouldn't think of that as the height of medical expertise, and it's certainly not science.
GPs will tell you that this is what they say to most patients. Why do you think antibiotics were overprescribed by so much for so long? To treat cooties. Plenty of illnesses have weakly defined causes but are treated anyway. Your refusal to subject physical health treatments to the same level of rigour that you subject mental health treatments to is one sign of your kookery on this subject.
> therapy is indistinguishable from the placebo
This is nonsense. Perhaps you're confused? We know that "counseling" doesn't work. But no one credible suggests counseling as treatment; they suggest things like CBT. Here's a meta-meta-analysis. Given the evidence for CBT is so strong and the evidence for counseling is so weak, it's telling that you recommend "chats with your aunt" as an effective treatment - totally misreading the literature.
> This review summarizes the current meta-analysis literature on treatment outcomes of CBT for a wide range of psychiatric disorders. A search of the literature resulted in a total of 16 methodologically rigorous meta-analyses. Our review focuses on effect sizes that contrast outcomes for CBT with outcomes for various control groups for each disorder, which provides an overview of the effectiveness of cognitive therapy as quantified by meta-analysis. Large effect sizes were found for CBT for unipolar depression, generalized anxiety disorder, panic disorder with or without agoraphobia, social phobia, posttraumatic stress disorder, and childhood depressive and anxiety disorders. Effect sizes for CBT of marital distress, anger, childhood somatic disorders, and chronic pain were in the moderate range. CBT was somewhat superior to antidepressants in the treatment of adult depression. CBT was equally effective as behavior therapy in the treatment of adult depression and obsessive-compulsive disorder. Large uncontrolled effect sizes were found for bulimia nervosa and schizophrenia. The 16 meta-analyses we reviewed support the efficacy of CBT for many disorders. While limitations of the meta-analytic approach need to be considered in interpreting the results of this review, our findings are consistent with other review methodologies that also provide support for the efficacy of CBT. © 2005 Elsevier Ltd. All rights reserved.
It should be obvious why you don't see placebo-controlled studies for CBT - it is unethical to provide patients no therapy, so it's hard to get studies approved, and it's hard to create a sham talking therapy that would blind the patient and therapist.
> and existing medications have a terrible reputation
Medication is not recommended as front line treatment anymore. Medications for any illness can be unpleasant. I'm not convinced that anti depressants are any worse than any other medication. Comparing anti-depressants with placebo shows similar rates of drop-out.
> GPs will tell you that this is what they say to most patients.
Yes, and when that happens, which it certainly does, it's not science. Are you sure that an example from medicine following the same pattern as psychology somehow validates the practice as science for both? With all respect, that's not the most powerful argument.
A scientific example from medicine would be -- to choose from hundreds of examples -- Ebola, in which we know exactly what causes it, we can view the causative agent in a microscope, and we know how to treat the cause, not the symptoms. It's the same with hundreds of other conditions. No such conditions or treatments exist in psychology -- the DSM is a list of symptoms with no causes.
> Given the evidence for CBT is so strong and the evidence for counseling is so weak, it's telling that you recommend "chats with your aunt" as an effective treatment - totally misreading the literature.
You are mistaken. In fact, a careful reading of the literature leads one to exactly that conclusion (i.e. all therapies are equally effective, therefore there is no basis for preferring CBT over IPT, or a psychotherapist over a sympathetic relative). This is a typical criticism of the literature supporting CBT:
Quote: "... the methodological processes used to select the studies in the previously mentioned meta-analysis and the worth of its findings have been called into question.[113][114][115]"
That isn't surprising, when one considers how CBT is evaluated -- with no control groups or other safeguards against systematic bias. While reading the psychology literature one often encounters the expression "no-treatment control" to describe telling one group that they won't be treated and then calling that the control group -- only to add a patina of scientific respectability to a study that has no meaningful controls.
Quote: "While there is support for the efficacy of CBT over no treatment control conditions, there is little evidence that CBT is more efficacious than other psychotherapies. [...] Rather than declaring the 'dodo bird verdict' that CBT and all other psychotherapies are equally efficacious, it would be more beneficial to develop more potent forms of CBT by identifying variables that mediate treatment outcomes."
A noble goal for the future, but one that cannot erase the fact that CBT has a questionable evidentiary basis at present.
>> therapy is indistinguishable from the placebo
> This is nonsense. Perhaps you're confused?
The original statement was "... therapy is indistinguishable from the placebo effect." I recommend that you read the literature in your own field:
Quote: "Studies, such as the one conducted by Paley et al. (2008), have found little difference in the efficacy of CBT and IPT. A meta-analysis by Robinson et al. (1990) found that CBT was superior to a no-treatment control group; however, when compared to a placebo control group, there was no significant difference."
I chose the above examples from articles, each of which overall makes a case in favor of CBT, to show that the case for CBT exists without adequate controls against the placebo effect.
When confronted by this evidence against the scientific standing of the corpus of psychology research, its defenders often switch the subject of argument, saying that the reason there are no controls is because that would be unethical or impractical. But that's a different topic, one that cannot be used to rationalize the imagined scientific standing of psychological research.
In fact, reading on, I see you make precisely this argument:
> It should be obvious why you don't see placebo controlled studies for CBT - it is unethical to provide patients no therapy and so it's hard to get studies approved and it's hard to creat a sham talking therapy that would blind the patient and therapist.
That's true, that would be very difficult. But the above isn't a counterargument against the finding that psychology isn't a scientific activity -- instead it attempts to explain and rationalize that fact.
>> and existing medications have a terrible reputation
> Medication is not recommended as front line treatment anymore.
You haven't been talking to psychiatrists lately. To psychiatry, medications are the centerpiece of what they call "biological psychiatry." If medications were to suddenly be withdrawn from the mental health toolkit, modern psychiatry would collapse.
I have had this precise conversation hundreds of times over the past ten years. Someone defends one or more aspects of psychological research or practice, certain claims are made, then, when the claims are compared to the literature, the claims turn out to be flatly contradicted by the literature.
This is not to argue that the psychological literature comes to a single, well-supported conclusion, as is true in science. The psychological literature is more like an open-air buffet where, by wandering among the tables, you can find whatever suits your fancy.
When scientists at the LHC announced the discovery of the Higgs boson, the evidence was extraordinarily good, with a statistical reliability exceeding five sigma (http://understandinguncertainty.org/explaining-5-sigma-higgs...). When scientists review this discovery, which had been predicted (i.e. explained) decades before on purely theoretical grounds, they are satisfied that the announcement corresponds to a trustworthy scientific research outcome. It can always be falsified by better science, or explained by a better scientific theory, but it is certainly science -- it describes a process that reliably moves from theory to supporting evidence, from prediction to confirmation using empirical evidence.
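For reference, the five-sigma threshold translates to a p-value like this (one-sided normal tail, as particle physics conventionally quotes it):

```python
# Five sigma as a one-sided tail probability of the standard normal.
from scipy.stats import norm

print(norm.sf(5))  # ~2.9e-07, i.e. roughly 1 in 3.5 million
```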
In psychology, there are no theories to which research outcomes can be compared. This means studies tend to compare one observation to another, one symptomatic treatment to another. Eventually neuroscience will evolve to the point where it will replace psychology as the preferred basis for drawing conclusions about human behavior, but neuroscience is not remotely prepared for that role right now.
> Yes, and when that happens, which it certainly does, it's not science
This is excellent progress. You accept that other areas of medicine can be as bad as psychiatry.
Will you stop dropping into unrelated threads and leaving off-topic rants about the poor quality of psychological research? I doubt it.
Will you be expanding your rants to include other areas of medicine that have poor quality research? I doubt it.
You do make a mistake when you lump in counseling with therapies. Here's one example of you making this mistake:
> You are mistaken. In fact, a careful reading of the literature leads one to exactly that conclusion (i.e. all therapies are equally effective, therefore there is no basis for preferring CBT over IPT, or a psychotherapist over a sympathetic relative). This is a typical criticism of the literature supporting CBT:
They carefully talk about therapies; they are not talking about counseling or about chats with relatives. The research that you cite either ignores such non-specific counseling or says that therapies are clearly superior.
And did you just quote Wikipedia as a source? That's a bit cheeky, especially in a discussion about rigour and quality.
This conversation is probably a bit tedious if we keep talking about where we disagree. Let's look at areas where we agree. Firstly: I agree that a lot of psychological research is not very good. (Where we disagree is that I think most research is as bad. This link is more persuasive: http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm )
> You haven't been talking to psychiatrists lately. To psychiatry, medications are the centerpiece of what they call "biological psychiatry." If medications were to suddenly be withdrawn from the mental health toolkit, modern psychiatry would collapse.
This is a good point. Many psychiatrists rely on medication. Many psychologists would treat everything without medication. Teams that include both often have vigorous discussion about where the balance should be.
A psychiatrist is not a front-line treatment! In England, to get access to a psychiatrist you see your GP; and she might refer you to a gateway mental health service; and they might refer you on to see a psychiatrist. But the patient might not go to their GP and might self-refer to a CBT course instead. And the GP could refer you on to a self-guided CBT resource, or a community-based talking therapy (probably CBT); or prescribe medication and a talking therapy. The gateway team might not refer you to a psychiatrist but could refer you back to primary care with recommendations for therapy and quasi-professional interventions. (E.g. someone to help you stay in work or get back to work; some group to increase your social activity; and so on. There's a wide range of non-medical and non-therapy interventions that can be accessed before people need to be medicated.)
Perhaps this is a cultural thing? England has pretty strict guidelines about what treatment looks like. Here's the guidance for depression in adults:
> Do not use antidepressants routinely to treat persistent subthreshold depressive symptoms or mild depression because the risk–benefit ratio is poor, but consider them for people with:
> + a past history of moderate or severe depression or
> + initial presentation of subthreshold depressive symptoms that have been present for a long period (typically at least 2 years) or
> + subthreshold depressive symptoms or mild depression that persist(s) after other interventions.
NICE provide guidelines for medical treatment in England. Clinical Commissioning Groups commission local services based on NICE guidelines. While they don't have to follow the guidelines they need a good reason to not do so.
> I have had this precise conversation hundreds of times over the past ten years. Someone defends one or more aspects of psychological research or practice, certain claims are made, then, when the claims are compared to the literature, the claims turn out to be flatly contradicted by the literature.
You need to be a bit careful. You've mistakenly claimed that research shows no difference between "counseling" and "therapy" when even the papers you cite show clear benefits of therapies over counseling. Some papers show similar amounts of benefit among different therapies, but that is not the same as saying a therapy is no better than a chat. This is not a benefit to psychologists - it means that counseling is not recommended; counseling is advised against in the aftermath of local tragedy or disaster.
> In psychology, there are no theories to which research outcomes can be compared. This means studies tend to compare one observation to another, one symptomatic treatment to another.
I want to understand what you mean by this, so I'll ask some questions.
Let's look at CBT. There was a division among psychologists for years. One group said you needed years of therapy to find and address the cause of current distress - the root trauma. The other group said that the concept of a causal root trauma was irrelevant and you needed to address current thoughts and behaviours.
"Ann" has a phobia of dogs.
The first group would talk to Ann about dogs and try to find the event that caused Ann to be scared of dogs, and so on.
The other group would talk to Ann about her fear and about the physiological reactions that fear causes. They'd talk about Ann's emotions and the thoughts that trigger those emotions, the evidence for those thoughts, and how strongly she feels those emotions. They'd allow Ann to sit with those thoughts for a while. They'd then ask Ann to think about other evidence, what her thoughts are, and how strongly she feels the emotions now. Depending on the severity of the phobia, Ann will be cured after a couple of hours of work! Even a severe phobia will see a near-complete absence of symptoms for two years.
When we research treatment for phobia and we compare these two approaches we find the first therapeutic approach does not work. But we find the second approach strongly works.
We find this benefit across different studies with different controls and different sample sizes. We find it when we do meta-analysis. We find it for different phobias, different age groups, different populations. We find it whether the CBT is self-guided or group-based or one-to-one.
So, even though we don't know what phobia is and we don't know how the various therapies achieve their effect we do know that one works, and quickly, and the other doesn't.
But reading what you say, I get the impression that it's all the same as homeopathy or acupuncture - pure nonsense with no credible method of action and no science to support it.
If you'd accept that psychiatry, while bad, is not as bad as homeopathy, I'd have some agreement.