That’s an interesting distinction. Nevertheless, cohort studies, like ecological studies, are suitable only for generating hypotheses; they do not constitute proof of causation. The only way to use epidemiological data as proof of causation is by meeting Bradford-Hill’s criteria, and most studies come nowhere close to doing so. In fact, in most of the studies I’ve looked at, the clinical effects claimed are minuscule, even if they are statistically significant. Which makes me wonder how the researchers can believe they have actually found a signal amid all the noise. Perhaps the situation is better in fields outside of human nutrition. I certainly hope so, anyway.
Actually, there is no absolute proof of anything in inductive science. There is only evidence for or against a hypothesis. As Gary Taubes stated so well in the video you shared: half of what we think we know is likely to be wrong, and science is (thankfully) self-correcting.
Bradford Hill's criteria have long been debunked in epidemiologic methods, except for the temporality criterion: a cause has to come before an effect. The others are not so helpful.
- Strength - many important causal associations are weak. The association between smoking and CVD tends to be weak, but most would agree that smoking increases the risk of CVD.
- Consistency - nope. Take the many examples on this forum of people who experience widely varying effects of the same dietary exposures, likely due to contributing factors that we don’t understand.
- Specificity - wholly untrue that we should expect an exposure to have a single effect. Take the many effects of cigarette smoking.
- Temporality - yes
- Biologic gradient - more examples from this forum, where too little of a nutrient exposure might be as likely to cause adverse effects as too much.
- Plausibility - sounds important but many implausible explanations have turned out to be true.
- Coherence - the examples Hill gave seemed more about plausibility, but if he meant consistency in findings, we know that is difficult to achieve given the varied methods, populations, and study designs used.
- Experimental evidence - not feasible for many causal questions, and a good example of how even experiments can be misinterpreted: study the effects of swamp gases on malaria in an endemic country by randomizing exposure. The exposed will indeed have a higher incidence of malaria, but it won't be due to swamp gas exposure; it will be due to the mosquitoes breeding in those same swamps.
- Analogy - well, people can come up with all sorts of imaginative analogies to support what they believe, so…
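The swamp-gas point above can be sketched as a toy simulation (every number here is invented for illustration): residence near a swamp is randomized, malaria risk is driven entirely by mosquito density, and "swamp gas" does nothing at all — yet the exposed arm still shows far more malaria.

```python
import random

random.seed(42)

def simulate(n=10_000):
    """Hypothetical trial: randomize villagers to live near a swamp.

    In this model the only causal pathway is mosquito density,
    which happens to be higher near swamps; the named exposure
    ("swamp gas") has no effect whatsoever.
    """
    near_cases = away_cases = 0
    for _ in range(n):
        near = random.random() < 0.5            # randomized "exposure"
        mosquito_risk = 0.30 if near else 0.05  # the real cause
        if random.random() < mosquito_risk:
            if near:
                near_cases += 1
            else:
                away_cases += 1
    return near_cases, away_cases

near, away = simulate()
# The trial "confirms" the swamp-gas hypothesis: many more cases
# in the exposed arm, for reasons the experiment cannot see.
print(near, away)
```

The randomization is flawless, the association is real and strong, and the mechanistic conclusion is still wrong — which is the point of the bullet.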
Ah. So evidently we all stopped smoking unnecessarily.
Not at all following that…
You are great. n=1 observational data
Thoroughly enjoying your posts.
I was simply alluding to the fact that it was Bradford-Hill who demonstrated, by means of his criteria—which you say “have long been debunked”—the causal link between cigarette smoking and lung cancer. If his criteria are invalid, then cigarette smoking is not a cause of lung cancer, and we stopped smoking for no valid reason.
@PaulL and @Wendy198’s point led me to turn to that (questionable) arbiter of fact, Wikipedia, where there are advocates and detractors of his approach. Thanks for setting me off to learn something more about this…
I believe it is safe to say that Bradford-Hill's criteria are probably more a minimum guideline for proposing causality than a rigid standard that can be mechanically applied. They are a matter of the preponderance of evidence, not of certain proof, assuming certainty were even possible in a given case.
It’s just that when you see a risk 5-30 times greater, you can begin to believe that causation might be involved. But does a risk ratio of 1.35 amount to anything like the same level of concern? Bradford-Hill himself felt that the risk had to be at least double for us even to think there might be something there worth worrying about.
@SomeGuy Rather than Wikipedia, I would direct you to the respected epidemiology textbook, Modern Epidemiology, by Rothman and Greenland. There’s not really a debate about the points I raised. Which ones are up for debate?
@PaulL it’s true that it’s harder to explain a strong association by confounding (the usual problem), but I can guarantee there are also questionable findings in the literature, particularly ratio measures of effect of 5 or more that are based on tiny cell sizes. It’s mathematically easier to generate a large ratio with small numbers. Take the CVD example: CVD is so “common” that it is hard to generate ratio measures of effect greater than 2, even for cigarette smoking, which I think we all agree increases the risk of CVD.
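Both halves of that point can be shown with a few lines of arithmetic (the counts below are made up for illustration; the CI uses the standard log-normal approximation for a risk ratio):

```python
import math

def risk_ratio_ci(a, n1, b, n0):
    """Risk ratio for a cases of n1 exposed vs b cases of n0 unexposed,
    with a 95% CI from the usual log-normal approximation."""
    rr = (a / n1) / (b / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Rare outcome, tiny cells: 5 vs 1 cases in arms of 1,000 each.
# The ratio is an impressive 5, but the CI runs from about 0.6 to 43.
print(risk_ratio_ci(5, 1000, 1, 1000))

# Common outcome: a large absolute difference (40% vs 25% risk)
# still yields only a modest ratio of 1.6. And with a 25% baseline
# risk, no exposure could ever produce a ratio above 4.
print(risk_ratio_ci(400, 1000, 250, 1000))
```

So a ratio of 5 built on a handful of cases can be far less informative than a ratio of 1.6 built on hundreds, and common outcomes mathematically cap how large the ratio can get.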
Many thanks for this lead.
Just found an e-copy of the 4th Edition (2021) - while Rothman is still involved, others have replaced Greenland on the author list.
At well over 2,500 pages in epub form, this digital tome will still likely weigh down my tablet for some time to come. But a quick look at coverage of the Bradford-Hill checklist suggests many concerns are cited regarding its practical application in the field.
Will try to learn what I can, time and sleep permitting. Thanks again for the suggestion.
Well, it’s still disappointing to know that the evidence linking tobacco smoking to lung cancer is based on a statistical fraud, but really, that just means that epidemiological evidence, at least where chronic diseases are concerned, is completely unreliable, with no hope of explaining anything. I wonder if anyone has told the Harvard School of Public Health?
Maths certainly helps… randomized controlled scientific experiments prove, within reason, with larger sample sizes and repetition.
Maths doesn’t dictate.
Definitive, repeatable, provable results do.
You know, I normally disparage Harvard too, but apparently Frank Hu actually wanted low carb looked at:
Shocked me, this did.
Wow. Who said this? Where in any of the discussion above is this implied? The link between smoking and lung cancer is supported by the great preponderance of evidence in support of the hypothesis.
Thanks. Well worth reading this link.
But to say I was shocked would be overstating my reaction.
I’m just shocked that Frank Hu made an argument FOR low carb.
The rest of the article was just depressing. SOSO (same old, same old) crap.
Ok, then all epi studies should be immediately stopped, because they will never prove anything. I kinda agree with this. All they tend to do is provide fodder for people to say things like “eggs are worse than smoking!”.
Then again, things like mask wearing can’t ever really be done as an RCT. Sometimes, we have to resort to the best we have.
But you’ll see hazard ratios that are tiny, and people will conclude something from that. I think there has to be a way of saying “that’s unlikely to matter”, and at least Bradford Hill attempted to make something that did that.
Don’t you remember posting, in this very thread, that “Bradford Hill’s criteria have long been debunked in epidemiologic methods”? I cut and pasted those words from one of your posts.
I wonder if perhaps you and I mean different things by the word “debunked.” To me, “debunked” means that Bradford-Hill’s criteria have been invalidated. And doesn’t it follow, therefore, that invalidating his criteria means invalidating his conclusion that the relationship is causal, given that the criteria were instrumental in allowing him to reach that conclusion? What am I missing here? How can we assert that an observed correlation is causal on the basis of debunked criteria for evaluating such a correlation?
It is a fundamental principle of statistics, or so I understand, that no observed correlation can constitute proof of a causal relationship; proof must come from elsewhere. But Bradford-Hill’s criteria were an attempt to say that, in this case (smoking and lung cancer), the evidence justifies drawing the conclusion of a causal relationship.
Yet if those criteria have been shown to be inapplicable (“debunked”), then what other criteria can be used in their place to justify asserting that this strong correlation amounts to causation? You’ve eliminated strength of correlation as a criterion, you’ve eliminated the mechanistic explanation, you’ve eliminated dose-response as a criterion, etc., etc.
FWIW, I didn’t take @Wendy198’s characterization of the Bradford Hill checklist as having been “debunked” as a framework to be tantamount to alleging “statistical fraud” in the smoking-cancer linkage.
Not sure how one step got us to the other.
Regardless of how reliable we deem a particular human construct to be as a framework for drawing statistical inferences, it still doesn’t change facts on the ground in the real world between actual causes & actual effects.
One is a model of reality. The other is reality.