Flu vaccinations associated with reduced T3D risk


#41

This is simply not true. That doesn’t mean elements of Hill’s criteria can’t be used in an argument. Of course, for example, experimental evidence can and should be used to support a hypothesis. Of course consistency is persuasive. Etc. Hill’s criteria might be sufficient to support a causal argument, but they are by no means necessary.


(Bob M) #42

I think they should be necessary. Case in point: red meat causing diabetes. How? I have never seen a plausible explanation for this.

If the Bradford-Hill criteria had to be met, at least they’d have to come up with a way that eating red meat leads to diabetes.

Because I can tell you it’s complete BS. Red meat does not – and cannot – lead to diabetes. If you think that’s true, go on an all red meat diet. Let us know when you get diabetes.

What’s happening is that the people who have the temerity to admit via FFQs that they eat EVIL red meat are those people who are getting diabetes. But it’s ludicrous to think that red meat causes diabetes or that reducing consumption of red meat will reduce the chance of getting diabetes.


#43

If this list of criteria were a “requirement” for causal inference, you would also have to dismiss the endless n=1 examples on this forum that don’t meet criteria 2 (consistency) or 5 (biologic gradient). And how about when it is simply not possible to collect experimental evidence? Do we give up on a hypothesis? At the very least, can we all agree that specificity makes no sense in chronic disease etiology? Many exposures have multiple disease endpoints. Cigarette smoking causes CVD in addition to lung cancer. Does the fact that it does not only cause lung cancer somehow diminish our understanding of its role in lung cancer?

How can these criteria possibly be used as a checklist of requirements for making causal inference?
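Just to make the distinction concrete, here is a toy sketch (my own illustration, with my own rough True/False readings of the smoking evidence as it stood around 1950, not anything Hill himself wrote): treated as a hard checklist, the inference fails; treated as guidelines, the weight of the evidence still points one way.

```python
# Toy illustration only: the nine Hill items as a checklist, with a rough
# (and debatable) reading of the smoking/lung-cancer evidence circa 1950.
smoking_evidence = {
    "strength": True,
    "consistency": True,
    "specificity": False,        # smoking is linked to CVD etc., not only lung cancer
    "temporality": True,
    "biologic gradient": True,   # heavier smokers, higher risk
    "plausibility": False,       # no specific carcinogen identified at the time
    "coherence": True,
    "experiment": False,         # no RCT possible (or ethical)
    "analogy": True,
}

# Read as a checklist of *requirements*, the inference fails outright:
print("all criteria met:", all(smoking_evidence.values()))      # False

# Read as *guidelines*, most of the evidence still points the same way:
met = sum(smoking_evidence.values())
print(f"criteria satisfied: {met} of {len(smoking_evidence)}")  # 6 of 9
```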

I’m reviewing a bit about what Hill said himself, and it looks like – as is typical – his words have been distorted by the masses. He did not intend for his list to be a set of requirements, but rather, helpful guidelines.


(Bacon is a many-splendoured thing) #44

Yes, we do. Or at the very least, we need to be very clear that it is an unsubstantiated hypothesis that has no experimental data to back it up. Observational data generate hypotheses; they can rarely be used to establish causation. And even in cases where experimental verification is possible, it is tricky to do right.

The reason I laid so much stress on Bradford-Hill’s criteria was that they appeared to be a way to use observational data to come somewhat close to establishing causation—and in fact, it was precisely to be able to state with some reasonable degree of certainty that smoking causes lung cancer that Bradford-Hill developed his criteria in the first place. So if those criteria are now considered invalid, then his conclusion about causality is untenable, because this is a case where it is impossible to do randomised controlled trials to verify the hypothesis.

I suspect that part of the reason for the debunking of Bradford-Hill’s criteria (and quite apart from the economic forces at work) is that nutritional researchers recognise—on some level, at least—that their observational data very rarely come anywhere close to meeting those criteria, which, if the criteria were upheld, would prevent them from ever stating anything authoritative about the conclusions they reach from their observational data. Discarding the criteria removes that constraint, and this inevitably leads to shoddy science. Not in the sense that the observations are wrong (though of course they might be), but in the sense that researchers place far too much confidence in the conclusions they reach. And this is much, much easier to do when you feel exempt from subjecting your hypotheses to experimental testing.

What Bradford-Hill did was to say, here’s a case where experimental verification of the hypothesis is effectively impossible, not to mention unethical in the first place, and here are reasons we might use to justify a conclusion that the observed correlation (which no one disputes) amounts to an implication of causation. If you invalidate his logic, you invalidate his conclusion. You can no longer justify the assertion that smoking causes lung cancer, unless you can come up with a different set of criteria that allow you to do what he did.

So I am asking again: Since people still do assert that smoking causes lung cancer, on what basis do they do so? What are the criteria they use when examining the data, to be able to say with any degree of confidence that it is the smoking that is the actual cause of the cancer, and not some hitherto unguessed confounder? You mentioned “the preponderance of the evidence,” but the evidence is all observational, and the possibility of confounders is real.

So on what basis do people evaluate this “preponderance of evidence,” so as to be able to say with any degree of certainty that the data are not being confounded? Bradford-Hill’s criteria seem plausible to me, but they have been debunked, so what are we left with? I am not a statistician, so this is a very murky matter to me, and thus I am seeking enlightenment here.

Have we actually established that smoking causes CVD? On what basis? Here, too, the data are purely observational, so again, what about the possibility of confounders? What criteria can we use to say that these data rise to the level of permitting us to assert causation? If Bradford-Hill’s criteria have been debunked, they cannot be used to justify an assertion of causation in the case of CVD, any more than they can be used to justify an assertion of causation in the case of lung cancer.

That said, the fact that multiple correlations have been observed in connexion with the same factor does not allow us to assume that all those correlations are causative. It is entirely possible that smoking might correlate in a non-causative way with CVD, even while being causal in the case of lung cancer. Or vice versa, of course. It could also cause both, or neither. In any case, I don’t see how the question of whether or not A causes B has anything at all to do with the question of whether or not A also causes C.

I mention this because I was highly influenced at the beginning of my nutritional journey by a TED talk given by Peter Attia, which introduced me to the idea that obesity does not cause Type II diabetes, but rather the reason they correlate so strongly with each other is that they are both caused by something else. The implications are profound, because if obesity causes diabetes, then reducing obesity ought to help treat or reverse diabetes; whereas if they are both results of another cause, then tackling either condition is not likely to be of much value, if it means that the underlying cause remains unaddressed.

Also, when we observe a correlation, we make assumptions about the direction of causality that may or may not be warranted. Margarine consumption and the divorce rate in Maine are correlated, but does the margarine-eating cause the divorces, or do divorces cause people to eat margarine? Perhaps lung cancer causes people to smoke?


(Joey) #45

Sorry, I’m still not quite following some of the sequential logic here…

If Austin Bradford Hill’s framework used to draw linkages between smoking and lung cancer was shown to be flawed, why is it necessarily true that such conclusions about causality become untenable?

Sure, RCTs are better than epidemiological studies at ruling out confounding factors and coincidences. But at their best they’re still just human constructs used to gain insights into causes and effects. Not being able to perform such a study doesn’t alter the natural world of causes and effects.

Canaries dying in coal mines don’t cause humans to die. But there’s a strong enough association that, even absent causation, miners know enough to leave promptly when their canaries are dying.

Such associations in nature are valuable - even if the pathways of causation remain mysterious to us mortals.

Perhaps I’m still missing your point? As I understand it, Bradford Hill provided a framework that was subsequently shown to be flawed for drawing conclusions about natural causes and effects. That doesn’t mean causes don’t still cause effects.


#46

It is confounding (and the related concept of selection bias) that typically leads researchers to erroneous conclusions in epidemiology. And notice that the Hill list does not address this concern at all (except maybe indirectly in the experimental evidence requirement). But even RCTs can be plagued by confounding, either through selective loss to follow-up or through groups so small that they are not comparable by chance alone. The concept of confounding was not widely understood in the 1950s, and epidemiology was in its infancy.
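To make the “too small to be comparable by chance” point concrete, here is a quick simulation (mine, purely illustrative, with made-up numbers): randomise people into two arms and see how often a baseline covariate like age ends up badly imbalanced.

```python
import random

# Illustrative only: how often does simple randomisation leave two small
# arms badly imbalanced on a baseline covariate (here, a made-up "age")?
def imbalance_rate(n_per_arm, n_trials=10_000, threshold=5.0):
    """Fraction of simulated trials where mean age differs by more than `threshold` years."""
    bad = 0
    for _ in range(n_trials):
        ages = [random.gauss(50, 10) for _ in range(2 * n_per_arm)]
        random.shuffle(ages)
        arm_a, arm_b = ages[:n_per_arm], ages[n_per_arm:]
        if abs(sum(arm_a) / n_per_arm - sum(arm_b) / n_per_arm) > threshold:
            bad += 1
    return bad / n_trials

for n in (10, 50, 500):
    print(f"{n} per arm: imbalanced in {imbalance_rate(n):.1%} of trials")
# Tiny arms are frequently imbalanced by chance alone; large arms almost never are.
```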

All of the items in the Hill list are potentially useful in an argument; but they are in no way a list of requirements for making causal inference, for the reasons I’ve already mentioned. The list is not the driving factor behind why there is consensus that smoking causes lung cancer. How could it be, since experimental evidence is not checked off? In addition, having taken a look at the 1950 paper by Doll and Hill on smoking and lung cancer, I see that biologic plausibility was not checked off either. There seems to have been doubt that anyone had identified a specific carcinogen in tobacco smoke that would explain how it caused lung cancer. Still, most people were becoming convinced. Here is the paper:


#47

This is exactly what the “specificity” criterion (number 3) in the list means – that A should cause only B. The requirement sort of works when studying infectious disease, but not chronic disease.


(Bacon is a many-splendoured thing) #48

Because the conclusions were based on flawed assumptions. Garbage in, garbage out.

Yes, your last sentence is true. Smoking either does or does not cause lung cancer, regardless of what we believe about it. And normally, in every other case, a statistical correlation says nothing about causation, because of the risk of confounders. For example, shoe size correlates very strongly with reading comprehension scores. But not because wearing a larger shoe helps someone read better. (Simple test: Read a passage. Go put on larger shoes. Do you understand the passage better?) There are hidden confounders involved (such as the availability of primary education, the age at which children normally go to school, and the growth curve) that drive the correlation.
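Here is a quick toy simulation (mine, with invented numbers, purely to illustrate the shoe-size example): a single confounder, age, drives both variables, producing a strong raw correlation that largely vanishes once age is held roughly constant.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative only: age (the confounder) drives both shoe size and reading score.
n = 5_000
age = rng.uniform(5, 12, n)                    # children aged 5 to 12
shoe = 0.8 * age + rng.normal(0, 0.5, n)       # shoe size grows with age
reading = 10 * age + rng.normal(0, 5, n)       # reading score grows with age

print("raw correlation:", round(np.corrcoef(shoe, reading)[0, 1], 2))

# Hold the confounder (roughly) constant: look within a narrow age band.
band = (age > 8.0) & (age < 8.5)
print("within one age band:", round(np.corrcoef(shoe[band], reading[band])[0, 1], 2))
# The strong raw correlation all but disappears once age is held constant.
```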

Austin Bradford-Hill claimed, on the basis of his criteria, that we could, in this case, take smoking as a cause of lung cancer. Was he right? Probably; his argument was persuasive at the time, and the tobacco companies have not been able to come up with a counter-argument. But he based his argument on criteria that have been debunked, so we can no longer assume his conclusion is correct.

The fact of whether or not smoking causes lung cancer remains the same, whatever it is (personally, I found Bradford-Hill’s argument persuasive, but that’s just me). But we no longer have any basis to claim that the data show that smoking causes lung cancer, because the arguments Bradford-Hill used to justify his conclusion have been debunked. Unless someone else can come up with another way of justifying that conclusion, a basic statistical principle says that we are not justified in inferring causation from the observed correlation. We accept this in every other epidemiological study, so I’m confused that people don’t seem to be accepting that principle in this case.

If I’m in error here, please don’t hesitate to show me where. I’d rather be humiliated than wrong.


#49

Feeling a bit like a broken record… I took issue with the statement that Hill’s criteria are necessary for causal inference. They are not, and Hill didn’t even use several of them in his argument for a causal link between smoking and lung cancer. A subset of his list was sufficient for him to make his argument. What has been debunked is the claim that his criteria are necessary for causal inference.

Every element in Hill’s list (except specificity) can and should be used in a causal argument, but the list is in no way a set of requirements for making causal inference. It also leaves out one of the most powerful criticisms of epidemiologic studies, which is the presence of confounding.

I think I am done here. The implication that I (or anyone at this point) would not find Hill’s argument that smoking causes lung cancer persuasive is ridiculous.


(Bacon is a many-splendoured thing) #50

Ah. That’s not how I read that criterion. I understand it to mean that the data contain no other likely explanation than A, not that A causes B and only B.

The fact that varicella-zoster virus causes shingles does not eliminate it as the cause of chicken pox as well. So if we studied a group of people who were disease-free when we put them in the sterile room, and we exposed them only to varicella-zoster virus, and then they developed chicken pox, that would be pretty conclusive that it was varicella-zoster and not Yersinia pestis that caused the chicken pox. Especially if we then did the reverse experiment, exposing them only to Yersinia pestis, and none of them developed chicken pox. (The two experiments would probably also eliminate varicella-zoster as the cause of plague. :grin:)

In this case, we know that smoking causes a number of illnesses, and that there are other causes of lung cancer besides smoking, but we still accept that smoking is one of the causes of lung cancer, because there were a number of studies in which people came down with lung cancer for no apparent reason other than their smoking. A study done on coal miners or workers in an asbestos plant could not be used in the same way, because there would be no way of disentangling the likely causes. Oh, but this criterion has been debunked, so we can’t actually use it to conclude anything.


(Joey) #51

Yeah, we’re going around in circles here. I’m unclear on why @PaulL is insisting on refuting positions not being taken, but I feel it’s time to move on to more productive exchanges. Peace out.


(Bacon is a many-splendoured thing) #52

Ah. That is not what I understood you to be originally saying. You merely said that his criteria had been debunked, and as I stated earlier, I took that to mean that they had been invalidated in toto.

I’d have to look into the matter further, but I’d have expected the argument to be about the sufficiency of the criteria, not their necessity. Curious.


#53

The one word that would lead me to not put a lot of faith in their assumptions is epidemiological. Ancel Keys’ poor science, which led to a generation of rising insulin resistance, T2D, cancer, and kidney and heart failure, was built on his low-fat/high-carb hypothesis, which was itself based on epidemiological data.

I would take it with a grain of salt.


(Joey) #54

I certainly agree that epidemiological studies are potentially fraught with inherent weaknesses, only some of which can be controlled for in ways that RCTs can address (…not that all RCTs are perfect either).

But you may be blaming the tool for being mishandled by its user.

Ancel Keys didn’t send the western world off in the wrong direction because he used epidemiology … he misled us by selectively picking and choosing data from a larger set to fraudulently support his hypothesis.

He knowingly buried results contradictory to his pet theory, and bullied others in the field who challenged his position - knowing quite well there were serious flaws to be found in his own “research.”

In short: he pounded the table while committing fraud and cloaked it all in science. In the world of respectable research, this was a crime.

Ultimately the truth was revealed by more fully reviewing his own raw data. In other words, the same tools of epidemiology were applied to reveal his fraud. Ancel Keys was an inherently bad scientist who left significant damage in his wake.

With all due respect, I would say that targeting epidemiology for Keys’s fraud is a bit off the mark, kind of like blaming math for bank robbery.


(Bacon is a many-splendoured thing) #55

It’s still true, however, that epidemiological data are notoriously weak. It is a standard principle of statistics that an observed correlation between A and B cannot be used to infer that A causes B. At best, you can use associations to generate hypotheses. For example, if Keys had taken the correlation between sugar intake and coronary heart disease that he observed in all 22 countries of his Seven Countries Study and then done an experiment comparing subjects with an increased sugar intake against subjects with a reduced sugar intake, we might have been on to something.
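For what it’s worth, here is a toy sketch (mine, with entirely made-up numbers) of the sort of two-arm comparison I mean, analysed with a simple permutation test on a hypothetical outcome measure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Entirely made-up numbers, purely to illustrate the shape of a two-arm
# comparison: a hypothetical outcome measure in an increased-sugar arm
# versus a reduced-sugar arm, analysed with a simple permutation test.
increased_sugar = rng.normal(3.2, 1.0, 60)   # hypothetical outcome, arm A
reduced_sugar = rng.normal(2.6, 1.0, 60)     # hypothetical outcome, arm B

observed = increased_sugar.mean() - reduced_sugar.mean()
pooled = np.concatenate([increased_sugar, reduced_sugar])

extreme = 0
for _ in range(10_000):
    rng.shuffle(pooled)                       # re-randomise the arm labels
    diff = pooled[:60].mean() - pooled[60:].mean()
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"observed difference: {observed:.2f}")
print(f"permutation p-value: {extreme / 10_000:.3f}")
# It is the randomised assignment, not the correlation itself, that licenses
# a causal reading of any difference we see.
```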

Robert Lustig’s clinic did an interventional pilot study on 10 kids and, just by removing sucrose from their diet, reduced their fatty livers and brought their liver enzymes down within 10 days. That tells us that sugar has a strong metabolic effect. Who knows what we could learn from a serious intervention trial?


(Joey) #56

Yes, we’d have been on to an association. :wink: