The abstract says, “Participants consuming water maintained a weight loss of 6.1 kg over 52 weeks versus 7.5 kg with NNS beverages (difference [90% CI]: 1.4 kg [-2.6, -0.2]; p < 0.05),” and then, “However, this difference was not clinically significant.” That 1.4 kg difference is about 3.1 lb, and while it may not be clinically significant, I’d be happy to see myself three pounds lighter, he said while sipping his NNS-sweetened herbal tea.
There’s a statistician with a sense of humor who runs a blog I read fairly regularly. One of his favorite targets is “wee p values” and the way the conventional threshold for statistical significance gets treated as a one-size-fits-all number. The abstract reports p < 0.05, which is generally read as, “if there were really no effect, a result at least this extreme would turn up less than 5% of the time, so our result probably isn’t random noise and we can publish our study.” Here the claim is that because the lower weight was that unlikely to be chance, the “treatment” (in this case NNS vs. water) caused the difference.
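For what it’s worth, that “less than 5% of the time” reading can be sketched in a few lines of simulation. This is a toy, not the study’s analysis: the 1.4 kg difference comes from the abstract, but the group size and the spread of weight changes are numbers I made up for illustration.

```python
import random
import statistics

random.seed(42)

N = 100          # assumed participants per arm (not from the study)
SD = 5.0         # assumed standard deviation of weight change, kg (not from the study)
OBSERVED = 1.4   # between-group difference reported in the abstract, kg

def null_trial():
    """One trial in a world where the treatment truly does nothing:
    both arms are drawn from the same distribution, so any difference
    between their means is pure sampling noise."""
    a = [random.gauss(0, SD) for _ in range(N)]
    b = [random.gauss(0, SD) for _ in range(N)]
    return abs(statistics.mean(a) - statistics.mean(b))

trials = 10_000
extreme = sum(null_trial() >= OBSERVED for _ in range(trials))
p_sim = extreme / trials
print(f"Fraction of null trials showing a difference >= {OBSERVED} kg: {p_sim:.3f}")
```

The fraction printed is, roughly, a simulated p-value: how often pure chance alone produces a gap at least as big as the one observed. Whether that fraction lands above or below 0.05 depends entirely on the sample size and variability you plug in, which is part of why a bare “p < 0.05” tells you less than it seems to.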
This is the guy:
His point is that this p-value is just a blank filled in on a spreadsheet, and that drawing a real conclusion might actually require some more math. Shockingly, the scientists who run this sort of study might be ignorant of such things.
Edit to fix a shocking jumble of words I edited ninety-seven times.