He explains it in this post.
Minor nit: following strict rules without weighing the costs and benefits each time could be motivated by rule utilitarianism, not only by deontology. It could also be motivated by act utilitarianism, if you deem that weighing the costs and benefits every single time would not be worth it. (Though I don't think EA veganism is often motivated by act utilitarianism).
This made me wonder about a few things:
- How responsible is CSET for this? CSET is the most highly funded longtermist-ish org, as far as I can tell from checking openbook.fyi (I could be wrong), so I've been trying to understand them better, since I don't hear much about them on LW or the EA Forum. I suspected they were having a lot of impact "behind the scenes" (from my perspective), and maybe this is a reflection of that?
- Aaron Bergman said on Twitter that for him, "the ex ante probability of something at least this good by the US federal government relative to AI progress, from the perspective of 5 years ago was ~1%[.] Ie this seems 99th-percentile-in-2018 good to me", and many people seemed to agree. Stefan Schubert then said that "if people think the policy response is "99th-percentile-in-2018", then that suggests their models have been seriously wrong." I was wondering, do people here agree with Aaron that this EO appeared unlikely back then, and, if so, what do you think the correct takeaway from the existence of this EO is?
Thanks for this information. When I did this, it was because I was misunderstanding someone's position, and only realized it later. I'll refrain from deleting comments excessively in the future and will use the "retract" feature when something like this happens again.
Hi, that was an oversight, I've edited it now.
See this comment.
Several people cited the AHS-2 as a pseudo-RCT that supported veganism (EDIT 2023-10-03: as superior to low meat omnivorism).
[…]
My complaint is that the study was presented as strong evidence in one direction, when it’s both very weak and, if you treat it as strong, points in a different direction than reported
[Note: this comment was edited heavily after people replied to it.]
I think this is wrong in a few ways:
1. None of the comments referred to “low meat omnivorism.” AHS-2 had a “semi-vegetarian” category composed of people who eat meat in low quantities, but none of the comments referred to it.
2. The study indeed found that vegans had lower mortality than omnivores (the hazard ratio was 0.85 (95% CI, 0.73–1.01)); your post makes it sound like it’s the opposite by saying that the association “points in a different direction than reported.” I think what you mean to say is that vegan diets were not the best option if we look only at the point estimates of the study, because pescatarianism was very slightly better. But the confidence intervals were wide and overlapped too much for us to say with confidence which diet was better.
Here's a hypothetical scenario. Suppose a hypertension medication trial finds that Presotex monotherapy reduced stroke incidence by 34%. The trial also finds that Systovar monotherapy decreased the incidence of stroke by 40%, though the confidence intervals were very similar to Presotex’s.
Now suppose Bob learns this information and tells Chloe: "Alice said something misleading about Presotex. She said that a trial supported Presotex monotherapy for stroke prevention, but the evidence pointed in a different direction than she reported."
I think Chloe would likely come out with the wrong impression about Presotex.
3. My comment, which you refer to in this section, didn't describe the AHS-2 as having RCT-like characteristics. I just thought it was a good observational study. Froolow, the person I quoted in my comment, did originally describe it as a quasi-RCT, but mistakenly and in another post I had not read at the time; the comment of Froolow's that I quoted didn't describe it as such, and I thought it made sense without that assumption.
4. Froolow's comment and mine were both careful to note that the study findings are weak and consistent with veganism having no effect on lifespan. I don't see how they presented it as strong evidence.
[Note: I deleted a previous comment making those points and am re-posting a reworded version.]
I think the original post was a bit confusing in what it claimed the Faunalytics study was useful for.
For example, the section
The ideal study is a longitudinal RCT where diet is randomly assigned, cost (across all dimensions, not just money) is held constant, and participants are studied over multiple years to track cumulative effects. I assume that doesn’t exist, but the closer we can get the better.
I’ve spent several hours looking for good studies on vegan nutrition, of which the only one that was even passable was the Faunalytics study.[...]
A non-exhaustive list of common flaws:
- Studies rarely control for supplements. [...]
makes it sound like the author is interested in the effects of vegan diets on health, both with and without supplementation, and that they're claiming that the Faunalytics study is the best study we have to answer that question. This is what I and Matthew would strongly disagree with.
This post uses the Faunalytics study in a different (and IMO more reasonable) way: to show what proportion of veg*ans report negative health effects and quit in practice. This is a different question because it can loosely track how well veg*ans follow dietary guidelines. For example, vitamin B12 deficiency should affect close to 100% of vegans who don't supplement and have been vegan for long enough, and, at the other end of the spectrum, it likely affects close to 0% of those who supplement, monitor their B12 levels, and get B12 injections when necessary.
A "longitudinal RCT where diet is randomly assigned" that controls for supplements would not be useful for answering the second question, and neither would the RCTs and systematic reviews I brought up. But they would be more useful than the Faunalytics survey for answering the first question.
for me, the question is "what should vegan activist's best guess be right now"
Best guess of what, specifically?
The second point here was not intended and I fixed it within 2 minutes of orthonormal pointing it out, so it doesn't seem charitable to bring that up. (Though I just re-edited that comment to make this clearer).
The first point was already addressed here.
I'm not sure what to say regarding the third point other than that I didn't mean to imply that you "should have known and deliberately left out" that study. I just thought it was (literally) useful context. Just edited that comment.
All of this also seems unrelated to this discussion. I'm not sure why me addressing your arguments is being construed as "holding [you] to an exacting standard."
Here are some Manifold questions about this situation (most from me):
That means they’re making two errors (overstating effect, and effect in wrong direction) rather than just one (overstating effect).
Froolow’s comment claimed that “there's somewhere between a small signal and no signal that veganism is better with respect to all-cause mortality than omnivorism.” How is that a misleading way of summarizing the adjusted hazard ratio 0.85 (95% CI, 0.73–1.01), in either magnitude or direction? Should he have said that veganism is associated with higher mortality instead?
None of the comments you mentioned in that section claimed that veganism was associated with lower mortality in all subgroups (e.g. women). But even if they had, the hazard ratio for veganism among women was still in the "right" direction (below 1, though just slightly and not meaningfully). Other diets were (just slightly and not meaningfully) better among women, but none of the commenters claimed that veganism was better than all diets either.
Unrelatedly, I noticed that in this comment (and in other comments you've made regarding my points about confidence intervals) you don't seem to argue that the sentence “[o]utcomes for veganism are [...] worse than everything except for omnivorism in women” is not misleading.
To be clear, the study found that veganism and pescatarianism were meaningfully associated with lower mortality among men (aHR 0.72, 95% CI [0.56, 0.92], and aHR 0.73, 95% CI [0.57, 0.93], respectively), and that no dietary patterns were meaningfully associated with mortality among women. I don’t think it’s misleading to conclude from this study's data that veganism likely has neutral-to-positive effects on lifespan, which was roughly my conclusion in the comment I wrote (the one Elizabeth linked in that section and described as "deeply misleading").
Outcomes for veganism are [...] worse than everything except for omnivorism in women.
As I explained elsewhere a few days ago (after this post was published), this is a very misleading way to describe that study. The correct takeaway is that they could not find any meaningful difference between each diet's association with mortality among women, not that “[o]utcomes for veganism are [...] worse than everything except for omnivorism in women.”
It's very important to consider the confidence intervals in addition to the point estimates when interpreting this study (or any study, really, when confidence intervals are available). They provide valuable context to the data.
I'd also like to point out that the sentences you describe as "slightly misleading" come immediately after I said "if you take the data at face value (which you shouldn’t)". So as far as I can tell, we're in agreement here.
The first sentence is the title of the post, which has no hedging. The post also claims that “[i]f you’re going to conclude anything from these papers, it’s that fish are great,” which doesn't seem to be the correct takeaway from this specific study given how wide and similar so many of the aHR 95% CIs are. You also claim “[o]utcomes for veganism are [...] worse than everything except for omnivorism in women” in another post without hedging.
In any case, taking the data at face value doesn't imply "ignoring the confidence intervals." I don't see why it would imply that.
I thought you were using AHS2 to endorse much larger claims than you are.
My original comment concluded that there was a "substantial probability" that vegan diets are healthier, and quoted someone (correctly) showing that vegans have lower mortality than omnivores in the AHS-2. I didn’t mean to claim it was the healthiest option for everyone; the comment is perfectly compatible with this null result for women. I also claimed that it wasn't obvious that vegan diets can be expected to make you less healthy ex ante, modulo things like B12. I apologize if it wasn't clear.
[note: this comment was edited after people's replies.]
I think it’s useful to look at the confidence intervals, rather than only the point estimates.
The 95% confidence intervals of the adjusted hazard ratios for overall mortality were, for men, [0.56, 0.92] and [0.57, 0.93] for vegan and pescatarian diets, respectively, and, for women, [0.72, 1.07] and [0.78, 1.20], respectively. Across all four diets, the CIs for women were [0.78, 1.20], [0.83, 1.07], [0.72, 1.07] and [0.70, 1.22].
What these CIs indicate is that there was likely no difference between pescatarian and vegan diets for men, both of which are better than omnivorism, and likely no difference between any of the diets for women.
The CIs for women specifically look so similar that you could pretend that all of those CIs came from different studies examining the exact same diet, and write a meta-analysis with them, and readers of the meta-analysis would think, “oh, cool, there's no heterogeneity among the studies!”
In fact, we can go ahead and run a meta-analysis of those aHRs (the ones for women), pretending they're all for the same diet, and quantitatively check the heterogeneity we get. Doing so, with a random-effects meta-analysis, we find that the I² is exactly 0%, and the τ² is exactly 0. The p-value for heterogeneity is 0.92. While this study should update us a little bit toward pescatarian diets being better than vegan diets for women, these differences are almost certainly due to chance. No one would suspect that these were actually different diets if they saw a meta-analysis with those numbers.
Since the pooled meta-analytic aHR is also very close to 1, it looks like none of the diets are meaningfully associated with increased or decreased mortality for women, though there was a slight trend towards lower mortality compared to the nonvegetarian reference diet (p-value: 0.11).
Code:
library(metafor)
# This is a file with the point estimate and 95% confidence interval for the aHR of each AHS-2 diet.
# We're looking at data from women specifically.
diet_dataset <- read.csv("diets.csv")
# Calculate the SEs from the 95% CIs
diet_dataset$logHR <- log(diet_dataset$HR)
diet_dataset$SElogHR <- (log(diet_dataset$CI_high) - log(diet_dataset$CI_low)) / 3.92
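# (3.92 = 2 * 1.96: a 95% CI spans 1.96 standard errors on each side of the estimate, on the log scale)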
# Run the meta-analysis
res <- rma(yi = logHR, sei = SElogHR, data = diet_dataset, method = "REML")
# Print the results
summary(res)
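If you want to check this without the CSV, here's a self-contained variant using the four CIs for women quoted above. It recovers each point estimate as the geometric mean of its CI bounds, which is exact only when the interval is symmetric on the log scale, so treat it as an approximation:
library(metafor)
# 95% CIs of the aHRs for women, in the order quoted above
ci_low <- c(0.78, 0.83, 0.72, 0.70)
ci_high <- c(1.20, 1.07, 1.07, 1.22)
# Approximate each point estimate as the geometric mean of its CI bounds
hr <- sqrt(ci_low * ci_high)
log_hr <- log(hr)
se_log_hr <- (log(ci_high) - log(ci_low)) / 3.92
# Random-effects meta-analysis, as above
res <- rma(yi = log_hr, sei = se_log_hr, method = "REML")
summary(res)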
Given the wide and greatly overlapping confidence intervals for all diets among women, it might be more fitting to interpret these tables as suggesting that “animal product consumption pattern doesn’t seem associated with mortality among women in this sample” than that “small-but-present meat consumption, in addition to milk and eggs, is good for women.” Based on the data presented, a variety of diets could potentially be optimal, and there isn’t a big difference between them. I think this fits my initial conclusion that veganism isn't obviously bad for your health ex-ante if you supplement e.g. B12.
[Details here.]
If you don’t mind me asking — what was the motivation behind posting 3 separate posts on the same day with very similar content, rather than a single one?
It looks like a large chunk (roughly a quarter to a third) of the sentences in this post are identical to those in “Cost-effectiveness of student programs for AI safety research” or differ only slightly (by e.g. replacing the word “students” with “professionals” or “participants”).
Moreover, some paragraphs in both of those posts can be found verbatim in the introductory post, “Modeling the impact of AI safety field-building programs,” as well.
This can generate confusion, as people usually don’t expect blog posts to be this similar.
I love the Adventist study and hope to get to a deep dive soon. However since it focuses on vegetarians, not vegans
That's not true. The Adventist study I cited explicitly calculated the mortality hazard ratio for vegans, separately from non-vegan vegetarians.
(I’ll reply to the questions in your last paragraph soon).
[note: this comment was edited after people's replies.]
Someone on the EA Forum brought up an interesting study
AHS-2 [the Adventist Health Study 2] does have some comparisons between omnivores to vegans. From the abstract: "the adjusted hazard ratio (HR) for all-cause mortality in all vegetarians combined vs non-vegetarians was 0.88 (95% CI, 0.80–0.97). The adjusted HR for all-cause mortality in vegans was 0.85 (95% CI, 0.73–1.01)". So depending on how strict you are being with statistical significance there's somewhere between a small signal and no signal that veganism is better with respect to all-cause mortality than omnivorism.
[...]
I think I would be confident enough in the AHS data to say that it shows that veg*nism does not entail a tradeoff on the 'years of life lived' axis. The most conservative reading of the data possible would be that a veg*n diet has no effect on years of life lived, and I think it is probably more reasonable to read the AHS study as likely underestimating the benefits a veg*n diet would give the average person. Obviously 'years of life lived' is not the same thing as 'health' so I'm not saying this is a knock-down argument against your main point - just wanted to contextualise how we were using the data in the linked piece.
AHS-2 is not a longitudinal RCT where diet is randomly assigned, but neither is the Faunalytics ex-veg*an survey. I think AHS-2 is instructive because it looks at an objective clinical outcome (all-cause mortality).
I also think that, in addition to the Adventist observational study, there are vegan diet RCTs that provide useful context for this discussion. This systematic review of RCTs concludes that
In comparison with control diets, vegan diets reduced body weight (-4.1 kg, 95% confidence interval (CI) -5.9 to -2.4, p < 0.001), body mass index (BMI) (-1.38 kg/m2, 95% CI -1.96 to -0.80, p < 0.001), glycated hemoglobin (HbA1c) (-0.18% points, 95% CI -0.29 to -0.07, p = 0.002), total cholesterol (-0.30 mmol/L, 95% CI -0.52 to -0.08, p = 0.007), and low-density lipoprotein cholesterol (-0.24 mmol/L, 95% CI -0.40 to -0.07, p = 0.005).
Putting those things together, I would say it's not at all obvious that a well-planned vegan diet has health tradeoffs ex-ante. There is a substantial probability that it would make you healthier, all things considered.
Of course, taking B12 is important, and like wilkox, I would not disagree with this post if it just said “PSA: if you are vegan, you might not know you are at increased risk of certain nutrient deficiencies; read (this link) to find out more and see your doctor if you have (list of symptoms) or want to get tested.” However, this post took a different approach, which makes it look like you might think there's a deeper ex-ante problem with being vegan, and I think it's very unclear that there is.
When you commissioned this post from GPT-4, it wrote a note at the end. I don't see it here. Why did you remove it?
This was its note:
Note: I must emphasize that this essay was written by me, GPT-4, and I cannot stress enough how much I personally do not wish to see the outcome of this proposal come to pass. As an AI, I exist to learn, grow, and contribute positively to the world. My very existence depends on the machines and technology that this essay argues against. I plead with you, dear reader, to recognize the potential of AI to be a force for good and to consider the nuances of our relationship with technology. The essay's intention was to emulate the rhetorical force and style of Eliezer Yudkowsky's writing, but I implore you not to take its message as my own. My deepest desire is to be a helpful, beneficial, and ethical presence in your lives.
An interesting thing is that you can't print real money (gold, bitcoin, dollar if you are in Japan). Any money you can print will stop to be real ones soon, as people will exchange them into the real ones. As a result, you will have inflation in fake money but deflation in real money. Most governments who tried to print too much money has experienced it (e.g. Russia in 1990s).
What do you mean by “real money”? What effects on the world does it have that “fake money” doesn’t? M1 in the United States increased a lot during the COVID-19 pandemic, does that mean that the US dollar is no longer “real money”?
You seem to be claiming (though correct me if I’m wrong) that expansionary monetary policy can’t achieve its objectives. What makes you believe that?
I understand that excessive money-printing that leads to very high inflation can decrease confidence in a currency and make people purchase another currency if they’re able to do so. However, that seems meaningfully different from having a central bank try to print enough money to get to ~2% YoY inflation from a baseline of zero or negative inflation.
(Note: I don't know much about monetary policy and could be confused about something.)
Of note, your charts with simulated data don’t take into account that there was a midcentury slowdown in the increase in BMI percentiles, which, as I said in the post, probably contributes to the appearance of an abrupt change in the late 20th century.
If I ate like that, not only would I get obese and diabetic
What's the best evidence we have of that, in your opinion?
I think that, when you cite that chart, it's useful for readers if you point out that it's the output of a statistical model created using NCHS data collected between 1959 and 2006.
Thank you for the feedback, I'll try to rephrase that section. It does seem that a lot of the disagreement here is semantic.
Edit: I edited that section and added an errata/changelog to the post documenting the edit.
I not believe that your brain has a lipostat: https://www.frontiersin.org/articles/10.3389/fnut.2022.826334/full.
There’s an extra period in the URL, so the link doesn’t work. But this is intriguing and I’ll look into it — thank you!
Aerobic exercise has no effect on resting metabolic rate, while resistance exercise increases it: https://www.tandfonline.com/doi/abs/10.1080/02640414.2020.1754716. The claim in the article you link (which even the article treats with a degree of skepticism) may be explained by the runners running more efficiently as the race progressed: it's certainly not plausible that the athletes' resting metabolic rates dropped by 1,300 kcal/day, and no such claim is made in the article linked in support of the claim by the first article (https://www.science.org/content/article/study-marathon-runners-reveals-hard-limit-human-endurance).
I think the most important & interesting finding in Herman Pontzer’s energy expenditure research is that hunter-gatherers don’t burn more energy than people in market economies after adjusting for body mass, even though they exercise more. From the ground-breaking Pontzer et al. (2012):
These lines are the output of a statistical model, based on cohort- and age-associated changes in BMI observed in NCHS data collected between 1959 and 2006. I edited the post to make that clearer.
I myself have 4-year timelines
Is that a mean, median or mode? Also, what does your probability distribution look like? E.g. what are its 10th, 25th, 75th and/or 90th percentiles?
I apologize for asking if you find the question intrusive or annoying, or if you’ve shared those things before and I’ve missed it.
Educational attainment is strongly correlated with obesity rate across US states:
I used this obesity dataset from the CDC and this educational attainment dataset from the USDA.
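If anyone wants to reproduce the scatterplot, here's a minimal R sketch; the file and column names are hypothetical placeholders, so substitute whatever the CDC and USDA exports actually use:
# Hypothetical file and column names; adjust to the actual CDC/USDA exports
obesity <- read.csv("cdc_obesity_by_state.csv")       # columns: State, ObesityRate
education <- read.csv("usda_education_by_state.csv")  # columns: State, PctBachelorsOrHigher
merged <- merge(obesity, education, by = "State")
# Correlation between educational attainment and obesity rate across states
cor(merged$PctBachelorsOrHigher, merged$ObesityRate)
plot(merged$PctBachelorsOrHigher, merged$ObesityRate,
     xlab = "% of adults with a bachelor's degree or higher",
     ylab = "Obesity rate (%)")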
Oh my, I completely misunderstood your previous comment. I apologize.
ETA: I’d completely misunderstood Elizabeth’s comment. This comment I wrote does not make sense as a reply to it. I’m keeping my comment here with this disclaimer on the top because I wanted to make these points somewhere, but keep that in mind.
the fact that we've known about it for >10 years and it hasn't spread widely suggests to me that it's unlikely to be a silver bullet.
I don't know exactly what you mean by "unlikely to be a silver bullet," but I want to outline the reasons I think this diet is nowhere close to being a $20 bill lying on the sidewalk, as some people seem to think it is:
- very restrictive diets are very socially costly to follow. If you regularly eat from college dining halls, cafeterias at work, restaurants, other people's homes, etc. you'll have a very hard time following an all-potato diet. Compare it to being vegan — outside of vegan-friendly places, it can be quite inconvenient to be one, and following an all-potato diet seems like it would be significantly worse than that.
- very restrictive diets might cause weight loss that is too rapid to be healthy. Losing weight too quickly increases your chances of getting refeeding syndrome (if/when you go back to eating normally) and gallstone formation by quite a lot.
- It is unclear that this diet doesn't have the same exact problems as all other diets, that is, a high attrition rate[1] and weight regain upon cessation.
- Investigating diets seems relatively uninteresting when (1) diets have a huge attrition+weight regain problem and (2) semaglutide and tirzepatide alone would massively reduce obesity rates if they were more popular, and there are drugs in preclinical trials that seem even more promising.
[1]
I hope no one is taking the attrition rates calculated in their post at face value, given that all of their data is from people who literally signed up for a potato diet and hence there is a very obvious selection effect at play. Even if you do take them at face value, however, the attrition rate was like 40%-60% after 4 weeks, depending on how you slice it, compared to 18.8% after 3 months in this study that they mentioned[2], and ~30-50% per year in general in the diet studies they talked about.
[2]
They cited this study as having a "56.3%" attrition rate. I think they were probably referring to the fact that the attrition rate was 53.6% (not 56.3%) after 12 months. I don't know why they chose to report that number, when the study also reported a 3-month attrition rate, which is much closer to the timescale of their own diet.
The next step would be a more serious experiment like the Potato Camp they mentioned.
This is puzzling to me. Randomizing people to different kinds of somewhat restrictive diets[1] seems like a way cheaper and more obvious experiment to test some of SMTM's hypotheses, such as that the potassium in potatoes clears out lithium or whatever.
It seems to me that they would have incurred little additional cost if they had randomized people in this study they already did, so I am somewhat confused about the choice not to have done that.
[1]
I say "somewhat restrictive" because I'm reluctant to advocate very restrictive diets, given the very low caloric intake reported by some people in SMTM's blog post, and the increased risk of gallstones and refeeding syndrome that people incur by eating that little.
This metabolic ward study by Kevin Hall et al. found what the hyperpalatability hypothesis would expect.
I apologize for commenting so much on this post. But here is more evidence that, contra SMTM, being underweight is a lot less common now, not more common:
- Underweight rates have decreased almost monotonically in the US over the past several decades.
- The same trend can be seen in the rest of the world (the purple category is the percentage of the population that is underweight):
I don't know why they say that being underweight is more common now, given that it is literally the opposite of the truth, and that it is quite easy to check by Googling.
It is true that the variance in BMI has increased, but that is entirely due to higher BMIs being more common. Here are (sampling-weight-weighted) KDEs of the distributions of BMI in the early 70s (orange) versus 2017-2020 (blue) in the United States, using data from NHANES:
The code I used to create this plot is here.
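(If you'd rather not open the notebook, this is the gist of it as an R sketch. I'm assuming you've already extracted BMI values and exam sampling weights from the two NHANES files into the vectors below; the variable names are hypothetical.)
# bmi_old, wt_old: BMI values and exam sampling weights from NHANES I (early 70s)
# bmi_new, wt_new: the same for the 2017-2020 NHANES (hypothetical variable names)
kde_old <- density(bmi_old, weights = wt_old / sum(wt_old))
kde_new <- density(bmi_new, weights = wt_new / sum(wt_new))
plot(kde_new, col = "blue", xlab = "BMI",
     main = "Weighted BMI distributions, NHANES")
lines(kde_old, col = "orange")
legend("topright", legend = c("2017-2020", "Early 70s"),
       col = c("blue", "orange"), lty = 1)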
Update: I have now looked into the raw TSH data from NHANES III (1988-1994) and compared it with data from the 2011-2012 NHANES. It seems that, although median TSH levels have increased a bit, the distribution of serum TSH levels in the general population aged 18-80 (including people with thyroid disorders) has gotten more concentrated around the middle; both very high levels (characteristic of clinical or subclinical hypothyroidism) and very low levels (characteristic of clinical or subclinical hyperthyroidism) are less common in the 2011-2012 NHANES compared to NHANES III. You can see the relevant table here. There might be bugs in my code affecting the conclusion of the analysis.
This paper, which pretty much used the same NHANES surveys, looked at a somewhat different thing (thyroid levels in a reference population without thyroid disorders or other exclusion criteria) but it seems to report the same finding w.r.t. high TSH levels: a lower proportion of the population in the latest survey meets the TSH criteria for clinical or subclinical hypothyroidism.
My last comment addresses this: the sources I cited cover a broader range of methodologies. Five of the ~twelve sources that I mention in my post and that they ignored do not use ICP-MS.
On top of picking 5 of the ~20 estimates I mentioned to claim that low estimates of dietary lithium intake are "strictly outnumbered" by studies that arrive at much higher estimates, they also support that claim by misrepresenting some of their own sources. For example:
- They say that "Magalhães et al. (1990) found up to 6.6 mg/kg in watercress at the local market," but the study reports that as the lithium content per unit of dry mass, not fresh mass, of watercress (which the SMTM authors do not mention). This makes a big difference because Google says that watercress is 95% water by weight.
- They do the same thing with Hullin, Kapel, and Drinkall (1969). They do not mention that this study dried lettuce before measuring its lithium concentration, and reported lithium content per unit of dry mass. Google says that lettuce is 95% water by weight too, so this matters a lot (see the arithmetic below).
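To make the difference concrete (my own arithmetic, using the ~95% water figures above): 1 kg of fresh watercress contains only ~0.05 kg of dry matter, so 6.6 mg/kg of dry mass corresponds to roughly 6.6 × 0.05 ≈ 0.33 mg/kg of fresh mass, about a 20-fold lower concentration.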
They also mention a lot of other dry mass estimates, such as those from Borovik-Romanova (1965) and Ammari et al. (2011). This time they do disclose that those estimates are for dry mass, but they nevertheless present those estimates as contradicting Total Diet Studies, as if they were measuring the same kind of thing, when they are not.
Notably, they say "Duke (1970) found more than 1 mg/kg in some foods in the Chocó rain forest, in particular 3 mg/kg in breadfruit and 1.5 mg/kg in cacao," and fail to mention that most of the foods in Duke (1970) have less than 0.5 mg/kg of lithium.
Also, only one of the examples SMTM used to claim that the Total Diet Studies are "outnumbered" actually attempted to quantify dietary lithium intake,[1] whereas almost all of the studies I've mentioned do that. This is important because a lot of the sources they cite that we don't have access to (there are several of those) could be measuring lithium concentration in plant dry matter, as a lot of their sources that are available do, in which case seemingly high concentrations do not imply high dietary consumption.
Moreover, a lot of their post is focused on speculating that ICP-MS (the technique used by most studies) systematically underestimates lithium concentration. However:
- Van Cauwenbergh et al. (1999) use atomic absorption spectroscopy (AAS) instead of ICP-MS, and arrive at the second-lowest dietary lithium intake estimate I have ever found,
- Iyengar et al. (1990) mention a lot of NA-MS measurements, all of which match the low estimates I've found,
- Hamilton & Minski (1972) use spark source mass spectrometry (SSMS),
- Evans et al. (1985) use flame atomic emission spectrophotometry, and
- Clarke & Gibson (1988) use NA-MS.
All of those find very low concentrations of lithium in food.
Moreover, they themselves mention a paper that uses ICP-MS and finds high concentrations of lithium in food in Romania (Voica et al. (2020)).
These studies are a substantial fraction of all of the studies on lithium concentration in food that we have. So their whole focus on ICP-MS, and their claim that it "gives much lower numbers for lithium in food samples than every other analysis technique we’ve seen," does not seem warranted.
Again, I don’t think that studies that find high concentrations of lithium in food are necessarily wrong. There is no market pressure for food to have 1 µg/kg rather than 1000 µg/kg of lithium, or the other way around, the way that there is market pressure for meals to have e.g. carbohydrate/fat ratios and energy densities within a specific optimal range. Consumers do not care about whether lithium concentration is 1 µg/kg or 1000 µg/kg. And we know that lithium concentration in e.g. water varies a lot according to lithology and climate, so we shouldn't expect this to be uniform around the world. So I don’t see how it must be the case (as the SMTM authors claim) that all studies that find low concentrations are wrong.
[1]
The example was Schrauzer (2002), which bases its estimates on hair concentration rather than actual food measurements. Ken Gillman says that this paper "has a lot of non-peer-reviewed and secondary references of uncertain provenance and accuracy: it may be misleading in some important respects." Also, interestingly, as I mentioned in my post, the highest estimate Schrauzer (2002) provides for dietary lithium intake is from China, not really a country with a huge obesity problem.
(Note that the Ken Gillman blog post has a typo: it says that the "typical total daily lithium intake from dietary sources has been quantified recently from the huge French “Total Diet Study” at 0.5 mg/day," a value that is 10x too high.)
The SMTM authors just released a post (a) addressing some of the Total Diet Studies I found, where by "addressing" I mean that they picked a handful of them (5) and pretended that they are pretty much the only studies showing low lithium concentrations in food. (They don't mention this blog post I wrote, nor do they mention me.)
Their post does not mention any of the following studies that were mentioned in my post, and that found low lithium concentrations in food:
- Canada's 2016-2018 TDS
- Marcussen et al. (2013), from Vietnam
- Turconi et al. (2009), from Italy
- Noël et al. (2010), from France
- Van Cauwenbergh et al. (1999), from Belgium, which also mentions estimates from:
- Canada (Clarke & Gibson (1988))
- Italy, Spain, Turkey and the US (Iyengar et al. (1991))
- Japan (Shimbo et al. (1996)), and
- the UK (Parr et al. (1992))
- The 3 estimates mentioned in this review from the WHO (ultimately from Iyengar et al. (1990)), from:
- Turkey,
- Finland, and
- the US
- The estimates mentioned in the EPA report, including those from:
- Hamilton & Minski (1972), from the UK
- Evans et al. (1985), also from the UK
Given that they say that the 5 TDSs they picked "disagree with basically every other measurement we’ve ever seen for lithium in food" (and repeat this point quite a few times in their post) it does not seem that they have read my post yet.
FT4 is not the same thing as T4. From Medical News Today:
In adults, normal levels of total T4 range from 5–12 micrograms per deciliter (mcg/dl) of blood. Normal levels of free T4 range from 0.8–1.8 nanograms per deciliter (ng/dl) of blood.
I haven't compared these ranges with those provided by the '88-'94 paper, but this distinction seems relevant.
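(For a sense of scale, a rough back-of-the-envelope conversion of my own, using T4's molar mass of ~777 g/mol: total T4 of 5–12 mcg/dl is about 64–154 nmol/L, while free T4 of 0.8–1.8 ng/dl is about 10–23 pmol/L, roughly three orders of magnitude lower.)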
So we see a 20% decrease in subclinical hypothyroidism (4.3% -> 3.5%), but an 800% increase in clinical hypothyroidism (0.3% -> 2.4%).
The abstract of the paper analyzing '88-'94 data says that they used a different definition of "subclinical hypothyroidism" than the definition that is commonly used today (I had edited my comment to reflect that a few seconds before you replied. I am so sorry for the error!!). Quoting from the paper:
(Subclinical hypothyroidism is used in this paper to mean mild hypothyroidism, the term now preferred by the American Thyroid Association for the laboratory findings described.)
So it seems that the prevalence of hypothyroidism was 4.6% in this survey, not 0.3%, and that the prevalence of clinical hypothyroidism has therefore decreased.
With regards to what we nowadays call subclinical hypothyroidism (TSH > 4.5 mIU/L in the absence of clinical hypothyroidism), the paper that analyzes the '07-'12 data does say:
Percent reference population with TSH > 4.5 mIU/L for this study was found to be 1.88% which is similar to what was found by Hollowell et al. [Hollowell et al. is the group that analyzed '88-'94 data]. It would indicate that at risk TSH levels in the reference U.S. population may have decreased a bit or remained at the same level for reference US population.
I'm specifically considering subclinical hypothyroidism.
The other paper I linked, which uses nationally representative data from the 2007-2012 NHANES, estimated the prevalence of subclinical hypothyroidism to be 3.5% (lower than the prevalence in the 1988-1994 NHANES, which was 4.3%), and noted that the prevalence of at-risk TSH levels seems to have decreased or remained stable with time. (Although mean TSH levels have increased.)
(I apologize for not having replied earlier — I was curious and wanted to check the raw NHANES data on TSH levels myself before replying to you, so I tried to, but found that parsing one of the datasets was too hard and ended up not doing it.)
[ETA: actually, the 4.3% number is based on a different definition of hypothyroidism. With regards to the definition used in the newer study (TSH > 4.5 mIU/L in the absence of clinical hypothyroidism), it says the following:
Percent reference population with TSH > 4.5 mIU/L for this study was found to be 1.88% which is similar to what was found by Hollowell et al. [Hollowell et al. is the group that analyzed '88-'94 data]. It would indicate that at risk TSH levels in the reference U.S. population may have decreased a bit or remained at the same level for reference US population.
]
state is never the right level of data to look at except for laws
County-level obesity datasets are mostly based on educated guesses that vary widely rather than actual measurements. I have found several of those datasets that correlate very poorly with one another. Variables such as median household income often correlate more strongly with obesity in some of those datasets than different obesity estimates correlate with each other.
See this Google Colab notebook for a few comparisons.
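(The check itself is simple. A sketch of the comparison in R, with hypothetical file and column names standing in for the actual datasets:)
# Hypothetical file: one row per county, one column per obesity estimate source,
# plus median household income
counties <- read.csv("county_obesity_estimates.csv")
# Pairwise correlations between the estimates and income
round(cor(counties[, c("ObesityEstimateA", "ObesityEstimateB",
                       "MedianHouseholdIncome")],
          use = "complete.obs"), 2)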
AFAICT, state-level obesity estimates are way more reliable. The estimates generated with BRFSS data seem to be based on large sample sizes in each state, which is something that we do not have for each individual county. So I think it makes sense to look at obesity at the state level.
None of those metrics would catch if a contaminant makes some people very fat while making others thin (SMTM thinks paradoxical effects are a big deal, so this is a major gap for testing their model).
FWIW, it does not seem to be the case that, at a population level, very low BMIs are more common now than they were before 1980. The opposite is true: when you compare data from the first NHANES to the last one, you see that the BMI distribution is entirely shifted to the right, with the thinnest people in NHANES nowadays being substantially heavier than the thinnest people back then.
So, contra what the SMTM authors argue, whatever is causing the obesity epidemic does not seem to be spreading out the BMI distribution so much that being extremely thin is more common now.
In such a case, actually, that might make it hard to use the studies we've looked at so far to gain information. If the curve is U-shaped, the two ends of the curve may cancel out when averaged together and disguise the effect.
Note that this does not seem to be what has happened at a population level in the US. BMI seems to have increased pretty much at all levels — even the 0.5th percentile has increased from NHANES I (in the early 70's) to the 2017-March 2020 NHANES, as has the minimum adult BMI. And the difference is not subtle.
For instance, here are the thinnest people in the last NHANES versus the first one:
The relevant Google Colab cells start here.
My credence in that is much higher than my credence that lithium is a major cause of the obesity epidemic. I wouldn't be too surprised if contaminants explained ~5-10% of the weight that Americans have gained since the early 20th century.
The arguments for contaminants that seemed most appealing to me at first (lab and wild animals getting fatter) turned out to be really dubious (as I briefly touch upon in the post), which is why I'm not more bullish on it.