Genuine question: If Eliezer is so rational why is he fat?
post by DirichletConvolution · 2023-03-22T17:41:46.309Z · LW · GW · 1 comment
This is a question post.
So I recently had a lot of fun reading HPMOR and the sequences, and I feel like I learned a bunch of cool things and new ways of looking at the world! I've also been trying to get my friends interested in the rationality community, particularly through the works of Eliezer. However, an unexpected obstacle appeared in my way. My friends saw a picture of Eliezer and immediately wondered, "If Eliezer is so rational (at least relative to other people) and also ranks living as long as possible high in his preference ordering, why is he fat?" Staying in shape does seem like something that ought not to take that much effort, yet has an overwhelmingly positive impact on one's life, health-wise and maybe even performance-wise. I initially brushed the question off jokingly, saying his work is too important and doesn't leave him enough time to optimize his health, or that maybe he has some condition that prevents him from losing weight.
But the question stuck with me.
Answers
answer by Raemon (21 points)
Actual answer is that Eliezer has tried a bunch of different things to lose weight and it's just pretty hard. (He also did a quite high-effort thing in 2019 which did work. I don't know how well he kept the pounds off in the subsequent time)
You can watch a fun video where he discusses it after the 2019 Solstice here.
(I'm not really sure how I feel about this post. It seems like it's coming from an earnest place, and I kinda expect other people to have this question, but it's in a genre that feels pretty off to be picking on individual people about, and I definitely don't want a bunch more questions similar to this on the site. I settled for downvoting but answering the question.)
↑ comment by Rafael Harth (sil-ver) · 2023-03-23T21:40:03.870Z · LW(p) · GW(p)
(He also did a quite high-effort thing in 2019 which did work. I don't know how well he kept the pounds off in the subsequent time)
I'm kinda confused why this is only mentioned in one answer, and in parentheses. Shouldn't this be the main answer -- like, hello, the premise is likely false? (Even if it's not epistemically likely, I feel like one should politely not assume that he has since gained weight unless one has evidence for this.)
answer by TropicalFruit (1 point)
This is a perfectly reasonable question. No one wants to be fat, so you'd expect that no highly competent individual would be fat.
Turns out, it's just a very difficult problem, and the degree of difficulty varies greatly between people: it's far harder for some individuals than for others.
No one really knows why, yet. If they did, we'd all be thin and healthy again.
answer by Gerald Monroe (-4 points)
Because his rational mind is bolted to an evolved lower brain that betrays him with a slightly miscalibrated preference for extra calories when calories are plentiful.
And the conservation-of-willpower hypothesis says that if he fixes his fatness through willpower, this comes at the cost of other things.
Eliezer should probably go get a semaglutide or tirzepatide script like everyone else and lose the extra weight. Until a few years ago, literally no clinically validated method of weight loss existed, save extremely dangerous gastric bypass surgery. Diet and exercise do not work for most people.
Epistemic status: I am also slightly fat, though thinner than EY, and intend to cheat with these new drugs soon.
answer by Derek M. Jones (-15 points)
Eliezer's fatness signals his commitment to the belief that AI is a short-term risk to humanity, i.e., he does not expect to live long enough to experience the health problems.
↑ comment by [deleted] · 2023-03-22T18:15:51.683Z · LW(p) · GW(p)
This is probably also why he is ignoring the incredible cost of delaying AI. Assuming good timelines, advanced AI will likely be used to find and then administer effective treatments for aging and severe illness.
In theory, an AI-designed adult gene edit could shut aging off completely, leaving patients to experience only wear and tear with time.
In theory, AI ICU systems could eventually reach competence levels where, by using external replacement organs (made immunologically neutral, or by shutting immune reactions off with more advanced drugs), patients can no longer die in an ICU. (There would exist no illness whose progression the system cannot treat and halt, so long as the patient entered the ICU with brain and heart activity, or only a few minutes after a code.)
Ergo, a one-year delay may kill everyone who would have died from aging and in hospitals in that year, theoretically around 100 million people. (Due to compute overhang it may not be quite this bad.)
1 comment
Comments sorted by top scores.
comment by Gordon Seidoh Worley (gworley) · 2023-03-22T22:20:34.560Z · LW(p) · GW(p)
There's already a good answer to the question, but I'll add a note.
Different people value different things, and so are willing to expend different amounts of effort on different ends. As a result, even rational agents may not all achieve the same outcomes, because they care about different things.
Thus we can have two rational agents, A and B. A cares a lot about finding a mate and not much else. B cares a lot about making money and not much else. A will be willing to invest more effort into things like staying in shape to the extent that helps A find a mate. B will invest a lot less in staying in shape and more in other things to the extent that's the better tradeoff to make a lot of money.
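A minimal numerical sketch of this tradeoff, assuming made-up utility weights, square-root diminishing returns, and a shared effort budget (all illustrative assumptions, not anyone's real model):

# Toy model (illustrative assumptions only): two rational agents split a fixed
# effort budget between "fitness" and "money" to maximize their own utility.

def best_allocation(weight_fitness, weight_money, budget=10):
    """Exhaustively search integer effort splits; return the utility-maximizing one.

    Square-root terms model diminishing returns, so an agent needs extreme
    weights before it dumps the whole budget into a single pursuit.
    """
    best = None
    for fitness_effort in range(budget + 1):
        money_effort = budget - fitness_effort
        utility = (weight_fitness * fitness_effort ** 0.5
                   + weight_money * money_effort ** 0.5)
        if best is None or utility > best[0]:
            best = (utility, fitness_effort, money_effort)
    return best

# Agent A mostly values finding a mate (fitness helps); B mostly values money.
for name, w_fit, w_money in [("A", 3.0, 1.0), ("B", 1.0, 3.0)]:
    utility, fitness, money = best_allocation(w_fit, w_money)
    print(f"Agent {name}: fitness effort = {fitness}, money effort = {money}, "
          f"utility = {utility:.2f}")

Both agents run the same optimization, yet A ends up putting most of its budget into fitness and B into money; each achieves the maximum utility available given its own values.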
Rationality doesn't prescribe the outcome, just some of the means. Yes, some outcomes are convergent for many concerns, so many agents end up having the same instrumental concerns even if they have different ultimate concerns (e.g. power-seeking is a common instrumental goal), but without understanding what an agent cares about you can't judge how well they are succeeding, since success must be measured against their goals.