Posts

For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation 2024-09-15T20:49:06.370Z
One person's worth of mental energy for AI doom aversion jobs. What should I do? 2024-08-26T01:29:01.700Z
Uncursing Civilization 2024-07-01T18:44:30.810Z

Comments

Comment by Lorec on For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation · 2024-09-17T13:31:57.992Z · LW · GW

Is Bostrom's original Simulation Hypothesis, the version involving ancestor-simulations, unconvincing to you? If you have decided to implement an epistemic exclusion in yourself with respect to the question of whether we are in a simulation, it is not my business to interfere with that. But we do, for predictive purposes, have to think about the fact that Bostrom's Simulation Hypothesis and other arguments in that vein will probably not be entirely unconvincing [by default] to any ASIs we build, given that they are not entirely unconvincing to the majority of the intelligent human population.

Comment by Lorec on For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation · 2024-09-16T18:33:46.234Z · LW · GW

If a human being doesn't automatically qualify as a program to you, then we are having a much deeper disagreement than I anticipated. I doubt we can go any further until we reach agreement on whether all human beings are programs.

My attempt to answer the question you just restated anyway:

The idea is that you would figure out what the distant superintelligence wanted you to do the same way you would figure out what another human being who wasn't being verbally straight with you wanted you to do: by picking up on its hints.

Of course this prototypically goes disastrously. Hence the vast cross-cultural literature warning against bargaining with demons and ~0 stories depicting it going well. So you should not actually do it.

Comment by Lorec on For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation · 2024-09-16T10:57:08.536Z · LW · GW

How would you know that you were a program and Omega had a copy of you? If you knew that, how would you know that you weren't that copy?

Comment by Lorec on For Limited Superintelligences, Epistemic Exclusion is Harder than Robustness to Logical Exploitation · 2024-09-16T02:38:40.302Z · LW · GW

Do you want to fully double-crux this? If so, do you one-box?

Comment by Lorec on One person's worth of mental energy for AI doom aversion jobs. What should I do? · 2024-08-29T14:57:51.540Z · LW · GW

Not a woman, sadly.

I believe it, especially if one takes a view of "success" that's about popularity rather than fiat power.

But FYI to future advisors: the thing I would want to prospectively optimize for, along the gov path, when making this decision, is fiat power. I'm highly uncertain about whether viable paths exist from a standing start to [benevolent] bureaucratic fiat power over AI governance, and if so, where those viable paths originate.

If it was just about reach, I'd probably look for a columnist position instead.

Comment by Lorec on One person's worth of mental energy for AI doom aversion jobs. What should I do? · 2024-08-29T14:41:07.320Z · LW · GW

In what sense do you consider the mechinterp paradigm that originated with Olah to be working?

Comment by Lorec on One person's worth of mental energy for AI doom aversion jobs. What should I do? · 2024-08-28T13:19:44.831Z · LW · GW

https://x.com/elder_plinius

Comment by Lorec on One person's worth of mental energy for AI doom aversion jobs. What should I do? · 2024-08-28T13:17:37.844Z · LW · GW

"Endpoints are easier to predict than trajectories"; eventual singularity is such an endpoint; on our current trajectory, the person who is going to do it does not necessarily know they are going to do it until it is done.

Comment by Lorec on One person's worth of mental energy for AI doom aversion jobs. What should I do? · 2024-08-26T03:00:01.090Z · LW · GW

Tweet link removed.

Comment by Lorec on On not getting contaminated by the wrong obesity ideas · 2024-08-22T12:40:18.124Z · LW · GW

[ Sorry about the wrecked formatting in this comment, I'm on mobile and may come back and fix it later ]

They call it "burning" calories because it's oxidation. Like fire. More oxygen should help. Less oxygen should hurt.*

At least, if you buy CICO and correspondingly think that the quantity of food oxidation versus the quantity of fat oxidation is almost all that matters, metabolically, and know that medically, according to all CICO-compatible convention, the quantity of food oxidation is almost quota-ed at the level of food intake, while the quantity of fat oxidation is not. [ Hence why "food intake" is not considered a dependent variable in the CICO equation [ daily weight delta, in pounds ≈ [ [ food intake ] - [ RMR + exercise ] ] / 3500, with everything on the right in calories ] ].
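
As a minimal sketch of the bookkeeping that equation implies [the function and variable names here are mine, purely illustrative, and the ~3500 kcal-per-pound conversion is the conventional CICO rule of thumb, an assumption rather than anything argued for in this comment]:

```python
# Naive CICO bookkeeping, as a sketch: predicted weight change is just the
# calorie surplus or deficit divided by a fixed kcal-per-pound conversion.
# All names and numbers here are illustrative, not from the original comment.

KCAL_PER_LB = 3500  # conventional rule-of-thumb conversion

def daily_weight_delta_lb(food_intake_kcal: float,
                          rmr_kcal: float,
                          exercise_kcal: float) -> float:
    """Predicted daily weight change in pounds under the naive CICO model.

    Food intake enters only as an independent input: the model assumes all
    eaten food is oxidized, while fat oxidation is whatever balances the
    books - the asymmetry described above.
    """
    return (food_intake_kcal - (rmr_kcal + exercise_kcal)) / KCAL_PER_LB

# Example: eat 2500 kcal; burn 2000 kcal at rest plus 300 kcal exercising.
print(daily_weight_delta_lb(2500, 2000, 300))  # ~ +0.057 lb/day
```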

Yet you, Scott, SMTM, and several others I've spoken with, who otherwise vastly disagree on obesity science [but most of whom say CICO is "basically true" or "trivially true"] independently think the "low O2 mediates the altitude effect" idea is plausible.

I even independently generated it myself, once, in 2021, back when I was still a CICO believer, before realizing 2 years later that, according to CICO, "low O2 results in weight loss" doesn't actually make any physiological sense.

I think people intuitively feel like it makes sense because marginally suffocating feels bad, and most other things that make you lose weight according to CICO [caloric restriction, forcing yourself to exercise, wearing fewer clothes so you shiver a bit in the cold] feel bad.

But many things that feel bad, don't make you lose weight. Like back problems. Or cancer. Or gaining weight.

And drinking unflavored olive oil in the middle of a fast period [ https://www.lesswrong.com/posts/BD4oExxQguTgpESd ] makes me lose more weight than anything else I've tried, and it doesn't feel bad at all. Keto also works for many people, who say it doesn't feel bad for them.

My intention in pointing these things out is not to infantilize or condemn you or anyone else who's had the "low O2 mediates the altitude effect" idea and accepted it without noticing that it went against CICO.

My intention is to help create common knowledge of just how fucked the discourse around this topic, and the CICO paradigm specifically, is proven to be by the fact that people keep coming up with that hypothesis and uncritically running with it.

*It's true that it's also "burning" calories when what's being burned isn't fat calories but the calories in your food - such that less oxygen could conceivably hurt the process of gaining weight, like less food could. But the conventional wisdom is that the body treats eaten food as an ~absolute lower bar for its "energy" intake quota - i.e., acts as though it should always oxidize all eaten food and turn it into glycogen, fat, ATP, or heat, no matter how inefficient this is - while the amount of oxidation the body does per hour at rest actually is a dependent variable, one that could conceivably vary closely with the amount of O2 in the air. Medically, it would fly in the face of a lot if, under hypoxic conditions, people were actually doing less oxidation of eaten food, rather than just using more oxygen per food molecule. Conceivably, this decrease in metabolic efficiency, incurred to meet the oxidation quota, correspondingly slows fat gain - and I think it maaaaaybe does, and I think this is an actually plausible mechanism here. That's something that also, empirically, happens in severe caloric restriction - but, importantly for CICO, it's not something that happens at all in moderate caloric restriction, of the "cut 200 calories per day" stripe that CICOists suggest for losing weight. And moderate caloric restriction is also sometimes effective, simply by reducing the quantity of "energy" intake, just as CICOists say it should be. My point is that CICOists' favored method of "reduce the quantity of 'energy' intake" can hardly, medically, be what chronic hypoxia is doing - and CICO would, if anything, lead us to expect the opposite effect, of less body fat being ~passively oxidized at rest, because the body just cannot do as much metabolism per hour [as opposed to per calorie eaten] when there is less O2 to work with.

After re-looking at the graph because of this post, I'm surprised by how exactly overweight does correspond with low altitude, and not with anything about water tables, as I originally thought. And I do find "hypoxia makes food oxidation [and baseline-necessary homeostatic oxidation] less efficient, in the same way severe CR does" plausible as a mechanism. It's making me question my initial conviction that the Thing Causing Lipostat Damage Since 1900 necessarily had to be some kind of 1900-era waterborne endocrine disruptor like a heavy metal, and putting other, weirder stuff, like food-borne toxins and viruses, back on the table.

But from the perspective of someone who's seen the old rat studies [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1225664/] and the old nutrition tables [ https://ageconsearch.umn.edu/nanna/record/136596/files/fris-1961-02-02-438.pdf?withWatermark=0&withMetadata=0&version=1&registerDownload=1, https://tinyurl.com/jnzxkk5z, https://ageconsearch.umn.edu/nanna/record/234624/files/frvol25i3a.pdf?withWatermark=0&withMetadata=0&version=1&registerDownload=1 ] and knows that calories don't explain it, "The SMTM overweight-altitude pattern is in fact downstream of relative hypoxia" deconfuses somewhat about the SMTM overweight-altitude pattern, but doesn't deconfuse at all about why we've all been getting fatter since 1900 in the first place.

From the perspective of someone who knows calories can't explain it, the relatively lower rate of obesity in China [even if, surprisingly-to-me, only by a few percentage points - around 35% for the US vs. around 31% for China, according to the first Google result I saw], which stands at an overall much lower elevation than the US [especially its populated areas], looks more potentially fruitful as an area of investigation. And... it looks like in China, there's a regional gradient in obesity [higher in the north] that seems obviously not to be tracking altitude at all.

And what about Korea, which is basically at sea level? They sandbag and report 40% obesity in self-reporting [to alarm the locals?], but when measured the same way as the US [ https://www.koreaherald.com/view.php?ud=20230425000613 ], their "obesity" rate is around 6% compared to the US's 35%-40%. It makes me suspect the China figure is a reporting issue, too. Altitude clearly isn't most of the inter-regional variance worldwide.

Comment by Lorec on It’s Probably Not Lithium · 2024-08-19T17:02:45.998Z · LW · GW

This changed my mind on whether lithium was at all plausible. I had no idea about the youth-faster-weight-gain thing. 

The one thing you don't seem to have written about is the possibility that people's lipostats might be getting broken primarily during fetal development, while neuronal proliferation is happening, the lowermost layers of the brain are getting wired, hormones [and correspondingly, endocrine disruptors] have an outsized influence, and most other neurological [/neuroendocrine] disorders are contracted.

I think some toxin is probably doing this, and I think the overweight epidemic started picking up [around ~1910] in the US too early for it to have been microplastics [?] or another newfangled endocrine disruptor [?].

But I now see that the cause being relative levels of elemental lithium, even during fetal development, wouldn't make any sense.

Comment by Lorec on [deleted post] 2024-07-14T03:03:08.857Z

Yooo this is sick! Thank you!

Comment by Lorec on Uncursing Civilization · 2024-07-04T03:50:42.061Z · LW · GW

Thanks for the encouraging feedback!

It is true that in future posts I should account for availability of calories over time, and physical activity over time. 

Possibly I would get a better reception if I waded into all the sub-possibilities for what could be causing the increase in self-reported queerness, but that issue is so political that I doubt a more positive reception from the audience would correspond to more accurate Bayesian updates from the audience. As it is, I feel "you can lead a LessWronger to a hypothesis, but you can't make them subordinate their political arguments-are-soldiers brain to their adult brain".

"AI alignment is not in the category 'alarmingly impossible problems for the time we have left'" is certainly a position many people hold. I am doing my best to make them correct. Alas, going along with their fantasy world where it's already true, will not help make it true.