Posts

LWLW's Shortform 2025-01-06T21:26:24.375Z

Comments

Comment by LWLW (louis-wenger) on Several Arguments Against the Mathematical Universe Hypothesis · 2025-02-20T19:15:06.364Z · LW · GW

I think that people don’t consider the implications of something like this. It seems to imply that the mathematical object of a malevolent superintelligence exists, and that conscious victims of said superintelligence exist as well. Is that really desirable? Do people really prefer that to some sort of teleology?

Comment by LWLW (louis-wenger) on How to Make Superbabies · 2025-02-20T18:31:05.200Z · LW · GW

Yeah, something like that: the ASI is an extension of their will.

Comment by LWLW (louis-wenger) on How to Make Superbabies · 2025-02-20T18:22:45.855Z · LW · GW

This is just a definition for the sake of definition, but I think you could define a human as aligned if they could be given an ASI slave and not become an S-risk. I really think that under this definition, the absolute upper bound on the fraction of “aligned” humans is 5%, and I think it’s probably a lot lower.

Comment by LWLW (louis-wenger) on How to Make Superbabies · 2025-02-20T17:28:13.176Z · LW · GW

I should have clarified: I meant a small fraction, and that that alone is enough to worry about.

Comment by LWLW (louis-wenger) on How to Make Superbabies · 2025-02-20T04:11:22.747Z · LW · GW

I agree. At least I can laugh if the AGI just decides it wants me as paperclips. There will be nothing to laugh about if ruthless, power-seeking humans have godlike power.

Comment by LWLW (louis-wenger) on How to Make Superbabies · 2025-02-20T03:58:15.168Z · LW · GW

That sounds very interesting! I always look forward to reading your posts. I don’t know if you know any policy people, but in this world, it would need to be punishable by jail time to genetically modify intelligence without also selecting for pro-sociality. Any world where that is not the case seems much, much worse than just getting turned into paperclips.

Comment by LWLW (louis-wenger) on How to Make Superbabies · 2025-02-20T03:35:04.167Z · LW · GW

I certainly wouldn’t sign up to do that, but the type of individual I’m concerned about likely wouldn’t mind sacrificing nannies if their lineage could “win” in some abstract sense. I think it’s great that you’re proposing a plan beyond “pray the sand gods/Sam Altman are benevolent.” But alignment is going to be an issue for superhuman agents, regardless of whether they’re human or not.

Comment by LWLW (louis-wenger) on How to Make Superbabies · 2025-02-20T02:40:23.493Z · LW · GW

I’m sure you’ve already thought about this, but it seems like the people who would be willing and able to jump through all of the hoops necessary would likely have a higher propensity towards power-seeking and dominance. So if you don’t edit the personality as well, what was it all for besides creating a smarter god-emperor? I think that in the sane world you’ve outlined where people deliberately avoid developing AGI, an additional level of sanity would be holding off on modifying intelligence until we have the capacity to perform the personality edits to make it safe.


I can just imagine this turning into a world where the rich who are able to make their children superbabies compete with the rest of the elite over whose child will end up ruling the world. 

I’m sorry, but I’d rather be turned into paperclips than live in a world where a god-emperor can decide to torture me with their AGI slave for the hell of it. How is that a better world for anyone but the god-emperor? But people are so blind and selfish, they just assume that they or their offspring would be god-emperor. At least with AI, people are scared enough that they’re putting focused effort into trying to make it nice. People won’t put that much effort into their children.


I mean hell, figuring out personality editing would probably just backfire. People would choose to make their kids more ruthless, not less.

Comment by LWLW (louis-wenger) on How to Make Superbabies · 2025-02-20T01:55:47.539Z · LW · GW

How much do people know about the genetic components of personality traits like empathy? Editing personality traits might be almost as controversial as, or even more controversial than, modifying “vanity” traits. But in the sane world you sketched out, this could essentially be a simple first step of alignment: “We are about to introduce agents more capable than any humans except for extreme outliers: let’s make them nice.” Also, curing personality disorders like NPD and BPD would do a lot of good for subjective wellbeing.

I guess I’m just thinking of a failure mode where we create superbabies who solve task-alignment and then control the world. The people running the world might be smarter than the current candidates for god-emperor, but we’re still in a god-emperor world. This also seems like the part of the plan most likely to fail. The people who would pursue making their children superbabies might be disinclined towards making their children more caring.

Comment by LWLW (louis-wenger) on LWLW's Shortform · 2025-02-15T07:29:32.704Z · LW · GW

>be me, omnipotent creator

>decide to create

>meticulously craft laws of physics

>big bang

>pure chaos

>structure emerges

>galaxies form

>stars form

>planets form

>life

>one cell

>cell eats other cell, multicellular life

>fish

>animals emerge from the oceans

>numerous opportunities for life to disappear, but it continues

>mammals

>monkeys

>super smart monkeys

>make tools, control fire, tame other animals

>monkeys create science, philosophy, art

>the universe is beginning to understand itself

>AI

>Humans and AI together bring superintelligence online

>everyone holds their breath

>superintelligence turns everything into paper clips

>mfw infinite kek

Comment by LWLW (louis-wenger) on AI #102: Made in America · 2025-02-12T00:12:54.990Z · LW · GW

I think Noah Carl was coping with the “downsides” he listed. Loss of meaning and loss of status are complete jokes. They are the problems of people who don’t have problems. I would even argue that focusing on X-risks rather than S-risks is a bigger form of cope than denying AI is intelligent at all. I don’t see how you train a superintelligent military AI that doesn’t come to the conclusion that killing your enemies vastly limits the amount of suffering you can inflict upon them.

Edit: I think loss of actual meaning, like conclusive proof we're in a dysteleology, would not be a joke. But I think that loss of meaning in the sense of "what am I going to do if I can't win at agent competition anymore :(" feels like a very first-world problem.

Comment by LWLW (louis-wenger) on LWLW's Shortform · 2025-02-10T06:51:58.007Z · LW · GW

Everything feels so low-stakes right now compared to future possibilities, and I am envious of people who don’t realize that. I need to spend less time thinking about it, but I still can’t wrap my head around people rolling a die which might have S-risks on it. It just seems like a -inf EV decision. I do not understand the thought process of people who see -inf and just go “yeah, I’ll gamble that.” It’s so fucking stupid.

Comment by LWLW (louis-wenger) on Schizophrenia as a deficiency in long-range cortex-to-cortex communication · 2025-02-02T20:14:21.613Z · LW · GW

Hi Steven! This is an old post, so you probably won't reply, but I'd appreciate it if you did! What do you think might be going on in the brains of schizophrenics with high intelligence? I know schizophrenia is typically associated with MRI abnormalities and lower intelligence, but this isn't always the case! At least for me, my MRI came back normal, and my cognitive abilities were sufficient to do well in upper-level math courses at a competitive university, even during my prodromal period. I actually deal with hypersensitivity as well, so taking a very shallow understanding of your post and applying it to me: might my brain have a quirk that enables strong intracircuit communication (resulting in strong working memory, fast processing speed, and hypersensitivity) but not intercircuit communication (resulting in hallucinations/paranoia as downsides, but a high DAT score as an upside)?

Comment by LWLW (louis-wenger) on LWLW's Shortform · 2025-02-01T19:39:02.637Z · LW · GW
Comment by LWLW (louis-wenger) on LWLW's Shortform · 2025-02-01T00:50:05.088Z · LW · GW
Comment by LWLW (louis-wenger) on LWLW's Shortform · 2025-01-31T23:42:19.994Z · LW · GW

I see no reason why any of these will be true at first. But the end-goal for many rational agents in this situation would be to make sure 2 and 3 are true.

Comment by LWLW (louis-wenger) on Yudkowsky on The Trajectory podcast · 2025-01-29T08:43:38.548Z · LW · GW

That makes sense. This may just be wishful thinking on my part/trying to see a positive that doesn't exist, but psychotic tendencies might have higher representation among the population you're interested in than the trend you've described might suggest. If you take the very small, subjective sample of "the best" mathematician from each of the previous four centuries (Newton, Euler, Gauss, and Grothendieck), 50% of them (Newton and Grothendieck) had some major psychotic experiences (admittedly vastly later in life than is typical for men).

Again, I'm probably being too cautious, but I'm just very apprehensive about approaching the creation of sentient life with the attitude that increased IQ = increased well-being. If that intuition is incorrect, it would have catastrophic consequences.

Comment by LWLW (louis-wenger) on Yudkowsky on The Trajectory podcast · 2025-01-29T04:30:53.088Z · LW · GW

"I don't think you'll need to worry about this stuff until you get really far out of distribution." I may sound like I'm just commenting for the sake of commenting but I think that's something you want to be crystal clear on. I'm pessimistic in general and this situation is probably unlikely but I guess one of my worst fears would be creating uberpsychosis. Sounding like every LWer, my relatively out of distribution capabilities made my psychotic delusions hyper-analytic/1000x more terrifying & elaborate than they would have been with worse working memory/analytic abilities (once I started ECT I didn't have the horsepower to hyperanalyze existence as much). I guess the best way to describe it was that I could feel the terror of just how bad -inf would truly be as opposed to having an abstract/detached view that -inf = bad. And I wouldn't want anyone else to go through something like that, let alone something much scarier/worse. 

Comment by LWLW (louis-wenger) on Yudkowsky on The Trajectory podcast · 2025-01-28T18:59:07.199Z · LW · GW

This might be a dumb/not particularly nuanced question, but what are the ethics of creating what would effectively be BSI? Chickens have a lot of health problems due to their size; they weren't meant to be that big. Might something similar be true for BSI? How would a limbic system handle that much processing power? I'm not sure it would be able to. How deep of a sense of existential despair and terror might that mind feel?

TLDR: Subjective experience would likely have a vastly higher ceiling and a vastly lower floor, to the point where a BSI's (or ASI's, for that matter) subjective experience would look like +/-inf to current humans.

Comment by LWLW (louis-wenger) on LWLW's Shortform · 2025-01-06T20:34:18.809Z · LW · GW

Making the (tenuous) assumption that humans remain in control of AGI, won't it just be an absolute shitshow of attempted power grabs over who gets to tell the AGI what to do? For example, supposing OpenAI is the first to AGI, is it really plausible that Sam Altman will be the one actually in charge when there will have been multiple researchers interacting with the model much earlier and much more frequently? I have a hard time believing every researcher will sit by and watch Sam Altman become more powerful than anyone ever dreamed of when there's a chance they're a prompt away from having that power for themselves.