post by [deleted]

Comments sorted by top scores.

comment by noggin-scratcher · 2022-08-23T12:00:45.933Z · LW(p) · GW(p)

everything is okay

(this is a work of fiction)

Oof, right in the existential anxiety.

comment by tailcalled · 2022-08-23T12:43:22.346Z · LW(p) · GW(p)

Nice story, though insofar as it's supposed to function as a concrete proposal for an AI-based utopia (or any utopia really), I think it's missing the answer to the most important question: what sorts of options or restrictions are there for creating descendants? (Such as children, digital autonomous organizations, nation-states or whatever.)

Replies from: carado-1
comment by Tamsin Leake (carado-1) · 2022-08-23T13:22:00.215Z · LW(p) · GW(p)

this story focuses specifically on the part of utopia i'd like to live in, and i don't really have an interest in creating descendants at the moment. i've written more about rules for creating new persons in ∀V, but the short version is "no kids allowed (they're too hard for me to figure out), only copies of people". though in a setting where aligned Elua has more agency, maybe it could figure out how to make kids viable.

Replies from: tailcalled
comment by tailcalled · 2022-08-23T13:39:00.514Z · LW(p) · GW(p)

Nice that you've thought about it and made a decision. So just to check that I understand correctly, your proposed utopia probably contains a whole bunch of people who really want to start families but are forced not to by Elua?

Replies from: carado-1
comment by Tamsin Leake (carado-1) · 2022-08-24T07:23:58.818Z · LW(p) · GW(p)

who really want to start a family in a way that can't be satisfied by an alternative, yes. such as: creating a merged version of their minds, having it emit preferences in advance and then consentingly modify itself until it's reasonably childlike, having a non-moral-patient fake mind be in the body until a certain age before being replaced with that merged mind, or any other kind of weird scheme i haven't thought of. there are many possibilities, in virtualia and with Elua to help.

comment by Dave Lindbergh (dave-lindbergh) · 2022-08-23T17:35:26.913Z · LW(p) · GW(p)

It sounds very unchallenging. 

Perhaps boring. Pointless.

(But then most utopias sound that way to me.)

Replies from: carado-1, Viliam, rime
comment by Tamsin Leake (carado-1) · 2022-08-24T07:26:50.883Z · LW(p) · GW(p)

there are probly many challenges one can face if they want that. i'm just fine getting my challenges from little things like video games, at least for a while. maybe i'd get back into the challenge of designing my own video games, too; i enjoyed that one.

comment by Viliam · 2022-08-23T21:47:22.385Z · LW(p) · GW(p)

"Utopias are all alike; every dystopia is horrible in its own way." -- AI Karenbot

comment by rime · 2023-06-08T01:02:46.614Z · LW(p) · GW(p)

I can empathise with the feeling, but I think it stems from the notion that I (used to) find challenges I set for myself "artificial" in some way, so I can't be happy unless something or somebody else creates them for me. I don't like this attitude, as it seems like my brain is infantilising me. I don't want to depend on irreducible ignorance to be satisfied. I like being responsible for myself. I'm trying to capture something vague by using vague words, so there are likely many ways to misunderstand me here.

Another point is just that our brains fundamentally learn from reward prediction-errors, and this is likely to have generalised into all sorts of broad heuristics we use in episodic future thinking--which I speculate plays a central role in integrating/propagating new proto-values (aka 'moral philosophy').

comment by Writer · 2024-01-08T11:06:28.752Z · LW(p) · GW(p)

i think about this story from time to time. it speaks to my soul.

  • it is cool that straight-up utopian fiction can have this effect on me.
  • it yanks me into a state of longing. it's as if i lost this world a long time ago, and i'm desperately trying to regain it.

i truly wish everything will be ok :,)

thank you for this, tamsin.

comment by Artaxerxes · 2022-08-24T10:45:36.175Z · LW(p) · GW(p)

Thanks for writing! I'm a big fan of utopian fiction; it's really interesting to hear idealised depictions of how people would want to live and how they might want the universe to look. The differences and variation between attempts are fascinating - I genuinely enjoy seeing how different people think different things are important, the different things they value, and what aspects they focus on in their stories. It's great when you can get new ideas yourself about what you want out of life, things to aspire to.

I wouldn't mind at all if writing personal utopian fiction on LW were to become a trend. Like you say, it feels important, not just to help a potential AI and get people thinking about it, but also to help inspire each other, to give each other new ideas about what we could enjoy in the future.

comment by metachirality · 2023-12-20T22:48:41.987Z · LW(p) · GW(p)

Initially, I sorta felt bummed out that a post-singularity utopia would render my achievements meaningless. After reading this, I started thinking about it more and now I feel less bummed out. Could've done with mentions of biblically accurate angels playing incomprehensible 24d MMOs or omniscient buddhas living in equanimous cosmic bliss, but it still works.

comment by Stephen McAleese (stephen-mcaleese) · 2022-08-24T20:03:57.908Z · LW(p) · GW(p)

Would you mind explaining why the post is written in lowercase?

Replies from: carado-1
comment by Tamsin Leake (carado-1) · 2022-08-24T22:52:28.588Z · LW(p) · GW(p)

that's just part of my stylistic choices in blogging: to make my formal writing more representative of my casual chatting. see eg this or this [EDIT: or this]

comment by Olga Babeeva (olga-babeeva) · 2022-08-24T17:51:04.079Z · LW(p) · GW(p)

Thanks for sharing this, I enjoyed it!

I will copy here a lil post on a utopia idea I had some time ago, as it seems relevant enough. I think it can also address the concern of some commenters that utopias are boring.

It is not written in a careful LW style so add caveats as you read.

rat race done right = proximal development zone world

utopias are hard to imagine. some resort to saying that utopia would be so great that we cannot even comprehend how. there are no words or concepts in our language to begin to describe it. I'll try nevertheless.

imagine a world where every day you get a challenge that is perfect for you at that time.

if you ace a course, you get into a more advanced one. if you are doing great at your job, you get more responsibility. your relationships deepen.

but very importantly, this can also scale down. if your grades slip, you get help or can change course. if you are feeling uninspired, your job gives you a break. if your relationship gets too intense, your other responsibilities will decrease.

you don't ever have to deal with more stuff than you can handle, but you are always learning and striving and getting better. like in a computer game.
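very roughly, the rule is just a feedback loop. a toy version (all numbers and names are made up for illustration):

```python
# toy version of the "scale up / scale down" rule above; purely illustrative.
def next_difficulty(difficulty: float, performance: float) -> float:
    """nudge difficulty toward a 'comfortably stretched' target (performance in [0, 1])."""
    target, step = 0.7, 0.1
    if performance > target:
        return difficulty + step            # acing it -> harder course, more responsibility
    if performance < target:
        return max(difficulty - step, 0.0)  # struggling -> help, a break, a lighter load
    return difficulty

# a few days in this world
difficulty = 1.0
for perf in [0.9, 0.85, 0.5, 0.6, 0.95]:
    difficulty = next_difficulty(difficulty, perf)
```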

it even works a little bit like this in this world except there is no magic/benevolent AI overlord computing what's best for you. and some (way too many) people roll too many shitty things in their lives. (and maybe some people get too few?)

let's be kinder and more daring

your princess is in another castle and I am happy for you 👋

comment by Filip Sondej · 2022-09-11T15:47:29.402Z · LW(p) · GW(p)

Thanks! I'm always hungry for good sci-fi utopias :) I particularly liked that mindmelding part.

After also reading Diaspora and ∀V, I was thinking about what should be done about minds that self-modify into insanity and suffer terribly. In their case, talking about consent doesn't make much sense.

Maybe we could have a mechanism where:

  • I choose some people I trust the most, for example my partner, my mom, and my best friend
  • I give them the power to revert me back to my previous snapshot from before the modification, even if it's against my insane will (but only if they unanimously agree)
  • (optionally) my old snapshot is temporarily revived to be the final arbiter and decide if I should be reverted - after all, I know myself best

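A very rough sketch of the kind of logic I mean (all names and numbers are made up for illustration, and it skips the "revived old snapshot as arbiter" part):

```python
# Toy model of the guardian-approved rollback idea above; purely illustrative.
from dataclasses import dataclass, field

@dataclass
class Mind:
    state: str
    snapshots: list[str] = field(default_factory=list)

    def modify(self, new_state: str) -> None:
        self.snapshots.append(self.state)  # keep a restore point before each change
        self.state = new_state

def maybe_revert(mind: Mind, guardian_votes: list[bool], quorum: int) -> bool:
    """Roll back to the last snapshot if at least `quorum` guardians agree."""
    if sum(guardian_votes) >= quorum and mind.snapshots:
        mind.state = mind.snapshots.pop()
        return True
    return False

# Example: all three guardians agree, so the modification is undone.
me = Mind(state="baseline")
me.modify("alien mindspace, suffering terribly")
maybe_revert(me, guardian_votes=[True, True, True], quorum=3)
assert me.state == "baseline"
```

(Setting quorum below the number of guardians would give a majority-vote variant instead of requiring unanimity.)
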
Replies from: carado-1
comment by Tamsin Leake (carado-1) · 2022-09-12T10:07:53.553Z · LW(p) · GW(p)

well, i worry about the ethics of the situation where those third parties don't unanimously agree and you end up suffering. note that your past self, while it is a very close third party, is a third party among others.

i feel like i still wanna stick to my "sorry, you can't go to sufficiently bad hell" limitation.

(also, surely whatever "please take me out of there if X" command you'd trust third parties with, you could simply trust Elua with, no?)

Replies from: Filip Sondej
comment by Filip Sondej · 2022-09-12T10:48:24.045Z · LW(p) · GW(p)

Yeah, unanimous may be too strong - maybe it would be better to have 2 out of 3 majority voting for example. And I agree, my past self is a third party too.

Hm, yeah, trusting Elua to do it would work too. But in scenarios where we don't have Elua, or have some "almost Elua" that I don't fully trust, I'd rather rely on my trusted friends. And those scenarios are likely enough that it's a good option to have.

(As a side note, I don't think I can fully specify that "please take me out of there if X". There may be some Xs which I couldn't foresee, so I want to rely on those third parties' judgement, not some hard rules. (of course, a sufficiently good Elua could make those judgements too))

As for that limitation, how would you imagine it? That some mind modifications are just forbidden? I have an intuition that there may be modifications so alien that the only way to predict their consequences is to actually run the modified mind and see what happens. (an analogy may be that even the most powerful being cannot predict whether some Turing machine halts without actually running it). So maybe reverting is still necessary sometimes.

Replies from: carado-1
comment by Tamsin Leake (carado-1) · 2022-09-12T11:08:45.485Z · LW(p) · GW(p)

i feel like letting people try things, with the possibility of rollback from backup, generally works. let people do stuff by default, and when something looks like a person undergoing too much suffering, roll them back (or terminate them, or whatever other ethically viable outcome is closest to what they would want).

maybe pre-emptive "you can't even try this" would only start making sense if there were concerns that too much experience-time is being filled with people accidentally ending up suffering from unpredictable modifications. (though i suspect i don't really think this because i'm usually more negative-utilitarian and less average-utilitarian than that)

that said, i've never modified my mind in a way that caused me to experience significant suffering. i have a friend who kinda has, by taking LSD and then having a very bad time for the rest of the day, and today-them says they're glad to have been able to try it. but i think LSD-day-them would strongly disagree.

Replies from: Filip Sondej
comment by Filip Sondej · 2022-09-12T13:24:24.173Z · LW(p) · GW(p)

Yeah, that makes sense.

I'd like the serious modifications to (at the very least) require a lot of effort to do. And to be gradual, so you can monitor whether you're going in the right direction, instead of suddenly jumping into a new mindspace. And maybe we'd even collectively decide to forbid some modifications.

(btw, here is a great story about hedonic modification https://www.utilitarianism.com/greg-egan/Reasons-To-Be-Cheerful.pdf)

The reason I lean toward relying on my friends, not a godlike entity, is that by default I distrust centralized systems with enormous power. But if we had an Elua as good as the one you depicted, I would be okay with that ;)

Replies from: carado-1
comment by Tamsin Leake (carado-1) · 2022-09-16T11:17:04.872Z · LW(p) · GW(p)

thanks for the egan story, it was pretty good!

i tend to dislike such systems as well, but a correctly aligned superintelligence would surely be trustable with anything of the sort. if anything, it would at least know about the ways it could fail at this, and tell us about what it knows of those possibilities.

comment by rime · 2023-06-08T00:54:18.681Z · LW(p) · GW(p)

what i am pretty confident about, is that whatever the situation, somehow, they are okay.

This hit me. Had to read it thrice to parse it. "Is that sentence even finished?"

I've done a lot of endgame speculation, but I've never been close to imagining what it looks like for everyone to be okay. I can imagine, however, what it looks like internally for me to be confident everyone is ok. The same way I can imagine Magnus Carlsen winning a chess game even if the board is a mystery to me.

It's a destabilising feeling, but seems usefwl to backchain from.