What if a tech company forced you to move to NYC?

post by KatjaGrace · 2024-06-09T06:30:03.329Z · LW · GW · 22 comments

It’s interesting to me how chill people sometimes are about the non-extinction future AI scenarios. Like, there seem to be opinions around along the lines of “pshaw, it might ruin your little sources of ‘meaning’, Luddite, but we have always had change and as long as the machines are pretty near the mark on rewiring your brain it will make everything amazing”. Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant, and want a lot of guarantees about the preservation of various very specific things they care about in life, and not be just like “oh sure, NYC has higher GDP/capita than my current city, sounds good”.

I read this as a lack of engaging with the situation as real. But possibly my sense that a non-negligible number of people have this flavor of position is wrong.


comment by No77e (no77e-noi) · 2024-06-09T09:51:23.892Z · LW(p) · GW(p)

Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant

A big difference is that, assuming you're talking about futures in which AI doesn't have catastrophic outcomes, no one will be forcibly mandated to do anything.

Another important point is that, sure, people won't need to do work, which means they will be unnecessary to the economy, barring some pretty sharp human enhancement. But this downside, along with all the other downsides, looks extremely small compared to the non-AGI default of dying of aging, a 1/3 chance of getting dementia, a 40% chance of getting cancer, your loved ones dying, etc.

Replies from: o-zewe, clone of saturn
comment by det (o-zewe) · 2024-06-09T13:47:49.973Z · LW(p) · GW(p)

assuming you're talking about futures in which AI doesn't have catastrophic outcomes, no one will be forcibly mandated to do anything

This isn't clear to me: does every future in which someone is forcibly mandated to do something qualify as a catastrophe? Conceptually, there seems to be a lot of room between "no one is ever forced to do anything" and "catastrophe".

I understand the analogy in Katja's post as being: even in a great post-AGI world, everyone is forced to move to a post-AGI world. That world has higher GDP/capita, but it doesn't necessarily contain the specific things people value about their current lives.

Just listing all the positive aspects of living in NYC (even if they're very positive) might not remove all hesitation: I know my local community, my local parks, the beloved local festival that happens in August. 

If all diseases have been cured in NYC and I'm hesitant because I'll miss out on the festival, I'm probably not adequately taking the benefits into account. But if you tell me not to worry at all about moving to NYC, you're also not taking all the costs into account / aren't talking in a way that will connect with me.

comment by clone of saturn · 2024-06-10T01:19:59.574Z · LW(p) · GW(p)

A big difference is that, assuming you're talking about futures in which AI doesn't have catastrophic outcomes, no one will be forcibly mandated to do anything.

Why do you believe this? It seems to me that in the unlikely event that the AI doesn't exterminate humanity, it's much more likely to be aligned with the expressed values of whoever has their hands on the controls at the moment of no return than with an overriding commitment to universal individual choice.

comment by Aaron_Scher · 2024-06-09T18:03:50.084Z · LW(p) · GW(p)

Not sure I love this analogy (moving to NYC doesn't seem like that big of a deal), but I do think it's pretty messed up to be imposing huge social / technological / societal changes on 8 billion of your peers. I expect most of the people building AGI have not really grasped the ethical magnitude of doing this; I think I sort of have, but then again I don't build AGI.

Replies from: jbash
comment by jbash · 2024-06-09T18:51:33.461Z · LW(p) · GW(p)

You seem to be privileging the status quo. Refraining from imposing those changes (i.e., not building AGI) has equally large effects on your peers.

Replies from: cubefox
comment by cubefox · 2024-06-10T12:43:48.695Z · LW(p) · GW(p)

Things staying mostly the same doesn't seem to count as a "large effect". For example, we wouldn't say taking a placebo pill has a large effect.

comment by Dagon · 2024-06-09T16:08:07.764Z · LW(p) · GW(p)

I'm not as chill as all that, and I absolutely appreciate people worrying about those dimensions.  But in day-to-day behavior I do tend to act (and believe, in the average sense: my probabilistic belief range includes a lot of scenarios, but the average and median are somewhat close together, which is probably a sign of improper heuristics) as if it'll all be mostly-normal.  I recently turned down a very good job offer in NYC (and happily, later found a better one in Seattle), but I see the analogy, and kind of agree it's a good one, though from the other side: even people who think they'd hate NYC are probably wrong, because hedonic adaptation is amazingly strong. I'll try to represent those you're frustrated with.

There will absolutely be changes, many of which will be uncomfortable, and probably regress from my peak-preference.  As long as it's not extinction or effective-extinction (a few humans kept in zoos or the like, but economically unimportant to the actual intelligent agents shaping the future), it'll be ... OK.  Not necessarily great compared to imaginary utopias, but far better than the worst outcomes.   Almost certainly better than any ancient person could have expected.  

Replies from: jbash
comment by jbash · 2024-06-09T16:22:22.684Z · LW(p) · GW(p)

effective-extinction (a few humans kept in zoos or the like, but economically unimportant to the actual intelligent agents shaping the future)

Do you really mean to indicate that not running everything is equivalent to extinction?

Replies from: Dagon
comment by Dagon · 2024-06-10T00:35:57.125Z · LW(p) · GW(p)

Pretty much, yes.  Total loss of power and value is pretty much slow/delayed extinction.  It's certainly cultural extinction. 

Note that I forgot to say that I put some weight/comfort in thinking there are some parts of mindspace which an AI could include that are nearly as good as (or maybe better than) biologicals.  Once everyone I know and everyone THEY know are dead, and anything I recognize as virtue has mutated beyond my recognition, it's not clear what preferences I would have about the ongoing civilization.  Maybe extinction is an acceptable outcome.

Replies from: jbash
comment by jbash · 2024-06-10T20:11:53.351Z · LW(p) · GW(p)

What does "value" mean here? I seriously don't know what you mean by "total loss of value". Is this tied to your use of "economically important"?

I personally don't give a damn about anybody else depending on me as the source of anything they value, at least not with respect to anything that's traditionally spoken of as "economic". In fact I would prefer that they could get whatever they wanted without involving me, and I could get whatever I wanted without involving them.

And power over what? Most people right this minute have no significant power over the wide-scale course of anything.

I thought "extinction", whether for a species or a culture, had a pretty clear meaning: It doesn't exist any more. I can't see how that's connected to anything you're talking about.

I do agree with you about human extinction not necessarily being the end of the world, depending on how it happens and what comes afterwards... but I can't see how loss of control, or value, or whatever, is connected to anything that fits the word "extinction". Not physical, not cultural, not any kind.

Replies from: Dagon
comment by Dagon · 2024-06-11T19:28:23.459Z · LW(p) · GW(p)

"value" means "net positive to the beings making decisions that impact me".  Humans claim to and behave as if they care about other humans, even when those other humans are distant statistical entities, not personally-known.  

The replacement consciousnesses will almost certainly not feel the same way about "legacy beings", and to the extent they preserve some humans, it won't be because they care about them as people, it'll be for more pragmatic purposes.  And this is a very fragile thing, unlikely to last more than a few thousand years.

In fact I would prefer that they could get whatever they wanted without involving me, and i could get whatever I wanted without involving them.

Sure, but they can't, and you can't.  They can only get what other humans give/trade/allow to them, and you are in the same boat.  "whatever you want" includes limited exclusive-use resources, and if it's more valuable (overall, for the utility functions of whatever's making the decisions) to eliminate you than to share those resources, you'll be eliminated.

comment by jbash · 2024-06-09T15:24:40.620Z · LW(p) · GW(p)

I think I understand what you mean.

There are definitely possible futures worse than extinction. And some fairly likely ones that might not be worse than extinction but would still suck big time, varying from something comparable to a forced move to something a damn sight worse than moving to anywhere that presently exists. I'm old enough to have already had some disappointments (alongside some positive surprises) about how the "future" has turned out. I could easily see how I could get a lot worse ones.

But what are we meant to do with what you've posted and how you've framed it?

Also, if somebody does have the "non-extinction => good" mindset, I suspect they'll be prone to read your post as saying that change in itself is unacceptable, or at least that any change that every single person doesn't agree to is unacceptable. Which is kind of a useless position since, yeah, there will always be change, and things not changing will also always make some people unhappy.

I've gotta say that, even though I definitely worry about non-extinction dystopias, and think that they are, in the aggregate, more probable than extinction scenarios... your use of the word "meaning" really triggered me. That truly is a word people use really incoherently.

Maybe some more concrete concerns?

comment by Jozdien · 2024-06-10T01:46:07.962Z · LW(p) · GW(p)

From Cognitive Biases Potentially Affecting Judgment of Global Risks:

In addition to standard biases, I have personally observed what look like harmful modes of thinking specific to existential risks. The Spanish flu of 1918 killed 25-50 million people. World War II killed 60 million people. 10⁸ is the order of the largest catastrophes in humanity’s written history. Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking—enter into a “separate magisterium.” People who would never dream of hurting a child hear of an existential risk, and say, “Well, maybe the human species doesn’t really deserve to survive.”

There is a saying in heuristics and biases that people do not evaluate events, but descriptions of events—what is called non-extensional reasoning. The extension of humanity’s extinction includes the death of yourself, of your friends, of your family, of your loved ones, of your city, of your country, of your political fellows. Yet people who would take great offense at a proposal to wipe the country of Britain from the map, to kill every member of the Democratic Party in the U.S., to turn the city of Paris to glass—who would feel still greater horror on hearing the doctor say that their child had cancer— these people will discuss the extinction of humanity with perfect calm. “Extinction of humanity,” as words on paper, appears in fictional novels, or is discussed in philosophy books—it belongs to a different context than the Spanish flu. We evaluate descriptions of events, not extensions of events. The cliché phrase end of the world invokes the magisterium of myth and dream, of prophecy and apocalypse, of novels and movies. The challenge of existential risks to rationality is that, the catastrophes being so huge, people snap into a different mode of thinking. Human deaths are suddenly no longer bad, and detailed predictions suddenly no longer require any expertise, and whether the story is told with a happy ending or a sad ending is a matter of personal taste in stories.

My experience has also been that this is very true.

Replies from: Lorxus
comment by Lorxus · 2024-06-11T03:34:36.984Z · LW(p) · GW(p)

Counterpoint: "so you're saying I could guarantee taking every single last one of those motherfuckers to the grave with me?"

comment by trevor (TrevorWiesinger) · 2024-06-14T21:52:17.924Z · LW(p) · GW(p)

Weird coincidence: I was just thinking about Leopold's bunker concept from his essay. It was a pretty careless paper overall, but the imperative to put alignment research in a bunker makes perfect sense; I don't see the surface as a viable place for people to get serious work done (at least, not in densely populated urban areas; somewhere in the desert would count as a "bunker" in this case so long as it's sufficiently distant from passersby and the sensors and compute in their phones and cars).

Of course, this is unambiguously a necessary evil: a tiny handful of people are going to have to choose to live in a sad, uncomfortable place for a while, and only because there's no other option and it's obviously the correct move for everyone everywhere, including the people in the bunker.

Until the basics of the situation start somehow getting taught in classrooms or something, we're going to be stuck with a ludicrously large proportion of people satisfied with the kind of bite-sized, convenient takes that got us into this whole unhinged situation in the first place (or who have no thoughts at all [LW · GW]).

comment by Jiro · 2024-06-10T05:17:55.224Z · LW(p) · GW(p)

If a tech company forced me to move to NYC, I would object for a combination of two separate reasons: 1--any change in my life is going to be hard--it may take me away from people I know, I need to learn the geography again, I live in Lothlórien right now and if I move to NYC nobody speaks Quenya, etc. And 2--things that are specific about NYC above and beyond the fact that change is going to be a problem by itself; for instance, I might hate subways, and I might hate subways whether I'm exposed to lots of them or not.

#2 can be a personal problem for me, but I notice that people in NYC aren't, on average, less happy than people who live elsewhere, so it seems like #2 isn't a real issue when averaged over the whole population. #1 can be an issue even averaged over the whole population, of course, but #1 isn't unique to moving to NYC, and applies to a whole bunch of other changes, to the point where it's most of the way to being a fully general argument against any change.

I'd expect the same to be true in the case of AI: The "change is a problem" component is negative, but it's no worse than any other sort of change, and the "AI specifically is a problem" component would include some people who are harmed and some people who benefit and overall it's going to be a wash.

Or to put it another way, just because I wouldn't want a tech company to move me to NYC, that doesn't imply that NYC is a worse place to live than where I am now.

comment by wolajacy · 2024-06-09T18:11:20.072Z · LW(p) · GW(p)

Each time, you can also apply this argument in reverse: I don't like X about my city, so I'm happy that, in the future, the company will relocate me to NYC. And since NYC is presumed to be overall better, there are more instances of the latter than of the former.

It seems to me you are taking the argument seriously, but very selectively.

(I think both kinds of thoughts pretty often, and I'm overall happy about the incoming move).

comment by Lao Mein (derpherpize) · 2024-06-20T15:39:42.054Z · LW(p) · GW(p)

I'd give my right eye in exchange for the chance to live in NYC.

comment by O O (o-o) · 2024-06-09T07:34:14.267Z · LW(p) · GW(p)

NYC is a great city. Many tech workers I know are trying to move to NYC, especially younger ones. So, not the best example, but I get your point. 

Replies from: NeroWolfe
comment by NeroWolfe · 2024-06-09T22:48:59.678Z · LW(p) · GW(p)

I think the OP's point is that while YOU might think NYC is a great place, not everybody does. One of the nice things about the current model is that you can move to NYC if you want to, but you don't have to. In the hypothetical All-AGI All Around The World future, you get moved there whether or not you like it. Some people will like it, but it's worth thinking about the people who won't, and considering what you might do to make that future better for them as well.

Replies from: o-o
comment by O O (o-o) · 2024-06-10T18:30:24.351Z · LW(p) · GW(p)

I think this post was supposed to be some sort of gotcha aimed at SF AI optimists, given how this is worded:

 Yet I would bet that even that person, if faced instead with a policy that was going to forcibly relocate them to New York City, would be quite indignant


but in reality a lot of tech workers without family here would gladly move to NYC.[1]

A better example would be Dubai. Objectively not a bad city, and you could possibly make a lot more money tax-free, but for obvious reasons you'd be hesitant. Still, I don't think this is that huge of a gotcha. The type of people this post is targeting are generally risk-tolerant. So yeah, if you effectively tripled their pay and made them move to Dubai, they'd take it with high likelihood.

  1. ^

    I don't get the "misses the point" reaction, as I'm pretty sure this was the true motivation of the post; think about it: who could they be talking about for whom an NYC relocation is within the realm of possibility, who are tech workers, and who are chill with AI transformations?

comment by Ariel Kwiatkowski (ariel-kwiatkowski) · 2024-06-09T10:42:33.741Z · LW(p) · GW(p)

This seems like a rather silly argument. You can apply it to pretty much any global change, any technological progress. The world changes, and will change. You can be salty about it, or you can adapt.