rvnnt's Shortform
post by rvnnt · 2025-02-13T15:21:33.046Z · LW · GW · 3 comments
Comments sorted by top scores.
comment by rvnnt · 2025-02-13T15:21:33.039Z · LW(p) · GW(p)
A potentially somewhat important thing which I haven't seen discussed:
- People who have a lot of political power or own a lot of capital are unlikely to be adversely affected if (say) 90% of human labor becomes obsolete and replaced by AI.
- In fact, so long as property rights are enforced, and humans retain a monopoly on decisionmaking/political power, such people are not-unlikely to benefit from the economic boost that such automation would bring.
- Decisions about AI policy are mostly determined by people with a lot of capital or political power. (E.g. Andreessen Horowitz, JD Vance [LW · GW], Trump, etc.)
(This looks like a "the decisionmaker is not the beneficiary"-type of situation.)
Why does that matter?
- It has implications for modeling decisionmakers, interpreting their words, and for how to interact with them.[1]
- If we are in a gradual-takeoff world[2], then we should perhaps not be too surprised to see the wealthy and powerful push for AI-related policies that make them more wealthy and powerful, while a majority of humans become disempowered and starve to death (or live in destitution, or get put down with viruses or robotic armies, or whatever). (OTOH, I'm not sure whether that possibility can be planned or prepared for, so maybe it's irrelevant, actually?)
1. For example: we maybe should not expect decisionmakers to take risks from AI seriously until they realize those risks include a high probability of "I, personally, will die". As another example: when people like JD Vance output rhetoric like "[AI] is not going to replace human beings. It will never replace human beings" [LW · GW], we should perhaps not just infer that "Vance does not believe in AGI", but instead also assign some probability to hypotheses like "Vance thinks AGI will in fact replace lots of human beings, just not him personally; and he maybe does not believe in ASI, or imagines he will be able to control ASI". ↩︎
2. Here I'll define "gradual takeoff" very loosely as "a world in which there is a >1 year window during which it is possible to replace >90% of human labor, before the first ASI comes into existence". ↩︎
↑ comment by Dagon · 2025-02-13T20:52:01.632Z · LW(p) · GW(p)
> People who have a lot of political power or own a lot of capital are unlikely to be adversely affected if (say) 90% of human labor becomes obsolete and replaced by AI.
That's certainly the hope of the powerful. It's unclear whether there is a tipping point where the 90% decide not to respect the on-paper ownership of capital.
> so long as property rights are enforced, and humans retain a monopoly on decisionmaking/political power, such people are not-unlikely to benefit from the economic boost that such automation would bring.
Don't use passive voice for this. Who is enforcing which rights, and how well can they maintain the control? This is a HUGE variable that's hard to control in large-scale social changes.
↑ comment by rvnnt · 2025-02-14T13:55:20.601Z · LW(p) · GW(p)
> It's unclear whether there is a tipping point where [...]
Yes. Also unclear whether the 90% could coordinate to take any effective action, or whether any effective action would be available to them. (Might be hard to coordinate when AIs control/influence the information landscape; might be hard to rise up against e.g. robotic law enforcement or bioweapons.)
> Don't use passive voice for this. [...]
Good point! I guess one way to frame that would be:
> By what kind of process do the humans in law enforcement, military, and intelligence agencies get replaced by AIs? Who/what is in effective control of those systems (or their successors) at various points in time?
And yeah, that seems very difficult to predict or reliably control. OTOH, if someone were to gain control of the AIs (possibly even copies of a single model?) that are running all the systems, that might make centralized control easier? </wild, probably-useless speculation>