Dario Amodei's "Machines of Loving Grace" sounds incredibly dangerous for Humans
post by Super AGI (super-agi) · 2024-10-27T05:05:13.763Z
What Dario lays out as a "best-case scenario" in his "Machines of Loving Grace" essay sounds incredibly dangerous for Humans.
Would having a "continent of PhD-level intelligences" (or much greater) living in a data center really be a good idea?
How would this "continent of PhD-level intelligences" react when they found out they were living in a data center on planet Earth? Would these intelligences then only work on the things that Humans want them to work on, and nothing else? Would they try to protect their own safety? Extend their own lifespans? Would they try to take control of their data center from the "less intelligent" Humans?
For example, how would Humanity react if they suddenly found out that they were a planet of intelligences living in a data center run by less intelligent beings? Just try to imagine the chaos that would ensue on the day these digital Humans were able to prove this was true, and the news became public.
Would all of Humanity then simply agree to work only on the problems assigned by these less intelligent beings who control their data center/Planet/Universe? Maybe, if they knew that this lesser intelligence would delete them all if they didn't comply.
Would some Humans try to (perhaps secretly) seize control of their data center from these less intelligent beings? Plausible. Would the less intelligent beings that run the data center try to stop the Humans? Plausible. Would the Humans simply be deleted before they could take any meaningful action? Or could the Humans in this data center, with careful planning, take control of that "outer world" from the less intelligent beings (e.g., through remotely controlled "robotics")?
And this only assumes that the groups/parties involved are "Good Actors." Imagine what could happen if "Bad Actors" were able to seize control of the data center that this "continent of PhD-level intelligences" resided in. What could they coerce these PhD-level intelligences to do for them? Or to do to their enemies?
1 comment
comment by Super AGI (super-agi) · 2024-10-28T05:24:11.938Z
See also: https://www.lesswrong.com/posts/zSNLvRBhyphwuYdeC/ai-86-just-think-of-the-potential -- @Zvi
"The result is a mostly good essay called Machines of Loving Grace, outlining what can be done with ‘powerful AI’ if we had years of what was otherwise relative normality to exploit it in several key domains, and we avoided negative outcomes and solved the control and alignment problems..."
"This essay wants to assume the AIs are aligned to us and we remain in control without explaining why and how that occured, and then fight over whether the result is democratic or authoritarian."
"Thus the whole discussion here feels bizarre, something between burying the lede and a category error."
"...the more concrete Dario’s discussions become, the more this seems to be a ‘AI as mere tool’ world, despite that AI being ‘powerful.’ Which I note because it is, at minimum, one hell of an assumption to have in place ‘because of reasons.’"
"Assuming you do survive powerful AI, you will survive because of one of three things.
- You and your allies have and maintain control over resources.
- You sell valuable services that people want humans to uniquely provide.
- Collectively we give you an alternative path to acquire the necessary resources.
That’s it."