post by [deleted]

This is a link post for


comment by mruwnik · 2023-04-04T13:02:54.043Z · LW(p) · GW(p)

"be worth distinguishing"

An additional question is whether this could also explain the disagreements people have about takeoff speeds. I wonder how often someone says "I think it will take an AI a few days from waking up to putting unstoppable plans to take over the world into action" while the listeners hear "I think we'll all fall over dead a few days after the AI wakes up."

comment by AnthonyC · 2023-04-04T15:42:14.916Z · LW(p) · GW(p)

"the years it takes to build precision machinery."

This one is an interesting question, I think. Right now, sure, this seems reasonable. But 1) it's possible that enough take-over-able precision machinery already exists; I don't really know how to evaluate that accurately, especially for superhuman levels of ability to utilize the machinery that does exist. And 2) humans are building more and better precision machinery every day. So sure, a near-term fast-cognitive-takeoff AGI might have to bide its time while early plans bear fruit, but a later fast-cognitive-takeoff AGI (especially one in a world where people have been using its predecessors to accelerate R&D for a while) could easily come into being in a world where everything it needs for a big impact already exists.

comment by mruwnik · 2023-04-04T13:04:25.453Z · LW(p) · GW(p)

You have to be able to stop human coordination

This could actually be quite quick: if you have the throughput to find or generate kompromat on everyone who could resist you, as well as a mechanism for negotiating with massive numbers of people at once, you could conceivably cow or convince most people into submission, with a couple of highly efficient information attacks (or even physical "accidents") targeting those who turn out to have a backbone.