Doom sooner

post by Flaglandbase · 2022-04-28

It took me a long time to come around to this view, because I think software as it currently exists is fantastically weak and infuriatingly defective - criminally so, in fact. But after reading a lot of the posts here, AI research suddenly seems far more dangerous to me than it did before.
The basic truth of any complex system is that it's always more complex than it seems. Most projects take longer than planned. However, if you repeat a process often enough, it can be done faster.
Multicellular life took eons to evolve. Animals took millions of centuries to develop intelligence. Primitive humans were stuck in the Stone Age for thousands of centuries.

Right now, society is about as dumb and inefficient as it can get away with. The most powerful force in the world is the implacable consensus found everywhere. There are too few geniuses to overcome the sometimes monstrously deliberate inefficiencies of life.

For those reasons, it seems probable that developing something as complex as Artificial Superintelligence will take at least several decades, and only with a great deal of effort - by which I mean that completely unexpected delays will keep arising and slowing things down.
Yet it's the only thing that might possibly save us, the closest thing to a magic genie. 

The posts on this site make a powerful case that when the first AI does develop superintelligence, it will likely not be "well-rounded" but hyper-focused on some inadequately defined goal. Having less general intelligence will not make it less dangerous: the threat range may be "smaller," but no less deadly.

What is the simplest way a brute-force AI could run amok? All it would take is one super clever idea, like the easiest possible self-replicating nanobot, DNA-rewriting meta-viruses, or even social memes that manipulate personalities. We vastly underestimate how badly things could go wrong. Just dropping a test tube of bat guano can crash the world economy for three years.

Open-ended software entities running on sufficiently powerful hardware are likely to be controlled by nations or large corporations. Due to their extreme cost, and thanks to fears generated on this website and others, it may be possible to impose worldwide restrictions on such projects. For example, they could be allowed to run only on a shared global network, with many "kill switches".

The real danger comes from smaller AI projects using cobbled-together supercomputers or rented CPU farms. These will also arrive sooner.
Any effort to anticipate how they might go wrong would itself generate dangerous new ideas. There are a million ways the biosphere could be poisoned or society disrupted (even this extremely obscure blog post could be dangerous, though the expected cost from the increased risk of human obliteration could hardly be more than a dollar or two).

Anyway, the point of this post is very simple: we don't have to worry about the threat of artificially intelligent entities destroying humanity - not in the least.
Long before then, a vast array of semi-intelligent software will be able to obliterate the world just as thoroughly. 
If we manage to overcome that threat, the things we learn then will prepare us for full AI far better than anything we can imagine now.

For that reason, smaller AI projects should also have mandatory oversight (without excessive costs being imposed); otherwise they should not be allowed to benefit from any discoveries they make. Copyright and patents only work if most countries enforce them, so only a few countries would need to pass pro-alignment legislation.
For AI to be controlled, the whole world would have to be open to full inspection for global safety risks, including areas that seemingly have nothing to do with AI. (I wrote an incredibly obscure novel about such inspectors; I've also been told that the female characters especially are written in a very unrealistic way, so it may not be too readable.) Global inspection would only be practical if violations of needlessly intrusive laws (like those against non-violent crimes) were never prosecuted as a result of the inspection process.

Again, the principle of mediocrity applies. There is likely a limit to how much damage early AI projects can do, unless we get very unlucky.
Perhaps we will be protected from an all-encompassing Singularity takeover by several pre-Singularity crises that help us prepare better. Of course millions of people would have to die first. I tend to think that is how it will go.

I also want to repeat my unpopular proposal that we not rely on super-AI tools to solve the problem of human mortality for us, but also work on that problem directly. (That includes minor philosophical questions, like what the highest ethical principle across all reality would be.)
