post by [deleted] · GW

This is a link post for

Comments sorted by top scores.

comment by cousin_it · 2018-11-25T12:50:04.808Z · LW(p) · GW(p)

There's a huge, mysterious gap between the possibility of terrorism and actual terrorism. To leave a city without power or water, you only need a handful of ex-military people without any special tech, and the pool of people capable of that today is surely larger than the pool of machine learning enthusiasts. So why isn't that happening all the time? It seems that we are protected in many ways: there are fewer terrorists than we think, they are less serious about it, fewer of them can cooperate with each other, a larger proportion than we think are already being tracked, etc.

So before building a global singleton, I'd look into why current measures work so well, and how they can be expanded. You could probably get >99% of the x-risk protection at <1% of the cost.

Replies from: SaidAchmiz, avturchin
comment by Said Achmiz (SaidAchmiz) · 2018-11-25T19:52:08.645Z · LW(p) · GW(p)

> there are fewer terrorists than we think, they are less serious about it

Gwern wrote about this in his essay “Terrorism is not about terror”.

comment by avturchin · 2018-11-25T19:36:29.440Z · LW(p) · GW(p)

Yes, it is interesting. But a few pilots were able to cause almost apocalyptic destruction on 9/11, and it could have been even worse if they had managed to hit a nuclear power plant or the White House.

There have also been a few smaller cases of pilots deliberately crashing planes.

comment by Dagon · 2018-11-25T16:07:24.139Z · LW(p) · GW(p)

I don't think we know enough about human beliefs to say that anarchy (in the form of individual, indexical valuations) isn't a fundamental component of our CEV. We _like_ making individual choices, even when those choices are harmful or risky.

What's the friendly-AI take on removing (important aspects of) humanity in order to further intelligence preservation and expansion?

comment by jbash · 2018-11-26T17:09:23.093Z · LW(p) · GW(p)

I don't believe that present-day synthetic biology is anywhere close to being able to create "total destruction" or "almost certain annihilation"... and in fact it may never get there without more-than-human AI.

If you made super-nasty smallpox and spread it all over the place, it would suck, for sure, but it wouldn't kill everybody and it wouldn't destroy "technical civilization", either. Human institutions have survived that sort of thing. The human species has survived much worse. Humans have recovered from really serious population bottlenecks.

Even if it were easy to create any genome you wanted and put it into a functioning organism, nobody knows how to design such a genome. Biology is monstrously complicated. It's not even clear that a human can hold enough of that complexity in mind to ever design a weapon of total destruction. Such a weapon might not even be possible; there are always going to be oddball cases where it doesn't work.

For that matter, you're not even going to be creating super smallpox in your garage, even if you get the synthesis tools. An expert could maybe identify some changes that might make a pathogen worse, but they'd have to test it to be sure. On human subjects. Many of them. Which is conspicuous and expensive and beyond the reach of the garage operator.

I actually can't think of anything already built or specifically projected that you could use to reliably kill everybody or even destroy civilization... except maybe for the AI. Nanotech without AI wouldn't do it. And even the AI involves a lot of unknowns.

comment by avturchin · 2018-11-25T09:57:50.855Z · LW(p) · GW(p)

I have the following idea for how to solve this conundrum. A global control system capable of finding all dangerous agents could be created using narrow AI, not a superintelligent agential AI. This might look like ubiquitous surveillance with face and action recognition capabilities.

Another part of this Narrow AI Nanny is its capability to provide a decisive strategic advantage to its owner and help them quickly take over the world (for example, by leveraging nuclear strategy and military intelligence), which is needed to prevent dangerous agents from appearing in other countries.

Yes, it looks like totalitarianism, especially the Chinese version. But extinction is worse than totalitarianism. I lived most of my life under totalitarian regimes, and, I hate to say it, 90 percent of the time life under them is normal. So totalitarianism is survivable, and calling it an x-risk is an overestimation.

I wrote more about the idea here: https://www.lesswrong.com/posts/7ysKDyQDPK3dDAbkT/narrow-ai-nanny-reaching-strategic-advantage-via-narrow-ai