Discussion: weighting inside view versus outside view on extinction events

post by Ilverin the Stupid and Offensive (Ilverin) · 2016-02-25T05:18:15.378Z · LW · GW · Legacy · 4 comments

Articles covering the ideas of inside view and outside view:

Beware the Inside View (by Robin Hanson)

Outside View LessWrong wiki article

Article discussing the weighting of inside view and outside view:

The World is Mad (by ozymandias)

 

A couple of potential extinction risks that seem easiest to mitigate (the machinery involved is expensive):

Broadcasting powerful messages to the stars: 

Should Earth Shut the Hell Up? (by Robin Hanson)

Arecibo message (Wikipedia)

Large Hadron Collider: 

Anyone who thinks the Large Hadron Collider will destroy the world is a t**t. (by Rebecca Roache)

 

How should the inside view versus the outside view be weighted when considering extinction events?

Should the broadcast of future Arecibo messages (or powerful signals in general) be opposed?

Should raising the energy levels of the Large Hadron Collider (or its continued operation at all) be opposed?
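As a purely illustrative sketch of one way the weighting question could be made concrete (nothing proposed in the post or comments; the weights and example probabilities below are assumptions for illustration only), the inside-view and outside-view probability estimates could be averaged in log-odds space:

# Purely illustrative: combine an inside-view and an outside-view probability
# estimate by taking a weighted average in log-odds space. The weights and the
# example numbers are assumptions for illustration only.
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def combine(p_inside, p_outside, w_inside=0.5):
    """Weighted average of the two probability estimates in log-odds space."""
    x = w_inside * logit(p_inside) + (1.0 - w_inside) * logit(p_outside)
    return inv_logit(x)

# Hypothetical numbers: insiders estimate a 1-in-a-million risk,
# outsiders a 1-in-a-thousand risk, and we trust each side equally.
print(combine(1e-6, 1e-3, w_inside=0.5))  # ~3.2e-05

Averaging in log-odds space rather than in raw probabilities keeps one confident estimate from simply swamping the other; how much weight the inside view actually deserves is exactly the open question here.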

4 comments

Comments sorted by top scores.

comment by turchin · 2016-02-25T09:08:48.367Z · LW(p) · GW(p)

I have noticed that people inside such projects tend to claim that the projects are safer than outsiders do, which is not surprising. The same is true of AI research: most concerns about AI safety come from outsiders, not from the actual researchers.

I think that insiders tend to underestimate risks because they have an interest in the project continuing, and because of a selection effect: those who did not think it wise to proceed have already left. It is like driving: everyone can see that someone else is a reckless driver, but he himself is sure that he is fine.

Outsiders tend to overestimate risks, as they may not see the internal safety mechanisms.

In general, I think we need an independent safety committee to evaluate the safety of all such projects. And I oppose any project, including the LHC, whose safety was not assessed by an independent body BEFORE it started.

Replies from: HungryHobo
comment by HungryHobo · 2016-02-25T15:57:35.222Z · LW(p) · GW(p)

Keep in mind that independent safety committees are not free. Suddenly everything costs more time, money, and energy.

Does my friend's AI-based master's thesis on translation need to be evaluated?

Realistically, we're at more risk from the world's ants suddenly turning on us than from his translation code, but at this point you have an Independent Safety Committee and the precept that AI-related projects need to be assessed.

Suddenly he'd find that he has to spend weeks preparing applications and reports for the committee, and if they're inept they could kill his whole project for no good reason.

If real risks are very, very rare and realistic concerns hard to come by, your committee either becomes simply damaging, killing projects occasionally to justify its existence while contributing nothing of value, or becomes bored and simply starts rubber-stamping everything.

If the last 10,000 projects to cross their desk have all been obviously, clearly ~zero risk and none have been clearly high risk, then they are also likely to simply start rubber-stamping everything, because they have better things to do with their lives.

It's important to delay the creation of any Independent Safety Committee until there's a realistic risk.

Sometimes lots of batshit-insane people start coming up with theories, and in those cases the purpose of an "Independent Safety Committee" is to have some experts drone in a bored voice, "No, it really isn't likely to destroy the world; it's more likely to spontaneously spit out a fully formed live unicorn," in the hope that it will cause some of the more sane members of the baying mob to relax and go home.

Replies from: turchin
comment by turchin · 2016-02-25T20:15:14.121Z · LW(p) · GW(p)

In fact, this is the way safety evaluation of new drugs is done by the FDA.

Your example is clearly ad hoc. But what if your friend were to create a new virus that would kill all the mosquitoes in his neighborhood? That is plausible within the next 5 years. Should he proceed without any oversight?

In the case of many similar projects, a law could be established, such as: do not release self-replicating units into the wild, or do not create AI projects that are going to change their own source code.

Replies from: HungryHobo
comment by HungryHobo · 2016-02-29T13:51:44.948Z · LW(p) · GW(p)

Feeding biologically active compounds to large numbers of humans has a long track record of being dangerous in a reasonably large proportion of cases.

The FDA was created once there was a realistic, significant risk.

Similarly, if you want to release pathogens or modified animals, there's already a history of adverse events and a reasonable chance of non-zero risk. Even without GM, we've had killer bees from normal crossbreeding. There's an established pattern of realistic, significant risk.

There are already lots of ~zero-risk AI projects that change their own source code. Any law that bans Tierra or Avida is, likewise, a poorly thought-out law.