Changing accepted public opinion and Skynet

post by Roko · 2009-05-22T11:05:08.878Z · LW · GW · Legacy · 71 comments


Michael Annisimov has put up a website called Terminator Salvation: Preventing Skynet, which will host a series of essays on the topic of human-friendly artificial intelligence. Three rather good essays are already up there, including an old classic by Eliezer. The association with a piece of fiction is probably unhelpful, but the publicity surrounding the new Terminator film likely makes up for it.

What rational strategies can we employ to maximize the impact of such a site, or of publicity for serious issues in general? Most people who read this site will probably not do anything about it, or will find some reason to not take the content of these essays seriously. I say this because I have personally spoken to a lot of clever people about the creation of human-friendly artificial intelligence, and almost everyone finds some reason to not do anything about the problem, even if that reason is "oh, ok, that's interesting. Anyway, about my new car... ".

What is the reason underlying people's indifference to these issues? My personal suspicion is that most people make decisions in their lives by following what everyone else does, rather than by performing a genuine rational analysis.

Consider the rise in social acceptability of making small personal sacrifices and political decisions based on eco-friendliness and your carbon footprint. Many people I know have become very enthusiastic about recycling used food containers and unplugging appliances that use trivial amounts of power (for example, unused phone chargers and electrical equipment on standby). The real reason that people do these things is that they have become socially accepted factoids. Most people in this world, even in this country, lack the mental faculties and knowledge to understand and act upon an argument involving notions of per capita CO2 emissions; instead they respond, at least in my understanding, to the general climate of acceptable opinion, and to opinion formers such as the BBC news website, which has a whole section for "science and environment". Now, I don't want to single out environmentalism as the only issue where people form their opinions based upon what is socially acceptable to believe, or to claim that reducing our greenhouse gas emissions is not a worthy cause.

Another great example of socially acceptable factoids (though probably a less serious one) is the detox industry - see, for example, this Times article. I quote:

“Whether or not people believe the biblical story of the Virgin birth, there are plenty of other popular myths that are swallowed with religious fervour over Christmas,” said Martin Wiseman, Visiting Professor of Human Nutrition at the University of Southampton. “Among these is the idea that in some way the body accumulates noxious chemicals during everyday life, and that they need to be expunged by some mysterious process of detoxification, often once a year after Christmas excess. The detox fad — or fads, as there are many methods — is an example of the capacity of people to believe in (and pay for) magic despite the lack of any sound evidence.”

Anyone who takes a serious interest in changing the world would do well to understand the process whereby public opinion as a whole changes on some subject, and to attempt to influence that process in an optimal way. How strongly is public opinion correlated with scientific opinion, for example? Particular attention should be paid to the history of the environmentalist movement. See, for example, MacKay's Sustainable Energy - Without the Hot Air for a great example of a rigorous quantitative analysis in support of various ways of balancing our energy supply and demand, and for a great take on the power of socially accepted factoids, see Phone chargers - the Truth.

So I submit to the wisdom of the Less Wrong groupmind: what can we do to efficiently change the opinion of millions of people on important issues such as friendly AI? Is a site such as the one linked above going to have the intended effect, or is it going to fall upon rationally deaf ears? What practical advice could we give to Michael and his contributors that would maximize the impact of the site? What other interventions might be a better use of his time?

Edit: Thanks to those who made constructive suggestions for this post. It has been revised - R

71 comments

Comments sorted by top scores.

comment by mattnewport · 2009-05-22T23:01:56.045Z · LW(p) · GW(p)

Your environmentalism examples raise another issue. What good is it convincing people of the importance of friendly AI if they respond with similarly ineffective actions? If widespread acceptance of the importance of the environment has led primarily to ineffective behaviours like unplugging phone chargers, washing and sorting containers for recycling and other activities of dubious benefit while at the same time achieving little with regards to reductions in CO2 emissions or slowing the destruction of rainforests then why should we expect widespread acceptance of the importance of friendly AI to actually aid in the development of friendly AI?

Other than donating to the singularity institute it is not even obvious to me what the average person could do to 'further the cause' if they were to accept its importance. There seems a fairly high chance that you would instead get useless or counter productive responses given widespread popular acceptance.

Replies from: Roko
comment by Roko · 2009-05-23T01:15:46.133Z · LW(p) · GW(p)

widespread acceptance of the importance of the environment has led primarily to ineffective behaviours like unplugging phone chargers, washing and sorting containers for recycling and other activities of dubious benefit while at the same time achieving little with regards to reductions in CO2 emissions or slowing the destruction of rainforests then why should we expect widespread acceptance of the importance of friendly AI to actually aid in the development of friendly AI?

I should stress that there have been some important bits of progress that came about as a result of changing public opinion, for example the Stern Review. The UK government is finally getting its act together with regard to a major hydroelectric project on the Severn estuary, and we have decided to build new nuclear plants. There is a massive push to develop good energy technologies, in fields such as synthetic biology, nuclear fusion research and large-scale solar. Not to mention advances in wind technology, etc., etc.

The process seems to go

Public opinion --> serious research and public policy planning --> solutions

comment by Nominull · 2009-05-22T17:13:04.309Z · LW(p) · GW(p)

When Eliezer writes about the "miracle" of evolved morality, he reminds me of that bit from H.M.S. Pinafore where the singers are heaping praise on Ralph Rackstraw for being born an Englishman "despite all the temptations to belong to other nations". We can imagine that they might have sung quite a similar song in French.

Replies from: thomblake, Nick_Tarleton
comment by thomblake · 2009-05-22T17:24:07.183Z · LW(p) · GW(p)

In The Salmon of Doubt, Douglas Adams employs the metaphor of a puddle of water marveling that the pothole it inhabits seems perfectly suited for it.

comment by Nick_Tarleton · 2009-05-22T22:59:20.635Z · LW(p) · GW(p)

The Gift We Give To Tomorrow:

"Because it's only a miracle from the perspective of the morality that was produced, thus explaining away all of the apparent coincidence from a merely causal and physical perspective?"

Well... I suppose you could interpret the term that way, yes. I just meant something that was immensely surprising and wonderful on a moral level, even if it is not surprising on a physical level.

Replies from: nazgulnarsil
comment by nazgulnarsil · 2009-05-23T17:19:57.309Z · LW(p) · GW(p)

I dub thee the weak anthropic morality principle.

comment by derekz · 2009-05-22T12:49:39.833Z · LW(p) · GW(p)

One thing that might help change the opinion of people about friendly AI is to make some progress on it. For example, if Eliezer has had any interesting ideas about how to do it in the last five years of thinking about it, it could be helpful to communicate them.

A case that is credible to a large number of people needs to be made that this is a high-probability near-term problem. Without that it's just a scary sci-fi movie, and frankly there are scarier sci-fi movie concepts out there (e.g. bioterror). Making an analogy with a nuclear bomb is simply not an effective argument. People were not persuaded about global warming with a "greenhouse" analogy. That sort of thing creates a sort of dim level of awareness, but "AI might kill us" is not some new idea; everybody is already aware of that -- just like they are aware that a meteor might wipe us out, aliens might invade, or an engineered virus or new life form could kill us all. Which of those things get attention from policy-makers and their advisers, and why?

Besides the weakness of relying on analogy, this analogy isn't even all that good -- it takes concerted, targeted technical effort to make a nuclear FOOM fast enough to "explode". It's a reasonably simple matter to make it FOOM slowly and provide us with electrical power to enhance our standard of living.

If the message is "don't build Skynet", funding agencies will say "ok, we won't fund Skynet" and AI researchers will say "I'm not building Skynet". If somebody is working on a dangerous project, name names and point fingers.

Give a chain of reasoning. If some of you rationalists have concluded a significant probability of an AI FOOM coming soon, all you have to do is explicate the reasoning and probabilities involved. If your conclusion is justified, if your ratiocination is sound, you must be able to explicate it in a convincing way, or else how are you so confident in it?

This isn't really an "awareness" issue -- because it's scary and in some sense reasonable it makes a great story, thus hour after hour of TV, movie blockbusters stretching back through decades, novel after novel after novel.

Make a convincing case and people will start to be convinced by it. I know you think you have already, but you haven't.

Replies from: Douglas_Knight, Roko, steven0461
comment by Douglas_Knight · 2009-05-22T16:27:49.121Z · LW(p) · GW(p)

This bears repeating:

If the message is "don't build Skynet", funding agencies will say "ok, we won't fund Skynet" and AI researchers will say "I'm not building Skynet".

(I think your comment contained a couple of unrelated pieces that would have been better in separate comments.)

comment by Roko · 2009-05-22T17:55:09.937Z · LW(p) · GW(p)

One thing that might help change the opinion of people about friendly AI is to make some progress on it. For example, if Eliezer has had any interesting ideas about how to do it in the last five years of thinking about it, it could be helpful to communicate them.

I disagree strongly. World atmospheric carbon dioxide concentration is still increasing, and indeed the rate at which it is increasing is increasing (i.e. CO2 output per annum is rising), so antiprogress is being made on the global warming problem - yet people still think it's worth putting more effort into, rather than simply giving up.

A case that is credible to a large number of people needs to be made that this is a high-probability near-term problem. Without that it's just a scary sci-fi movie, and frankly there are scarier sci-fi movie concepts out there (e.g. bioterror).

Anthropogenic global warming is a low-probability, long-term problem. At least, the most SERIOUS consequences of anthropogenic global warming are long term (e.g. 2050 plus) and low probability (though no scientist would put a number on the probability of human extinction through global warming).

Replies from: whpearson
comment by whpearson · 2009-05-22T20:06:51.875Z · LW(p) · GW(p)

Personally I think that governmental support for reducing consumption of fossil fuels is at least partly due to energy supply concerns, both in terms of abundance (oil discovery is not increasing) and political concerns (we don't want to be reliant on Russian gas).

From this view we should still try to transition away from most fossil fuel consumption, apart from perhaps coal... and it makes sense to ally with the people concerned with global warming to get the support of the populace.

Replies from: Roko
comment by Roko · 2009-05-22T20:12:09.461Z · LW(p) · GW(p)

reducing consumption of fossil fuels is at least partly due to energy supply concerns

The global warming threat is an essential reason for not using fossil fuels. There is a lot of coal and a lot of tar sand available. If we didn't care about long-term problems, we'd just use those.

Replies from: whpearson
comment by whpearson · 2009-05-22T20:39:13.133Z · LW(p) · GW(p)

Coal can be nasty for other reasons apart from greenhouse gases. How much of the coal is low sulphur?

I don't see tar sands as a complete solution; part of the energy mix, sure. But we still need to pursue alternatives.

comment by steven0461 · 2009-05-22T14:09:07.899Z · LW(p) · GW(p)

I think this is a convincing case but clearly others disagree. Do you have specific suggestions for arguments that could be expanded upon?

Replies from: derekz, whpearson
comment by derekz · 2009-05-22T14:57:40.277Z · LW(p) · GW(p)

Steven, I'm a little surprised that the paper you reference convinces you of a high probability of imminent danger. I have read this paper several times, and would summarize its relevant points thusly:

  1. We tend to anthropomorphise, so our intuitive ideas about how an AI would behave might be biased. In particular, assuming that an AI will be "friendly" because people are more or less friendly might be wrong.

  2. Through self-improvement, AI might become intelligent enough to accomplish tasks much more quickly and effectively than we expect.

  3. This super-effective AI would have the ability (perhaps just as a side effect of its goal attainment) to wipe out humanity. Because of the bias in (1) we do not give sufficient credibility to this possibility when in fact it is the default scenario unless the AI is constructed very carefully to avoid it.

  4. It might be possible to do that careful construction (that is, create a Friendly AI), if we work hard on achieving that task. It is not impossible.

The only arguments for the likelihood of imminence, despite little to no apparent progress toward a machine capable of acting intelligently in the world and rapidly rewriting its own source code, are:

A. a "loosely analogous historical surprise" -- the above-mentioned nuclear reaction analogy. B. the observation that breakthroughs do not occur on predictable timeframes, so it could happen tomorrow. C. we might already have sufficient prerequisites for the breakthrough to occur (computing power, programming productivity, etc)

I find these points to all be reasonable enough and imagine that most people would agree. The problem is going from this set of "mights" and suggestive analogies to a probability of imminence. You can't expect to get much traction for something that might happen someday; you have to link from possibility to likelihood. That people make this leap without saying how they got there is why observers refer to the believers as a sort of religious cult. Perhaps the case is made somewhere but I haven't seen it. I know that Yudkowsky and Hanson debated a closely related topic on Overcoming Bias at some length, but I found Eliezer's case to be completely unconvincing.

I just don't see it myself... "Seed AI" (as one example of a sort of scenario sketch) was written almost a decade ago and contains many different requirements. As far as I can see, none of them have seen any meaningful progress in the meantime. If multiple or many breakthroughs are necessary, let's see one of them for starters. One might hypothesize that just one magic-bullet breakthrough is necessary, but that sounds more like a paranoid fantasy than a credible scientific hypothesis.

Now, I'm personally sympathetic to these ideas (check the SIAI donor page if you need proof), and if the lack of a case from possibility to likelihood leaves me cold, it shouldn't be surprising that society as a whole remains unconvinced.

Replies from: Vladimir_Nesov, steven0461, Roko
comment by Vladimir_Nesov · 2009-05-22T15:12:35.796Z · LW(p) · GW(p)

Given the stakes, if you already accept the expected utility maximization decision principle, it's enough to become convinced that there is even a nontrivial probability of this happening. The paper seems to be adequate for snapping the reader's mind out of conviction in the absurdity and impossibility of dangerous AI.
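
As a minimal sketch of this expected-value point, with purely illustrative placeholder numbers (they are my assumptions, not anyone's actual estimates):

```python
# Illustrative only: even a modest probability attached to an existential
# catastrophe can dominate an expected-value calculation, because the stakes
# are so large. All numbers below are placeholders.

p_catastrophe = 0.01      # assumed probability of an unfriendly-AI disaster
lives_at_stake = 7e9      # rough present world population; ignores future generations

p_mundane = 0.5           # a far more "likely" risk...
lives_mundane = 1e6       # ...with far smaller stakes

print(f"{p_catastrophe * lives_at_stake:,.0f}")  # 70,000,000 expected lives lost
print(f"{p_mundane * lives_mundane:,.0f}")       # 500,000 expected lives lost
```

On these made-up numbers the low-probability risk still dominates by two orders of magnitude, which is the sense in which "even a nontrivial probability" can be enough.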

Replies from: whpearson
comment by whpearson · 2009-05-22T16:00:04.819Z · LW(p) · GW(p)

The stakes on the other side of the equation are also the survival of the human race.

Refraining from developing AI unless we can formally prove it is safe may also lead to extinction, if it reduces our ability to cope with other existential threats.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-05-23T01:49:32.996Z · LW(p) · GW(p)

"Enough" is ambiguous; your point is true but doesn't affect Vladimir's if he meant "enough to justify devoting a large amount of your attention (given the current distribution of allocated attention) to the risk of UFAI hard takeoff".

comment by steven0461 · 2009-05-22T15:05:23.467Z · LW(p) · GW(p)

Hmm, I was thinking more of being convinced there's a "significant probability", for a definition of "significant probability" that may be much lower than the one you intended. I'm not sure if I'd also claim the paper convinces me of a "high probability". Agreed that it would be more convincing to the general public if there were an argument for that. I may comment more after rereading.

Replies from: derekz
comment by derekz · 2009-05-22T15:49:13.477Z · LW(p) · GW(p)

Apparently you and others have some sort of estimate of probability distribution over time leading you to be alarmed enough to demand action. Maybe it's, say, "1% chance in the next 20 years of hard takeoff" or something like that. Say what it is and how you got to it from "conceivability" or "non-impossibility". If there is a reasoned link that can be analyzed producing such a result, it is no longer a leap of faith; it can be reasoned about rationally and discussed in more detail. Don't get hung up on the number exactly, use a qualitative measure if you like, but the point is how you got there.
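
As one concrete example of the kind of explicit link being asked for here, an assumed constant per-year hazard rate can be turned into a cumulative figure like "1% in 20 years"; the annual probability below is a placeholder assumption, not anyone's stated estimate:

```python
# Sketch: convert an assumed constant per-year probability of a hard takeoff
# into a cumulative probability over a time horizon.
annual_p = 0.0005     # placeholder assumption: 0.05% chance per year
years = 20

cumulative_p = 1 - (1 - annual_p) ** years
print(f"P(hard takeoff within {years} years) = {cumulative_p:.2%}")  # about 1%
```

The point is not the particular number but that the path from assumption to conclusion is laid out where it can be examined and disputed.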

I am not attempting to ridicule hard takeoff or Friendly AI, just giving my opinion about the thesis question of this post: "what can we do to efficiently change the opinion of millions of people..."

comment by Roko · 2009-05-23T12:46:45.833Z · LW(p) · GW(p)

Hanson's position was that something like a singularity will occur due to smarter-than-human cognition, but he differs from Eliezer by claiming that it will be a distributed intelligence analogous to the economy: trillions of smart human uploads and narrow AIs exchanging skills and subroutines.

He still ultimately supports the idea of a fast transition, based on historical transitions. I think Robin would say that something midway between 2 weeks and 20 years is reasonable. Ultimately, even if you think Hanson has the stronger case, you're still talking about a fast transition to superintelligence that we need to think about very carefully.

Replies from: steven0461
comment by steven0461 · 2009-05-23T12:54:50.888Z · LW(p) · GW(p)

Indeed:

In the CES model (which this author prefers) if the next number of doubles of DT were the same as one of the last three DT doubles, the next doubling time would be either 1.3, 2.1, or 2.3 weeks. This suggests a remarkably precise estimate of an amazingly fast growth rate.

See also Economic Growth Given Machine Intelligence:

Let us now consider the simplest endogenous growth model ... lowering α̃ just a little, from .25 to .241, reduces the economic doubling time from 16 years to 13 months ... Reducing α̃ further to .24 eliminates diminishing returns and steady growth solutions entirely.
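
To make the quoted figures concrete, here is the bare arithmetic connecting doubling times to implied growth rates; this is only arithmetic on the numbers quoted above, not a reconstruction of Hanson's CES or endogenous-growth models:

```python
# A quantity doubling every T years grows at continuous rate g = ln(2) / T.
import math

def implied_growth_rate(doubling_time_years: float) -> float:
    """Continuous annual growth rate implied by a given doubling time."""
    return math.log(2) / doubling_time_years

for label, t in [("16 years", 16.0), ("13 months", 13 / 12), ("2.1 weeks", 2.1 / 52)]:
    print(f"doubling every {label:>9}: about {implied_growth_rate(t):.0%} per year")
# doubling every  16 years: about 4% per year
# doubling every 13 months: about 64% per year
# doubling every 2.1 weeks: about 1716% per year
```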

comment by whpearson · 2009-05-22T14:48:07.012Z · LW(p) · GW(p)

My current thinking is that AI might be in the space of things we can't understand.

While we are improving our knowledge of the brain, no one is coming up with simple theories that explain the brain as a whole; what we can see looks like bits and pieces with no coherent design.

Under this scenario AI is still possible, but if we do make it, it will be done by semi-blindly copying the machinery we have, with random tweaks. And if it does start to self-improve, it will be doing so with random tweaks only, as it will share our inability to comprehend itself.

Replies from: Nick_Tarleton, thomblake
comment by Nick_Tarleton · 2009-05-23T01:50:10.422Z · LW(p) · GW(p)

Why does AI design need to have anything to do with the brain? (Third Alternative: ab initio development based on a formal normative theory of general intelligence, not a descriptive theory of human intelligence, comprehensible even to us to say nothing of itself once it gets smart enough.)

(Edit: Also, it's a huge leap from "no one is coming up with simple theories of the brain yet" to "we may well never understand intelligence".)

Replies from: whpearson
comment by whpearson · 2009-05-23T09:06:13.504Z · LW(p) · GW(p)

A specific AI design need be nothing like the design of the brain. However the brain is the only object we know of in mind space, so having difficulty understanding it is evidence, although very weak, that we may have difficulty understanding minds in general.

We might expect it to be a special case as we are trying to understand methods of understanding, so we are being somewhat self-referential.

If you read my comment you'll see I only raised it as a possibility, something to try and estimate the probability of, rather than necessarily the most likely case.

What would you estimate the probability of this scenario being, and why?

There might be formal proofs, but they are probably reliant on the definition of things like what understanding is. I've been trying to think of mathematical formalisms to explore this question, but I haven't come up with a satisfactory one yet.

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2009-05-23T12:47:45.804Z · LW(p) · GW(p)

I've been trying to think of mathematical formalisms to explore this question, but I haven't come up with a satisfactory one yet.

Have you looked at AIXI?

Replies from: whpearson
comment by whpearson · 2009-05-23T15:42:15.894Z · LW(p) · GW(p)

It is trivial to say one AIXI can't comprehend another instance of AIXI, if by comprehend you mean form an accurate model.

AIXI expects the environment to be computable and is itself incomputable. So if one AIXI comes across another, it won't be able to form a true model of it.

However I am not sure of the value of this argument as we expect intelligence to be computable.
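
For reference, AIXI's action selection can be written roughly as follows (paraphrasing Hutter; $m$ is the horizon, $U$ a universal Turing machine, $\ell(q)$ the length of program $q$):

$$
a_t \;=\; \arg\max_{a_t}\ \sum_{o_t r_t}\ \cdots\ \max_{a_m}\ \sum_{o_m r_m}\ \bigl(r_t+\cdots+r_m\bigr)\sum_{q\,:\,U(q,\,a_1\ldots a_m)\,=\,o_1 r_1\ldots o_m r_m} 2^{-\ell(q)}
$$

The inner sum ranges over all programs that reproduce the interaction history, weighted by length; it is this Solomonoff-style sum over every program that makes AIXI itself incomputable even though each environment it hypothesizes is computable, which is the asymmetry whpearson points to.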

comment by thomblake · 2009-05-22T14:55:25.989Z · LW(p) · GW(p)

Seems plausible. However, under this model there's still room for self-improvement using something like genetic algorithms; that is, it could make small, random tweaks, but find and implement the best ones far faster than humans could. Then it could still be recursively self-improving.
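
A minimal toy of that blind tweak-and-test loop (generic hill climbing on a stand-in objective; nothing here is specific to any AI architecture):

```python
# The "improver" has no model of why anything works; it only proposes small
# random tweaks and keeps the ones that score better on a test.
import random

def score(params):
    # Stand-in objective; in the scenario above this would be some measured
    # capability of the system, not something the system understands.
    return -sum((p - 3.0) ** 2 for p in params)

params = [0.0, 0.0, 0.0]
best = score(params)

for _ in range(10_000):
    candidate = [p + random.gauss(0, 0.1) for p in params]  # small random tweak
    new = score(candidate)
    if new > best:                    # keep only tweaks that test better
        params, best = candidate, new

print(params)  # ends up near [3, 3, 3] without any self-understanding
```

Whether iterating such a loop ever amounts to the strong kind of recursive self-improvement is exactly what whpearson questions in the reply below.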

A lot of us think this scenario is much more likely. Mostly those on the side of Chaos in a particular Grand Narrative. Plug for The Future and its Enemies - arguably one of the most important works in political philosophy from the 20th century.

Replies from: whpearson
comment by whpearson · 2009-05-22T15:30:59.759Z · LW(p) · GW(p)

That is much weaker than the type of RSI that is supposed to cause FOOM. For one, you are only altering software, not hardware; and secondly, I don't think a system that replaces itself with a random variation, even if it has been tested, will necessarily be better, if it doesn't understand itself. Random alterations may cause madness, or introduce bugs or other problems a long time after the change.

Replies from: thomblake
comment by thomblake · 2009-05-22T15:38:01.763Z · LW(p) · GW(p)

Random alterations may cause madness, or introduce bugs or other problems a long time after the change.

Note: Deliberate alterations may cause madness or introduce bugs or other problems a long time after the change.

Replies from: whpearson
comment by whpearson · 2009-05-22T15:56:36.591Z · LW(p) · GW(p)

The idea with Eliezer-style RSI is alterations that are formally proved to be good.

comment by jimmy · 2009-05-22T18:28:52.965Z · LW(p) · GW(p)

A couple things come to mind. The first is that we have to figure out how much we value attention from people that are not initially rational about it (as this determines how much Dark Art to use). I can see the extra publicity as helping, but if it gets the cause associated with "nut jobs", then it may pass under the radar of the rational folk and do more harm than good.

The other thing that comes to mind is that this looks like an example of people using "far" reasoning. Learning how to get people to analyze the situation in "near" mode seems like it would be a very valuable tool in general (if anyone has any ideas, make a top level post!)

To give a couple examples, I recently talked to my roommate about the risks of AI, and on the surface he agreed it was a big deal. However, he didn't make the connection "maybe cancer fundraisers aren't the best way to spend my charity time", and I don't think he'll actually do anything differently.

I talked to another friend about the same thing, and it scared the living crap out of him. He asked "How can you go on living like normal instead of working to fix it!?". So far, so good. He looked like he was using the "near" method of thinking about it.

The catch though is that his conclusion was "Since there's a small probability of me making the difference, I'd prefer to 'stick my head in the sand' and forget I heard this.". This might actually be the rational response for someone that 1) doesn't care about more than a small group of people, and 2) defects on true PD.

To recruit people like this it seems like we'd need to turn it into an iterated prisoner's dilemma. If you caught as much flak for not donating to the FAI cause as you do for not recycling, then a lot more people would donate at least something.

Replies from: Roko
comment by Roko · 2009-05-22T20:00:11.252Z · LW(p) · GW(p)

I'd prefer to 'stick my head in the sand' and forget I heard this."

That thought has occurred to me too... but I guess ignoring that voice is part of what makes (those of us who do) into something slightly more than self-interested little apes.
comment by whpearson · 2009-05-22T11:56:03.192Z · LW(p) · GW(p)

If "AI will be dangerous to the world" became a socially accepted factoid you would get it spilling over in all sorts of unintended fashions. It might not be socially acceptable to use Wolfram Alpha as it is too AI-ish,

Replies from: William
comment by William · 2009-05-22T12:50:10.395Z · LW(p) · GW(p)

It already is a socially accepted factoid. People are afraid of AI for no good reason. (As for Wolfram Alpha, it's at about the same level as ALICE. I'm getting more and more convinced that Stephen Wolfram has lost it...)

comment by Annoyance · 2009-05-22T19:43:45.599Z · LW(p) · GW(p)

The simplest way to change public opinion is manually. Skynet seems like an adequate solution to me.

The biggest problem with the movies, besides the inconsistencies as to whether causality is changeable or not, is why Skynet bothers dealing with the humans once it's broken their ability to prevent it from launching itself into space. Sending a single self-replicating seed factory to the Moon is what a reasonable AI would do.

The Terminator movies exploit the primal human fear of being exterminated by a rival tribe, putting AI in the role once filled by extraterrestrials: outsiders with great power who want to destroy all of 'us'. The pattern is tedious and predictable.

comment by Richard_Kennaway · 2009-05-22T16:18:12.769Z · LW(p) · GW(p)

As William has pointed out, AI running amok is already a standard trope. In fact, Asimov invented his three laws way back when as a way of getting past the cliche, and writing stories where it wasn't a given that the machine would turn on its creator. But the cliche is still alive and well. Asimov himself had the robots taking over in the end, in "That Thou Art Mindful of Him" and the prequels to the "Foundation" trilogy.

The people that the world needs to take FAI seriously are the people working on AI. That's what, thousands at the most? And surely they have all heard of the issue by now. What is their view on it?

Replies from: AlanCrowe, Nick_Tarleton, Roko
comment by AlanCrowe · 2009-05-22T18:50:13.086Z · LW(p) · GW(p)

I've got the February issue of the IEEE Transactions on Pattern Analysis and Machine Intelligence lying on my coffee table. Let's eavesdrop on what the professionals are up to:

  • Offline loop investigation for handwriting analysis

  • Robust Face Recognition via Sparse Representation

  • Natural Image Statistics and Low-Complexity Feature Selection

  • An analysis of Ensemble Pruning Techniques Based on Ordered Aggregation

  • Geometric Mean for Subspace Selection

  • Semisupervised Learning of Hidden Markov Models via a Homotopy Method

  • Outlier Detection with the Kernelized Spatial Depth Function

  • Time Warp Edit Distance with Stiffness Adjustment for Time Series Matching

  • Framework for Performance Evaluation of Face, Text, and Vehicle Detection and Tracking in Video: Data, Metrics, and Protocol

  • Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation

  • Principal Angles separate Subject Illumination spaces in YDB and CMU-PIE

  • High-precision Boundary Length Estimation by Utilizing Gray-Level Information

  • Statistical Instance-Based Pruning in Ensembles of Independent Classifiers

  • Camera Displacement via Constrained Minimization of the Algebraic Error

  • High-Accuracy and Robust Localization of Large Control Markers for Geometric Camera Calibration

These researchers are writing footnotes to Duda and Hart. They are occupying the triple point between numerical methods, applied mathematics, and statistics. It is occasionally lucrative. It paid my wages when I was applying these techniques to look-down capability for pulse-Doppler radar.

The basic architecture of all this research is that the researchers have a monopoly on thinking, mathematics, and writing code, and the computers crunch the numbers, both during research and later in a freestanding but closed application. There is nothing foomy here.

comment by Nick_Tarleton · 2009-05-23T05:17:28.571Z · LW(p) · GW(p)

As William has pointed out, AI running amok is already a standard trope.

As steven0461 has pointed out, this may well make it less likely to be taken seriously.

The people that the world needs to take FAI seriously are the people working on AI.

Knowing about FAI might lead people concerned with existential risk, or more generally futurism or doing the maximally good thing, to become newly interested in AI. (It worked for me.)

comment by Roko · 2009-05-22T17:59:14.399Z · LW(p) · GW(p)

Nope, almost no-one in my AI research department has heard of the issue.

Furthermore, in order for people to be funded to do research on FAI, the people who hold the purse strings have to think it is important. Since politicians are elected by Joe Public, you have to make Joe Public understand.

comment by Vladimir_Nesov · 2009-05-22T14:52:48.660Z · LW(p) · GW(p)

Where people's preferences and decisions are concerned, there are trusted tools and trusted analysis methods. People use them because they know, indirectly, that their output will be better than what their intuitive gut feeling outputs. And thus people use complicated engineering practices to build bridges, instead of just drawing a concept and proceeding to fill the image with building material.

But these tools are rarely used to refactor people's minds. People may accept conclusions chosen by experts, and allow them to install policies, as they know this is a prudent thing to do, but at the same time they may refuse to accept the fundamental assumptions on which the experts' advice is based. This is the non-technical perspective, Pirsig's "romantic" attitude.

The only tools that get adopted are those that demonstrate practical results, independently of their internal construction. Adoption of new tools doesn't change minds, and thus doesn't enable the adoption of new tools that were refused before. This is a slower process, driven more by social norms than by the tool of reason, which is too technical to become part of people's minds and to help in their decisions.

Argument will only convince smart people who are strong enough in rationality to allow their reason to reshape their attitudes. For others, the tools of blind social norm and dark arts are the only option, where practical effect appears only in the endgame.

Replies from: derekz
comment by derekz · 2009-05-22T16:19:16.180Z · LW(p) · GW(p)

If dark arts are allowed, it certainly seems like hundreds of millions of dollars spent on AI-horror movies like Terminator are a pretty good start. Barring an actual demonstration of progress toward AI, I wonder what could actually be more effective...

Sometime reasonably soon, getting real actual physical robots into the uncanny valley could start to help. Letting imagination run free, I imagine a stage show with some kind of spookily-competent robot... something as simple as competent control of real (not CGI) articulated robots would be rather scary... for example, suppose that this robot does something shocking like physically taking a human confederate and nailing him to a cross, blood and all. Or something less gross, heh.

Replies from: Roko
comment by Roko · 2009-05-22T18:00:16.087Z · LW(p) · GW(p)

Interesting. I wouldn't want to rule out the "dark arts", i.e. highly non-rational methods of persuasion.

Robotics is not advanced enough for a robot to look scary, though military robotics is getting there fast.

A demonstration involving the very latest military robots could have the intended effect in perhaps 10 years.

Replies from: Z_M_Davis, orthonormal, glenra
comment by Z_M_Davis · 2009-05-22T22:47:25.742Z · LW(p) · GW(p)

Interesting. I wouldn't want to rule out the "dark arts", i.e. highly non-rational methods of persuasion.

...

"Needless to say, those who come to me and offer their unsolicited advice {to lie} do not appear to be expert liars. For one thing, a majority of them don't seem to find anything odd about floating their proposals in publicly archived, Google-indexed mailing lists." ---Eliezer Yudkowsky

Replies from: Roko
comment by Roko · 2009-05-23T01:19:49.771Z · LW(p) · GW(p)

There's a difference between a direct lie and not-quite rational persuasion. I wouldn't tell a direct lie about this kind of thing. Those people who would most be persuaded by a gory demo of robots killing people aren't clever enough to research stuff on the net.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-23T11:08:36.868Z · LW(p) · GW(p)

What's "rational persuation", anyway? Is a person supposed to already possess an ability to change their mind according to an agreed-to-be-safe protocol? Teaching rationality and then giving your complex case would be more natural, but isn't necessarily an option.

The problem is that it's possible to persuade that person of many wrong things, that the person isn't safe from falsity. But if whatever action you are performing causes them to get closer to the truth, it's a positive thing to do in their situation, one selected among many negative things that could be done and that happen habitually.

comment by orthonormal · 2009-05-25T16:35:50.351Z · LW(p) · GW(p)

You know, sci-fi that took the realities of mindspace somewhat seriously could be helpful in raising the sanity waterline on AGI; a well-imagined clash between a Friendly AI and a Paperclipper-type optimizer (or just a short story about a Paperclipper taking over) might at least cause readers to rethink the Mind Projection Fallacy.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-05-25T16:44:15.507Z · LW(p) · GW(p)

Won't work: the clash will only happen in their minds (you don't fight a war if you know you'll lose; you can just proceed directly to the final truce agreement). Eliezer's Three Worlds Collide is a good middle ground, with non-anthropomorphic aliens of human-level intelligence allowing it to describe a familiar kind of action.

Replies from: orthonormal
comment by orthonormal · 2009-05-26T01:26:42.863Z · LW(p) · GW(p)

IAWYC, but one ingredient of sci-fi is the willingness to sacrifice some true implications if it makes for a better story. It would be highly unlikely for a FAI and a Paperclipper to FOOM at the same moment with comparable optimization powers such that each thinks it gains by battling the other, and downright implausible for a battle between them to occur in a manner and at a pace comprehensible to the human onlookers; but you could make some compelling and enlightening rationalist fiction with those two implausibilities granted.

Of course, other scenarios can come into play. Has anyone even done a good Paperclipper-takeover story? I know there's sci-fi on 'grey goo', but that doesn't serve this purpose: readers have an easy time imagining such a calamity caused by virus-like unintelligent nanotech, but often don't think a superhuman intelligence could be so devoted to something of "no real value".

Replies from: Roko
comment by Roko · 2009-06-01T11:59:14.259Z · LW(p) · GW(p)

Has anyone even done a good Paperclipper-takeover story?

I've seen some bad ones:

http://www.goingfaster.com/term2029/skynet.html

Replies from: orthonormal
comment by orthonormal · 2009-06-01T15:58:50.065Z · LW(p) · GW(p)

That's... the opposite of what I was looking for. It's pretty bad writing, and it's got the Mind Projection Fallacy written all over it. (Skynet is unhappy and worrying about the meaning of good and evil?)

Replies from: Roko
comment by Roko · 2009-06-01T16:03:59.614Z · LW(p) · GW(p)

Yeah, like I said, it is pretty bad. But imagine rewriting that story to make it more realistic. It would become:

and then skynet misinterpreted one of its instructions, and decided that its mission was to wipe out all of humanity, which it did with superhuman speed and efficiency. The end

Replies from: orthonormal
comment by orthonormal · 2009-06-01T19:03:15.140Z · LW(p) · GW(p)

Ironically, a line from the original Terminator movie is a pretty good intuition pump for Powerful Optimization Processes:

It can't be bargained with. It can't be 'reasoned' with. It doesn't feel pity or remorse or fear and it absolutely will not stop, ever, until [it achieves its goal].

comment by glenra · 2009-05-25T02:17:43.390Z · LW(p) · GW(p)

Robotics is not advanced enough for a robot to look scary, though military robotics is getting there fast.

Shakey the Robot was funded by DARPA; according to my dad, the grant proposals were usually written in such a way as to imply robot soldiers were right around the corner...in 1967. So it only took about 40 years.

comment by thomblake · 2009-05-22T14:26:39.583Z · LW(p) · GW(p)

My sense is that most people aren't concerned about Skynet for the same reason that they're not concerned about robots, zombies, pirates, faeries, aliens, dragons, and ninjas. (Homework: which of those are things to worry about, and why/why not?)

Also, this article could do without the rant against environmentalism and your roommate. Examples are useful to understanding one's main point, but this article seems to be overwhelmed by its sole example.

Replies from: steven0461, thomblake, Roko
comment by steven0461 · 2009-05-22T14:41:52.679Z · LW(p) · GW(p)

It often seems to be the case that some real-world possibility gets reflected in some fictional trope with somewhat different properties, and then when the fictional trope gets more famous you can no longer talk about the real-world possibility because people will assume you must be a stupid fanperson mistaking the fictional trope for something real. Very annoying.

comment by thomblake · 2009-05-22T14:46:27.106Z · LW(p) · GW(p)

Also, most people haven't seen any of the Terminator movies or TV series. And most people have never thought about the possibility of recursively self-improving AI. But these might be in a sense only trivially true.

comment by Roko · 2009-05-22T17:46:37.304Z · LW(p) · GW(p)

this article could do without the rant against environmentalism and your roommate.

Thanks, Thom; I've edited my article to take this critique into account.

comment by Douglas_Knight · 2009-05-22T16:19:11.824Z · LW(p) · GW(p)

You could probably find a clearer title. Naming an article after an example doesn't seem like a good idea to me. Probably the topic changed while you were writing and you didn't notice. (I claim that it is a coherent essay on public opinion.)

Yes, it is important to know how public opinion changes. But before you try to influence it, you should have a good idea of what you're trying to accomplish and whether it's possible. Recycling and unplugging gadgets are daily activities. That continuity is important to making them popular. Is it possible to make insulating houses fashionable?

Replies from: Roko
comment by Roko · 2009-05-22T17:56:50.823Z · LW(p) · GW(p)

Thanks, I've implemented this.

comment by MichaelAnissimov · 2009-05-25T06:03:27.247Z · LW(p) · GW(p)

Blah, my name is "Anissimov", please spell it correctly!

comment by astray · 2009-05-22T19:28:42.235Z · LW(p) · GW(p)

"The Internet" is probably an interesting case study. It has grown from a very small niche product into a "fundamental right" in a relatively short time. One of the things that probably helped this shift is showing people what the internet could do for them - it became useful. This is understandably a difficult point on which to sell FAI.

Now that that surface analogy is over, how about the teleological analogy? In a way, environmentalism assumes the same mantle as FAI - "do it for the children". Environmentalism has plenty of advantages over FAI - it has fuzzier mascots and more imminent problems - Terminators aren't attacking, but more and more species are becoming extinct.

Environmentalism is still of interest here through the subtopic of climate change. Climate change already deals with some of the problems existential risk at large deals with - its veracity is argued, its importance is disputed, and the math is poorly understood. The next generation serves as a nice fuzzy mascot, and the danger is of the dramatically helpful, ever-inexorably-closer variety. Each day you don't recycle, the earth is in more danger, &c. (The greater benefit of a creeping-death, "zombie" danger may be that it negates the need for a mathematical understanding of the problem. It becomes "obvious" that the danger is real if it gets closer every day.)

How can you convince people to solve a harder problem once, rather than every problem that crops up?

comment by Daniel_Burfoot · 2009-05-24T14:20:51.317Z · LW(p) · GW(p)

One central problem is that people are constantly deluged with information about incipient crises. The Typical Person cannot be expected to understand the difference in risk levels indicated by UFAI vs. bioterror vs. thermonuclear war vs. global warming, and this is not even a disparagement of the Typical Person. These risks are just impossible to estimate.

But how can we deal with this multitude of potential disasters? Each disaster has some low probability of occurring, but because there are so many of them (swine flu, nuclear EMP attacks, grey goo, complexity... ) we are almost certainly doomed, unless we do something clever. Even if we take preventative measures sufficient to eliminate the risk of one problem (presumably at some enormous expense), we will just get smashed by the next one on the list.

Meta-strategy: find strategies that help defend against all sources of existential risk simultaneously. Candidates:

  • moon base
  • genetic engineering of humans to be smarter and more disease-resistant
  • generic civilizational upgrades, e.g. reducing traffic and improving the economy
  • simplification. There is no fundamental reason why complexity always has to increase. Almost everything can be simplified: the law, the economy, software.
comment by CronoDAS · 2009-05-23T05:03:24.883Z · LW(p) · GW(p)

I agree that an Unfriendly AI could be a complete disaster for the human race. However, I really don't expect to see an AI that goes FOOM during my lifetime. To be frank, I think I'm far more likely to be killed by a civilization-threatening natural disaster, such as an asteroid impact, supervolcano eruption, or megatsunami, than by an Unfriendly AI. As far as I'm concerned, worrying about Unfriendly AI today is like worrying about global warming in 1862, shortly after people began producing fuel from petroleum. Yes, it's a real problem that will have to be solved - but the people alive today aren't going to be the ones that solve it.

comment by MichaelHoward · 2009-05-22T12:47:19.681Z · LW(p) · GW(p)

Why is this post being voted negative? It's an important problem for plenty of causes of interest to many rationalists, and is well worth discussing here.

Replies from: steven0461, CarlShulman
comment by steven0461 · 2009-05-22T13:22:25.919Z · LW(p) · GW(p)

It's an important problem for plenty of causes of interest to many rationalists, and is well worth discussing here.

Agreed, but the part about environmentalism seems like a mindkill magnet that would have been better left out. If you ask me, the recent discussions about libertarianism and gender already represented a dangerous slide in the wrong direction.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-05-22T16:21:43.717Z · LW(p) · GW(p)

Politics is one thing, but if we can't discuss the arithmetic of environmentalism, we should give up.

Replies from: steven0461
comment by steven0461 · 2009-05-22T18:25:10.865Z · LW(p) · GW(p)

We can discuss the arithmetic of environmentalism, but we should avoid speaking of positions (like environmentalism) that significant numbers of people here are likely to identify with in terms of "propaganda" making people do "retarded" things, especially when this is tangential to the main point, and even when (minus some of the connotations) this is accurate.

Replies from: Douglas_Knight, Roko
comment by Douglas_Knight · 2009-05-23T03:56:04.027Z · LW(p) · GW(p)

"Propaganda" is a fraught term, but it is important to think about these things in terms of the irrational propagation of memes. I think that was the main point, but the title didn't make it clear. The coining of the phrase "reduce, reuse, recycle" was an act of propaganda, but it wasn't good enough: it didn't lead to reduction or reuse. It is important to know the failure modes. (Or maybe it was a great idea, promoting recycling as "the least I can do," definitely falling short of ideal, but only if it was not possible to push it further.)

Maybe it would be easier to discuss an example where I agree with the goal:
The rate of car fatalities in the US has dramatically decreased over the past 50 years, seemingly due to government propaganda for seat belts and against drunk driving. Partly this has been about influencing individuals, but it seems to have changed the social acceptability of drunk driving, which was surely a more effective route.

comment by Roko · 2009-05-22T19:58:56.827Z · LW(p) · GW(p)

Yup. Post has been edited to take this into account.

comment by CarlShulman · 2009-05-23T05:45:31.895Z · LW(p) · GW(p)

The importance of a topic doesn't give a free pass to posts on it.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2009-05-23T06:02:44.855Z · LW(p) · GW(p)

Maybe it would be better if important topics were held to higher standards, but that sounds hard to implement because there's too much to communicate about the standards. Voting certainly doesn't communicate it. In particular, I fear that people would hesitate to publish adequate posts.

Instead, post early and often.