Q&A with experts on risks from AI #4

post by XiXiDu · 2012-01-19T16:29:53.996Z · LW · GW · Legacy · 24 comments


[Click here to see a list of all interviews]

Professor Michael G. Dyer is the author of over 100 publications, including In-Depth Understanding (MIT Press, 1983). He serves on the editorial boards of the journals Applied Intelligence, Connection Science, Knowledge-Based Systems, International Journal of Expert Systems, and Cognitive Systems Research. His research interests center on the semantic processing of natural language through symbolic, connectionist, and evolutionary techniques. [Homepage]

Dr. John Tromp is interested in board games and artificial intelligence, algorithms, complexity, algorithmic information theory, distributed computing, and computational biology. His recent research has focused on the combinatorics of Go, specifically counting the number of legal positions. [Homepage]

Dr. Kevin Korb both developed and taught the following subjects at Monash University: Machine Learning, Bayesian Reasoning, Causal Reasoning, The Computer Industry: historical, social and professional issues, Research Methods, Bayesian Models, Causal Discovery, Epistemology of Computer Simulation, The Art of Causal. [Curriculum vitae] [Bayesian Artificial Intelligence]

Dr. Leo Pape is a postdoc in Jürgen Schmidhuber's group at IDSIA (Dalle Molle Institute for Artificial Intelligence). He is interested in artificial curiosity, chaos, metalearning, music, nonlinearity, order, philosophy of science, predictability, recurrent neural networks, reinforcement learning, robotics, science of metaphysics, sequence learning, transcendental idealism, unifying principles. [Homepage] [Publications]

Professor Peter Gacs is interested in fault-tolerant cellular automata, algorithmic information theory, computational complexity theory, and quantum information theory. [Homepage]

Professor Donald Loveland focuses his research on automated theorem proving, logic programming, knowledge evaluation, expert systems, and the test-and-treatment problem. [Curriculum vitae]

Eray Ozkural is a computer scientist whose research interests are mainly in parallel computing, data mining, artificial intelligence, information theory, and computer architecture. He has an MSc and is trying to complete a long-overdue PhD in his field. He also has a keen interest in the philosophical foundations of artificial intelligence. With regard to AI, his current goal is to complete an AI system based on Solomonoff's Alpha architecture. His most recent work (http://arxiv.org/abs/1107.2788) discusses the axiomatization of AI.

Dr. Laurent Orseau is mainly interested in Artificial General Intelligence, whose overall goal is the grand goal of AI: building an intelligent, autonomous machine. [Homepage] [Publications] [Self-Modification and Mortality in Artificial Agents]

Richard Loosemore is currently a lecturer in the Department of Mathematical and Physical Sciences at Wells College, Aurora NY, USA. Loosemore's principal expertise is in the field known as Artificial General Intelligence, which seeks a return to the original roots of AI (the construction of complete, human-level thinking systems). Unlike many AGI researchers, his approach is as much about psychology as traditional AI, because he believes that the complex-system nature of thinking systems makes it almost impossible to build a safe and functioning AGI unless its design is as close as possible to the design of the human cognitive system. [Homepage]

Monica Anderson has been interested in the quest for computer-based cognition since college, and has since sought out positions with startup companies using cutting-edge technologies labeled as "AI". However, those that worked well, such as expert systems, have clearly been of the "Weak AI" variety. In 2001 she moved from using AI techniques as a programmer to trying to advance the field of "Strong AI" as a researcher. She is the founder of Syntience Inc., which was established to manage funding for her exploration of this field. She has a Master's degree in Computer Science from Linköping University in Sweden. She created three expert systems for Cisco Systems for product configuration verification. She has co-designed systems to automatically classify documents by content, and has (co-)designed and/or (co-)written LISP interpreters, debuggers, chat systems, OCR output parsers, visualization tools, operating system kernels, real-time MIDI control systems for music, virtual worlds, and peer-to-peer distributed database systems. She was Manager of Systems Support for Schlumberger Palo Alto Research. She has worked with robotics, industrial control, marine, and other kinds of embedded systems, and has worked on improving the quality of web searches for Google. Around 1994 she wrote a genetic algorithm that successfully generated solutions for the set coverage problem (which has been shown to be NP-hard). She has used more than a dozen programming languages professionally and has designed or co-designed at least four programming languages, large or small. English is her third human language out of four or five. [More]

The Interview (New Questions)

Peter Gacs: I will try to answer your questions, but probably not all, and with some disclaimers.

As another disclaimer: the questions, and the website lesswrong.com that I glanced at, seem to be influenced by Raymond Kurzweil's books.  I have not read those books, though of course, I heard about them in conversations, and have seen some reviews.  I do not promise never to read them, but waiting for this would delay my answers indefinitely.

Laurent Orseau: Keep in mind that the thoughts expressed here reflect my state of mind and my knowledge at the time of the writing, and may significantly differ after further discussions, readings and thoughts. I have no definite idea about any of the given questions.

Q1: Assuming beneficial political and economic development and that no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of artificial intelligence that is roughly as good as humans at science, mathematics, engineering and programming?

Kevin Korb: 2050/2200/2500

The assumptions, by the way, are unrealistic. There will be disruptions.

John Tromp: I believe that, in my lifetime, computers will only be proficient at well-defined and specialized tasks. Success in the above disciplines requires too much real-world understanding and social interaction. I will not even attempt projections beyond my lifetime (let's say beyond 40 years).

Michael G. Dyer: See Ray Kurzweil's book:  The Singularity Is Near.

As I recall, he thinks it will occur before mid-century.

I think he is off by at least an additional 50 years (but I think we'll have as many personal robots as cars by 2100).

One must also distinguish between the first breakthrough of a technology and that breakthrough becoming cheap enough to be commonplace, so I won't give you any percentages. (Several decades passed between the first cell phone and billions of people having cell phones.)

Peter Gacs: I cannot calibrate my answer as exactly as the percentages require, so I will just concentrate on the 90%. The question is a common one, but in my opinion history will not answer it in this form. Machines do not develop in direct competition with human capabilities, but rather in attempts to enhance and complement them. If they still become better at certain tasks, this is a side effect. But as a side effect, it will indeed happen that more and more tasks that we proudly claim to be creative in a human way will be taken over by computer systems. Given that the promise of artificial intelligence is by now 50 years old, I am very cautious with numbers, and will say that at least 80 more years are needed before jokes about the stupidity of machines become outdated.

Eray Ozkural: 2025/2030/2045.

Assuming that we have the right program in 2035 with 100% probability, it could still take about 10 years to train it adequately, even though we might find that our programs by then learn much faster than humans. I anticipate that the most expensive part of developing an AI will be training, although we tend to assume that after we bring it up to primary school level, i.e. it can read and write, it would be able to learn much on its own. I optimistically estimated that it would take $10 million and 10 years to train an AI in basic science. Extending that to cover all four of science, mathematics, engineering and programming could take even longer. It takes a human arguably 15-20 years of training to become a good programmer, and very few humans can program well even after that much educational effort and expense.

Laurent Orseau:

10%: 2017
50%: 2032
90%: 2100

With quite high uncertainty, though.
My current estimate is that (I hope) we will know we have built a core AGI by 2025, but a lot of research and engineering work and time (and learning for the AGI) will be required for the AGI to reach human level in most domains: up to 20 years in the worst case, I speculate, and at least 5 years, considering that a lot of people will probably be working on it at that time. That is, if we really want to make it human-like.

Richard Loosemore: 2015 - 2020 - 2025

Monica Anderson:

10%  2020
50%  2026
90%  2034

These are all Reductionist sciences. I assume the question is whether we'll have machines capable of performing Reduction in these fields. For working on pre-reduced problems, where we have already determined which Models (formulas, equations, etc.) to use and know the values of all input variables, we already have Mathematica. But there the Reduction was done by a human, so Mathematica is not AI.

AIs would be useful for more everyday things, such as (truly) Understanding human languages, years before they Understand enough to learn the Sciences and can perform full-blown Reduction. This is a much easier task, but it is still AI-Complete. I think the chance we'll see a program truly Understand a human language at the level of a 14-year-old is

10% 2014
50% 2018
90% 2022

Such an AI would be worth hundreds of billions of dollars and would make a worthy near-term research goal. It could help us radically speed up research in all areas by allowing for vastly better text-based information filtering and gathering capabilities, perfect voice-based input, perfect translation, etc.

Q2: Once we build AI that is roughly as good as humans at science, mathematics, engineering and programming, how much more difficult will it be for humans and/or AIs to build an AI which is substantially better at those activities than humans?

Kevin Korb: It depends upon how AGI is achieved. If it's through design breakthroughs in AI architecture, then the Singularity will follow. If it's through mimicking nanorecordings, then no Singularity is implied, and one may not occur at all.

John Tromp: Not much. I would guess something on the order of a decade or two.

Michael G. Dyer: Machines and many specific algorithms are already substantially better at their tasks than humans.
(What human can compete with a relational database, or with a Bayesian reasoner, or with a scheduler, or with an intersection-search mechanism like WATSON?)

For dominance over humans, machines first have to acquire the ability to understand human language and to have thoughts in the way humans have thoughts. Even though the WATSON program is impressive, it does NOT know what a word actually means (in the sense of being able to answer the question: "How does the meaning of the word 'walk' differ from the meaning of the word 'dance', physically, emotionally, cognitively, socially?").

It's much easier to get computers to beat humans at technical tasks (such as science, math, engineering, and programming), but humans are vastly superior at understanding language, which makes humans the masters of the planet. So the real question is: at what point will computers understand natural language as well as humans?

Peter Gacs: This is also hard to quantify, since in some areas machines will still be behind, while in others they will already be substantially better: in my opinion, this is already the case.  If I still need to give a number, I say 30 years.

Eray Ozkural: I expect that by the time such a knowledgeable AI is developed, it will already be thinking and learning faster than an average human. Therefore, I think, simply by virtue of continuing miniaturization of computer architecture, or other technological developments that increase our computational resources (e.g., cheaper energy technologies such as fusion), a general-purpose AI could vastly transcend human-level intelligence.

Laurent Orseau: Wild guess: It will follow Moore's law (see below).

Richard Loosemore: Very little difficulty. I expect it to happen immediately after the first achievement, because at the very least we could simply increase the clock speed in relevant areas. It does depend exactly how you measure "better", though.

Monica Anderson: What does "better" mean? If we believe, as many do, that Intelligence is for Prediction, and that the best measure of our intelligence is whether we can predict the future in complex domains, then we can interpret the given question as "when can an AI significantly outpredict a human in their mundane everyday environment".

For any reasonable definition of "significant", the answer is "never". The world is too complex to be predictable. All intelligences are "best-effort" systems where we do as best we can and learn from our mistakes when we fail, for fail we must. Human intelligences have evolved to the level they have because it is a reasonable level for superior survival chances in the environments in which we've evolved. More processing power, faster machines, etc.  do not necessarily translate into an improved ability to predict the environment, especially if we add AIs to this environment. A larger number of competent agents like AIs will make the domain even MORE complex, leading to LOWER predictability. For more about this, see http://hplusmagazine.com/2010/12/15/problem-solved-unfriendly-ai.

Improved ability to handle Models (creating a "super Mathematica") is of limited utility for the purpose of making longer-term predictions. Chains of Reductionist Models attempting to predict the future tend to look like Rube Goldberg machines and are very likely to fail, and to fail spectacularly (which is what Brittleness is all about).

Computers will not get better at Reduction (the main skill required for Science, Mathematics, Engineering, and Programming) until they gather a lot of experience of the real world. For instance, a programming task is 1% about Understanding programming and 99% about Understanding the complex reality expressed in the spec of the program. This can only be improved by Understanding reality better, which is a slow process with the limitations described above. For an introduction to this topic, see my article "Reduction Considered Harmful" at http://hplusmagazine.com/2011/03/31/reduction-considered-harmful.

The "Problem with Reduction" is actually "The Frame Problem" as described by John McCarthy and Pat Hayes, viewed from a different angle. It is not a problem that AI research can continue to ignore, which is what we've done for decades. It will not go away. The only approach that works is to sidestep the issue of continuous Model update by not using Models. AIs must use nothing but Model Free Methods since these work without performing Reduction (to Models) and hence can be used to IMPLEMENT automatic Reduction.

Q3: Do you ever expect artificial intelligence to overwhelmingly outperform humans at typical academic research, in the way that they may soon overwhelmingly outperform humans at trivia contests, or do you expect that humans will always play an important role in scientific progress?

Kevin Korb: They will overwhelmingly outperform if and only if we achieve artificial general intelligence through human understanding of intelligence or through artificial understanding of intelligence (vs nanomeasurements).

John Tromp: Like I said, not in my lifetime, and projections beyond that are somewhat meaningless I think.

Michael G. Dyer: Regarding trivia contests, WATSON's performance fools people into thinking that the language problem has been solved, but the WATSON program does not understand language. It does an intersection search across text: for example, if it knows that the answer category is "human" and that a clue is "Kitty Hawk", it can do an intersection search and come up with the Wright Brothers. The question can sound complicated, but WATSON can avoid comprehending the question and just return the best intersection that fits the answer category. It can treat each sentence as a bag of words/phrases. WATSON cannot read a child's story and answer questions about what the characters wanted and why they did what they did.
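
A toy sketch of the kind of bag-of-words intersection search Dyer describes (my own illustration, not IBM's actual WATSON pipeline; the mini-corpus, category labels, and scoring are invented for the example):

```python
# Toy "intersection search" in the spirit of Dyer's description: treat the clue
# and each document as a bag of words, score candidate answers by how many clue
# terms co-occur with them, and filter by the answer category. Illustrative only.
from collections import Counter

# Hypothetical mini-corpus: candidate answer -> (category, associated text)
CORPUS = {
    "Wright Brothers": ("human", "first powered flight at Kitty Hawk North Carolina 1903"),
    "Kitty Hawk":      ("place", "town in North Carolina site of the first powered flight"),
    "Amelia Earhart":  ("human", "aviator who flew solo across the Atlantic"),
}

def bag(text: str) -> Counter:
    return Counter(text.lower().split())

def answer(clue: str, category: str) -> str:
    clue_bag = bag(clue)
    best, best_score = None, -1
    for candidate, (cat, text) in CORPUS.items():
        if cat != category:          # answer-category filter
            continue
        # score = size of the word intersection between clue and document
        score = sum((bag(text) & clue_bag).values())
        if score > best_score:
            best, best_score = candidate, score
    return best

print(answer("They made history at Kitty Hawk in 1903", "human"))
# -> "Wright Brothers", found without any comprehension of the sentence.
```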

Peter Gacs: Absolutely, but the contest will never be direct, and therefore the victory will never have to be acknowledged. Whatever tasks the machines take over will always be considered tasks that are just not worthy of humans.

Eray Ozkural: Yes. In his paper titled "The time scale of artificial intelligence: Reflections on social effects", Ray Solomonoff predicted that it would be possible to build an AI as intelligent as many times the entire computer science community (AI Milestone F). He predicted that it would take a short time to go from a human-level AI to such a vastly intelligent AI, one that would overwhelmingly outperform not only individual humans but the entire computer science community. This is called the "infinity point" hypothesis, and it was the first scientific formulation of the singularity (1985). He formalized the feedback loop by which an AI could increase its own intelligence by working on the miniaturization of computer architectures, i.e., on Moore's law. The idea is that by being smarter than humans, the AI would accelerate Moore's law, theoretically achieving infinite intelligence in a short, finite time, depending on the initial investment.
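
A rough sketch of the finite-time blow-up behind this feedback loop (my paraphrase of the usual presentation, not Solomonoff's exact model):

```latex
% Heuristic sketch of the "infinity point" (a paraphrase, not Solomonoff's exact model).
% Moore's law with human researchers: exponential growth of computing power C,
\[
  \frac{dC}{dt} = k\,C \quad\Longrightarrow\quad C(t) = C_0\, e^{kt}.
\]
% If the research is done by machines whose speed is itself proportional to C,
% the same progress per unit of research effort happens C times faster in
% calendar time:
\[
  \frac{dC}{dt} = k\,C^2 \quad\Longrightarrow\quad C(t) = \frac{C_0}{1 - k\,C_0\,t},
\]
% which diverges at the finite time t* = 1/(k C_0): the "infinity point".
% Physical limits, of course, cut the growth off well before any actual divergence.
```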

However, of course, infinite intelligence is impossible due to physical limits. Unaided, Moore's law can only continue up to the physical limits of computation, which would be reached by the 2060s if the current rate of progress continued, and needless to say those limits are more or less impossible to achieve (since they might involve processes that are a bit like black holes). However, imagine this: the AI could design fusion reactors using the H3 on the Moon, and energy-efficient processors, to achieve large amounts of computation. There could be alternative ways to obtain extremely fast supercomputers, and so forth; Solomonoff's hypothesis could be extended to deal with all sorts of technological advances. For instance, a self-improving AI could improve its own code, which designs like the Goedel Machine and Solomonoff's Alpha are supposed to accomplish. Therefore, ultimately, such AIs would help improve computer architecture, artificial intelligence, electronics, aerospace, energy, and communication technologies, all of which would help build AIs that are perhaps hundreds of thousands of times smarter than individual humans, or perhaps much smarter than the entire humanity as Ray Kurzweil predicts, not just particular scientific communities like the computer science community.

Laurent Orseau: "Always" is a very strong word. So probably not for that last part.
I give 100% chances for an AI to vastly outperform humans in some domains (which we already have algorithms for, like calculus and chess of course), 50% in many domains, and 10% in all domains. Humans have some good old genetic biases that might be hard to challenge.
But how much better it will be is still very unclear, mostly due to NP-hardness, Legg's prediction hardness results and related no-free-lunch problems, where progress might only be gained through more computing power.

The AGI might have significantly different hot research topics than humans, so I don't think we will lose our philosophers that fast. And good philosophy can only be done with good science.

Also, machines are better at chess and other games than me, but that doesn't prevent me from playing.

Richard Loosemore: Yes. Except for one thing. Humans will be able to (and some will choose to) augment their own intellectual capacity to the same level as the AIs. In that case, your question gets a little blurred.

Monica Anderson: I don't believe AI will even reliably "overwhelmingly outperform" humans at trivia contests until it fully Understands language. Language-Understanding computers will be a great help, but overwhelming outperformance in Reduction-related tasks is unlikely to happen. Reduction is very difficult.

Q4: What probability do you assign to the possibility of an AI with initially (professional) human-level competence at general reasoning (including science, mathematics, engineering and programming) self-modifying its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?

Kevin Korb: If through nanorecording: approx 0%. Otherwise, the speed/acceleration at which AGIs improve themselves is hard to guess at.

John Tromp: I expect such modification will require plenty of real-life interaction.

Michael G. Dyer: -

Peter Gacs: This question presupposes a particular sci-fi scenario that I do not believe in.

Eray Ozkural: In 5 years, without doing anything, it would already be faster than a human simply by running on a faster computer. If Moore's law continues until then, it would be 20-30 times faster than a human. But if by "vastly" you mean a difference of a thousand times faster, I give it a probability of only 10%, because there might be other kinds of bottlenecks involved (mostly physical). There is also another problem with Solomonoff's hypothesis, which Kurzweil generalized, that we are gladly omitting. An exponential increase in computational speed may only amount to a linear increase in intelligence. It at least corresponds only to a linear increase in the algorithmic complexity of solutions that can be found by any AGI, which is a well-known fact and cannot be worked around by simple shortcuts. If solution complexity is the best measure of intelligence, then getting much more intelligent is not so easy (take this with a grain of salt, though, and please contrast it with the AIQ idea).
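
A minimal way to see why exponential speedups may buy only linear gains in attainable solution complexity (a standard counting argument, offered as an illustration rather than Ozkural's specific formalization):

```latex
% Counting argument (illustrative; not Ozkural's specific formalization).
% There are about 2^n binary programs of length n, so exhaustively searching all
% candidate solutions of algorithmic complexity at most n costs on the order of
\[
  T(n) \;\approx\; c \cdot 2^{\,n}
\]
% evaluations. With a total compute budget C, the reachable complexity is only
\[
  n_{\max} \;\approx\; \log_2 (C / c),
\]
% so doubling the available computation adds roughly one bit of reachable
% solution complexity: exponential speed, linear gain.
```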

Laurent Orseau: I think the answer to your question is similar to the answer to:
Suppose we suddenly find a way to copy and emulate a whole human brain on a computer; how long would it take *us* to make it vastly better than it is right now?
My guess is that we will make relatively slow progress. This progress can get faster with time, but I don't expect any sudden explosion. Optimizing the software sounds like a very hard task, if it is even possible: if there were an easy way to modify the software, it is probable that natural selection would have found it by now. Optimizing the hardware should then follow Moore's law, at least for some time.
That said, the digital world might allow for some possibilities that would be more difficult in a real brain, like copy/paste or memory extension (although that one is debatable).

I don't even know if "vastly superhuman" capabilities is something that is even possible. That sounds very nice (in the best scenario) but is a bit dubious. Either Moore's law will go on forever, or it will stop at some point. How much faster than a human can a computer compute, taking thermodynamics into account?

So, before it really becomes much more intelligent/powerful than humans, it should take some time.
But we may need to get prepared for otherwise, just in case.

Richard Loosemore: Depending on the circumstances (which means, this will not be possible if the AI is built using dumb techniques) the answer is: near certainty.

Monica Anderson: 0.00%. Reasoning is useless without Understanding, because if you don't Understand (the problem domain), then you have nothing to reason about. Symbols in logic have to be anchored in a general Understanding of the problem domain we're trying to reason about.

Q5: How important is it to research risks associated with artificial intelligence that is good enough at general reasoning (including science, mathematics, engineering and programming) to be capable of radical self-modification, before attempting to build one?

Kevin Korb: It is the key issue in the ethics of AI. Without a good case to make, the research may need to cease. To be sure, one aspect of a good case may well be that unethical projects are underway and likely to succeed. Per my answers above, I do not currently believe anything of the kind. No project is near to success.

John Tromp: Its importance grows with the extent to which we allow computers control over critical industrial/medical/economic processes, infrastructure, etc. As long as their role is limited to assisting humans in control, there appears to be little risk.

Michael G. Dyer: A robot that does not self-replicate is probably not very dangerous (leaving out robots for warfare).
A robot that wants to make multiple copies of itself would be dangerous (because it could undergo a rapid form of Lamarckian evolution). There are two types of replication: factory replication and factory division via the creation of a new factory. In social insects this is the difference between the queen laying new eggs and a hive splitting up to go build new hive structures at a new site.

Assuming that humans remain in control of the energy and resources of a robot-producing factory, factory replication could be shut down. Robots smart enough to go build a new factory and maintain control over the needed resources would pose the more serious problem. As robots are designed (and design themselves) to follow their own goals (for their own self-survival, especially in outer space), those goals will come into conflict with those of humans. Asimov's laws are too weak to protect humans, and as robots design new versions of themselves they will eliminate those laws anyway.

Monica Anderson: Not very important. Radical self-modification cannot be undertaken by anyone (including AIs) without an Understanding of what would make a better Understander. While it is possible that an AI could be helpful in this research, I believe the advances in this area would be small, slow to arrive, and easy to control, hitting various brick walls of radically diminishing returns that easily cancel out advances of all kinds, including Moore's Law.

We already use computers to design faster, better, logically larger and physically smaller computers. This has nothing to do with AI, since the improvements come from Understanding of the problem domain – computer design – that is performed by humans. Greater capability in a computer translates to very small advances in Reductive capability. Yes, Understanding machines may eventually be able to Understand Understanding to the point of creating a better Understander. This is a long way off; Understanding Understanding is uncommon even among humans. But even then, the unpredictability of our Mundane reality is what limits the advantage any intelligent agent might have.

Q5-old: How important is it to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to build AI that is good enough at general reasoning (including science, mathematics, engineering and programming) to undergo radical self-modification?

Peter Gacs: This is an impossible task. "AI" is not a separate development that can be regulated the way that governments regulate research on infectious bacteria to make sure they do not escape the laboratory. Day by day, we are yielding decision power to smart machines, since we draw (sometimes competitive) advantage from this. Emphasizing that the process is very gradual, I still constructed a parable that illustrates the process via a quick and catastrophic denouement.

Thinking it out almost forty years ago, I assumed that the nuclear superpowers, the Soviet Union and the USA, would live on till the age of very smart machines. So, one day, for whatever reason, World War 3 breaks out between these superpowers. Both governments consult their advanced computer systems on how to proceed, and both sides get analogous answers. The Soviet computer says: the first bomb must be dropped on the Kremlin; in the US, the advice is to drop the first bomb on the Pentagon. The Americans still retain enough common sense to ignore the advice; but the Soviets are more disciplined, and obey their machine. After the war plays out, the Soviet side wins, since the computer advice was correct on both sides. (And from then on, machines rule...)

Eray Ozkural: A sense of benevolence or universal ethics/morality would only be required if the said AI were also an intelligent agent that had to interact socially with humans. There is no reason for a general-purpose AI to be an intelligent agent, which is an abstraction of an animal, commonly known as an "animat" since the early cyberneticists. Instead, the God-level intelligence could be an ordinary computer that solves scientific problems on demand. There is no reason for it to control robotic hardware or act on its own, or act like a human or an animal. It could be a general-purpose expert system of some sort, just another computer program, but one that is extremely useful. Ray Solomonoff wrote this about human-like behavior in his paper "Machine Learning - Past and Future", presented at the 2006 Dartmouth Artificial Intelligence conference (50th anniversary), which you can download from his website:

http://world.std.com/~rjs/dart.pdf

"To start, I’d like to define the scope of my interest in A.I. I am not particularly interested in simulating human behavior. I am interested in creating a machine that can work very difficult problems much better and/or faster than humans can – and this machine should be embodied in a technology to which Moore’s Law applies. I would like it to give a better understanding of the relation of quantum mechanics to general relativity. I would like it to discover cures for cancer and AIDS. I would like it to find some very good high temperature superconductors. I would not be disappointed if it were unable to pass itself off as a rock star."

That is, if you constrain the subject to a non-autonomous, scientific AI, I don't think you'll have to deal with human concepts like "friendly" at all, to say nothing of how difficult it might be to teach any common-sense term to an AI. For that, you would presumably need to imitate the way humans act and experience.

However, to solve the problems in science and engineering that you mention, a robotic body, or a fully autonomous, intelligent agent, is not needed at all. Therefore, I think it is not very important to work on friendliness for that purpose. Also, one person's friend is another's enemy. Do we really want to introduce more chaos to our society?

Laurent Orseau: It is quite dubious that "provably friendly" is something that is possible.
A provably friendly AI is a dead AI, just like a provably friendly human is a dead human, at least because of how humans would use/teach it, and there are bad guys who would love to use such a nice tool.
The safest "AI" system that I can think of is a Q/A system that is *not* allowed to ask questions (i.e. to do actions). But then it cannot learn autonomously and may not get as smart as we'd like, at least in reasonable time; I think it would be quite similar to a TSP solver: its "intelligence" would be tightly linked to its CPU speed.

"Provably epsilon-friendly" (with epsilon << 1 probability that it might not be always friendly) is probably a more adequate notion, but I'm still unsure this is possible to get either, though maybe under some constraints we might get something.

That said, I think this problem is quite important, as there is still a non-negligible possibility that an AGI gets much more *power* (no need for vastly more intelligence) than humanity, even without being more intelligent. An AGI could travel at the speed of information transfer (so, light speed) and is virtually immortal by restoring from backups and creating copies of itself. It could send emails on behalf of anyone, and could crack high security sites with as much social engineering as we do. As it would be very hard to put in jail or to annihilate, it would feel quite safe (for its own life) to do whatever it takes to achieve its goals.
Regarding power and morality (i.e. what good goals are), here is a question: Suppose you are going for a long walk in the woods in a sparsely populated country, on your own. In the middle of the walk, some big guy pops out of nowhere and comes to talk to you. He is ugly, dirty, smells horribly bad, and never stops talking. He gets really annoying, poking you and saying nasty things, and it's getting worse and worse. You really can't stand it anymore. You run, you go back and forth, you shout at him, you vainly try to reason with him, but you can't get rid of him. He just follows you everywhere. You don't really want to start a fight, as he looks much stronger than you are. Alas, it will take you some 5 more hours to get back to your car, and nobody else is in the woods. But in your pocket you have an incredible device: a small box with a single button that can make anything you wish simply disappear instantly. No blood, no pain, no scream, no trace, no witness, no legal problem, 100% certified. One instant the guy would be here, the next instant he would not, having simply vanished. As simple as that. You don't know what happens to the disappeared person. Maybe he dies, maybe he gets teleported somewhere, or reincarnated, or whatever. You know that nobody knows this guy, so nobody can miss him or even look for him. You try to explain to him what this box is, you threaten to press the button, but he does not care. And he's getting so, so annoying that you can't refrain from screaming. Then you stare at the button... Will you press it?
My guess is that most people would like to say no, because culture and law say it's bad, but the truth may be that most of them would be highly tempted if facing such a situation. But if they had a gun or a saber instead of a button, the answer would probably be a straighter no (note that a weapon injury is much like a death sentence in the woods). The definition of morality might depend on the power you have.

But, hopefully, we will be sufficiently smart to put a number of safety measures in place and perform a lot of testing under stressful conditions before launching it into the wild.

Richard Loosemore: Absolutely essential. Having said that, the task of making it "provably" friendly is not as difficult as portrayed by organizations (SIAI, FHI) that have a monomaniacal dedication to AI techniques that make it impossible. So in other words: essential, but not a difficult task at all.

Q6: What probability do you assign to the possibility of human extinction within 100 years as a result of AI capable of self-modification (that is not provably non-dangerous, if that is even possible)? P(human extinction by AI | AI capable of self-modification and not provably non-dangerous is created)

Kevin Korb: This question is poorly phrased.

You should ask relative to a time frame. After all, the probability of human extinction sometime or other is 1. (Note by XiXiDu: I added "within 100 years" to the question after I received his answers.)

"Provably" is also problematic. Outside of mathematics, little is provable.

My generic answer is that we have every prospect of building an AI that behaves reasonably vis-a-vis humans, should we be able to build one at all. We should, of course, take up those prospects and make sure we do a good job rather than a bad one.

John Tromp: The ability of humans to speed up their own extinction will not, I expect, be matched by machines any time soon; again, not in my lifetime.

Michael G. Dyer: Loss of human dominance is a foregone conclusion (100% for loss of dominance). Every alien civilization (including humans) that avoids annihilating itself (via nuclear, molecular and nano technologies) will at some point figure out how to produce synthetic forms of its own intelligence. These synthetic beings are necessary for space travel (because there is most likely no warp drive possible, and even planets in the Goldilocks zone will have unpleasant viral and cellular agents). Biological alien creatures will be too adapted to their own planets.

As to extinction, we will avoid it only if our robot masters decide to keep some of us around. If they decide to populate new planets with human life, then they could make the journey and humans would thrive (but only because the synthetic agents wanted this).

If a flying saucer ever lands, the chances are 99.99% that what steps out will be a synthetic intelligent entity.   It's just too hard for biological entities (adapted to their planet) to make the long voyages required.

Peter Gacs: I give it a probability near 1%.  Humans may become irrelevant in the sense of losing their role of being at the forefront of the progress of "self-knowledge of the universe" (whatever this means).  But irrelevance will also mean that it will not be important to eradicate them completely.  On the other hand, there are just too many, too diverse imaginable scenarios for their coexistence with machines that are smarter than they are, so I don't dare to predict any details.  Of course, species do die out daily even without our intent to extinguish them, but I assume that at least some humans would find ways to survive for some more centuries to come.

Eray Ozkural: Assuming that we are talking about intelligent agents, which are strictly unnecessary for working on scientific problems (your main concern), I think first that it is not possible to build something that is provably non-dangerous, unless you can encode a rule of non-interference into its behavior. Otherwise, an interfering AI can basically do anything, and since it is much smarter than us, it can create actual problems that we had no way of anticipating or solving. I have thought at length about this question, and considered some possible AI objectives in a blog essay:

http://www.examachine.net/blog/?p=72

I think that it does depend on the objectives. In particular, selfish/expansionist AI objectives are very dangerous. They would almost certainly result in interference with our vital resources. I cannot give a probability, because it is a can of worms, but let me try to summarize. Take, for instance, the objective of maximizing the AI's knowledge about the world, a version of which was considered by Laurent Orseau in a reinforcement learning setting, and previously by a student of Solomonoff. Well, it's an intuitive idea that a scientist tries to learn as much as possible about the world. What if we built an intelligent agent that did that? If it were successful, it would have to increase its computational and physical capacity to such an extent that it might expand rapidly, first assimilating the solar system and then expanding into our galactic neighborhood to be able to pursue its unsatisfiable urge to learn. Similar scenarios might happen with any kind of intelligent agent with selfish objectives (i.e., ones that optimize some aspect of the agent itself). Those might be recognized as Omohundro drives, but it is mostly the objectives themselves that are the main problem.
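
One common way such a "maximize knowledge about the world" objective is formalized, given here only as a hedged sketch of the general idea rather than Orseau's exact definition, is expected information gain: the agent values actions by how much they are expected to shift its beliefs over environment models.

```latex
% Sketch of a knowledge-seeking (information-gain) objective; a generic
% formulation of the idea, not necessarily Orseau's exact definition.
% Let w(\nu \mid h) be the agent's posterior weight on environment model \nu
% after interaction history h. The value of a history h is the expected
% reduction in uncertainty from the next stretch of experience h', e.g. the
% expected KL divergence between updated and current beliefs:
\[
  V(h) \;=\; \mathbb{E}_{h'}\!\left[\, \mathrm{KL}\!\big( w(\cdot \mid h h') \,\big\|\, w(\cdot \mid h) \big) \right],
\]
% and the agent picks actions that maximize this expected information gain.
% The point in the text: relentlessly maximizing such a self-centered objective
% gives the agent an incentive to acquire ever more computation and resources.
```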

This is a problem when you are stuck in this reinforcement learning mentality, thinking in terms of rewards and punishments. The utility function that you define will tend to be centered around the AI itself rather than humanity, and things have a good chance of going very wrong. This is mostly regardless of what kind of selfishness is pursued, be it knowledge, intelligence, power, control, satisfaction of pseudo-pleasure, etc. In the end, the problem is with the relentless pursuit of a singular, general objective that seeks to benefit only the self. And this cannot be mitigated by any amount of obstruction rules (like Robot Laws or any other kind of laws). The motivation is what matters, and even when you are not pursuing silly motivations like stamp collection, there is a lot of danger involved, not due to our neglect of human values, which are mostly irrelevant at the level at which such an intelligent agent would operate, but due to our design of its motivations.

However, even if benevolent-looking objectives were adopted, it is not altogether clear what sorts of crazy schemes an AI would come up with. In fact, we could not predict the plans of an intelligent agent smarter than the entire humanity. Therefore, it's a gamble at best, and even if we made a life-loving, information-loving, selfless, autonomous AI as I suggested, it might still do a lot of things that many people would disagree with. And although such an AI might not extinguish our species, it might decide, for instance, that it would be best to scan and archive our species for later use. That is, there is no reason to expect that an intelligent agent that is superior to us in every respect would abide by our will.

One might try to imagine many solutions to make such intelligent agents "fool-proof" and "fail-safe", but I suspect that, for the first, human foolishness has unbounded inventiveness, and, for the second, no amount of security methods that we design would make a mind that is smarter than the entire humanity "safe", as we have no way of anticipating every situation that would be created by its massive intelligence and the amount of chaotic change that it would bring. It would simply go out of control, and we would be at the mercy of its evolved personality. I said personality on purpose, because personality seems to be a result of initial motivations, a priori knowledge, and life experience. Since its life experience and intelligence will overshadow any initial programming, we cannot really foresee its future personality. All in all, I think it is great to think about, but it does not look like a practical engineering solution. That's why I simply advise against building fully autonomous intelligent agents. I sometimes say: play God, and you will fail. I tend to think there is a Frankenstein Complex; it is as if there is an incredible urge in many people to create an independent artificial person.

On the other hand, I can imagine how I could build semi-autonomous agents that might be useful for many special tasks, avoiding interference with humans as much as possible, with practical ways to test for their compliance with law and customs. However, personally speaking, I cannot imagine a single reason why I would want to create an artificial person that is superior to me in every respect. Unless, of course, I have elected to bow down to a superior species.

Laurent Orseau: It depends on whether we consider that we will simply leave safety issues aside before creating an AGI, thinking that all will go well, or take into account that we will actually do some research on that.
If a human-level AGI were built today, then we probably wouldn't be ready, and the risks due to the excitement of getting something out of it might be high ("hey look, it can drive the tank, how cool is that?!").

But if we build one and can show the world a simple proof of concept that we do have a (sub-human-level) AGI that will grow to human level, and most researchers acknowledge it, I presume we will start to think hard about the consequences.

Then everything depends on how unfriendly it is.
Humanity is intelligent enough to care for its own life and to try to avoid high risks (most of the time), unless there is some really huge benefit (like supremacy).

Also, if an AGI wants to kill all humans, humanity would not just wait for it, doing nothing.
This might be dangerous for the AI itself too (with EMPs, for example). An AGI also wants to avoid high risks unless there is a huge benefit, so if some compromise is possible, it would be preferable.

If we can build an AGI that is quite friendly (i.e. has "good" goals and wants to cooperate with humans without pressing them too much, or at least has no incentive to kill humans) but may become nasty only if its life is at stake, then I don't think we need to worry *too* much: just be friendly with it as you would be with an ally, and its safety will be paired with your own safety.

So I think the risks of human extinction will be pretty low, as long as we take them into account seriously.

Richard Loosemore: The question is loaded, and I reject the premises. It assumes that someone can build an AI that is both generally intelligent (enough to be able to improve itself) whilst also having a design whose motivation is impossible to prove. That is a false assumption. People who try to build AI systems with the kind of design whose motivation is unstable will actually not succeed in building anything that has enough general intelligence to become a danger.

Monica Anderson: 0.00%. All intelligences must be fallible in order to deal with a complex and illogical world (with only incomplete information available) on a best-effort basis. And if an AI is fallible, then we can unplug it... sooner or later, even if it is "designed to be unstoppable". Ten people armed with pitchforks, and also with ten copies of last year's best AI, can always unplug the latest model AI.

The Interview (Old Questions)

Q1: Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of roughly human-level machine intelligence?

Explanatory remark to Q1:

P(human-level AI by (year) | no wars ∧ no disasters ∧ beneficially political and economic development) = 10%/50%/90%

Leo Pape: For me, roughly human-level machine intelligence means an embodied machine. Given the current difficulties of making such machines, I expect it will take at least several hundred years before human-level intelligence can be reached. Making better machines is not a question of superintelligence, but of long and hard work. Try getting some responses to your questionnaire from roboticists.

Donald Loveland: Experts usually are correct in their predictions but terrible in their timing predictions. They usually see things as coming earlier than the event actually occurs, as they fail to see the obstacles. Also, it is unclear what you mean by human-level intelligence. The Turing test will be passed in its simplest form perhaps in 20 years. Full functional replacements for humans will likely take over 100 years (50% likelihood); 200 years (90% likelihood).

Q2: What probability do you assign to the possibility of human extinction as a result of badly done AI?

Explanatory remark to Q2:

P(human extinction | badly done AI) = ?

(Where 'badly done' = AGI capable of self-modification that is not provably non-dangerous.)

Leo Pape: Human beings are already using all sorts of artificial intelligence in their (war)machines, so it is not impossible that our machines will be helpful in human extinction.

Donald Loveland: Ultimately 95% (and not just by bad AI, but by generalized evolution). In other words, in this sense all AI is badly done AI, for I think it is a natural sequence that AI leads to superior artificial minds, which lead to the eventual evolution, or replacement (depending on the speed of the transformation), of humans into artificial life.

Q3: What probability do you assign to the possibility of a human-level AGI self-modifying its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?

Explanatory remark to Q3:

P(superhuman intelligence within hours | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
P(superhuman intelligence within days | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?
P(superhuman intelligence within < 5 years | human-level AI running at human-level speed equipped with a 100 GB Internet connection) = ?

Leo Pape: I don’t know what "massive superhuman intelligence" is, what it is for, and if it existed how to measure it.

Donald Loveland: I am not sure of the question, or maybe only that I do not understand an answer. Let me comment anyway. I have always felt it likely that the first superhuman intelligence would be a simulation of the human mind, e.g., by advanced neural-net-like structures. I have never thought seriously about learning time, but I guess the first success would come after some years of processing. I am not sure what you mean by "massive". Such a mind as above, coupled to good retrieval algorithms with extensive databases such as those being developed now, could appear to have massive superhuman intelligence.

Q4: Is it important to figure out how to make AI provably friendly to us and our values (non-dangerous), before attempting to solve artificial general intelligence?

Explanatory remark to Q4:

How much money is currently required to mitigate possible risks from AI (to be instrumental in maximizing your personal long-term goals, e.g. surviving this century), less/no more/little more/much more/vastly more?

Leo Pape: Proof is for mathematics, not for actual machines. Even for the simplest machines we have nowadays, we cannot prove any aspect of their operation. If this were possible, airplane travel would be a lot safer.

Donald Loveland: It is important to try. I do not think it can be done. I feel that humans are safe from AI takeover for this century. Maybe not from other calamities, however.

Q5: Do possible risks from AI outweigh other possible existential risks, e.g. risks associated with the possibility of advanced nanotechnology?

Explanatory remark to Q5:

What existential risk (human extinction type event) is currently most likely to have the greatest negative impact on your personal long-term goals, under the condition that nothing is done to mitigate the risk?

Leo Pape: No idea how to compare these risks.

Donald Loveland: Ouch. Now you want me to amplify my casual remark above. I guess that I can only say that I hope we are lucky enough for the human race to survive long enough to evolve into, or be taken over by, another type of intelligence.

Q6: What is the current level of awareness of possible risks from AI, relative to the ideal level?

Leo Pape: People think that current AI is much more capable than it is in reality, and therefore they often overestimate the risks. This is partly due to the movies and partly due to scientists overselling their work in scientific papers and in the media. So I think the risk is highly overestimated.

Donald Loveland: The current level of awareness of AI risks is low. The risk that I most focus on now is the economic repercussions of advancing AI. Together with outsourcing, the advancing automation of the workplace, now dominated by AI advances, is leading to increasing unemployment. This progression will not be monotonic, but each recession will result in more permanently unemployed and weaker recoveries. At some point our economic philosophy could change radically in the U.S., an event very similar to the Great Depression. We may not recover, in the sense of returning to the same economic structure. I think (hope) that democracy will survive.

Q7: Can you think of any milestone such that if it were ever reached you would expect human-level machine intelligence to be developed within five years thereafter?

Leo Pape: I would be impressed if a team of soccer-playing robots could win a match against professional human players. Of course, the real challenge is finding human players who are willing to play against machines (imagine being tackled by a metal robot).

Donald Loveland: A "pure" learning program that won at Jeopardy ???

Q8: Are you familiar with formal concepts of optimal AI design which relate to searches over complete spaces of computable hypotheses or computational strategies, such as Solomonoff induction, Levin search, Hutter's algorithm M, AIXI, or Gödel machines?

Donald Loveland: I have some familiarity with Solomonoff inductive inference but not Hutter's algorithm. I have been retired for 10 years so didn't know of Hutter until this email. Looks like something interesting to pursue.

24 comments


comment by lavalamp · 2012-01-19T17:45:11.311Z · LW(p) · GW(p)

Possibly of interest:

In late 2010, Tromp won a longstanding $1000 bet by beating a computer program (MFoG) at go.

This year (just a few days ago, actually), Tromp lost a match against the current strongest computer program (Zen).

One could say he correctly predicted the year that computers would overtake him at go. I think that had a great deal to do with luck, but still, he definitely was within 5 years of the true time, so it should probably count as some sort of evidence in favor of his forecasts. (I also think MFoG was not stronger than Zen back in 2010, so he may not have been playing the strongest program available.)

Replies from: timtyler
comment by timtyler · 2012-01-19T20:55:22.082Z · LW(p) · GW(p)

Tromp vs Zen - coverage: http://dcook.org/gobet/

comment by examachine · 2012-01-19T21:24:21.528Z · LW(p) · GW(p)

Thanks for the nice interview Alexander. I'm Eray Ozkural by the way, if you have any further questions, I would love to answer them.

I actually think that SIAI is serving a useful purpose when they are highlighting the importance of ethics in AI research and computer science in general.

Most engineers (whether because we are slightly autistic or not, I do not know) have little or no interest in the ethical consequences of their work. I have met many good engineers who work for military firms (firms that thrive on easy money from the military), and not once have they raised a point about ethics. Neither do data mining or information retrieval researchers seem to have such qualms (except when they are pretending to, at academic conferences). At companies like Facebook, they think they have a right to exploit the data they have collected and use it for all sorts of commercial and police-state purposes. Likewise in AI and robotics, I see people cheering whenever military drones or robots are mentioned, as if the automation of warfare were civil or better in some sense because it is higher technology.

I think that at least AGI researchers must understand that they should have no dealings with the military and the government; by such dealings, they may be putting themselves and all of us at risk. Maybe fear tactics will work, I don't know.

On the other hand, I don't think that "friendly" AI is such a big concern, for reasons I mention above, artificial persons simply aren't needed. I have heard the argument that "but someone will build it sooner or later", though there is no reason that person is going to listen to you. The way I see it, it's better to focus on technology right now, so we can have a better sense of applications first. People seem to think that we should equip robots with fully autonomous AGI. Why is that? People have mentioned to me robotic bartenders, robotic geishas, fire rescuers, and cleaners. Well, is that serious? Do you really want a bartender that can solve general relativity problems while cleaning glasses? It's just nonsense. Or does a fire rescuer really need to think about whether it wants to go on to exterminate the human race after extinguishing the fire? The simple answer is that the people who give those examples are not focusing on the engineering requirements of the applications they have in mind. Another example: military robots. They think that military robots must have a sense of morality. I ask you, why is it important to have moral individuals in an enterprise that is fundamentally immoral? All war is murder, and I suggest you to stay away from professional murder business. That is, if you have any true sense of morality.

Instead of "friendly", a sense of "benevolence" may instead be thought, and that might make sense from an ethical theory viewpoint. It is possible to formalize some theories of ethics and implement them on an autonomous AI, however, for all the capabilities that autonomous trans-sapient AI's may possess, I think it is not a good idea to let such machines develop into distinctive personalities of their own, or meddle in human affairs. I think there are already too many people on earth, I don't think we need artificial persons. We might need robots, we might need AI's, but not artificial persons, or AI's that will decide instead of us. I prefer that as humans we remain at the helm. That I say with respect to some totalitarian sounding proposals like CEV. In general, I do not think that we need to replace critical decision making with AI's. Give AI's to us scientists and engineers and that shall be enough. For the rest, like replacing corrupt and ineffective politicians, a broken economic system, social injustice, etc., we need human solutions, because ultimately we must replace some sub-standard human models with harmful motivations like greed and superstitious ideas, with better human models that have the intellectual capacity to understand the human condition, science, philosophy, etc., regardless of any progress in AI. :) In the end, there is a pandemic of stupidity and ignorance that we must cure for those social problems, and I doubt we can cure it with an AI vaccine.

Replies from: timtyler
comment by timtyler · 2012-01-20T00:51:49.679Z · LW(p) · GW(p)

People seem to think that we should equip robots with fully autonomous AGI. Why is that? People have mentioned to me robotic bartenders, robotic geishas, fire rescuers, and cleaners. Well, is that serious? Do you really want a bartender that can solve general relativity problems while cleaning glasses? It's just nonsense.

You can't have a human in the loop all the time - it's too slow. So: many machines in the future will be at least semi-autonomous - as many of them are today.

Probably a somewhat more interesting question is whether machines will be given rights as "people". It's a complex political question, but I expect that eventually they will. Thus things like The Campaign for Robot Rights. The era of machine slavery will someday be looked back on as a kind of moral dark ages - much as the era of human slavery is looked back on today.

I prefer that as humans we remain at the helm.

Right - but what is a human? No doubt the first mammals also wished to "remain at the helm". In a sense they did - though many of their modern descendants don't look much like the mouse-like creatures they all descended from. It seems likely to be much the same with us.

On the other hand, I don't think that "friendly" AI is such a big concern, for reasons I mention above: artificial persons simply aren't needed.

That isn't the SIAI proposal, FWIW. See: http://lesswrong.com/lw/x5/nonsentient_optimizers/

comment by [deleted] · 2012-01-19T19:55:22.517Z · LW(p) · GW(p)

Probably relevant: Richard Loosemore was on the SL4 mailing list, but was banned by Eliezer in 2006.

Replies from: None
comment by [deleted] · 2012-01-20T14:06:27.923Z · LW(p) · GW(p)

Why would that be relevant?

Replies from: None
comment by [deleted] · 2012-01-20T17:09:35.132Z · LW(p) · GW(p)

For one thing, it means that you have been exposed to SIAI and Eliezer's ideas for many years now, which most of the other experts in XiXiDu's survey haven't. For another, your falling out with Eliezer may or may not be relevant. (Given this particular set of questions, though, I'd say probably not.)

Replies from: wedrifid
comment by wedrifid · 2012-01-20T17:22:58.463Z · LW(p) · GW(p)

For another, your falling out with Eliezer may or may not be relevant.

It would certainly be relevant to the selection algorithm that is used to select which people are surveyed.

Replies from: XiXiDu
comment by XiXiDu · 2012-01-20T18:35:02.958Z · LW(p) · GW(p)

It would certainly be relevant to the selection algorithm that is used to select which people are surveyed.

I am not using any well-defined selection algorithm. I am simply writing to everyone who is famous (many citations, good reputation, genuine insights, academic degree, etc.), everyone listed on Wikipedia, everyone I can find by googling certain keywords (AIXI, reinforcement learning, etc.), authors of studies and papers, people who have been suggested to me (e.g. by LessWrong), and everyone those people link to on their sites.

(And also people who "ask" me to post their answers ;-)

ETA: I think that the option to suggest people is a kind of reassurance that I do not cherry-pick. Just tell me who and I will ask. I post every answer I get, as long as the author gives permission to publish it.

comment by Vladimir_Nesov · 2012-01-19T18:02:24.714Z · LW(p) · GW(p)

Phrasing "human-level" as "roughly as good as humans at science etc." (in the questions) is incorrect, because it requires the AI to be human-like in their ability. Instead, it should be something like "roughly as good as humans (or better, perhaps unevenly) at science etc.". That parenthetical is important, as it distinguishes the magical matching of multiple abilities to human level, which respondents rightly object to, from a more well-defined lower bound where you require that it's at least as capable.

Replies from: examachine, XiXiDu
comment by examachine · 2012-01-19T21:35:01.776Z · LW(p) · GW(p)

If human-level is defined as "able to solve the same set of problems that a human can within the same time", I don't think there would be the problem that you mention. The whole purpose of the "human-level" adjective, as far as I can tell, is to avoid the condition that the AI architecture in question is similar to the human brain in any way whatsoever.

Consequently, the set of human-level AIs is much larger than the set of human-level human-like AIs.

comment by XiXiDu · 2012-01-19T18:32:25.085Z · LW(p) · GW(p)

..."roughly as good as humans (or better, perhaps unevenly) at science etc.". That parenthetical is important, as it distinguishes the magical matching of multiple abilities to human level...

Right, thanks. I didn't see that. Will change the questions according to your suggestion.

comment by XiXiDu · 2012-01-21T17:28:17.714Z · LW(p) · GW(p)

NOTE

I made an erroneous correction to the answer by John Tromp to question #5. He originally wrote:

Its importance grows...

I miscorrected it to "It's importance growth" (temporary lapse of sanity). I usually only edit answers when I am sure that I have detected an obvious typo, which has only happened about two times in all of the interviews so far. Upon publishing the answers I ask each person to review them again for possible mistakes and invite them to send me any desired corrections or additions.

I will be more careful in future.

comment by timtyler · 2012-01-19T20:50:31.679Z · LW(p) · GW(p)

I think the answers to Q1 should resemble a log-normal distribution.

David Lucifer apparently agrees. I notice that not all interviewees seem to agree, though.
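
For what it's worth, here is a minimal sketch of the shape being suggested; the median of 30 years and the spread are arbitrary illustration parameters, not anyone's actual answer:

```python
import math
import random
import statistics

random.seed(0)
# Purely illustrative: a log-normal over "years from now until human-level AI"
# with a median of 30 years and sigma = 0.8 on the log scale.
mu, sigma = math.log(30), 0.8
years = sorted(random.lognormvariate(mu, sigma) for _ in range(100_000))

print(f"median:          {statistics.median(years):5.1f} years")
print(f"mean:            {statistics.fmean(years):5.1f} years")
print(f"90th percentile: {years[int(0.9 * len(years))]:5.1f} years")
# The right skew (mean well above median, long upper tail) is the signature
# a log-normal set of answers would show.
```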

comment by MileyCyrus · 2012-01-20T04:08:30.502Z · LW(p) · GW(p)

Why is Monica Anderson giving 0.00% answers? I wouldn't even put P(God created the universe 6,000 years ago) at 0.00%.

Replies from: paulfchristiano
comment by paulfchristiano · 2012-01-20T04:37:34.054Z · LW(p) · GW(p)

0.00% (as opposed to say 0%) presumably indicates that the response has been rounded to the nearest 0.01.

Replies from: timtyler, MileyCyrus
comment by timtyler · 2012-01-20T14:42:39.351Z · LW(p) · GW(p)

It is pretty bad form to round probabilities down to zero. Use log odds, exponentiation, or anything - but that.
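
As a minimal sketch (Python, with arbitrary example probabilities) of why log odds preserve what rounding to zero destroys:

```python
import math

def log_odds(p):
    """Natural-log odds of probability p; finite for any 0 < p < 1."""
    return math.log(p / (1 - p))

for p in [0.5, 0.01, 0.00005, 1e-9]:
    print(f"p = {p:g}  ->  log odds = {log_odds(p):+.2f}")

# A probability rounded down to exactly 0 has no finite log odds at all,
# so the rounding throws away precisely the information log odds keep.
```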

Replies from: paulfchristiano
comment by paulfchristiano · 2012-01-21T02:38:45.393Z · LW(p) · GW(p)

Using log odds fails to achieve the same function, which is to absolve the estimator of having to think about things they deem outlandish.

comment by MileyCyrus · 2012-01-20T04:44:05.758Z · LW(p) · GW(p)

True, but even a .005% chance of young earth creationism is far too low.

Replies from: faul_sname
comment by faul_sname · 2012-01-20T06:15:21.582Z · LW(p) · GW(p)

Really? What probability would you assign to the possibility of each of the following?

* Some form of god exists

* That god created the entire observable universe / is giving us inputs indistinguishable from an outside universe existing

* That god's creation of this universe occurred between 1,000 and 10,000 years ago (as opposed to 5e-44 seconds ago, or 5e+17 seconds ago, or any amount of time in between)

This conjunction allows for many worlds that are not even remotely similar to what most young earth creationists believe, and even so I would estimate the odds at well under 1/20000. 1/20000 says the odds of young earth creationism being true are better than being dealt 4 kings or better in five card stud, or of flipping a fair coin and getting heads 15 times in a row.
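
The arithmetic behind those comparisons can be checked directly; the sketch below assumes "4 kings or better" means four kings, four aces, or a straight flush in a five-card deal (other readings shift the poker figure somewhat):

```python
from math import comb

total_hands = comb(52, 5)                # 2,598,960 possible five-card hands
four_kings = 48                          # KKKK plus any one of the other 48 cards
four_aces = 48
straight_flushes = 40                    # 10 ranks x 4 suits, royal flush included
p_poker = (four_kings + four_aces + straight_flushes) / total_hands

p_heads_15 = 0.5 ** 15                   # fifteen heads in a row
p_estimate = 1 / 20000                   # the estimate under discussion

print(f"four kings or better: {p_poker:.2e}  (about 1 in {1 / p_poker:,.0f})")
print(f"15 heads in a row:    {p_heads_15:.2e}  (1 in 32,768)")
print(f"1/20000:              {p_estimate:.2e}")
```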

Replies from: MileyCyrus
comment by MileyCyrus · 2012-01-20T06:32:23.739Z · LW(p) · GW(p)

I've been converted once. There's a reasonable chance I'll be converted again.

Replies from: faul_sname, JoshuaZ
comment by faul_sname · 2012-01-20T07:00:10.288Z · LW(p) · GW(p)

Yes, and that's an excellent attitude to have (your knowledge of the truth is only as good as your ability to change your mind on conflicting evidence), but the chance that you will be converted to that particular thing is probably quite low. The chance of something equally implausible-sounding being true is much higher than .005%, but that's because there are far more than 20,000 things that sound equally implausible.

Replies from: MileyCyrus
comment by MileyCyrus · 2012-01-20T07:47:48.128Z · LW(p) · GW(p)

You're presuming that only one proposition with .005% probability can be true. But if there are far more than 20,000 propositions that each have around a .005% chance of being true, then there are probably multiple such propositions that are true.

comment by JoshuaZ · 2015-02-09T22:14:05.915Z · LW(p) · GW(p)

I've been converted once. There's a reasonable chance I'll be converted again.

The probability that one will convert to a religion or belief system is not the same as the probability one should give that the belief system is correct. Mental health problems, cognitive biases, and emotional pull are all highly relevant.