Musk on AGI Timeframes

post by Artaxerxes · 2014-11-17T01:36:12.012Z · LW · GW · Legacy · 72 comments


Elon Musk submitted a comment to edge.org a day or so ago, on this article. It was later removed.

The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast-it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. This is not a case of crying wolf about something I don't understand.

I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. The recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen...


Elon has been making noises about AI safety in general lately, including, for example, mentioning Bostrom's Superintelligence on Twitter. But this is the first time I know of that he's offered his own predictions of the timeframes involved, and his are quite soon compared to most.

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.

We can compare this to MIRI's post from May this year, When Will AI Be Created, which argues that it seems reasonable to expect AI to be further away than that, but also that there is a lot of uncertainty on the issue.

Of course, "something seriously dangerous" might not refer to full blown superintelligent uFAI - there's plenty of space for disasters of magnitude in between the range of the 2010 flash crash and clippy turning the universe into paperclips to occur.

In any case, it's true that Musk has more "direct exposure" to those at the frontier of AGI research than the average person, and it's also true that he has an audience, so I think his comments here are of some interest.

 

72 comments


comment by XiXiDu · 2014-11-17T13:18:31.457Z · LW(p) · GW(p)

I wonder what would have been Musk's reaction had he witnessed Eurisko winning the United States Traveller TCS national championship in 1981 and 1982. Or if he had witnessed Schmidhuber's universal search algorithm solving Towers of Hanoi on a desktop computer in 2005.

Replies from: None
comment by [deleted] · 2014-11-18T02:01:39.244Z · LW(p) · GW(p)

I distinctly recall reading SIAI documents from ~2000 claiming they had until between 2005 and 2010...

Replies from: jaime2000, jaime2000
comment by jaime2000 · 2015-01-26T17:22:46.151Z · LW(p) · GW(p)

Also, in a 2002 interview, Eliezer said that "a few years back" before the interview his actual guess at when the singularity would occur was between 2008 and 2015, but he would say that it would occur between 2005 and 2020 in order to give a conservative estimate.

comment by jaime2000 · 2014-11-18T06:16:22.634Z · LW(p) · GW(p)

Eliezer's "The Plan to Singularity" and "Staring into the Singularity" (last updated in 2000 and 2001, respectively) contain numerous references to passive singularity prediction dates and interventionist singularity target dates.

comment by Punoxysm · 2014-11-17T03:58:37.151Z · LW(p) · GW(p)

This is not Musk's field of expertise. I do not give his words special weight.

The fact that he can sit in on some cutting edge tech demos, or even chat with CEOs, still doesn't make him an expert.

I have a technical background in AI; there's still massive hurdles to overcome, and not 5-10 year hurdles. Nothing from Deepmind will "escape onto the internet" any time soon. It is very much grounded in "narrow AI" technologies like machine learning.

I feel pretty confident calling him a Cassandra.

Replies from: Baughn, Artaxerxes, chaosmage, None
comment by Baughn · 2014-11-17T12:12:24.739Z · LW(p) · GW(p)

I feel pretty confident calling him a Cassandra.

I agree with the rest of your comment, but calling him a "Cassandra" means "He's right, but no-one will believe him," and I hope that isn't what you meant!

An applicable morality tale here would be the boy who cried wolf, if he hadn't retracted his post. I don't remember if the boy had a name. (Elon Musk: Inverse Cassandra.)

Replies from: hairyfigment
comment by hairyfigment · 2014-11-17T21:04:03.187Z · LW(p) · GW(p)

Stöffler might have the best name among those who failed to update properly.

comment by Artaxerxes · 2014-11-17T04:07:29.352Z · LW(p) · GW(p)

Well, his comment was deleted, possibly by him, so we should take that into account - maybe he thought he was being a bit overly Cassandra-like too.

The other thing to remember is that Musk's comments reach a slightly different audience than the usual one with regard to AI risk. So it's at least somewhat relevant to see the perspective of the person communicating to those people.

Replies from: examachine
comment by examachine · 2014-11-18T04:30:15.408Z · LW(p) · GW(p)

I think it would actually be helpful if researchers made more experiments with AGI agents showing what could go wrong and how to deal with such error conditions. I don't think that the "social sciences" approach to that works.

Replies from: JoshuaZ
comment by JoshuaZ · 2014-12-17T03:44:00.233Z · LW(p) · GW(p)

This misses the basic problem: most of the ways things can go seriously wrong would only occur after the AGI is already an AGI, and once they've happened one cannot recover.

More concretely, what experiment in your view should they be doing?

Replies from: examachine
comment by examachine · 2015-02-03T19:52:09.079Z · LW(p) · GW(p)

Because life isn't a third grade science fiction movie, where the super scientists who program AI agents are at the same time so incompetent that their experiments break out of the lab and kill everyone. :) Not going to happen. Sorry!

Replies from: JoshuaZ
comment by JoshuaZ · 2015-02-03T19:59:39.772Z · LW(p) · GW(p)

Because life isn't a third grade science fiction movie, where the super scientists who program AI agents are at the same time so incompetent that their experiments break out of the lab and kill everyone.

This seems to be closer to an argument from ridicule than an argument with content. No one has said anything about "super scientists"- I am however mildly curious if you are familiar with the AI Box experiment. Are you claiming that AI aren't going to get to be effectively powerful or are you claiming that you inherently trust that safeguards will be sufficient? Note that these are not the same thing.

Replies from: examachine
comment by examachine · 2015-04-22T12:33:50.352Z · LW(p) · GW(p)

Wow, that's clearly foolish. Sorry. :) I mean I can't stop laughing so I won't be able to answer. Are you people retarded or something? Read my lips: AI DOES NOT MEAN FULLY AUTONOMOUS AGENT.

And AI Box experiment is more bullshit. I can PROGRAM an agent so that it never walks out of a box. It never wants to. Period. Imbeciles. You don't have to "imprison" any AI agent.

So, no, because it doesn't have to be fully autonomous.

Replies from: gjm, JoshuaZ
comment by gjm · 2015-04-22T13:43:07.525Z · LW(p) · GW(p)

AI DOES NOT MEAN FULLY AUTONOMOUS AGENT

For sure. But fully autonomous agents are a goal a lot of people will surely be working towards, no? I don't think anyone is claiming "every AI project is dangerous". They are claiming something more like "AI with the ability to do pretty much all the things human minds do is dangerous", with the background presumption that as AI advances it becomes more and more likely that someone will produce an AI with all those abilities.

I can PROGRAM an agent so that it never walks out of a box. It never wants to.

Again: for sure, but that isn't the point at issue.

One exciting but potentially scary scenario involving AI is this: we make AI systems that are better than us at making AI systems, let them get to work designing their successors, let those successors design their successors, etc. End result (hopefully): a dramatically better AI than we could hope to make on our own. Another, closely related: we make AI systems that have the ability to reconfigure themselves by improving their own software and maybe even adjusting their hardware.

In any of these cases, you may be confident that the AI you initially built doesn't want to get out of whatever box you put it in. But how sure are you that after 20 iterations of self-modification, or of replacing an AI by the successor it designed, you still have something that doesn't want to get out of the box?

There are ways to avoid having to worry about that. We can just make AI systems that neither self-modify nor design new AI systems, for instance. But if we are ever to make AIs smarter than us, the temptation to use that smartness to make better AIs will be very strong, and it only requires one team to try it to expose us to any risks that might ensue.

(One further observation: telling people they're stupid and you're laughing at them is not usually effective in making them take your arguments more seriously. To some observers it may suggest that you are aware there's a weakness in your own arguments. ("Argument weak; shout louder."))

comment by JoshuaZ · 2015-04-22T18:34:24.850Z · LW(p) · GW(p)

I think gjm responded pretty effectively, so I'll just note that if you want to have a dialogue with other humans, it really isn't helpful to spend your time insulting them. It makes them less likely to listen, it makes one less likely to listen oneself (since one sets up a mental block where it is cognitively unpleasant to admit one was wrong), and it makes bystanders who are reading less likely to take your ideas seriously.

By the way Eray, you claimed back last November here that 2018 was a reasonable target for "trans-sapient" entities. Do you still stand by that?

comment by chaosmage · 2014-11-17T17:52:22.994Z · LW(p) · GW(p)

there's still massive hurdles to overcome

Are you talking about what he's talking about - "risk of something seriously dangerous happening" - or are you talking about AGI?

Because I can easily imagine how a narrow AI technology could do a lot of damage, particularly if humans intend it to.

Replies from: Punoxysm
comment by Punoxysm · 2014-11-17T18:05:49.972Z · LW(p) · GW(p)

Well, in terms of out-of-control software produced by an AI company, I feel the two risks, 'something dangerous' and AGI, are pretty closely linked.

Could more limited AI tech make a more damaging computer virus or cause an unexpected confidential data leak? Sure, but that's not the issue at hand.

The most advanced AI today takes input and creates output. It is strictly Oracle AI with nothing present in its architecture that could circumvent that. I don't see that changing anytime soon.

Replies from: chaosmage
comment by chaosmage · 2014-11-17T19:04:36.210Z · LW(p) · GW(p)

Could more limited AI tech make a more damaging computer virus or cause an unexpected confidential data leak? Sure, but that's not the issue at hand.

You're free to disregard those, but I'm not sure Elon Musk is doing that.

The more damaging computer virus or data leak are only two of the possible worries. If a narrow AI simply helps black market chemists find more novel psychoactives than regulation can ever hope to handle, or if bots eliminate just 10% of jobs (say in transportation and retail, to name just the most obvious) leading to massive societal unrest, or if they get better at solving captchas than humans are (which would lead to a massive crisis in anonymous communication and everything that depends on it)... all of these would make Musk's prediction true in my book.

Replies from: Punoxysm, Lumifer
comment by Punoxysm · 2014-11-17T19:25:43.019Z · LW(p) · GW(p)

But these are just technological issues comparable to other mundane ones; just like how 3D printing could make it easy to create weapons, or how the rise of the automobile has created an enormous new cause of death and injury. There's no reason to think it would be outside the scope of ordinary policy-making methods to handle them.

Also, solving captchas is already pretty damn easy. A combination of algorithmic methods and crowdsourcing makes it quite cheap, especially for sites using older/easier captcha versions. Captcha is not a security plan; it's a speedbump that's getting easier to pass all the time (but still, no crisis will result from this).

comment by Lumifer · 2014-11-17T19:11:50.350Z · LW(p) · GW(p)

if they get better at solving Capchas than humans are leading to a massive crisis in anonymous communication and everything that deprends on it

You seem to be much confused :-)

Replies from: chaosmage
comment by chaosmage · 2014-11-17T19:18:55.211Z · LW(p) · GW(p)

Cleared up my grammar - was that the symptom of the perceived confusion, or do you doubt that much depends on anonymous communication?

Replies from: Lumifer
comment by Lumifer · 2014-11-17T19:31:51.795Z · LW(p) · GW(p)

How would breaking captchas break anonymous communications?

Replies from: chaosmage
comment by chaosmage · 2014-11-21T15:46:11.943Z · LW(p) · GW(p)

Some powerful agents (say secret services or the government of... let's say China) would benefit greatly from disrupting anonymous electronic communication as a whole, because that'd force electronic communication to occur in a non-anonymous fashion. People could still encrypt, but it'd at least be known who talked to whom, and that's the kind of information that's apparently worth billions of dollars and a couple of civil rights. Correct?

But how could you do that? Thoroughly anonymized peer-to-peer networks built to defy surveillance (such as Freenet), appear to successfully make de-anonymizing communication very, very, very hard. If you kill or severely impede less than perfect anonymization services such as Tor, anonymity-liking people can just migrate to services such as Freenet, and your plan to disrupt anonymous electronic communication has backfired. Correct?

But what you can do is attack not the anonymity, but the communication inside that anonymity. All you need to do is flood the anonymous medium with disruptive pseudo-communication. Spam is the obvious example, although (especially if there are web-of-trust-like structures between the anonymization and the actual communication) you can't make your bots too easy to identify - but as long as it is still possible, you can simply throw in more and more bots.

How do you identify bots as such? You do Turing tests of course. How do you identify lots and lots of bots as such? You do completely automated Turing tests, or Captchas. Not necessarily the ones we have, which are apparently somewhat solvable with the current state of machine learning, but better ones. Captchas have already improved, because they had to. Surely there can be better ones, or sites can start to require perfect performance on ten different Captchas at once for acceptance as a non-bot, or charge (even anonymously, using something like bitcoin) for the privilege of getting to take the Captcha. But once you get to the level where narrow AIs can solve Captchas as successfully as humans, the floodgates are open.

And then anyone who benefits from disrupting all anonymous electronic communication can - and will - do so. Non-anonymity will be promoted as "a small price to pay" to get rid of the bot plague, and everyone will live happily ever after - except those in the vast majority of countries that do not have a First Amendment and are scared of their governments for very good reasons. They'll retreat into non-electronic communication of course, but that can't be the way forward, can it?

Replies from: Lumifer
comment by Lumifer · 2014-11-21T16:41:39.921Z · LW(p) · GW(p)

Your argument is basically that anonymous networks can be spammed into uselessness. That looks theoretically possible but practically difficult; still, it's not the main problem with your argument. The biggest hole, from my point of view, is that you think that captchas are a good (or even the only) anti-spam measure. They are not.

And, of course, email is a pseudonymous P2P network which used to have a large spam problem and which, by now, has largely solved it.

Here is a good write-up of how spam wars work in real life.

Replies from: chaosmage
comment by chaosmage · 2014-11-21T17:31:47.379Z · LW(p) · GW(p)

Spam wars in real life use mechanisms that don't work in fully anonymous networks like Freenet. You can't filter by IP in a network without IPs.

Captchas are obviously not a good (or even the only) anti-spam measure. But inside anonymous networks, they're one of the few things that work. Webs of Trust, which I explicitly mentioned, are another - they just don't scale well.

comment by [deleted] · 2014-11-18T11:06:13.961Z · LW(p) · GW(p)

DeepMind is very definitely AGI in the sense of the domain of problems its learners can learn and its agents can solve. If DeepMind is easily controlled and not very dangerous, that's not evidence for AGI being further away than we thought before we looked at DeepMind, it's evidence for AGI being more easily controlled than we thought before we looked at DeepMind.

Real AGI was never going to look like magic genies, so we should never fault real-life AI work for failing at genie.

comment by lukeprog · 2014-11-17T01:44:10.114Z · LW(p) · GW(p)

Fascinating. Maybe he's been talking to Shane Legg of DeepMind, who also has much sooner timelines than I do.

Replies from: cursed
comment by cursed · 2014-11-18T02:25:06.946Z · LW(p) · GW(p)

Do you mind revealing what Shane's timelines are, and the probability that he thinks that he'll play a role in AGI?

Replies from: lukeprog
comment by lukeprog · 2014-11-18T02:52:29.430Z · LW(p) · GW(p)

Here.

comment by Daniel_Burfoot · 2014-11-17T16:48:06.074Z · LW(p) · GW(p)

How do you know the comment was actually from Musk? My guess is that some crank on the internet sent Edge an email claiming to be from Musk, and they published it without doing an identity check. Then the real Musk found out and asked for it to be taken down. The sentence "10 years at most" seems especially stupidly overconfident and inarticulate (and thus unlike something Musk would write).

Replies from: Artaxerxes, Artaxerxes, Vika
comment by Artaxerxes · 2014-11-18T11:09:47.957Z · LW(p) · GW(p)

Just a quick update: according to this article, the comment was genuine, and Musk will write something longer on the topic.

comment by Artaxerxes · 2014-11-17T17:16:02.774Z · LW(p) · GW(p)

You know, I suppose it's possible. He does have his own bio on the site, though. And the "reality club" thing Edge does, where famous people comment, is part of their content - I would expect them to be open about that kind of mistake and apologise for it, but perhaps that's hoping for too much.

How likely do you think it is that it wasn't the real Musk?

comment by Vika · 2014-11-17T18:54:25.007Z · LW(p) · GW(p)

Hmmm... This does seem like the most plausible explanation for why the comment was removed - I don't see why Musk would retract his own statement otherwise.

comment by Alejandro1 · 2014-11-17T02:56:04.615Z · LW(p) · GW(p)

The exposure of the general public to the concept of AI risk probably increased exponentially a few days ago, when Stephen Colbert mentioned Musk's warnings and satirized them. (Unrelatedly but also of potential interest to some LWers, Terry Tao was the guest of the evening).

Replies from: feanor1600
comment by feanor1600 · 2014-11-19T03:36:13.677Z · LW(p) · GW(p)

Warning: segment contains Colbert's version of the basilisk.

Replies from: artemium
comment by artemium · 2014-11-24T21:23:16.479Z · LW(p) · GW(p)

"We're sorry but this video is not available in your country." We'll I guess I'm safe. Living in a shitty country has some advantages.

Replies from: artemium
comment by artemium · 2014-11-24T21:52:30.666Z · LW(p) · GW(p)

"We're sorry but this video is not available in your country." We'll I guess I'm safe :-).

comment by Tenoke · 2014-11-17T23:32:05.213Z · LW(p) · GW(p)

So what is actually going on at Deepmind right now? Should I be updating on this - is there new data in his estimate (i.e. something going on at Deepmind that is more worrying than what we know from other sources)?

Replies from: None
comment by [deleted] · 2014-11-18T11:10:46.765Z · LW(p) · GW(p)

Neural Turing Machines are quite interesting, as is their continued work on deep reinforcement learning.

comment by Rob Bensinger (RobbBB) · 2022-06-01T13:22:38.658Z · LW(p) · GW(p)

The prediction was wrong, happily!

Musk now says he'll "be surprised if we don't have AGI" by 2029. But this seems significantly less compelling given that his last attempt to time AGI (or "seriously dangerous" AGI-related things) failed.

Replies from: Artaxerxes
comment by Artaxerxes · 2022-10-31T00:14:33.560Z · LW(p) · GW(p)

The "10 years at most" part of the prediction is still open, to be fair.

comment by chaosmage · 2014-11-17T17:53:25.325Z · LW(p) · GW(p)

Kudos to you or whoever saved that comment into an image before it was deleted.

Did you see it on the site, though, or did you only see the image? Because I could easily photoshop such an image and claim it is a legit comment that just happened to be deleted...

comment by V_V · 2014-11-17T18:11:46.458Z · LW(p) · GW(p)

Why would Elon Musk have direct exposure to Deepmind?

EDIT:

Ok, he is an investor. I had missed that.

comment by XiXiDu · 2014-11-18T14:22:14.355Z · LW(p) · GW(p)

The mainstream press has now picked up on Musk's recent statement. See e.g. this Daily Mail article: 'Elon Musk claims robots could kill us all in FIVE YEARS in his latest internet post…'

Replies from: Artaxerxes
comment by Artaxerxes · 2014-11-18T14:41:26.445Z · LW(p) · GW(p)

This article apparently explains the deletion - it wasn't meant to be a comment for the website. I hope the article is accurate and Musk soon writes something longer explaining his viewpoint.

comment by LRS · 2014-11-30T10:17:52.007Z · LW(p) · GW(p)

I suspect that the marginal value of a dollar to Elon Musk is close to zero, which makes it difficult to test the sincerity of his beliefs by offering a bet.

I would structure it like this: I give him $100 right now, and if there's no AGI in 10 years, he gives me a squillion dollars, or some similarly large amount that reflects his confidence in his prediction. This way, he cannot claim that a fooming AI that renders dollars worthless will deny him the benefit of a win, because he gets to enjoy my $100 right now.
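A rough sketch of the implied arithmetic, assuming hypothetically that Musk assigns some probability p to AGI arriving within 10 years, and ignoring both discounting and the near-zero marginal value of a dollar to him:

    \mathrm{EV}_{\mathrm{Musk}} = 100 - (1 - p)\,X \;\ge\; 0
    \quad\Longrightarrow\quad X \;\le\; \frac{100}{1 - p}
    \qquad\bigl(p = 0.9 \Rightarrow X \le \$1{,}000,\quad p = 0.99 \Rightarrow X \le \$10{,}000\bigr)

So a "squillion"-sized X is only a fair payout for him if p is very close to 1, which is exactly what would make the bet informative about his sincerity.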

Elon is unlikely to accept this wager; would anyone like to accept it in his place?

comment by ESRogs · 2014-11-25T10:11:24.072Z · LW(p) · GW(p)

Note that Stuart Russell has now submitted a comment. It begins with this quote from Leo Szilard:

We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief.

http://edge.org/conversation/the-myth-of-ai#26015

comment by Gunnar_Zarncke · 2014-11-18T10:35:33.328Z · LW(p) · GW(p)

it is growing at a pace close to exponential.

I wonder how he (or anybody else) measures the growth of knowledge. Are there any sensible metrics besides the amount of paper created? I understand that the number of published pages is a measure, as is the number of patents, but I don't think these are useful proxies for knowledge.

What other measures might be used?

  • Complexity measures of the created knowledge: depth of the graph of citations between papers (assuming each citation adds something; might be weighted by the number of outgoing refs) - a minimal sketch of computing this appears at the end of this comment

  • Complexity of the created artifacts (programs, machines). E.g. number of abstraction layers. Or other standard complexity measures thereof.

  • Speedup achieved by the methods when applying them to optimize tasks. (Exponential speedups in this domain could result from self-optimization and would be dangerous; I really hope those are not what the OP implies.)

  • Ability of the research work (persons or software) to accurately describe/model real-world phenomena of a given (and exponentially growing) size. This, I think, is the most likely candidate.

  • Simpler quantities: number of researchers in a field, number of conferences, number of emails exchanged about a topic

  • And of course subjective complexity. I guess we are bound to label as "exponential" anything that grows faster than we can keep track of.
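A minimal sketch of the first metric above (citation-graph depth), assuming the citation network is a DAG represented as a dict mapping each paper to the papers it cites; the weighting by outgoing refs is left out:

    from functools import lru_cache

    def citation_depth(cites: dict) -> int:
        # `cites` maps each paper to the papers it cites.
        # Returns the length of the longest citation chain in the DAG.
        @lru_cache(maxsize=None)
        def depth(paper):
            # A paper that cites nothing contributes a chain of length 1 (itself).
            return 1 + max((depth(c) for c in cites.get(paper, ())), default=0)
        return max((depth(p) for p in cites), default=0)

    # Hypothetical toy example: C builds on B, which builds on A.
    print(citation_depth({"A": [], "B": ["A"], "C": ["B", "A"]}))  # -> 3

Whether the resulting number tracks knowledge rather than publication habits is, of course, exactly the question raised above.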

comment by examachine · 2014-11-17T14:23:48.648Z · LW(p) · GW(p)

We believe we can achieve trans-sapient performance by 2018, so he is not that far off the mark. But the dangers as such - those are highly overblown, exaggerated, pseudo-scientific fears, as always.

Replies from: Lumifer, CarlShulman, Artaxerxes, JoshuaZ
comment by Lumifer · 2014-11-17T16:20:06.314Z · LW(p) · GW(p)

We believe we can achieve trans-sapient performance by 2018

What does "trans-sapient performance" mean?

Replies from: examachine
comment by examachine · 2014-11-18T03:50:32.286Z · LW(p) · GW(p)

Well, achieving better than human performance on a sufficiently wide benchmark. Preparing that benchmark is almost as hard as writing the code, it seems. Of course, any such estimates must be taken with a grain of salt, but I think that conceptually solid AGI projects (including OpenCog) have a significant chance by that time, although previously I have argued that neuromorphic approaches are likely to succeed by 2030 at the latest.

Replies from: Lumifer
comment by Lumifer · 2014-11-18T05:29:42.903Z · LW(p) · GW(p)

achieving better than human performance on a sufficiently wide benchmark

You understand that you just replaced some words with others without clarifying anything, right? "Sufficiently wide" doesn't mean anything.

Replies from: examachine
comment by examachine · 2014-11-19T01:10:53.767Z · LW(p) · GW(p)

I cannot possibly disclose confidential research here, so you will have to be content with that.

At any rate, believing that human-level AI is an extremely dangerous technology is pseudo-scientific.

Replies from: Salemicus
comment by Salemicus · 2014-11-26T17:31:41.527Z · LW(p) · GW(p)

Humans can be extremely dangerous. Why wouldn't a human-level AI be?

comment by CarlShulman · 2014-11-17T17:37:49.156Z · LW(p) · GW(p)

By "we" do you mean Gök Us Sibernetik Ar & Ge in Turkey? How many people work there?

Replies from: examachine
comment by examachine · 2014-11-18T03:54:27.514Z · LW(p) · GW(p)

Confidential stuff; it could be an army of 1000 hamsters. :) To be honest, I don't think teams larger than 5-6 people are good for this kind of work. But please note that we are doing absolutely nothing that is dangerous in the slightest. It is a tool, not even an agent. Although I will be working on AGI agent code as soon as we finish the next version of the "kernel", to demonstrate how well our code can be applied to robotics problems. Demo or die.

comment by Artaxerxes · 2014-11-17T14:54:36.161Z · LW(p) · GW(p)

What did you think of Bostrom's recent book?

Replies from: examachine
comment by examachine · 2014-11-18T04:05:42.239Z · LW(p) · GW(p)

I didn't read it, but I heard that Elon Musk has been badly influenced by it. I know of Bostrom's papers prior to the book, I've taken a look at the content, and I know the material being discussed. I think he is vastly exaggerating the risks from AI technology. AI technology will be as pervasive as the internet; it is a very spook/military-like mindset to believe that it will only be owned by a few powerful entities who will wield it to dominate the world, or that the developers will be so extremely ignorant that they will have AI agents escaping their labs and killing people. Those are merely bad science fiction scenarios, like the ones in Hollywood movies - not even good science fiction, because he is talking about very improbable events. An engineer who can build an AI smarter than himself probably isn't that stupid or reckless. Terminator/Matrix scenarios won't happen; they will remain in the movies.

Moreover, as a startup person, I think he doesn't understand the computer industry well and fails to see the realistic (not comic-book) applications of AI technology. AGI researchers must certainly do a better job of revealing the future applications. That will help them find better funding, attract public attention, and, of course, obtain public approval.

Thus, let me state it: AI really is the next big thing (after wearables/VR/3D printing, stuff that's already taking off, I would predict). Right now it's like a few years before the Mosaic browser showed up. I think that in AI there will be something for everybody, just like the internet. And Bostrom's fears are completely irrational and unfounded, it seems to me. People should cheer up if they think they can have the first true AI in just 5 years.

Replies from: The_Jaded_One
comment by The_Jaded_One · 2014-11-20T21:15:43.870Z · LW(p) · GW(p)

+1 for entertainment value.

EDIT: I am not agreeing with examachine's comment, I just think it's hilariously bad.

Replies from: examachine
comment by examachine · 2014-11-26T17:14:08.522Z · LW(p) · GW(p)

It is entertaining indeed that a non-computer-scientist entrepreneur (Elon Musk) is emotionally influenced by the incredibly fallacious pseudo-scientific bullshit of Nick Bostrom, another non-computer scientist, and that people are talking about it.

So let's see: a clown writes a book, and an investor thinks it is a credible book when it is not. What makes this hilarious is people's reactions to it. A ship of fools.

Replies from: artemium
comment by artemium · 2014-11-26T19:09:04.470Z · LW(p) · GW(p)

Do you have any serious counterarguments to the ideas presented in Bostrom's book? The majority of top AI experts agree that we will have human-level AI by the end of this century, and people like Musk, Bostrom and the MIRI guys are just trying to think about the possible negative impacts that this development may have on humans. The problem is that the fate of humanity may depend on the actions of non-human actors, who will likely have utility functions incompatible with human survival, and it is perfectly rational to be worried about that.

Those ideas are definitely not above criticism, but they also should not be dismissed based on a perceived lack of expertise. Someone like Elon Musk actually has direct contact with people who are working on some of the most advanced AI projects on earth (Vicarious, DeepMind), so he certainly knows what he is talking about.

Replies from: examachine
comment by examachine · 2014-11-28T03:41:56.259Z · LW(p) · GW(p)

I do. Nick Bostrom is a creationist idiot (the simulation "argument" is creationism), with absolutely no expertise in AI, who thinks the doomsday argument is true. Funnily enough, he does claim on his book cover to be an expert in several extremely difficult fields including AI and computational neuroscience, despite the lack of any serious technical publications. That's usually a red flag indicating a charlatan. Despite whatever you might think, a "social scientist" is ill-equipped to say anything about AI. That's enough for now. For a more detailed exposition, I am afraid you will have to wait a while longer. You will know it when you see it; stay tuned!

comment by JoshuaZ · 2014-12-17T03:45:24.384Z · LW(p) · GW(p)

What would you be willing to bet that nothing remotely resembling that happens before 2020? 2025?

comment by XiXiDu · 2014-11-17T12:33:57.011Z · LW(p) · GW(p)

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.

If he is seriously convinced that doom might be no more than 5 years away, then I share his worries about what an agent with massive resources at its disposal might do in order to protect itself. It's just that in my case this agent is called Elon Musk.

Replies from: None, Artaxerxes
comment by [deleted] · 2014-11-18T15:24:29.864Z · LW(p) · GW(p)

I'm grabbing the popcorn. This is gonna be good. Biosphere 2 level wealthy person pet project? Or something more subtle?

comment by Artaxerxes · 2014-11-17T13:12:30.416Z · LW(p) · GW(p)

What are you worried he might do?

If he believes what he's said, he should really throw lots of money at FHI and MIRI. Such an action would be helpful at best, harmless at worst.

Replies from: XiXiDu, artemium
comment by XiXiDu · 2014-11-17T15:44:31.852Z · LW(p) · GW(p)

What are you worried he might do?

Start a witch hunt against the field of AI? Oh wait...he's kind of doing this already.

If he believes what he's said, he should really throw lots of money at FHI and MIRI.

Seriously? How much money do they need to solve "friendly AI" within 5-10 years? Or else, what are their plans? If what MIRI imagines will happen in at most 10 years then I strongly doubt that throwing money at MIRI will make a difference. You'd need people like Musk who can directly contact and convince politicians, or summon up the fears of the general public in order to force politicians to notice and take action.

Replies from: Artaxerxes, ArisKatsaris
comment by Artaxerxes · 2014-11-17T16:21:14.237Z · LW(p) · GW(p)

I mean more that his views seem to line up a lot more closely with MIRI/FHI's than most AI researchers' do. Hell, his views are closer to MIRI's than Thiel's are at this point.

How much money do they need to solve "friendly AI" within 5-10 years?

Good question. I'd like to see what they could do with 10x what they have now, for a start.

If what MIRI imagines will happen in at most 10 years then I strongly doubt that throwing money at MIRI will make a difference.

I don't even think many of those at MIRI think that they would have much chance if they were only given 10 years, so you're in good company there.

comment by ArisKatsaris · 2014-11-19T03:28:58.853Z · LW(p) · GW(p)

Start a witch hunt against the field of AI? Oh wait...he's kind of doing this already.

You believe he's calling for the execution, imprisonment or other punishment of AI researchers? I doubt it.

So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?

Replies from: XiXiDu
comment by XiXiDu · 2014-11-19T16:09:42.681Z · LW(p) · GW(p)

So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?

What I meant is that he and others will cause the general public to adopt a perception of the field of AI comparable to the public perception of GMOs, vaccination, nuclear power, etc.: a non-evidence-backed fear of something that is generally benign and positive.

He could have used his influence and reputation to directly contact AI researchers, or e.g. hold a quarterly conference about risks from AI. He could have talked to policy makers about how to ensure safety while promoting the positive aspects. There is a lot one could do. But making crazy statements in public about summoning demons and comparing AI to nukes is just completely unwarranted given the current state of evidence about AI risks, and will probably upset lots of AI people.

You believe he's calling for the execution, imprisonment or other punishment of AI researchers?

I doubt that he is that stupid. But I do believe that certain people, if they were to seriously believe in doom by AI, would consider violence to be an option. John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann were still around and thought that Google would launch a doomsday device within 5-10 years, he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration were highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.

The problem here is not that it would be wrong to deactivate a doomsday device forcefully, if necessary, but rather that there are people out there who are stupid enough to use force unnecessarily or decide to use force based on insufficient evidence (evidence such as claims made by Musk).

ETA: Just take those people who destroy GMO test fields. Musk won't do something like that. But other people, who would commit such acts, might be inspired by his remarks.

Replies from: artemium
comment by artemium · 2014-11-24T21:37:00.896Z · LW(p) · GW(p)

John von Neumann was in favor of a preventive nuclear attack against Russia. Do you think that if von Neumann were still around and thought that Google would launch a doomsday device within 5-10 years, he would refrain from using violence if he thought that only violence could stop them? I believe that if the U.S. administration were highly confident that e.g. some Chinese lab was going to start an intelligence explosion by tomorrow, they would consider nuking it.

There is some truth to that, especially given how crazy von Neumann was. But I'm not sure anyone would launch a pre-emptive nuclear attack on another country because of AGI research. I mean, these countries already have nukes, a pretty solid doomsday weapon, so I don't think that adding another superweapon to the arsenal will change the situation. Whether you are blown to bits by a Chinese nuke or turned into paperclips by a Chinese-built AGI doesn't make much difference.

comment by artemium · 2014-11-24T21:42:01.811Z · LW(p) · GW(p)

He will probably try to buy influence in every AI company he can find. There are limits to this strategy, though. I think raising public awareness of this problem and donating money to MIRI and FHI would also help.

BTW, someone should make a movie where Elon Musk becomes Iron Man and then accidentally develops uFAI... oh wait