I know when the Singularity will occur
post by PhilGoetz · 2013-09-06T20:04:18.560Z · LW · GW · Legacy · 28 comments
More precisely, if we suppose that sometime in the next 30 years, an artificial intelligence will begin bootstrapping its own code and explode into a super-intelligence, I can give you 2.3 bits of further information on when the Singularity will occur.
Between midnight and 5 AM, Pacific Standard Time.
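(A quick check of that figure, assuming the relevant prior is uniform over the hours of the day: narrowing a 24-hour window down to a 5-hour one conveys

$$ \log_2\left(\frac{24}{5}\right) \approx 2.26 \ \text{bits}, $$

which rounds to 2.3.)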
Why? Well, first, let's just admit this: The race to win the Singularity is over, and Google has won. They have the world's greatest computational capacity, the most expertise in massively distributed processing, the greatest collection of minds interested in and capable of work on AI, the largest store of online data, the largest store of personal data, and the largest library of scanned, computer-readable books. That includes textbooks. Like, all of them. All they have to do is subscribe to Springer-Verlag's online journals, and they'll have the entire collected knowledge of humanity in computer-readable format. They almost certainly have the biggest research budget for natural language processing with which to interpret all those things. They have two of the four smartest executives in Silicon Valley.[1] Their corporate strategy for the past 15 years can be approximated as "Win the Singularity."[2] If someone gave you a billion dollars today to begin your attempt, you'd still be 15 years and about $299 billion behind Google. If you believe in a circa-2030 Singularity, there isn't enough time left for anybody to catch up with them.
(And I'm okay with that, considering that the other contenders include Microsoft and the NSA. But it alarms me that Google hasn't gone into bioinformatics or neuroscience. Apparently their plans don't include humans.)
So the first bootstrapping AI will be created at Google. It will be designed to use Google's massive distributed server system. And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
A more important implication is that this scenario decreases the possibility of FOOM. The AI will be designed to run on the computational resources available to Google, and they'll build and test it as soon as they think that is just enough computational power for it to run. That means its minimum computational requirements will be within one or two orders of magnitude of the combined power of all the computers on Earth. (We don't know how many servers Google has, but we know they installed their one millionth server on July 9, 2008. Google may—may—own less than 1% of the world's CPU power, but connectivity within its system is vastly superior to that between other internet servers, let alone a botnet of random compromised PCs.)
So when the AI breaks out of the computational grid composed of all the Google data centers in the world, into the "vast wide world of the Internet", it's going to be very disappointed.
Of course, the distribution of computational power will change before then. Widespread teraflop GPUs could change this scenario completely in the next ten years.
In which case Google might take a sudden interest in GPUs...
ADDED:
Do I really believe all that? No. I do believe "Google wins" is a likely scenario—more likely than "X wins" for any other single value of X. Perhaps more importantly, you need to factor the size of the first AI into your FOOM-speed probability distribution, because if the first AI is built by a large, well-funded organization, that changes the FOOM paths open to it.
AI FOOMs if it can improve its own intelligence in one way or another. The people who build the first AI will make its algorithms as efficient as they are able to. For the AI to make itself more intelligent by scaling, it has to get more resources, while to make itself more intelligent by algorithm redesign, it will have to be smarter than the smartest humans who work on AI. The former is trivial for an AI built in a basement, but severely limited for an AI brought to life at the direction of Page and Brin.
The first "human-level" AI will probably be roughly as smart as a human, because people will try to build them before they can reach that level, the distribution of effectiveness-of-attempted-AIs will be skewed hard left, with many failures before the first success, and the first success will be a marginal improvement over a previous failure. That means the first AI will have about the same effective intelligence, regardless of how it's built.
As smart as "a human" is closer to "some human" than to "all humans". The first AI will almost certainly be at most as intelligent as the average human, and considerably less intelligent than its designers. But for an AI to make itself smarter through algorithm improvement requires the AI to have more intelligence than the smartest humans working on AI (the ones who just built it).
The easier, more-likely AI-foom path is: Build an AI as smart as a chimp. That AI grabs (or is given) orders of magnitude of resources, and gets smarter simply by brute force. THEN it redesigns itself.
That scaling-foom path is harder for AIs that start big than AIs that start small. This means that the probability distribution for FOOM speed depends on the probability distribution for the amount of dollars that will be spent to build the first AI.
Remember you are Bayesians. Your objective is not to accept or reject the hypothesis that the first AI will be developed according to this scenario. Your objective is to consider whether these ideas change the probability distribution you assign to FOOM speed.
The question I hope you'll ask yourself now is not, "Won't data centers in Asia outnumber those in America by then?", nor, "Isn't X smarter than Larry Page?", but, "What is the probability distribution over <capital investment that will produce the first average-human-level AI>?" I expect that the probabilities will be dominated by large investments, because the probability distribution over "capital investment that will produce the first X" appears to me to be dominated in recent decades by large investments, for similarly-ambitious X such as "spaceflight to the moon" or "sequence of the human genome". A very clever person could have invented low-cost genome sequencing in the 1990s and sequenced the genome him/herself. But no very clever person did.
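One way to picture what "dominated by large investments" means, as a toy sketch rather than a forecast: if the capital behind individual AI attempts followed a heavy-tailed (power-law) distribution, the few biggest budgets would account for most of the spending. The parameters below are made up for illustration.

```python
import numpy as np

# Toy sketch (made-up parameters, not data): draw hypothetical per-project
# AI budgets from a heavy-tailed Pareto (power-law) distribution and see
# how much of the total the biggest spenders account for.
rng = np.random.default_rng(0)
alpha = 0.9                                       # assumed tail exponent; smaller => heavier tail
budgets = rng.pareto(alpha, size=10_000) + 1.0    # arbitrary units, minimum budget = 1

budgets.sort()
top_share = budgets[-10:].sum() / budgets.sum()
print(f"share of all spending held by the 10 largest of 10,000 projects: {top_share:.0%}")
# Typically a handful of big budgets account for a large fraction of the
# total: the sense in which large investments "dominate" the distribution.
```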
1. I'm counting Elon Musk and Peter Thiel as the others.
2. This doesn't need to be intentional. Trying to dominate information search should look about the same as trying to win the Singularity. Think of it as a long chess game in which Brin and Page keep making good moves that strengthen their position. Eventually they'll look around and find they're in a position to checkmate the world.
28 comments
comment by katydee · 2013-09-06T06:23:23.061Z · LW(p) · GW(p)
This post seems more appropriate for the Discussion section.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2013-09-06T12:41:38.001Z · LW(p) · GW(p)
Why?
Replies from: None, katydee
↑ comment by [deleted] · 2013-09-06T14:09:52.825Z · LW(p) · GW(p)
Your "narrative hook" is the only claim with even a tiny amount of real substance behind it. The essay was a minimal waste of my time only because I realized it was armchair-quality before the middle of the third paragraph.
(You asked.)
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2013-09-06T20:14:40.352Z · LW(p) · GW(p)
You're not claiming that it belongs in Discussion due to content, but merely based on your perception of its quality. I don't think that's what people usually mean when they say something is more appropriate for Discussion.
When you say everything else in the post has no substance, I wonder what you mean. Do you dispute that Google has large data centers? That they have a market cap of $300 billion? That they have the expertise I listed? That such expertise is relevant? That computational resources matter? What, exactly, would qualify as "substance" for a discussion of this type? Can you point to any other speculations about the timing of the Singularity that have more substance?
Replies from: 9eB1, None
↑ comment by 9eB1 · 2013-09-06T23:38:47.571Z · LW(p) · GW(p)
I think the epistemic status of the typical main post (now, standards may have been different in the past) is "believed" while the epistemic status of the original post seems to be "musing for reaction" based on your statements in this thread. I think it would be possible for it to be rewritten in such a way that fewer people would complain about it being in main without actually changing the core of the information it contains.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2013-09-07T01:41:54.192Z · LW(p) · GW(p)
That is admirably precise advice, and I appreciate it. But I think Bayesians should not get into the habit of believing things. Things clear-cut enough for me to believe them are never interesting to me. In speculative matters such as the singularity, musings are more useful than provable beliefs, as they cover far more probability, and forming opinions about them will probably reduce your error by more.
↑ comment by [deleted] · 2013-09-07T03:50:17.703Z · LW(p) · GW(p)
You're not claiming that it belongs in Discussion due to content
Wrong.
Your "narrative hook" is the only claim with even a tiny amount of real substance behind it.
This is a claim about its content.
I don't think that's what people usually mean when they say something is more appropriate for Discussion.
Even if I were arguing that it belongs in Discussion merely on the basis of quality (which I am not), this is what many actual people have actually meant.
What, exactly, would qualify as "substance" for a discussion of this type? Can you point to any other speculations about the timing of the Singularity that have more substance?
Kurzweil's process of fitting everything to an exponential curve. Gwern's essay on computational overhang. Fermi estimates of the processing power necessary to emulate a human brain, and possible corrections to Moore's law in the post ~5nm world.
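(As a sketch of what that last kind of estimate looks like, using commonly cited round numbers that are themselves rough assumptions rather than measurements:)

```python
# Back-of-envelope estimate of brain-emulation compute, using commonly
# cited round numbers (rough assumptions, not measurements):
neurons           = 1e11   # ~10^11 neurons in a human brain
synapses_per_cell = 1e4    # ~10^4 synapses per neuron
signals_per_sec   = 1e2    # ~100 signals per synapse per second
ops_per_second = neurons * synapses_per_cell * signals_per_sec
print(f"~{ops_per_second:.0e} synaptic events per second")  # prints ~1e+17
```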
I believe this makes my position sufficiently clear. As the post has already been moved to Discussion, any further clarification is kind of pointless.
comment by So8res · 2013-09-06T06:00:03.009Z · LW(p) · GW(p)
This seems overconfident and somewhat misinformed to me.
First of all, it seems reasonable to wager that Google servers already do a lot of work when traffic is light: log aggregation, data mining, etc.
Secondly, you make the assumption that the AI team would run the AI on Google's "empty space". That implies a huge number of unspoken assumptions. I'd expect a Google AI team to have a (huge) allocation of resources that they utilize at all times.
Thirdly, it's quite a leap to jump from "there's slightly less server load at these hours" to "therefore an AI would go super-intelligent in these hours". To make such a statement at your level of expressed confidence (with little to no support) strikes me as brazen and arrogant.
Finally, I don't see how it decreases risk of a foom. If you already believed that a small AI could foom given a large portion of the world's resources, then it seems like an AI that starts out with massive computing power should foom even faster.
(The "fooming" of a brute-force AI with a huge portion of the world's resources involves sharply reducing resource usage while maintaining or expanding resource control.)
If you're already afraid of small kludge AIs, shouldn't you be even more afraid of large kludge AIs? If you believe that small-AI is both possible and dangerous, then surely you should be even more afraid of large-AI searching for small-AI with a sizable portion of the world's resources already in hand. It seems to me like an AI with all of Google's servers available is likely to find the small-AI faster than a team of human researchers: it already has extraordinary computing power, and it's likely to have insights that humans are incapable of.
If that's the case, then a monolithic Google AI is bad news.
(Disclosure: I write software at Google.)
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2013-09-06T12:46:35.014Z · LW(p) · GW(p)
First of all, it seems reasonable to wager that Google servers already do a lot of work when traffic is light: log aggregation, data mining, etc.
That's why I said "one or two orders of magnitude".
Thirdly, it's quite a leap to jump from "there's slightly less server load at these hours" to "therefore an AI would go super-intelligent in these hours". To make such a statement at your level of expressed confidence (with little to no support) strikes me as brazen and arrogant.
Thank you. What, you think I believe what I said? I'm a Bayesian. Show me where I expressed a confidence level in that post.
If you already believed that a small AI could foom given a large portion of the world's resources, then it seems like an AI that starts out with massive computing power should foom even faster.
One variant of the "foom" argument is that software that is "about as intelligent as a human" and runs on a desktop can escape into the Internet and augment its intelligence not by having insights into how to recode itself, but just by getting orders of magnitude more processing power. That then enables it to improve its own code, starting from software no smarter than a human.
If the software can't grab many more computational resources than it was meant to run with, because those resources don't exist, that means it has to foom on raw intelligence. That raises the minimum intelligence needed for FOOM to the superhuman level.
If you believe that small-AI is both possible and dangerous, then surely you should be even more afraid of large-AI searching for small-AI with a sizable portion of the world's resources already in hand.
No. That's the point of the article! "AI" indicates a program of roughly human intelligence. The intelligence needed to count as AI, and to start an intelligence explosion, is constant. Small AI and large AI have the same level of effective intelligence. A small AI needs to be written in a much more clever manner, to get the same performance out of a desktop as out of the Google data centers. When it grabs a million times more computational power, it will be much more intelligent than a Google AI that started out with the same intelligence when running on a million servers.
Replies from: timtyler, So8res
↑ comment by So8res · 2013-09-06T15:38:02.576Z · LW(p) · GW(p)
That's why I said "one or two orders of magnitude".
That's not the part of your post I was criticizing. I was criticizing this:
And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
Which doesn't seem to be a good model of how Google servers work.
Show me where I expressed a confidence level in that post.
Confidence in English can be expressed non-numerically. Here's a few sentences that seemed brazenly overconfident to me:
I know when the singularity will occur
(Sensationalized title.)
I can give you 2.3 bits of further information on when the Singularity will occur
(The number of significant digits you're counting on your measure of transmitted information implies confidence that I don't think you should possess.)
So the first bootstrapping AI will be created at Google. It will be designed to use Google's massive distributed server system. And they will run it between midnight and 5AM Pacific time, when the load on those servers is smallest.
(I understand that among Bayesians there is no certainty, and that a statement of fact should be taken as a statement of high confidence. I did not take this paragraph to express certainty: however, it surely seems to express higher confidence than your arguments merit.)
One variant of the "foom" argument is that software that is "about as intelligent as a human" and runs on a desktop can escape ... If the software can't grab many more computational resources than it was meant to run with, because those resources don't exist, that means it has to foom on raw intelligence ... A small AI needs to be written in a much more clever manner ...
Did you even read my counter-argument?
It seems to me like an AI with all of Google's servers available is likely to find the small-AI faster than a team of human researchers: it already has extraordinary computing power, and it's likely to have insights that humans are incapable of.
I concede that a large-AI could foom slower than a small-AI, if decreasing resource usage is harder than resource acquisition. You haven't supported this (rather bold) claim. Scaling a program up is difficult. So is acquiring more servers. So is optimizing a program to run on less resources. Fooming is hard no matter how you do it. Your argument hinges upon resource-usage-reduction being far more difficult than scaling, which doesn't seem obvious to me.
But suppose that I accept it: The Google AI still brings about a foom earlier than it would have come otherwise. A large-AI seems more capable of finding a small-AI (it has some first-hand AI insights, lots of computing power, and a team of Google researchers on its side) than an independent team of humans.
A more important implication is that this scenario decreases the possibility of FOOM
I don't buy it. At best, it doesn't foom as fast as a small-AI could. Even then, it still seems to drastically increase the probability of a foom.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2013-09-06T20:01:38.193Z · LW(p) · GW(p)
The confidence I expressed linguistically was to avoid making the article boring. It shouldn't matter to you how confident I am anyway. Take the ideas and come up with your own probabilities.
The key point, as far as I'm concerned, is that an AI built by a large corporation for a large computational grid doesn't have this easy FOOM path open to it: Stupidly add orders of magnitude of resources; get smart; THEN redesign self. So the size of the entity that builds the first AI is a crucial variable in thinking about foom scenarios.
I consider it very possible that the probability distribution of dollars-that-will-be-spent-to-build-the-first-AI follows a power law, and hence is dominated by large corporations, so that scenarios involving them should have more weight in your estimations than scenarios involving lone-wolf hackers, no matter how many of those hackers there are.
Scaling a program up is difficult. So is acquiring more servers. So is optimizing a program to run on less resources. Fooming is hard no matter how you do it. Your argument hinges upon resource-usage-reduction being far more difficult than scaling
I do think resource-usage reduction is far more difficult than scaling. The former requires radically new application-specific algorithms; the latter uses general solutions that Google is already familiar with. In fact, I'll go out on a limb here and say I know (for Bayesian values of the word "know") that resource-usage reduction is far more difficult than scaling. Scaling is pretty routine and happens on a continual basis for every major website & web application. Reducing the order of complexity of an algorithm is something that happens every 10 years or so, and is considered publication-worthy (which scaling is not).
My argument has larger consequences (greater FOOM delay) if this is true, but it doesn't depend on it to imply some delay. The big AI has to scale itself down a very great deal simply to be as resource-efficient as the small AI. After doing so, it is then in exactly the same starting position as the small AI. So foom is delayed by however long it takes a big AI to scale itself down to a small AI.
But suppose that I accept it: The Google AI still brings about a foom earlier than it would have come otherwise.
Yes, foom at an earlier date. But a foom with more advance warning, at least to someone.
A large-AI seems more capable of finding a small-AI (it has some first-hand AI insights, lots of computing power, and a team of Google researchers on its side) than an independent team of humans.
No; the large AI is the first AI built, and is therefore roughly as smart as a human, whether it is big or small.
comment by private_messaging · 2013-09-06T09:27:44.851Z · LW(p) · GW(p)
Well, it's more plausible than the bulk of speculations you'd see, I'll give you that, but it is still rather improbable. No, they do not have 2 of the 4 smartest executives; that's highly implausible, and if you include Thiel you ought to include dozens and dozens of others. Nor do they have a particularly high fraction of the world's most intelligent people.
What I envision is that Google is uniquely poised to create an AI that primarily relies on human knowledge, being capable of seemingly superhuman achievement while only requiring subhuman general problem solving capacity. I.e., the very stereotype of uncreative artificial intelligence: Watson on steroids; an automatic plagiarist of ideas.
More interesting (but also less likely) possibilities ought to benefit less from the availability of an utterly massive dataset. After all, we humans grow up with a much smaller data set.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2013-09-06T20:22:29.418Z · LW(p) · GW(p)
No, they do not have 2 of the 4 smartest executives; that's highly implausible, and if you include Thiel you ought to include dozens and dozens of others.
I realize that if you administered IQ tests to all the executives in Silicon Valley, you would likely find many others who would score higher. This is shorthand for "2 of the 4 smartest top-level executives", which was my original wording, but which I removed because once you begin introducing qualifications, people start to think you're being precise, and misunderstand you even worse than before.
Then I'd have to start talking about what "smart" means, and explain that actually I don't believe that "smarter than" is a useful concept for people on that level, and we'd be here all day.
I still doubt you'd find dozens and dozens of executives "smarter than" Thiel. Thiel is pretty damn smart. Aside from my own observations, that's what Mike Vassar told me. I expect he's had more facetime with Thiel than the entire rest of LessWrong combined (and his judgement is considerably better than that of the entire rest of LessWrong combined, for naive "combine" algorithms).
What I envision is that Google is uniquely poised to create an AI that primarily relies on human knowledge, being capable of seemingly superhuman achievement while only requiring subhuman general problem solving capacity.
"Human knowledge" and "general problem solving capacity" are both sufficiently mysterious phrases that you can push content back and forth between them as you like, to reach whatever conclusion you please.
Replies from: private_messaging
↑ comment by private_messaging · 2013-09-07T05:44:51.408Z · LW(p) · GW(p)
I still doubt you'd find dozens and dozens of executives "smarter than" Thiel. Thiel is pretty damn smart. Aside from my own observations, that's what Mike Vassar told me.
This is really silly. You can't predict a winner in some sort of mental contest by just going, "ohh, I talked with that guy, that guy's smart." You'd probably do no better than chance if nobody's seriously stupid. Mostly, when someone like this says that someone else is smart, that merely means it is the most useful thing to say / the views align the most. Plus, given the views on intelligence rating expressed here, it's a way to fake-signal "I am really smart, too - I have to be to recognize others' intelligence" (the way to genuinely signal that is to have some sort of achievement from which high intelligence can be inferred with a sufficiently low false positive rate).
(That being said, Thiel is pretty damn smart based on his chess performance. But he doesn't have much experience in the technical subjects, compared to many, many others, and expecting him to outperform them is as silly as expecting someone of similar intelligence who never played chess to beat Thiel at chess. Training does matter.)
"Human knowledge" and "general problem solving capacity" are both sufficiently mysterious phrases that you can push content back and forth between them as you like, to reach whatever conclusion you please.
I did outline exactly what I mean. Look at the superhuman performance of Watson. Make that a tad more useful.
comment by timtyler · 2013-09-06T17:37:50.848Z · LW(p) · GW(p)
Well, first, let's just admit this: The race to win the Singularity is over, and Google has won.
I checked with Google Trends. It seems as though they may yet face some competition. Also, 15 years of real time is quite a bit of internet time. Previously it looked as though Microsoft had won - and IBM had won.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2013-09-06T21:28:10.501Z · LW(p) · GW(p)
Tim, you don't seriously think that Facebook is more likely to build an AI because more people google for Facebook than for Google, do you? I think that's what you just said.
It looked like Microsoft had won the OS market (which it still mostly has). It looked like IBM had won the PC market. Before that, it looked like Apple had won the PC market. Before that, it looked like IBM had won the minicomputer market.
But at no time did any of those companies act like their goal was to develop artificial intelligence. They didn't even build out the hardware infrastructure, let alone develop expertise in AI and collect all the relevant data. Nor did any of them ever have executives capable of the same level of vision as Larry & Sergei. Nor, I think, did they ever have profitability approaching Google's. Not a single one of the many points I listed applied to any of those companies at any time.
Replies from: Houshalter
↑ comment by Houshalter · 2013-09-20T18:06:05.749Z · LW(p) · GW(p)
Facebook has the potential to earn a ton of revenue, if they aren't doing so already. Millions of people use the site and spend a decent amount of time there, which gives them tons of ad impressions. And they are currently recruiting the best machine learning experts through Kaggle competitions.
IBM is currently working on Watson, which is the most impressive AI that I know of. They also built Deep Blue.
comment by Darklight · 2013-09-06T20:54:56.134Z · LW(p) · GW(p)
If I had to take a gamble on what organization was best primed to cause the Singularity, I'd probably also pick Google, mainly because they seem to be gathering the world's best machine learning researchers together. They've already hired Geoffrey Hinton from Toronto and Andrew Ng from Stanford. Both of these researchers are considered among the foremost minds in machine learning, and both have been working on Deep Neural Networks that have shown a lot of promise with pattern recognition problems like object recognition and speech recognition. Last I heard, Google managed to improve the performance of their speech recognition software by something like 20% by switching to such neural nets.
It's my own opinion that machine learning is the key to AGI, because any general intelligence needs to be able to learn about things it hasn't been programmed to know already. That adaptability, the ability to change parameters or code in response to new information, is, I think, an essential element that separates a mere Optimization Algorithm from something capable of developing General Intelligence. Also, this is just a personal intuition, but I think that being able to reason effectively about the world requires being able to semantically represent things like objects and concepts, which is something that artificial neural networks can hypothetically do, while things like expert systems tend to just shuffle around symbols syntactically without really understanding what they are.
The bottom-up approach is also likely to reach superhuman intelligence levels sooner than a top-down approach to A.I., as all we have to do is scale up artificial neural networks that copy the human brain's architecture, whereas it seems like a top-down approach will have to come up with a lot of new math before it can really make progress there. But then, I'm a connectionist, so I'm kinda biased.
Perhaps one interesting thought is that if the first superintelligent A.I. is actually an artificial neural network, it'll probably be more "human-like", or at least more similar to an evolved intelligence, than if it were created by top-down A.I. Not saying that that gets rid of the Orthogonality Thesis, but it might mean that an artificial neural network based A.I. is more likely to land in the part of the mindspace that humans tend to fall into, because of similar architectures of sentience. Maybe.
comment by Alejandro1 · 2013-09-06T22:54:02.504Z · LW(p) · GW(p)
I do believe "Google wins" is a likely scenario—more likely than "X wins" for any other single value of X.
This is something of a non-sequitur. "Google wins" might be more likely than any other "X wins" with an X that we can name today, and still very unlikely in absolute terms. Like a lottery with a thousand tickets where only one person has bought ten of them and all the others go to one person each. Let us be precise: conditional on there being a singularity before 2040, what is your probability that Google initiates it?
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2013-09-07T06:29:28.257Z · LW(p) · GW(p)
First, take this 2-question quiz:
Before reading this post, had you already recognized the impact that the size of the organization that builds the first AI has on its probability of FOOMing, formed an estimate of the likelihood of the first AI being big or small, and factored that into your estimate for the expected speed of a FOOM?
Did you upvote this post?
If the answer to the first question is yes, then I might try to answer a variant of your question.
If your answer to the first question is no, and your answer to the second question is no because you think this is not something you need to factor into that probability, then it would be counterproductive for me to answer questions on tangential issues.
If your answer to the first question is no, and your answer to the second question is no because you're waiting for more specifics, why should I bother, since you've already decided my answer would be worth nothing to you?
A relevant quote from Site Mechanics:
Please do not vote solely based on how much you agree or disagree with someone's conclusions. A better heuristic is to vote based on how much a comment or post improves the accuracy of your map. For example, a comment you agree with that doesn't add to the discussion should be voted down or left alone. A comment you disagree with that raises important points should be voted up.
So the specific probability I assign to Google in particular winning is irrelevant to voting, and IMHO something of a digression from the main point of the post. If you care, make up your own probability. That's what you should be doing anyway. I've given you many relevant facts.
A more-important question is, What is the probability distribution over "capital investment that will produce the first average-human-level AI"? I expect that the probabilities will be dominated by large investments, because the probability distribution over "capital investment that will produce the first X" appears to me to be dominated by large investments, for similarly-ambitious X such as "spaceflight to the moon" or "sequence of the human genome". A very clever person could have invented low-cost genome sequencing in the 1990s and sequenced the genome him/herself. But no very clever person did.
comment by Houshalter · 2013-09-20T17:48:58.745Z · LW(p) · GW(p)
I disagree with your premises. The hard part of AI is the algorithm itself. Google's supercomputers would be an advantage if you had an AI already and wanted it to take off as fast as possible. Maybe an AI running on a Google computer would take off in a day instead of a month. Not to mention how fast computers will be in 30 years, or what the distribution of computing power will then be. Thirty years ago Google wasn't even around, and computing power has grown enormously since then.
The amount of data available is an even smaller advantage. I don't really think it's an advantage at all. The internet has virtually unlimited data if you need it, and there is no reason an AI wouldn't be just as smart working with a very limited data set or just on some optimization problems with no external data at all.
Google doesn't make up the majority of AI research. A good portion sure, but not the majority. Further I don't think the approaches they are investing in are likely to lead to AGI.
comment by Pablo (Pablo_Stafforini) · 2013-09-07T03:28:53.794Z · LW(p) · GW(p)
Yudkowsky said it first:
Google is your friend. Trust in Google. Google is your Extended Long-Term Memory. Google is the Source of All Knowledge. Have you accepted Google into your heart?
comment by Bugmaster · 2013-09-06T21:25:26.072Z · LW(p) · GW(p)
...if we suppose that sometime in the next 30 years, an artificial intelligence will begin bootstrapping its own code and explode into a super-intelligence, I can give you 2.3 bits of further information on when the Singularity will occur.
Yes. If we begin by supposing random things out of the blue, there's no end to the cool conclusions we can draw!
comment by wwa · 2013-09-07T02:16:47.072Z · LW(p) · GW(p)
The race to win the Singularity is over, and Google has won
Counter-argument: the NSA has a track record of having math decades ahead of what is public. Also, quantum computing should be within reach in 30 years... that might be the NSA as well, since it'd make a perfect crypto cracker.