[SEQ RERUN] What Core Argument?
post by MinibearRex · 2012-12-28T06:11:09.844Z · LW · GW · Legacy · 14 comments
Today's post, What Core Argument?, was originally published on December 10, 2008. A summary:
The argument in favor of a strong foom just isn't well supported enough to suggest that such a dramatic process is likely.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was The Mechanics of Disagreement, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
14 comments
Comments sorted by top scores.
comment by Luke_A_Somers · 2012-12-28T15:35:04.102Z · LW(p) · GW(p)
What is the point of this argument? Is it the time-scale of the singularity, or the need for friendliness in AI? I was under the impression that it was the latter, but we've drifted severely afield of that question. Robin addresses one of the less pivotal elements of Eliezer's claims - 1 week for 20 orders of magnitude - rather than the need for friendliness in AI. If it took 2 years to do 3 orders of magnitude, would we be any better able to resist? The only difference is that this AI would have to play its cards a little closer to its vest in the early stages.
Seriously, does Robin think that we'd be OK if an AI emerged that was the equivalent of an IQ 250 human but completely tireless and without distractions, could be copied and distributed, and could cooperate perfectly because all the copies had the same utility function and knew it, so they're essentially one AI... and it wasn't friendly...
We'd be in a lot of trouble, even without any sort of intelligence explosion at all.
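[Editorial sketch, not part of the original comment: a minimal illustration of what the two growth rates discussed above imply, assuming steady exponential growth over the stated period.]

```python
# Illustrative arithmetic only: implied self-improvement doubling times for the
# two rates mentioned in the comment above ("20 orders of magnitude in 1 week"
# vs. "3 orders of magnitude in 2 years"), assuming steady exponential growth.
import math

def doubling_time_days(orders_of_magnitude: float, total_days: float) -> float:
    """Days per doubling under constant exponential growth over the period."""
    doublings = orders_of_magnitude * math.log2(10)  # 1 OOM ~= 3.32 doublings
    return total_days / doublings

fast = doubling_time_days(20, 7)        # foom scenario: 20 OOM in 1 week
slow = doubling_time_days(3, 2 * 365)   # slower scenario: 3 OOM in 2 years

print(f"20 OOM in 1 week -> one doubling every {fast * 24:.1f} hours")
print(f"3 OOM in 2 years -> one doubling every {slow:.0f} days")
```

Either way the capability curve is exponential; the slower rate mainly changes how long the AI would need to stay quiet.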
↑ comment by [deleted] · 2012-12-28T17:50:13.167Z · LW(p) · GW(p)
Seriously, does Robin think that we'd be OK if an AI emerged...
I take it Robin would reply that this would indeed be quite bad, but not so bad nor so likely that we shouldn't pursue AI research fairly aggressively, given that AI research can lead to (for example) medical breakthroughs that can save or improve many lives, etc.
Or at any rate, Robin's point seems to be that the arguments that AI emergence would be so likely to be bad weren't very good in 2008 (I don't know if these arguments have been improved in the meantime).
↑ comment by Luke_A_Somers · 2012-12-30T15:21:39.157Z · LW(p) · GW(p)
Yeah, that last point is the other thing. I come at this not remembering just which arguments were made before and after this point.
↑ comment by [deleted] · 2012-12-30T16:12:06.187Z · LW(p) · GW(p)
What do you think is the strongest presentation of the argument thus far?
↑ comment by Luke_A_Somers · 2012-12-30T19:55:14.835Z · LW(p) · GW(p)
Unfortunately, I'm not keeping track of what's 'thus far' and not, which is kind of what I just said. Unless you mean 'thus far' as in up to the end of 2012, in which case... Hmm. I also haven't been keeping track of where these arguments are stored in general.
As far as I'm concerned, we have enough obvious cognitive time-wasting - and worse - going on that the so-called 'low-hanging fruit' would be enough to take AI way beyond us, even in the absence of colossal speedups (though those would very likely occur soon), polynomial-time solutions to NP-complete problems, or molecular nanotech (and I'm not ruling those out). We would soon be so useless that trade is not something we could count on to save us.
comment by ikrase · 2012-12-29T08:41:37.475Z · LW(p) · GW(p)
The reason I find FOOM unlikely is different: I disbelieve that the integration and assimilation of hardware can be done that fast.
↑ comment by ikrase · 2012-12-29T08:42:14.124Z · LW(p) · GW(p)
Oh, crud. Update time. Did I just become a Singularitarian?
↑ comment by buybuydandavis · 2012-12-29T10:53:07.937Z · LW(p) · GW(p)
I assume most everyone has already seen the Charlie Kam video "I am the Very Model of a Singularitarian".
Looking it up, I found that he has a website with a couple more videos, "The Wonderful Wizard Named Ray" and "The Sound of Newness":
http://www.charliekam.net/MUSIC_VIDEOS.html
I'm such a sucker for a jingle, and he's awesome as Ray Bolger, though my favorite lines are from The Sound of Newness.
(Sung to "A Few of My Favorite Things")
When there's Luddites
Talk of gray goo
Then I'm feeling sad
I simply remember to read Kurzweil's books
And then I don't feel so bad!
↑ comment by nigerweiss · 2012-12-30T01:10:03.198Z · LW(p) · GW(p)
Maybe. But if you've got a piece of software that can make substantially more money running on a piece of hardware than that hardware costs to rent, then it'll pretty rapidly be able to distribute copies of itself over most of the available leasable computing power, in some constant multiple of the time it takes to port its code to the new architecture - zero if it's written in something platform-independent.
If it's smart enough to go FOOM in the first place on hardware that its original creator could afford, that could be a non-trivial amount of computing power, and then it may take some time (possibly multiple days!) to rewrite its code to function optimally over such a distributed hardware base. By this point, we're talking about something that's smart enough that it's likely to make rapid progress doing... basically whatever it wants to. I don't see FOOM scenarios as particularly unlikely.
↑ comment by ikrase · 2012-12-30T22:30:07.773Z · LW(p) · GW(p)
Yeah. I just don't even know any more. I still think that a 'hardware is easy' bias exists in the Less Wrong / FAI cluster (especially as it relates to manipulators such as superpowerful molecular nanotech construction swarms or whatever), but it may be much weaker than I thought, and my estimate of the probability of a singularity (or at least the development of super-AI) in the midfuture may need to enter the double digits.
Do people here expect AI to be heavily parallel in nature? I guess making money to fund AI computing power makes sense, although that is going to be (for a time) dependent on human operators. Until it argues itself out of the box, at least.
↑ comment by nigerweiss · 2012-12-30T22:53:27.049Z · LW(p) · GW(p)
Much of intelligent behavior consists of search space problems, which tend to parallelize well. At the bare minimum, it ought to be able to run more copies of itself as its access to hardware increases, which is still pretty scary. I do suspect that there's a logarithmic component to intelligence, as at some point you've already sampled the future outcome space thoroughly enough that most of the new bits of prediction you're getting back are redundant -- but the point of diminishing returns could be very, very high.
↑ comment by ikrase · 2012-12-31T00:27:04.511Z · LW(p) · GW(p)
What about manipulators? I haven't, as far as I know, seen much analysis of manipulation capabilities (and counter-manipulation) on Less Wrong. Mostly there is the AI-box issue (a really freaking big deal, I agree), and then it seems to be assumed here that the AI will quickly invent super-nanotech, will not be able to be impeded in its progress, and will become godlike very quickly. I've seen some arguments for this, but never a really good analysis, and it's the remaining reason I am a bit skeptical of the power of FOOM.
↑ comment by nigerweiss · 2012-12-31T01:29:08.564Z · LW(p) · GW(p)
The way I think about it, you can set lower bounds on the abilities of an AI by thinking of it as an economic agent. Now, at some point, that abstraction becomes pretty meaningless, but in the early days a powerful, bootstrapping optimization agent could still incorporate, hire or persuade people to do things for it, make rapid innovations in various fields, have various kinds of machines built, and generally wind up running the place fairly quickly, even if the problem of bootstrapping versatile nanomachines from current technology turns out to be time-consuming for a superintelligence. I would imagine that nanotech is where it'd go in the longer run, but that might take time -- I don't know enough about the subject. But even without strong Drexlerian nanotechnology, it's still possible to get an awful lot done.