The Curve of Capability

post by rwallace · 2010-11-04T20:22:48.876Z · LW · GW · Legacy · 266 comments

or: Why our universe has already had its one and only foom

In the late 1980s, I added half a megabyte of RAM to my Amiga 500. A few months ago, I added 2048 megabytes of RAM to my Dell PC. The latter upgrade was four thousand times larger, yet subjectively they felt about the same, and in practice they conferred about the same benefit. Why? Because each was a factor of two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.

That's a pretty important rule, so let's test it by looking at some more examples.

How does the performance of a chess program vary with the amount of computing power you can apply to the task? The answer is that each doubling of computing power adds roughly the same number of Elo rating points. The curve must flatten off eventually (after all, the computation required to fully solve chess is finite, albeit large), yet it remains remarkably constant over a surprisingly wide range.
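Stated as a formula (a sketch of the claimed relationship; the coefficients are placeholders, not measured values):

$$\mathrm{Elo}(C) \;\approx\; a + b \log_2 C$$

where C is the computing power applied and b is the roughly constant rating gain per doubling.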

Is that idiosyncratic to chess? Let's look at Go, a more difficult game that must be solved by different methods, where the alpha-beta minimax algorithm that served chess so well breaks down. For a long time, the curve of capability also broke down: in the 90s and early 00s, the strongest Go programs were based on hand-coded knowledge, such that some of them literally did not know what to do with extra computing power; additional CPU speed resulted in zero improvement.

The breakthrough came in the second half of last decade, with Monte Carlo tree search algorithms. It wasn't just that they provided a performance improvement; it was that they were scalable. Computer Go is now on the same curve of capability as computer chess: whether measured on the Elo or the kyu/dan scale, each doubling of power gives a roughly constant rating improvement.
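To make the scalability claim concrete, here is a minimal Monte Carlo tree search (UCT) sketch in Python. It plays a toy take-away game rather than Go, and the game rules, playout count and exploration constant are illustrative assumptions, not details of any actual Go engine; the point is only that playing strength is governed by the number of random playouts, so extra computing power translates directly into better move choices.

```python
# Minimal UCT (Monte Carlo tree search) sketch on a toy take-away game:
# players alternately remove 1-3 stones; whoever takes the last stone wins.
# Illustrative only - not the algorithm of any particular Go program.
import math
import random

LIMIT = 3

def legal_moves(stones):
    return list(range(1, min(LIMIT, stones) + 1))

class Node:
    def __init__(self, stones, to_move, parent=None, move=None):
        self.stones, self.to_move = stones, to_move
        self.parent, self.move = parent, move
        self.children = []
        self.untried = legal_moves(stones)
        self.wins, self.visits = 0.0, 0

def rollout(stones, to_move):
    """Play uniformly random moves to the end; return the winner's index."""
    while True:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return to_move            # the player who just moved took the last stone
        to_move = 1 - to_move

def best_uct(node, c=1.4):
    return max(node.children, key=lambda ch:
               ch.wins / ch.visits + c * math.sqrt(math.log(node.visits) / ch.visits))

def uct_move(stones, to_move=0, playouts=5000):
    root = Node(stones, to_move)
    for _ in range(playouts):
        node = root
        # 1. Selection: descend through fully expanded nodes
        while not node.untried and node.children:
            node = best_uct(node)
        # 2. Expansion: add one untried child, if any
        if node.untried:
            m = node.untried.pop()
            child = Node(node.stones - m, 1 - node.to_move, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from the new position
        if node.stones == 0:
            winner = 1 - node.to_move  # the previous mover took the last stone
        else:
            winner = rollout(node.stones, node.to_move)
        # 4. Backpropagation: credit each node from the viewpoint of the player who moved into it
        while node is not None:
            node.visits += 1
            if node.parent is not None and winner == node.parent.to_move:
                node.wins += 1
            node = node.parent
    # Choose the most-visited move; reliability improves with more playouts
    return max(root.children, key=lambda ch: ch.visits).move

print(uct_move(10))   # optimal play takes 2 stones (10 mod 4); more playouts make this more reliable
```

Doubling the `playouts` parameter is the code-level analogue of the doubling of computing power discussed above.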

Where do these doublings come from? Moore's Law is driven by improvements in a number of technologies, one of which is chip design. Each generation of computers is used, among other things, to design the next generation. Each generation needs twice the computing power of the last generation to design in a given amount of time.

Looking away from computers to another of the big success stories of 20th-century technology, space travel, from Goddard's first crude liquid-fuel rockets, to the V-2, to Sputnik, to the half a million people who worked on Apollo, we again find that successive qualitative improvements in capability required order-of-magnitude after order-of-magnitude increases in the energy a rocket could deliver to its payload, with corresponding increases in the labor input.

What about the nuclear bomb? Surely that at least was discontinuous?

At the simplest physical level it was: nuclear explosives have six orders of magnitude more energy density than chemical explosives. But what about the effects? Those are what we care about, after all.

The death tolls from the bombings of Hiroshima and Nagasaki have been estimated at 90,000-166,000 and 60,000-80,000 respectively. The toll from the firebombing of Hamburg in 1943 has been estimated at 42,600; that from the firebombing of Tokyo on the 10th of March 1945 alone has been estimated at over 100,000. So the actual effects were in the same league as other major bombing raids of World War II. To be sure, the destruction was now being carried out with single bombs, but what of it? The production of those bombs took the labor of 130,000 people, the industrial infrastructure of the world's most powerful nation, and $2 billion of investment in 1945 dollars; nor did even that investment at that time gain the US the ability to produce additional nuclear weapons in large numbers at short notice. The construction of the massive nuclear arsenals of the later Cold War took additional decades.

(To digress for a moment from the curve of capability itself, we may also note that destructive power, unlike constructive power, is purely relative. The death toll from the Mongol sack of Baghdad in 1258 was several hundred thousand; the total from the Mongol invasions was several tens of millions. The raw numbers, of course, do not fully capture the effect on a world whose population was much smaller than today's.)

Does the same pattern apply to software as hardware? Indeed it does. There's a significant difference between the capability of a program you can write in one day versus two days. On a larger scale, there's a significant difference between the capability of a program you can write in one year versus two years. But there is no significant difference between the capability of a program you can write in 365 days versus 366 days. Looking away from programming to the task of writing an essay or a short story, a textbook or a novel, the rule holds true: each significant increase in capability requires a doubling, not a mere linear addition. And if we look at pure science, continued progress over the last few centuries has been driven by exponentially greater inputs both in number of trained human minds applied and in the capabilities of the tools used.

If this is such a general law, should it not apply outside human endeavor? Indeed it does. From protozoa which pack a minimal learning mechanism into a single cell, to C. elegans with hundreds of neurons, to insects with thousands, to vertebrates with millions and then billions, each increase in capability takes an exponential increase in brain size, not the mere addition of a constant number of neurons.

But, some readers are probably thinking at this point, what about...

... what about the elephant at the dining table? The one exception that so spectacularly broke the law?

Over the last five or six million years, our lineage upgraded computing power (brain size) by about a factor of three, and upgraded firmware to an extent that is unknown but was surely more like a percentage than an order of magnitude. The result was not a merely corresponding improvement in capability. It was a jump from almost no symbolic intelligence to fully general symbolic intelligence, which took us from a small niche to mastery of the world. How? Why?

To answer that question, consider what an extraordinary thing is a chimpanzee. In raw computing power, it leaves our greatest supercomputers in the dust; in perception, motor control, spatial and social reasoning, it has performance our engineers can only dream about. Yet even chimpanzees trained in sign language cannot parse a sentence as well as the Infocom text adventures that ran on the Commodore 64. They are incapable of arithmetic that would be trivial with an abacus let alone an early pocket calculator.

The solution to the paradox is that a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided.

(Is there an explanation why this state of affairs came about in the first place? I think there is - in a nutshell, most conscious observers should expect to live in a universe where it happens exactly once - but that would require a digression into philosophy and anthropic reasoning, so it really belongs in another post; let me know if there's interest, and I'll have a go at writing that post.)

Can such a thing happen again? In particular, is it possible for AI to go foom the way humanity did?

If such lopsidedness were to repeat itself... well, even then, the answer is probably no. After all, an essential part of what we mean by foom in the first place - why it's so scarily attractive - is that it involves a small group accelerating in power away from the rest of the world. But the reason that happened in human evolution is that genetic innovations mostly don't transfer across species. The dolphins couldn't say: hey, these apes are on to something, let's snarf the code for this symbolic intelligence thing, oh, and the hands too, we're going to need manipulators for the toolmaking application, or maybe octopus tentacles would work better in the marine environment. Human engineers carry out exactly this sort of technology transfer on a routine basis.

But it doesn't matter, because the lopsidedness is not occurring. Obviously computer technology hasn't lagged in symbol processing - quite the contrary. Nor has it really lagged in areas like vision and pattern matching - a lot of work has gone into those, and our best efforts aren't clearly worse than would be expected given the available development effort and computing power. And some of us are making progress on actually developing AGI - very slow, as would be expected if the theory outlined here is correct, but progress nonetheless.

The only way to create the conditions for any sort of foom would be to shun a key area completely for a long time, so that ultimately it could be rapidly plugged into a system that is very highly developed in other ways. Hitherto no such shunning has occurred: every even slightly promising path has had people working on it. I advocate continuing to make progress across the board as rapidly as possible, because every year that drips away may be an irreplaceable loss; but if you believe there is a potential threat from unfriendly AI, then such continued progress becomes the one reliable safeguard.

 

266 comments

Comments sorted by top scores.

comment by JoshuaZ · 2010-11-05T01:56:47.661Z · LW(p) · GW(p)

While it is true that exponential improvements in computer speed and memory often have the sort of limited impact you are describing, algorithmic improvements are frequently much more helpful. When RSA-129 was published as a factoring challenge, it was estimated that even assuming Moore's law it would take a very long time to factor (the classic estimate was that it would take on the order of 10^15 years assuming that one could do modular arithmetic operations at one per nanosecond; assuming a steady progress of Moore's law one got an estimate in the range of hundreds of years at minimum). However, it was factored only a few years later because new algorithms made factoring much, much easier. In particular, the quadratic sieve and the number field sieve were both subexponential. The analogy here is roughly to the jump in Go programs that occurred when the new Monte Carlo methods were introduced.
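For reference (standard background, not something stated in the parent comment), the heuristic running times of those methods for factoring an integer n are:

$$\text{trial division: } \; n^{1/2} = \exp\!\left(\tfrac12 \ln n\right)$$
$$\text{quadratic sieve: } \; L_n\!\left[\tfrac12, 1\right] = \exp\!\left((1+o(1))\sqrt{\ln n\,\ln\ln n}\right)$$
$$\text{general number field sieve: } \; L_n\!\left[\tfrac13, (64/9)^{1/3}\right] = \exp\!\left(\left((64/9)^{1/3}+o(1)\right)(\ln n)^{1/3}(\ln\ln n)^{2/3}\right)$$

The sieve algorithms are subexponential in the bit length of n, which is what collapsed the original estimates.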

An AI that is a very good mathematician, and can come up with lots of good algorithms, might plausibly go FOOM. For example, if it has internet access and finds a practical polynomial-time factoring algorithm, it will control much of the internet quite quickly. This is not the only example of this sort of problem. Indeed, I place most of the probability mass of an AI going FOOM on the chance that P=NP and that there's an actually practical fast way of solving NP-complete problems. (ETA: Although note that prior discussion with Cousin It has convinced me that the barrier that P != NP would pose might be much smaller than I'd have estimated earlier. But the upshot is that if P=NP then FOOM really does seem plausible.)

I'll breathe much more easily if we show that P != NP.

The upshot is that improvement in raw computation is not the only thing that can potentially lead to FOOMing.

Replies from: paulfchristiano, PhilGoetz, Liron, rwallace
comment by paulfchristiano · 2010-11-05T19:02:44.978Z · LW(p) · GW(p)

If you believe that FOOM is comparably probable to P = NP, I think you should be breathing pretty easily. Based on purely mathematical arguments it would be extraordinarily surprising if P = NP (and even more surprising if SAT had a solution with low degree polynomial running time), but even setting this aside, if there were a fast (say, quadratic) algorithm for SAT, then it would probably allow a smart human to go FOOM within a few weeks. Your concern at this point isn't really about an AI, it's just that a universe where worst-case SAT can be practically solved is absolutely unlike the universe we are used to. If anyone has doubts about this assertion I would be happy to argue, although I think it's an argument that has been made on the internet before.

So I guess maybe you shouldn't breathe easy, but you should at least have a different set of worries.

In reality I would bet my life against a dollar on the assertion P != NP, but I don't think this makes it difficult for FOOM to occur. I don't think the possibility of a FOOMing AI existing on modern computers is even worth debating, it's just how likely humans are to stumble upon it. If anyone wants to challenge the assertion that a FOOMing AI can exist, as a statement about the natural world rather than a statement about human capabilities, I would be happy to argue with that as well.

As an aside, it seems likely that a reasonably powerful AI would be able to build a quantum computer good enough to break most encryption used in practice in the near future. I don't think this is really a serious issue, since breaking RSA seems like about the least threatening thing a reasonably intelligent agent could do to our world as it stands right now.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-05T20:59:05.313Z · LW(p) · GW(p)

If you believe that FOOM is comparably probable to P = NP, I think you should be breathing pretty easily. Based on purely mathematical arguments it would be extraordinarily surprising if P = NP (and even more surprising if SAT had a solution with low degree polynomial running time),

Yes, I'm familiar with these arguments. I find that suggestive but not nearly as persuasive as others seem to. I estimate about a 1% chance that P=NP is provable in ZFC, around a 2% chance that P=NP is undecidable in ZFC (this is a fairly recent update. This number used to be much smaller. I am willing to discuss reasons for it if anyone cares.) and a 97% chance that P != NP. Since this is close to my area of expertise, I think I can make these estimates fairly safely.

but even setting this aside, if there were a fast (say, quadratic) algorithm for SAT, then it would probably allow a smart human to go FOOM within a few weeks.

Absolutely not. Humans can't do good FOOM. We evolved in circumstances where we very rarely had to solve NP-hard or NP-complete problems, and our self-modification system is essentially unconscious. There's little historical evolutionary incentive to take advantage of fast SAT solving. If one doesn't believe this, just look at how much trouble humans have doing all sorts of very tiny instances of simple computational problems, like multiplying small numbers or factoring small integers (say, under 10 digits).

In reality I would bet my life against a dollar on the assertion P != NP,

Really? In that case we have a sharply different probability estimate. Would you care to make an actual bet? Is it fair to say that you are putting an estimate of less than 10^-6 that P=NP?

In reality I would bet my life against a dollar on the assertion P != NP, but I don't think this makes it difficult for FOOM to occur. I don't think the possibility of a FOOMing AI existing on modern computers is even worth debating, it's just how likely humans are to stumble upon it. If anyone wants to challenge the assertion that a FOOMing AI can exist, as a statement about the natural world rather than a statement about human capabilities, I would be happy to argue with that as well.

As an aside, it seems likely that a reasonably powerful AI would be able to build a quantum computer good enough to break most encryption used in practice in the near future. I don't think this is really a serious issue, since breaking RSA seems like about the least threatening thing a reasonably intelligent agent could do to our world as it stands right now.

If an AI can make quantum computers that can do that, then it has so much matter-manipulation ability that it has likely already won (although I doubt that even a reasonably powerful AI could necessarily do this, simply because quantum computers are so finicky and unstable).

But if P=NP in a practical way, RSA cracking is just one of the many things the AI will have fun with. Many crypto systems, not just RSA, will be vulnerable. The AI might quickly control many computer systems, increasing its intelligence and data input drastically. Many sensitive systems will likely fall under its control. And if P=NP then the AI also has shortcuts to all sorts of other things that could help it, like designing new circuits for it to use (and chip factories are close to automated at this point), and lots of neat biological tricks (protein folding becomes a lot easier, although there seems to be some disagreement about which computational class general protein folding falls into). And of course, all those secure systems that are on the net but shouldn't be become far more vulnerable (nuclear power plants, particle accelerators, hydroelectric dams), as do lots of commercial and military satellites. And those are just a handful of the things that my little human mind comes up with without being very creative. Harry James Potter Evans-Verres would do a lot better. (Incidentally, I didn't remember how to spell his name, so I started typing in Harry James Potter to Google, and at "Harry James Pot" the third suggestion is for Evans-Verres. Apparently HPMoR is frequently googled.) And neither Harry nor I is as smart as a decently intelligent AI.

Replies from: paulfchristiano, paulfchristiano, Douglas_Knight
comment by paulfchristiano · 2010-12-21T23:38:56.774Z · LW(p) · GW(p)

I now agree that I was overconfident in P != NP. I was thinking only of failures where my general understanding of and intuition about math and computer science are correct. In fact most of the failure probability comes from the case where I (and most computer scientists) are completely off base and don't know at all what is going on. I think that worlds like this are unlikely, but probably not 1 in a million.

comment by paulfchristiano · 2010-11-06T02:08:52.806Z · LW(p) · GW(p)

We have very different beliefs about P != NP. I would be willing to make the following wager, if it could be suitably enforced with sufficiently low overhead. If a proof that P != NP reaches general acceptance, you will pay me $10000 with probability 1/1000000 (expectation $.01). If an algorithm provably solves 3SAT on n variables in time O(n^4) or less, I will pay you $1000000.

This bet is particularly attractive to me, because if such a fast algorithm for SAT appears I will probably cease to care about the million dollars. My actual probability estimate is somewhere in the ballpark of 10^-6, though it's hard to be precise about probabilities so small.

Perhaps it was not clear what I meant by an individual going FOOM, which is fair since it was a bit of a misuse. I mean just that an individual with access to such an algorithm could quickly amplify their own power and then exert a dominant influence on human society. I don't imagine a human will physically alter themselves. It might be entertaining to develop and write up a plan of attack for this contingency. I think the step I assume is possible that you don't is the use of a SAT solver to search for a compact program whose behavior satisfies some desired property, which can be used to better leverage your SAT solver.
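A minimal sketch of "search by SAT solver", assuming the third-party python-sat package (the three-variable formula here is an arbitrary stand-in for "a desired property"; the whole point of the discussion above is that scaling this style of search up to interesting objects, such as compact programs, is only cheap if SAT itself turns out to be easy):

```python
# Toy "find an object with a desired property" via a SAT solver.
# Assumes the third-party python-sat package (pip install python-sat).
from pysat.solvers import Glucose3

solver = Glucose3()
# Variables 1..3 are three boolean design choices; clauses encode the property.
solver.add_clause([1, 2])      # x1 or x2
solver.add_clause([-1, 3])     # (not x1) or x3
solver.add_clause([-2, -3])    # (not x2) or (not x3)

if solver.solve():
    print(solver.get_model())  # a satisfying assignment, e.g. [1, -2, 3]
else:
    print("no object with the desired property exists")
solver.delete()
```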

A similar thought experiment my friends and I have occasionally contemplated is: given a machine which can run any C program in exactly 1 second (or report an infinite loop), how many seconds would it take you to ?

Replies from: JoshuaZ, JoshuaZ
comment by JoshuaZ · 2010-11-12T01:28:40.826Z · LW(p) · GW(p)

Replying a second time to remind you of this subthread in case you still have any interest in making a bet. If we change it to degree 7 rather than degree 4 and changed the monetary aspects as I outlined I'm ok with the bet.

comment by JoshuaZ · 2010-11-06T17:48:15.395Z · LW(p) · GW(p)

We have very different beliefs about P != NP. I would be willing to make the following wager, if it could be suitably enforced with sufficiently low overhead. If a proof that P != NP reaches general acceptance, you will pay me $10000 with probability 1/1000000 (expectation $.01). If an algorithm provably solves 3SAT on n variables in time O(n^4) or less, I will pay you $1000000.

We may need to adjust the terms. The most worrisome parts of that bet are twofold: 1) I don't have 10^5 dollars, so on my end, paying $1000 with probability 1/100000, which has the same expectation, probably makes more sense. 2) I'm not willing to agree to that O(n^4), simply because there are many problems in P where our best known algorithm is much worse than that. For example, the AKS primality test is O(n^6), and deterministic Miller-Rabin might be O(n^4) but only if one makes strong assumptions corresponding to a generalized Riemann hypothesis.
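For concreteness, here is a minimal probabilistic Miller-Rabin sketch in Python (the deterministic, GRH-conditional variant referred to above instead tests every base up to roughly 2(ln n)^2; the point is just that the work grows with the number of digits of n, not with n itself):

```python
# Probabilistic Miller-Rabin primality test - an illustrative sketch only.
import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation: polynomial in the bit length of n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True                   # no witness found: n is (very probably) prime

print(is_probable_prime(2**61 - 1))   # True; a 19-digit Mersenne prime
```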

Perhaps it was not clear what I meant by an individual going FOOM, which is fair since it was a bit of a misuse. I mean just that an individual with access to such an algorithm could quickly amplify their own power and then exert a dominant influence on human society. I don't imagine a human will physically alter themselves. It might be entertaining to develop and write up a plan of attack for this contingency.

Not necessarily. While such an algorithm would be useful, many of the more effective uses would only last as long as one kept quiet about having such a fast SAT solver. So taking full advantage requires some subtlety.

I think the step I assume is possible that you don't is the use of a SAT solver to search for a compact program whose behavior satisfies some desired property, which can be used to better leverage your SAT solver.

There's a limit to how much you can do with this since general questions about properties of algorithms are still strongly not decidable.

comment by Douglas_Knight · 2010-11-05T21:20:26.022Z · LW(p) · GW(p)

I estimate about a 1% chance that P=NP is provable in ZFC, around a 2% chance that P=NP is undecidable in ZFC (this is a fairly recent update. This number used to be much smaller. I am willing to discuss reasons for it if anyone cares.) and a 97% chance that P != NP.

I'm not sure I'm parsing that correctly. Is that 2% for undecidable or undecidable+true? Don't most people consider undecidability evidence against?

But if P=NP in a practical way, ... Many crypto systems not just RSA will be vulnerable.

All crypto systems would be vulnerable. At least, all that have ever been deployed on a computer.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-05T21:25:18.903Z · LW(p) · GW(p)

I'm not sure I'm parsing that correctly. Is that 2% for undecidable or undecidable+true? Don't most people consider undecidability evidence against?

2% is undecidable in general. Most of that probability mass is for "There's no polynomial time solver for solving an NP complete problem but that is not provable in ZFC" (obviously one then needs to be careful about what one means by saying such a thing doesn't exist, but I don't want to have to deal with those details). A tiny part of that 2% is the possibility that there's a polynomial time algorithm for solving some NP complete problem but one can't prove in ZFC that the algorithm is polynomial time. That's such a weird option that I'm not sure how small a probability to assign to it, other than "very unlikely."

All crypto systems would be vulnerable. At least, all that have ever been deployed on a computer.

Actually, no. There are some that would not. For example, one-time pads have been deployed on computer systems (among other methods, using USB flash drives to deliver the secure bits). One-time pads are provably secure. But all public-key cryptography would be vulnerable, which means most forms of modern crypto.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-11-06T01:23:49.192Z · LW(p) · GW(p)

I forgot about one-time pads, which certainly are deployed, but which I don't think of as crypto in the sense of "turning small shared secrets into large shared secrets." My point was that it breaks not just public-key cryptography, but also symmetric cryptography, which tends to be formalizable as equivalent to one-way functions.

comment by PhilGoetz · 2010-11-05T04:50:22.037Z · LW(p) · GW(p)

I'll breathe much more easily if we show that P != NP.

Agreed. (Though it would be kinda cool in the long run if P = NP.)

comment by Liron · 2010-11-05T23:44:12.029Z · LW(p) · GW(p)

P vs NP has nothing to do with AI FOOM.

P = NP is effectively indistinguishable from P = ALL. Like PaulFChristiano said, if P = NP (in a practical way), then FOOMs are a dime a dozen.

And assuming P != NP, the fact that NP-complete problems aren't efficiently solvable in the general case doesn't mean paperclipping the universe is the least bit difficult.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-06T17:55:03.102Z · LW(p) · GW(p)

P vs NP has nothing to do with AI FOOM.

Nothing to do with it at all? I'm curious as to how you reach that conclusion.

P = NP is effectively indistinguishable from P = ALL. Like PaulFChristiano said, if P = NP (in a practical way), then FOOMs are a dime a dozen.

No. There are many fairly natural problems which are not in P. For example, given a specific Go position, does white or black have a win? This is in EXP.

The difficulty of FOOMing for most entities such as humans even given practical P=NP is still severe. See my discussion with Paul.

And assuming P != NP, the fact that NP-complete problems aren't efficiently solvable in the general case doesn't mean paperclipping the universe is the least bit difficult.

I'm puzzled at how you can reach such a conclusion. Many natural problems that an entity trying to FOOM would want to solve are NP-complete. For example, graph coloring comes up in memory optimization and traveling salesman comes up in circuit design. Now, in a prior conversation I had with Cousin It, he made the excellent point that a FOOMing AI might not need to actually deal with worst-case instances of these problems. But that is a more subtle issue than what you seem to be claiming.

Replies from: soreff, Liron, paulfchristiano
comment by soreff · 2010-11-06T18:50:23.335Z · LW(p) · GW(p)

I'm confused or about to display my ignorance. When you write:

There are many fairly natural problems which are not in P. For example, given a specific Go position, does white or black have a win? This is in EXP.

Are you saying that evaluating a Go position has exponential (time?) cost even on a nondeterministic machine? I thought that any given string of moves to the end of the game could be evaluated in polynomial (N^2?) time. I thought that the full set of possible strings of moves could be evaluated (naively) in exponential time on a deterministic machine and polynomial time on a nondeterministic machine - so I thought Go position evaluation would be in NP. I think you are saying that it is more costly than that. Are you saying that, and what am I getting wrong?

Replies from: darius, Perplexed, Douglas_Knight, JoshuaZ
comment by darius · 2010-11-07T01:39:01.499Z · LW(p) · GW(p)

"Does this position win?" has a structure like "Is there a move such that, for each possible reply there is a move such that, for each possible reply... you win." -- where existential and universal quantifiers alternate in the nesting. In a SAT problem on the other hand you just have a nest of existentials. I don't know about Go specifically, but that's my understanding of the usual core difference between games and SAT.

Replies from: soreff
comment by soreff · 2010-11-07T03:21:11.920Z · LW(p) · GW(p)

Much appreciated! So the NP solution to SAT is basically an OR over all of the possible assignments of the variables, where here (or for alternating move games in general), we've got alternating ORs and ANDs on sequential moves.

comment by Perplexed · 2010-11-07T01:56:19.507Z · LW(p) · GW(p)

I'm not sure what you are getting wrong. My initial inclination was to think as you do. But then I thought of this:

You are probably imagining N to be the number of moves deep that need to be searched. Which you probably think is roughly the square of the board size (nominally, 19). The trouble is, what N really means is the size of the specific problem instance. So, it is possible to imagine Go problems on 1001x1001 boards where the number of stones already played is small. I.e. N is much less than a million, but the amount of computation needed to search the tree is on the order of 1000000^1000000.

ETA: This explanation is wrong. Darius got it right.

Replies from: soreff
comment by soreff · 2010-11-07T02:58:45.934Z · LW(p) · GW(p)

Much appreciated! I was taking N to be the number of squares on the board. My current thought is that, as you said, the number of possible move sequences on an N square board is of the order of N^N (actually, I think slightly smaller: N!). As you said, N may be much larger than the number of stones already played.

My current understanding is that board size is fixed for any given Go problem. Is that true or false? If it is true, then I'd think that the factor of N branching at each step in the tree of moves is just what gets swept into the nondeterministic part of NP.

comment by Douglas_Knight · 2010-11-07T03:52:50.751Z · LW(p) · GW(p)

There are many fairly natural problems which are not in P. For example, given a specific Go position, does white or black have a win? This is in EXP.

Are you saying that evaluating a Go position has exponential (time?) cost even on a nondeterministic machine?

If one assumes everything that is conjectured, then yes. To say that it is EXP-hard is to say that it takes exponential time on a deterministic machine. This does not immediately say how much time it takes on a non-deterministic machine. It is not ruled out that NP=EXP, but it is extremely implausible. Also doubted, though more plausible, is that PSPACE=EXP. PSPACE doesn't care about determinism.

comment by JoshuaZ · 2010-11-08T03:29:00.537Z · LW(p) · GW(p)

I'm not completely sure what you mean. Darius's response seems relevant (in particular, you may want to look at the difference between a general non-deterministic Turing machine and an alternating Turing machine). However, there seems to be possibly another issue here: When mathematicians and computer scientists discuss polynomial time, they are talking about polynomials of the length of the input, not polynomials of the input (similarly, for exponential time and other classes). Thus, for example, to say that PRIMES is in P we mean that there's an algorithm that answers whether a given integer is prime that is time bounded by a polynomial of log_2 p (assuming p is written in base 2).
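A quick worked illustration of that convention (standard background, stated here only for concreteness):

$$b \approx \log_2 p, \qquad \text{trial division: } \sqrt{p} \approx 2^{b/2} \text{ steps (exponential in } b\text{)}, \qquad \text{AKS: } \mathrm{poly}(b) \text{ steps}$$

That is, naive trial division looks tame as a function of the value p, but is exponential in the input length b; "PRIMES is in P" is a statement about b.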

comment by Liron · 2010-11-08T04:38:33.227Z · LW(p) · GW(p)

There are indeed natural problems outside of NP, but an AI will be able to quickly answer any such queries in a way that, to a lesser intelligence, looks indistinguishable from an oracle's answers.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-08T04:45:10.516Z · LW(p) · GW(p)

There are indeed natural problems outside of NP, but an AI will be able to quickly answer any such queries in a way that, to a lesser intelligence, looks indistinguishable from an oracle's answers.

How do you know that? What makes you reach that conclusion?

Do you mean an AI that has already FOOMed? If so, this is sort of a trivial claim.

If you are talking about a pre-FOOM AI then I don't see why you think this is such an obvious conclusion. And the issue at hand is precisely what the AI could do leading up to and during FOOM.

comment by paulfchristiano · 2010-11-07T16:47:10.989Z · LW(p) · GW(p)

Determining whether white or black wins at Go is certainly not in P (in fact certainly not in NP, I think, if the game can be exponentially long), but in the real world you don't care whether white or black wins. You care about whether you win the particular game of Go you are playing, which is in NP (although with bad constants, since you have to simulate whoever you are playing against).

There is a compelling argument to be made that any problem you care about is in NP, although in general the constants will be impractical in the same sense that building a computer to simulate the universe is impractical, even if the problem is in P. In fact, this question doesn't really matter, because P is not actually the class of problems which can be solved in practice. It is a convenient approximation which allows us to state some theorems we have a chance of proving in the next century.

I think the existence of computational lower bounds is clearly of extreme importance to anything clever enough to discover optimal algorithms (and probably also to humans in the very long term for similar reasons). P != NP is basically the crudest such question, and even though I am fairly certain I know which way that question goes the probability of an AI fooming depends on much subtler problems which I can't even begin to understand. In fact, basically the only reason I personally am interested in the P vs NP question is because I think it involves techniques which will eventually help us address these more difficult problems.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-08T03:33:59.777Z · LW(p) · GW(p)

Determining whether white or black wins at Go is certainly not in P (in fact certainly not in NP, I think, if the game can be exponentially long), but in the real world you don't care whether white or black wins. You care about whether you win the particular game of Go you are playing, which is in NP (although with bad constants, since you have to simulate whoever you are playing against).

Huh? I don't follow this at all. The question of who would win any fixed game is trivially in NP, because it is doable in constant time. Any single question is always answerable in constant time. Am I misunderstanding you?

Replies from: paulfchristiano
comment by paulfchristiano · 2010-11-09T00:54:01.321Z · LW(p) · GW(p)

Suppose I want to choose how to behave to achieve some goal. Either what genes to put in a cell I am growing, or what moves to play in a game of Go, etc. Presumably I can determine whether any fixed prescription will cause me to attain my goal---I can simulate the universe and check the outcome at the end. Thus checking whether a particular sequence of actions (or a particular design, strategy, etc.) has the desired property is in P. Thus finding one with the desired property is in NP. The same applies to determining how to build a cell with desired properties, or how to beat the world's best Go player, etc. None of this is to say that P = NP sheds light on how easy these questions actually are, but P = NP is the normal theoretical interpretation, and in fact the only theoretical interpretation that makes sense if you are going to stick with the position that P is precisely the class of problems that an AI can solve.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-09T03:26:38.939Z · LW(p) · GW(p)

I'm having some trouble parsing what you have wrote.

Presumably I can determine whether any fixed prescription will cause me to attain my goal---I can simulate the universe and check the outcome at the end. Thus checking whether a particular sequence of actions (or a particular design, strategy, etc.) has the desired property is in P.

I don't follow this line of reasoning at all. Whether a problem is in P is a statement about the length of time it takes in general to solve instances. Also, a problem for this purpose is a collection of questions of the form "for given input N, does N have property A?"

None of this is to say that P = NP sheds light on how easy these questions actually are, but P = NP is the normal theoretical interpretation, and in fact the only theoretical interpretation that makes sense if you are going to stick with the position that P is precisely the class of problems that an AI can solve.

I'm not sure what you mean by this. First of all, the general consensus is that P != NP. Second of all, in no interpretation is P somehow precisely the set of problems that an AI can solve. It seems you are failing to distinguish between instances of problems and the general problems. Thus, for example, the traveling salesman problem is NP-complete. Even if P != NP, I can still solve individual traveling salesman problems (you can probably solve any instance with fewer than five nodes more or less by hand without too much effort). Similarly, even if factoring turns out to be not in P, it doesn't mean anyone is going to have trouble factoring 15.

comment by rwallace · 2010-11-05T04:00:33.839Z · LW(p) · GW(p)

Right, but one of the reasons for the curve of capability is the general version of Amdahl's law. A particular new algorithm may make a particular computational task much easier, but if that task was only 5% of the total problem you are trying to solve, then even an infinite speedup on that task will only give you 5% overall improvement. The upshot is that new algorithms only make enough of a splash to be heard of by computer scientists, whereas the proverbial man on the street has heard of Moore's Law.
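In symbols (the standard form of Amdahl's law, with p the fraction of the work that benefits and s the speedup on that fraction):

$$S_{\text{overall}} \;=\; \frac{1}{(1-p) + p/s}, \qquad \lim_{s\to\infty} S_{\text{overall}} \;=\; \frac{1}{1-p}$$

so with p = 0.05 the ceiling is 1/0.95 ≈ 1.05 - roughly the 5% figure above, no matter how good the new algorithm is.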

But I will grant you that a constructive proof of P=NP would... be interesting. I don't know that it would enable AI to go foom (that would still have other difficulties to overcome), but it would be of such wide applicability that my argument from the curve of capability against AI foom would be invalidated.

I share the consensus view that P=NP doesn't seem to be on the cards, but I agree it would be better to have a proof of P!=NP.

Replies from: Jordan, JoshuaZ
comment by Jordan · 2010-11-05T04:13:12.261Z · LW(p) · GW(p)

There are plenty of instances where an algorithmic breakthrough didn't just apply to 5% of a problem, but the major part of it. My field (applied math) is riddled with such breakthroughs.

Replies from: rwallace
comment by rwallace · 2010-11-05T04:44:36.612Z · LW(p) · GW(p)

Yeah, but it's fractal. For every such problem, there is a bigger problem such that the original one was a 5% subgoal. This is one of the reasons why the curve of capability manifests so consistently: you always find yourself hitting the general version of Amdahl's law in the context of a larger problem.

Replies from: Jordan
comment by Jordan · 2010-11-05T07:55:19.658Z · LW(p) · GW(p)

For every such problem, there is a bigger problem such that the original one was a 5% subgoal.

Sure, you can always cook up some ad hoc problem such that my perfect 100% solution to problem A is just a measly 5% subcomponent of problem B. That doesn't change the fact that I've solved problem A, and all the ramifications that come along with it. You're just relabeling things to automate a moving goal post. Luckily, an algorithm by any other name would still smell as sweet.

Replies from: magfrump, rwallace
comment by magfrump · 2010-11-05T09:04:18.136Z · LW(p) · GW(p)

I think the point he was trying to make was that the set of expanding subgoals that an AI would have to make its way through would be sufficient to slow it down to within the exponential we've all been working with.

Phrased this way, however, it's a much stronger point and I think it would require more discussion to be meaningful.

comment by rwallace · 2010-11-05T17:32:08.459Z · LW(p) · GW(p)

Bear in mind that if we zoom in to sufficiently fine granularity, we can get literally infinite speedup -- some of my most productive programming days have been when I've found a way to make a section of code unnecessary, and delete it entirely.

To show that this part of my argument is not infinitely flexible, I will say that one algorithmic breakthrough that would invalidate it would be a constructive proof of P=NP (or a way to make quantum computers solve NP-complete problems in polynomial time - I'm told the latter has been proven impossible, but I don't know how certain it is that there aren't any ways around that).

Replies from: paulfchristiano
comment by paulfchristiano · 2010-11-06T15:20:15.527Z · LW(p) · GW(p)

There are no known strong results concerning the relationship between BQP (roughly the analogue of P for quantum computers) and NP. There is strong consensus that BQP does not contain NP, but it is not as strong as the overwhelming consensus that P != NP.

Replies from: bentarm, Psy-Kosh
comment by bentarm · 2010-11-06T17:06:11.004Z · LW(p) · GW(p)

There is strong consensus that BQP does not contain NP, but it is not as strong as the overwhelming consensus that P != NP.

Presumably because P = NP would imply that NP is contained in BQP, so you can't believe the first of your statements without believing the second.

comment by Psy-Kosh · 2010-11-06T15:35:12.159Z · LW(p) · GW(p)

It's not even known if NP contains BQP?

Replies from: bentarm
comment by bentarm · 2010-11-06T17:03:53.343Z · LW(p) · GW(p)

It's not even known if NP contains BQP?

No. The best we can do is that both contain BPP and are contained in PP, as far as I recall.

Replies from: wnoise
comment by wnoise · 2010-11-06T22:09:00.405Z · LW(p) · GW(p)

And there exist oracles relative to which BQP is not contained in MA (which contains NP).

comment by JoshuaZ · 2010-11-05T04:04:19.799Z · LW(p) · GW(p)

Right, but one of the reasons for the curve of capability is the general version of Amdahl's law. A particular new algorithm may make a particular computational task much easier, but if that task was only 5% of the total problem you are trying to solve, then even an infinite speedup on that task will only give you 5% overall improvement. The upshot is that new algorithms only make enough of a splash to be heard of by computer scientists, whereas the proverbial man on the street has heard of Moore's Law.

May I suggest that the reason the proverbial person on the street has heard of Moore's Law is more that 1) it is easier to understand and 2) it has a more visibly obvious impact on their lives?

Edit: Also, regarding the 5%, sometimes the entire problem is just in the algorithm. For example, in the one I gave, factoring, the entire problem is whether you can factor integers quickly.

comment by Jordan · 2010-11-04T23:16:50.984Z · LW(p) · GW(p)

This is an interesting case, and reason enough to form your hypothesis, but I don't think observation backs up the hypothesis:

The difference in intelligence between the smartest academics and even another academic is phenomenal, to say nothing of the difference between the smartest academics and an average person. Nonetheless, the brain size of all these people is more or less the same. The difference in effectiveness is similar to the gulf between men and apes. The accomplishments of the smartest people are beyond the reach of average people. There are things smart people can do that average people couldn't, regardless of their numbers or resources.

The only way to create the conditions for any sort of foom would be to shun a key area completely for a long time, so that ultimately it could be rapidly plugged into a system that is very highly developed in other ways.

Such as, for instance, the fact that our brains aren't likely optimal computing machines, and could be greatly accelerated on silicon.

Forget about recursive FOOMs for a minute. Do you not think a greatly accelerated human would be orders of magnitude more useful (more powerful) than a regular human?

Replies from: rwallace
comment by rwallace · 2010-11-05T17:56:15.902Z · LW(p) · GW(p)

You raise an interesting point about differences among humans. It seems to me that, caveats aside about there being a lot of exceptions, different kinds of intelligence, IQ being an imperfect measure etc., there is indeed a large difference in typical effectiveness between, say, IQ 80 and 130...

... and yet not such a large difference between IQ 130 and IQ 180. Last I heard, the world's highest IQ person wasn't cracking problems the rest of us had found intractable, she was just writing self-help books. I find this counterintuitive. One possible explanation is the general version of Amdahl's law: maybe by the time you get to IQ 130, it's not so much the limiting factor. It's also been suggested that to get a human brain into very high IQ levels, you have to make trade-offs; I don't know whether there's much evidence for or against this.

As for uploading, yes, I think it would be great if we could hitch human thought to Moore's Law, and I don't see any reason why this shouldn't eventually be possible.

Replies from: Jordan
comment by Jordan · 2010-11-05T21:02:39.168Z · LW(p) · GW(p)

Correlation between IQ and effectiveness does break down at higher IQs, you're right. Nonetheless, there doesn't appear to be any sharp limit to effectiveness itself. This suggests to me that it is IQ that is breaking down, rather than us reaching some point of diminishing returns.

As for uploading, yes, I think it would be great if we could hitch human thought to Moore's Law, and I don't see any reason why this shouldn't eventually be possible.

My point here was that human mind are lop-sided, to use your terminology. They are sorely lacking in certain hardware optimizations that could render them thousands or millions of times faster (this is contentious, but I think reasonable). Exposing human minds to Moore's Law doesn't just give them the continued benefit of exponential growth, it gives them a huge one-off explosion in capability.

For all intents and purposes, an uploaded IQ 150 person accelerated a million times might as well be a FOOM in terms of capability. Likewise an artificially constructed AI with similar abilities.

(Edit: To be clear, I'm skeptical of true recursive FOOMs as well. However, I don't think something that powerful is needed in practice for a hard take off to occur, and think arguments for FAI carry through just as well even if self modifying AIs hit a ceiling after the first or second round of self modification.)

Replies from: CarlShulman, rwallace
comment by CarlShulman · 2010-11-08T14:12:11.480Z · LW(p) · GW(p)

Correlation between IQ and effectiveness does break down at higher IQs, you're right.

Average performance in science and income keeps improving substantially with IQ well past 130: http://www.vanderbilt.edu/Peabody/SMPY/Top1in10000.pdf

Some sources of high intelligence likely come with (and aren't fixed throughout the population because of) other psychological tradeoffs.

Replies from: Jordan
comment by Jordan · 2010-11-08T20:11:05.030Z · LW(p) · GW(p)

Thanks for the correction!

comment by rwallace · 2010-11-06T04:29:26.797Z · LW(p) · GW(p)

Sure, I'm not saying there is a sharp limit to effectiveness, at least not one we have nearly reached, only that improvements in effectiveness will continue to be hard-won.

As for accelerating human minds, I'm skeptical about a factor of millions, but thousands, yes, I could see that ultimately happening. But getting to that point is not going to be a one-off event. Even after we have the technology for uploading, there's going to be an awful lot of work just debugging the first uploaded minds, let alone getting them to the point where they're not orders of magnitude slower than the originals. Only then will the question of doubling their speed every couple of years even arise.

Replies from: Jordan
comment by Jordan · 2010-11-06T07:59:37.985Z · LW(p) · GW(p)

Sure, I'm not saying there is a sharp limit to effectiveness, at least not one we have nearly reached, only that improvements in effectiveness will continue to be hard-won.

My original example about academics was to demonstrate that there are huge jumps in effectiveness between individuals, on the order of the gap between man and ape. This goes against your claim that the jump from ape to man was a one-time bonanza. The question isn't whether additional gains are hard-won or not, but how discontinuous their effects are. There is a striking discontinuity between the effectiveness of different people.

But getting to that point is not going to be a one-off event. Even after we have the technology for uploading, there's going to be an awful lot of work just debugging the first uploaded minds, let alone getting them to the point where they're not orders of magnitude slower than the originals.

That is one possible future. Here's another one:

Small animal brains are uploaded first, and the kinks and bugs are largely worked out there. The original models are incredibly detailed and high fidelity (because no one knows what details to throw out). Once animal brain emulations are working well, a plethora of simplifications to the model are found which preserve the qualitative behavior of the mind, allowing for orders of magnitude speedups. Human uploads quickly follow, and intense pressure to optimize leads to additional orders of magnitude of speedup. Within a year the fastest uploads are well beyond what meatspace humans can compete with. The uploads then leverage their power to pursue additional research in software and hardware optimization, further securing an enormous lead.

(If Moore's Law continued to hold in their subjective time frame, then even if they are only 1000x faster they would double in speed every day. In fact, if Moore's Law held indefinitely they would create a literal singularity in 2 days. That's absurd, of course. But the point is that what the future Moore's Law looks like could be unexpected once uploads arrive.)
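Spelling out the arithmetic behind that parenthetical (assuming a Moore's Law doubling time of roughly two years, about 730 days, now measured in the uploads' subjective time):

$$t_1 \approx \frac{730\ \text{days}}{1000} \approx 0.73\ \text{days}, \qquad t_{\text{total}} \approx 0.73\left(1 + \tfrac12 + \tfrac14 + \cdots\right) \approx 1.5\ \text{days}$$

i.e. each successive doubling takes half as long in calendar time, and the whole series sums to roughly the two days mentioned.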

There's a million other possible futures, of course. I'm just pointing out that you can't look at one thing (Moore's Law) and expect to capture the whole picture.

comment by Kaj_Sotala · 2010-11-05T16:04:44.553Z · LW(p) · GW(p)

Because each was a factor of two increase, and it is a general rule that each doubling tends to bring about the same increase in capability.

I found this an interesting rule, and thought that after the examples you would establish some firmer theoretical basis for why it might work the way it does. But you didn't do that, and instead jumped to talking about AGI. It feels like you're trying to apply a rule before having established how and why it works, which raises "possible reasoning by surface analogy" warning bells in my head. The tone in your post is a lot more confident than it should be.

Replies from: rwallace
comment by rwallace · 2010-11-05T17:26:43.280Z · LW(p) · GW(p)

Fair enough; it's true that I don't have a rigorous mathematical model, nor do I expect to have one anytime soon. My current best sketch at an explanation is that it's a combination of:

  1. P != NP, i.e. the familiar exponential difficulty of finding solutions given only a way to evaluate them - I think this is the part that contributes the overall exponential shape.

  2. The general version of Amdahl's law: if subproblem X was only e.g. 10% of the overall job, then no improvement in X can by itself give more than a 10% overall improvement - I think this is the part that makes it so robust; as I mentioned in the subthread about algorithmic improvements, even if you can break the curve of capability in a subproblem, it persists at the next level up.

comment by NihilCredo · 2010-11-04T21:32:38.632Z · LW(p) · GW(p)

it is a general rule that each doubling tends to bring about the same increase in capability.

This rule smells awfully like a product of positive/confirmation bias to me.

Did you pre-select a wide range of various endeavours and projects and afterwards analyse their rate of progress, thus isolating this common pattern;

or did you notice a similarity between a couple of diminishing-returns phenomena, and then keep coming up with more examples?

My guess is strongly on the latter.

Replies from: rwallace, wedrifid
comment by rwallace · 2010-11-04T22:22:31.991Z · LW(p) · GW(p)

You guess incorrectly -- bear in mind that when I first looked into AI go foom it was not with a view to disproving it! Actually, I went out of my way to choose examples from areas that were already noteworthy for their rate of progress. The hard part of writing this post wasn't finding examples -- there were plenty of those -- it was understanding the reason for and implications of the one big exception.

Replies from: Perplexed
comment by Perplexed · 2010-11-04T22:33:39.379Z · LW(p) · GW(p)

I'm not convinced you have nailed the reasons for and implications of the exception. The cognitive significance of human language is not simply that it forced brain evolution into developing new kinds of algorithms (symbolic processing). Rather, language enabled culture, which resulted in an explosion of intra-species co-evolution. But although this led to a rapid jump (roughly 4x) in the capacity of brain hardware, the significant thing for mankind is not the increased individual smarts - it is the greatly increased collective smarts that makes us what we are today.

Replies from: rwallace
comment by rwallace · 2010-11-04T22:41:41.549Z · LW(p) · GW(p)

Well yes -- that's exactly why adding language to a chimpanzee led to such a dramatic increase in capability.

Replies from: Perplexed
comment by Perplexed · 2010-11-04T22:52:44.786Z · LW(p) · GW(p)

But, as EY points out, there may be some upcoming aspects of AI technology evolution which can have the same dramatic effects. Not self-modifying code, but maybe high bandwidth networks or ultra-fine-grained parallel processing. Eliezer hasn't convinced me that a FOOM is inevitable, but you have come nowhere near convincing me that another one is very unlikely.

Replies from: rwallace
comment by rwallace · 2010-11-04T23:46:07.276Z · LW(p) · GW(p)

High-bandwidth networks and parallel processing have fit perfectly well within the curve of capability thus far.

If you aren't convinced yet that another one is very unlikely, okay, what would convince you? Formal proof of a negative isn't possible outside pure mathematics.

Replies from: Perplexed
comment by Perplexed · 2010-11-04T23:56:49.406Z · LW(p) · GW(p)

If you aren't convinced yet that another one is very unlikely, okay, what would convince you?

I'm open to the usual kinds of Bayesian evidence. Let's see. H is "there will be no more FOOMs". What do you have in mind as a good E? Hmm, let's see. How will the world be observably different if you are right, from how it will look if you are wrong?

Point out such an E, and then observe it, and you may sway me to your side.

Removing my tongue from my cheek, I will make an observation. I'm sure that you have heard the statement "Extraordinary claims require extraordinary evidence." Well, there is another kind of claim that requires extraordinary evidence. Claims of the form "We don't have to worry about that, anymore."

Replies from: orthonormal, rwallace
comment by orthonormal · 2010-11-07T18:57:37.150Z · LW(p) · GW(p)

Removing my tongue from my cheek, I will make an observation. I'm sure that you have heard the statement "Extraordinary claims require extraordinary evidence." Well, there is another kind of claim that requires extraordinary evidence. Claims of the form "We don't have to worry about that, anymore."

IWICUTT.

(I Wish I Could Upvote This Twice.)

comment by rwallace · 2010-11-05T03:21:41.995Z · LW(p) · GW(p)

How will the world be observably different if you are right, from how it will look if you are wrong?

If I'm wrong, then wherever we can make use of some degree of recursive self-improvement -- to the extent that we can close the loop, feed the output of an optimization process into the process itself, as in e.g. programming tools, chip design and Eurisko -- we should be able to break the curve of capability and demonstrate sustained faster than exponential improvement.

If I'm right, then the curve of capability should hold in all cases, even when some degree of recursive self-improvement is in operation, and steady exponential improvement should remain the best we can get.

All the evidence we have thus far, supports the latter case, but I'm open to -- and would very much like -- demonstrations of the contrary.

I'm sure that you have heard the statement "Extraordinary claims require extraordinary evidence." Well, there is another kind of claim that requires extraordinary evidence. Claims of the form "We don't have to worry about that, anymore."

I address that position here and here.

Replies from: Perplexed
comment by Perplexed · 2010-11-05T04:23:34.285Z · LW(p) · GW(p)

If I'm wrong, then wherever we can make use of some degree of recursive self-improvement -- to the extent that we can close the loop, feed the output of an optimization process into the process itself, as in e.g. programming tools, chip design and Eurisko -- we should be able to break the curve of capability and demonstrate sustained faster than exponential improvement.

Then maybe I misunderstood your claim, because I thought you had claimed that there are no kinds of recursive self-improvement that break your curve. Or at least no kinds of recursive self-improvement that are relevant to a FOOM.

To be honest, my intuition is that recursive self-improvement opportunities generating several orders of magnitude of improvement must be very rare. And where they do exist, there probably must be significant "overhang" already in place to make them FOOM-capable. So a FOOM strikes me as unlikely. But your posting here hasn't led me to consider it any less likely than I had before.

Your "curve of capability" strikes me as a rediscovery of something economists have known about for years - the "law of diminishing returns". Since my economics education took place more than 40 years ago, "diminishing returns" is burnt deep into my intuitions. The trouble is that "diminishing returns" is not a really a law. It is, like your capability curve, more of a rough empirical observation - though admitted one with lots of examples to support it.

What I hear is that, since I got my degree and formed my intuitions, economists have been exploring the possibility of "increasing returns". And they find examples of it practically everywhere that people are climbing up a new-technology learning curve. In places like electronics and biotech. They are seeing the phenomenon in almost every new technology. Even without invoking recursive self-improvement. But so far, not in AI. That seems to be the one new industry that is still stumbling around in the dark. Kind of makes you wonder what will happen when you guys finally find the light switch.

Replies from: rwallace
comment by rwallace · 2010-11-05T04:58:09.990Z · LW(p) · GW(p)

Then maybe I misunderstood your claim, because I thought you had claimed that there are no kinds of recursive self-improvement that break your curve. Or at least no kinds of recursive self-improvement that are relevant to a FOOM.

That is what I'm claiming, so if you can demonstrate one, you'll have falsified my theory.

Your "curve of capability" strikes me as a rediscovery of something economists have known about for years - the "law of diminishing returns".

I don't think so, I think that's a different thing. In fact...

What I hear is that, since I got my degree and formed my intuitions, economists have been exploring the possibility of "increasing returns".

... I would've liked to use the law of increasing returns as a positive example, but I couldn't find a citation. The version I remember reading about (in a paper book, back in the 90s) said that every doubling of the number of widgets you make lets you improve the process/cut costs/whatever by a certain amount, and that this was remarkably consistent across industries -- so once again we have the same pattern: double the optimization effort and you get a certain degree of improvement.

Replies from: NancyLebovitz, JoshuaZ
comment by NancyLebovitz · 2010-11-05T10:58:33.089Z · LW(p) · GW(p)

I think I read that, too, and the claimed improvement was 20% with each doubling.
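If I'm remembering the standard experience-curve formulation correctly (a sketch from memory, not a citation), a constant percentage improvement per doubling of cumulative output is just a power law:

    $$C(n) = C(1)\, n^{-b}, \qquad C(2n)/C(n) = 2^{-b} = 0.8 \;\Rightarrow\; b = -\log_2 0.8 \approx 0.32$$

where C(n) is the unit cost after n units of cumulative production and 0.8 corresponds to the 20% figure above.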

Replies from: Perplexed
comment by Perplexed · 2010-11-05T13:26:36.593Z · LW(p) · GW(p)

That would look linear on a log-log graph. A power-law response.

I understood rwallace to be hawking a "curve of capability" which looks linear on a semi-log graph. A logarithmic response.

Of course, one of the problems with rwallace's hypothesis is that it becomes vague when you try to quantify it. "Capability increases by the same amount with each doubling of resources" can be interpreted in two ways. "Same amount" meaning "same percentage", or meaning literally "same amount".
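To spell out the two readings in symbols (C for capability, R for resources):

    $$\text{same absolute amount per doubling:}\quad C(R) = a + b \log_2 R \quad \text{(linear on a semi-log plot)}$$
    $$\text{same percentage per doubling:}\quad C(2R) = (1+p)\,C(R) \;\Rightarrow\; C(R) \propto R^{\log_2(1+p)} \quad \text{(linear on a log-log plot)}$$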

Replies from: rwallace
comment by rwallace · 2010-11-05T18:15:03.646Z · LW(p) · GW(p)

Right, to clarify, I'm saying the curve of capability is a straight line on a log-log graph, perhaps the clearest example being the one I gave of chip design, which gives repeated doublings of output for doublings of input. I'm arguing against the "AI foom" notion of faster growth than that, e.g. each doubling taking half the time of the previous one.

Replies from: JGWeissman
comment by JGWeissman · 2010-11-05T18:29:43.668Z · LW(p) · GW(p)

I'm saying the curve of capability is a straight line on a log-log graph

So this could be falsified by continuous capability curves that curve upwards on a log-log graph, and your arguments in various other threads that the discussed situations result in continuous capability curves are not strong enough to support your theory.

comment by JoshuaZ · 2010-11-05T05:02:15.623Z · LW(p) · GW(p)

Some models of communication equipment suggest high return rates for new devices, since the number of possible connections increases as the square of the number of people with the communication system. I don't know if anyone has looked at this in any real detail, although I would naively guess that someone must have.
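(The usual back-of-the-envelope behind "increases as the square": with n users there are

    $$\binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}$$

possible pairwise connections, so doubling the user base roughly quadruples the number of possible connections.)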

comment by wedrifid · 2010-11-04T21:46:28.800Z · LW(p) · GW(p)

This rule smells awfully like a product of positive/confirmation bias to me.

Yet somehow the post is managing to hover in the single figure positives!

comment by steven0461 · 2010-11-04T20:58:12.631Z · LW(p) · GW(p)

It seems like you're entirely ignoring feedback effects from more and better intelligence being better at creating more and better intelligence, as argued in Yudkowsky's side of the FOOM debate.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-11-04T21:10:33.460Z · LW(p) · GW(p)

And hardware overhang (faster computers developed before general cognitive algorithms, first AGI taking over all the supercomputers on the Internet) and fast infrastructure (molecular nanotechnology) and many other inconvenient ideas.

Also if you strip away the talk about "imbalance" what it works out to is that there's a self-contained functioning creature, the chimpanzee, and natural selection burps into it a percentage more complexity and quadruple the computing power, and it makes a huge jump in capability. Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself. Chimpanzees were not "lopsided", they were complete packages designed for an environment; it turned out there were things that could be done which created a huge increase in optimization power (calling this "symbolic processing" assumes a particular theory of mind, and I think it is mistaken) and perhaps there are yet more things like that, such as, oh, say, self-modification of code.

Replies from: xamdam, Will_Sawin, XiXiDu, rwallace, rwallace
comment by xamdam · 2010-11-04T22:07:41.269Z · LW(p) · GW(p)

calling this "symbolic processing" assumes a particular theory of mind, and I think it is mistaken

Interesting. Can you elaborate or link to something?

Replies from: cousin_it, timtyler
comment by cousin_it · 2010-11-05T11:14:13.589Z · LW(p) · GW(p)

I'm not Eliezer, but will try to guess what he'd have answered. The awesome powers of your mind only feel like they're about "symbols", because symbols are available to the surface layer of your mind, while most of the real (difficult) processing is hidden. Relevant posts: Detached Lever Fallacy, Words as Mental Paintbrush Handles.

Replies from: xamdam
comment by xamdam · 2010-11-05T20:48:02.627Z · LW(p) · GW(p)

Thanks.

The posts (at least the second one) seem to suggest that the role of symbolic reasoning is overstated and that at least some reasoning is clearly non-symbolic (e.g. visual).

In this context the question is whether the symbolic processing (there is definitely some - math, for example) gave pre-humans the boost that allowed the huge increase in computing power, so I am not seeing the contradiction.

Replies from: Perplexed
comment by Perplexed · 2010-11-05T23:48:16.083Z · LW(p) · GW(p)

Speech is a kind of symbolic processing, and is probably an important capability in mankind's intellectual evolution, even if symbolic processing for the purpose of reasoning (as in syllogisms and such) is an ineffectual modern invention.

comment by timtyler · 2011-06-06T19:23:35.914Z · LW(p) · GW(p)

calling this "symbolic processing" assumes a particular theory of mind, and I think it is mistaken

Interesting. Can you elaborate or link to something?

Susan Blackmore argues that what originally caused the "huge increase in optimization power" was memes - not symbolic processing - which probably started up a bit later than the human cranium's expansion did.

comment by Will_Sawin · 2010-11-05T03:00:18.553Z · LW(p) · GW(p)

What's clearly fundamental about the human/chimpanzee advantage, the thing that made us go FOOM and take over the world, is that we can, extremely efficiently, share knowledge. This is not as good as fusing all our brains into a giant brain, but it's much much better than just having a brain.

This analysis possibly suggests that "taking over the world's computing resources" is the most likely FOOM, because it is similar to the past FOOM, but that is weak evidence.

comment by XiXiDu · 2010-11-05T17:46:31.191Z · LW(p) · GW(p)

...the chimpanzee, and natural selection burps into it a percentage more complexity and quadruple the computing power, and it makes a huge jump in capability.

The genetic difference between a chimp and a human amounts to roughly 40–45 million bases that are present in humans and missing from chimps. And that number doesn't account for the differences in gene expression between humans and chimps. So it's not like you can add a tiny bit of code and get a superapish intelligence.
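For scale, a rough conversion (assuming 2 bits per base of raw sequence, and ignoring the expression differences entirely):

    $$4\text{--}4.5 \times 10^{7}\ \text{bases} \times 2\ \text{bits/base} \approx 8\text{--}9 \times 10^{7}\ \text{bits} \approx 10\text{--}11\ \text{megabytes}$$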

Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself.

Nothing is offered to support the assertion that there is another such jump. If you were to assert this, then another premise of yours, that a universal computing device can simulate every physical process, could be questioned on the same principle. So here is an antiprediction: humans are on an equal footing with any other intelligence that can master abstract reasoning (which does not necessarily include speed or overcoming bias).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-05T18:15:31.914Z · LW(p) · GW(p)

Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself.

Nothing is offered to support the assertion that there is another such jump.

In a public debate, it makes sense to defend both sides of an argument, because each of the debaters actually tries to convince a passive third party whose beliefs are not clearly known. But any given person that you are trying to convince doesn't have a duty to convince you that the argument you offer is incorrect. It's just not an efficient thing to do. A person should always be allowed to refute an argument on the grounds that they don't currently believe it to be true. They can be called on contradicting assertions of not believing or believing certain things, but never required to prove a belief. The latter would open a separate argument, maybe one worth engaging in, but often a distraction from the original one, especially when the new argument being separate is not explicitly acknowledged.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-05T19:56:56.469Z · LW(p) · GW(p)

I agree with some of what you wrote, although I'm not sure why you wrote it. Anyway, I was giving an argumentative inverse of what Yudkowsky asserted, thereby echoing his own rhetoric. Someone claimed A and in return Yudkowsky claimed that A is a bare assertion, therefore ¬A; whereupon I claimed that ¬A is a bare assertion, therefore the truth-value of A is again ~unknown. This of course could have been inferred from Yudkowsky's statement alone, if interpreted as a predictive inverse (antiprediction), if not for the last sentence which states, "[...] and perhaps there are yet more things like that, such as, oh, say, self-modification of code." [1] Perhaps yes, perhaps not. Given that his comment had already scored 16 when I replied, I believed that highlighting that it offered no convincing evidence for or against A would be justified by one sentence alone. Here we may disagree, but note that my comment included more information than that particular sentence alone.

  1. Self-modification of code does not necessarily amount to a superhuman jump in abstract reasoning comparable to the gap between humans and chimps, and might very well be infeasible, as it demands self-knowledge requiring resources exceeding those of any given intelligence. This would agree with the line of argumentation in the original post, namely that the next step (e.g. an improved AGI created by the existing AGI) will require a doubling of resources. Hereby we are on par again, two different predictions canceling each other out.
Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-05T20:16:15.810Z · LW(p) · GW(p)

You should keep track of whose beliefs you are talking about, as it's not always useful or possible to work with the actual truth of informal statements when you analyze the correctness of a debate. A person holding a wrong belief for wrong reasons can still be correct in rejecting an incorrect argument for incorrectness of those wrong reasons.

If A believes X, then (NOT X) is a "bare assertion", not enough to justify A changing their belief. For B, who believes (NOT X), stating "X" is also a bare assertion, not enough to justify changing the belief. There is no inferential link between refuted assertions and beliefs that were held all along. A believes X not because "(NOT X) is a bare assertion", even though A believes both that "(NOT X) is a bare assertion" (correctly) and X (of unknown truth).

Replies from: XiXiDu
comment by XiXiDu · 2010-11-05T20:45:28.240Z · LW(p) · GW(p)

There is no inferential link between refuted assertions and beliefs that were held all along.

That is true. Yet for a third party, one that is unaware of any additional substantiation not featured in the debate itself, a prediction and its antiprediction cancel each other out. As a result, no conclusion can be drawn by an uninformed bystander. This I tried to highlight without having to side with either party.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-05T20:56:21.649Z · LW(p) · GW(p)

Yet for a third party, one that is unaware of any additional substantiation not featured in the debate itself, a prediction and its antiprediction cancel each other out. As a result, no conclusion can be drawn by an uninformed bystander.

They don't cancel each other out; they both lack convincing power and are equally irrelevant. It's an error to state as arguments what you know your audience won't agree with (change their mind in response to). At the same time, explicitly rejecting an argument that failed to convince is entirely correct.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-06T10:30:08.170Z · LW(p) · GW(p)

They don't cancel each other out; they both lack convincing power and are equally irrelevant.

Let's assume that you contemplate the possibility of an outcome Z. Now you come across a discussion between agent A and agent B discussing the prediction that Z is true. If agent B does proclaim the argument X in favor of Z being true and you believe that X is not convincing then this still gives you new information about agent B and the likelihood of Z being true. You might now conclude that Z is slightly more likely to be true because of additional information in favor of Z and the confidence of agent B necessary to proclaim that Z is true. Agent A does however proclaim argument Y in favor of Z being false, and you believe that Y is just as unconvincing as argument X in favor of Z being true. You might now conclude again that the truth-value of Z is ~unknown, as each argument and the confidence of its proponent ~outweigh each other.

Therefore no information is irrelevant if it is the only information about the outcome in question. Your judgement might weigh less than the confidence of an agent that may possess unknown additional substantiation in favor of its argument. If you are unable to judge the truth-value of an exclusive disjunction, then the fact that any given argument about it is not compelling tells you more about yourself than about the agent proclaiming it.

Any argument on its own has to be taken into account, if only for its logical consequences. Every argument should be incorporated into your probability estimates, because it signals a certain confidence (by being proclaimed at all) on the part of the agent uttering it. Yet if there exists a counterargument that is the inverse of the original argument, you'll have to take that into account as well; it might very well outweigh the original argument. Therefore there are no arguments that entirely lack the power to convince, however small that power may be, yet arguments can outweigh and trump each other.

ETA: Fixed the logic, thanks Vladimir_Nesov.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-06T10:53:38.553Z · LW(p) · GW(p)

Let's assume that you contemplate the possibility Z XOR ¬Z.

Z XOR ¬Z is always TRUE.

(I know what you mean, but it looks funny.)

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2010-11-06T11:42:25.404Z · LW(p) · GW(p)

Fixed it now (I hope), thanks.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-06T11:49:12.562Z · LW(p) · GW(p)

I think it has become more confused now. With C and D unrelated, why do you care about (C XOR D)? For the same reason, you can't now expect evidence for C to always be counter-evidence for D.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-06T12:19:42.334Z · LW(p) · GW(p)

Thanks for your patience and feedback, I updated it again. I hope it is now somewhat more clear what I'm trying to state.

comment by XiXiDu · 2010-11-06T11:28:50.824Z · LW(p) · GW(p)

Whoops, I'm just learning the basics (some practice here). I took NOT Z as an independent proposition. I guess there is no simple way to express this if you do not assign the negation of Z its own variable, in case you want it to be an independent proposition?

comment by Vladimir_Nesov · 2010-11-06T12:41:59.135Z · LW(p) · GW(p)

If agent B does proclaim the argument X in favor of Z being true and you believe that X is not convincing then this still gives you new information about agent B and the likelihood of Z being true. You might now conclude that Z is slightly more likely to be true because of additional information in favor of Z

B believes that X argues for Z, but you might well believe that X argues against Z. (You are considering a model of a public debate, while this comment was more about principles for an argument between two people.)

Also, it's strange that you are contemplating levels of belief in Z, while A and B assert it being purely true or false. How overconfident of them.

(Haven't yet got around to a complete reply rectifying the model, but will do eventually.)

comment by rwallace · 2010-11-04T22:30:01.672Z · LW(p) · GW(p)

See my reply to saturn on recursive self-improvement. Potential hardware overhang, I already addressed. Nanotechnology is thus far following the curve of capability, and there is every reason to expect it will continue to do so in the future. I already explained the sense in which chimpanzees were lopsided. Self modification of code has been around for decades.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-05T12:35:54.606Z · LW(p) · GW(p)

Nanotechnology is thus far following the curve of capability, and there is every reason to expect it will continue to do so in the future.

May be minorly off-topic: nothing Drexler hypothesised has, as far as I know, even been started. As I understand it, the state of things is that we still have literally no idea how to get there from here, and what's called "nanotechnology" is materials science or synthetic biology. Do you have details of what you're describing as following the curve?

Replies from: timtyler, rwallace
comment by timtyler · 2011-06-06T19:27:53.968Z · LW(p) · GW(p)

May be minorly off-topic: nothing Drexler hypothesised has, as far as I know, even been started.

Perhaps start here, with his early work on the potential of hypertext ;-)

comment by rwallace · 2010-11-05T19:58:54.698Z · LW(p) · GW(p)

A good source of such details is Drexler's blog, where he has written some good articles about -- and seems to consider highly relevant -- topics like protein design and DNA origami.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-05T22:09:24.290Z · LW(p) · GW(p)

(cough) I'm sure Drexler has much detail on Drexler's ideas. Assume I'm familiar with the advocates. I'm speaking of third-party sources, such as from the working worlds of physics, chemistry, physical chemistry and material science for example.

As far as I know - and I have looked - there's little or nothing. No progress to nanobots, no progress to nanofactories. The curve in this case is a flat line at zero. Hence asking you specifically for detail on what you are plotting on your graph.

Replies from: lsparrish, rwallace
comment by lsparrish · 2010-11-05T22:59:11.621Z · LW(p) · GW(p)

There has been some impressive sounding research done on simulated diamondoid tooltips for this kind of thing. (Admittedly, done by advocates.)

I suspect when these things do arrive, they will tend to have hard vacuum, cryogenic temperatures, and flat surfaces as design constraints.

comment by rwallace · 2010-11-06T04:25:07.003Z · LW(p) · GW(p)

Well, that's a bit like saying figuring out how to smelt iron constituted no progress toward the Industrial Revolution. These things have to go a step at a time, and my point in referring to Drexler's blog was that he seems to think e.g. protein design and DNA origami do constitute real progress.

As for things you could plot on a graph, consider the exponentially increasing amount of computing power put into molecular modeling simulations, not just by nanotechnology advocates, but people who actually do e.g. protein design for living today.

comment by rwallace · 2010-11-04T22:51:31.999Z · LW(p) · GW(p)

Also, I'm not sure what you mean by "symbolic processing" assuming a particular theory of mind -- theories of mind differ on the importance thereof, but I'm not aware of any that dispute its existence. I'll second the request for elaboration on this.

I'll also ask: assuming I'm right, is there any weight of evidence whatsoever that would convince you of this? Or is "AI go foom" for you a matter of absolute, unshakable faith?

Replies from: wedrifid
comment by wedrifid · 2010-11-04T23:05:04.759Z · LW(p) · GW(p)

I'll also ask: assuming I'm right, is there any weight of evidence whatsoever that would convince you of this? Or is "AI go foom" for you a matter of absolute, unshakable faith?

It would be better if you waited until you had made somewhat of a solid argument before you resorted to that appeal. Even Robin's "Trust me, I'm an Economist!" is more persuasive.

The Bottom Line is one of the earliest posts in Eliezer's own rationality sequences and describes approximately this objection. You'll note that he added an Addendum:

This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don't like.

Replies from: rwallace
comment by rwallace · 2010-11-05T00:13:14.264Z · LW(p) · GW(p)

I'm resisting the temptation to say "trust me, I'm an AGI researcher" :-) Bear in mind that my bottom line was actually the pro "AI go foom" side; it's still what I would like to believe.

But my theory is clearly falsifiable. I stand by my position that it's fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree to have falsified it.

Replies from: wedrifid
comment by wedrifid · 2010-11-05T00:39:49.621Z · LW(p) · GW(p)

I'm resisting the temptation to say "trust me, I'm an AGI researcher" :-)

But barely. ;)

You would not believe how little that would impress me. Well, I suppose you would - I've been talking with XiXi about Ben, after all. I wouldn't exactly say that your status incentives promote neutral reasoning on this position - or Robin on the same. It is also slightly outside of the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.

Bear in mind that my bottom line was actually the pro "AI go foom" side; it's still what I would like to believe.

You are trying to create AGI without friendliness and you would like to believe it will go foom? And this is supposed to make us trust your judgement with respect to AI risks?

Incidentally, 'the bottom line' accusation here was yours, not the other way around. The reference was to question its premature use as a fully general counterargument.

But my theory is clearly falsifiable. I stand by my position that it's fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree to have falsified it.

We are talking here about predictions of the future. Predictions. That's an important keyword that is related to falsifiability. Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.

You just tag-teamed one general counterargument out to replace it with a new one. Unfalsifiability has a clear meaning when it comes to creating and discussing theories and it is inapplicable here to the point of utter absurdity. Predictions, for crying out loud.

Replies from: rwallace, XiXiDu
comment by rwallace · 2010-11-05T03:45:15.117Z · LW(p) · GW(p)

I wouldn't exactly say that your status incentives promote neutral reasoning on this position

No indeed, they very strongly promote belief in AI foom - that's why I bought into that belief system for a while, because if true, it would make me a potential superhero.

It is also slightly outside of the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.

Nope, it's exactly in the core of my expertise. Not that I'm expecting you to believe my conclusions for that reason.

You are trying to create AGI without friendliness and you would like to believe it will go foom?

When I believed in foom, I was working on Friendly AI. Now that I no longer believe that, I've reluctantly accepted that human-level AI in the near future is not possible, and I'm working on smarter tool AI instead - well short of human equivalence, but hopefully, with enough persistence and luck, better than what we have today.

We are talking here about predictions of the future. Predictions. That's an important keyword that is related to falsifiability.

That is what falsifiability refers to, yes.

My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement.

Build a flipping AGI of approximately human level and see if whether the world as we know it ends within a year.

Are you saying your theory makes no other predictions than this?

Replies from: wedrifid, wedrifid
comment by wedrifid · 2010-11-05T04:13:08.693Z · LW(p) · GW(p)

Are you saying your theory makes no other predictions than this?

RWallace, you made a suggestion of unfalsifiabiity, a ridiculous claim. I humored you by giving the most significant, obvious and overwhelmingly critical way to falsify (or confirm) the theory. You now presume to suggest that such a reply amounts to a claim that this is the only prediction that could be made. This is, to put it in the most polite terms I am willing, disingenuous.

Replies from: rwallace
comment by rwallace · 2010-11-05T04:38:50.022Z · LW(p) · GW(p)

-sigh-

This crap goes on year after year, decade after bloody decade. Did you know the Singularity was supposed to happen in 2000? Then in 2005. Then in 2010. Guess how many Singularitarians went "oh hey, our predictions keep failing, maybe that's evidence our theory isn't actually right after all"? If you guessed none at all, give yourself a brownie point for an inspired guess. It's like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go "well our date was wrong, but that doesn't mean it's not going to happen, of course it is, Real Soon Now." Every time we actually try to do any recursive self-improvement, it fails to do anything like what the AI foom crowd says it should do, but of course, it's never "well, maybe recursive self-improvement isn't all it's cracked up to be," it's always "your faith wasn't strong enough," oops, "you weren't using enough of it," or "that's not the right kind" or some other excuse.

That's what I have to deal with, and when I asked you for a prediction, you gave me the usual crap about oh well you'll see when the Apocalypse comes and we all die, ha ha. And that's the most polite terms I'm willing to put it in.

I've made it clear how my theory can be falsified: demonstrate recursive self-improvement doing something beyond the curve of capability. Doesn't have to be taking over the world, just sustained improvement beyond what my theory says should be possible.

If you're willing to make an actual, sensible prediction of RSI doing something, or some other event (besides the Apocalypse) coming to pass, such that if it fails to do that, you'll agree your theory has been falsified, great. If not, fine, I'll assume your faith is absolute and drop this debate.

Replies from: shokwave, wedrifid, JoshuaZ, Larks
comment by shokwave · 2010-11-05T05:23:29.617Z · LW(p) · GW(p)

It's like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go "well our date was wrong, but that doesn't mean it's not going to happen, of course it is, Real Soon Now."

That the Singularity concept pattern-matches doomsday cults is nothing new to anyone here. You looked further into it and declared it false, wedrifid and others looked into it and declared it possible. The discussion is now about evidence between those two points of view. Repeating that it looks like a doomsday cult is taking a step backwards, back to where we came to this discussion from.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-05T05:27:00.053Z · LW(p) · GW(p)

rwallace's argument isn't centering on the standard argument that makes it look like a doomsday cult. He's focusing on an apparent repetition of predictions while failing to update when those predictions have failed. That's different than the standard claim about why Singularitarianism pattern matches with doomsday cults, and should, to a Bayesian, be fairly disturbing if he is correct about such a history.

Replies from: shokwave
comment by shokwave · 2010-11-05T06:25:03.490Z · LW(p) · GW(p)

Fair enough. I guess his rant pattern-matched the usual anti-doomsday-cult stuff I see involving the singularity. Keep in mind that, as a Bayesian, you can adjust the value you assign to the people making the predictions instead of the likelihood of the event. Certainly, that is what I have done; I care less for predictions, even from people I trust to reason well, because a history of failing predictions has taught me not that predicted events don't happen, but rather that predictions are full of crap. This has the converse effect of greatly reducing the value of (in hindsight) correct predictions; a pretty common failure mode for a lot of belief mechanisms is treating a correct prediction alone as enough evidence. I would require the process by which the prediction was produced to consistently predict correctly.

comment by wedrifid · 2010-11-05T05:25:58.604Z · LW(p) · GW(p)

The pattern you are completing here has very little relevance to the actual content of the conversation. There is no prediction here about the date of a possible singularity and, for that matter, no mention of how probable it is. When, or if, someone such as yourself creates a human-level generally intelligent agent and releases it, that will go a long way towards demonstrating that one of the theories is false.

You have iterated through a series of argument attempts here, abandoning each only to move to another equally flawed. The current one would appear to be 'straw man'... and not a particularly credible straw man at that. (EDIT: Actually, no, you have kept the 'unfalsifiable' thing here too, somehow.)

Your debating methods are not up to the standards that are found to be effective and well received on lesswrong.

Replies from: magfrump
comment by magfrump · 2010-11-05T09:33:29.631Z · LW(p) · GW(p)

The way that this thread played out bothered me.

I feel like I am in agreement that computer hardware plus the human algorithm equals FOOM. Just as hominids improved very steeply once a few pieces were put in place (pieces which may or may not have corresponded to, but probably included, symbolic processing), I think that putting an intelligent algorithm in place on current computers is likely to create extremely rapid advancement.

On the other hand, it's possible that this isn't the case. We could sit around all day and play reference-class tennis, but we should be able to agree that there EXIST reference classes which provide SOME evidence against the thesis. The fact that fields like CAD have significant bottlenecks due to compiling time, for example, indicates that some progress currently driven by innovation still has a machine bottleneck and will not experience a recursive speedup when done by ems. The fact that in fields like applied math, new algorithms which are human insights often create serious jumps is evidence that these fields will experience recursive speedups when done by ems.

The beginning of this thread was Eliezer making a comment to the effect that symbolic logic is something computers can do so it must not be what makes humans more special than chimps. It was a pretty mundane comment, and when I saw that it had over ten upvotes I was disappointed and reminded of RationalWiki's claims that the site is a personality cult. rwallace responded by asking Eliezer to live up to the "standards that are found to be effective and well received on lesswrong," albeit in a fairly snarky way. You not only responded with more snark, but (a) represented a significant "downgrade" from a real response from Eliezer, giving the impression that he has better things to do than respond to serious engagements with his arguments, and (b) did not reply with a serious engagement of the arguments, such as an acknowledgement of a level of evidence.

You could have responded by saying that "fields of knowledge relevant to taking over the world seem much more likely to me to be social areas where big insights are valuable and less like CAD where compiling processes take time. Therefore while your thesis that many areas of an em's speedup will be curve-constrained may be true, it still seems unlikely to affect the probability of a FOOM."

In which case you would have presented what rwallace requested--a possibility of falsification--without any need to accept his arguments. If Eliezer had replied in this way in the first place, perhaps no one involved in this conversation would have gotten annoyed and wasted the possibility of a valuable discussion.

I agree that this thread of comments has been generally lacking in the standards of argument usually present on LessWrong. But from my perspective you have not been bringing the conversation up to a higher level as much as stoking the fire of your initial disagreement.

I am disappointed in you, and by the fact that you were upvoted while rwallace was downvoted; this seems like a serious failure on the part of the community to maintain its standards.

To be clear: I do not agree with rwallace's position here, I do not think that he was engaging at the level that is common and desirable here. But you did not make it easier for him to do that, you made it harder, and that is far more deserving of downvotes.

Replies from: wedrifid, Eliezer_Yudkowsky
comment by wedrifid · 2010-11-05T10:39:32.463Z · LW(p) · GW(p)

I am disappointed in you

This would seem to suggest that you expected something different from me, that is better according to your preferences. This surprises me - I think my comments here are entirely in character, whether that character is one that appeals to you or not. The kind of objections I raise here are also in character. I consistently object to arguments of this kind and used in the way they are here. Perhaps ongoing dislike or disrespect would be more appropriate than disappointment?

Replies from: magfrump
comment by magfrump · 2010-11-05T17:18:52.079Z · LW(p) · GW(p)

You are one of the most prolific posters on Less Wrong. You have over 6000 karma, which means that for anyone who has some portion of their identity wrapped up in the quality of the community, you serve as at least a partial marker of how well that community is doing.

I am disappointed that such a well-established member of our community would behave in the way you did; your 6000 karma gives me the expectations that have not been met.

I realize that you may represent a slightly different slice of the LessWrong personality spectrum than I do, and this probably accounts for some amount of the difference, but this appeared to me to be a breakdown of civility, which is not, or at least should not be, dependent on your personality.

I don't know you well enough to dislike you. I've seen enough of your posts to know that you contribute to the community in a positive way most of the time. Right now it just feels like you had a bad day and got upset about the thread and didn't give yourself time to cool off before posting again. If this is a habit for you, then it is my opinion that it is a bad habit and I think you can do better.

Replies from: wedrifid, shokwave, Douglas_Knight
comment by wedrifid · 2010-11-05T18:10:36.029Z · LW(p) · GW(p)

You are one of the most prolific posters on Less Wrong. You have over 6000 karma, which means that for anyone who has some portion of their identity wrapped up in the quality of the community, you serve as at least a partial marker of how well that community is doing.

Ahh. That does make sense. I fundamentally disagree with everything else of significance in your judgement here, but from your premises I can see how disappointment is consistent.

I will not respond to those judgments except in as much as to say that I don't agree with you on any of the significant points. My responses here are considered, necessary, and if anything erred on the side of restraint. Bullshit, in the technical sense, is the enemy here. This post and particularly the techniques used to defend it are bullshit in that sense. That it somehow got voted above -5 is troubling to me.

Replies from: magfrump
comment by magfrump · 2010-11-05T22:03:12.075Z · LW(p) · GW(p)

I agree that the arguments made in the original post tend to brush relevant details under the rug. But there is a difference between saying that an argument is flawed and trying to help fix it, and saying that it is irrelevant and the person is making a pure appeal to their own authority.

I was interested to see a more technical discussion of what sorts of things might be from the same reference class as recursive self-improvement. I was happy to see a viewpoint being represented on Less Wrong that was more diverse than the standard "party line." Even if the argument is flawed I was glad to see it.

I would have been much happier to see the argument deconstructed than I am now having seen it turned into a flame war.

Replies from: wedrifid
comment by wedrifid · 2010-11-05T22:18:21.847Z · LW(p) · GW(p)

and saying that it is irrelevant and the person is making a pure appeal to their own authority.

I believe I observed that it was far worse than an appeal to authority.

You do not understand the mechanisms of reasoning as employed here well enough to see why the comments here received the reception that they did.

Replies from: magfrump
comment by magfrump · 2010-11-05T22:33:28.055Z · LW(p) · GW(p)

In this comment rwallace asks you to make a falsifiable prediction. In this comment you state:

Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.

rwallace responded by saying:

My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement. ... Are you saying your theory makes no other predictions than [AI will cause the world to end]?

Then in your reply you say he is accusing you of making that claim.

The way he asked his question was impolite. However in the whole of this thread, you have not attempted to provide a single falsifiable point, despite the fact that this is what he was explicitly asking for.

It is true that I do not understand the mechanisms. I thought that I understood that the policy of LessWrong is not to dismiss arguments but to fight the strongest argument that can be built out of that argument's corpse.

At no point did the thread become, in my mind, about your belief that his argument was despicable. If I understand correctly, you believe that by drawing attention to technical details, he is drawing attention away from the strongest arguments on the topic and therefore moving people towards less correct beliefs in a dangerous way. This is a reasonable objection, but again at no point did I see this thread become about your objection in a positive light rather than being about his post in a negative light.

If you are interested in making your case explicitly, or demonstrating where you have attempted to make it, I would be very interested to see it. If you are interested in providing other explicit falsifiable claims or demonstrating where they have been made I would be interested to see that as well. If you are interested only in discussing who knows the community better and using extremely vague terms like "mechanisms of reasoning as employed here" then I think we both have better ways to spend our time.

Replies from: wedrifid
comment by wedrifid · 2010-11-05T23:05:42.160Z · LW(p) · GW(p)

However in the whole of this thread, you have not attempted to provide a single falsifiable point, despite the fact that this is what he was explicitly asking for.

You are simply wrong.

'Falsifiable' isn't a rallying cry... it actually refers to a distinct concept - and was supplied multiple times in a completely unambiguous fashion.

I think we both have better ways to spend our time.

I did not initiate this conversation and at no time did I desire it. I did choose to reply to some of your comments.

comment by shokwave · 2010-11-05T19:09:32.301Z · LW(p) · GW(p)

I am disappointed that such a well-established member of our community would behave in the way you did

Wedrifid pointed out flaws in a flawed post, and pointed out flaws in a series of flawed arguments. You could debate the degree of politeness required but pointing out flaws is in some fundamental ways an impolite act. It is also a foundation of improving rationality. To the extent that these comment sections are about improving rationality, wedrifid behaved exactly as they should have.

Karma on LessWrong isn't about politeness, as far as I have seen. For what it's worth, in my kibitzer'd neutral observations, the unanimous downvoting is because readers spotted flaws; unanimous upvoting is for posts that point out flaws in posts.

Replies from: wedrifid, magfrump
comment by wedrifid · 2010-11-05T23:02:33.242Z · LW(p) · GW(p)

I'm starting to think we may need to bring up Eliezer's 'tending to the garden before it becomes overgrown' and 'raising the sanity waterline' posts from early on. There has been a recent trend of new users picking an agenda to support, then employing the same kinds of fallacies and debating tactics in their advocacy. Then, when they are inevitably downvoted, there is the same sense of outrage that mere LW participants dare to evaluate their comments negatively.

It must be that all the lesswrong objectors are true believers in an echo chamber. Or maybe those that make the effort to reply are personally flawed. It couldn't be that people here are able to evaluate the reasoning and consider the reasoning used to be more important than which side the author is on.

This isn't a problem if it happens now and again. Either the new user has too much arrogance to learn to adapt to lesswrong standards and leaves, or they learn what is expected here and integrate into the culture. The real problem comes when arational debaters are able to lend support to each other, preventing natural social pressures from having the full effect. That's when the sanity waterline can really start to fall.

Replies from: shokwave
comment by shokwave · 2010-11-06T03:33:41.226Z · LW(p) · GW(p)

It must be that all the lesswrong objectors are true believers in an echo chamber. Or maybe those that make the effort to reply are personally flawed. It couldn't be that people here are able to evaluate the reasoning and consider the reasoning used to be more important than which side the author is on.

When we see this, we should point them to the correspondence bias and the evil enemies posts and caution them not to assume that a critical reply is an attack from someone who is subverting the community - or worse, defending the community from the truth.

As an aside, top level posts are scary. Twice I have written up something, and both times I deleted it because I thought I wouldn't be able to accept criticism. There is this weird feeling you get when you look at your pet theories and novel ideas you have come up with: they feel like truth, and you know how good LessWrong is with the truth. They are going to love this idea, know that it is true immediately and with the same conviction that you have, and celebrate you as a good poster and community member. After deleting the posts (and maybe this is rationalization) it occurred to me that had anyone disagreed, that would have been evidence not that I was wrong, but that they hated truth.

comment by magfrump · 2010-11-05T22:01:53.681Z · LW(p) · GW(p)

I didn't mean just that he was impolite, or just that pointing out flaws in a flawed argument is bad or impolite. Of course when a post is flawed it should be criticized.

I am disappointed that the criticism was destructive, claiming that the post was a pure appeal to authority, rather than constructive, discussing how we might best update on this evidence, even if our update is very small or even in the opposite direction.

I guess what I'm saying is that we should hold our upvotes to a higher standard than just "pointing out flaws in an argument."

Replies from: wedrifid, Perplexed
comment by wedrifid · 2010-11-05T22:39:28.113Z · LW(p) · GW(p)

I guess what I'm saying is that we should hold our upvotes to a higher standard than just "pointing out flaws in an argument."

It's called less wrong for a reason. Encouraging the use of fallacious reasoning and dark arts rhetoric even by leaving it with a neutral reception would be fundamentally opposed to the purpose of this site. Most of the sequences, in fact, have been about how not to think stupid thoughts. One of the ways to do that is to prevent your habitat from overwhelming you with them, and to limit your discussions to those that are up to at least a crudely acceptable level.

If you want a debate about AI subjects where the environment isn't primarily focussed on rewarding sound reasoning then I am almost certain that there are other places that are more welcoming.

Replies from: magfrump
comment by magfrump · 2010-11-05T22:52:54.876Z · LW(p) · GW(p)

This particular thread has been about attacking poor reasoning via insult. I do not believe that this is necessarily the best way to promote sound reasoning. The argument could be made, and if you had started or if you continue by making that argument I would be satisfied with that.

I am happy to see that elsewhere there are responses which acknowledge that interesting information has been presented before completely demolishing the original article.

This makes me think that pursuing this argument between the two of us is not worthwhile, as it draws attention to both of us making posts that are not satisfying to each other and away from other posts which may seem productive to both of us.

Replies from: shokwave, wedrifid
comment by shokwave · 2010-11-06T05:04:37.048Z · LW(p) · GW(p)

This particular thread has been about attacking poor reasoning via insult. I do not believe that this is necessarily the best way to promote sound reasoning.

Agreed. It takes an effort of willpower not to get defensive when you are criticised, so an attack (especially with insults) is likely to cause the target to become defensive and try to fight back rather than learn where they went wrong. As we know from the politics sequence, an attack might even make their conviction stronger!

However,

I do not believe that this is necessarily the best way to promote sound reasoning.

I actually can't find a post on LessWrong specifically about this, but it has been said many times that the best is the enemy of the good. Be very wary of shooting down an idea because it is not the best idea. In the overwhelming majority of cases, the idea is better than doing nothing, and (again I don't have the cite, but it has been discussed here before) if you spend too much time looking for the best, you don't have any time left to do any of the ideas, so you end up doing nothing - which is worse than the mediocre idea you argued against.

If I were to order the ways of dealing with poor reasoning, it would look like this: Point out poor reasoning > Attack poor reasoning with insult > Leave poor reasoning alone.

comment by wedrifid · 2010-11-05T23:04:24.713Z · LW(p) · GW(p)

Again, I disagree substantially with your observations on the critical premises.

comment by Perplexed · 2010-11-05T22:27:19.637Z · LW(p) · GW(p)

I guess what I'm saying is that we should hold our upvotes to a higher standard than just "pointing out flaws in an argument."

I tend to agree, but what are those higher standards? One I would suggest is that the act of pointing out a flaw ought to be considered unsuccessful if the author of the flaw is not enlightened by the criticism. Sometimes communicating the existence of a flaw requires some handholding.

To those who object "It is not my job to educate a bias-laden idiot", I respond, "And it is not my job to upvote your comment, either."

Replies from: magfrump
comment by magfrump · 2010-11-05T22:37:52.731Z · LW(p) · GW(p)

Pointing out a flaw and suggesting how it might be amended would be an excellent post. Asking politely if the author has a different amendment in mind would be terrific.

And I could be incorrect here, but isn't this site about nurturing rationalists? As I understand it, all of us humans (and clippy) are bias-laden idiots and the point of LessWrong is for us to educate ourselves and each other.

comment by Douglas_Knight · 2010-11-05T18:41:36.062Z · LW(p) · GW(p)

You keep switching back and forth between "is" and "ought" and I think this leads you into error.

The simplest prediction from wedrifid's high karma is that his comments will be voted up. On the whole, his comments on this thread were voted up. The community normally agrees with him and today it agrees with him. This suggests that he is not behaving differently.

You have been around this community a while and should already have assessed its judgement and the meaning of karma. If you think that the community expresses bad judgement through its karma, then you should not be disappointed in bad behavior by high karma users. (So it would seem rather strange to write the above comment!) If you normally think that the community expresses good judgement through karma, then it is probably expressing similarly good judgement today.

Most likely, the difference is you, that you do not have the distance to adequately judge your interactions. Yes, there are other possibilities; it is also possible that "foom" is a special topic that the community and wedrifid cannot deal with rationally. But is it so likely that they cannot deal with it civilly?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-11-05T18:07:22.906Z · LW(p) · GW(p)

Eliezer making a comment to the effect that symbolic logic is something computers can do so it must not be what makes humans more special than chimps.

I did not say that. I said that symbolic logic probably wasn't It. You made up your own reason why, and a poor one.

Replies from: shokwave, magfrump
comment by shokwave · 2010-11-05T18:55:07.990Z · LW(p) · GW(p)

Out of morbid curiosity, what is your reason for symbolic logic not being it?

Replies from: Liron
comment by Liron · 2010-11-05T23:13:16.623Z · LW(p) · GW(p)

I second the question out of healthy curiosity.

comment by magfrump · 2010-11-05T21:54:15.016Z · LW(p) · GW(p)

That's fair. I apologize, I shouldn't have put words in your mouth. That was the impression I got, but it was unfounded to say it came from you.

comment by JoshuaZ · 2010-11-05T04:41:24.059Z · LW(p) · GW(p)

So, I'm vaguely aware of Singularity claims for 2010. Do you have citations for people making such claims that it would happen in 2000 or 2005?

I agree that pushing something farther and farther into the future is a potential warning sign.

Replies from: timtyler, steven0461, rwallace
comment by timtyler · 2010-11-05T09:49:56.763Z · LW(p) · GW(p)

In the "The Maes-Garreau Point" Kevin Kelly lists poorly-referenced predictions of "when they think the Singularity will appear" of 2001, 2004 and 2005 - by Nick Hogard, Nick Bostrom and Eleizer Yudkowsky respectively.

comment by steven0461 · 2010-11-05T19:49:08.475Z · LW(p) · GW(p)

I agree that pushing something farther and farther into the future is a potential warning sign.

But only a potential warning sign -- fusion power is always 25 years away, but so is the decay of a Promethium-145 atom.
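(For the numbers behind the second half of that: Pm-145 has a half-life of roughly 17.7 years, so its mean lifetime is

    $$\tau = t_{1/2} / \ln 2 \approx 17.7 / 0.693 \approx 25.5\ \text{years},$$

and because exponential decay is memoryless, the expected wait stays about 25 years no matter how long the atom has already survived.)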

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-05T21:00:59.513Z · LW(p) · GW(p)

But only a potential warning sign -- fusion power is always 25 years away, but so is the decay of a Promethium-145 atom.

Right, but we expect that for the promethium atom. If physicists had predicted that a certain radioactive sample would decay in a fixed time, and they kept pushing up the time for when it would happen, and didn't alter their hypotheses at all, I'd be very worried about the state of physics.

comment by rwallace · 2010-11-05T04:52:48.321Z · LW(p) · GW(p)

Not off the top of my head, which is one reason I didn't bring it up until I got pissed off :) I remember a number of people predicting 2000, over the last decades of the 20th century, I think Turing himself was one of the earliest.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-05T04:57:21.304Z · LW(p) · GW(p)

Turing never discussed anything much like a Singularity, to my knowledge. What you may be thinking of is how in his original article proposing the Turing Test he said that he expected that it would take around fifty years for machines to pass the Turing Test. He wrote the essay in 1950. But Turing's remark is not the same claim as a Singularity occurring in 2000. Turing was off about when we'd have AI. As far as I know, he didn't comment on anything like a Singularity.

Replies from: rwallace
comment by rwallace · 2010-11-05T05:02:06.811Z · LW(p) · GW(p)

Ah, that's the one I'm thinking of -- he didn't comment on a Singularity, but did predict human level AI by 2000. Some later people did, but I didn't save any citations at the time and a quick Google search didn't find any, which is one of the reasons I'm not writing a post on failed Singularity predictions.

Replies from: steven0461
comment by steven0461 · 2010-11-05T19:27:20.029Z · LW(p) · GW(p)

Another reason, hopefully, is that there would always have been a wide range of predictions, and there's a lot of room for proving points by being selective about which ones to highlight. Even if you looked at all predictions, there are selection effects: the ones that were repeated, or even stated in the first place, tend to be the more extreme ones.

comment by Larks · 2010-11-05T14:50:00.205Z · LW(p) · GW(p)

If you think that most Singularities will be Unfriendly, the Anthropic Shadow means that their absence from our timeline isn't very strong evidence against their being likely in the future: no matter what proportion of the multiverse sees the light cone paperclipped in 2005, all the observers in 2010 will be in universes that weren't ravaged.

Replies from: rwallace
comment by rwallace · 2010-11-05T20:05:53.872Z · LW(p) · GW(p)

This is true if you think the maximum practical speed of interstellar colonization will be extremely close to (or faster than) the speed of light. (In which case, it doesn't matter whether we are talking Singularity or not, friendly or not, only that colonization suppresses subsequent evolution of intelligent life, which seems like a reasonable hypothesis.)

If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don't Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn't yet reached us.

Of course there is as yet no proof of either hypothesis, but such reasonable estimates as we currently have suggest the latter.

Replies from: Document
comment by Document · 2010-11-05T23:48:06.403Z · LW(p) · GW(p)

If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don't Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn't yet reached us.

Nitpick: If the civilization is spreading by SETI attack, observing them could be the first stage of being colonized by them. But I think the discussion may be drifting off-point here. (Edited for spelling.)

comment by wedrifid · 2010-11-05T04:15:25.274Z · LW(p) · GW(p)

Nope, it's exactly in the core of my expertise.

You are not an expert on recursive self-improvement, as it relates to AGI or the phenomenon in general.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-05T04:18:28.813Z · LW(p) · GW(p)

You are not an expert on recursive self-improvement, as it relates to AGI or the phenomenon in general.

In fairness, I'm not sure anyone is really an expert on this (although this doesn't detract from your point at all).

Replies from: wedrifid
comment by wedrifid · 2010-11-05T04:26:01.334Z · LW(p) · GW(p)

In fairness, I'm not sure anyone is really an expert on this (although this doesn't detract from your point at all).

You are right, and I would certainly not require anyone to have such expertise before taking their thoughts seriously. I am simply wary of economists (Robin) or AGI creator hopefuls claiming that their expertise should be deferred to (only relevant here as a hypothetical pseudo-claim). Professions will naturally try to claim more territory than would be objectively appropriate. This isn't because the professionals are actively deceptive but rather because it is the natural outcome of tribal instincts. Let's face it - intellectual disciplines and fields of expertise are mostly about pissing on trees, but with better hygiene.

comment by XiXiDu · 2010-11-05T18:02:24.970Z · LW(p) · GW(p)

Predictions, for crying out loud.

Yes, but why would the antipredictions of AGI researchers not outweigh yours, since they are directly inverse? Further, if your predictions are not falsifiable then they are by definition true and cannot be refuted. Therefore it is not unreasonable to ask what would disqualify your predictions in advance, so as to be able to argue based on the diverging opinions here. Otherwise, as I said above, we'll have two inverse predictions that outweigh each other, and not the discussion about risk estimates we should be having.

Replies from: wedrifid
comment by wedrifid · 2010-11-05T20:47:29.170Z · LW(p) · GW(p)

The claim being countered was falsifiability. Your reply here is beyond irrelevant to the comment you quote.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-06T09:26:35.662Z · LW(p) · GW(p)

rwallace said it all in his comment that has been downvoted. Since I'm unable to find anything wrong with his comment and don't understand yours at all, which has for unknown reasons been upvoted, there's no way for me to counter what you say beyond what I've already said.

Here's a wild guess at what I believe the positions to be. rwallace asks you what information would make you update or abandon your predictions. You in turn seem to believe that predictions are just that -- utterances of what might be possible -- unquestionable and not subject to any empirical criticism.

I believe I'm at least smarter than the general public, although I haven't read a lot of Less Wrong yet. Further, I'm always willing to announce that I have been wrong and to change my mind. This should at least make you question your communication skills regarding outsiders, a little bit.

Unfalsifiability has a clear meaning when it comes to creating and discussing theories and it is inapplicable here to the point of utter absurdity.

Theories are collections of proofs, and a hypothesis is a prediction or collection of predictions that must be falsifiable or proven before it can become such a collection of proofs, that is, a theory. It is not absurd at all to challenge predictions based on their refutability, as any prediction that isn't falsifiable will be eternal and therefore useless.

Replies from: wedrifid
comment by wedrifid · 2010-11-06T10:34:37.212Z · LW(p) · GW(p)

The Wikipedia article on falsifiability would be a good place to start if you wish to understand what is wrong with the way falsification has been used (or misused) here. With falsifiability understood, seeing the problem should be straightforward.

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2010-11-06T12:47:28.776Z · LW(p) · GW(p)

I'll just back out and withdraw my previous statements here. I was already reading that Wiki entry when you replied. It would certainly take too long to figure out where I might be wrong here. I thought falsifiability was sufficiently clear to me to ask what would change someone's mind if I believe that a given prediction is sufficiently unspecific.

I have to immerse myself in the shallows that are the foundations of falsifiability (philosophy). I have done so in the past and will continue to do so, but that will take time. Nothing so far has really convinced me that an unfalsifiable idea can provide more than hints of what might be possible, and therefore something new to try. Yet empirical criticism, in the form of the eventual realization of one's ideas, or a proof of contradiction (respectively inconsistency), seems to be the best grounding of any truth-value (at least in retrospect, for a prediction). That is why I like to ask what information would change one's mind about an idea, prediction or hypothesis. I call this falsifiability. If one replied, "nothing; falsifiability is misused here", I would conclude that his idea is unfalsifiable. Maybe wrongly so!

Replies from: wedrifid
comment by wedrifid · 2010-11-07T07:49:11.555Z · LW(p) · GW(p)

Thou art wise.

comment by XiXiDu · 2010-11-06T10:41:10.135Z · LW(p) · GW(p)

I'd like to know if you disagree with this comment. It would help me to figure out where we disagree, or what exactly I'm missing or misunderstanding with regard to falsifiability and the value of predictions.

comment by JamesAndrix · 2010-11-05T21:40:38.177Z · LW(p) · GW(p)

The answer is that each doubling of computing power adds roughly the same number of ELO rating points.

When you make a sound carry 10 times as much energy, it only sounds a bit louder.

If your unit of measure already compensates for huge leaps in underlying power, then you'll tend to ignore that the leaps in power are huge.

How you feel about a RAM upgrade is one such measure, because you don't feel everything that happens inside your computer. You're measuring benefit by how it works today vs. yesterday, instead of what "it" is doing today vs. 20 years ago.
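
To make the log-unit point concrete, here is a minimal Python sketch using the decibel scale for sound energy (the standard example of this kind of compression):

```python
import math

# Each 10x jump in sound energy adds only a fixed 10 dB to the measured level,
# just as each doubling of compute adds a roughly fixed number of ELO points.
for ratio in (1, 10, 100, 1_000, 10_000):
    print(f"{ratio:>6}x energy -> {10 * math.log10(ratio):5.1f} dB")
```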

Can such a thing happen again? In particular, is it possible for AI to go foom the way humanity did?

If such lopsidedness were to repeat itself... well even then, the answer is probably no.

Isn't that lopsidedness the state computers are currently in? So the first computer that gets the 'general intelligence' thing will have a huge advantage even before any fancy self-modification.

comment by jimrandomh · 2010-11-04T22:56:59.088Z · LW(p) · GW(p)

You seem to be arguing from the claim that there are exponentially diminishing returns within a certain range, to the claim that there are no phase transitions outside that range. You explain away the one phase transition you've noticed (ape to human) in an unconvincing manner, and ignore three other discontinuous phase transitions: agriculture, industrialization, and computerization.

Also, this statement would require extraordinary evidence that is not present:

every even slightly promising path has had people working on it

comment by Vladimir_Nesov · 2010-11-05T10:36:41.818Z · LW(p) · GW(p)

It seems to me that the first part of your post, the part that examines PC memory and such, simply states that (1) there are some algorithms (real-world process patterns) that take an exponential amount of resources, and (2) if capability and resources are roughly the same thing, then we can unpack the concept of "significant improvement in capability" as "roughly a doubling in capability", or, given that resources are similar to capability in such cases, "roughly a doubling in resources".

Yes, such things exist. There are also other kinds of things.

comment by PhilGoetz · 2010-11-05T04:36:08.172Z · LW(p) · GW(p)

I'm not convinced; but it's interesting. A lot hinges on the next-to-last paragraph, which is dubious and handwavy.

One weakness: when you say that chimpanzees looked like well-developed creatures, but really they had this huge unknown gap in their capabilities, which we filled in, I don't read that as evidence that now we are fully-balanced creatures with no gaps. I wonder where the next gap is. (EDIT: See jimrandomh's excellent comment below.)

What if an AI invents quantum computing? Or, I don't know, is rational?

Another weakness is the assumption that the various scales you measure things on, like go ratings, are "linear". Go ratings, at least, are not. A decrease of 1 kyu is supposed to mean an increase in the likelihood ratio of winning by a factor of 3. Also, by your logic, it should take twice as long to go from 29 kyu to 28 as from 30 to 29; no one should ever reach 10 kyu.
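
For concreteness, a minimal sketch of the arithmetic behind that reductio, assuming (as the objection supposes) that each one-rank improvement takes twice as long as the previous one:

```python
# If rank k+1 costs twice the time of rank k, then climbing 20 ranks
# (say, 30 kyu to 10 kyu) costs 2**20 - 1 times the first step.
first_step = 1.0  # time for the first one-rank improvement, arbitrary units
total = sum(first_step * 2**k for k in range(20))
print(total)  # 1048575.0 -- about a million times the first improvement
```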

Over the last five or six million years, our lineage upgraded computing power (brain size) by about a factor of three, and upgraded firmware to an extent that is unknown but was surely more like a percentage than an order of magnitude. The result was not a corresponding improvement in capability. It was a jump from almost no to fully general symbolic intelligence, which took us from a small niche to mastery of the world. How? Why?

It is certainly intriguing; but it's not proven that there was any jump in capability. You could also argue that there was a level of intelligence that, once crossed, led to a phase transition, perhaps a self-sustaining increase in culturally-transmitted knowledge. Maybe chimpanzees were 99% of the way there. A chimpanzee might think she was immeasurably smarter than a monkey.

I don't think people use symbol processing and logic the way computer programs do. The mouse the cat the dog chased hunted squeaked. I'm not convinced that the cognitive difference between a very bright chimpanzee and a dull human is as large as the cognitive difference between a dull human and a very bright human. I'm not convinced that the ability to string words together into sentences is as big a deal as humans say it is. If it were, nonverbal communication wouldn't be as important as it is. And I wouldn't have heard so many depressingly-stupid sentences.

Replies from: rwallace
comment by rwallace · 2010-11-05T18:08:33.096Z · LW(p) · GW(p)

I don't read that as evidence that now we are fully-balanced creatures with no gaps. I wonder where the next gap is.

Mind you, I don't think we, in isolation, are close to fully balanced; we are still quite deficient in areas like accurate data storage and arithmetic. Fortunately we have computers to fill those gaps for us. Your question is then essentially, are there big gaps in the combined system of humans plus computers -- in other words, are there big opportunities we're overlooking, important application domains within the reach of present-day technology, not yet exploited? I think the answer is no; of course outside pure mathematics, it's not possible to prove a negative, only to keep accumulating absence of evidence to the point where it becomes evidence of absence. But I would certainly be interested in any ideas for such gaps.

Another weakness is the assumption that the various scales you measure things on, like go ratings, are "linear". Go ratings, at least, are not.

No indeed! I should clarify that exponential inputs can certainly produce exponential outputs -- an example I gave is chip design, where the outputs feed back into the inputs, getting us fairly smooth exponential growth. Put another way, the curve of capability is a straight line on a log-log graph; I'm merely arguing against the existence of steeper growth than that.
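
A toy sketch of the contrast being drawn (my illustration of the claim, not rwallace's own model): steady feedback, where each generation's output is a fixed multiple of its input, gives smooth exponential growth in time; a "foom" would require the improvement factor itself to keep growing:

```python
# Steady feedback: each generation's chips are a fixed factor better than the
# chips used to design them -> exponential growth in time.
# Runaway feedback: the improvement factor itself grows each generation
# -> super-exponential growth, the kind being argued against here.
steady, runaway, multiplier = 1.0, 1.0, 2.0
for generation in range(1, 11):
    steady *= 2.0
    runaway *= multiplier
    multiplier *= 2.0
    print(generation, steady, runaway)
```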

And I wouldn't have heard so many depressingly-stupid sentences.

QFT. Necessary conditions are not, unfortunately, sufficient conditions :P

comment by saturn · 2010-11-04T21:00:14.824Z · LW(p) · GW(p)

You seem to have completely glossed over the idea of recursive self-improvement.

Replies from: rwallace
comment by rwallace · 2010-11-04T22:26:59.985Z · LW(p) · GW(p)

Not at all -- we already have recursive self-improvement: each new generation of computers is taking on more of the work of designing the next generation. But -- as an observation of empirical fact, not just a conjecture -- this does not produce foom; it produces steady exponential growth. As I explained, this is because recursive self-improvement is still subject to the curve of capability.

Replies from: saturn, JGWeissman
comment by saturn · 2010-11-04T23:32:48.826Z · LW(p) · GW(p)

Whenever you have a process that requires multiple steps to complete, you can't go any faster than the slowest step. Unless Intel's R&D department actually does nothing but press the "make a new CPU design" button every few months, I think the limiting factor is still the step that involves unimproved human brains.

Elsewhere in this thread you talk about other bottlenecks, but as far as I know FOOM was never meant to imply unbounded speed of progress, only fast enough that humans have no hope of keeping up.

Replies from: magfrump
comment by magfrump · 2010-11-05T08:58:58.295Z · LW(p) · GW(p)

I saw a discussion somewhere with a link to a CPU design researcher discussing the level of innovation required to press that "make a new CPU design" button, and how much of the time is spent waiting for compiler results and bug testing new designs.

I'm getting confused trying to remember it off the top of my head, but here are some links to what I could dig up with a quick search.

Anyway, I haven't reread the whole discussion, but the weak inside view seems to be that speeding up humans would not be quite as big a gap as the outside view predicts -- of course with a half dozen caveats that mean a FOOM could still happen. soreff's comment below also seems related.

Replies from: saturn
comment by saturn · 2010-11-05T17:42:53.519Z · LW(p) · GW(p)

As long as nontrivial human reasoning is a necessary part of the process, even if it's a small part, the process as a whole will stay at least somewhat "anchored" to a human-tractable time scale. Progress can't speed up past human comprehension as long as human comprehension is required for progress. If the human bottleneck goes away there's no guarantee that other bottlenecks will always conveniently appear in the right places to have the same effect.

Replies from: magfrump
comment by magfrump · 2010-11-05T21:55:32.251Z · LW(p) · GW(p)

I agree, but that is a factual and falsifiable claim that can be tested (albeit only in very weak ways) by looking at where current research is most bottlenecked by human comprehension.

comment by JGWeissman · 2010-11-04T22:32:59.361Z · LW(p) · GW(p)

No. We have only observed weak recursive self-improvement, where the bottleneck is outside the part being improved.

Replies from: soreff, wedrifid, rwallace
comment by soreff · 2010-11-05T01:06:13.732Z · LW(p) · GW(p)

where the bottleneck is outside the part being improved.

I work in CAD, and I can assure you that the users of my department's code are very interested in performance improvements. The CAD software run time is a major bottleneck in designing the next generation of computers.

Replies from: wedrifid, Douglas_Knight
comment by wedrifid · 2010-11-05T01:18:08.224Z · LW(p) · GW(p)

I work in CAD, and I can assure you that the users of my department's code are very interested in performance improvements.

Might I suggest that the greatest improvements in CAD performance will come from R&D into CAD techniques and software? At least, that seems to be the case basically everywhere else. Not to belittle computing power, but the software to use the raw power tends to be far more critical. If this weren't the case, and given that much of the CAD task can be parallelized, recursive self-improvement in computer chip design would result in an exponentially expanding number of exponentially improving supercomputer clusters. On the scale of "We're using 30% of the chips we create to make new supercomputers instead of selling them. We'll keep doing that until our processors are sufficiently ahead of ASUS that we can reinvest less of our production capacity and still be improving faster than all competitors".

I think folk like yourself and the researchers into CAD are the greater bottleneck here. That's a compliment to the importance of your work. I think. Or maybe an insult to your human frailty. I never can tell. :)

comment by Douglas_Knight · 2010-11-05T05:05:31.137Z · LW(p) · GW(p)

Jed Harris says similar things in the comments here, but this seems to make predictions that don't seem borne out to me (cf. wedrifid). If serial runtime is a recursive bottleneck, then the breakdown of exponentially increasing clock speed should cause problems for the chip design process and then also break exponential transistor density. But if these processes can be parallelized, then they should have been parallelized long ago.

A way to reconcile some of these claims is that serial clock speed has only recently become a bottleneck, as a result of the clock speed plateau.

comment by wedrifid · 2010-11-04T22:42:15.068Z · LW(p) · GW(p)

It is interesting to note that rw first expends effort to argue that something could kind of be considered recursive improvement, so as to go on and show how weakly recursive it is. That's not even 'reference class tennis'... it's reference class Aikido!

Replies from: rwallace
comment by rwallace · 2010-11-04T22:47:09.100Z · LW(p) · GW(p)

I'll take that as a compliment :-) but to clarify, I'm not saying it's weakly recursive. I'm saying it's quite strongly recursive -- and noting that recursion isn't magic fairy dust; the curve of capability limits the rate of progress even when you do have recursive self-improvement.

Replies from: wedrifid
comment by wedrifid · 2010-11-04T22:54:02.975Z · LW(p) · GW(p)

I'm saying it's quite strongly recursive

I suppose 'quite' is a relative term. It's improvement with a bottleneck that resides firmly in the human brain.

and noting that recursion isn't magic fairy dust; the curve of capability limits the rate of progress even when you do have recursive self-improvement.

Of course it does. Which is why it matters so much how steep the curve of recursion is compared to the curve of capability. It is trivial maths.

comment by rwallace · 2010-11-04T22:37:17.367Z · LW(p) · GW(p)

But there will always be a bottleneck outside the part being improved, if nothing else because the ultimate source of information is feedback from the real world.

(Well, that might stop being true if it turns out to be possible to Sublime into hyperspace or something like that. But it will remain true as long as we are talking about entities that exist in the physical universe.)

Replies from: JGWeissman
comment by JGWeissman · 2010-11-04T22:47:21.975Z · LW(p) · GW(p)

An AGI could FOOM and surpass us before it runs into that limit. It's not like we are making observations anywhere near as fast as physically possible, nor are we drawing all the conclusions we ideally could from the data we do observe.

Replies from: rwallace
comment by rwallace · 2010-11-04T22:57:25.165Z · LW(p) · GW(p)

"That limit"? The mathematically ultimate limit is Solomonoff induction on an infinitely powerful computer, but that's of no physical relevance. I'm talking about the observed bound on rates of progress, including rates of successive removal of bottlenecks. To be sure, there may -- hopefully will! -- someday exist entities capable of making much better use of data than we can today; but there is no reason to believe the process of getting to that stage will be in any way discontinuous, and plenty of reason to believe it will not.

Replies from: JGWeissman
comment by JGWeissman · 2010-11-04T23:03:01.968Z · LW(p) · GW(p)

Are you being deliberately obtuse? "That limit" refers to the thing you brought up: the rate at which observations can be made.

Replies from: rwallace
comment by rwallace · 2010-11-05T00:01:37.920Z · LW(p) · GW(p)

Yes, but you were the one who started talking about it as something you can "run into", together with terms like "as fast as physically possible" and "ideally could from the data" - that last term in particular has previously been used in conversations like this to refer to Solomonoff induction on an infinitely powerful computer.

My point is that at any given moment an awful lot of things will be bottlenecks, including real-world data. The curve of capability is already observed in cases where you are free to optimize whatever variable is the lowest hanging fruit at the moment.

In other words, you are already "into" the current data limit; if you could get better performance by using less data and substituting e.g. more computation, you would already be doing it.

As time goes by, the amount of data you need for a given degree of performance will drop as you obtain more computing power, better algorithms etc. (But of course, better performance still will be obtainable by using more data.) However,

  1. The amount of data needed won't drop below some lower bound,

  2. More to the point, the rate at which the amount needed drops is itself bounded by the curve of capability.

comment by Jonii · 2010-11-05T20:50:45.184Z · LW(p) · GW(p)

Even if an AGI had only a very small comparative advantage (in skill and in ability to recursively self-improve) over humans supported by the then-best available computer technology, and thus over their ability to self-improve, it would eventually -- probably even then quite fast -- overpower humans totally and utterly. And it seems fairly likely that eventually you could build a fully artificial agent that was strictly superior to humans (or could recursively self-update to become one). This intuition is fairly plausible given that humans are not ultimately designed to be singularity survivors with the best possible mindware for keeping up with ever-advancing technology, and re-engineering ourselves would most likely be more difficult than building a new AI from scratch.

So in conclusion, nothing of great importance, as far as I can tell, is changed. Whatever AGI comes forward still has to be Friendly, or we're doomed. And humans are still going to be totally overpowered by that machine intelligence in a relatively short time.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-05T21:05:08.776Z · LW(p) · GW(p)

Yup. Intelligence explosion is pretty much irrelevant at this point (even if in fact real). Given the moral weight of the consequences, one doesn't need impending doom to argue for the high marginal worth of pursuing Friendly AI. Abstract arguments get stronger by discarding irrelevant detail, even correct detail.

(It's unclear which is more difficult to argue: intelligence explosion, or the expected utility of starting to work on possibly long-term Friendly AI right now. But using both abstract arguments allows one to convince even if only one of them gets accepted.)

comment by Vladimir_Nesov · 2010-11-05T10:38:44.113Z · LW(p) · GW(p)

There is no law of nature that requires consequences to be commensurate with their causes. One can build a doom machine that is activated by pressing a single button. A mouse exerts gravitational attraction on all of the galaxies in the future light cone.

comment by Manfred · 2010-11-04T23:57:12.578Z · LW(p) · GW(p)

Counterexample-thinking-of time.

What has capability that's super-logarithmic?

  • Money. Interest and return on investment are linear at my current point in money-space.
  • Vacuum emptiness. Each tiny improvement in the vacuum we could make brought a large increase in usefulness. Now we've hit diminishing returns, but there was definitely a historical foom in the usefulness of vacuum.
  • Physical understanding. One thing just leads to another thing, which leads to another thing... Multiple definitions of capability here, but what I'm thinking of is the fact that there are so many important phenomena that require a quantum-mechanical explanation (all of chemistry), so there are steady returns at least up to standard QM.

  • Oh, of course. Lifespan.

Replies from: David_Gerard
comment by David_Gerard · 2010-11-05T12:30:26.049Z · LW(p) · GW(p)

Most growth curves are sigmoid. They start off looking like a FOOM and finish with diminishing returns.

Replies from: Manfred, rabidchicken
comment by Manfred · 2010-11-05T20:01:50.052Z · LW(p) · GW(p)

Most growth curves of things that grow because they are self-replicating are sigmoid.

"Capability functions" get to do whatever the heck they want, limited only by the whims of the humans (mostly me) I'm basing my estimates on.

comment by rabidchicken · 2010-11-05T16:47:19.063Z · LW(p) · GW(p)

If it is a finite universe, there will never be a "foom" (I love our technical lingo).

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-10T04:57:56.691Z · LW(p) · GW(p)

If it is a finite universe, there will never be a "foom"

What does the size of the universe have to do with this?

Replies from: rabidchicken
comment by rabidchicken · 2010-11-12T19:07:34.663Z · LW(p) · GW(p)

David's post implied that we should only consider something to be a FOOM if it follows exponential growth and never sees diminishing returns. In that case, we cannot have a true foom if energy and matter are finite. No matter how intelligent a computer gets, it will eventually slow down and stop increasing capacity, because energy and matter are both limiting factors. I don't recall seeing the definition of a foom anywhere on this site, but it seems there is some inconsistency in how people use the word.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-12T20:29:32.612Z · LW(p) · GW(p)

Hmm, you must be reading David's remarks differently. David's observation about sigmoids seemed to be more of an observation that in practice growth curves do eventually slow down and that they generally slow down well before the most optimistic and naive estimates would say so.

comment by Jonathan_Graehl · 2010-11-04T21:10:04.412Z · LW(p) · GW(p)

Looking away from programming to the task of writing an essay or a short story, a textbook or a novel, the rule holds true: each significant increase in capability requires a doubling, not a mere linear addition.

If this is such a general law, should it not apply outside human endeavor?

You're reaching. There is no such general law. There is just the observation that whatever is at the top of the competition in any area is probably subject to some diminishing returns around that local optimum. This is an interesting generalization and could provide insight fuel. But the diminishing returns are in no way magically logarithmic total benefit per effort, even if in a few particular cases they are.

The only way to create the conditions for any sort of foom would be to shun a key area completely for a long time, so that ultimately it could be rapidly plugged into a system that is very highly developed in other ways.

every even slightly promising path has had people working on it

This is a nice argument. You still can't exclude the possibility of potent ingredients or approaches toward AI that we haven't yet conceived of, but I also generally believe as you do.

Replies from: rwallace
comment by rwallace · 2010-11-04T22:31:52.330Z · LW(p) · GW(p)

Certainly if someone can come up with a potent approach towards AI that we haven't yet conceived of, I will be very interested in looking at it!

comment by Eneasz · 2010-11-05T21:36:42.876Z · LW(p) · GW(p)

in a nutshell, most conscious observers should expect to live in a universe where it happens exactly once - but that would require a digression into philosophy and anthropic reasoning, so it really belongs in another post; let me know if there's interest, and I'll have a go at writing that post.

This, to me, could be a much more compelling argument, if presented well. So there's definitely a lot of interest from me.

comment by timtyler · 2010-11-05T09:40:57.738Z · LW(p) · GW(p)

This one is a little bit like my "The Intelligence Explosion Is Happening Now" essay.

Surely my essay's perspective is a more realistic one, though.

Replies from: Jonii, rwallace, Thomas
comment by Jonii · 2010-11-05T20:24:50.374Z · LW(p) · GW(p)

Your essay was the first thing that came to my mind after reading this post. I think the core argument is the same, and valid. It does seem to me that people vastly underestimate what computer-supported humans will be capable of doing when AGI becomes more feasible, and thus overestimate how superior AGI would actually be, even after continuously updating its own software and hardware.

Replies from: timtyler
comment by timtyler · 2010-11-05T20:58:09.360Z · LW(p) · GW(p)

That sounds like a bit of a different subject - one that I didn't really discuss at all.

Human Go skill goes from 30 kyu to 1 kyu, and then from 1 dan up to 9 dan. God is supposed to be 11 dan. Not that much smarter than the smartest human. How common is that sort of thing?

Replies from: Jonii
comment by Jonii · 2010-11-05T21:18:50.451Z · LW(p) · GW(p)

Dunno how to answer, so I'll just stay quiet. I communicated badly there, and I don't see a fast way out of here.

Replies from: timtyler
comment by timtyler · 2010-11-05T23:11:09.606Z · LW(p) · GW(p)

Your comment seems absolutely fine to me. Thanks for thinking of my essay - and thanks for the feedback!

comment by rwallace · 2010-11-05T18:11:47.543Z · LW(p) · GW(p)

I see your excellent essay as being a complement to my post, and I should really have remembered to refer to it -- upvoted, thanks for the reminder!

comment by Thomas · 2010-11-05T10:58:58.986Z · LW(p) · GW(p)

You are correct there, so I have no choice but to upvote your post.

comment by orthonormal · 2010-11-07T19:00:56.046Z · LW(p) · GW(p)

Essentially, this is an argument that a FOOM would be a black swan event in the history of optimization power. That provides some evidence against overconfident Singularity forecasts, but doesn't give us enough reason to dismiss FOOM as an existential risk.

comment by timtyler · 2010-11-06T07:02:57.012Z · LW(p) · GW(p)

The ELO rating scheme is calculated on a logistic curve - and so includes an exponent - see details here. It gets harder to climb up the ratings the higher you get.

It's the same with traditional Go kyu/dan ratings - 9 kyu to 8 kyu is easy, 8 dan to 9 dan is very difficult.
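
For reference, a minimal Python sketch of the standard ELO model behind this point: a fixed rating gap fixes the win probability, and the win odds grow as 10^(gap/400), which is the "hidden" exponent.

```python
# Expected score for a player rated `gap` points above an opponent,
# under the standard logistic ELO model.
def elo_expected(gap: float) -> float:
    return 1.0 / (1.0 + 10 ** (-gap / 400.0))

for gap in (100, 200, 400, 800):
    print(gap, round(elo_expected(gap), 3))
# 100 -> 0.64, 200 -> 0.76, 400 -> 0.909, 800 -> 0.99: equal rating steps
# correspond to multiplicative jumps of 10**(gap/400) in win odds.
```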

Replies from: Perplexed, wedrifid
comment by Perplexed · 2010-11-06T14:39:27.264Z · LW(p) · GW(p)

Actually, the argument could be turned around. A 5 kyu player can give a 9 kyu player a 4 stone handicap and still have a good chance of winning. A 9 dan, offering a 4 stone handicap to a 5 dan, will be crushed.

By this metric, the distance between levels becomes smaller at the higher levels of skill.

It is unclear whether odds of winning, log odds of winning, number of handicap stones required to equalize odds of winning, or number of komi points required to equalize odds of winning ought to be the metric of relative skill.

Replies from: timtyler
comment by timtyler · 2011-06-06T19:38:39.241Z · LW(p) · GW(p)

By this metric, the distance between levels becomes smaller at the higher levels of skill.

Probably not by very much. One of the main motivations behind the grading system is to allow people of different grades to easily calculate the handicap needed to produce a fair game - e.g. see here:

Skill in the traditional board game Go is measured by a number of different national, regional and online ranking and rating systems. Traditionally, go rankings have been measured using a system of dan and kyu ranks. Especially in amateur play, these ranks facilitate the handicapping system, with a difference of one rank roughly corresponding to one free move at the beginning of the game.

You may be right that the system is flawed - but I don't think it is hugely flawed.

Replies from: skepsci
comment by skepsci · 2012-02-29T07:19:41.028Z · LW(p) · GW(p)

The difference is between amateur and professional ratings. Amateur dan ratings, just like kyu ratings, are designed so that a difference of n ranks corresponds to an n-stone handicap, but pro dan ratings are more bunched together.

See Wikipedia:Go pro.

comment by wedrifid · 2010-11-06T07:06:42.095Z · LW(p) · GW(p)

Does this suggest anything except that the scale is mostly useless at the top end?

Replies from: timtyler
comment by timtyler · 2010-11-06T07:10:59.011Z · LW(p) · GW(p)

The idea in the post was:

Computer Go is now on the same curve of capability as computer chess: whether measured on the ELO or the kyu/dan scale, each doubling of power gives a roughly constant rating improvement.

My observation is that ELO ratings are calculated on a logistic curve - and so contain a "hidden" exponent - so the "constant rating improvement" should be taken with a pinch of salt.

comment by [deleted] · 2012-02-28T20:34:39.205Z · LW(p) · GW(p)

This is a very interesting proposal, but as a programmer and with my knowledge of statistical analysis I have to disagree:

The increase in computing power that you regrettably cannot observe at user level is a fairly minor quirk of the development; pray tell, do the user interfaces of your Amiga system's OS and your Dell system's OS look alike? The reason modern computers don't feel faster is that the programs we run on them are wasteful and gimmicky. However, in terms of raw mathematics, we have 3D games with millions and millions of polygons, and we have multi-megapixel screens.

Statistical analysis is a breeze: exponential regression on 5,000,000 data points is a blink of an eye; you can actually hold it all in memory at once. Raw numerical analysis can be done in parallel using general-purpose GPU programming, etc. Your computer can solve your algebra problems faster than you can type them in.

So I fail to see why computers evidently being myriad times faster now than 30 years ago is an argument against an intelligence explosion.
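
To illustrate the scale claim, a minimal sketch with synthetic data (the constants are made up for the example):

```python
import numpy as np

# Exponential regression on five million points: fit log(y) = log(a) + b*x.
n = 5_000_000
x = np.linspace(0.0, 10.0, n)
y = 3.0 * np.exp(0.7 * x) * np.random.lognormal(0.0, 0.05, n)  # synthetic data

b, log_a = np.polyfit(x, np.log(y), 1)  # ordinary least squares on log(y)
print(np.exp(log_a), b)  # recovers roughly 3.0 and 0.7, in well under a second
```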

comment by wedrifid · 2010-11-04T21:14:37.958Z · LW(p) · GW(p)

This post is missing the part where the observations made support the conclusion to any significant degree.

Computer chips being used to assist practically identical human brains in tweaking computer chips bears, at the very best, a superficial similarity to a potential foom.

Replies from: rwallace
comment by rwallace · 2010-11-04T22:17:31.630Z · LW(p) · GW(p)

Current computer-aided design is already far from mere tweaking, and it gets further with each generation.

Replies from: wedrifid
comment by wedrifid · 2010-11-04T22:23:32.952Z · LW(p) · GW(p)

It would become a significantly relevant 'fooming' anecdote when the new-generation chips were themselves inventing new computer-aided design software and techniques.

Replies from: rwallace
comment by rwallace · 2010-11-04T22:40:01.967Z · LW(p) · GW(p)

When that happens, I'll be pointing to that as an example of the curve of capability, and FAI believers will be saying it will become significantly relevant when the chips are themselves inventing new ways to invent new computer aided design techniques. Etc.

Replies from: wedrifid
comment by wedrifid · 2010-11-04T22:46:39.805Z · LW(p) · GW(p)

When that happens, I'll be pointing to that as an example of the curve of capability, and FAI believers will be saying it will become significantly relevant when the chips are themselves inventing new ways to invent new computer aided design techniques. Etc.

No, shortly after that happens all the FAI believers will be dead, with the rest of humanity. ;)

We'll have about enough time to be the "O(h shit!)" in FOOM.

Replies from: rwallace
comment by rwallace · 2010-11-04T23:43:56.716Z · LW(p) · GW(p)

So is there any way at all to falsify your theory?

Replies from: wedrifid
comment by wedrifid · 2010-11-05T00:15:09.330Z · LW(p) · GW(p)

Are you serious? You presented your own hypothesis for the outcome of the experiment before I gave mine! They are both obviously falsifiable.

I think, in no uncertain terms, that this rhetorical question was an extremely poor one.

Replies from: rwallace
comment by rwallace · 2010-11-05T00:34:15.573Z · LW(p) · GW(p)

Clearly we aren't going to agree on whether my question was a good one or a poor one, so agreeing to differ on that, what exactly would falsify your theory?

Replies from: wedrifid
comment by wedrifid · 2010-11-05T00:52:17.376Z · LW(p) · GW(p)

The grandparent answered that question quite clearly.

You make a prediction here of what would happen if this happened. I reply that that would actually happen instead. You falsify each of these theories by making this happen and observing the results.

I note that you are trying to play the 'unfalsifiable' card in two different places here, and I am treating them differently because you question different predictions. I note this to avoid confusion in case you meant them to be a single challenge to the overall position. So see the other branch if you mean only to say "FOOM is unfalsifiable".

Replies from: rwallace
comment by rwallace · 2010-11-05T03:50:11.561Z · LW(p) · GW(p)

Ah, then I'm asking whether "in situation X, the world will end" is your theory's only prediction - since that's the same question I've ended up asking in the other branch, let's pursue it in the other branch.

comment by A1987dM (army1987) · 2012-02-28T19:35:04.717Z · LW(p) · GW(p)

The answer is that each doubling of computing power adds roughly the same number of ELO rating points.

Well, ELO rating is a logarithmic scale, after all.

comment by Vladimir_Nesov · 2010-11-05T10:42:39.119Z · LW(p) · GW(p)

A very familiar class of discontinuities in engineering refers to functionality being successfully implemented for the first time. This is a moment where a design finally comes together, all its components adjusted to fit with each other, the show-stopping bugs removed from the system. And then it just works, where it didn't before.

Replies from: rwallace
comment by rwallace · 2010-11-05T17:58:57.405Z · LW(p) · GW(p)

And yet, counterintuitively, these discontinuities don't lead to discontinuities in capability, because it is invariably the case that the first prototype is not dramatically more useful than what came before -- often less so -- and requires many rounds of debugging and improvement before the new technology can fulfill its potential.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-05T18:06:37.727Z · LW(p) · GW(p)

Arguably, you can call the whole process of development "planning", and look only at the final product. So, your argument seems to depend on an arbitrarily chosen line between cognitive work and effects that are to be judged for "discontinuity".

comment by marchdown · 2010-11-08T19:55:14.170Z · LW(p) · GW(p)

Following the logic of the article: our current mind-designs, both human minds and existing AIs, are lopsided in that they can't effectively modify themselves. There is a theory of mind, but as far as maps go, this one is plenty far from the territory. Once this obstacle is dealt with, there might be another FOOM.

comment by XiXiDu · 2010-11-06T13:21:13.855Z · LW(p) · GW(p)

I have some questions regarding self-improvement:

  • Is it possible for an intelligence to simulate itself?
  • Is self-modification practicable without running simulations to prove goal stability?
  • Does creating an improved second version of oneself demand proof of friendliness?

I'm asking because I sense that those questions might be important in regard to the argument in the original post that each significant increase in capability requires a doubling of resources.

Replies from: Perplexed
comment by Perplexed · 2010-11-06T14:14:14.209Z · LW(p) · GW(p)

I have some questions regarding self-improvement:

  • Is it possible for an intelligence to simulate itself?

The naive and direct answer is "Yes, but only at reduced speed and perhaps with the use of external simulated memory". A smarter answer is "Of course it can, if it can afford to pay for the necessary computing resources."

  • Is self-modification practicable without running simulations to prove goal stability?

I would say that self-modification is incredibly stupid without proof of much more fundamental things than goal-stability. And that you really can't prove anything by simulation, since you cannot exercise a simulation with all possible test data, and you want a proof that good things happen for all data. So you want to prove the correctness of the self-modification symbolically.

  • Does creating an improved second version of oneself demand proof of friendliness?

That seems to be a wise demand to make of any powerful AI.

I'm asking because I sense that those questions might be important in regard to the argument in the original post that each significant increase in capability requires a doubling of resources.

Could you explain why you sense that?

I will comment that, in the course of this thread, I have become more and more puzzled at the usual assumption here that we are concerned with one big AI with unlimited power which self-modifies, rather than being concerned with an entire ecosystem of AIs, no single one of which is overwhelmingly powerful, which don't self-modify but rather build machines of the next generation, and which are entirely capable of proving the friendliness of next-generation machines before those new machines are allowed to have any significant power.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-06T15:06:09.731Z · LW(p) · GW(p)

I was curious if there might exist something analogous to the halting problem regarding provability of goal-stability and general friendliness.

Could you explain why you sense that?

If every AGI has to prove the friendliness of its successor, then this might demand considerable effort and therefore resources, as it would demand great care and either extensive simulations or, as your answer suggests, proving the correctness of the self-modification symbolically. In both cases the AGI would have to acquire new resources slowly, as it couldn't just self-improve to come up with faster and more efficient solutions. In other words, self-improvement would demand resources, therefore the AGI could not profit from its ability to self-improve regarding the necessary acquisition of resources to be able to self-improve in the first place.

Replies from: Perplexed
comment by Perplexed · 2010-11-06T16:22:37.184Z · LW(p) · GW(p)

I was curious if there might exist something analogous to the halting problem regarding provability of goal-stability and general friendliness.

There is a lot of confusion out there as to whether the logic of the halting problem means that the idea of proving program correctness is somehow seriously limited. It does not mean that. All it means is that there can be programs so badly written that it is impossible to say what they do.

However, if programs are written for some purpose, and their creator can say why he expects the programs to do what he meant them to do, then that understanding of the programs can be turned into a checkable proof of program correctness.

It may be the case that it is impossible to decide whether an arbitrary program even halts. But we are not interested in arbitrary programs. We are interested in well-written programs accompanied by a proof of their correctness. Checking such a proof is not only a feasible task, it is an almost trivial task.

To say this again, it may well be the case that there are programs for which it is impossible to find a proof that they work. But no one sane would run such a program with any particular expectations regarding its behavior. However, if a program was constructed for some purpose, and the constructor can give reasons for believing that the program does what it is supposed to do, then a machine can check whether those reasons are sufficient.

It is a reasonable conjecture that for every correct program there exists a provably correct program which does the same thing and requires only marginally more resources to do it.
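
As a minimal illustration of the kind of artifact being described -- ordinary code shipped together with a machine-checkable proof -- here is a sketch in Lean 4 (the names are invented for the example):

```lean
-- Ordinary code: list concatenation, written the usual way.
def myAppend : List α → List α → List α
  | [],      ys => ys
  | x :: xs, ys => x :: myAppend xs ys

-- A statement about *all* inputs, which no finite amount of testing could
-- establish. The author supplies the proof (with tactic help); checking it
-- is the easy, mechanical part, done by Lean's small kernel.
theorem myAppend_assoc (xs ys zs : List α) :
    myAppend (myAppend xs ys) zs = myAppend xs (myAppend ys zs) := by
  induction xs with
  | nil => simp [myAppend]
  | cons x xs ih => simp [myAppend, ih]
```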

In other words, self-improvement would demand resources, therefore the AGI could not profit from its ability to self-improve regarding the necessary acquisition of resources to be able to self-improve in the first place.

Huh?

If that is meant to be an argument against a singleton AI being able to FOOM without using external resources, then you may be right. But why would it not have access to external resources? And why limit consideration to singleton AIs?

Replies from: wnoise, JGWeissman, XiXiDu
comment by wnoise · 2010-11-06T22:14:07.316Z · LW(p) · GW(p)

All it means is that there can be programs so badly written that it is impossible to say what they do.

It's true that the result is "there exist programs such that", and it says nothing about the relative sizes of those that can be proved to terminate vs. those that can't, within the subset we care about and are likely to write.

However, if programs are written for some purpose,

Sometimes that purpose is to test mathematical conjectures. In that case, the straightforward and clear way to write the program actually is the one used, and it often apparently does fall into this subclass.

(And then there are all the programs that aren't designed to normally terminate, and then we should be thinking about codata and coinduction.)

Replies from: None, Perplexed
comment by [deleted] · 2010-11-06T23:50:59.282Z · LW(p) · GW(p)

Sometimes that purpose is to test mathematical conjectures.

Upvoted because I think this best captures what the halting problem actually means. The programs for which we really want to know if they halt aren't recursive functions that might be missing a base case or something. Rather, they're like the program that enumerates the non-trivial zeroes of the Riemann zeta function until it finds one with real part not equal to 1/2.

If we could test whether such programs halted, we would be able to solve any mathematical existence problem that we could appropriately describe to a computer.
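
A concrete sketch of that kind of program in Python, swapping in the Goldbach conjecture for the Riemann hypothesis because its counterexample check is elementary to code: the program halts if and only if the conjecture is false, so a halting decider would settle the question without ever running the search.

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def first_goldbach_counterexample() -> int:
    """Halts only if some even number >= 4 is not the sum of two primes."""
    n = 4
    while True:
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            return n
        n += 2
```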

Replies from: Perplexed
comment by Perplexed · 2010-11-07T01:08:03.144Z · LW(p) · GW(p)

The programs for which we really want to know if they halt aren't recursive functions that might be missing a base case or something.

What you mean 'we,' white man?

People researching program verification for purposes of improving software engineering really are interested in the kinds of mistakes programmers make, like "recursive functions that might be missing a base case or something". Or like unbounded iteration for which we can prove termination, but really ought to double-check that proof.

Unbounded iteration for which we cannot (currently) prove termination may be of interest to mathematicians, but even a mathematician isn't stupid enough to attempt to resolve the Zeta function conjecture by mapping the problem to a question about halting computer programs. That is a step in the wrong direction.

And if we build an AGI that attempts to resolve the conjecture by actually enumerating the roots in order, then I think that we have an AI that is either no longer rational or has too much time on its hands.

Replies from: None
comment by [deleted] · 2010-11-07T01:40:15.704Z · LW(p) · GW(p)

I suppose I wasn't precise enough. Actual researchers in that area are, rightly, interested in more-or-less reasonable halting questions. But these don't contradict our intuition about what halting problems are. In fact, I would guess that a large class of reasonable programming mistakes can be checked algorithmically.

But as for the consequences of there being, hypothetically, a solution to the general halting problem... the really significant result would be the existence of a general "problem-solving algorithm". Sure, Turing's undecidability theorem proves that the halting problem is undecidable... but this is my best attempt at capturing the intuition of why it's not even a little bit close to being decidable.

Of course, even in a world where we had a halting solver, we probably wouldn't want to use it to prove the Riemann Hypothesis. But your comment actually reminds me of a clever little problem I once saw. It went as follows:

If someone ever proves that P=NP, it's possible the proof will be non-constructive: it won't actually show how to solve an NP-complete problem in polynomial time. However, this doesn't actually matter. Write an algorithm for solving SAT (or any other NP-complete problem) with the following property: if P=NP, then your algorithm should run in polynomial time.

(The connection to your comment is that even if P=NP, we wouldn't actually use the solution to the above problem in practical applications. We'd try our best to find something better.)
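
The easy half of that puzzle is the certificate check, which really is polynomial-time; a minimal sketch is below. (The standard answer to the exercise then dovetails every possible program with growing time budgets and uses a check like this to recognize a genuine solution; that outer loop is omitted here.)

```python
from typing import Dict, List

Clause = List[int]  # e.g. [1, -3] encodes (x1 OR NOT x3)

def satisfies(clauses: List[Clause], assignment: Dict[int, bool]) -> bool:
    """Polynomial-time check that an assignment satisfies a CNF formula."""
    return all(
        any(assignment.get(abs(lit), False) == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR x2) AND (NOT x1 OR x2)
print(satisfies([[1, 2], [-1, 2]], {1: True, 2: True}))   # True
print(satisfies([[1, 2], [-1, 2]], {1: True, 2: False}))  # False
```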

comment by Perplexed · 2010-11-06T23:02:39.891Z · LW(p) · GW(p)

And then there are all the programs that aren't designed to normally terminate, and then we should be thinking about codata and coinduction.

Which raises a question I have never considered ...

Is there something similar to the halting-problem theorem for this kind of computation?

That is, can we classify programs as either productive or non-productive, and then prove the non-existence of a program that can perform that classification?

Replies from: wnoise
comment by wnoise · 2010-11-06T23:42:29.402Z · LW(p) · GW(p)

That's a great question. I haven't found anything on a brief search, but it seems like we can fairly readily embed a normal program inside a coinductive-style one and have it be productive after the normal program terminates.

comment by JGWeissman · 2010-11-06T16:54:47.665Z · LW(p) · GW(p)

All it means is that there can be programs so badly written that it is impossible to say what they do.

I would like to see all introductions to the Halting Problem explain this point. Unfortunately, it seems that "computer scientists" have only limited interest in real physical computers and how people program them.

Replies from: fiddlemath
comment by fiddlemath · 2010-11-06T22:27:43.291Z · LW(p) · GW(p)

I'm working on my Ph.D. in program verification. Every problem we're trying to solve is as hard as the halting problem, and so we make the assumption, essentially, that we're operating over real programs: programs that humans are likely to write, and actually want to run. It's the only way we can get any purchase on the problem.

Trouble is, the field doesn't have any recognizable standard for what makes a program "human-writable", so we don't talk much about that assumption. We should really get a formal model, so we have some basis for expecting that a particular formal method will work well before we implement it... but that would be harder to publish, so no one in academia is likely to do it.

Replies from: andreas, gwern
comment by andreas · 2010-11-07T00:07:22.979Z · LW(p) · GW(p)

Similarly, inference (conditioning) is incomputable in general, even if your prior is computable. However, if you assume that observations are corrupted by independent, absolutely continuous noise, conditioning becomes computable.

comment by gwern · 2011-01-03T18:48:39.419Z · LW(p) · GW(p)

Offhand, I don't see any good way of specifying it in general, either (even the weirdest program might be written as part of some sort of arcane security exploit). Why don't you guys limit yourselves to some empirical subset of programs that humans have written, like 'Debian 5.0'?

comment by XiXiDu · 2010-11-06T19:18:08.253Z · LW(p) · GW(p)

But we are not interested in arbitrary programs. We are interested in well-written programs accompanied by a proof of their correctness. Checking such a proof is not only a feasible task, it is an almost trivial task.

Interesting! I have read that many mathematical proofs today require computer analysis. If it is such a trivial task to check the correctness of code that gives rise to the level of intelligence above ours, then my questions have not been relevant regarding the argument of the original post. I mistakenly expected that no intelligence is able to easily verify conclusively the workings of another intelligence that is a level above its own without painstakingly acquiring resources, devising the necessary tools and building the required infrastructure. I expected that what applied for humans creating superhuman AGI would also hold for AGI creating its successor. Humans first had to invent science, bumble through the industrial revolution and develop computers to be able to prove modern mathematical problems. So I thought a superhuman AGI would have to invent meta-science, advanced nanotechnology and figure out how to create an intelligence that could solve problems it couldn't solve itself.

Replies from: Perplexed
comment by Perplexed · 2010-11-06T19:34:20.220Z · LW(p) · GW(p)

If it is such a trivial task to check the correctness of code that gives rise to the level of intelligence above ours ...

Careful! I said checking the validity of a proof of code correctness, not checking the correctness of code. The two are very different in theory, but not (I conjecture) very different in practice, because code is almost never correct unless the author of the code has at least a sketch proof that it does what he intended.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-06T19:51:17.882Z · LW(p) · GW(p)

I see; it wasn't my intention to misquote you, I was simply not aware of the significant distinction.

By the way, does the checking of the validity of a proof of code correctness have to be validated or proven as well, or is there some agreed limit to the depth of verification sufficient to be sure that nothing unexpected will happen when you run the code?

Replies from: fiddlemath, Perplexed
comment by fiddlemath · 2010-11-06T22:12:45.944Z · LW(p) · GW(p)

The usual approach, taken in tools like Coq and HOL, is to reduce the proof checker to a small component of the overall proof assistant, with the hope that these small pieces can be reasonably understood by humans, and proved correct without machine assistance. This keeps the human correctness-checking burden low, while also keeping the piece that must be verified in a domain that's reasonable for humans to work with, as opposed to, say, writing meta-meta-meta proofs.

The consequence of this approach is that, since the underlying logic is very simple, the resulting proof language is very low-level. Human users give higher-level specifications, and the bulk of the proof assistant compiles those into low-level proofs.

comment by Perplexed · 2010-11-06T22:45:44.812Z · LW(p) · GW(p)

There are a couple of branches worth mentioning in the "meta-proof". One is the proof of correctness of the proof-checking software. I believe that fiddlemath is addressing this problem. The "specification" of the proof-checking software is essentially a presentation of an "inference system". The second is the proof of the soundness of the inference system with respect to the programming language and specification language. In principle, this is a task which is done once - when the languages are defined. The difficulty of this task is (AFAIK) roughly like that of a proof of cut-elimination - that is O(n^2) where n is the number of productions in the language grammars.

comment by shokwave · 2010-11-05T04:45:18.389Z · LW(p) · GW(p)

there's a significant difference between the capability of a program you can write in one year versus two years

The program that the AI is writing is itself, so the second half of those two years takes less than one year - as determined by the factor of "significant difference". And if there's a significant difference from 1 to 2, there ought to be a significant difference from 2 to 4 as well, no? But the time taken to get from 2 to 4 is not two years; it is 2 years divided by the square of whatever integer you care to represent "significant difference" with. And then the time from 4 to 8 is (2 years / sigdif^3). You see where this goes?

EDIT: I see that you addressed recursion in posts further down. Compound interest does not follow your curve of capability; compound interest is an example of strong recursion: money begets money. Weak recursion: money buys hardware to run software to aid humans to design new hardware to earn money to buy hardware. Strong recursion: software designs new software to design newer software; money begets money begets more money. Think of the foom as compound interest on intelligence.
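To put rough numbers on that (purely illustrative; the factor s and the two-year starting point are assumptions, not anything from the post): if each successive doubling takes a constant factor s less time than the one before, the doubling times form a geometric series with a finite sum, i.e. unboundedly many doublings in bounded time.

```python
# Sketch of the series: first doubling takes 2 years, each further doubling
# is faster by a constant factor s. The total time converges to 2s/(s-1).
def total_time(first_doubling=2.0, s=3.0, doublings=50):
    times = [first_doubling / s**k for k in range(doublings)]
    return sum(times)

for s in (1.5, 2.0, 3.0):
    print(f"s = {s}: ~{total_time(s=s):.2f} years for 50 doublings "
          f"(limit {2.0 * s / (s - 1):.2f})")
```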

Replies from: rwallace, PeterS
comment by rwallace · 2010-11-05T20:12:42.973Z · LW(p) · GW(p)

Compound interest is a fine analogy: it delivers a smooth exponential growth curve, and we have seen technological progress do likewise.

What I am arguing against is the "AI foom" claims that you can get faster growth than this, e.g. each successive doubling taking half the time. The reason this doesn't work is that each successive doubling is exponentially harder.

What if the output feeds back into the input, so the system as a whole is developing itself? Then you get the curve of capability: a straight line on a log-log graph, which again manifests itself as smooth exponential growth.
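As a toy model of that cancellation (my own illustrative numbers, not anything from the post): suppose doubling number k costs work proportional to 2^k, but the system does work at a rate proportional to its current capability. The two exponentials cancel, so each doubling takes the same wall-clock time: smooth exponential growth, not a shrinking-doubling-time foom.

```python
# Illustrative toy model: capability doubles each step; doubling k costs
# work proportional to 2**k ("exponentially harder"), and the system works
# at a rate proportional to its current capability.
def doubling_times(steps=10, self_improving=True):
    times = []
    capability = 1.0
    for k in range(steps):
        work_needed = 2.0 ** k                         # exponentially harder each time
        work_rate = capability if self_improving else 1.0
        times.append(work_needed / work_rate)
        capability *= 2.0
    return times

print(doubling_times(self_improving=True))   # constant times: smooth exponential growth
print(doubling_times(self_improving=False))  # exploding times: progress stalls without feedback
```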

comment by PeterS · 2010-11-05T10:39:43.923Z · LW(p) · GW(p)

Strong recursion: Software designs new software to design newer software; money begets money begets more money. Think of the foom as compound interest on intelligence.

Suppose A designs B, which then designs C. Why does it follow that C is more capable than B (logically, disregarding any hardware advances made between B and C)? Alternatively, why couldn't A have designed C initially?

Replies from: khafra, shokwave
comment by khafra · 2010-11-05T13:53:13.023Z · LW(p) · GW(p)

It does not necessarily follow; but the FOOM contention is that once A can design a B more capable than itself, B's increased capability will include the capability to design C, which would have been impossible for A. C can then design D, which would have been impossible for B and even more impossible for A.

Currently, each round of technology aids in developing the next, but the feedback isn't quite this strong.

comment by shokwave · 2010-11-05T18:39:45.147Z · LW(p) · GW(p)

As per khafra's post, though I would add that it looks likely: after all, the fact that we humans are capable of any kind of AI at all is proof that designing intelligent agents is within the reach of intelligent agents. It would be surprising if there were some hard cap on how intelligent an agent you can make - as if it topped out at exactly your level or below.

comment by Thomas · 2010-11-04T21:01:32.399Z · LW(p) · GW(p)

Could the current computing power we have be obtained from enough Commodore 64 machines?

Maybe, but we would need a power supply many orders of magnitude bigger for that. And that is just the beginning: cell phones based on those processors would weigh a ton each.

In other words, it is an impossibility for a whole bunch of reasons.

We already had a foom stretched over the last 30 years. Not the first and not the last one, if we are going to proceed as planned.

comment by XiXiDu · 2010-11-06T19:41:29.465Z · LW(p) · GW(p)

What is the Relationship between Language, Analogy, and Cognition?

What makes us so smart as a species, and what makes children such rapid learners? We argue that the answer to both questions lies in a mutual bootstrapping system comprised of (1) our exceptional capacity for relational cognition and (2) symbolic systems that augment this capacity. The ability to carry out structure-mapping processes of alignment and inference is inherent in human cognition. It is arguably the key inherent difference between humans and other great apes. But an equally important difference is that humans possess a symbolic language. The acquisition of language influences cognitive development in many ways. We focus here on the role of language in a mutually facilitating partnership with relational representation and reasoning. We suggest a positive feedback relation in which structural alignment processes support the acquisition of language, and in turn, language — especially relational language — supports structural alignment and reasoning. We review three kinds of evidence: (a) evidence that analogical processes support children's learning in a variety of domains; (b) more specifically, evidence that analogical processing fosters the acquisition of language, especially relational language; and (c) in the other direction, evidence that acquiring language fosters children's ability to process analogies, focusing on spatial language and spatial analogies. We conclude with an analysis of the acquisition of cardinality — which we offer as a canonical case of how the combination of language and analogical processing fosters cognitive development.

comment by NancyLebovitz · 2010-11-04T22:41:07.060Z · LW(p) · GW(p)

Side issue: should the military effectiveness of bombs be measured by the death toll?

I tentatively suggest that atomic and nuclear bombs are of a different kind than chemical explosives, as shown by the former changing the world politically.

I'm not sure exactly why-- it may have been the shock of novelty.

Atomic and nuclear bombs combine explosion, fire, and poison, but I see no reason to think there would have been the same sort of widespread revulsion against chemical explosives, and the world outlawed poison gas in WWI and just kept on going with chemical explosives.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2010-11-04T23:13:41.378Z · LW(p) · GW(p)

I tentatively suggest that atomic and nuclear bombs are of a different kind than chemical explosives, as shown by the former changing the world politically.

I'm not sure exactly why-- it may have been the shock of novelty.

Chemical explosives changed the world politically, just longer ago. Particularly when they put the chemicals in a confined area and put lead pellets on top...

comment by wedrifid · 2010-11-04T23:15:13.829Z · LW(p) · GW(p)

and the world outlawed poison gas in WWI and just kept on going with chemical explosives.

I've never really got that. If you are going to kill people, kill them well. War isn't nice and people get injured horribly. That's kind of the point.

Replies from: CronoDAS
comment by CronoDAS · 2010-11-05T06:09:38.841Z · LW(p) · GW(p)

I think that when both sides use poison gas in warfare, the net effect is that everyone's soldiers end up having to fight while wearing rubber suits, which offer an effective defense against gas but are damn inconvenient to fight in. So it just ends up making things worse for everyone. Furthermore, being the first to use poison gas, before your enemy starts to defend against it and retaliate in kind, doesn't really provide that big of an advantage. In the end, I guess that the reason gas was successfully banned after WWI was that everyone involved agreed that it was more trouble than it was worth.

I suppose that, even in warfare, not everything is zero-sum.

Replies from: Jordan
comment by Jordan · 2010-11-05T07:59:45.286Z · LW(p) · GW(p)

Seems like a classic iterated prisoner's dilemma.
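A toy payoff matrix (numbers invented purely for illustration) for the choice CronoDAS describes makes the structure explicit: unilateral gas use gives only a small edge, while mutual use leaves both sides worse off than mutual restraint, even though "gas" dominates for each side in any single round.

```python
# Hypothetical payoffs for the gas / no-gas decision; any numbers with this
# ordering give the prisoner's dilemma structure.
payoffs = {  # (my_move, their_move) -> my payoff
    ('refrain', 'refrain'):  0,
    ('gas',     'refrain'):  1,   # small first-use advantage
    ('refrain', 'gas'):     -3,
    ('gas',     'gas'):     -2,   # both sides in rubber suits, both worse off
}
for mine in ('refrain', 'gas'):
    print(mine, [payoffs[(mine, theirs)] for theirs in ('refrain', 'gas')])
```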

comment by JGWeissman · 2010-11-04T21:12:14.945Z · LW(p) · GW(p)

The neglected missing piece is understanding of intelligence. (Not understanding how to solve a Rubik's cube, but understanding the generalized process that, upon seeing a Rubik's cube for the first time and hearing the goal of the puzzle, figures out how to solve it.)

Replies from: rwallace
comment by rwallace · 2010-11-04T22:20:15.458Z · LW(p) · GW(p)

The point you are missing is that understanding intelligence is not a binary thing. It is a task that will be subject to the curve of capability like any other complex task.

Replies from: JGWeissman
comment by JGWeissman · 2010-11-04T23:15:14.874Z · LW(p) · GW(p)

What is binary is whether or not you understand intelligence well enough to implement it on a computer that then will not need to rely on any much slower human brain for any aspect of cognition.

Replies from: rwallace
comment by rwallace · 2010-11-04T23:41:49.146Z · LW(p) · GW(p)

A useful analogy here is self-replicating factories. At one time they were proposed as a binary thing: just make a single self-replicating factory that does not need to rely on humans for any aspect of manufacturing, and you have an unlimited supply of manufactured goods thereafter.

It was discovered that in reality it's about as far from binary as you can get. While in principle such a thing must be possible, in practice it's so far off as to be entirely irrelevant to today's plans; what is real is a continuous curve of increasing automation. By the time our distant descendants are in a position to automate the last tiny fraction, it may not even matter anymore.

Regardless, it doesn't matter. Computers are bound by the curve of capability just as much as humans are. There is no evidence that they're going to possess any special sauce that will allow them past it, and plenty of evidence that they aren't.

My theory is falsifiable. Is yours? If so, what evidence would you agree would falsify it, if said evidence were obtained?

comment by CarlShulman · 2010-11-04T20:41:50.463Z · LW(p) · GW(p)

Do you have a citation for the value of the Go performance improvement per hardware doubling?

Replies from: gwern, rwallace
comment by gwern · 2012-02-28T18:45:39.384Z · LW(p) · GW(p)

There is now reportedly a Go program (available as a bot on KGS), which has hit 5-dan as of 2012.

Replies from: CarlShulman
comment by CarlShulman · 2012-02-29T02:33:38.156Z · LW(p) · GW(p)

This computer Go poll, conducted in 1997 to estimate arrival times for shodan and world champion level software play, is interesting: the programmers were a bit too optimistic, but the actual time required for shodan level was very close to the median estimate.

Replies from: gwern
comment by gwern · 2012-02-29T03:00:23.794Z · LW(p) · GW(p)

That doesn't bode too well for reaching world champion level - if I toss the list into tr -s '[:space:]' | cut -d ' ' -f 3 | sort -g | head -14, the median estimate is 2050! Personally, I expect it by 2030.
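For what it's worth, a rough Python equivalent of that pipeline (assuming, as the pipeline does, a whitespace-separated poll listing whose third column is the estimated year):

```python
# Rough equivalent of the shell one-liner above; the filename is hypothetical.
import statistics

def median_estimate(poll_text: str) -> float:
    years = [float(line.split()[2]) for line in poll_text.splitlines() if line.strip()]
    return statistics.median(years)

# median_estimate(open("go_poll_1997.txt").read())  # hypothetical data file
```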

comment by rwallace · 2010-11-04T22:08:21.164Z · LW(p) · GW(p)

Yes - look at the last table.

comment by timtyler · 2011-06-06T19:47:13.065Z · LW(p) · GW(p)

The solution to the paradox is that a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided. [...]

Can such a thing happen again? In particular, is it possible for AI to go foom the way humanity did?

If such lopsidedness were to repeat itself... well even then, the answer is probably no.

Chimpanzee brains were screwed - because they lacked symbolic processing - OK.

...but human brains are still screwed - since they have a crazy spaghetti-like architecture, terrible serial performance, unmaintainable source code and crappy memory facilities, and they age and die without the possibility of a backup being made.

So, human brains will go into the dustbin of history as well - and be replaced by superior successors, moving at the more rapid pace of technological evolution. Surely this one is not difficult to see coming.

comment by NancyLebovitz · 2010-11-05T17:42:58.924Z · LW(p) · GW(p)

Snagged from a thread that's gone under a fold:

The beginning of this thread was Eliezer making a comment to the effect that symbolic logic is something computers can do so it must not be what makes humans more special than chimps.

One thing people do that neither chimps nor computers have managed is invent symbolic logic.[1]

Maybe it's in the sequences somewhere, but what does it take to notice gaps in one's models and oddities that might be systematizable?

[1] If I'm going to avoid P=0, then I'll say it's slightly more likely that chimps have done significant intellectual invention than computers.

Replies from: Eliezer_Yudkowsky, Vladimir_Nesov, Vladimir_Nesov
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-11-05T18:05:46.443Z · LW(p) · GW(p)

The quote is wrong.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-11-08T13:09:25.662Z · LW(p) · GW(p)

My apologies-- I should have caught that-- the quote didn't seem to be an accurate match for what you said, but I was having too much fun bouncing off the misquote to track that aspect.

comment by Vladimir_Nesov · 2010-11-05T17:51:07.219Z · LW(p) · GW(p)

One thing people do that neither chimps nor computers have managed is invent symbolic logic.

Also lipstick. Don't forget lipstick.

(Your comment isn't very clear, so I'm not sure what you intended to say by the statement I cited.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-11-05T17:54:24.287Z · LW(p) · GW(p)

Thanks for posting the link.

My point was that some of the most interesting things people do aren't obviously algorithmic.

It's impressive that programs beat chess grandmasters. It would be more impressive (and stronger evidence that self-optimization is possible) if a computer could invent a popular game.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-05T18:02:10.128Z · LW(p) · GW(p)

My point was that some of the most interesting things people do aren't obviously algorithmic.

What is this statement intended as an argument for?

(What do you mean by "algorithmic"? It's a human category, just like "interesting". )

comment by Alexei · 2010-11-04T21:56:27.802Z · LW(p) · GW(p)

You make some interesting points, but completely fumble on your conclusion about AI FOOM. Please read the sequences. They are interesting and they will make you rethink your position.

Replies from: Will_Newsome, rwallace, wedrifid
comment by Will_Newsome · 2010-11-04T23:21:46.861Z · LW(p) · GW(p)

In general, telling people to "read the sequences" is moderately offensive, and should either be avoided or prefaced with both a surety that the intended audience has not in fact read the sequences and the care to explain which parts of the sequences might help and why.

Replies from: wedrifid
comment by wedrifid · 2010-11-05T01:25:56.019Z · LW(p) · GW(p)

(Although sometimes people can get away with the offense in the exhortation when the subject has sacrificed credibility in the immediate context beyond the audience's threshold for wanting to protect that person's dignity. Or if you have a lot of status yourself, but that tends only to drag you up to a 'neutral' reception.)

comment by rwallace · 2010-11-04T22:19:40.346Z · LW(p) · GW(p)

Not only have I read the sequences, I was a Singularitarian as well as an AGI researcher long before they were written. If you have any new arguments against my conclusion, I would be interested in hearing them.

comment by wedrifid · 2010-11-04T22:33:53.937Z · LW(p) · GW(p)

They are interesting and they will make you rethink your position.

Even the sequences only lead the horse to water. ;)

Also note that the sequences are (thankfully) for the most part about thinking in general, not FAI and fooming specifically. Even if Eliezer intends them to improve thinking in that area.

That said, there are one or two posts by Eliezer (and probably by others) that are more than sufficient to dissuade one from the notions in this post. Although I must admit that I think this post is one of them.