Comments

Comment by Phil_Goetz6 on The Evolutionary-Cognitive Boundary · 2009-02-13T23:44:16.000Z · LW · GW

"I think it really is important to use different words that draw a hard boundary between the evolutionary computation and the cognitive computation."

Does this boundary even exist? It's a distinction we can make, for purposes of discussion; but not a hard boundary we can draw. You can find examples that fall clearly into one category (reflex) or another (addition), but you can also find examples that don't. This is just the sort of thing I was talking about in my post on false false dichotomies. It's a dichotomy that we can sometimes use for discussion, but not a true in-the-world binary distinction.

Eliezer responds yes: "Anna, you're talking about a messiness of the human system, not a difficulty in drawing hard distinctions between human-style messiness and evolutionary-style messiness."

I can't figure out what that's supposed to mean. I think it means Eliezer didn't understand what she said. The "messiness" is that you can't draw that hard distinction.

The entire discussion is cast in terms that imply Eliezer thinks evolutionary psychology deals with issues of conscious vs. subconscious motivations. AFAIK it sidesteps the issue whenever possible. Psychologists don't want to ask whether behavior comes from conscious or subconscious motivations. They want to observe behavior, record it, and explain it. Not trying to slice it up into conscious vs. subconscious pieces is the good part of behaviorism.

Comment by Phil_Goetz6 on Prolegomena to a Theory of Fun · 2008-12-18T18:01:47.000Z · LW · GW

Michael Vassar: "Phil: Eliezer has repeatedly said that ems (formerly uploads) are people. Eliezer, can you please clarify this point in a simple direct comment aimed at Phil?"

Huh? No need. Why would you think I'm unaware of that?

I notice that several people replied to my question ("Why not colonize America?"), yet no one addressed it. I think they fail to see the strength of the analogy. Humans use many more resources than ems or AIs. If you take the resources from the humans and give them to the AI, you will at some point be able to support 100 times as many "equivalent", equally happy people. Make an argument for not doing that. And don't, as komponisto did, just say that it's the right thing to do.

Everybody says that not taking the land from the Native Americans would have been the right thing to do; but nobody wants to give it back.

An argument against universe-tiling would also be welcome.

Comment by Phil_Goetz6 on Prolegomena to a Theory of Fun · 2008-12-18T00:53:48.000Z · LW · GW

You're solving the wrong problem. Did you really just call a body of experimental knowledge a political inconvenience?
Oh, snap.

Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral ... to talk about human-level minds still running around the day after the Singularity.
We're offended by the inequity - why does that big hunk of meat get to use 2,000 W plus 2,000 square feet that it doesn't know what to do with, while the poor, hardworking, higher-social-value em gets 5 W and one square inch? And by the failure to maximize social utility.

Fun is a cognitive phenomenon. Whatever your theory of fun is, I predict that more fun will be better than less fun, and the moral thing to do seems to be to pack in as much fun as you can before the heat death of the universe. Following that line of thought could lead to universe-tiling.

Suppose you develop a theory of fun/good/morality. What are arguments for not tiling the universe in a way that maximizes it? Are there any such arguments that don't rely on either diversity as an inherent good, or on the possibility that your theory is wrong?

Your post seems to say that fun and morality are the same. But we use the term "moral" only in cases when the moral thing to do isn't fun. I think morality = fun only if it's a collective fun. If that collective fun is also summed over hypothetical agents you could create, then we come back to moral outrage at humans.

The problem brings to mind the colonization of America. Would it have been the moral thing to do to turn around and leave the Indians alone, instead of taking their land and using it to build an advancing civilization that can support a population of about 100 times as many people, who think they are living more pleasurable and interesting lives, and hardly ever cut out their neighbors' hearts on the tops of temples to the sun god? Intellectuals today unanimously say "yes". But I don't think they've allowed themselves to actually consider the question.

What is the moral argument for not colonizing America?

Comment by Phil_Goetz6 on Prolegomena to a Theory of Fun · 2008-12-18T00:15:27.000Z · LW · GW

I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.
And the twenty lines are from the "spam" sketch. :)

Comment by Phil_Goetz6 on Visualizing Eutopia · 2008-12-17T19:23:18.000Z · LW · GW

Ben: "There's absolutely no guarantee that humanity won't go the way of the neanderthal in the grand scheme of things."

Are you actually hoping that won't happen? That we'll still be human a million years from now?

Comment by Phil_Goetz6 on Visualizing Eutopia · 2008-12-16T20:24:52.000Z · LW · GW

"Questions like "what do I want out of life?" or "what do I want the universe to look like?" are super important questions to ask, regardless of whether you have a magic genie. I personally have had the unfortunate experience of answering some parts of those question wrong and then using those wrong answers to run my life for a while."

I think that asking what you want the universe to look like in the long run has little or no bearing on how to live your life in the present. (Except insofar as you direct your life to planning the universe's future.) The problems confronted are different.

Comment by Phil_Goetz6 on Not Taking Over the World · 2008-12-16T04:51:09.000Z · LW · GW

To ask what God should do to make people happy, I would begin by asking whether happiness or pleasure are coherent concepts in a future in which every person had a Godbot to fulfill their wishes. (This question has been addressed many times in science fiction, but with little imagination.) If the answer is no, then perhaps God should be "unkind", and prevent desire-saturation dynamics from arising. (But see the last paragraph of this comment for another possibility.)

What things give us the most pleasure today? I would say, sex, creative activity, social activity, learning, and games.

Elaborating on sexual pleasure probably leads to wireheading. I don't like wireheading, because it fails my most basic ethical principle, which is that resources should be used to increase local complexity. Valuing wireheading qualia also leads to the conclusion that one should tile the universe with wireheaders, which I find revolting, although I don't know how to justify that feeling.

Social activity is difficult to analyze, especially if interpersonal boundaries are unclear, or it is unclear which level of the cognitive hierarchy to relate to as a "person". I would begin by asking whether we would get any social pleasure from interacting with someone whose thoughts and decision processes were completely known to us.

Creative activity and learning may or may not have infinite possibilities. Can we continue constructing more and more complex concepts, to infinity? If so, then knowledge is probably also infinite, for as soon as we have constructed a new concept, we have something new to learn about. If not, then knowledge - not specific knowledge of what you had for lunch today, but general knowledge - may be limited. Creative activity may have infinite possibilities, even if knowledge is finite.

(The answer to whether intelligence has infinite potential has many other consequences; notably, Bayesian reasoners are likely only in a universe with finitely many useful concepts, because otherwise it will be preferable to be a non-Bayesian reasoner working over more complex concepts with faster algorithms.)

Games largely rely on uncertainty, improving mastery, and competition. Most of what we get out of "life", besides relationships and direct hormonal pleasures like sex, food, and fighting, is a lot like what we get from playing a game. One fear is that life will become like playing chess when you already know the entire game tree.

If we are so unfortunate as to live in a universe in which knowledge is finite, then conflict may serve as a substitute for ignorance in providing us a challenge. A future of endless war may be preferable to a future in which someone has won. It may even be preferable to a future of endless peace. If you study the middle ages of Europe, you will probably at some point ask, "Why did these idiots spend so much time fighting, when they could have all become wealthier if they simply stopped fighting long enough for their economies to grow?" Well, those people didn't know that economies could grow. They didn't believe that there was any further progress to be made in any domain - art, science, government - until Jesus returned. They didn't have any personal challenges; the nobility often weren't even allowed to do work. If you read what the nobles wrote, some of them said clearly that they fought because they loved fighting. It was the greatest thrill they ever had. I don't like this option for our future, but I can't rule out the possibility that war might once again be preferable to peace, if there actually is no more progress to be made and nothing to be done.

The answers to these questions also have a bearing on whether it is possible for God, in the long run, to be selfish. It seems that God would be the first person to have his desires saturated, and enter into this difficult position where it is hard to imagine how to want anything. I can imagine a universe, rather like the Buddhist universe, in which various gods, like bubbles, successively float to the top, and then burst into nothingness, from not caring anymore. I can also imagine an equilibrium, in which there are many gods, because the greater the power one acquires, the less interest one has in preserving that power.

Comment by Phil_Goetz6 on What I Think, If Not Why · 2008-12-12T01:25:40.000Z · LW · GW

It would have been better of me to reference Eliezer's Al Qaeda argument, and explain why I find it unconvincing.

Vladimir:

Phil, in suggesting to replace an unFriendly AI that converges on a bad utility by a collection of AIs that never converge, you are effectively trying to improve the situation by injecting randomness in the system.
You believe evolution works, right?

You can replace randomness only once you understand the search space. Eliezer wants to replace the evolution of values without understanding what that evolution is optimizing. He wants to replace evolution, which works, with a theory that has so many weak links in its long chain of logic that there is very little chance it will do what he wants it to, even supposing that what he wants it to do is the right thing to do.

Vladimir:

Your perception of lawful extrapolation of values as "stasis" seems to stem from intuitions about free will.
That's a funny thing to say in response to what I said, including: 'One question is where "extrapolation" fits on a scale between "value stasis" and "what a free wild-type AI would think of on its own."' It's not that I think "extrapolation" is supposed to be stasis; I think it may be incoherent to talk about an "extrapolation" that is less free than "wild-type AI", and yet doesn't keep values out of some really good areas in value-space. Any way you look at it, it's primates telling superintelligences what's good.

As I just said, clearly "extrapolation" is meant to impose restrictions on the development of values. Otherwise it would be pointless.

Vladimir:

it could act as a special "luck" that in the end results in the best possible outcome given the allowed level of interference.
Please remember that I am not assuming that FAI-CEV is an oracle that magically works perfectly to produce the best possible outcome. Yes, an AI could subtly change things so that we're not aware that it is RESTRICTING how our values develop. That doesn't make it good for the rest of all time to be controlled by the utility functions of primates (even at a meta level).

Here's a question whose answer could diminish my worries: Can CEV lead to the decision to abandon CEV? If smarter-than-humans "would decide" (modulo the gigantic assumption CEV makes that it makes sense to talk about what "smarter than humans would decide", as if greater intelligence made agreement more rather than less likely - and, no, they will not be perfect Bayesians) that CEV is wrong, does that mean an AI guided by CEV would then stop following CEV?

If this is so, isn't it almost probability 1 that CEV will be abandoned at some point?

Comment by Phil_Goetz6 on What I Think, If Not Why · 2008-12-11T23:48:09.000Z · LW · GW

Eliezer: "Tim probably read my analysis using the self-optimizing compiler as an example, then forgot that I had analyzed it and thought that he was inventing a crushing objection on his own. This pattern would explain a lot of Phil Goetz too."

No; the dynamic you're thinking of is that I raise objections to things that you have already analyzed, because I think your analysis was unconvincing. E.g., the recent Attila the Hun / Al Qaeda example. The fact that you have written about something doesn't mean you've dealt with it satisfactorily.

Comment by Phil_Goetz6 on What I Think, If Not Why · 2008-12-11T23:42:16.000Z · LW · GW

Eliezer: "and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever)."

Why do the values freeze? Because there is no more competition? And if that's the problem, why not try to plan a transition from pre-AI to an ecology of competing AIs that will not converge to a singleton? Or spell out the problem clearly enough that we can figure out whether one can achieve a singleton that doesn't have that property?

(Not that Eliezer hasn't heard me say this before. I made a bit of a speech about AI ecology at the end of the first AGI conference a few years ago.)

Robin: "In a foom that took two years, if the AI was visible after one year, that might give the world a year to destroy it."

Yes. The timespan of the foom is important largely because it changes what the AI is likely to do, because it changes the level of danger that the AI is in and the urgency of its actions.

Eliezer: "When I try myself to visualize what a beneficial superintelligence ought to do, it consists of setting up a world that works by better rules, and then fading into the background."

There are many sociological parallels between Eliezer's "movement", and early 20th-century communism.

Eliezer: "I truly do not understand how anyone can pay any attention to anything I have said on this subject, and come away with the impression that I think programmers are supposed to directly impress their non-meta personal philosophies onto a Friendly AI."

I wonder if you're thinking that I meant that. You can see that I didn't in my first comment on Visions of Heritage. But I do think you're going one level too few meta. And I think that CEV would make it very hard to escape the non-meta philosophies of the programmers. It would be worse at escaping them than the current, natural system of cultural evolution is.

Numerous people have responded to some of my posts by saying that CEV doesn't restrict the development of values (or equivalently, that CEV doesn't make AIs less free). Obviously it does. That's the point of CEV. If you're not trying to restrict how values develop, you might as well go home and watch TV and let the future spin out of control. One question is where "extrapolation" fits on a scale between "value stasis" and "what a free wild-type AI would think of on its own." Is it "meta-level value stasis"?

I think that evolution and competition have been pretty good at causing value development. (That's me going one more level meta.) Having competition between different subpopulations with different values is a key part of this. Taking that away would be disastrous.

Not to mention the fact that value systems are local optima. If you're doing search, it might make sense to average together some current good solutions and test the results out, in competition with the original solutions. It is definitely a bad idea to average together your current good solutions and replace them with the average.
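
A minimal sketch of that search point (toy one-dimensional fitness landscape; purely illustrative, not anything from the post):

```python
def fitness(x):
    # Toy multimodal landscape: local optima at x = -2 and x = +2, a valley at 0.
    return -(x**2 - 4)**2

# Two good solutions, each sitting on a different local optimum.
population = [-2.0, 2.0]
average = sum(population) / len(population)    # 0.0, down in the valley

# Averaging as one more candidate that has to compete: it loses and is dropped.
pool = sorted(population + [average], key=fitness, reverse=True)
kept = pool[:len(population)]                  # [-2.0, 2.0]

# Averaging as a wholesale replacement: every solution collapses into the valley.
replaced = [average] * len(population)         # [0.0, 0.0]

print(kept, replaced)
```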

Comment by Phil_Goetz6 on Is That Your True Rejection? · 2008-12-09T15:54:49.000Z · LW · GW

I don't think it did help, though. I think I failed to comprehend it. I didn't file it away and think about it; I completely missed the point. Later, my subconscious somehow changed gears so that I was able to go back and comprehend it. But communication failed.

Buddhists say that great truths can't be communicated; they have to be experienced, only after which you can understand the communication. This was something like that. Discouraging.

Comment by Phil_Goetz6 on Disjunctions, Antipredictions, Etc. · 2008-12-09T15:48:55.000Z · LW · GW
But if you're going to bother visualizing the future, it does seem to help to visualize more than one way it could go, instead of concentrating all your strength into one prediction. So I try not to ask myself "What will happen?" but rather "Is this possibility allowed to happen, or is it prohibited?"

I thought that you were changing your position; instead, you have used this opening to lead back into concentrating all your strength into one prediction.

I think this characterizes a good portion of the recent debate: Some people (me, for instance) keep saying "Outcomes other than FOOM are possible", and you keep saying, "No, FOOM is possible." Maybe you mean to address Robin specifically; and I don't recall any acknowledgement from Robin that foom is >5% probability. But in the context of all the posts from other people, it looks as if you keep making arguments for "FOOM is possible" and implying that they prove "FOOM is inevitable".

A second aspect is that some people (again, eg., me) keep saying, "The escalation leading up to the first genius-level AI might be on a human time-scale," and you keep saying, "The escalation must eventually be much faster than human time-scale." The context makes it look as if this is a disagreement, and as if you are presenting arguments that AIs will eventually self-improve themselves out of the human timescale and saying that they prove FOOM.

Comment by Phil_Goetz6 on Is That Your True Rejection? · 2008-12-09T15:23:49.000Z · LW · GW

You could say that embracing timeless decision theory is a global meta-commitment, that makes you act as if you made commitment in all the situations where you benefit from having made the commitment.
I think this is correct.

It's perplexing: This seems like a logic problem, and I expect to make progress on logic problems using logic. I would expect reading an explanation to be more helpful than having my subconscious mull over a logic problem. But instead, the first time I read it, I couldn't understand it properly because I was not framing the problem properly. Only after I suddenly understood the problem better, without consciously thinking about it, was I able to go back, re-read this, and understand it.

Comment by Phil_Goetz6 on Is That Your True Rejection? · 2008-12-09T15:08:34.000Z · LW · GW

I may be wrong about Newcomb's paradox.

Comment by Phil_Goetz6 on Artificial Mysterious Intelligence · 2008-12-08T17:45:26.000Z · LW · GW

Eliezer: I was making a parallel. I didn't mean "how are these different"; I really meant, "This statement below about consciousness is wrong; yet it seems very similar to Eliezer's post. What is different about Eliezer's post that would make it not be wrong in the same way?"

That said, we don't know what consciousness is, and we don't know what intelligence is; and both occur in every instance of intelligence that we know of; and it would be surprising to find one without the other even in an AI; so I don't think we can distinguish between them.

Comment by Phil_Goetz6 on Artificial Mysterious Intelligence · 2008-12-08T16:18:14.000Z · LW · GW

How is this different from saying,

"For a long time, many different parties and factions in AI, adherent to more than one ideology, have been trying to build AI without understanding consciousness. Unfortunate habits of thought will already begin to arise, as soon as you start thinking of ways to create Artificial Intelligence without having to penetrate the mystery of consciousness. Instead of all this mucking about with neurons and neuroanatomy and population encoding and spike trains, we should be facing up to the hard problem of understanding what consciousness is."

Comment by Phil_Goetz6 on Sustained Strong Recursion · 2008-12-05T22:21:58.000Z · LW · GW

"The issue is that simulating a computers design require a lot of computational power. The advances made in going from 65nm to 45nm now moving to 32nm were enabled by computers that could better simulate the designs without todays computers it would be hard to design the fabrication systems or run the fabrication system for the future processors."

I believe (strongly) that the bottleneck is figuring out how to make 45nm and 32nm circuits work reliably. If you learn how to do 32nm, you can probably get speedup just by re-using the same design you used at 45nm.

Comment by Phil_Goetz6 on Permitted Possibilities, & Locality · 2008-12-03T23:09:57.000Z · LW · GW

I designed, with a co-worker, a cognitive infrastructure for DARPA that is supposed to let AIs share code. I intended to have cognitive modules be web services (at present, they're just software agents). Every representation used was to be evaluated using a subset of Prolog, so that expressions could be automatically converted between representations. (This was never implemented; nor was ontology mapping, which is really hard and would also be needed to translate content.) Unfortunately, my former employer didn't let me publish anything on it. Also, it works only with symbolic AI.

It wouldn't change the picture Eliezer is drawing much even if it worked perfectly, though.

on any sufficiently short timescale, progress should locally approximate an exponential because of competition between interest rates (even in the interior of a single mind).
Huh?

Comment by Phil_Goetz6 on Hard Takeoff · 2008-12-03T17:45:14.000Z · LW · GW

Eliezer: So really, the whole hard takeoff analysis of "flatline or FOOM" just ends up saying, "the AI will not hit the human timescale keyhole." From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM.
But the AI is tied up with the human timescale at the start. All of the work on improving the AI, possibly for many years, until it reaches very high intelligence, will be done by humans. And even after, it will still be tied up with the human economy for a time, relying on humans to build parts for it, etc. Remember that I'm only questioning the trajectory for the first year or decade.

(BTW, the term "trajectory" implies that only the state of the entity at the top of the heap matters. One of the human race's backup plans should be to look for a niche in the rest of the heap. But I've already said my piece on that in earlier comments.)

Thomas: Even if it is wrong - I think it is correct - it is the most important thing to consider.
I think most of us agree it's possible. I'm only arguing that other possibilities should also be considered. It would be unwise to adopt a strategy that has a 1% chance of making 90%-chance situation A survivable, if that strategy will make the otherwise-survivable 10%-chance situation B deadly.
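
Spelling that arithmetic out with the made-up numbers above (a sketch, scoring survivability as 0 or 1):

```python
# Situation A has probability 0.9 and is unsurvivable without the strategy;
# the strategy gives it a 1% chance of being survivable. Situation B has
# probability 0.1 and is survivable without the strategy, but the strategy
# makes it deadly.
p_A, p_B = 0.9, 0.1

p_survive_without = p_A * 0.0  + p_B * 1.0   # 0.10
p_survive_with    = p_A * 0.01 + p_B * 0.0   # 0.009

print(p_survive_without, p_survive_with)
```

Adopting the strategy cuts the overall survival probability from 10% to about 1%, which is why the 10%-chance situation has to be weighed at all.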

Comment by Phil_Goetz6 on Hard Takeoff · 2008-12-02T22:21:40.000Z · LW · GW

"All these complications is why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff. You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights - and the "fold the curve in on itself" paradigm of recursion is going to amplify even small roughnesses in the trajectory."

Wouldn't that be a reason to say, "I don't know what will happen"? And to disallow you from saying, "An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely"?

If you can't make quantitative predictions, then you can't say that the foom might take an hour or a day, but not six months.

A lower-bound (of the growth curve) analysis could be sufficient to argue the inevitability of foom.

I agree there's a time coming when things will happen too fast for humans. But "hard takeoff", to me, means foom without warning. If the foom doesn't occur until the AI is smart enough to rewrite an AI textbook, that might give us years or decades of warning. If humans add and improve different cognitive skills to the AI one-by-one, that will start a more gently-sloping RSI.

Comment by Phil_Goetz6 on Recursive Self-Improvement · 2008-12-02T21:17:42.000Z · LW · GW

"For "specific numbers", for example, look at the well-documented growth of the computer industry since the 1950s."

You would need to show how to interpret those numbers applied to the AI foom.

I'd rather see a model for AI foom built from the ground up, and ranges of reasonable values posited, and validated in some way.

This is a lot of work, but after several years of working on the problem, there ought to be at least a preliminary answer.

Comment by Phil_Goetz6 on Recursive Self-Improvement · 2008-12-02T19:52:00.000Z · LW · GW

"Sexual selection is at the root of practically all the explanations for the origin of our large brains."

Ooh, you triggered one of my cached rants.

Practically all of those explanations start by saying something like, "It's a great mystery how humans got so smart, since you don't need to be that smart to gather nuts and berries."

And that shows tremendous ignorance of how much intelligence is needed to be a hunter-gatherer. (Much more than is needed to be a modern city-dweller.) Most predators have a handful of ways of catching prey; primitive humans have thousands. Just enumerating different types of snares and traps used would bring us over 100.

Comment by Phil_Goetz6 on Recursive Self-Improvement · 2008-12-02T18:22:56.000Z · LW · GW

I wrote:

Analogously, our discussion of the AI FOOM supposes that the AI will not discover new avenues to pursue other than intelligence, that soak up enough of the FOOM to slow down the intelligence part of the FOOM considerably.
What I wish I'd said is: What percentage of the AI's efforts will go into algorithm, architecture, and hardware research?

At the start, probably a lot; so this issue may not be important wrt FOOM and humans.

Comment by Phil_Goetz6 on Recursive Self-Improvement · 2008-12-02T17:55:58.000Z · LW · GW

A number of people are objecting to Eliezer's claim that the process he is discussing is unique in its FOOM potential, proposing other processes that are similar. Then Eliezer says they aren't similar.

Whether they're similar enough depends on the analysis you want to do. If you want to glance at them and come up with a yes-or-no answer regarding FOOM, then none of them are similar. A key difference is that these other things don't have continual halving of the time per generation. You can account for this when comparing results, but I haven't seen anyone do this.
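
To make the "continual halving" point concrete (a simple geometric-series observation; the exact halving is an assumption for illustration): if the first generation takes time \( T \) and each later one takes half as long, then \( n \) generations take

\[ \sum_{k=0}^{n-1} \frac{T}{2^k} = 2T\left(1 - 2^{-n}\right) < 2T , \]

so arbitrarily many generations fit inside a fixed window of length \( 2T \), while a constant generation time gives a total that grows linearly in \( n \). That is the correction you would have to make before carrying results over from cultural or economic growth.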

But some things are similar enough that you can gain some insights into the AI FOOM potential by looking at them. Consider the growth of human societies. A human culture/civilization/government produces ideas, values, and resources used to rewrite itself. This is similar to the AI FOOM dynamics, except with constant and long generation times.

To a tribesman contemplating the forthcoming culture FOOM, it would look pretty simple: Culture is about ways for your tribe to get more land than other tribes.

As culture progressed, we developed all sorts of new goals for it that the tribesman couldn't have predicted.

Analogously, our discussion of the AI FOOM supposes that the AI will not discover new avenues to pursue other than intelligence, that soak up enough of the FOOM to slow down the intelligence part of the FOOM considerably. (Further analysis of this is difficult since we haven't agreed what "intelligence" is.)

Another lesson to learn from culture has to do with complexity. The tribesman, given some ideas of what technology and government would do, would suppose that it would solve all problems. But in fact, as cultures grow more capable, they are able to sustain more complexity; and so our problems get more and more complicated. The idea that human stupidity is holding us back, and AIs will burst into exponential territory once they shake free of these shackles:

I suspect that human economic growth would naturally tend to be faster and somewhat more superexponential, if it were not for the negative feedback mechanism of governments and bureaucracies with poor incentives, that both expand and hinder whenever times are sufficiently good that no one is objecting strongly enough to stop it
is like that tribesman thinking good government will solve all problems. Systems - societies, governments, AIs - expand to the limits of complexity that they can support; at those limits, actions have unintended consequences and agents have not quite enough intelligence to predict them or agree on them, and inefficiency and "stupidity" - relative stupidity - live on.

I'll respond to Eliezer's response to my response later today. Short answer: 1. Diminishing returns exist and are powerful. 2. This isn't something you can eyeball. If you want to say FOOM is probable; fine. If you want to say FOOM is almost inevitable, I want to see equations worked out with specific numbers. You won't convince me with handwaving, especially when other smart people are waving their hands and reaching different conclusions.

Comment by Phil_Goetz6 on Recursive Self-Improvement · 2008-12-02T00:17:31.000Z · LW · GW

The rapidity of evolution from chimp to human is remarkable, but you can infer what you're trying to infer only if you believe evolution reliably produces steadily more intelligent creatures. It might be that conditions temporarily favored intelligence, leading to humans; our rapid rise is then explained by the anthropic principle, not by universal evolutionary dynamics.

Knowledge = all that actual science, engineering, and general knowledge accumulation we did = integral of cognition+metaknowledge(current knowledge) over time, where knowledge feeds upon itself in what seems to be a roughly exponential process
Knowledge feeds on itself only when it is continually spread out over new domains. If you keep trying to learn more about the same domain - say, to cure cancer, or make faster computer chips - you get logarithmic returns, requiring an exponential increase in resources to maintain constant output. (IIRC it has required exponentially-increasing capital investments to keep Moore's Law going; the money will run out before the science does.) Rescher wrote about this in the 1970s and 1980s.

This is important because it says that, if an AI keeps trying to learn how to improve itself, it will get only logarithmic returns.
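
A sketch of the arithmetic behind that claim (the logarithmic form is an assumption in the spirit of Rescher's diminishing-returns thesis, not an established law): suppose cumulative knowledge in a fixed domain grows with resources as

\[ K(R) = c \log R . \]

Then keeping the rate of discovery constant, \( dK/dt = c\,\dot{R}/R = \text{const} \), forces the resource stream \( R(t) \) to grow exponentially; conversely, resources that grow only linearly in time yield knowledge that grows only like \( \log t \).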

When you fold a complicated, choppy, cascade-y chain of differential equations in on itself via recursion, it should either flatline or blow up. You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole.
This is the most important and controversial claim, so I'd like to see it better-supported. I understand the intuition; but it is convincing as an intuition only if you suppose there are no negative feedback mechanisms anywhere in the whole process, which seems unlikely.
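
To spell out the intuition I'm questioning (a standard toy model, not Eliezer's actual math): write recursive self-improvement as

\[ \frac{dI}{dt} = k\, I^{\,p} . \]

For \( p < 1 \) the solution grows only polynomially (the slow, non-FOOM side), for \( p > 1 \) it reaches infinity in finite time (FOOM), and only the knife-edge \( p = 1 \) gives steady exponential growth on anything like a human timescale. The argument goes through, though, only if \( p \) is treated as a fixed constant; negative feedbacks anywhere in the process effectively push \( p \) around, which is exactly the possibility I don't think can be ruled out by intuition.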

Comment by Phil_Goetz6 on Disappointment in the Future · 2008-12-01T16:48:46.000Z · LW · GW

Publishers choose from a wide range of authors who want to make predictions. They choose those that are most exciting. These come from the "fast change" and "strange assumptions" end of the distribution. Any prediction you actually hear about is therefore likely to be wrong.

Comment by Phil_Goetz6 on Engelbart: Insufficiently Recursive · 2008-11-27T00:23:39.000Z · LW · GW

Eliezer: all these posts seem to take an awful lot of your time as well as your readers', and they seem to be providing diminishing utility. It seems to me that talking at great length about what the AI might look like, instead of working on the AI, just postpones the eventual arrival of the AI. I think you already understand what design criteria are important, and a part of your audience understands as well. It is not at all apparent that spending your time to change the minds of others (about friendliness etc) is a good investment or that it has any impact on when and whether they will change their minds.
As you may have guessed, I think just the opposite. The idea that Eliezer, on his own, can figure out
  1. how to build an AI
  2. how to make an AI stay within a specified range of behavior, and
  3. what an AI ought to do
suggests that somebody has read Ender's Game too many times. These are three gigantic research projects. I think he should work on #2 or #3.

Not doing #1 would mean that it actually matters that he convince other people of his ideas.

I think that #3 is really, really tricky. Far beyond the ability of any one person. This blog may be the best chance he'll have to take his ideas, lay them out, and get enough intelligent criticism to move from the beginnings he's made, to something that might be more useful than dangerous. Instead, he seems to think (and I could be wrong) that the collective intelligence of everyone else here on Overcoming Bias is negligible compared to his own. And that's why I get angry and sometimes rude.

Generalizing from observations of points at the extremes of distributions, we can say that when we find an effect many standard deviations away from the mean, its position is almost ALWAYS due more to random chance than to the properties underlying that point. So when we observe a Newton or an Einstein, the largest contributor to their accomplishments was not their intellect, but random chance. So if you think you're relying on someone's great intellect, you're really relying on chance.
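
A toy simulation of that claim (the decomposition into "ability" and "luck", and the variance numbers, are my assumptions; the conclusion depends on how much more variable luck is than ability):

```python
import random

random.seed(0)

def top_performer_luck_share(n_people=50_000, ability_sd=1.0, luck_sd=1.0):
    """One draw: each person's observed accomplishment = ability + luck.
    Return the fraction of the top performer's total that came from luck."""
    people = [(random.gauss(0, ability_sd), random.gauss(0, luck_sd))
              for _ in range(n_people)]
    ability, luck = max(people, key=lambda p: p[0] + p[1])
    return luck / (ability + luck)

def average_share(trials=20, **kw):
    return sum(top_performer_luck_share(**kw) for _ in range(trials)) / trials

# With equal spread, about half of an extreme outcome is luck on average;
# if luck is more variable than ability, luck accounts for most of it.
print(average_share(ability_sd=1.0, luck_sd=1.0))   # roughly 0.5
print(average_share(ability_sd=1.0, luck_sd=2.0))   # roughly 0.8
```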