Comments

Comment by bambi on You Only Live Twice · 2008-12-12T22:19:54.000Z · LW · GW

Burger flipper, making one decision that increases your average statistical lifespan (signing up for cryonics) does not compel you to trade off every other joy of living in favor of further increases. And if the hospital or government or whoever can't be bothered to wait for my organs until I am done with them, that's their problem, not mine.

Comment by bambi on What I Think, If Not Why · 2008-12-11T22:45:41.000Z · LW · GW

Carl, Robin's response to this post was a critical comment about the proposed content of Eliezer's AI's motivational system. I assumed he had a reason for making the comment, my bad.

Comment by bambi on What I Think, If Not Why · 2008-12-11T20:57:10.000Z · LW · GW

Oh, and Friendliness theory (to the extent it can be separated from specific AI architecture details) is like the doomsday device in Dr. Strangelove: it doesn't do any good if you keep it secret! [in this case, unless Eliezer is supremely confident of programming AI himself first]

Comment by bambi on What I Think, If Not Why · 2008-12-11T20:49:03.000Z · LW · GW

Regarding the 2004 comment, AGI Researcher was probably referring to the Coherent Extrapolated Volition document, which Eliezer marked as slightly obsolete in 2004; not a word has been said since about any progress in the theory of Friendliness.

Robin, if you grant that a "hard takeoff" is possible, that leads to the conclusion that it will eventually be likely (humans being curious and inventive creatures). This AI would "rule the world" in the sense of having the power to do what it wants. Now, suppose you get to pick what it wants (and program that in). What would you pick? I can see arguing with the feasibility of hard takeoff (I don't buy it myself), but if you accept that step, Eliezer's intentions seem correct.

Comment by bambi on Underconstrained Abstractions · 2008-12-04T17:35:48.000Z · LW · GW

When Robin wrote, "It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions," he got it exactly right (though it is not necessarily so easy to make good ones; that isn't really the point).

This should have been clear from the sequence on the "timeless universe" -- just as that interesting abstraction is not going to convince more than a few credulous fans of its truth, the magical super-FOOM is not going to convince anybody without more substantial support than an appeal to a very specific way of looking at "things in general", which few are going to share.

On a historical time frame, we can grant pretty much everything you suppose and still be left with a FOOM that "takes" a century (a mere eyeblink in comparison to everything else in history). If you want to frighten us sufficiently about a FOOM of shorter duration, you're going to have to get your hands dirtier and move from abstractions to specifics.

Comment by bambi on Singletons Rule OK · 2008-12-01T17:02:47.000Z · LW · GW

The correct response to a guy scheming to take over the world someday in the future, in a pleasant and friendly way -- time permitting, between grocery shopping and cocktail parties -- is a bemused smile.

Comment by bambi on The First World Takeover · 2008-11-19T22:25:32.000Z · LW · GW

The issue, of course, is not whether AI is a game-changer. The issue is whether it will be a game-changer soon and suddenly. I have been looking forward to somebody explaining why this is likely, so I've got my popcorn popped and my box of wine in the fridge.

Comment by bambi on Logical or Connectionist AI? · 2008-11-17T16:59:19.000Z · LW · GW

Perhaps Eliezer goes to too many cocktail parties:

X: "Do you build neural networks or expert systems?" E: "I don't build anything. Mostly I whine about people who do." X: "Hmm. Does that pay well?"

Perhaps Bayesian Networks are the hot new delicious lemon glazing. Of course, they have been around for 23 years.

Comment by bambi on The Gift We Give To Tomorrow · 2008-07-17T14:32:51.000Z · LW · GW

Where does mathematics enter the universe? Is it an expression of today's overhyped worldview (evolutionary psychology) or is it actually "out there"? How about "truth"? Might beauty and justice and other things expressed in our minds exist independent of us as well, discovered by minds capable of such discovery?

Comment by bambi on The Design Space of Minds-In-General · 2008-06-25T17:24:56.000Z · LW · GW

Silas: you might find this paper of some interest:

http://www.agiri.org/docs/ComputationalApproximation.pdf

Comment by bambi on The Design Space of Minds-In-General · 2008-06-25T14:35:48.000Z · LW · GW

Perhaps "mind" should just be tabooed. It doesn't seem to offer anything helpful, and leads to vast fuzzy confusion.

Comment by bambi on The Design Space of Minds-In-General · 2008-06-25T14:32:11.000Z · LW · GW

What do you mean by a mind?

All you have given us is that a mind is an optimization process. And: what a human brain does counts as a mind. Evolution does not count as a mind. AIXI may or may not count as a mind (?!).

I understand your desire not to "generalize", but can't we do better than this? Must we rely on Eliezer-sub-28-hunches to distinguish minds from non-minds?

Is the FAI you want to build a mind? That might sound like a dumb question, but why should it be a "mind", given what we want from it?

Comment by bambi on The Psychological Unity of Humankind · 2008-06-24T14:36:36.000Z · LW · GW

Lessons:

1) A situation with AIs whose intelligence is between village idiot and Einstein -- assuming there is a scale to make "between" a non-poetic concept -- is not very likely and probably short-lived if it does occur (unless perhaps it is engineered that way on purpose).

2) Aspects of human cognition -- our particular emotions, our language forms, perhaps even pervasive mental tricks like reasoning by analogy -- may be irrelevant to Optimization Processes in general, making a focus on them in AI research possibly a "voodoo doll" methodology. AI may only deal with such things as part of communicating with humans, though mastering them well enough to participate effectively in human culture may be as difficult as inventing new technologies.

3) Optimization Processes built by Intelligent Designers can develop in ways that those built by evolution cannot because of multiple coordinated changes (this point has been beaten to death by now I think).

4) Sex is interesting.

For once, I have no complaints. I assume the path is being cleared for a discussion of what actually IS required for an optimization process to do what we need it to do (model the world, improve itself, etc), which seems only marginally related to what our brains do. If that's where this is headed, I'm looking forward to it.

Comment by bambi on Optimization and the Singularity · 2008-06-24T13:24:59.000Z · LW · GW

Richard: Thanks for the link; that looks like a bunch of O.B. posts glommed together; I don't find it any more precise or convincing than anything here so far. Don't get me wrong, though; like the suggestive material on O.B. it is very interesting. If it simply isn't possible to get more concrete because the ideas are not developed well enough, so be it.

For the record, my nickname is taken from a character in an old Disney animated film, a (male) deer.

Comment by bambi on Optimization and the Singularity · 2008-06-23T21:53:07.000Z · LW · GW

Z.M.: Interesting discussion. "Weapons of math destruction" is a wickedly clever phrase. Still, I can hope for more than "FAI must optimize something, we know not what. Before we can figure out what to optimize we have to understand Recursive Self Improvement. But we can't talk about that because it's too dangerous."

Nick: Yes, science is about models, as that post says. Formal models. It does not seem unreasonable to hope that some are forthcoming. Surely that is the goal. The post you reference complains about people drawing distinctions between theoretically possible levels of intelligence without any rational basis. That doesn't seem to be the same thing as merely asking for a little precision in the definitions of "intelligence", "self improvement", and "friendliness".

Comment by bambi on Optimization and the Singularity · 2008-06-23T13:07:41.000Z · LW · GW

Carry on with your long winding road of reasoning.

Of particular interest, which I hope you will dwell on: What does "self-improving" in the context of an AI program mean precisely? If there is a utility function involved, exactly what is it?

I also hope you start introducing some formal notation, to make your speculations on these topics less like science fiction.

Comment by bambi on Surface Analogies and Deep Causes · 2008-06-22T15:55:43.000Z · LW · GW

"I built my network, and it's massively parallel and interconnected and complicated, just like the human brain from which intelligence emerges! Behold, now intelligence shall emerge from this neural network as well!"

Who actually did this? I'm not aware of any such effort, much less of it being a trend. It seems to me that the "AI" side of neural networks is almost universally interested in the data-processing properties of small networks. Larger, more complex network experiments are part of neuroscience (naive in most cases, but that's a different topic). I don't think anybody in AI or brain research ever thought their network was or would or could be "intelligent" in the broad sense you are implying.

Comment by bambi on LA-602 vs. RHIC Review · 2008-06-19T19:38:29.000Z · LW · GW

If the secret report comes back "acceptable risk" I suppose it just gets dumped into the warehouse from Raiders of the Lost Ark, but what if it doesn't?

Perhaps such a report was produced during the construction of the SSC?

What if the report is about something not under monolithic control?

Comment by bambi on Ghosts in the Machine · 2008-06-19T15:18:02.000Z · LW · GW

Ben, you could be right that my "world is too fuzzy" view is just mind projection, but let me at least explain what I am projecting.

The most natural way to get "unlimited" control over matter is a pure reductionist program in which a formal mathematical logic can represent designs and causal relationships with perfect accuracy (perfect to the limits of quantum probabilities). Unfortunately, combinatorial explosion makes that impractical. What we can actually do instead is redescribe collections of matter in new terms. Sometimes these are neatly linked to the underlying physics and we get cool stuff like F = ma, but more often the redescriptions are leaky but useful "concepts". The fact that we have to leak accuracy (usually to the point where definitions themselves are basically impossible) to make dealing with the world tractable is what I mean by "the world is too fuzzy to support much intelligent manipulation".

In certain special cases we come up with clever ways to bound probabilities and produce technological wonders... but transhumanist fantasies usually make the leap to assume that all things we desire can be tamed in this way. I think this is a wild leap. I realize most futurists see this as unwarranted pessimism and that the default position is that anything imaginable that doesn't provably violate the core laws of physics only awaits something smart enough to build it.

My other reasons for doubting the ultimate capabilities of RSI probably don't need more explanation. My skepticism about the imminence of RSI as a threat (never mind the overall ability of RSI itself) is more based on the ideas that 1) The world is really damn complicated and it will take a really damn complicated computer to make sense of it (the vast human data sorting machinery is well beyond Roadrunner and is not that capable anyway), and 2) there is still no beginning of a credible theory of how to make sense of a really damn complicated world with software.

I agree it is "very dangerous" to put a low probability on any particular threat being an imminent concern. Many such threats exist and we make this very dangerous tentative conclusion every day... from cancer in our own bodies to bioterror to the possibility that our universe is a simulation designed to measure how long it takes us to find the mass of the Higgs, after which we will be shut off.

That is all just an aside though to my main point, which was that if I'm wrong, the only conclusion I can see is that an explicit program to take over the world with a Friendly AI is the only reasonable option.

I approve of such an effort. If my skepticism is correct it will be impossible for decades at least; if I'm wrong I'd rather have an RSI that at least tried to be Friendly. It does seem that the Friendliness bit is more important than the RSI part as the start of such an effort.

Comment by bambi on Ghosts in the Machine · 2008-06-18T15:11:32.000Z · LW · GW

Eliezer taught you rationality, so figure it out!

If I understand the research program under discussion, certain ideas are answered with "somebody else will". E.g.:

Don't build RSI, build AI with limited improvement capabilities (like humans) and use Moore's law to get speedup. "but somebody else will"

Build it so that all it does is access a local store of data (say a cache of the internet) and answer multiple choice questions (or some other limited function). Don't build it to act. "but somebody else will"

Etc. Every safety suggestion can be met with "somebody else will build an AI that does not have this safety feature".

So: make it Friendly. "but somebody else won't".

This implies: make it Friendly and help it take over the world to a sufficient degree that "somebody else" has no opportunity to build non-Friendly AI.

I think it is hugely unlikely that intelligence of the level being imagined is possible in anything like the near future, and "recursive self improvement" is very likely to be a lot more limited than projected (there's a limit to how much code can be optimized, P != NP severely bounds general search optimization, there's only so much you can do with "probably true" priors, and the physical world itself is too fuzzy to support much intelligent manipulation). But I could be wrong.

So, if you guys are planning to take over the world with your Friendly AI, I hope you get it right. I'm surprised there isn't an "Open Friendliness Project" to help answer all the objections and puzzles that commenters on this thread have raised.

If Friendliness has already been solved, I'm reminded of Dr. Strangelove: it does no good to keep it a secret!

If it isn't, is it moral to work on more dangerous aspects (like reflectivity) without Friendliness worked out beforehand?

Comment by bambi on The Ultimate Source · 2008-06-15T18:14:48.000Z · LW · GW

There are many terms and concepts that don't pay for themselves, though we might not agree on which ones. For example, I think Goedel's Theorem is one of them... its cuteness and abstract splendor don't offset the dumbness it invokes in people trying to apply it. "Consciousness" and "Free Will" are two more.

If the point here is to remove future objections to the idea that AI programs can make choices and still be deterministic, I guess that's fair but maybe a bit pedantic.

Personally I provisionally accept the basic deterministic reductionist view that Eliezer has been sketching out. "Provisionally" because our fundamental view of reality and our place in it has gone through many transformations throughout history and it seems unlikely that exactly today is where such revelations end. But since we don't know what might be next we work with what we have even though it is likely to look naive in retrospect from the future.

The viewpoint also serves to make me happy and relatively carefree... doing important things is fun, achieving successes is rewarding, helping people makes you feel good. Obsessive worry and having the weight of the world on one's shoulders is not fun. "Do what's fun" is probably not the intended lesson to young rationalists, but it works for me!

Comment by bambi on Causality and Moral Responsibility · 2008-06-13T16:01:44.000Z · LW · GW

iwdw: you could be right -- perhaps the truly top-talented members of the "next generation" are better attracted to AI by wandering internet blog sequences on "rationality" than by actual progress on AI. I am neither a supergenius nor young, so I can't say for sure.

Comment by bambi on Causality and Moral Responsibility · 2008-06-13T14:23:51.000Z · LW · GW

Re your moral dilemma: you've stated that you think your approach needs a half-dozen or so supergeniuses (on the level of the titans of physics). Unless they have already been found -- and only history can judge that -- some recruitment seems necessary. Whether these essays capture supergeniuses is the question.

Demonstrated (published) tangible and rigorous progress on your AI theory seems more likely to attract brilliant productive people to your cause.

Comment by bambi on Timeless Control · 2008-06-07T14:52:18.000Z · LW · GW

Unknown, your comment strikes me as a good way of looking at it.

The "me of now" as a region of configuration space contains residue of causal relationships to other regions of configuration space ("the past" and my memories of it). And the timeless probability field on configuration space causally connects the "me of now" to the "future" (other regions of configuration space). Just because this is true, and -- even more profoundly -- even though the "me of now" configuration space region has no special status (no shining "moment in the sun" as the privileged focus of a global clock ticking a path through configuration space), I am still what I am and I do what I do (from a local perspective which is all I have detailed information about), which includes making decisions.

Our decisions are based on what we know and believe, so an acceptance of the viewpoint Eliezer has been putting forth is likely to have some impact on the decisions we make... I wonder what that impact is, and what it should be.

Comment by bambi on Living in Many Worlds · 2008-06-05T03:38:01.000Z · LW · GW

So what tools do all you self-improving rationalists use to help with the "multiply" part of "shut up and multiply"? A development environment for a programming/scripting language? Mathematica? A desk calculator? Mathcad? Spreadsheet? Pen and paper?
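
(For what it's worth, here is the kind of "multiply" I mean, as a toy Python sketch; every probability and payoff below is made up purely for illustration, not anyone's actual numbers.)

```python
# Toy "shut up and multiply" step: expected value over a few outcomes.
# All numbers here are invented for illustration.
outcomes = [
    (0.70,   0.0),   # (probability, payoff): nothing happens
    (0.25,  10.0),   # modest benefit
    (0.05, 500.0),   # unlikely large payoff
]

assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1

expected_value = sum(p * v for p, v in outcomes)
print(f"expected value = {expected_value:.2f}")  # 0.70*0 + 0.25*10 + 0.05*500 = 27.50
```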

Comment by bambi on A Premature Word on AI · 2008-05-31T19:23:41.000Z · LW · GW

Eliezer, your observers would hopefully have noticed hundreds of millions of years of increasing world-modeling cognitive capability, eventually leading to a species with sufficient capacity to form a substrate for memetic progress, followed by a hundred thousand years and a hundred billion individual lives leading up to now.

Looking at a trilobite, the conclusion would not be that such future development is "impossible", but perhaps "unlikely to occur while I'm eating lunch today".

Comment by bambi on A Premature Word on AI · 2008-05-31T18:41:12.000Z · LW · GW

Ok, sure. Maybe Bayesianism is much more broadly applicable than it seems. And maybe there are fewer fundamental breakthroughs still needed for a sufficient RSI-AI theory than it seems. And maybe the fundamentals could be elaborated into a full framework more easily than it seems. And maybe such a framework could be implemented as a computer program more easily than it seems. And maybe the computing power required to execute the computer program at non-glacial speeds is less than it seems. And maybe the efficiency of the program can be automatically increased more than seems reasonable. And maybe "self improvement" can progress further into the unknown than seems reasonable to guess. And maybe out there in the undiscovered territory there are ways of reasoning about and subsequently controlling matter that are more effective than seems likely, and maybe as these things are revealed we will be too stupid to recognize them.

Maybe.

To make most people not roll their eyes at the prospect, though, they'll have to be shown something more concrete than a "Maybe AI is like a nuclear reaction" metaphor.

Comment by bambi on Timeless Beauty · 2008-05-28T23:55:30.000Z · LW · GW

Ok, it looks to me like these answers (invoking the future over and over after accepting that there is no 't') are admissions that this type of physics thinking is just playfulness -- no consequences whatsoever for our own actions or for any observable aspect of the universe.

That's cool, I misunderstood is all. Maybe life is just a dream, eh?

Comment by bambi on Timeless Beauty · 2008-05-28T19:24:45.000Z · LW · GW

Eliezer, if you believe all of this, why do you care so much about saving the world from "future" ravenous AIs? The paperclip universes just are and the non-paperclip universes just are. Go to the beach, man! Chill out. You can't change anything; there is nothing to change.

Comment by bambi on My Childhood Role Model · 2008-05-23T17:41:57.000Z · LW · GW

Since arguing from fictional evidence is OK as long as you admit you're doing it, somebody should write the novelization.

Bayesian Ninja Army contacted by secret government agency due to imminent detonation of Logic Bomb* in evil corporate laboratory buried deep beneath some exotic location. Hijinks ensue; they fail to stop Logic Bomb detonation but do manage to stuff in a Friendliness supergoal at the last minute. Singularity ensues, with lots of blinky lights and earth-rending. Commentary on the human condition follows, ending in a sequel-preparing twist.

* See the commentary on yesterday's post.

Comment by bambi on That Alien Message · 2008-05-23T17:13:00.000Z · LW · GW

Ok, the phrase was just an evocative alternative to "scary optimization process" or whatever term the secret society is using these days to avoid saying "AI" -- because "AI" raises all sorts of (purportedly) irrelevant associations like consciousness and other anthropomorphisms. The thing that is feared here is really just the brute power of Bayesian modeling and reasoning applied to self improvement (through self modeling) and world control (through world modeling).

If an already existing type of malware has claimed the term, invent your own colorful name. How about "Master"?


Comment by bambi on That Alien Message · 2008-05-23T14:47:00.000Z · LW · GW

Phillip Huggan: bambi, IDK anything about hacking culture, but I doubt kids need to read a decision theory blog to learn what a logic bomb is (whatever that is). Posting specific software code, on the other hand...

A Logic Bomb is the thing that Yudkowsky is trying to warn us about. Ice-Nine might be a more apropos analogy, though -- the start of a catalytic chain reaction that transforms everything. Homo Sapiens is one such logically exothermic self-sustaining chain reaction but it's a slow burn because brains suck.

A Logic Bomb has the following components: a modeling language and model-transformation operators based on Bayesian logic; a decision system (including goals and reasoning methods) that decides which operators to apply; a sufficiently complete self-model described in the modeling language; and similar built-in models of truth, efficiency, the nature of the physical universe (say, QM), and (hopefully) ethics.

Flip the switch and watch the wavefront expand at the speed of light.
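
If I had to caricature that component list as code, it would be nothing more than a skeleton of placeholders -- every name in this Python sketch is my own invention, and none of the hard parts exist:

```python
# Purely illustrative skeleton of the component list above.
# Names and structure are invented for the sketch; nothing hard is filled in.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Model:
    """A description written in the modeling language."""
    statements: List[str] = field(default_factory=list)

# A model-transformation operator maps one model to another.
Operator = Callable[[Model], Model]

@dataclass
class LogicBomb:
    self_model: Model           # sufficiently complete model of itself
    world_model: Model          # truth, efficiency, physics (QM), ethics...
    operators: List[Operator]   # Bayesian model-transformation operators

    def decide(self) -> Operator:
        """The decision system: pick which operator to apply next.
        (Unimplemented -- this is where all the difficulty lives.)"""
        raise NotImplementedError

    def step(self) -> None:
        op = self.decide()
        self.self_model = op(self.self_model)
```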

I assume that the purpose here is not so much to teach humanity to think and behave rationally, but rather to teach a few people to do so, or attract some who already do, then recruit them into the Bayesian Ninja Army whose purpose is to make sure that the imminent, inevitable construction and detonation of a Logic Bomb has results we like.


Comment by bambi on That Alien Message · 2008-05-22T22:23:34.000Z · LW · GW

Thanks Patrick, I did sort of get the gist, but went into the ditch from there on that point.

I have been posting rather snarky comments lately, as I imagined this was where the whole series was going, and frankly it seems like lunacy to me (the bit about evidence being passé was particularly sweet). But I doubt anybody wants to hear me write that over and over (if people can be argued INTO believing in the tooth fairy, then maybe they can be argued into anything after all). So I'll stop now.

I hereby dub the imminent magical self-reprogramming seed AI: a "Logic Bomb"

and leave you with this:

Every hint you all insist on giving to the churning masses of brilliant kids with computers across the world for how to think about and build a Logic Bomb is just another nail in your own coffins.

Comment by bambi on That Alien Message · 2008-05-22T21:58:32.000Z · LW · GW

Sorry, the first part of that was phrased too poorly to be understood. I'll just throw "sufficiently advanced YGBM technology" on the growing pile of magical powers that I am supposed to be terrified of and leave it at that.

Comment by bambi on That Alien Message · 2008-05-22T21:43:24.000Z · LW · GW

Sorry, Hopefully Anonymous, I missed the installment where "you gotta believe me" was presented as a cornerstone of rational argument.

The fact that a group of humans (CBI) is sometimes able to marginally influence the banana-brand-buying probabilities of some individual humans does not imply much in my opinion. I wouldn't have thought that extrapolating everything to infinity and beyond is much of a rational method. But we are all here to learn I suppose.

Comment by bambi on That Alien Message · 2008-05-22T14:37:40.000Z · LW · GW

Hmm, the lesson escapes me a bit. Is it

1) Once you became a true rationalist and overcome your biases, what you are left with is batshit crazy paranoid delusions

or

2) If we build an artificial intelligence as smart as billions of really smart people, running a hundred trillion times faster than we do (so 10^23 x human-equivalence), give it an unimaginably vast virtual universe to develop in, then don't pay any attention to what it's up to, we could be in danger because a sci-fi metaphor on a web site said so

or

3) We must institute an intelligence-amplification eugenics program so that we will be capable of crushing our creators should the opportunity arise

I'm guessing (2). So, um, let's not then. Or maybe this is supposed to happen by accident somehow? Now that I have Windows Vista, maybe my computer is 10^3 human-equivalents, and so in 20 years a PC will be 10^10 human-equivalents and the internet will let our PCs conspire to kill us? Of course, even our largest computers cannot perform the very first layers of input data sorting tasks that one person does effortlessly, but that's only my biases talking, I suppose.

Comment by bambi on Einstein's Speed · 2008-05-21T12:59:04.000Z · LW · GW

Given this perspective on what Science does and does not encourage, can you explain the phenomenon of String Theory to us?

Comment by bambi on Changing the Definition of Science · 2008-05-19T03:04:19.000Z · LW · GW

Sufficiently-advanced Bayesian rationality is indistinguishable from magic.

How fun!

It's possible to be "smart" and a nutter at the same time you know.

Comment by bambi on No Safe Defense, Not Even Science · 2008-05-18T16:56:10.000Z · LW · GW

If you think that Science rewards coming up with stupid theories and disproving them just as much as it rewards more productive results, I can hardly even understand what you mean by Science beyond the "observe, hypothesize, test, repeat" overview given to small children as an introduction to the scientific method. Was Eliezer-18 blind to anything beyond such simple rote formulas?

Negative results are forgiven but hardly ever rewarded (unless the theory disproven is widely believed).

If you'd put aside the rather bizarre bitterness and just say, "Bayesian rationality is a good way to pick which theories to test; here are some non-toy examples worked through to demonstrate how," that would be much more useful than these weird parables and goofy "I am an outcast" rants.

Comment by bambi on Science Isn't Strict Enough · 2008-05-16T15:30:48.000Z · LW · GW

Where do we get sufficient self-confidence to pull probabilities for ill-defined and under-measured quantities out of our butts so we can use them in The Formula?

Is there any actually interesting intellectual task that rests on nice justifiable grounded probabilities?
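
(For reference, "The Formula" here is presumably just Bayes' rule:

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \lnot H)\,P(\lnot H)}$$

Every quantity on the right-hand side has to come from somewhere; for ill-defined and under-measured hypotheses, "somewhere" is exactly the problem.)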

Comment by bambi on When Science Can't Help · 2008-05-15T15:26:37.000Z · LW · GW

Finally this sequence of posts is beginning to build to its hysterical climax. It might be difficult to convince us that doomsday probability calculations are more than swag-based Bayesianism, but the effort will probably be entertaining. I know I love getting lost in trying to calculate "almost infinity" times "almost zero".

As a substantive point from this sequence, at least now scientists know that they should choose reasonable theories to test in preference to ridiculous ones; I'm sure that will be a very helpful insight.

Comment by bambi on The Dilemma: Science or Bayes? · 2008-05-13T15:18:48.000Z · LW · GW

Surely "science" as a method is indifferent to interpretations with no observable differences.

Your point seems to be that "science" as a social phenomenon resists new untestable interpretations. Scientists will wander all over the place in unmappable territory (despite your assertion that "science" rejects MWI, it doesn't look like that to me).

If Bayesianism trumps science only in circumstances where there are no possible testable consequences, that's a pretty weak reason to care, and a very long tortured argument to achieve so little.