Comments

Comment by Russell_Wallace on Wise Pretensions v.0 · 2009-02-20T19:44:53.000Z · LW · GW

Well, I like the 2006 version better. For all that it's more polemical in style -- and if I recall correctly, I was one of the people against whom the polemic was directed -- it's got more punch. After all, this is the kind of topic where there's no point in even pretending to be emotionless. The 2006 version alloys logic and emotion more seamlessly.

Comment by Russell_Wallace on ...And Say No More Of It · 2009-02-09T01:37:18.000Z · LW · GW

Suffice it to say that I think the above is a positive move ^.^

Comment by Russell_Wallace on Epilogue: Atonement (8/8) · 2009-02-06T13:16:11.000Z · LW · GW
"I hope you others feel that the character was primarily a victim way back when, instead of a dirtbag."

Of course not. The victim was the girl he murdered.

That's the point of the chapter title - he had something to atone for. It's what tvtropes.org calls a Heel Face Turn.

Comment by Russell_Wallace on Epilogue: Atonement (8/8) · 2009-02-06T12:54:38.000Z · LW · GW

A Type II supernova emits most of its energy in the form of neutrinos; these interact with the extremely dense inner layers that didn't quite manage to accrete onto the neutron star, depositing energy that creates a shockwave that blows off the rest of the material. I've seen it claimed that the neutrino flux would be lethal out to a few AU, though I suspect you wouldn't get the chance to actually die of radiation poisoning.

A planet the size and distance of Earth would intercept enough photons and plasma to exceed its gravitational binding energy, though I'm skeptical about whether it would actually vaporize; my guess, for what it's worth, is that most of the energy would be radiated away again. Wouldn't make any difference to anyone on the planet at the time, of course.
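
Rough figures for scale, as a minimal sanity check (the ~10^44 J non-neutrino output is a round number I'm assuming for illustration; the other constants are standard):

```python
# Back-of-the-envelope check; E_SN is an assumed round figure for the
# non-neutrino (photon + ejecta) output of a core-collapse supernova.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.97e24    # kg
R_EARTH = 6.371e6    # m
AU = 1.496e11        # m
E_SN = 1e44          # J (assumed order of magnitude)

# Gravitational binding energy of a uniform-density Earth: 3GM^2 / 5R
e_bind = 3 * G * M_EARTH**2 / (5 * R_EARTH)

# Fraction of an isotropic blast intercepted by the planet's disc at 1 AU
fraction = R_EARTH**2 / (4 * AU**2)
e_intercepted = E_SN * fraction

print(f"binding energy : {e_bind:.1e} J")        # ~2e32 J
print(f"intercepted    : {e_intercepted:.1e} J") # ~5e34 J, a couple of hundred times more
```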

Well-chosen chapter title, and good wrapup!

Comment by Russell_Wallace on True Ending: Sacrificial Fire (7/8) · 2009-02-05T13:30:31.000Z · LW · GW
"The point is, that the Normal Ending is the most probable one."

Historically, humans have not typically surrendered to genocidal conquerors without an attempt to fight back, even when resistance is hopeless, let alone when (as here) there is hope. No, I think this is the true ending.

Nitpick: eight hours to evacuate a planet? I think not, no matter how many ships you can call. Of course the point is to illustrate a "shut up and multiply" dilemma; I'm inclined to think both horns of the dilemma are sharper if you change it to eight days.

But overall a good ending to a good story, and a rare case where a plot is wrapped up by the characters showing the spark of intelligence. Nicely done!

Comment by Russell_Wallace on Three Worlds Decide (5/8) · 2009-02-03T17:02:00.000Z · LW · GW
"You guys are very trusting of super-advanced species who already showed a strong willingness to manipulate humanity with superstimulus and pornographic advertising."

I'm not planning to trust anyone. My suggestion was based on the assumption that it is possible to watch what the Superhappies actually do and detonate the star if they start heading for the wrong portal. If that is not the case (which depends on the mechanics of the Alderson drive), then either detonate the local star immediately, or the star one hop back.

Comment by Russell_Wallace on Three Worlds Decide (5/8) · 2009-02-03T10:10:10.000Z · LW · GW

Hmm. The three networks are otherwise disconnected from each other? And the Babyeaters are the first target?

Wait a week for a Superhappy fleet to make the jump into Babyeater space, then set off the bomb.

(Otherwise, yes, I would set off the bomb immediately.)

Comment by Russell_Wallace on The Super Happy People (3/8) · 2009-02-01T18:37:54.000Z · LW · GW
"Either way though, there would seem to be a prisoner's dilemma of sorts with regards to that. I'm not sure about this, but let's say we could do unto the Babyeaters without them being able to do unto us, with regards to altering them (even against their will) for the sake of our values. Wouldn't that sort of be a form of Prisoner's Dilemma with regards to, say, other species with different values than us and more powerful than us that could do the same to us? Wouldn't the same metarationality results hold? I'm not entirely sure about this, but.."

I'm inclined to think so, which is one reason I wasn't in favor of going to war on the Babyeaters: if the next species that doesn't share our values is stronger than us, how would I have them deal with us? What sort of universe do we want to live in?

(Another reason being that I'm highly skeptical of victory in anything other than a bloody war of total extermination. Consider analogous situations in real life where atrocities are being committed in other countries, e.g. female circumcision in Africa; we typically don't go to war over them, and for good reason.)

Good story! It's not often you see aliens who aren't just humans in silly make-up. I particularly liked the exchange between the Confessor and the Kiritsugu.

Comment by Russell_Wallace on Value is Fragile · 2009-02-01T02:48:40.000Z · LW · GW

Specifically, the point of utility theory is the attempt to predict the actions of complex agents by dividing them into two layers:

  1. Simple list of values
  2. Complex machinery for attaining those values

The idea being that if you can't know the details of the machinery, successful prediction might be possible by plugging the values into your own equivalent machinery.
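
As a toy illustration of that two-layer scheme (the value list and options below are made up purely for illustration, not taken from the post):

```python
# Toy two-layer prediction: a simple value list plus shared decision machinery.
def shared_machinery(values, options):
    """The 'complex machinery': pick whichever option scores highest under the values."""
    return max(options, key=lambda option: sum(values.get(f, 0) for f in option))

# Layer 1: simple list of values attributed to the agent (numbers are illustrative)
agent_values = {"material": 3, "king_safety": 5, "initiative": 2}

# Candidate actions, each described by the features it delivers
options = [("material",), ("king_safety", "initiative"), ("initiative",)]

# Prediction: plug the agent's values into *your own* equivalent machinery
print(shared_machinery(agent_values, options))   # -> ('king_safety', 'initiative')
```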

Does this work in real life? In practice it works well for simple agents, or complex agents in simple/narrow contexts. It works well for Deep Blue, or for Kasparov on the chessboard. It doesn't work for Kasparov in life. If you try to predict Kasparov's actions away from the chessboard using utility theory, it ends up as epicycles; every time you see him taking a new action you can write a corresponding clause in your model of his utility function, but the model has no particular predictive power.

In hindsight we shouldn't really have expected otherwise; simple models in general have predictive power only in simple/narrow contexts.

Comment by Russell_Wallace on Failed Utopia #4-2 · 2009-01-22T14:55:00.000Z · LW · GW
"But if not - if this world indeed ranks lower in my preference ordering, just because I have better scenarios to compare it to - then what happens if I write the Successful Utopia story?"

Try it and see! It would be interesting and constructive, and if people still disagree with your assessment, well then there will be something meaningful to argue about.

Comment by Russell_Wallace on Failed Utopia #4-2 · 2009-01-21T13:43:01.000Z · LW · GW

An amusing if implausible story, Eliezer, but I have to ask, since you claimed to be writing some of these posts with the admirable goal of giving people hope in a transhumanist future:

Do you not understand that the message actually conveyed by these posts, if one were to take them seriously, is "transhumanism offers nothing of value; shun it and embrace ignorance and death, and hope that God exists, for He is our only hope"?

Comment by Russell_Wallace on Justified Expectation of Pleasant Surprises · 2009-01-15T13:01:50.000Z · LW · GW
"If existential angst comes from having at least one deep problem in your life that you aren't thinking about explicitly, so that the pain which comes from it seems like a natural permanent feature - then the very first question I'd ask, to identify a possible source of that problem, would be, 'Do you expect your life to improve in the near or mid-term future?'"

Saved in quotes file.

Comment by Russell_Wallace on Serious Stories · 2009-01-09T01:49:57.000Z · LW · GW

The way stories work is not as simple as Orson Scott Card's view. I can't do justice to it in a blog comment, but read 'The Seven Basic Plots' by Christopher Booker for the first accurate, comprehensive theory of the subject.

Comment by Russell_Wallace on Dunbar's Function · 2008-12-31T20:25:53.000Z · LW · GW

"I'd like to see a study confirming that. The Internet is more addictive than television and I highly suspect it drains more life-force."

If you think that, why haven't you canceled your Internet access yet? :P I think anyone who finds it drains more than it gives back is using it wrong. (Admittedly, spending eight hours a day playing World of Warcraft does count as using it wrong.)

Comment by Russell_Wallace on Dunbar's Function · 2008-12-31T14:17:40.000Z · LW · GW

"But the media relentlessly bombards you with stories about the interesting people who are much richer than you or much more attractive, as if they actually constituted a large fraction of the world."

This seems to be at least part of the explanation why television is the most important lifestyle factor. Studies of factors influencing both happiness and evolutionary fitness have found television is the one thing that really stands out above the noise -- the less of it you watch, the better off you are in every way.

The Internet is a much better way to interact with the world, both because it lets you choose a community of reasonable size to be involved with, and because it's active rather than passive -- you can do something to improve your status on a mailing list, whereas you can't do anything to improve your status relative to Angelina Jolie (the learned helplessness effect again).

Comment by Russell_Wallace on Singletons Rule OK · 2008-12-01T02:00:33.000Z · LW · GW

"The increase in accidents for 2002 sure looks like a blip to me"

Looks like a sustained, significant increase to me. Let's add up the numbers. From the linked page, total fatalities 1997 to 2000 were 167,176. Total fatalities 2002 to 2005 were 172,168. The difference (by the end of 2005, already nearly 3 years ago) is about 5,000, more than the total deaths in the 9/11 attacks.
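
The arithmetic, for anyone who wants to check it (figures as quoted from the linked page):

```python
fatalities_1997_2000 = 167_176
fatalities_2002_2005 = 172_168
print(fatalities_2002_2005 - fatalities_1997_2000)  # 4992 -- roughly 5,000,
# versus the 2,977 people killed in the 9/11 attacks themselves
```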

Comment by Russell_Wallace on Singletons Rule OK · 2008-11-30T21:09:03.000Z · LW · GW

Eliezer,

I was thinking in terms of Dyson spheres -- fusion reactor complete with fuel supply and confinement system already provided, just build collectors. But if you propose dismantling stars and building electromagnetically confined fusion reactors instead, it doesn't matter; if you want stellar power output, you need square AUs of heat radiators, which will collectively be just as luminous in infrared as the original star was in visible.
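
A quick Stefan-Boltzmann check of the radiator claim (the radiator temperatures below are assumptions chosen for illustration, not anything from the original discussion):

```python
# Radiator area needed to dump one solar luminosity at a given temperature.
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26     # solar luminosity, W
AU = 1.496e11        # m

def radiator_area_au2(temperature_k):
    """One-sided radiating area, in square AU, needed to emit L_SUN at temperature T."""
    return L_SUN / (SIGMA * temperature_k**4) / AU**2

for t in (300, 1000):
    print(f"{t} K: {radiator_area_au2(t):.2f} square AU")
# ~37 square AU at room temperature; ~0.3 square AU even at a red-hot 1000 K
```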

Comment by Russell_Wallace on Singletons Rule OK · 2008-11-30T20:46:30.000Z · LW · GW

Eliezer,

It turns out that there are ways to smear a laser beam across the frequency spectrum while maintaining high intensity and collimation, though I am curious as to how you propose to "pull a Maxwell's Demon" in the face of beam intensity such that all condensed matter instantly vaporizes. (No, mirrors don't work. Neither do lenses.)

As for scattering your parts unpredictably so that most of the attack misses -- then so does most of the sunlight you were supposedly using for your energy supply.

Finally, "trust but verify" is not a new idea; a healthy society can produce verifiable accounting of roughly what its resources are being used for. Though you casually pile implausibility on top of implausibility; now we are supposed to imagine that Hannibal Lecter created his fully populated torture chamber solar system all by himself, with no subcontractors or anything else that might leave a trace.

Comment by Russell_Wallace on Singletons Rule OK · 2008-11-30T20:02:45.000Z · LW · GW

Carl,

If "singleton" is to be defined that broadly, then we are already in a singleton, and I don't think anyone will object to keeping that feature of today's world.

Note that altruistic punishment of the type I describe may actually be beneficial, when done as part of a social consensus (the punishers get to seize at least some of the miscreant's resources).

Also note that there may be no such thing as evolved hardscrabble replicators; the number of generations to full colonization of our future light cone may be too small for much evolution to take place. (The log to base 2 of the number of stars in our Hubble volume is quite small, after all.)
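
Rough numbers behind that parenthesis, assuming somewhere around 10^22 to 10^24 stars in the Hubble volume (the star counts are my assumption):

```python
import math

# If colonization doubles the number of occupied systems each generation,
# full colonization takes roughly log2(N) generations.
for n_stars in (1e22, 1e24):
    print(f"{n_stars:.0e} stars -> {math.log2(n_stars):.0f} doublings")
# roughly 73 to 80 generations either way
```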

Comment by Russell_Wallace on Singletons Rule OK · 2008-11-30T19:21:56.000Z · LW · GW

I have tended to focus on meta level issues in this sort of context, because I know from experience how untrustworthy our object level thoughts are.

For example, there's a really obvious non-singleton solution to the "serial killer somehow creates his own fully populated solar system torture chamber" problem: a hundred concerned neighbors point Nicoll-Dyson lasers at him and make him an offer he can't refuse. It's a simple enough solution for a reasonably bright five-year-old to figure out in 10 seconds; the fact that I didn't figure it out for months makes it clear exactly how much to trust my thinking here.

The reason for this untrustworthiness is itself not too hard to figure out: our Cro-Magnon brains are hardwired to think about interpersonal interactions in ways that were appropriate for our ancestral environment, at the cost of performing worse than random chance in sufficiently different environments.

But fear is not harmless. Where was the largest group of Americans killed by the 9/11 attacks? In the Twin Towers? No: on the roads, in the excess road accident toll caused by people driving for fear of airline terrorism.

If the smartest thinkers in the world can't get together without descending into a spiral of paranoid fantasy, is there hope for the future of intelligent life in the universe? If we can avoid that descent, then it is time to begin doing so.

Comment by Russell_Wallace on The Weak Inside View · 2008-11-18T21:22:27.000Z · LW · GW

Tim -- I looked at your essay just now, and yes, your Visualization of the Cosmic All seems to agree with mine. (I think Robin's model also has some merit, except that I am not quite so optimistic about the timescales, and I am very much less optimistic about our ability to predict the distant future.)

Comment by Russell_Wallace on The Weak Inside View · 2008-11-18T21:12:50.000Z · LW · GW

Now, I should clarify that I don't really expect Moore's Law to continue forever. Obviously the more you extrapolate it, the shakier the prediction becomes. But there is no point at which some other prediction method becomes more reliable. There is no time in the future about which we can say "we will deviate from the graph in this way", because we have no way to see more clearly than the graph.

I don't see any systematic way to resolve this disagreement either, and I think that's because there isn't any. This shouldn't come as a surprise -- if I had a systematic method of resolving all disagreements about the future, I'd be a lot richer than I am! At the end of the day, there's no substitute for putting our heads down, getting on with the work, and seeing who ends up being right.

But this is also an example of why I don't have much truck with Aumann's Agreement Theorem. I'm not disputing the mathematics, of course, but I think cases where its assumptions apply are the exception rather than the rule.

Comment by Russell_Wallace on The Weak Inside View · 2008-11-18T19:53:49.000Z · LW · GW

"To stick my neck out further: I am liable to trust the Weak Inside View over a "surface" extrapolation, if the Weak Inside View drills down to a deeper causal level and the balance of support is sufficiently lopsided."

But there's the question of whether the balance of support is sufficiently lopsided, and if so, on which side. Your example illustrates this nicely:

"I will go ahead and say, "I don't care if you say that Moore's Law has held for the last hundred years. Human thought was a primary causal force in producing Moore's Law, and your statistics are all over a domain of human neurons running at the same speed. If you substitute better-designed minds running at a million times human clock speed, the rate of progress ought to speed up - qualitatively speaking.""

What you're not taking into account is that computers are increasingly used to help design and verify the next generation of chips. In other words, a greater amount of machine intelligence is required each generation just to keep the doubling time the same (or only slightly longer), never mind shorter.

Once we appreciate this, we can understand why: as the low-hanging fruit is plucked, each new Moore's Law generation has to solve problems that are intrinsically more difficult. But we didn't think of that in advance. It's an explanation in hindsight.

That doesn't mean we can be sure the doubling time will still be 18 to 24 months, 60 years from now. It does mean we have no way to make a better prediction than that. It means that is the prediction on which rationalists should base their plans. Historically, those who based their plans on weak (or even strong) inside predictions of progress faster (or slower) than Moore's Law, like Nelson and Xanadu, or Star Bridge and their hypercomputers, have come to grief. Those who just looked at the graphs have found success.

Comment by Russell_Wallace on Logical or Connectionist AI? · 2008-11-17T16:11:10.000Z · LW · GW

"Not sure I see your point. All the high speed connections were built long before bittorrent came along, and they were being used for idiotic point-to-point centralised transfers."

No, they weren't. The days of Napster and BitTorrent were, by no coincidence, also the days when Internet speed was ramping up enough to make them useful.

But of course, the reason we all heard of Napster wasn't that it was the first peer-to-peer data sharing system. On the contrary, we heard of it because it came so late that by the time it arrived, the infrastructure to make it useful was actually being built. Ever heard of UUCP? Few have. That's because in its day -- the 70s and 80s -- the infrastructure was by and large not there yet.

A clever algorithm, or even a clever implementation thereof, is only one small piece of a real-world solution. If we want to build useful AGI systems -- or so much as a useful Sunday market stall -- our plans must be built around that fact.

Comment by Russell_Wallace on Logical or Connectionist AI? · 2008-11-17T15:16:41.000Z · LW · GW

"If you'd asked me in 1995 how many people it would take for the world to develop a fast, distributed system for moving films and TV episodes to people's homes on an 'when you want it, how you want it' basis, internationally, without ads, I'd have said hundreds of thousands."

And you'd have been right. (Ever try running BitTorrent on a 9600 bps modem? Me neither. There's a reason for that.)

Comment by Russell_Wallace on Logical or Connectionist AI? · 2008-11-17T13:06:29.000Z · LW · GW

"Russell, I think the point is we can't expect Friendliness theory to take less than 30 years."

If so, then fair enough -- I certainly don't claim it will take less.

Comment by Russell_Wallace on Logical or Connectionist AI? · 2008-11-17T12:00:38.000Z · LW · GW

"So I'm just mentioning this little historical note about the timescale of mathematical progress, to emphasize that all the people who say "AI is 30 years away so we don't need to worry about Friendliness theory yet" have moldy jello in their skulls."

It took 17 years to go from perceptrons to back propagation...

... therefore I have moldy Jell-O in my skull for saying we won't go from manually debugging buffer overruns to superintelligent AI within 30 years...

Eliezer, your logic circuits need debugging ;-)

(Unless the comment was directed at, not claims of "not less than 30 years", but specific claims of "30 years, neither more nor less" -- in which case I have no disagreement.)

Comment by Russell_Wallace on Ethical Inhibitions · 2008-10-20T02:25:03.000Z · LW · GW

Robin -- because it needs to be more specific. "Always be more afraid of bad things happening" would reduce effectiveness in other areas. Even "always be more afraid of people catching you and doing bad things to you" would be a handicap if you need to fight an enemy tribe. The requirement is, specifically, "don't violate your own tribe's ethical standards".

Comment by Russell_Wallace on Protected From Myself · 2008-10-20T01:57:28.000Z · LW · GW

odf23ds: "Ack. Could you please invent some terminology so you don't have to keep repeating this unwieldy phrase?"

Well, there are worse things than an unwieldy phrase! Consider how many philosophers have spent entire books trying to communicate their thoughts, and still failed. Looked at that way, Jef's phrase has a very good ratio of length to precision.

Comment by Russell_Wallace on Protected From Myself · 2008-10-19T16:54:56.000Z · LW · GW

Excellent post!

As for explanation, the way I would put it is that ethics consists of hard-won wisdom from many lifetimes, which is how it is able to provide me with a safety rail against the pitfalls I have yet to encounter in my single lifetime.

Comment by Russell_Wallace on Shut up and do the impossible! · 2008-10-09T20:05:00.000Z · LW · GW

anki -- "probability estimate" normally means explicit numbers, at least in the cases I've seen the term used, but if you prefer, consider my statement qualified as "... in the form of numerical probability".

Comment by Russell_Wallace on Shut up and do the impossible! · 2008-10-09T19:18:00.000Z · LW · GW

anki --
Throughout the experiment, I regarded "should the AI be let out of the box?" as a question to be seriously asked; but at no point was I on the verge of doing it.

I'm not a fan of making up probability estimates in the absence of statistical data, but my belief that no possible entity could persuade me to do arbitrary things via IRC is conditional on said entity having only physically ordinary sources of information about me. If you're postulating a scenario where the AI has an upload copy of me and something like Jupiter brain hardware to run a zillion experiments on said copy, I don't know what the outcome would be.

Comment by Russell_Wallace on Shut up and do the impossible! · 2008-10-09T17:21:00.000Z · LW · GW

"How do we know that Russell Wallace is not a persona created by Eliezer Yudkowski?"

Ron -- I didn't let the AI out of the box :-)

Comment by Russell_Wallace on Shut up and do the impossible! · 2008-10-09T16:40:32.000Z · LW · GW

Silas -- I can't discuss specifics, but I can say there were no cheap tricks involved; Eliezer and I followed the spirit as well as the letter of the experimental protocol.

Comment by Russell_Wallace on Shut up and do the impossible! · 2008-10-09T16:08:18.000Z · LW · GW

"I have a feeling that if the loser of the AI Box experiment were forced to pay thousands of dollars, you would find yourself losing more often."

David -- if the money had been more important to me than playing out the experiment properly and finding out what would really have happened, I wouldn't have signed up in the first place. As it turned out, I didn't have spare mental capacity during the experiment for thinking about the money anyway; I was sufficiently immersed that if there'd been an earthquake, I'd probably have paused to integrate it into the scene before leaving the keyboard :-)

Comment by Russell_Wallace on The Bedrock of Morality: Arbitrary? · 2008-08-17T03:42:00.000Z · LW · GW

"But most of all - why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good?"

You are asking why anyone would choose life rather than what is good. Inclusive genetic fitness is just the long-term form of life, as personal survival is the short-term form.

The answer is, of course, that one should not. By definition, one should always choose what is good. However, while there are times when it is right to give up one's life for a greater good, they are the exception. Most of the time, life is a subgoal of what is good, so there is no conflict.

Comment by Russell_Wallace on Natural Selection's Speed Limit and Complexity Bound · 2008-06-04T20:50:00.000Z · LW · GW

I was curious about the remark that simulation results differed from theoretical ones, so I tried some test runs. I think the difference is due to sexual reproduction.
Eliezer's code uses random mating. I modified it to use asexual reproduction or assortative mating to see what difference that made.

Asexual reproduction:
mutation rate 0.1 gave 6 bits preserved
0.05 preserved 12-13 bits
0.025 preserved 27
increasing population size from 100 to 1000 bumped this to 28
decreasing the beneficial mutation rate brought it down to 27 again
so the actual preserved information is fairly consistently 0.6 times the theoretical value, with some sort of caveat about larger populations catching beneficial mutations.

Random mating:
mutation rate 0.1 gave 20 bits preserved (already twice the theoretical value)

Assortative mating:
mutation rate 0.1 gave 25-26 bits preserved
0.05 preserved 66 bits
increasing population size from 100 to 1000 bumped this to 73

So sexual reproduction helps in a big way, especially if mating is assortative and/or the population is large. Why? At least part of the explanation, as I understand it, is that it lets several bad mutations be shuffled into one victim. I don't know the mathematics here, but there's a book with the memorable title 'Mendel's Demon' that I read some years ago, which proposed this (in addition to the usual explanation of fast adaptation to parasites) as an explanation for the existence of sex in the first place. These results would seem to support the theory.
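
For anyone who wants to poke at this themselves, here is a minimal sketch of that kind of mutation-selection run. It is not Eliezer's actual code; the genome length, truncation selection, and beneficial-mutation rate are my own assumptions, so don't expect it to reproduce the exact figures above, only the qualitative comparison between the three mating modes.

```python
import random

# Genome of GENOME bits; fitness = number of 1 bits, so mean fitness at
# mutation-selection equilibrium is the "bits preserved" figure.
GENOME = 100
POP = 100
GENERATIONS = 300
BENEFICIAL_RATE = 0.001   # assumed back-mutation rate

def mutate(genome, deleterious_rate):
    return [(0 if random.random() < deleterious_rate else 1) if bit == 1
            else (1 if random.random() < BENEFICIAL_RATE else 0)
            for bit in genome]

def fitness(genome):
    return sum(genome)

def next_generation(pop, deleterious_rate, mode):
    parents = sorted(pop, key=fitness, reverse=True)[:POP // 2]  # top half breed
    children = []
    while len(children) < POP:
        if mode == "asexual":
            child = list(random.choice(parents))
        else:
            if mode == "random":
                a, b = random.sample(parents, 2)
            else:  # assortative: pair parents of neighbouring fitness rank
                i = random.randrange(len(parents) - 1)
                a, b = parents[i], parents[i + 1]
            child = [random.choice(bits) for bits in zip(a, b)]
        children.append(mutate(child, deleterious_rate))
    return children

for mode in ("asexual", "random", "assortative"):
    for rate in (0.1, 0.05):
        pop = [[1] * GENOME for _ in range(POP)]   # start at the fitness optimum
        for _ in range(GENERATIONS):
            pop = next_generation(pop, rate, mode)
        print(f"{mode:12s} mutation {rate:.2f}: "
              f"~{sum(map(fitness, pop)) / POP:.0f} bits preserved")
```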