Comments

Comment by steven on The Pascal's Wager Fallacy Fallacy · 2009-03-19T21:55:53.000Z · LW · GW

Eliezer, "more AIs are in the hurting class than in the disassembling class" is a distinct claim from "more AIs are in the hurting class than in the successful class", which is the one I interpreted Yvain as attributing to you.

Comment by steven on The Pascal's Wager Fallacy Fallacy · 2009-03-19T21:18:23.000Z · LW · GW

Nick, I'm now sitting here being inappropriately amused at the idea of Hal Finney as Dark Lord of the Matrix.

Eliezer, thanks for responding to that. I'm never sure how much to bring up this sort of morbid stuff. I agree as to what the question is.

Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else.

It was Vladimir who pointed that out, I just said it doesn't apply to egoists. I actually don't agree that it applies to altruists either; presumably most anything that cared that much about torturing newly created people would also use cryonauts for raw materials. Also, maybe there are "people who are still alive" considerations.

Comment by steven on The Pascal's Wager Fallacy Fallacy · 2009-03-19T14:33:36.000Z · LW · GW

Does nobody want to address the "how do we know U(utopia) - U(oblivion) is of the same order of magnitude as U(oblivion) - U(dystopia)" argument? (I hesitate to bring this up in the context of cryonics, because it applies to a lot of other things and because people might be more than averagely emotionally motivated to argue for the conclusion that supports their cryonics opinion, but you guys are better than that, right? right?)

Carl, I believe the point is that until I know of a specific argument why one is more likely than the other, I have no choice but to set the probability of Christianity equal to the probability of anti-Christianity, even though I don't doubt such arguments exist. (Both irrationality-punishers and immorality-punishers seem far less unlikely than non-Christianity-punishers, so it's moot as far as I can tell.)

Vladimir, your argument doesn't apply to moralities with an egoist component of some sort, which is surely what we were discussing even though I'd agree they can't be justified philosophically.

I stand by all the arguments I gave against Pascal's wager in the comments to Utilitarian's post, I think.

Comment by steven on The Pascal's Wager Fallacy Fallacy · 2009-03-18T16:48:41.000Z · LW · GW

Vladimir, hell is only one bit away from heaven (minus sign in the utility function). I would hope though that any prospective heaven-instigators can find ways to somehow be intrinsically safe wrt this problem.

Comment by steven on The Pascal's Wager Fallacy Fallacy · 2009-03-18T01:02:42.000Z · LW · GW

There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

Expected utility is the product of two things, probability and utility. Saying the probability is smaller is not a complete argument.
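A tiny numerical illustration of that point (the numbers are mine and purely hypothetical): a less probable outcome can still dominate the expected-utility calculation if its utility is large enough in magnitude.

```python
# Hypothetical numbers, just to illustrate the point above.
p_good, u_good = 0.50, 10      # probable positive possibility (e.g. waking up in a eutopia)
p_bad,  u_bad  = 0.01, -1000   # "exotic" negative possibility (e.g. waking up in a dystopia)

expected_utility = p_good * u_good + p_bad * u_bad
print(expected_utility)        # -5.0: the rare outcome dominates despite its low probability
```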

Comment by steven on True Ending: Sacrificial Fire (7/8) · 2009-02-05T12:49:56.000Z · LW · GW

The Superhappies can expand very quickly in principle, but it's not clear that they're doing so

We (or "they" rather; I can't identify with your fanatically masochist humans) should have made that part of the deal, then. Also, exponential growth quickly swamps any reasonable probability penalty.

I'm probably missing something, but like others I don't get why the Superhappies implemented part of Babyeater morality if negotiations failed.

Comment by steven on True Ending: Sacrificial Fire (7/8) · 2009-02-05T12:26:23.000Z · LW · GW

Shutting up and multiplying suggests that we should neglect all effects except those on the exponentially more powerful species.

Comment by steven on Three Worlds Decide (5/8) · 2009-02-03T15:49:38.000Z · LW · GW

Peter, destroying Huygens isn't obviously the best way to defect, as in that scenario the Superhappies won't create art and humor or give us their tech.

Comment by steven on Three Worlds Decide (5/8) · 2009-02-03T12:54:58.000Z · LW · GW

If they're going to play the game of Chicken, then symbolically speaking the Confessor should perhaps stun himself to help commit the ship to sufficient insanity to go through with destroying the solar system.

Comment by steven on Interlude with the Confessor (4/8) · 2009-02-02T13:48:00.000Z · LW · GW

Well... would you prefer a life entirely free of pain and sorrow, having sex all day long?

False dilemma.

Comment by steven on The Baby-Eating Aliens (1/8) · 2009-02-01T11:27:00.000Z · LW · GW

Can a preference against arbitrariness ever be stable? Non-arbitrariness seems like a pretty arbitrary thing to care about.

Comment by steven on The Baby-Eating Aliens (1/8) · 2009-02-01T11:17:00.000Z · LW · GW

I would greatly prefer that there be Babyeaters, or even to be a Babyeater myself, than the black hole scenario, or a paperclipper scenario.

Seems to me it depends on the parameter values.

Comment by steven on 31 Laws of Fun · 2009-01-27T02:25:00.000Z · LW · GW

For what it's worth, I've always enjoyed stories where people don't get hurt more than stories where people do get hurt. I don't find previously imagined utopias that horrifying either.

Comment by steven on BHTV: Yudkowsky / Wilkinson · 2009-01-26T03:23:30.000Z · LW · GW

I agree with Johnicholas. People should do this over IRC and call it "bloggingheadlessnesses".

Comment by steven on Failed Utopia #4-2 · 2009-01-22T21:30:00.000Z · LW · GW

In view of the Dunbar thing I wonder what people here see as a eudaimonically optimal population density. 6 billion people on Mars, if you allow for like 2/3 oceans and wilderness, means a population density of 100 per square kilometer, which sounds really really high for a cookie-gatherer civilization. It means if you live in groups of 100 you can just about see the neighbors in all directions.
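A quick back-of-the-envelope check of that figure (the Mars surface area below is a standard value I'm supplying; it wasn't in the original comment):

```python
# Rough arithmetic behind the "100 per square kilometer" figure.
mars_surface_km2 = 1.448e8      # approximate total surface area of Mars
habitable_fraction = 1 / 3      # the other ~2/3 left as oceans and wilderness
population = 6e9

density = population / (mars_surface_km2 * habitable_fraction)
print(round(density))           # ~124 people per km^2, i.e. on the order of 100

group_size = 100                # Dunbar-ish groups of 100
km2_per_group = group_size / density
print(round(km2_per_group, 2))  # ~0.8 km^2 per group, so neighboring groups are ~1 km away
```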

Comment by steven on Failed Utopia #4-2 · 2009-01-22T02:04:00.000Z · LW · GW

"boreana"

This means "half Bolivian half Korean" according to urbandictionary. I bet I'm missing something.

Perhaps we should have a word ("mehtopia"?) for any future that's much better than our world but much worse than could be. I don't think the world in this story qualifies for that; I hate to be the negative guy all the time, but if you keep human nature the same and "set guards in the air that prohibit lethal violence, and any damage less than lethal, your body shall repair", people may still abuse one another a lot physically and emotionally. Also I'm not keen on having to do a space race against a whole planet full of regenerating vampires.

Comment by steven on Failed Utopia #4-2 · 2009-01-21T18:32:36.000Z · LW · GW

The fact that this future takes no meaningful steps toward solving suffering strikes me as a far more important Utopia fail than the gender separation thing.

Comment by steven on Getting Nearer · 2009-01-17T14:43:51.000Z · LW · GW

Or "what if you wake up in Dystopia?" and tossed out the window.

What is the counterargument to this? Maybe something like "waking up in Eutopia is as good as waking up in Dystopia is bad, and more probable"; but both of those statements would have to be substantiated.

Comment by steven on Justified Expectation of Pleasant Surprises · 2009-01-15T12:58:21.000Z · LW · GW

So could it be said that whenever Eliezer says "video game" he really means "RPG", as opposed to strategy games which have different principles of fun?

Comment by steven on Eutopia is Scary · 2009-01-14T18:01:52.000Z · LW · GW

Probably the space you could visit at light speed in a given subjective time would be unreasonably large, depending on speedup and miniaturization.

Comment by steven on Building Weirdtopia · 2009-01-13T16:47:36.000Z · LW · GW

Few of these weirdtopias seem strangely appealing in the same way that conspiratorial science seems strangely appealing.

Comment by steven on Building Weirdtopia · 2009-01-13T01:12:25.000Z · LW · GW

I think the most you can plausibly say is that for humanlike architectures, memories of suffering (not necessarily true ones) are necessary to appreciate pleasures more complex than heroin. Probably what matters is that there's some degree of empathy with suffering, whether or not that empathy comes from memories. Even in that weakened form the statement doesn't sound plausible to me.

Anyway it seems to me that utopianly speaking the proper psychological contrast for pleasure is sobriety rather than pain.

Comment by steven on Eutopia is Scary · 2009-01-12T15:07:05.000Z · LW · GW

Perhaps a benevolent singleton would cripple all means of transport faster than say horses and bicycles, so as to preserve/restore human intuitions and emotions relating to distance (far away lands and so on)?

Comment by steven on Serious Stories · 2009-01-09T11:24:31.000Z · LW · GW

If I'm 50% sure that the asymmetry between suffering and happiness exists just because it's very difficult to make humans happy (so that in general achieving great happiness is about as important as avoiding great suffering), and 50% sure that the asymmetry comes from something intrinsic to how these things work (so that avoiding great suffering is maybe a hundred times as important), should I act in the meantime as if avoiding great suffering is slightly over 50 times as important as achieving great happiness, slightly under 2 times as important, or something in between? This is where you need the sort of moral uncertainty theory that Nick Bostrom has been working on, I think.
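A minimal sketch (my framing, not the comment's) of where those two candidate numbers come from: averaging the importance ratio gives one answer, averaging its reciprocal gives the other.

```python
# 50/50 uncertainty over two hypotheses about the suffering/happiness asymmetry.
p = 0.5
ratio_if_practical = 1.0     # asymmetry is merely practical: the two are equally important
ratio_if_intrinsic = 100.0   # asymmetry is intrinsic: avoiding suffering is ~100x as important

# Averaging the ratio (units of suffering-importance per unit of happiness-importance):
print(p * ratio_if_practical + p * ratio_if_intrinsic)        # 50.5 -> "slightly over 50 times"

# Averaging the reciprocal instead, then inverting:
print(1 / (p / ratio_if_practical + p / ratio_if_intrinsic))  # ~1.98 -> "slightly under 2 times"
```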

Comment by steven on Serious Stories · 2009-01-09T11:15:17.000Z · LW · GW

I suspect climbing Everest is much more about effort and adventure than about actual pain. Also, the vast majority of people don't do that sort of thing as far as I know.

Comment by steven on Emotional Involvement · 2009-01-07T08:51:51.000Z · LW · GW

I think putting it as "eudaimonia vs simple wireheading" is kind of rhetorical; I agree eudaimonia is better than complex happy mind states that don't correspond to the outside world, but I think complex happy mind states that don't correspond to the outside world are a lot better than simple wireheading.

Comment by steven on Emotional Involvement · 2009-01-07T08:48:28.000Z · LW · GW

For alliances to make sense it seems to me there have to be conflicts; do you expect future people to get in each other's way a lot? I guess people could have conflicting preferences about what the whole universe should look like that couldn't be satisfied in just their own corner, but I also guess that this sort of issue would be only a small percentage of what people cared about.

Comment by steven on Growing Up is Hard · 2009-01-04T19:10:43.000Z · LW · GW

Patri, try "Algernon's Law"

Comment by steven on Harmful Options · 2008-12-25T05:53:01.000Z · LW · GW

The rickroll example actually applies to all agents, including ideal rationalists. Basically you're giving the victim an extra option that you know the victim thinks is better than it actually is. There's no reason why this would apply to humans only or to humans especially.

Comment by steven on High Challenge · 2008-12-19T17:02:24.000Z · LW · GW

Oh, massive crosspost.

Comment by steven on High Challenge · 2008-12-19T17:01:20.000Z · LW · GW

That one bothered me too. Perhaps you could say bodies are much more peripheral to people's identities than brains, so that in the running case what is being tested is meat that happens to be attached to you and in the robot case it's you yourself. On the other hand I'd still be me with some minor brain upgrades.

Comment by steven on High Challenge · 2008-12-19T16:05:28.000Z · LW · GW

Computer games are the devil, but I agree strongly with Hyphen: the good ones are like sports, not work.

Comment by steven on For The People Who Are Still Alive · 2008-12-15T11:09:09.000Z · LW · GW

Not sure global diversity, as opposed to local diversity or just sheer quantity of experience, is the only reason I prefer there to be more (happy) people.

Comment by steven on For The People Who Are Still Alive · 2008-12-14T19:27:02.000Z · LW · GW

And where I just said "universe" I meant a 4D thing, with the dials each referring to a 4D structure and time never entering into the picture.

Comment by steven on For The People Who Are Still Alive · 2008-12-14T19:24:14.000Z · LW · GW

Eliezer, I don't think your reality fluid is the same thing as my continuous dials, which were intended as an alternative to your binary check marks. I think we can use algorithmic complexity theory to answer the question "to what degree is a structure (e.g. a mind-history) implemented in the universe" and then just make sure valuable structures are implemented to a high degree and disvaluable structures are implemented to a low degree. The reason most minds should expect to see ordered universes is because it's much easier to specify an ordered universe and then locate a mind within it, than it is to specify a mind from scratch. If this commits me to believing funny stuff like people with arrows pointing at them are more alive than people not with arrows pointing at them, I'm inclined to say "so be it".
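A compact way to write that comparison, in notation I'm adding (K is prefix Kolmogorov complexity, m a mind-history, u a universe, \ell the location of m within u):

```latex
K(m) \;\le\; K(u) + K(\ell \mid u) + O(1)
% For an ordered (lawful) universe, K(u) is small and K(\ell \mid u) is modest,
% so this bound is far below any "from scratch" description of m; hence most of
% m's algorithmic weight, roughly 2^{-K(m)}, comes from ordered universes
% containing it.
```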

Comment by steven on For The People Who Are Still Alive · 2008-12-14T17:49:06.000Z · LW · GW

Also "standard model" doesn't mean what you think it means and "unpleasant possibility" isn't an argument.

Comment by steven on For The People Who Are Still Alive · 2008-12-14T17:45:12.000Z · LW · GW

I'm completely not getting this. If all possible mind-histories are instantiated at least once, and their being instantiated at least once is all that matters, then how does anything we do matter?

If you became convinced that people had not just little checkmarks but little continuous dials representing their degree of existence (as measured by algorithmic complexity), how would that change your goals?

Comment by steven on The Mechanics of Disagreement · 2008-12-12T20:45:12.000Z · LW · GW

Hal, it also requires that you see each other as seeing each other that way, that you see each other as seeing each other as seeing each other that way, that you see each other as seeing each other as seeing each other as seeing each other that way, and so on.

Comment by steven on You Only Live Twice · 2008-12-12T20:35:06.000Z · LW · GW

I agree that a future world with currently-existing people still living in it is more valuable than one with an equal number of newly-created people living in it after the currently-existing people died, but to show that cryonics is a utilitarian duty you'd need to show not just that this is a factor but that it's an important enough factor to outweigh whatever people are sacrificing for cryonics (normalcy capital!). Lots of people are dead already so whether any single person lives to see the future can constitute at most a tiny part of the future's value.

Comment by steven on Logical or Connectionist AI? · 2008-11-17T12:54:04.000Z · LW · GW

Russell, I think the point is we can't expect Friendliness theory to take less than 30 years.

Comment by steven on Mundane Magic · 2008-10-31T16:53:53.000Z · LW · GW

Awesome post, but somebody should do the pessimist version, rewriting various normal facets of the human condition as horrifying angsty undead curses.

Comment by steven on Measuring Optimization Power · 2008-10-28T01:29:52.000Z · LW · GW

I guess it works out if, given the existence of an optimizer, any number of bits of optimization being exerted is as probable as any other number; but if that is the prior we're starting from, then this seems worth stating (unless it follows from the rest in a way that I'm overlooking).

Comment by steven on Measuring Optimization Power · 2008-10-28T01:22:37.000Z · LW · GW

The quantity we're measuring tells us how improbable this event is, in the absence of optimization, relative to some prior measure that describes the unoptimized probabilities. To look at it another way, the quantity is how surprised you would be by the event, conditional on the hypothesis that there were no optimization processes around. This plugs directly into Bayesian updating

This seems to me to suggest the same fallacy as the one behind p-values... I don't want to know the tail area, I want to know the probability for the event that actually happened (and only that event) under the hypothesis of no optimization divided by the same probability under the hypothesis of optimization. Example of how they can differ: if we know in advance that any optimizer would optimize at least 100 bits, then a 10-bit-optimized outcome is evidence against optimization even though the probability given no optimization of an event at least as preferred as the one that happened is only 1/1024.
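A toy numerical version of that example (the exact distribution under "no optimization" is an assumption I'm adding to make the numbers concrete):

```python
# Let the outcome be the number of bits of optimization apparent in the result.
# Assume P(exactly k bits | no optimizer) = 2**-(k+1), so the tail
# P(at least k bits | no optimizer) = 2**-k.  Assume also that any real
# optimizer in this domain always exerts at least 100 bits.

k = 10  # observed: a 10-bit-optimized outcome

tail_area = 2.0 ** -k                 # 1/1024: the p-value-like quantity,
print(tail_area)                      # which looks like strong evidence FOR an optimizer

p_obs_given_no_opt = 2.0 ** -(k + 1)  # probability of the exact outcome with no optimizer
p_obs_given_opt = 0.0                 # an optimizer would have produced >= 100 bits

print(p_obs_given_opt / p_obs_given_no_opt)  # 0.0: the likelihood ratio says this outcome
                                             # is evidence AGAINST optimization
```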

Comment by steven on Expected Creative Surprises · 2008-10-26T12:01:09.000Z · LW · GW

Re: calibration, it seems like what we want to do is this: imagine the agent being asked lots of different probability questions, and consider the true probability as a function of the probability stated by the agent. Use some prior distribution (describing our uncertainty) over all such functions the agent could have, update it using a finite set of answers we have seen the agent give and their correctness, and end up with a posterior distribution over functions (agent's probability -> true probability). From that posterior we can estimate how over/underconfident the agent is at each probability level, and use those estimates to determine what the agent "really means" when it says 90%. If the agent is overconfident at all probabilities then it's "overconfident" period; if it's underconfident at all probabilities then it's "underconfident" period; if it's over at some and under at others then I guess it's just "misconfident"? (An agent could be usually overconfident in an environment that usually asked it difficult questions and usually underconfident in an environment that usually asked it easy questions, or vice versa.)

If we keep asking an agent that doesn't learn the same question, like in anon's comment, that seems like a degenerate case. On first think it doesn't seem like an agent's calibration function necessarily depends on what questions you ask it; Solomonoff induction is well-calibrated in the long run in all environments (right?), and you could imagine an agent that was like SI but with all probability outputs twice as close to 1, say. Hope this makes any sense.
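A minimal sketch of one way to implement this, under a simplifying assumption I'm adding (an independent Beta prior per stated-probability bin, rather than a full prior over calibration functions):

```python
from collections import defaultdict

class BinnedCalibration:
    """Estimate the 'true probability' corresponding to each stated probability."""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha0, self.beta0 = alpha, beta        # Beta prior for each bin
        self.counts = defaultdict(lambda: [0, 0])    # bin -> [correct, incorrect]

    def observe(self, stated_p, was_correct):
        key = round(stated_p, 1)                     # crude binning of stated probabilities
        self.counts[key][0 if was_correct else 1] += 1

    def true_probability(self, stated_p):
        correct, incorrect = self.counts[round(stated_p, 1)]
        # Posterior mean of the frequency of correctness when the agent says stated_p.
        return (self.alpha0 + correct) / (self.alpha0 + self.beta0 + correct + incorrect)

cal = BinnedCalibration()
for was_correct in [True, True, False, True, False, False, True]:
    cal.observe(0.9, was_correct)
print(cal.true_probability(0.9))  # ~0.56 < 0.9: overconfident at the 90% level on this data
```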

Comment by steven on Traditional Capitalist Values · 2008-10-17T19:52:24.000Z · LW · GW

AFAIK the 9/11 people didn't believe they would die in any real sense.

Comment by steven on Traditional Capitalist Values · 2008-10-17T18:22:18.000Z · LW · GW

you will realize that Osama bin Laden would be far more likely to say, "I hate pornography" than "I hate freedom"

There's a difference between hating freedom and saying you hate freedom. There's also a difference between hating freedom and hating our freedom; the latter phrasing rules out Bin Laden misredefining the word to suit his own purposes. And thirdly it's possible to hate freedom and hate pornography more than freedom.

Comment by steven on Crisis of Faith · 2008-10-16T13:28:00.000Z · LW · GW

Eliezer, that's a John McCarthy quote.

Comment by steven on Why Does Power Corrupt? · 2008-10-14T09:15:40.000Z · LW · GW

Isn't the problem often not that people betray their ideals, but that their ideals were harmful to begin with? Do we know that not-yet-powerful Stalin would have disagreed (internally) with a statement like "preserving Communism is worth the sacrifice of sending a lot of political opponents to gulags"? If not then maybe to that extent everyone is corrupt and it's just the powerful that get to act on it. Maybe it's also the case that the powerful are less idealistic and more selfish, but then there are two different "power corrupts" effects at play.

Comment by steven on Crisis of Faith · 2008-10-12T23:48:00.000Z · LW · GW

It's important in these crisis things to remind yourself that 1) P does not imply "there are no important generally unappreciated arguments for not-P", and 2) P does not imply "the proponents of P are not all idiots, dishonest, and/or users of bad arguments". You can switch sides without deserting your favorite soldiers. IMO.

Comment by steven on Crisis of Faith · 2008-10-12T09:58:08.000Z · LW · GW

One more argument against deceiving epistemic peers when it seems to be in their interest is that if you are known to have the disposition to do so, this will cause others to trust your non-deceptive statements less; and here you could recommend that they shouldn't trust you less, but then we're back into doublethink territory.