Comments

Comment by Anna_Salamon_and_Steve_Rayhawk on The Baby-Eating Aliens (1/8) · 2009-02-01T11:06:00.000Z · LW · GW

Unknown, how certain are you that you would retain that preference if you "knew more, thought faster"? How certain are you that Eliezer would retain the opposite preference and that we are looking at real divergence? I have little faith in my initial impressions concerning Babyeaters vs. black holes; it's hard for me to understand the Babyeater suffering, or the richness of their lives vs. of black holes, as more than a statistic.

Eliezer, regarding (2), it seems plausible to me (I'd assign perhaps 10% probability mass) that if there is a well-formed goal with the non-arbitrariness property that both Hollerith and Roko seem partly to be after, there is a reasonable notion of extrapolation (though probably a minority of such notions) under which 95% of humans would converge to that goal. Yes, Hollerith got there by a low-probability path; but the non-arbitrariness he is (sort of) aiming for, if realizable, suggests his aim could be reached by other paths as well. And there are free variables in one's choice of "reasonable" notions of extrapolation that could be set to make non-arbitrariness more plausible. For example, one could give more weight to less arbitrary preferences (e.g., to whatever human tendencies lead us to appreciate Go or the integers or other parts of mathematics), or to types of value-shifts that make our values less arbitrary (e.g., to preferences for values with deep coherence, or to preferences similar to a revulsion for lost purposes), or one could include farther-back physical processes (e.g., biological evolution, gamma rays) as part of the "person" one is extrapolating.
[I realize the above claim differs from my original (2).] Do you disagree?

Richard, I don't see why pulling numbers out of the air is absurd. We're all taking action in the face of uncertainty. If we put numbers on our uncertainty, we give others more opportunity to point out problems in our models so we can learn (e.g., it's easier to notice if we're assigning too-high probabilities to conjunctions, or if our probabilities for mutually exclusive outcomes sum to more than 1).
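As a concrete illustration (not part of the original comment; all labels and numbers below are hypothetical), a few lines of Python can mechanize the kind of coherence check that explicit numbers make possible: mutually exclusive, exhaustive outcomes should sum to roughly 1, and a conjunction should never be rated more probable than either of its conjuncts.

```python
# Sketch of simple coherence checks on stated probabilities.
# All beliefs and numbers here are made-up examples.

def check_exclusive_outcomes(probs):
    """Probabilities of mutually exclusive, exhaustive outcomes should sum to ~1."""
    total = sum(probs.values())
    if abs(total - 1.0) > 1e-6:
        print(f"Incoherent: outcome probabilities sum to {total:.2f}, not 1")

def check_conjunction(p_a, p_b, p_a_and_b):
    """A conjunction can never be more probable than either conjunct."""
    if p_a_and_b > min(p_a, p_b):
        print("Incoherent: P(A and B) exceeds P(A) or P(B)")

# Hypothetical stated beliefs:
check_exclusive_outcomes({"negative singularity": 0.5,
                          "positive singularity": 0.4,
                          "no singularity": 0.3})      # flags: sums to 1.2
check_conjunction(p_a=0.3, p_b=0.4, p_a_and_b=0.35)    # flags: conjunction too high
```

Stating the numbers is what lets a check like this (or a critical reader) catch the inconsistency at all; vague verbal confidence offers nothing to push against.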

Comment by Anna_Salamon_and_Steve_Rayhawk on The Baby-Eating Aliens (1/8) · 2009-02-01T08:30:00.000Z · LW · GW

Hollerith, you are now officially as weird as a Yudkowskian alien. If I ever write this species I'll name it after you.

Eliezer, to which of the following possibilities would you accord significant probability mass? (1) Richard Hollerith would change his stated preferences if he knew more and thought faster, for all reasonable meanings of "knew more and thought faster"; (2) There's a reasonable notion of extrapolation under which all normal humans would agree with a goal in the vicinity of Richard Hollerith's stated goal; (3) There exist relatively normal (non-terribly-mutated) current humans A and B, and reasonable notions of extrapolation X and Y, such that "A's preferences under extrapolation-notion X" and "B's preferences under extrapolation-notion Y" differ as radically as your and Richard Hollerith's preferences appear to diverge.

Comment by Anna_Salamon_and_Steve_Rayhawk on Above-Average AI Scientists · 2008-09-30T05:51:00.000Z · LW · GW

Phil, your analysis depends a lot on what the probabilities are without Eliezer.

If Eliezer vanished, what probabilities would you assign to: (A) someone creating a singularity that removes most/all value from this part of the universe; (B) someone creating a positive singularity; (C) something else (e.g., humanity staying around indefinitely without a technological singularity)? Why?