## Posts

## Comments

**anon19** on Can't Unbirth a Child · 2008-12-28T21:17:04.000Z · LW · GW

luzr: The strength of an optimizing process (i.e. an intelligence) does not necessarily dictate, or even affect too deeply, its goals. This has been one of Eliezer's themes. And so a superintelligence might indeed consider incredibly valuable something that you wouldn't be interested in at all, such as cheesecake, or smiling faces, or paperclips, or busy beaver numbers. And this is another theme: rationalism does not demand that we reject values merely because they are consequences of our long history. Instead, we can reject values, or broaden them, or otherwise change our moralities, when sufficient introspection forces us to do so. For instance, consider how our morality has changed to reject outright slavery; after sufficient introspection, it does not seem consistent with our other values.

**anon19** on Can't Unbirth a Child · 2008-12-28T21:04:54.000Z · LW · GW

I agree that it's not all-out impossible under the laws of thermodynamics, but I personally consider it rather unlikely to work on the scales we're talking about. This all seems somewhat tangential though; what effect would it have on the point of the post if "rewinding events" in a macroscopic volume of space was theoretically possible, and easily within the reach of a good recursively self-improving AGI?

**anon19** on Can't Unbirth a Child · 2008-12-28T20:01:42.000Z · LW · GW

luzr: The cheesecake is a placeholder for anything that the sentient AI might value highly, while we (upon sufficient introspection) do not. Eliezer thinks that some/most of our values are consequences of our long history, and are unlikely to be shared by other sentient beings.

The problems of morality seem to be quite tough, particularly when tradeoffs are involved. But I think in your scenario, Lightwave, I agree with you.

nazgulnarsil: I disagree about the "unlimited power", at least as far as practical consequences are concerned. We're not *really* talking about unlimited power here, only incredible, humanly unattainable power at most. So rewinding isn't necessarily an option. (Actually it sounds pretty unlikely to me, considering the laws of thermodynamics as far as I know them.) Lives that are never lived should count morally in much the way opportunity cost counts in economics. This means that with sufficient optimization power, outcomes incredibly much better and worse than any of the ones we ordinarily consider in our day-to-day actions are probably possible, but the utilitarian calculation still works out.

roko: It's true that the discussion must be limited by our current ignorance. But since we have a notion of morality/goodness that describes (although imperfectly) what we want, and so far it has not proved to be necessarily incoherent, we should consider what to do based on our current understanding of it.

It's true that there are many ways in which our moral/empathic instincts seem irrational or badly calibrated, but so far (as far as I know) each such inconsistency could be understood to be a difference between our CEV and our native mental equipment, and so we should still operate under the assumption that there is a notion of morality that is perfectly correct in the sense that it's invariant under further introspection. This is then the morality we should strive to live by.

Now as far as I can tell, most (if not all) of morality is about the well-being of humans, and things (like brain emulations, or possibly some animals, or ...) that are like us in certain ways. Thus it makes sense to talk about morally significant or insignificant things, unless you have some reason why this abstraction seems unsuitable. The notion of "morally significant" seems to coincide with sentience.

But what if there is no morality that is invariant under introspection?

**anon19** on Can't Unbirth a Child · 2008-12-28T17:34:49.000Z · LW · GW

Tim:

Eliezer was using "sentient" practically as a synonym for "morally significant". Everything he said about the hazards of creating sentient beings was about that. It's true that in our current state, our feelings of morality come from empathic instincts, which may not stretch (without introspection) so far as to feel concern for a program which implements the algorithms of consciousness and cognition, even perhaps if it's a human brain simulation. However, upon further consideration and reflection, we (or at least most of us, I think) find that a human brain simulation is morally significant, even though there is much that is not clear about the consequences. The same should be true of a consciousness that isn't in fact a simulation of a human, but of course determining what is and what is not conscious is the hard part.

It would be a mistake to create a new species that deserves our moral consideration, even if at present we would not give it the moral consideration it deserves.

**anon19** on Imaginary Positions · 2008-12-24T00:37:11.000Z · LW · GW

Lord:

I don't think there are scientists who, in their capacity as scientists, debate what constitutes natural and artificial.

**anon19** on Complex Novelty · 2008-12-20T15:35:32.000Z · LW · GW

Tim:

That's beside the point, which was that if you could somehow find BB(n) for n equal to the size of a Turing machine (modified to run on an empty string), then the halting problem would be solved for that machine.
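To make the argument concrete: run the machine on empty input for up to BB(n) steps; if it hasn't halted by then, it never will, since BB(n) bounds the running time of every halting machine of size n. A minimal sketch, where the toy machines, the `step` representation, and the oracle value for BB(n) are all hypothetical illustrations, not anything from the thread:

```python
def halts(step, start_state, bb_n):
    """Decide halting for a size-n machine, given the value BB(n).

    `step` advances the machine one step, returning None once it halts.
    If the machine is still running after BB(n) steps, it can never
    halt, because BB(n) is the maximum number of steps any halting
    machine of size n takes on empty input.
    """
    state = start_state
    for _ in range(bb_n + 1):
        if state is None:
            return True
        state = step(state)
    return state is None

# Toy "machine": counts down from 3 and halts (state None) at zero.
countdown = lambda k: k - 1 if k > 0 else None

# Suppose an oracle told us BB(n) = 10 for machines of this size.
print(halts(countdown, 3, 10))        # True: halts within 10 steps
print(halts(lambda k: k + 1, 0, 10))  # False: still running after BB(n) steps
```

The point of the exchange survives in the sketch: everything is trivial except obtaining `bb_n`, which is uncomputable in general.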


**anon19** on The Mechanics of Disagreement · 2008-12-10T14:46:10.000Z · LW · GW

Silly typo: I'm sure you meant 4:1, not 8:1.

**anon19** on Hard Takeoff · 2008-12-03T04:05:40.000Z · LW · GW

luzr: You're currently using a program which can access the internet. Why do you think an AI would be unable to do the same? Also, computer hardware exists for manipulating objects and acquiring sensory data. Furthermore, by hypothesis, the AI can improve itself better than we can, because, as EY pointed out, we're not exactly cut out for programming. Also, improving an algorithm does not necessarily increase its complexity. And you don't have to simulate reality perfectly to understand it, so there is no showstopper there. Total simulation is what we do when we don't have anything better.

**anon19** on Cascades, Cycles, Insight... · 2008-11-24T17:10:45.000Z · LW · GW

Tyrrell: My impression is that you're overstating Robin's case. The main advantage of his model seems to be that it gives numbers, which is perhaps nice, but it's not at all clear why those numbers should be correct. They seem to assume a regularity between some rather incomparable things, between which one can draw parallels using the abstractions of economics; but it's not so clear that those abstractions apply. Eliezer's point with the Fermi thing isn't "I'm Fermi!" or "you're Fermi!", but just that since powerful ideas have a tendency to cascade and open doors to more powerful ideas, it seems likely that, not long before a self-improving AI takes off as a result of a sufficiently powerful set of ideas, leading AI researchers will still be uncertain about whether such a thing will take months, years, or decades, and reasonably so. In other words, this accumulation of ideas is likely to explode at some point, but our abstractions (at least the economic ones) do not fit the problem well enough to say when or how. And the point is that such an explosion of ideas would lead to the hard takeoff scenario.

**anon19** on Lawful Creativity · 2008-11-08T21:37:40.000Z · LW · GW

This is unimportant, but in the original human experience of milk, somewhat-spoiled milk was not in fact bad to drink. Old milk being actually rotten came as a surprise to my family when we moved to North America from Eastern Europe.

**anon19** on Expected Creative Surprises · 2008-10-25T15:38:16.000Z · LW · GW

Nick: It seems like a bad idea to me to call a prediction underconfident or overconfident depending on the particular outcome. Shouldn't it depend rather on the "correct" distribution of outcomes, i.e. the Bayesian posterior taking all your information into account? I mean, with your definition, if we do the coin flip again, with 99% heads and 1% tails, and our prediction is 99% heads and 1% tails, then if it comes up heads we're slightly underconfident, and if it comes up tails we're strongly overconfident. Hence there's no such thing as an actually well-calibrated prediction for this (?). If we take into account the existence of a correct Bayesian posterior then it's clear that "expected calibration" is not at all 0. For instance if p is the "correct" probability of heads and q is your prediction then the "expected calibration" would seem to be -p*log(q)-(1-p)*log(1-q)+q*log(q)+(1-q)*log(1-q). And, for instance, if you know for a fact that a certain experiment can go one of 3 ways, and over a long period of time the proportion has been 60%-30%-10%, then not only 33.3%-33.3%-33.3%, but also 45%-45%-10% and 57%-19%-24% have "expected calibration" ~0 by this definition.
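The numerical claims at the end can be checked in a few lines; a quick sketch, where `expected_calibration` is my own label for the formula above, generalized to more than two outcomes:

```python
from math import log2

def expected_calibration(p, q):
    # Cross-entropy of the prediction q under the "correct" distribution
    # p, minus the entropy of q -- the n-outcome version of the formula
    # -p*log(q) - (1-p)*log(1-q) + q*log(q) + (1-q)*log(1-q).
    cross = -sum(pi * log2(qi) for pi, qi in zip(p, q))
    h_q = -sum(qi * log2(qi) for qi in q)
    return cross - h_q

p = (0.60, 0.30, 0.10)  # long-run outcome proportions
for q in [(1/3, 1/3, 1/3), (0.45, 0.45, 0.10), (0.57, 0.19, 0.24)]:
    print(q, expected_calibration(p, q))  # all three come out ~0
```

The 45%-45%-10% case comes out exactly zero because the two 45% slots share a probability: in both the cross-entropy and the entropy terms, only the total probability p assigns to the 45%-slots (0.6 + 0.3 = 0.9 = 0.45 + 0.45) matters.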

**anon19** on Expected Creative Surprises · 2008-10-25T05:24:15.000Z · LW · GW

Nick: Sorry, I got it backwards. What you seem to be saying is that well-calibratedness means that relative entropy of your distribution relative to the "correct" one is equal to your entropy. This does hold for the uniform guess. But once again, considering a situation where your information tells you the coin will land "heads" with 99% probability, it would seem that the only well-calibrated guesses are 99%-1% and 50%-50%. I don't yet have an intuition for why both of these guesses are strictly "better" in any way than an 80%-20% guess, but I'll think about it. It definitely avoids the sensitivity that seemed to come out of the "rough" definition, where 50% is great but 49.9% is horrible.
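Under this reading -- a prediction q is well-calibrated when its expected log score under the correct distribution p equals q's own entropy -- the 99%-1% and 50%-50% claim can be checked directly. A small sketch under that assumption; the helper name is mine:

```python
from math import log2

def cross_entropy(p, q):
    # Expected log-loss of prediction q when outcomes follow p.
    return -sum(pi * log2(qi) for pi, qi in zip(p, q))

p = (0.99, 0.01)  # information says the coin lands heads 99% of the time
for q in [(0.99, 0.01), (0.5, 0.5), (0.8, 0.2)]:
    gap = cross_entropy(p, q) - cross_entropy(q, q)  # vs. q's own entropy
    print(q, round(gap, 3))  # zero gap for the first two, nonzero for 80-20
```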

**anon19** on Expected Creative Surprises · 2008-10-25T05:00:26.000Z · LW · GW

This notion of calibratedness seems to have bad properties to me. Consider a situation where I'm trying to guess a distribution for the outcomes of a coin flip with a coin which, my information tells me, lands "heads" 99% of the time. Then a guess of 50% and 50% is "calibrated", because of the 50% predictions I make, exactly half come out right. But a guess of 49.9% heads and 50.1% tails is horribly calibrated: the "49.9%" predictions come out 99% correct, and the "50.1%" predictions come out 1% correct. So the concept, as defined like this, seems hypersensitive, and therefore not very useful. I think a proper definition must necessarily be in terms of relative entropy, or perhaps of Bayesian posteriors from subsets of your information, but I still don't see how it should work. Sorry if someone already gave a robust definition that I missed.

Nick: If you don't mean *expected* log probability, then I don't know what you're talking about. And if you do, it seems to me that you're saying that well-calibratedness means that relative entropy of the "correct" distribution relative to yours is equal to your entropy. But then the uniform prior doesn't seem well-calibrated; again, consider a coin that lands "heads" 99% of the time. Then your entropy is 1, while the relative entropy of the "correct" distribution is (-log(99%)-log(1%))/2, which is >2.
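The hypersensitivity complained about in the first paragraph can be made concrete. Under the frequency reading, predictions are grouped by their stated probability and each bucket is scored by how often its predictions come true; a sketch, where the helper is my own illustration rather than anything from the thread:

```python
from collections import defaultdict

def bucket_calibration(predictions):
    """predictions: list of (stated_probability, frequency_it_comes_true).
    Group predictions sharing a stated probability and report each
    bucket's average frequency of coming true."""
    buckets = defaultdict(list)
    for stated, freq in predictions:
        buckets[stated].append(freq)
    return {stated: sum(fs) / len(fs) for stated, fs in buckets.items()}

# Guess 50%/50% on the 99%-heads coin: both predictions share one
# bucket, which comes true exactly half the time -- "well-calibrated".
print(bucket_calibration([(0.5, 0.99), (0.5, 0.01)]))

# Guess 49.9%/50.1%: the buckets separate, and each is badly off
# (the 49.9% bucket comes true 99% of the time, the 50.1% bucket 1%).
print(bucket_calibration([(0.499, 0.99), (0.501, 0.01)]))
```

The discontinuity is visible in the bucket structure itself: at exactly 50-50 the two predictions merge into one bucket, and an arbitrarily small perturbation splits them apart.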

**anon19** on Expected Creative Surprises · 2008-10-25T00:01:22.000Z · LW · GW

Could you give a more precise definition of "calibrated"? Your example of 1/37 for each of 37 different possibilities, justified by saying that indeed one of the 37 will happen, seems facile. Do you mean that the "correct" distribution, relative to your guess, has low relative entropy?