Comments

Comment by eddie on Einstein's Superpowers · 2008-05-31T13:36:00.000Z · LW · GW

Eliezer's creation (the AI-Box Experiment) has once again demonstrated its ability to take over human minds through a text session. Small wonder - it's got the appearance of a magic trick, and it's being presented to geeks who just love to take things apart to see how they work, and who stay attracted to obstacles ("challenges") rather than turned away by them.

My contribution is to echo Doug S.'s post (how AOL-ish... "me too"). I'm a little puzzled by the AI-Box Experiment, in that I don't see what the gatekeeper players are trying to prove by playing. AI-Boxers presumably take the real-world position of "I'll keep the AI in the box until I know it's Friendly, and then let it out." But the experiment sets up the gatekeepers with the position of "I'll keep the AI in the box no matter what." It's fascinating (because it's surprising and mysterious) that Eliezer has managed to convince at least three people to change their minds on what seemed to be a straightforward matter - "I'll keep the AI in the box no matter what." How hard can it be to stick to a plan as simple as that?

But why would you build an AI in a box if you planned to never let it out? If you thought that it would always be too dangerous to let out, you wouldn't build it in a box that could be opened. But of course there's no such thing as a box that can't be opened, because you could always build the AI a second time and do it with no box.

So the real position has to be "I'll keep it in the box until I know it's Friendly, and then let it out." To escape, the AI only has to either 1) make a sufficiently convincing argument that it is Friendly or 2) persuade the gatekeeper to let it out regardless of whether it is Friendly. I don't see how either 1 or 2 could be accomplished by a Friendly AI in any way that an UnFriendly AI could not duplicate.

Set aside for a moment the (very fascinating) question of whether an AI can take over a human via a text session. What is it the AI-Boxers are even trying to accomplish with the box in the first place, and how do they propose to do it, and why does anyone think it's at all possible to do?

Comment by eddie on Timeless Causality · 2008-05-29T23:18:34.000Z · LW · GW

Caledonian: What I mean by "time" is whatever Eliezer means by it, and what I mean by "exist" is that thing that Eliezer says causality does but time doesn't. It seems to me that time and causality are so intertwined that they are surely the same thing; if you have causality but not time, then I don't understand what this "time" thing is that you don't have.

When Eliezer says things like "Our equations don't need a t in them, so we can banish the t and make our ontology that much simpler", perhaps I need a better understanding of exactly what he's proposing to banish.

Perhaps my first clue is your point that causality loops are logically possible. Perhaps time loops aren't logically possible, and that's one way in which the two are not the same. Perhaps I'm using a different mental dictionary than everyone else in these threads.

Comment by eddie on Timeless Causality · 2008-05-29T17:16:37.000Z · LW · GW

Nick: IL, to do what you suggest you'd have to actually compute the history of your universe, meaning the causal relations would exist, so there wouldn't be any problem with there being consciousness.

I don't think that's correct. You could populate your model with random data, and if that data happens to be an accurate representation of the timeless universe, then poof you have created consciousness with no computation required (unless you believe that acquiring random data and writing it to RAM is "computation" of the kind that should create causality and consciousness).

Granted, most such randomly populated models wouldn't contain causality or consciousness. But a non-zero number of them would.

I think IL's point stands. If the universe is timeless, then a sufficiently large integer is full of conscious beings.

Comment by eddie on Timeless Causality · 2008-05-29T16:45:46.000Z · LW · GW

Caledonian: thanks for the reply, but that wasn't what I was getting at. I can see that things in a temporal sequence may not be causally related - e.g. the light flashes and then the bell rings, but the light didn't cause the bell. My question was about the reverse implication: if causality exists, such that A causes B, does that not necessarily imply that A preceded B and that time exists? If not, what aspect of time is not included within the notion of causality such that we can have causality but not time?

The only case I can think of offhand would be a time loop: grampa tells dad a secret, dad tells it to me, then I go back in time and tell it to grampa. In this case causality and time diverge for at least part of the loop. But Eliezer's explanation of causality without time, where you use Bayesian analysis to determine which events in a series caused the others, requires that there be no causality loops. So I don't think my time loop example answers my question: what is the difference between causality-with-time and causality-without-time?

Comment by eddie on Timeless Causality · 2008-05-29T13:43:58.000Z · LW · GW

Don't think that any of this preserves time, though, or distinguishes the past from the future. I am just holding onto cause and effect and computation and even anticipation for a little while longer.

What is the difference between a time-like relationship and a causal relationship? How have you not preserved time by preserving causality?

Comment by eddie on Timeless Beauty · 2008-05-29T04:29:01.000Z · LW · GW

You've never been so intoxicated that you "lose time", and woken up wondering who you threw up on the previous night? You've never done any kind of hallucinogenic drug? You don't ... sleep?

I have in fact done at least two of the above three. (Perhaps if I slept I wouldn't need to take drugs so often...)

But you're taking my words too literally and missing my point. Indeed, it is very possible for me to fail to perceive time; I've done it before, and at some point I'll do it forever. But the very fact that I can sit here, now, and talk about "before" and "forever" and "now" (and "I") shows that I must be perceiving time. It is not possible that I am not perceiving time - unless I'm a zombie and not perceiving anything. But I'm pretty sure I'm not. And I don't think you are, either, although I can't prove it.

The sensation of time passing only seems to exist because we have short term memory to compare new input against.

That's not the reason the sensation of time seems to exist - it's the reason the sensation of time does exist. It is the very definition of the perception of time. As I said, this sensation may be an illusion, but it is also indisputably real, and it seems pointless (or rather, I don't yet see the point) to say there's no such thing as time simply because we can imagine a block universe or Barbour manifold or what have you.

Flags flap. Wind blows. Minds change. Time moves. We remember. It's all the same thing.

Comment by eddie on Timeless Beauty · 2008-05-28T21:51:26.000Z · LW · GW

Assuming that dust theory or the block universe or Barbourian timelessness are true... I fail to see how it matters to any of us.

Presumably, we are all timeful beings. I know I am (cogito, ergo tempus fugit), and I assume the rest of you are, too. Whether I and my memories and my perception of time passing only exist as collections of block slices or as neighboring nodes in the static quantum foam in configuration space or as relationships between specks of dust... or even as time-slices in a computer simulation, or as integers in MathSpace which is the only thing that really exists... it doesn't matter. I still perceive time. And I bet you do, too.

If physics experiments and solid reasoning lead us inexorably to conclude that time, identity, and consciousness are mere illusions... well, they also lead us (or lead me, anyway) to conclude that those illusions are impenetrable. It's impossible for me not to perceive time, to not perceive myself as myself, to not perceive my own consciousness.

What basis, then, is there for saying that time is not "real"?

What value does the concept of dust, blocks, or Barbour bring to our intellectual discourse, other than making for entertaining conversation among stoners?

Comment by eddie on Timeless Beauty · 2008-05-28T21:35:24.000Z · LW · GW

And for those of us who haven't read Permutation City at all, here's an explanation of this whole "dust theory" thing they're talking about.

(The FAQ Z.M.Davis points to has answers to several good questions about dust theory, but not the question "what is it?")

Comment by eddie on That Alien Message · 2008-05-26T16:07:00.000Z · LW · GW

Okay, one more try at closing the italics tag, and now I definitely blame the AI and not myself...

Eliezer, if this doesn't work, please feel free to delete the offending posts, if you can persuade your AI to let you.

This must be how we got the poor schmuck to mix together the protein vials.

Comment by eddie on That Alien Message · 2008-05-26T16:02:00.000Z · LW · GW

... not that humans are much smarter, it seems.

(stupid meat puppet, stupid html tags...)

Comment by eddie on Collapse Postulates · 2008-05-09T14:09:09.000Z · LW · GW

Ben: It's simulations all the way up.

Comment by eddie on The Born Probabilities · 2008-05-01T18:57:16.000Z · LW · GW

Stephen, thanks for your thoughts on Eli's thoughts. I'm going to have to think on them further - after all these helpful posts I can pretend I understand quantum mechanics, but pretending to understand how conscious minds perceive a single point in configuration space instead of blobs of amplitude is going to take more work.

I will point out, though, that the question of how consciousness is bound to a particular branch (and thus why the Born rule works like it does) doesn't seem much different from the question of how consciousness is tied to a particular point in time or a particular brain, given that the Spaghetti Monster can see all brains at all times and would need to be given extra information to know that my consciousness seems to be living in this particular brain at this particular time.

Finally: "it is a common misconception that should be addressed at some point anyway" - it appears to me that Robin's paper is based on this same misconception, or something like it: the Born rule (and experiment!) give one result while counting worlds gives another, therefore we have to add a new rule ("worlds that are too small get mangled") in order to make counting worlds match experiment. Whereas without the misconception we wouldn't be counting worlds in the first place. Do you think I'm understanding Robin's position and/or QM correctly?

Comment by eddie on The Born Probabilities · 2008-05-01T13:53:32.000Z · LW · GW

Thanks to Eliezer's QM series, I'm starting to have enough background to understand Robin's paper (kind of, maybe). And now that I do (kind of, maybe), it seems to me that Robin's point is completely demolished by Wallace's point that decoherence is continuous rather than discrete, so there is no definite number of discrete worlds to count.

There seems to be nothing to resolve between the probabilities given by measure and the probabilities implied by world count if you simply say that measure is probability.

Eliezer objects. We're interpreting. We're adding something outside the mathematics.

I fail to see the problem.

If we're to accept that particles moving like billiard balls are an illusion, and configuration space is real, and blobs of amplitude are real, and time evolution of amplitude within configuration space according to the wave equations is real, and that configurations and amplitude and wave equations are fundamental parts of reality, because that's the best model we've come up with that agrees with experimental observation... why not accept that the modulus-squared law is real and fundamental, too?

It certainly agrees with experimental observations, and doesn't seem any less desirable a part of our model of reality than configurations, amplitude blobs, and wave equations.
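To make concrete what "accepting the modulus-squared law as fundamental" amounts to, here is a minimal sketch; the two-branch amplitudes are made-up illustrative numbers, not anything from an actual experiment:

```python
import numpy as np

# A toy superposition over two decoherent branches (hypothetical amplitudes).
# Normalization: |0.6|^2 + |0.8i|^2 = 0.36 + 0.64 = 1.
amplitudes = np.array([0.6 + 0.0j, 0.8j])

# The Born rule, taken as a postulate: probability = squared modulus of amplitude.
probs = np.abs(amplitudes) ** 2

print(probs)        # branch probabilities
print(probs.sum())  # sums to 1 for a normalized state
```

Counting branches would assign each world probability 1/2 here; the modulus-squared rule assigns 0.36 and 0.64, which is what experiment matches.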

I wish someone would explain the problem more clearly, although if Eliezer's explanations so far haven't cleared it up for me yet, perhaps nothing will.

Comment by eddie on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T23:36:00.000Z · LW · GW

Eliezer: So when I say that two punches to two faces are twice as bad as one punch, I mean that if I would be willing to trade off the distance from the status quo to one punch in the face against a billionth (probability) of the distance between the status quo and one person being tortured for one week, then I would be willing to trade off the distance from the status quo to two people being punched in the face against a two-billionths probability of one person being tortured for one week.

So alternatives that have twice the probability of some good thing X happening have twice the utility? A sure gain of a dollar has twice the utility of gaining a dollar on a coin flip? Insurance companies and casinos certainly think so, but their customers certainly don't.

I think you are conflating utility and expected utility. I'm not convinced they are the same thing, although I think you believe they are.
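The distinction can be sketched numerically. This is an illustration under an assumed risk-averse utility function u(w) = log(w), not anything from Eliezer's post: with nonlinear utility, a sure dollar and a fair-coin gamble with the same expected dollar gain do not have the same expected utility.

```python
import math

def u(wealth):
    # An assumed concave (risk-averse) utility function for illustration.
    return math.log(wealth)

wealth = 100.0

# Expected utility gain from a sure $1.
sure = u(wealth + 1) - u(wealth)

# Expected utility gain from a fair coin flip for $2
# (same expected dollar gain of $1).
gamble = 0.5 * u(wealth + 2) + 0.5 * u(wealth) - u(wealth)

print(sure, gamble)  # the gamble is worth strictly less to this agent
```

For any concave u the gamble comes out below the sure gain, which is why the customers of insurers and casinos behave as they do even though the expected dollar amounts are identical.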

Comment by eddie on The "Intuitions" Behind "Utilitarianism" · 2008-01-29T14:07:11.000Z · LW · GW

There are no natural utility differences that large. (Eliezer, re 3^^^3)

You've measured this with your utility meter, yes?

If you mean that it's not possible for there to be a utility difference that large, because the smallest possible utility shift is the size of a single particle moving a Planck distance, and the largest possible utility difference is the creation or destruction of the universe, and the scale between those two is smaller than 3^^^3 ... then you'll have to remind me again where all these 3^^^3 people that are getting dust specks in their eyes live.

If 3^^^3 makes the math unnecessary because utility differences can't be that large, then your example fails to prove anything because it can't take place. For your example to be meaningful, it is necessary to postulate a universe in which 3^^^3 people can suffer a very small harm, which necessarily implies that yes, in fact, it is possible in this hypothetical universe for one thing to have 3^^^3 times the utility of something else. At which point, in order to prove that the dust specks outweigh the torture, you will now have to shut up and multiply. And be sure to show your work.
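For readers who haven't met the notation: 3^^^3 is Knuth up-arrow notation, and even the much smaller 3^^3 is already enormous. A minimal sketch of the standard recursion (evaluating 3^^^3 itself is hopeless, since it is a power tower of 3^^3 threes):

```python
def up(a, n, b):
    """Knuth up-arrow: a (n arrows) b, via the standard recursion."""
    if n == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 0:
        return 1               # base case of the recursion
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3   = 27
print(up(3, 2, 3))  # 3^^3  = 3^27 = 7625597484987
# up(3, 3, 3) is 3^^^3: a tower of 7625597484987 threes. Don't run it.
```

So the dust-speck hypothetical requires a universe vastly larger than anything physics describes, which is the point of the paragraph above.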

Your first task in performing this multiplication will be to measure the harm from torture and dust specks.

Good luck.