Posts

Comments

Comment by Latanius2 on Sympathetic Minds · 2009-01-19T14:56:15.000Z · LW · GW

So "good" creatures have a mechanism which simulates the thoughts and feelings of others, making it have similar thoughts and feelings, whether they are pleasant or bad. (Well, we have a "but this is the Enemy" mode, some others could have a "but now it's time to begin making paperclips at last" mode...)

For me, feeling the same seems to be much more important. (See dogs, infants...) So thinking in AI terms, there must be a coupling between the creature's utility function and ours: it wants us to be happy in order to be happy itself. (Wireheading us is not sufficient, because the model of us in its head, unchanged in the process, would feel bad about it... it's some weak form of CEV.)
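
A minimal sketch of what such a coupling might look like, purely illustrative (the names and weights are mine, not anything from the post):

```python
# Illustrative sketch: an agent whose utility is partly coupled to its
# *model* of a human's utility. All names and weights are invented.

def modeled_human_utility(world_state):
    # The agent's internal model of how happy we are in this state.
    return world_state.get("human_happiness", 0.0)

def agent_utility(world_state, sympathy_weight=0.5):
    own_term = world_state.get("paperclips", 0.0)
    # Coupling term: the agent is happier when its model of us is happier.
    return own_term + sympathy_weight * modeled_human_utility(world_state)

# Note: because the coupling is to the *model* of us, wireheading the actual
# humans doesn't help the agent unless the model is also fooled.
```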

So is an AI sympathetic if it has this coupling in its utility function? And with whose utilities? Humans? Sentient beings? Anything with a utility function? Chess machines? (Losing makes them really, really sad...) Or what about rocks? Utility functions are just a way to predict some parts of the world, after all...

My point is that a definition of sympathy also needs a function to determine who or what to feel sympathy for. For us, this seems to be "everyone who looks like a living creature or acts like one", but it's complicated in the same way as our values. Accepting "sympathy" and "personlike" as the definition of "friendly" could easily be turtles all the way down.

Comment by Latanius2 on Nonsentient Optimizers · 2008-12-27T13:01:01.000Z · LW · GW

What's the meaning of "consciousness", "sentient" and "person" at all? It seems to me that all these concepts (at least partially) refer to the Ultimate Power, the smaller, imperfect echo of the universe. We've given our computers all the Powers except this: they can see, hear, communicate, but still...

To understand my words, you must have a model of me, in addition to the model of our surroundings: not just an abstract mathematical one but something which includes what I'm thinking right now. (Why should we call something a "superintelligence" if it doesn't even grasp what I'm telling it?)

Isn't "personhood" a mixture of godshatter (like morality) and power estimation? Isn't it like asking "do we have free will"? Not every messy spot on our map corresponds to some undiscovered territory. Maybe it's just like a blegg .

Comment by Latanius2 on Prolegomena to a Theory of Fun · 2008-12-18T14:54:32.000Z · LW · GW

Doug S.: if it were 20 lines of Lisp... it isn't, see http://xkcd.com/224/ :)

Furthermore... it seems to me that a FAI which creates a nice world for us needs the whole human value system AND its coherent extrapolation. And knowing how complicated the human value system is, I'm not sure we can accomplish even the former task. So what about creating a "safety net" AI instead? Let's upload everyone who is dying or suffering too much, create advanced tools for us to use, but otherwise preserve everything until we come up with a better solution. This would fit into 20 lines, "be nice" wouldn't.

Comment by Latanius2 on Permitted Possibilities, & Locality · 2008-12-03T23:49:21.000Z · LW · GW

That looks so... grim. (But sadly, it sounds too true.) So I ask too: what to do next? Hack AI and... become "death, destroyer of worlds"? Or think about FAI without doing anything specific? And do that without relying on the "just for fun" curiosity which seems to be needed for every big scientific discovery. (Or is it just me who thinks of it that way?)

Anyway... Do we have any information about what the human brain is capable of without additional downloaded "software"? (Or has the co-evolution of the brain and the "software" played such an important role that certain parts of it need some "drivers" to be useful at all?)

Comment by Latanius2 on Surprised by Brains · 2008-11-23T10:42:20.000Z · LW · GW

Programmers are also supposed to search the space of Turing machines, which seems really hard: programming in Brainfuck is hard. All the software written in higher-level languages amounts to points in a mere subspace... If optimizing in this subspace has proven so effective, I don't think we have a reason to worry about incompressible subspaces containing the only working solutions to our problems, namely more intelligent AI designs.

Comment by Latanius2 on Failure By Analogy · 2008-11-18T06:56:53.000Z · LW · GW

Analogy might work better for recognizing things already optimized in design space, especially if they are a product of evolution, with common ancestors (4 legs, looks like a lion, so run, even if it has stripes). And we only started designing complicated stuff a few thousand years ago at most...

Comment by Latanius2 on Ethical Injunctions · 2008-10-21T08:11:54.000Z · LW · GW

"looking for reflective equilibria of your current inconsistent and unknowledgeable self; something along the lines of 'What would you ask me to do if you knew what I know and thought as fast as I do?'"

We're sufficiently more intelligent than monkeys to do that reasoning... so humanity's goal (as the advanced intelligence created by monkeys a few million years ago for getting to the Singularity) should be to use all the knowledge gained to tile the universe with bananas and forests etc.

We don't have the right to say, "if monkeys were more intelligent and consistent, they would think like us": we're also a random product of evolution, from the point of view of monkeys. (Tile the world with ugly concrete buildings? Uhhh...)

So I think that to preserve our humanity in the process we should be the ones who become gradually more and more intelligent (and decide what goals to follow next). Humans are complicated, so to simulate one in a Friendly AI, we'd need comparably complex systems... and they are probably chaotic, too. Isn't it... simply... impossible? (Not in the sense that "we can't make it", but that "we can prove nobody can"...)

Comment by Latanius2 on Fundamental Doubts · 2008-07-12T11:05:43.000Z · LW · GW

"I think therefore I am"... So there is a little billiard ball in some model which is me, and it has a relatively stable existence in time. Can't you imagine a world in which these concepts simply make no sense? (If you couldn't, just look around, QM, GR...)

Comment by Latanius2 on Will As Thou Wilt · 2008-07-07T14:05:29.000Z · LW · GW

Unknown, for the fourth: yes, even the highest-level desires change over time, but not because we want them to be changed. I think the third one is false instead: doing what you don't want to do is a flaw in the integrity of the cognitive system, a result of the fact that we can't reprogram our lower-level desires. But what desire could drive us to reprogram our highest-level ones?

Comment by Latanius2 on Is Morality Preference? · 2008-07-05T10:23:27.000Z · LW · GW

There is a subsystem in our brains called "conscience". We learn what is right and what is wrong in our early years, perhaps with certain priors ("causing harm to others is bad"). These things can also change over time (slowly!) within a person, for example if the context of the feelings dramatically changes (oops, there is no God).

So agreeing with Subhan, I think we just do what we "want", maximizing the good feelings generated by our decisions. We ("we" = the optimization process trying to accomplish that) don't have access to the lower level (the on/off switch of conscience), so in many cases the best solution is to avoid doing "bad" things. (And it really feels different (a) to want something because we like it and (b) to want something in order to avoid the bad feelings generated by conscience.) What our thoughts can't control directly seems to be an objective, higher-level truth: that's how the algorithm feels from the inside.
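
A toy sketch of that picture, with invented numbers: the planner maximizes felt reward, but the conscience term is a fixed subsystem it cannot rewrite.

```python
# Illustrative only: the "planner" picks the action with the best felt reward,
# while conscience_penalty is a lower-level function it cannot switch off.

def conscience_penalty(action):
    # Learned early, changes only slowly; not writable by the planner.
    return 10.0 if action == "harm_other" else 0.0

def felt_reward(action):
    base = {"harm_other": 6.0, "cooperate": 4.0, "do_nothing": 0.0}[action]
    return base - conscience_penalty(action)

best = max(["harm_other", "cooperate", "do_nothing"], key=felt_reward)
print(best)  # "cooperate": avoiding the "bad" act wins even though it pays less
```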

Furthermore, see psychopaths. They don't seem to have the same mental machinery of conscience, so the utility of their harmful intentions doesn't get the same correction factor. And so they become immoral.

Comment by Latanius2 on Created Already In Motion · 2008-07-01T09:25:47.000Z · LW · GW

I think the moral is that you shouldn't try to write software you don't have the hardware to run on, not even if the code could run itself by emulating the hardware. A rock runs on physics; Euclid's rules don't. We have morality to run on our brains, and... isn't FAI about porting it to physics?

So shouldn't we distinguish between the symbols physics::dynamic and human_brain::dynamic? (In a way, my reading the word "dynamic" uses more computing power than running any Java applet could on current computers...)

Comment by Latanius2 on Bloggingheads: Yudkowsky and Horgan · 2008-06-08T16:30:18.000Z · LW · GW

Well... I liked the video, especially watching how all the concepts mentioned on OB before work in... real life. But showing how you should think to be effective (which Eliezer is writing about on OB) is a different goal from persuading people that the Singularity is not some other dull pseudo-religion. No, they haven't read OB, and they won't even have a reason to if they are told "you won't understand all this all of a sudden, see inferential distances, which is a concept I also can't explain now". To get through their spam filter, we'll need stories, even details, with a "this is very improbable, but if you're interested, read OB" disclaimer at the end. See the question "but how could we use AI to fight poverty etc."... Why is the Singularity still "that strange and scary prediction some weird people make without any reason"?

Comment by Latanius2 on The Generalized Anti-Zombie Principle · 2008-04-06T10:22:00.000Z · LW · GW

Eliezer, does this whole theory cause us to anticipate something different after thinking about it? For example, after I upload, will I (personally) feel anything, or does only the death-like dark nothingness come?

I think I did find such a thing, involving copying yourself in parts of varying size. (Well, it leads to a contradiction, by the way, but maybe that's why it's even more worthwhile to talk about.)

Comment by Latanius2 on Zombies! Zombies? · 2008-04-04T20:54:15.000Z · LW · GW

We have that "special" feeling: we are distinct beings from all the others, including zombie twins. I think we tend to use only one word for two different concepts, which causes a lot of confusion... Namely: 1) the ability of intelligent physical systems to reflect on themselves, imagine what we think or whatever makes us think that whichever we are talking to is "conscious" 2) that special feeling that somebody is listening in there. AGI research tries to solve the first problem, Chalmers the second one.


So let's try to create zombies then! I don't see why this seems logically so difficult, we only need some nanotechnology... So consider the following thought experiment.

You enter room A. Some equipment scans your atoms, and after scanning each, replaces it with one of the same element, same position. Meanwhile, the original atoms are assembled in room B, resulting in a zombie twin of you. You were conscious all along, and noticed nothing except some buzz coming from the walls... So you wouldn't be worried about the experiment even if your zombie is killed afterward, or sent to the stone mines of Mars for a lifelong sentence, etc.

You enter room A. Now the copying process goes cell by cell: each cell is scanned, an atom-by-atom perfect copy of it is put in its place, and the original goes into room B, where it is assembled. You still notice nothing.

You enter room A. Your whole brain is grabbed, scanned, and then placed into room B. The body with the copied brain and other organs walks happily out of room A, while you go to the stone mines. A bit more depressing than the original version.

So, if we copy only atoms or cells (which is regularly done in our bodies), we stay in room A. If we copy whole organs or bodies, we go to room B. It wouldn't be intuitive to postulate that consciousness can be divided: it's either in room A or room B. But the quantity of atoms moved in one step is almost continuous... it would be weird to assume that there is some magic number of them which allows consciousness to transfer.

The conclusion: differentiating between "conscious beings" and "zombies" leads to contradiction even from a subjective viewpoint. (Where would that mysterious "inner Chalmers" be in the above cases?)

I think we are too used to our consistent self-image, and can't imagine how anything else would feel. An example: using brain-computer interfaces, we construct a camera which watches our surroundings, even as we sleep. When we wake up, we can "remember" what happened while we slept, because of the connection the camera hardware made with our memories. (The right images just "popped into our minds".) But how would it feel? Were we conscious at night? If not, why do we remember certain things? If we were, why did we just watch as those thieves got away with all our stuff?

All we need in order to understand that is some experience of it. If we had it, we wouldn't ask questions like "why am I so special", I think.

Comment by Latanius2 on Angry Atoms · 2008-03-31T17:43:57.000Z · LW · GW

athmwiji: if I understood correctly, you said that the concept of the physical world arises from our subjective experiences, and even if we explain those consistently, there still remain subjective experiences which we can't. We could, for example, imagine a simulated world in which everyone has a silicon-based brain, including, at first sight, you, while in the real world you're still a human with a traditional flesh-based brain. There would then be no in-world physics which you could use to explain your headache.

But without assuming that you're in such a special position in the world, you just have to explain why the other seemingly conscious beings (including AIs made by us) argue that they must have special subjective experiences which aren't explainable by the objective physics. (In fact, the whole thought process is explainable.) I think it's the same as free will...

Tiiba: no, the Martians wouldn't be able to contradict our math, as it's a model of what's happening around us, of the things we perceive. They wouldn't have different anticipations of real-world facts, but they would have different concepts, as their "hardware" differs from ours, and so do their models. If our world consisted of fluid, seemingly infinitely divisible things, I don't think we would understand prime numbers at all... (Just as quantum mechanics doesn't seem intuitive to us.)

So I can imagine another math in which 2+2=5 is not obviously false, but needs a long proof and complicated equations...

Comment by Latanius2 on Angry Atoms · 2008-03-31T09:24:41.000Z · LW · GW

Unknown: see Dennett's Kinds of Minds; he has a fairly good theory of what consciousness is. (In short: it's the capability to reflect on one's own thoughts, and so use them as tools.)

At the current state of science and AI, this is what sounds like a difficult (and a bit mysterious) question. For hunter-gatherers, "what makes your hand move" was an equally (or even more) difficult question. (The alternative explanation, "there is a God who began all movement etc.", is still popular nowadays...)

Tiiba: an algorithm is a model in our mind that describes the similarities of the physical systems implementing it. Our mathematics is the way we understand the world... I don't think Martians with four visual cortices would have the same math, or would be capable of understanding the same algorithms... So algorithms aren't fundamental, either.

Comment by Latanius2 on If You Demand Magic, Magic Won't Help · 2008-03-23T10:06:52.000Z · LW · GW

If you personally did the astoundingly complex science and engineering to build the replicator, drinking that Earl Grey tea would be a lot more satisfying.

One of the fundamental differences between technology and magic is that two engineers do twice as much work as one would, while a more powerful sorcerer gets farther than ten less powerful ones put together. It matters more how good you are than how many of you there are.

What NBA players do looks similar in kind to what you did with your friends at home, because even if you play well, the five of you can't put your powers together to be equivalent to one NBA star. Engineers can, so the things you create with technology aren't comparable to the products of big companies, even if you're good at engineering, and even if you're better than anyone in that company. (Yet another reason, I think, why people often like sports better than math.)

Yes, if I could design a replicator myself, I would be satisfied. I can't.

Comment by Latanius2 on If You Demand Magic, Magic Won't Help · 2008-03-22T21:55:33.000Z · LW · GW

Eliezer, isn't reading a good fantasy story like being transported into another world?

Jed Harris: I agree... Our world seems to have the rule: "you are not significant". You can't design and build an airplane in your backyard; no one can. Even if you've got enough money, you haven't got enough time for that. In magical worlds (including Star Trek, Asimov, etc.), being significant in that way seems to be normal. (And I've never read about a committee which coordinates the work of hundreds of sorcerers who create new spells 8 hours a day...)

rfriel: Yes, we could build the technology to do the things magic can do, but even with our current technology we can also do things which magic can't. And it's these limitations that make magic so "nice", not only the features.

Martin: to be the best, you only have to make your world small. (I was one of the best at math in our secondary school, and it didn't bother me that I wasn't the best in the whole country, or that I was quite bad at history...) But it would have been so good to be the one who makes the best operating systems in the whole school...

Comment by Latanius2 on Joy in the Merely Real · 2008-03-20T10:17:56.000Z · LW · GW

In what category does "the starship from book X" fit?

Definitely not into the "real, explainable, playing by the rules of our world" category: we can't observe its inner workings more closely, although in the world of the book everything seems to be explained. (They know how it works; we don't.)

But also not into the "doesn't exist, not worth caring about" category: we know that it doesn't exist in the real world even before reading the full book, but it is nevertheless interesting and worth reading about.

I personally would be less curious about bird droppings after reading such a book. (And read the sequel instead.) Does this count as self-deception?

So how should we overcome this "virtual reality bias"? Eliezer, you once wrote that reading sci-fi is one of the "software methods" for increasing intelligence (and shock level). But being accustomed to interstellar travel and AIs, and being interested in bird droppings and "mere reality" at the same time... If I could do that, I would be happy, but I can't, I think. So how do scientists manage it?

Comment by Latanius2 on Explaining vs. Explaining Away · 2008-03-17T10:06:43.000Z · LW · GW

"If we cannot learn to take joy in the merely real, our lives will be empty indeed."

It's true... but... why do we read sci-fi books then? Why should we? I don't think that after reading a novel about intelligent, faster-than-light starships, the bus stopping at the nearby bus stop will be as interesting as it used to be when we watched it on the way to kindergarten... Or do you think it is? (Without imagining starships in place of buses, of course.)

So what non-existing things should we imagine to be rational (= to win), and how? I hope there will be some words about that in tomorrow's post, too...

Comment by Latanius2 on Wrong Questions · 2008-03-09T23:52:18.000Z · LW · GW

Psy-Kosh: Maybe I really tried to approach the meaning of the question from the direction of subjective experience. But I think the concept of "existence" implies that there is some observer who can decide whether the thing we're talking about really exists or not, given his or her own stable existence.

Maybe that's why the question can't be easily answered (and maybe has no answer at all): the concept of "world" includes us as well. So if we want to predict something about the existence of the world (which is what the word "why" means, I think), we have nothing to observe: it's a logical truth that any world in which this question is asked really does exist.

But if we satisfy the two assumptions in the question (the existence of the world is observable by us, and is repeatable, so we can make predictions about it), it starts to make sense, but it becomes less mysterious somehow. Some possible answers: because the previous one already collapsed in an anti-big-bang, I just saw it..., or: we usually have to wait for it to recreate itself, therefore nothing exists right now... Or: it exists because I was bored and created a new one, or maybe because I was bored and started Half-Life (which also fits our new world-concept in some way)... etc.

And... physical equations are definitely something that differs from nothing. Some rules for a... world... But I think if something becomes as blurry as the concept of "world" just did, we had better ask which subsystem in our mind is being applied to the wrong problem, and what problems it is actually intended to solve.

Comment by Latanius2 on Wrong Questions · 2008-03-09T12:02:52.000Z · LW · GW

Psy-Kosh: let's Taboo "exist" then... What exactly does it mean? For me, it's something like "I have some experiences whose cause is best modeled by imagining some discrete object in the outer world". The existence or non-existence of something affects what I will feel next.

Some further expansions: "why": how can I predict one experience from another? "world": all the experiences we have? (Modeled as a discrete object... But I can't really imagine what can be modeled by the fact that there is no world.)

So the question "why does our world exist" becomes something like "what is the experience from which we can predict we will have any experiences at all... Sounds a little bit more controversial than the original.

By the way, have we tried this transformation in the opposite direction? Turning the question "what is the sound of blue" to one which seems to make sense...

Comment by Latanius2 on Words as Mental Paintbrush Handles · 2008-03-04T11:24:54.000Z · LW · GW

@Roko: The visual cortex isn't the only thing we use. Other parts of the brain probably "cache" some of the insights gained by visualizing things, trying or imagining movements, etc., and also common sentences, so we can use these areas for other things we've never seen before. These cached things are our concepts, I think.

You're right, I won't visualize every part of the thought "technology advances exponentially because technology feeds back positively on itself". But I've seen a lot of exponential functions in math classes, plotted them on screen, and noticed that they can grow very big. Now I use this concept for understanding this sentence. It would be hard to explain this to a five year old, or to somebody who has never seen exponential functions: you can't visualize so many things at once, without using any cache mechanisms. (That's why inferential distances are so long in reality, I think.)
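
As a minimal illustration of that cached concept (numbers invented): a quantity whose growth is proportional to its current level grows exponentially.

```python
# Illustrative: "technology feeds back positively on itself" as a recurrence.
# Each step, progress is proportional to the current level -> exponential growth.
level = 1.0
for step in range(10):
    level += 0.5 * level   # feedback: more technology -> faster progress
print(round(level, 2))     # ~57.67 after ten steps of 50% compounding
```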

With only language and the cached thoughts (grammar / logic and rules in a symbolic system) we can get surprisingly far, but not far enough. (For us, even logic is a cached thought from the visual cortex, for it describes the connections of distinct things. This is a special feature of vision: try to imagine two songs at the same time...)

Comment by Latanius2 on Words as Mental Paintbrush Handles · 2008-03-02T23:41:43.000Z · LW · GW

Are words really just pointers? If you want to refer to objects which you've visualized, they indeed are. But people even do some peculiar "arithmetic" with words, forming sentences, which has nothing to do with meanings.

For example, when I'm sleepy (in a half-sleeping state), I sometimes notice that whole sentence structures are running through my head without the words filled in, but I know where the sentences begin and end, and how they are connected. Even specific words show up from time to time, but the whole stream makes no sense at all. Yet if you don't visualize and don't use concepts... it just sounds right.

So I think words as stand-alone things (with their sound and syntactic role) also play an important role in connecting, in a more abstract way, things whose connections can't be inferred by visualization alone. (Think of a linked list of pointers: the position of a pointer in the list can be as important as the referenced object itself.)
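
A rough sketch of that linked-list analogy (the structure is hypothetical, just to make the point concrete): the order of the pointers carries information even before any pointer is dereferenced.

```python
# Sketch: a "sentence" as an ordered list of pointers to concepts.
# The slot order (subject -> verb -> object) carries structure on its own.

concepts = {
    "dog": "a furry animal",
    "bites": "to close one's teeth on something",
    "man": "an adult human",
}

sentence_a = ["dog", "bites", "man"]   # the same three pointers...
sentence_b = ["man", "bites", "dog"]   # ...in a different order: different meaning

def dereference(sentence):
    # "Visualizing": looking up what each word actually refers to.
    return [concepts[word] for word in sentence]

print(sentence_a != sentence_b)  # True: the structure alone distinguishes them
print(dereference(sentence_a))   # the meanings only appear when dereferenced
```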

Roko: FOPL is similar to the taxi driver who never visualizes anything. (It never dereferences the pointers.) I don't think the solution would be a much better symbolic system (although FOPL is not really designed for dereferencing), but rather to connect a visual cortex to the symbol manipulation system, so the similarity of two symbols could be checked by simply visualizing them.

Comment by Latanius2 on But There's Still A Chance, Right? · 2008-01-06T11:55:13.000Z · LW · GW

Unknown: What do we mean by "chance"? That it has a very small a priori probability... The evidence is given: the two sequences are similar. We can also assume that the evolution hypothesis has a bigger a priori probability than the chance of getting that sequence. These insights were all included in the post, I think. So applying Bayes' theorem, we get that the evolution version has a much bigger a posteriori probability, so we don't have to show that separately.

There are a lot of events which have a priori probabilities of that order of magnitude... But we would also need strong evidence to shift that to a plausible level. Yet a lot of people think: "there was only a very small chance of this happening, but it happened => things with very small chances do happen sometimes."
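
A worked toy version of the Bayes step, with numbers invented purely for illustration: even a modest prior for common descent dominates once the evidence is astronomically unlikely under the chance hypothesis.

```python
# Toy Bayes calculation (all numbers invented): posterior odds of
# "common descent" vs. "independent chance" given matching sequences.

prior_descent = 0.01          # illustrative prior for common descent
prior_chance = 1 - prior_descent
likelihood_descent = 0.5      # P(similar sequences | common descent)
likelihood_chance = 1e-30     # P(similar sequences | independent chance)

posterior_odds = (prior_descent * likelihood_descent) / (prior_chance * likelihood_chance)
print(f"{posterior_odds:.2e}")  # ~5.05e+27 : 1 in favour of common descent
```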