Comments

Comment by PK on A New Day · 2009-01-01T00:45:06.000Z · LW · GW

Wow! This post is particularly relevant to my life right now. On January 5th I start bootcamp, my first day in the military.

Comment by PK on High Challenge · 2008-12-19T08:08:00.000Z · LW · GW

MMO of the future lol (some swearing)

And just so I'm not completely off topic, I agree with the original post. There should be games; they should be fun and challenging and require effort and so on. AIs definitely should not do everything for us. A friendly future is a nice place to live in, not a place where an AI does the living for us so we might as well just curl up in a fetal position and die.

Comment by PK on High Challenge · 2008-12-19T07:48:00.000Z · LW · GW

@ ac: I agree with everything you said except the part about farming a scripted boss for phat lewt in the future. One would think that in the future they could code something more engaging. Have you seen LOTR...

Comment by PK on Prolegomena to a Theory of Fun · 2008-12-18T01:03:50.000Z · LW · GW

Does that mean I could play a better version of World of Warcraft all day after the singularity? Even though it's a "waste of time"?

Comment by PK on Not Taking Over the World · 2008-12-16T01:08:07.000Z · LW · GW

What about a kind of market system of states? The purpose of the states would be to provide a habitat matching each citizen's values and lifestyle.

-Each state will have its own constitution and rules.
-Each person can pick the state they wish to live in, assuming they are accepted based on the state's rules.
-The amount of resources and territory allocated to each state is proportional to the number of citizens that choose to live there.
-There are certain universal meta-rules that supersede the states' rules, such as...
-A citizen may leave a state at any time and may not be held in a state against his or her will.
-No killing or significant non-consensual physical harm permitted; at most a state could permanently exile a citizen.
-There are some exceptions, such as the decision power of children and the mentally ill.
-Etc.

Anyways, this is a rough idea of what I would do with unlimited power. I would build this, unless I came across a better idea. In my vision, citizens will tend to move into states they prefer and avoid states they dislike. Over time good states will grow and bad states will shrink or collapse. However, states could also specialize; for example, you could have a small state with rules and a lifestyle just right for a small dedicated population. I think this is an elegant way of not imposing a monolithic "this is how you should live" vision on every person in the world, yet the system will still kill bad states and favor good states, whatever those attractors are.

P.S. In this vision I assume the Earth is "controlled" (meta-rules only) by a singleton super-AI with nanotech, so we don't have to worry about things like crime (force fields), violence (more force fields), or basic necessities such as food.

Comment by PK on The Mechanics of Disagreement · 2008-12-11T04:00:13.000Z · LW · GW

Um... since we're on the subject of disagreement mechanics, is there any way for Robin or Eliezer to concede points/arguments/details without losing status? If that could be solved somehow, then I suspect the discussion would be much more productive.

Comment by PK on Underconstrained Abstractions · 2008-12-04T18:37:38.000Z · LW · GW

"...what are some other tricks to use?" --Eliezer Yudkowsky "The best way to predict the future is to invent it." --Alan Kay

It's unlikely that a reliable model of the future could be made since getting a single detail wrong could throw everything off. It's far more productive to predict a possible future and implement it.

Comment by PK on Permitted Possibilities, & Locality · 2008-12-03T22:28:09.000Z · LW · GW

Eliezer, what are you going to do next?

Comment by PK on Engelbart: Insufficiently Recursive · 2008-11-26T15:38:04.000Z · LW · GW

"I think your [Eliezer's] time would be better spent actually working, or writing about, the actual details of the problems that need to be solved."

I used to think that, but now I realize that Eliezer is a writer and a theorist, not necessarily a hacker, so I don't expect him to be good at writing code. (I'm not trying to diss Eliezer here, just reasoning from the available evidence and the fact that becoming a good hacker requires a lot of practice.) Perhaps Eliezer's greatest contribution will be inspiring others to write AI. We don't have to wait for Eliezer to do everything. Surely some of you talented hackers out there could give it a shot.

Comment by PK on The Complete Idiot's Guide to Ad Hominem · 2008-11-26T03:41:35.000Z · LW · GW

Slight correction. I said: "Saying that an argument is wrong because a stupid/bad person said it is of course fallacious, it's an attempt to reverse stupidity to get intelligence." I worded this sentence badly. I meant that a stupid person saying something cannot make it false, and usually when people commit this fallacy it's because they are trying to say that the opposite of the "bad" point is true. This is why I said it's an attempt to reverse stupidity to get intelligence.

Basically, when we see "a stupid person said this" being advanced as proof that something is false, we can expect a reverse-stupidity-to-get-intelligence fallacy right after.

Comment by PK on The Complete Idiot's Guide to Ad Hominem · 2008-11-26T03:14:56.000Z · LW · GW

I disagree with much of what is in the linked essay. One doesn't have to explicitly state an ad hominem premise to be arguing ad hominem. Any non sequitur that just happens to lower an arguer's status is ad hominem in my book. Those statements have no other purpose but to create a silent premise: "My opponent is tainted, therefore his arguments are bad." One can make ad hominem statements without actually saying them by using innuendo.

On the other hand, ad hominem isn't even necessarily a fallacy. Of course an argument cannot become wrong just because a stupid person says it, but we can expect that, on average, people with a bad track record in arguing will continue to argue poorly and people with good track records will argue well. In that sense we can set priors for someone's arguments being right before hearing them. Just remember to update afterwards. We actually do this all the time whether we admit it or not. We trust what someone with a PhD in physics has to say about physics more than what a creationist has to say. Saying that an argument is wrong because a stupid/bad person said it is of course fallacious; it's an attempt to reverse stupidity to get intelligence. However, expecting people who normally say stupid things to continue to do so is Bayes compliant.
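Here is a rough sketch of the kind of update I mean (all the numbers are completely made up, just to illustrate the mechanics):

#include <iostream>

int main() {
    // Prior that a given argument is sound, based only on the arguer's track record.
    double prior_physicist   = 0.90;  // PhD physicist talking about physics
    double prior_creationist = 0.20;  // creationist talking about physics

    // Assumed likelihoods that the argument survives an independent check,
    // given that it is sound vs. unsound.
    double p_pass_given_sound   = 0.95;
    double p_pass_given_unsound = 0.30;

    auto update = [&](double prior) {
        // Bayes' rule: P(sound | passes check)
        double evidence = p_pass_given_sound * prior + p_pass_given_unsound * (1.0 - prior);
        return p_pass_given_sound * prior / evidence;
    };

    std::cout << "Physicist, after the argument checks out:   " << update(prior_physicist)   << "\n";
    std::cout << "Creationist, after the argument checks out: " << update(prior_creationist) << "\n";
}

Both priors move up on the same evidence; they just start from different places.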

I see the ad hominem "fallacy" concept as more of an injunction, or a hack if you will, for human reasoners. It reminds us to examine the substance of the arguments of people we disagree with instead of dismissing them for political reasons. A perfect Bayesian mind could set up priors for people being right and impartially examine their arguments and update correctly without being swept up by political instincts. For humans, on the other hand, it might be more practical to focus on the substance exclusively and not the messengers, unless the gap of expertise is huge (e.g., PhD physicist vs. creationist on physics).

Comment by PK on Complexity and Intelligence · 2008-11-04T01:02:20.000Z · LW · GW

I don't understand. Am I too dumb or is this gibberish?

Comment by PK on Building Something Smarter · 2008-11-03T00:00:12.000Z · LW · GW

"You can't build build Deep Blue by programming a good chess move for every possible position."

Syntax error: Subtract one 'build'.

Comment by PK on Protected From Myself · 2008-10-19T04:32:58.000Z · LW · GW

I wonder if liars or honest folk are happier and or more successful in life.

Comment by PK on Dark Side Epistemology · 2008-10-18T02:28:23.000Z · LW · GW

We are missing something. Humans are ultimately driven by emotions. We should look for which emotions beliefs tap into in order to understand why people seek or avoid certain beliefs.

Comment by PK on Dark Side Epistemology · 2008-10-18T02:01:24.000Z · LW · GW

I thought of some more.
-There is a destiny/God's plan/reason for everything: i.e. some powerful force is making things the way they are and it all makes sense (in human terms, not cold heartless math). That means you are safe, but don't fight the status quo.
-Everything is connected with "energy" (mystically): you or special/chosen people might be able to tap into this "energy". You might glean information you normally shouldn't have or gain some kind of special powers.
-Scientists/professionals/experts are "elitists".
-Mystery is good: it makes life worthwhile. Appreciating it makes us human. As opposed to destroying it being good.
That's it for now.

Comment by PK on Dark Side Epistemology · 2008-10-18T01:41:24.000Z · LW · GW

-Faith: i.e. unconditional belief is good. It's like loyalty. Questioning beliefs is like betrayal.
-The saying "Stick to your guns.": changing your mind is like deserting your post in a war. Sticking to a belief is like being a heroic soldier.
-The faithful: i.e. us, we are the best, God is on our side.
-The infidels: i.e. them, sinners, barely human, or not even.
-God: infinitely powerful alpha male. Treat him as such, with all the implications...
-The devil and his agents: they are always trying to seduce you to sin. Any doubt is evidence the devil is seducing you to sin and succeeding. Anyone opposed to your beliefs is cooperating with/being influenced by the devil.
-Assassination fatwas: whacking people who are anti-Islam is the will of Allah.
-A sexually satisfying lifestyle is bad: this makes people more angsty (especially young men). This angst is your fault and it's sin. To be less angsty you should be less sinful, ergo fight your sexual urges. And so the cycle of desire, guilt, angst and confusion continues.
-No masturbation: see above.
-You are born in debt to Jesus because he died for your sins 2000 years ago.
That's all I could think of right now.

Comment by PK on Traditional Capitalist Values · 2008-10-17T15:37:00.000Z · LW · GW

Ok, maybe my last post was a bit harsh (it's tricky to express oneself over the Internet). I will elaborate further. Eliezer said:

"So here are the traditional values of capitalism as seen by those who regard it as noble - the sort of Way spoken of by Paul Graham, or P. T. Barnum (who did not say "There's a sucker born every minute"), or Warren Buffett:"

I don't know much about the latter two but I have read Paul Graham extensively. It sounds like a strawman to me when Eliezer says:

"I regard finance as more of a useful tool than an ultimate end of intelligence - I'm not sure it's the maximum possible fun we could all be having under optimal conditions. I'm more sympathetic than this to people who lose their jobs, because I know that retraining, or changing careers, isn't always easy and fun. I don't think the universe is set up to reward hard work; and I think that it is entirely possible for money to corrupt a person."

So if we come back to Paul Graham, while reading his essays I've never gotten the impression that he...
-regards finance as the ultimate end of intelligence,
-thinks capitalism is the maximum possible fun we could all be having under optimal conditions,
-is not sympathetic to people who lose their jobs,
-thinks the universe is set up to reward hard work (proportionately, as a physical law),
-or that money doesn't corrupt people.

That's why I think the post gives off the vibe of a strawman. Look, capitalism isn't perfect, but you need better arguments to dismiss it. Am I being too harsh again? Alright, maybe Eliezer isn't trying to dismiss capitalism in his post, but then what is he actually trying to say? All I got from the post was a weak attempt at refuting things nobody actually believes. If I misunderstand, please explain.

Comment by PK on Traditional Capitalist Values · 2008-10-17T02:55:18.000Z · LW · GW

The post wasn't narrow enough to make a point. Eliezer stated: "I regard finance as more of a useful tool than an ultimate end of intelligence - I'm not sure it's the maximum possible fun we could all be having under optimal conditions." Are we talking pre or post a nanotech OS running the solar system? In the latter case most of these "values" would become irrelevant. However, given the world we have today, I can confidently say that capitalism is pretty awesome. There is massive evidence to back up my claim.

It smells like Eliezer is trying to refute a strawman. Specifically, I mean that there are probably few intelligent people who think of capitalism as a win-win all around. Capitalism is a compromise; it's the best we could come up with so far.

Comment by PK on Crisis of Faith · 2008-10-10T23:51:26.000Z · LW · GW

Good post but this whole crisis of faith business sounds unpleasant. One would need Something to Protect to be motivated to deliberately venture into this masochistic experience.

Comment by PK on Beyond the Reach of God · 2008-10-04T18:06:54.000Z · LW · GW

What's the point of despair? There seems to be a given assumption in the original post that:

1) There is no protection, the universe is allowed to be horrible --> 2) let's despair

But number 2 doesn't change 1 one bit. This is not a clever argument to disprove number 1; I'm just saying despair is pointless if it changes nothing. It's like how babies cry automatically when something isn't the way they like: evolution programmed them to, because crying reliably attracted the attention of adults. Despairing about the universe will not attract the attention of adults to make it better. We are the only adults, that's it. I would rather reason along the lines of:

1) There is no protection, the universe is allowed to be horrible --> 2) what can I do to make it better?

Agreed with everything else except the part where this is really sad news that's supposed to make us unhappy.

Comment by PK on The Magnitude of His Own Folly · 2008-09-30T22:10:05.000Z · LW · GW

Eli, do you think you're so close to developing a fully functional AGI that one more step and you might set off a land mine? Somehow I don't believe you're that close.

There is something else to consider. An AGI will ultimately be a piece of software. If you're going to dedicate your life to talking about and ultimately writing a piece of software, then you should have superb programming skills. You should code something... anything... just to learn to code. Your brain needs to swim in code. Even if none of that code ends up being useful, the skill you gain will be. I have no doubt that you're a good philosopher and a good writer since I have read your blog, but whether or not you're a good hacker is a complete mystery to me.

Comment by PK on Competent Elites · 2008-09-27T20:04:09.000Z · LW · GW

Eliezer, perhaps you were expecting them to seem like A-holes or snobs. That is not the case. They are indeed somewhat smarter than average. They also tend to be very charismatic or "shiny" which makes them seem smarter still. That doesn't necessarily mean they are smart enough or motivated to fix the problems of the world.

Perhaps there are better models of the world than the Approval/Disapproval of Elites dichotomy.

Comment by PK on GAZP vs. GLUT · 2008-04-07T07:15:21.000Z · LW · GW

A simple GLUT cannot be conscious and/or intelligent because it has no working memory or internal states. For example, suppose the GLUT was written at t = 0. At t = 1, the system has to remember that "x = 4". No operation is taken, since the GLUT is already set. At t = 2 the system is queried "what is x?". Since the GLUT was written before the information that "x = 4" was supplied, the GLUT cannot know what x is. If the GLUT somehow has the correct answer, then the GLUT goes beyond just having precomputed outputs to precomputed inputs: somehow the GLUT author also knew an event from the future, in this case that "x = 4" would be supplied at t = 1.

It would have to be a Cascading Input Giant Lookup Table (CIGLUT). E.g.:

At t = 1, input = "1) x = 4"
At t = 2, input = "1) x = 4" (all previous inputs) + "what is x?" (the new input)

We would have to postulate infinite storage and reaffirm our commitment to ignoring combinatorial explosions.
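Here is a minimal sketch of the difference (the table contents and key format are made up just for illustration):

#include <iostream>
#include <map>
#include <string>

int main() {
    // Simple GLUT: the output depends only on the current input, no memory.
    std::map<std::string, std::string> glut = {
        {"x = 4",      "ok"},
        {"what is x?", "???"}  // written at t = 0, before "x = 4" was ever supplied
    };

    // CIGLUT: the output depends on the entire input history so far,
    // so there is one entry per possible history.
    std::map<std::string, std::string> ciglut = {
        {"x = 4",              "ok"},
        {"x = 4 | what is x?", "x is 4"}
    };

    std::cout << glut["what is x?"] << "\n";            // prints ???
    std::cout << ciglut["x = 4 | what is x?"] << "\n";  // prints x is 4
}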

Think about it. I need to go to sleep now, it's 3 AM.

Comment by PK on Zombies! Zombies? · 2008-04-04T17:19:16.000Z · LW · GW

Humans have a metaphysical consciousness which is outside the mere physical world. I know this is true because this means I'm special, and I feel special, so it must be true. If you say human consciousness is not something metaphysical and special, then you are saying humans are no more special than animals or mere matter. You are saying that if you arrange mere matter in a certain way it will be just as special as me. Well, for your information, I'm really really special: therefore I'm right and you are wrong. In fact, I'm so special that there must be some way in which the universe says I'm special. Also, anyone attempting to take my specialness feelings about myself away from me is evil.

Comment by PK on Reductive Reference · 2008-04-04T03:29:06.000Z · LW · GW

Can someone just tell us dumbasses the difference between describing something and experiencing it?

Um... ok.

Description: If you roll your face on your keyboard you will feel the keys mushing and pressing against your face. The most pronounced features of the tactile experience will be the feeling of the ridges of the keys pressing against your forehead, eyebrows and cheekbones. You will also hear a subtle "thrumping" noise as the keys are being pressed. If you didn't put the cursor in a text editor you might hear some beeps from your computer. Once you lift your head you may still have some residual sensations on your face, most likely where the relatively sharp ridges of the keys came in contact with your skin.

Experience: Roll your face on your keyboard. Don't just read this, you have to actually roll your face on the keyboard if you want to experience it. 1, 2, 3, go ... bnkiv7n6ym7n9t675r

Did you notice any difference between the description and the experience?

Anyways, I still hold that you can only define reductionism up to a point, after which you are just wasting time.

Comment by PK on Reductive Reference · 2008-04-03T20:32:48.000Z · LW · GW

Too much philosophy and spinning around in circular definitions. Eliezer, you cannot transfer experiences, only words which hopefully point our minds to the right thing until we "get it". Layers upon layers of words trying to define reductionism won't make people who haven't "gotten it" yet "get it". It will just lead to increasingly more sophisticated confusion. I suppose the only thing that could snap people into "getting" reductionism at this point is lots of real-world examples, because that would emulate an experience. How is this useful for building an AGI anyway? Please change your explanation tactic or move on to a different topic (if you want).

Q: Is "snow is white" true? A: No, it is false. Sometimes it is yellow(don't eat it when yellow). Next question.

Comment by PK on Hand vs. Fingers · 2008-03-30T14:59:34.000Z · LW · GW

Everyone ignored my C++ example. Was I completely off base? If so, please tell me. IMHO we should look for technical examples to understand concepts like "reductionism". Otherwise we end up wasting time arguing about definitions and whatnot.

Personally, I find it irritating when a discussion starts with fuzzy terms and people proceed to add complexity making things fuzzier and fuzzier. In the end, you end up with confused philosophers and no practical knowledge whatsoever. This is why I like math or computer science examples. It connects what you are talking about to something real.

Comment by PK on Hand vs. Fingers · 2008-03-30T01:51:14.000Z · LW · GW

If people can understand the concept of unions from C/C++, they can understand reductionism. One can use different overlapping data structures to access the same physical locations in memory.

union mix_t {
    long l;
    struct {
        short hi;
        short lo;
    } s;
    char c[4];
} mix;

Is mix made up of a long, shorts, or chars? Silly question. mix.l, mix.s, and mix.c all access the same physical memory location.

This is reductionism in a nutshell: it's talking about the same physical thing using different data types. You can 'go up' (use big data types) or 'go down' (use small data types), but you are still referring to the same thing.
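A quick way to see it (the union is repeated so the snippet compiles on its own; the exact values printed depend on your platform's type sizes and byte order, and strictly speaking C++ only blesses reading the member you last wrote, so treat this purely as an illustration):

#include <cstdio>

union mix_t {
    long l;
    struct {
        short hi;
        short lo;
    } s;
    char c[4];
} mix;

int main() {
    mix.l = 1;  // write through the 'big' view
    std::printf("%d %d\n", mix.s.hi, mix.s.lo);  // read the same bytes back as two shorts
    std::printf("%d\n", mix.c[0]);               // ...or as a char
    return 0;
}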

In conclusion, aspiring rationalists should learn some basic C++.

Comment by PK on Reductionism · 2008-03-17T01:31:21.000Z · LW · GW

Caledonian's job is to contradict Eliezer.

Comment by PK on Penguicon & Blook · 2008-03-13T20:32:58.000Z · LW · GW

Eliezer, do you have a rough plan for when you will start programming an AI?

Comment by PK on Probability is in the Mind · 2008-03-12T21:56:17.000Z · LW · GW

The "probability" of an event is how much anticipation you have for that event occurring. For example if you assign a "probability" of 50% to a tossed coin landing heads then you are half anticipating the coin to land heads.

Comment by PK on Probability is in the Mind · 2008-03-12T17:42:24.000Z · LW · GW

Silas: My post wasn't meant to be "shockingly unintuitive", it was meant to illustrate Eliezer's point that probability is in the mind and not out there in reality in a ridiculously obvious way.

Am I somehow talking about something entirely different than what Eliezer was talking about? Or should I complexificationafize my vocabulary to seem more academic? English isn't my first language after all.

Comment by PK on Probability is in the Mind · 2008-03-12T16:32:38.000Z · LW · GW

Here is another example my dad, my brother, and I came up with when we were discussing probability.

Suppose there are 4 cards, an ace and 3 kings. They are shuffled and placed face down. I didn't look at the cards, my dad looked at the first card, and my brother looked at the first and second cards. What is the probability of the ace being one of the last 2 cards?
For me: 1/2.
For my dad: if he saw the ace, 0; otherwise 2/3.
For my brother: if he saw the ace, 0; otherwise 1.

How can there be different probabilities for the same event? It is because probability is something in the mind, calculated from imperfect knowledge. It is not a property of reality. Reality will take only a single path; we just don't know what that path is. It is pointless to ask for "the real likelihood" of an event. The likelihood depends on how much information you have. If you had all the information, the likelihood of the event would be 100% or 0%.
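You can check those numbers with a quick simulation (a rough sketch; the seed and trial count are arbitrary):

#include <iostream>
#include <random>

int main() {
    std::mt19937 rng(12345);
    std::uniform_int_distribution<int> ace_pos(0, 3);  // the ace is equally likely in any of the 4 positions

    const long long trials = 1000000;
    long long me = 0, dad = 0, dad_total = 0, bro = 0, bro_total = 0;

    for (long long t = 0; t < trials; ++t) {
        int ace = ace_pos(rng);
        bool in_last_two = (ace >= 2);

        // Me: saw nothing, so every trial counts.
        if (in_last_two) ++me;

        // Dad: saw the first card. Only count trials where he did NOT see the ace.
        if (ace != 0) { ++dad_total; if (in_last_two) ++dad; }

        // Brother: saw the first two cards. Only count trials where he did NOT see the ace.
        if (ace >= 2) { ++bro_total; if (in_last_two) ++bro; }
    }

    std::cout << "Me:      " << double(me) / trials     << "\n";  // about 0.5
    std::cout << "Dad:     " << double(dad) / dad_total << "\n";  // about 0.667
    std::cout << "Brother: " << double(bro) / bro_total << "\n";  // exactly 1
}

Same deck, same event, three different probabilities, because the three of us are conditioning on different information.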

Comment by PK on Mind Projection Fallacy · 2008-03-11T17:01:53.000Z · LW · GW

"Hard AI Future Salon" lecture, good talk. Most of the audience's questions however were very poor.

One more comment about the mind projection fallacy. Eliezer, you also have to keep in mind that the goal of a sci-fi writer is to make a compelling story which he can sell. Realism is only important insofar as it helps him achieve this goal. Agreed on the point that it's a fallacy, but don't expect it to change unless the audience demands/expects realism. http://tvtropes.org/ is full of tropes that illustrate stuff like that.

Comment by PK on Mind Projection Fallacy · 2008-03-11T04:06:07.000Z · LW · GW

Good post. I have a feeling I've read this very same example before from you Eliezer. I can't remember where.

Comment by PK on Righting a Wrong Question · 2008-03-09T17:12:21.000Z · LW · GW

OK, time to play:

Q: Why am I confused by the question "Do you have free will?"?
A: Because I don't know what "free will" really means.
Q: Why don't I know what "free will" means?
A: Because there is no clear explanation of it using words. It's an intuitive concept. It's a feeling. When I try to think of the details of it, it is like I'm trying to grab slime which slides through my fingers.
Q: What is the feeling of "free will"?
A: When people talk of "free will" they usually put it thusly: if one has "free will", he is in control of his own actions. If one doesn't have "free will", then it means outside forces like the laws of physics control his actions. Having "free will" feels good because being in control feels better than being controlled. On the other hand, those who have an appreciation for the absolute power of the laws of physics feel the need to bow down to them and acknowledge their status as the ones truly in control. The whole thing is very tribal really.
Q: Who is in control, me or the laws of physics?
A: Since currently saying [I] is equivalent to saying [a specific PK-shaped collection of atoms operating on the laws of physics], saying "I am in control" is equivalent to saying "a specific PK-shaped collection of atoms operating on the laws of physics is in control". The laws of physics are not an outside force apart from me; they are inside me too.
Q: Why do people have a tendency to believe their minds are somehow separate from the rest of the universe?
A: Ugghhh... I don't know the details well enough to answer that.

Comment by PK on Dissolving the Question · 2008-03-08T20:31:44.000Z · LW · GW

Ughh, more homework. Overcoming Bias should have a sister blog called Overcoming Laziness.

Comment by PK on Leave a Line of Retreat · 2008-02-26T20:35:44.000Z · LW · GW

This reminds me of an item from a list of "horrible job interview questions" we once devised for SIAI:

Would you kill babies if it was intrinsically the right thing to do? Yes/No

If you circled "no", explain under what circumstances you would not do the right thing to do:


If you circled "yes", how right would it have to be, for how many babies? ___

What a horrible horrible question. My answer is... what do you mean when you say "intrinsically the right thing to do"? The "right thing" according to whom? If it was the right thing according to an authority figure but I disagreed, I probably would not do it. If the circumstances were so extreme that I truly believed it was the right thing (e.g., not killing a baby results in the baby's death anyway + 1 million babies), then I would kill babies (assuming I could overcome my aversion to killing).

Actually I don't really know how I would react. This is how I wish I would act. Calmly theorizing in front of the computer, never having experienced circumstances remotely as awful, is not the same as being in those circumstances when the fear and dread overtake you. There would probably be a significant shift from what I consider and feel is "me" right now to the "me" I would become in that hypothetical situation.

Comment by PK on Politics is the Mind-Killer · 2008-02-23T05:43:03.000Z · LW · GW

Lately I've been thinking about "mind killing politics". I have come to the conclusion that this phenomenon is pretty much present to some degree in any kind of human communication where being wrong means you or your side lose status.

It is incorrect to assume that this bias can only occur when the topic involves government, religion, liberalism/conservatism or any other "political" topics. Communicating with someone who has a different opinion than you is sufficient for the "mind killing politics" bias to start creeping in.

The pressure to commit "mind killing politics" type biases is proportional to how much status one or one's side has to lose for being wrong in any given disagreement. This doesn't mean the bias can't be mixed or combined with other biases.

I've also noticed six factors that can increase or decrease the pressure to be biased.

1) If you are talking to your friends or people close to you that you trust, the pressure to be right will be reduced because they are less likely to subtract status from you for being wrong. Talking to strangers will increase it.

2) Having an audience will increase the pressure to be right. That's because the loss of status for being wrong is multiplied by the number of people who see you lose (each weighted by how important it is for them to consider you as having high status).

3) If someone is considered an 'expert', the pressure to be right will be enormous. That's because experts have special status for being knowledgeable about a topic and getting answers about it right. Every mistake is seen as reducing that expertise and proportionately reducing the status of the expert. Being wrong to someone considered a non-expert is even more painful than being wrong to an expert.

4) It is very hard psychologically to disagree with authority figures or the group consensus. Therefore "mind killing politics" biases will be replaced by other biases when there is disagreement with an authority figure or the group consensus, but will be amplified against those considered outside the social group.

5) People will easily spot "mind killing politics" biases in the enemy side but will deny, not notice, or rationalize the same biases in themselves.

6) And finally, "mind killing politics" biases can lead to agitation (i.e., triggering of the fight-or-flight response), which will amplify biased thinking.

Comment by PK on Arguing "By Definition" · 2008-02-21T00:21:00.000Z · LW · GW

Good post. So how do you usually respond to invalid "by definition" arguments? Is there any quick (but honest) way to disarm the argument, or is there too much inferential distance to cover?

Comment by PK on Taboo Your Words · 2008-02-18T19:57:31.000Z · LW · GW

Eliezer Yudkowsky said: It has an obvious failure mode if you try to communicate something too difficult without requisite preliminaries, like calculus without algebra. Taboo isn't magic, it won't let you cross a gap of months in an hour.

Fair enough. I accept this reason for not having your explanation of FAI before me at this very moment. However I'm still in "Hmmmm...scratches chin" mode. I will need to see said explanation before I will be in "Whoa! This is really cool!" mode.

Really? That's your concept of how to steer the future of Earth-originating intelligent life? "Shut up and do what I say"? Would you want someone else to follow that strategy, say Archimedes of Syracuse, if the future fell into their hands?

First of all, I would like to say that I don't spend a huge amount of time thinking of how to make an AGI "friendly" since I am busy with other things in my life. So forgive me if my reasoning has some obvious flaw(s) I overlooked. You would need to point out the flaws before I agree with you, however.

If I were writing an AGI I would start with "obey me" as the meta-instruction. Why? Because "obey me" is very simple and allows for corrections. If the AGI acts in some unexpected way, I could change it or halt it. Anything can be added as a subgoal to "obey me". On the other hand, if I use some different algorithm and the AGI starts acting in some weird way because I overlooked something, well, the situation is fubar. I'm locked out.

"You should consider looking for problems and failure modes in your own answer, rather than waiting for someone else to do it. What could go wrong if an AI obeyed you?"

There are plenty of things that could go wrong. For instance, if the AGI obeyed me but not in the way I expected. Or if the consequences of my request were unexpected and irreversible. This can be mitigated by asking for forecasts before asking for actions.

As I'm writing this I keep thinking of a million possible objections and rebuttals but that would make my post very very long.

P.S. Caledonian's post disappeared. May I suggest a YouTube-type system where posts that are considered bad are folded instead of deleted. This way you get free speech while keeping the signal-to-noise ratio in check.

Comment by PK on Taboo Your Words · 2008-02-18T16:45:21.000Z · LW · GW

@Richard Hollerith: Skipping all the introductory stuff to the part which tries to define FAI (I think), I see two parts. Richard Hollerith said:

"This vast inquiry[of the AI] will ask not only what future the humans would create if the humans have the luxury of [a)] avoiding unfortunate circumstances that no serious sane human observer would want the humans to endure, but also [b)] what future would be created by whatever intelligent agents ("choosers") the humans would create for the purpose of creating the future if the humans had the luxury"

a) What's a "serious sane human observer"? Taboo the words and synonyms. What are "unfortunate circumstances" that s/he would like to avoid? Taboo...

b)What is "the future humans would chose for the purpose of creating the future"? In what way exactly would they "chose" it? Taboo...

Good luck :-)

Eliezer Yudkowsky said: "Don't underestimate me so severely. You think I don't know how to define "Friendly" without using synonyms? Who do you think taught you the technique? Who do you think invented Friendly AI?"

I'm not trying to under/over/middle-estimate you, only theories which you publicly write about. Sometimes I'm a real meanie with theories, shoving hot pokers into them and all sorts of other nasty things. To me theories have no rights.

"... I've covered some of the ladder I used to climb to Friendly AI. But not all of it. So I'm not going to try to explain FAI as yet; more territory left to go." So are you saying that if at present you played a taboo game to communicate what "FAI" means to you, the effort would fail? I am interested in the intricacies of the taboo game including it's failure modes.

"But you (PK) are currently applying the Taboo technique correctly, which is the preliminary path I followed at the analogous point in my own reasoning; and I'm interested in seeing you follow it as far as you can. Maybe you can invent the rest of the ladder on your own. You're doing well so far. Maybe you'll even reach a different but valid destination." I actually already have a meaning for FAI in my head. It seems different from the way other people try to describe it. It's more concrete but seems less virtuous. It's something along the lines of "obey me".

Comment by PK on Taboo Your Words · 2008-02-17T22:50:56.000Z · LW · GW

^^^^Thank you. However, merely putting the technique into the "toolbox" and never looking back is not enough. We must go further. This technique should be used, at which point we will either reach new insights or falsify the method. Would you care to illustrate what FAI means to you, Eliezer? (Others are also invited to do so.)

Maybe the comment section of a blog isn't even the best medium for playing taboo. I don't know. I'm brainstorming productive ways/mediums to play taboo (assuming the method itself leads to something productive).

Comment by PK on Taboo Your Words · 2008-02-17T20:08:53.000Z · LW · GW

Julian Morrison said: "FAI is: a search amongst potentials which will find the reality in which humans best prosper." What is "prospering best"? You can't use "prospering", "best" or any synonyms.

Let's use the Taboo method to figure out FAI.

Comment by PK on Taboo Your Words · 2008-02-16T20:32:18.000Z · LW · GW

The game is not over! Michael Vassar said: "[FAI is ..] An optimization process that brings the universe towards the target of shared strong attractors in human high-level reflective aspiration."

For the sake of not dragging out the argument too much, let's assume I know what an optimization process and a human are.

Whats are "shared strong attractors"? You cant use the words "shared", "strong", "attractor" or any synonyms.

What's a "high-level reflective aspiration"? You can't use the words "high-level", "reflective ", "aspiration" or any synonyms.


Caledonian said: "Then declaring the intention to create such a thing takes for granted that there are shared strong attractors."

We can't really say whether there are "shared strong attractors" one way or the other until we agree on what that means. Otherwise it's like arguing about whether falling trees make "sound" in the forest. We must let the taboo game play out before we start arguing about things.

Comment by PK on Taboo Your Words · 2008-02-16T04:25:02.000Z · LW · GW

Sounds interesting. We must now verify if it works for useful questions.

Could someone explain what FAI is without using the words "Friendly", or any synonyms?

Comment by PK on Words as Hidden Inferences · 2008-02-04T01:15:37.000Z · LW · GW

Eliezer said: "Your brain doesn't treat words as logical definitions with no empirical consequences, and so neither should you. The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity."

What alternative model would you propose? I'm not quite ready yet to stop using words that imperfectly place objects into categories. I'll keep the fact that categories are imperfect in mind.

I really don't mean this in a condescending way. I'm just not sure what new belief this line of reasoning is supposed to convey.

Comment by PK on The Parable of Hemlock · 2008-02-03T17:46:38.000Z · LW · GW

I'm not really sure what the point of the post is.

Logic is always conditional. If the premises are true then the conclusion is true. That means we could reach the wrong conclusion with false premises.

Eliezer, are you saying we should stop or diminish our use of logic? Should I eat hemlock because I might be wrong about its lethality?

Comment by PK on Newcomb's Problem and Regret of Rationality · 2008-02-02T04:22:00.000Z · LW · GW

I agree that "rationality" should be the thing that makes you win but the Newcomb paradox seems kind of contrived.

If there is a more powerful entity throwing good utilities at normally dumb decisions and bad utilities at normally good decisions, then you can make any dumb thing look genius, because you are under different rules than the world we live in at present.

I would ask Alpha for help and do what he tells me to do. Alpha is an AI that is also never wrong when it comes to predicting the future, just like Omega. Alpha would examine Omega and me and extrapolate Omega's extrapolated decision. If there is a million in box B, I pick both; otherwise just B.

Looks like Omega will be wrong either way, or will I be wrong? Or will the universe crash?