Posts

On manipulating others 2013-06-16T17:44:06.359Z
Meetup posts as discussion threads, please 2011-02-14T11:49:40.201Z
The true prisoner's dilemma with skewed payoff matrix 2010-11-20T20:37:14.926Z

Comments

Comment by Jonii on Notes on logical priors from the MIRI workshop · 2013-09-16T20:41:19.756Z · LW · GW

Try as I might, I cannot find any reference to what the canonical way of building such counterfactual scenarios is. The closest I could get was http://lesswrong.com/lw/179/counterfactual_mugging_and_logical_uncertainty/ , where Vladimir Nesov seems to simply reduce logical uncertainty to ordinary uncertainty, but that does not seem to have anything to do with building formal theories and proving actions or any such thing.

To me, it seems largely arbitrary how an agent should act when faced with such a dilemma; it all depends on actually specifying what it means to test a logical counterfactual. If you don't specify what it means, anything could happen as a result.

Comment by Jonii on Notes on logical priors from the MIRI workshop · 2013-09-16T20:08:34.797Z · LW · GW

I asked about these differences in my second post in this post tree, where I explained how I understood these counterfactuals to work. I explained as clearly as I could that, for example, calculators should work as they do in the real world. I did this in the hope that someone would voice disagreement if I had misunderstood how these logical counterfactuals work.

However, modifying any calculator would mean that there cannot be, in principle, any agent or AI "smart" enough to detect that it was in a counterfactual. The mental hardware that checks whether the logical coin should have come up heads or tails is a calculator just like any computer, and again, there does not seem to be any reason to assume Omega leaves some calculators unchanged while changing the results of others.

Unless this is just assumed to happen, with some silently assumed cutoff point where calculators become so internal that they are left unmodified.

Comment by Jonii on Notes on logical priors from the MIRI workshop · 2013-09-16T19:45:40.443Z · LW · GW

Well, to be exact, your formulation of this problem has pretty much left this counterfactual entirely undefined. The naive approximation, that the world is just like ours and Omega simply lies in the counterfactual, would not contain such weird calculators that give you wrong answers. If you want to complicate the problem by saying that some specific class of agents has a special class of calculators that one would usually expect to work a certain way but that actually work differently, well, so be it. That, however, is just a free-floating parameter you have left unspecified, and unless stated otherwise, it should be assumed not to be the case.

Comment by Jonii on Notes on logical priors from the MIRI workshop · 2013-09-16T19:10:07.908Z · LW · GW

Yes, those agents you termed "stupid" in your post, right?

Comment by Jonii on Notes on logical priors from the MIRI workshop · 2013-09-16T18:15:47.646Z · LW · GW

After asking about this on the #LW IRC channel, I take back my initial objection, but I still find this entire concept of logical uncertainty kinda suspicious.

Basically, if I'm understanding this correctly, Omega is simulating an alternate reality which is exactly like ours, where the only difference is that Omega says something like "I just checked if 0=0, and it turns out it's not. If it were, I would've given you moneyzzz (iff you would give me moneyzzz in this kind of situation), but now that 0!=0, I must ask you for $100." Then the agent notices, in that hypothetical situation, that actually 0=0, so Omega is lying, so he is in the hypothetical, and thus he can freely give moneyzzz away to help the real you. Then, because some agents can't tell for all possible logical coins whether they are being lied to, they might have to pay real moneyzzz, while sufficiently intelligent agents might be able to cheat the system if they can notice when they are being lied to about the state of the logical coin.

I still don't understand why a stupid agent would want to make a smart AI that did pay. Also, there are many complications that restrict the decisions of both smart and stupid agents: given the argument I've given here, stupid agents still might prefer not paying, and smart agents might prefer paying, if they gain some kind of insight into how Omega chose these logical coins. Also, this logical-coin setup seems to me like a not-too-special special class of Omega problems where some group of agents is able to detect whether they are in a counterfactual.

Comment by Jonii on Notes on logical priors from the MIRI workshop · 2013-09-16T08:02:45.524Z · LW · GW

You lost me at this part:

In Counterfactual Mugging with a logical coin, a "stupid" agent that can't compute the outcome of the coinflip should agree to pay, and a "smart" agent that considers the coinflip as obvious as 1=1 should refuse to pay.

The problem is that I see no reason why the smart agent should refuse to pay. Both the stupid and the smart agent know with logical certainty that they just lost. There's no meaningful difference between being smart and stupid in this case, that I can see. Both, however, like to be offered bets like this, where a logical coin is flipped, so they pay.

I mean, we all agree that a "smart" agent that refused to pay here would receive $0 if Omega flipped a logical coin asking whether the first digit of pi is odd, while a "stupid" agent would get $1,000,000.

Comment by Jonii on On manipulating others · 2013-06-17T07:05:03.602Z · LW · GW

This actually was one of the things that inspired me to write this post. I was wondering whether I could make use of the LW community to run such tests, because it would be interesting to get to practice these skills with consent, but trying to devise such tests stumped me. It's actually pretty difficult to come up with a goal that is genuinely difficult to achieve in any not-overtly-hostile social context. Laborious, maybe, but that's not the same thing. I just kind of generalized from this that it should actually be pretty easy to pick any consciously named goal and achieve it, but that there must be some social inhibition.

The set of things that inspired me was wide and varied. That may be reflected in how the essay turned out... not as coherent as I'd have hoped.

Comment by Jonii on On manipulating others · 2013-06-17T00:38:29.897Z · LW · GW

That's a nice heuristic, but unfortunately it's easy to come up with cases where it is wrong. Say people want to play a game; I'll use chess for availability, not because it best exemplifies this problem. If you want to have a fun game of chess, ideally you'd hope for roughly equal matches. If 9 out of 10 players are pretty weak, just learning the rules, and want to play and have fun with it, then you, the 10th player, a strong club player and an outlier, cannot partake because you are too good (with chess, you could maybe try giving up your queen as a handicap, or taking a time handicap, to make games more interesting, but generally I feel that sort of trick still makes it less fun for all parties).

While there might be obvious reasons to suspect bias is at play, unless you want to ban ever discussing topics that might involve bias, the best way around it that I know of is to actually focus on the topic. Just stating "whoa, you probably are biased if you think thoughts like this" is something I did take into consideration. I was still curious to hear LW's thoughts on this topic. The actual topic, not on whether LW thinks it's a bias-inducing topic or not. If you want me to add a disclaimer for other people, I'm open to suggestions. I was going to include one myself, basically saying "Failing socially in the way described here would at best be very, very weak evidence of you being socially gifted, intelligent, or whatever. The reasoning presented here is not peer-reviewed and may well contain errors." I did not, because I didn't want to add yet another shiny distraction from the actual point being presented. I didn't think it would be needed, either.

Comment by Jonii on On manipulating others · 2013-06-16T22:32:43.400Z · LW · GW

Oh, yes, that is basically my understanding: We do social manipulation to the extent it is deemed "fair", that is, to the point it doesn't result in retaliation. But at some point it starts to result in such retaliation, and we have this "fairness"-sensor that tells us when to retaliate or watch out for retaliation.

I don't particularly care about manipulation that results in obtaining a salt shaker or a tennis partner. What I'm interested in is manipulation you can use to form alliances, make someone inclined to help you with stuff you want, make them like you, make them think of you as their friend or "senpai" for lack of a better term, or make them fall in love with you. Other examples are getting them to have sex with you, to reveal something embarrassing about themselves, or otherwise becoming part of something they hold sacred. Pretending to be a god would fall into this category. I'm struggling to explain why I think manipulation in those cases is iffy; I think it has to do with that kind of interaction assuming there are processes involved beyond self-regulation. With manipulation, you could bypass that, and in effect you would be lying about your alliance.

It is true many social interactions are not about anything deeper than getting the salt shaker. I kind of just didn't think of them while writing this post. I might need to clarify that point.

Comment by Jonii on On manipulating others · 2013-06-16T19:27:06.886Z · LW · GW

This I agree with completely. However, its sounding like a power fantasy doesn't mean it's wrong or mistaken.

Comment by Jonii on On manipulating others · 2013-06-16T18:31:16.253Z · LW · GW

True. However, it's difficult to construct culturally neutral examples that are not obvious. The ones that pop into my mind are of the kind "it's wrong to be nice to an old, really simple-minded lady because that way you can make her rewrite her will to your benefit", or "it's all right to try to make your roommate do the dishes as many times as you possibly can, as long as you're both on an equal footing in this 'competition' of 'who can do the fewest dishes'".

I'm not sure how helpful that kind of example is.

Comment by Jonii on Open Thread, April 15-30, 2013 · 2013-05-30T22:30:31.609Z · LW · GW

This strikes me as massively confused.

Keeping track of cancelled values is not required as long as you're working with a group, that is, a set (like the reals) and an operation (like addition) that follows the kind of rules that addition of integers and multiplication of non-zero real values do. If you are working with a group, there's no sense in which those cancelled-out values are left dangling. Once you cancel them out, they are gone.

http://en.wikipedia.org/wiki/Group_%28mathematics%29 <- you can check the group axioms here; I won't list them.

Then again, cancelling out, as it is procedurally done in math classes, requires each and every group axiom. That basically means it's nonsense to speak of cancelling out with structures that aren't groups. If you tried to cancel out stuff with a non-group, that'd basically be assuming stuff you know ain't true.
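For reference, here's the standard derivation of the cancellation law from the group axioms (just the textbook sketch, not something from the original thread), which shows how every axiom gets used:

```latex
% Cancellation in a group (G, *) with identity e: if a*b = a*c, then b = c.
\begin{align*}
a * b &= a * c \\
a^{-1} * (a * b) &= a^{-1} * (a * c) && \text{(an inverse of } a \text{ exists)} \\
(a^{-1} * a) * b &= (a^{-1} * a) * c && \text{(associativity)} \\
e * b &= e * c && \text{(definition of inverse)} \\
b &= c && \text{(definition of identity)}
\end{align*}
```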

Which raises the question: what are these structures in advanced maths that you speak of?

Comment by Jonii on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-15T14:11:06.569Z · LW · GW

Are you sure it wouldn't be rational to pay up? I mean, if the guy looks like he could do that for $5, I'd rather not take chances. If you pay, and it turns out he didn't have all that equipment for torture, you could just sue him and get that $5 back, since he defrauded you. If he starts making up rules about how you can never ever tell anyone else about this, or later check the validity of his claim, or else he'll kidnap you, then for game-theoretic reasons you should not abide, since being the kind of agent that accepts those terms makes you a valid target for such frauds. The reasons for not abiding are the same as for one-boxing.

Comment by Jonii on Pascal's Muggle: Infinitesimal Priors and Strong Evidence · 2013-05-15T13:56:40.589Z · LW · GW

Actually, there is such a law. You cannot reasonably start out, when you are born into this world, naked, without any sensory experiences, expecting that the next bit you experience is much more likely to be 1 than 0. If you encounter a hundred zillion bits and they are all 1, you still wouldn't assign a 1/3^^^3 probability to the next bit you see being 0, if you're rational enough.

Of course, this is muddied by the fact that you're not born into this world without priors and all kinds of other stuff that weighs on your shoulders. Evolution has done billions of years' worth of R&D on your priors to get them straight. However, the gap these evolution-set priors would have to cross to get even close to that absurd 1/3^^^3... It's a theoretical possibility that is by no stretch a realistic one.
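To put a rough number on that gap, here's a toy calculation using Laplace's rule of succession as a stand-in for whatever prior you actually hold (my own illustration, not from the thread):

```latex
% Laplace's rule of succession: with a uniform prior on the bit frequency,
% after observing n ones and zero zeroes,
P(\text{next bit} = 0 \mid n \text{ ones},\ 0 \text{ zeroes}) = \frac{1}{n + 2}
% Even 10^{100} consecutive 1s only pushes this down to roughly 10^{-100},
% which is nowhere near 1/3^^^3.
```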

Comment by Jonii on Open Thread, April 15-30, 2013 · 2013-04-25T20:08:42.975Z · LW · GW

I don't think you need to change the domain name. For marketability, you might want to have the parts named so that stuff within your site becomes a brand in itself, so that greatplay.net becomes associated with "utilitarianism", "design", etc. Say, I read a blog by a chemist who has a series of blog posts titled "stuff I won't work with: ...". I can't remember the domain name, but I know that whenever I want to read about nasty chemicals, I google that phrase.

Comment by Jonii on Open Thread, April 15-30, 2013 · 2013-04-19T11:39:44.471Z · LW · GW

Yes. Yes. I remember thinking "x + 0 =". After that it gets a bit fuzzy.

Comment by Jonii on Open Thread, April 15-30, 2013 · 2013-04-18T20:06:51.425Z · LW · GW

Qiaochu_Yuan already answered your question, but because he was pretty technical with his answer, I thought I should try to simplify the point a bit. The problem with division by zero is that division is essentially defined through multiplication and the existence of certain inverse elements. It's an axiom in itself in group theory that there are inverse elements, that is, for each a there is an x such that a*x = 1. Our notation for that x would be 1/a, and it's easy to see why a * (1/a) = 1. Division is defined by these inverse elements: a/b is calculated as a * (1/b), where (1/b) is the inverse of b.

But if you have both multiplication and addition, there is one interesting thing. If we assume addition is the group operation for all numbers (and we use "0" to signify the additive neutral element you get from adding together an element and its additive inverse, that is, a + (-a) = 0), and we want multiplication to work the way we like it to work (so that a*(x + y) = (a*x) + (a*y), that is, distributivity holds), something interesting happens.

Now, the neutral element 0 is such that x + 0 = x; this is by definition of the neutral element. Now watch the magic happen: 0*x = (0 + 0)*x = 0*x + 0*x. So 0*x = 0*x + 0*x.

We subtract 0*x from both sides, leaving us with 0*x = 0.

It doesn't matter what you multiply 0 with, you always end up with zero. So, assuming 1 and 0 are not the same number (in the zero ring they are the same, and that single number 0 = 1 is the only element of the entire ring), you can't get a number x such that 0*x = 1. Lacking an inverse element, there's no obvious way to define what it would mean to divide by zero. There are special situations where there is a natural way to interpret division by zero, and in those cases, go for it. However, that is separate from the division defined for other numbers.
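The whole argument fits in a couple of lines; here is the same derivation written out compactly (just restating the steps above):

```latex
% Distributivity forces 0*x = 0 for every x:
0 \cdot x = (0 + 0) \cdot x = 0 \cdot x + 0 \cdot x
\;\Longrightarrow\; 0 \cdot x = 0
% Hence, in any ring where 1 \neq 0, there is no x with 0 \cdot x = 1,
% so 0 has no multiplicative inverse and "a / 0" is left undefined.
```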

And, if you end up dividing by zero because you somewhere assumed that there actually was such a number x that 0*x = 1, well, that's just your own clumsiness.

Also, you can "prove" 1 = 2 if you multiply both sides by zero. 1 = 2. Proof: 1*0 = 2*0 => 0 = 0. Multiplication and division by zero work in opposite directions: multiplication takes you from not-equals to equals, and division would take you from equals to not-equals.

Comment by Jonii on A brief history of ethically concerned scientists · 2013-02-13T23:53:52.907Z · LW · GW

My friend told me he wanted to see http://en.wikipedia.org/wiki/Andrei_Sakharov on this list. I must say that I don't know the guy, but based on the Wikipedia article, he was a brilliant Soviet nuclear physicist behind a few of the largest man-made explosions ever to happen, and somewhere around the 1960s he turned to political activism regarding the dangers posed by the nuclear arms race. In the political climate of the 1960s Soviet Union, that was a brave move, too, and the powers that be made him lose much because of that choice.

Comment by Jonii on "Epiphany addiction" · 2012-08-04T16:10:31.862Z · LW · GW

The Sequences contain a rational worldview. Not a comprehensive one, but still, they give some idea of how to avoid thinking stupidly and how to communicate with other people who are also trying to find out what's true and what's not. They give you words by which you can refer to problems in your worldview, meta-standards for evaluating whether whatever you're doing is working, etc. I think of them as an unofficial manual to my brain and the world that surrounds me. You can just go ahead and figure out for yourself what works, without reading manuals, but reading a manual before you go makes you better prepared.

Comment by Jonii on Problematic Problems for TDT · 2012-05-23T21:32:40.024Z · LW · GW

The interaction of this simulated TDT agent and you is so complicated that I don't think many of the commenters here actually did the math to see how they should expect the simulated TDT agent to react in these situations. I know I didn't. I tried, and failed.

Comment by Jonii on Meditation, insight, and rationality. (Part 2 of 3) · 2012-05-16T15:32:07.517Z · LW · GW

I got similar results when I tried the more nondescript "focus on your breathing; if you get lost in your thoughts, go back to breathing; try to observe what happens in your mind" style of meditation. Also, I got an intense feeling of euphoria on my third try, and a feeling of almost passing out under the storm of weird thoughts flowing in and out. That made me a bit scared of meditation, but this post series managed to scare me a whole lot more.

Comment by Jonii on [deleted post] 2012-05-10T00:12:54.147Z

This probably doesn't interest many of you, but I'd be curious to hear any suggestions for inspiring works of fiction with hypercompetent characters in them. I watched the Bourne trilogy in the middle of reading this post, and now I want more! :)

My own ideas:

Live action: James Bond (Casino Royale / Quantum of Solace / Skyfall), House MD, Sherlock

Anime: Death Note, Golden Boy

Comment by Jonii on Leveling Up in Rationality: A Personal Journey · 2012-01-17T20:51:56.709Z · LW · GW

I do think it is good to have some inspirational posts here that don't rely that much on actual argumentation but rather paint an example picture of where you could be when using rationality, of what rationality could look like. There are dangers to that, but still, I like these.

Comment by Jonii on Welcome to Less Wrong! · 2011-12-26T02:36:41.514Z · LW · GW

I guess the subject is a bit touchy now.

Comment by Jonii on Welcome to Less Wrong! · 2011-12-26T01:50:40.256Z · LW · GW

I had missed this. The original post read as really weird and hostile, but I only read it after having heard about this thread indirectly for days, mostly about how she later seemed pretty intelligent, so I dismissed what I saw and substituted what I ought to have seen. Thanks for pointing this out.

Upvoted

Comment by Jonii on Spend Money on Ergonomics · 2011-12-25T01:50:14.268Z · LW · GW

Is there any data supporting the idea that Dvorak, Colemak, or some other new keyboard layout is actually better than QWERTY? Like, actual data collected by doing research on actual people who type: how their layout of choice affects their health and typing speed. I do know that you get figures like "on average your fingers travel twice the distance if you type on QWERTY compared to some other layout", but is there actual data from actual typists?

Comment by Jonii on Spend Money on Ergonomics · 2011-12-23T14:49:25.755Z · LW · GW

I've been practicing Dvorak for about a month. Not much since I got above 10 wpm (an hour a day for a week), but I've used it when there has been typing to be done. I've gotten to 40 wpm, and I started with a 70 wpm QWERTY speed. Incidentally, I've also forgotten how to type with QWERTY.

I'd suggest you find a week when you are free to use about an hour of your time every day to practice Dvorak and don't really need to type anything, and then maybe another week when you are not under any stress about your typing speed. After that, you should be able to type well enough to cope, but it's gonna take more time than that to get even faster. If you already know some systematic touch-typing system, and know how to use it, I think you might be able to retain your QWERTY ability. I lost that because my touch-typing system for QWERTY was so wild and unorganized, and learning this more proper style pretty much overwrote it. Also, knowing a proper touch-typing system probably helps you learn Dvorak faster.

Comment by Jonii on Welcome to Less Wrong! · 2011-12-23T00:38:40.010Z · LW · GW

Welcome, it's fun to have you here.

So, the next thing: I think you should avoid the religion topic here. I mean, you are allowed to continue with it, but I fear you are gonna wear yourself out by doing that. I think there are better topics to discuss, where both you and LW have a chance to learn something new and change their opinions. Learning something new is refreshing; discussions about religion rarely are.

Admittedly, I think that there is no god, but I also don't think anyone here will convince you of that. I think you actually have a higher chance of converting someone here than someone here has of converting you.

So come, share some of your thoughts about what LW is doing wrong, or just partake in whichever discussions here you find interesting. Welcome!

Comment by Jonii on Meetup : Less Wrong Helsinki meetup · 2011-08-18T15:00:00.522Z · LW · GW

"Ylioppilasaukio 5"? I can't find Cafe Picnic at an address like that

Comment by Jonii on Helsinki LW Meetup - Sat March 5th · 2011-03-02T15:08:56.136Z · LW · GW

I'm interested, and most likely I'll be there.

Comment by Jonii on A simple-minded theory of "observer fluid" · 2011-02-17T00:06:41.075Z · LW · GW

If you make a copy, then inform both the original and the copy of their states ("You're the original", "You're the first copy"), and then proceed to make a new copy of the original, information equivalence exists only between copy number 2 and the original, making it back to 1/2, 1/4, 1/4.

Comment by Jonii on Meetup posts as discussion threads, please · 2011-02-14T15:20:29.616Z · LW · GW

Even if a majority of readers participated in these meetups every time, it wouldn't matter. Quoting the About post: ""Promoted" posts (appearing on the front page) are chosen by the editors on the basis of substantive new content, clear argument, good writing, popularity, and importance."

Meetup posts do not contain new, important, argumentative content. They are meta-level discussion, meta that is bit by bit trying to take over the whole of LW. I don't want an LW that exists for posts about LW. Meetup posts are not the only thing driving LW towards uselessness, but as far as I can tell, having those posts on the front page is by far the most visible and obvious warning sign.

Comment by Jonii on Punishing future crimes · 2011-01-29T16:17:53.903Z · LW · GW

So you can avoid being punished by not predicting potential punishers well enough, or by deciding to do something regardless of the punishments you're about to receive? I'm not sure that's good.

Comment by Jonii on How To Lose 100 Karma In 6 Hours -- What Just Happened · 2010-12-12T02:10:08.790Z · LW · GW

Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn't seek it out. Or rather, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, if I understood the entire thing correctly. So, I understand that it is harmful for us to seek that idea out, and if possible, it shouldn't be discussed.

Comment by Jonii on How To Lose 100 Karma In 6 Hours -- What Just Happened · 2010-12-11T21:41:10.569Z · LW · GW

I sought out the dangerous idea right after I heard about the commotion, and I was disappointed. I discussed the idea and thought about it hard; I'm still a bit unsure whether I figured out why people think of the idea as dangerous, but to me it seems just plain silly.

I don't regret knowing it. I figured right from the start that the probability of it actually being dangerous was low enough that I didn't need to care, and it seems my initial guess was right on the spot. And I really do dislike not knowing about things that everybody says are really dangerous and can cause me and my loved ones much agony, for reasons no one is allowed to tell.

Comment by Jonii on Luminosity (Twilight fanfic) Part 2 Discussion Thread · 2010-12-01T15:49:11.155Z · LW · GW

Yes, but that incompleteness means his power can't override the powers others have. Even if he could, after paying attention to Allirea, understand her power, it doesn't follow from what we know of his powers so far that he could pay attention to her any more than any other person there could. Even some sort of power-detection field would reveal nothing more than "there is a vampire in that general direction that diverts attention paid to it", assuming it overrides her ability at all, and that would leave Eleazar severely handicapped in a fight anyway.

Yeah, and I wanted to say that you're treating the characters you create in an awful and cruel way. Stop that. They should be happy at least once in a while :p

Comment by Jonii on Luminosity (Twilight fanfic) Part 2 Discussion Thread · 2010-12-01T14:45:00.972Z · LW · GW

Chapter 11:

Is the Allirea + Eleazar thing canon? It sure doesn't seem to follow from what we've seen before, unless Eleazar lied to Bella.

Comment by Jonii on Unsolved Problems in Philosophy Part 1: The Liar's Paradox · 2010-11-30T17:28:02.359Z · LW · GW

Mind explaining why? I don't see any reason it's any more true than it is false.

Comment by Jonii on Unsolved Problems in Philosophy Part 1: The Liar's Paradox · 2010-11-30T16:17:54.031Z · LW · GW

Oh, right, now I get it.

Comment by Jonii on Unsolved Problems in Philosophy Part 1: The Liar's Paradox · 2010-11-30T14:39:20.354Z · LW · GW

This isn't translatable as a function. 'Meaningful' and 'meaningless' aren't values bivalent functions return so they shouldn't be values in our logic.

So the sentence "The sentence 'Everything written on the board in Room 33 is either false or meaningless.' is meaningless" is not true?

Comment by Jonii on The true prisoner's dilemma with skewed payoff matrix · 2010-11-30T00:28:37.085Z · LW · GW

Yes, humans performing outstandingly well in this sort of problem was my inspiration for this. I am not sure how far it is possible to generalize this sort of winning. Humans themselves are kinda complex machines, so if we start with a perfectly rational LW reader and a paperclip maximizer in a one-shot PD with a randomized payoff matrix, what is the smallest set of handicaps we need to give them to reach this super-optimal solution? At first, I thought we could even remove the randomization altogether, but I think that makes the whole problem more ambiguous.

Comment by Jonii on Agents of No Moral Value: Constrained Cognition? · 2010-11-22T15:34:30.409Z · LW · GW

Becoming a person doesn't seem like something that you can do free of cost. There seems to be a lot of complexity hidden in that "Become a person" part.

Comment by Jonii on The true prisoner's dilemma with skewed payoff matrix · 2010-11-21T15:19:44.260Z · LW · GW

Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans. You'd need to find them in order to grasp what it means to have a being that lacks moral value, and "both ideas" refers to the two distinct ways of explaining what sort of paperclip maximizer we're talking about.

Comment by Jonii on The true prisoner's dilemma with skewed payoff matrix · 2010-11-21T14:29:02.572Z · LW · GW

But I'd think that if I only said "it doesn't have moral value in itself", you'd still have to go back through similar steps to find the property cluster we assign value to. I tried to convey both ideas by using the word "soul" and claiming a lack of moral value.

Comment by Jonii on The true prisoner's dilemma with skewed payoff matrix · 2010-11-17T19:54:34.876Z · LW · GW

It requires us to know what sort of utility function the other player has, at the very least, and even then the result might be, at best, mutual defection or, against superrational players, mutual cooperation.

Comment by Jonii on Criticisms of CEV (request for links) · 2010-11-17T00:38:35.904Z · LW · GW

And? If you have multiple contradictory wishes about what to do next, some of them are bound to go unfulfilled. CEV and negotiation are just ways to decide which ones.

Comment by Jonii on Criticisms of CEV (request for links) · 2010-11-16T23:36:42.041Z · LW · GW

Why do you think I lose?

Because there are a lot more people with values totally different from yours, which is what made the CEV optimize for a future that you didn't like at all. If you're negotiating with all those people, why would they give in to you any more than the CEV would optimize for you?

Comment by Jonii on Criticisms of CEV (request for links) · 2010-11-16T23:03:37.105Z · LW · GW

So you're bound to end up losing in this game anyway, right? Negotiation in itself won't bring you any more power to change the future of the universe than the coherent extrapolated volition of humanity would. If others think very much unlike you, you need to overpower them to bring your values back into the game, or perish in the attempt.

Comment by Jonii on Criticisms of CEV (request for links) · 2010-11-16T21:24:17.393Z · LW · GW

The above is a caricature of 'coherence' as presented in the May 2004 document. If someone else can provide a better interpretation, that would be welcome.

It seemed accurate to me. Also, I didn't find any problems in it that seem frightening or anything. Was it supposed to be problematic in some way?

Comment by Jonii on The true prisoner's dilemma with skewed payoff matrix · 2010-11-16T20:36:29.109Z · LW · GW

Just an attempt to make it clear that we're dealing with something like an intelligent calculator here, with nothing in it that we'd find interesting or valuable in itself. Setting this up as the true PD.