Try as I might, I cannot find any reference to what the canonical way of building such counterfactual scenarios is. The closest I could get was http://lesswrong.com/lw/179/counterfactual_mugging_and_logical_uncertainty/ , where Vladimir Nesov seems to simply reduce logical uncertainty to ordinary uncertainty, but that doesn't seem to have anything to do with building formal theories and proving actions or any such thing.
To me, it seems largely arbitrary what an agent should do when faced with such a dilemma; it all depends on actually specifying what it means to test a logical counterfactual. If you don't specify what it means, anything could happen as a result.
I asked about these differences in my second post in this post tree, where I explained how I understood these counterfactuals to work. I explained as clearly as I could that, for example, calculators should work as they do in the real world. I did this in the hope that someone would voice disagreement if I had misunderstood how these logical counterfactuals work.
However, modifying any calculator would mean that there cannot be, in principle, any AI or agent "smart" enough to detect that it was in the counterfactual. Our mental hardware that checks whether the logical coin should've been heads or tails is a calculator just like any computer, and again, there does not seem to be any reason to assume Omega leaves some calculators unchanged while changing the results of others.
Unless this is just assumed to happen, with some silently assumed cutoff point past which calculators become so internal that they are left unmodified.
Well, to be exact, your formulation of this problem has pretty much left this counterfactual entirely undefined. The naive approximation, that the world is just like ours and Omega simply lies in the counterfactual, would not contain such weird calculators that give you wrong answers. If you want to complicate the problem by saying that some specific class of agents has a special class of calculators that one would usually expect to work in a certain way, but which actually work in a different way, well, so be it. That is, however, just a free-floating parameter you have left unspecified, and one that, unless stated otherwise, should be assumed not to be the case.
Yes, those agents you termed "stupid" in your post, right?
After asking about this on the #LW IRC channel, I take back my initial objection, but I still find this entire concept of logical uncertainty kinda suspicious.
Basically, if I'm understanding this correctly, Omega is simulating an alternate reality which is exactly like ours, where the only difference is that Omega says something like: "I just checked whether 0=0, and it turns out it's not. If it were, I would've given you moneyzzz (iff you would give me moneyzzz in this kind of situation), but now that 0!=0, I must ask you for $100." Then the agent notices, in that hypothetical situation, that actually 0=0, so Omega is lying, so he is in the hypothetical, and thus he can freely give moneyzzz away to help the real him. Then, because some agents can't tell for all possible logical coins whether they are being lied to, they might have to pay real moneyzzz, while sufficiently intelligent agents might be able to cheat the system if they can notice when they are lied to about the state of the logical coin.
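To check whether I'm parsing my own description correctly, here's a rough toy sketch of the payoffs (my own construction, not anything from the original post), crudely treating many independent logical coins as fair random bits and assuming a "smart" agent can always tell when Omega's claim about the coin is a lie:

```python
import random

# Toy model (my own construction) of Counterfactual Mugging with a logical coin,
# averaged over many independent coins crudely treated as fair random bits.
#
# Per coin:
#   tails -> Omega really asks the agent for $100.
#   heads -> Omega simulates the tails-announcement and pays $1,000,000
#            iff the simulated agent would have paid.
#
# A "smart" agent can compute the coin itself, so inside the heads-world
# simulation it notices Omega's tails-claim is a lie, concludes it is only
# hypothetical, and "pays" there for free while refusing in the real tails world.
# A "stupid" agent can't tell the cases apart and uses one fixed policy.

def average_payoff(pays_for_real, pays_in_simulation, n=100_000):
    total = 0
    for _ in range(n):
        heads = random.random() < 0.5
        if heads:
            total += 1_000_000 if pays_in_simulation else 0
        else:
            total -= 100 if pays_for_real else 0
    return total / n

print("stupid agent that pays:   ", average_payoff(True, True))    # ~ +499,950
print("stupid agent that refuses:", average_payoff(False, False))  # ~ 0
print("smart 'cheating' agent:   ", average_payoff(False, True))   # ~ +500,000
```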
I still don't understand why a stupid agent would want to make a smart AI that did pay. Also, there are many complications that restrict the decisions of both smart and stupid agents: given the argument I've given here, stupid agents might still prefer not paying, and smart agents might prefer paying, if they gain some kind of insight into how Omega chose these logical coins. Also, this logical-coin business seems to me like a not-too-special special class of Omega problems where some group of agents is able to detect whether they are in counterfactuals.
You lost me at this part:
In Counterfactual Mugging with a logical coin, a "stupid" agent that can't compute the outcome of the coinflip should agree to pay, and a "smart" agent that considers the coinflip as obvious as 1=1 should refuse to pay.
The problem is that I see no reason why the smart agent should refuse to pay. Both the stupid and the smart agent know with logical certainty that they just lost. There's no meaningful difference between being smart and stupid in this case, that I can see. Both, however, like to be offered such bets where a logical coin is flipped, so they pay.
I mean, we all agree that a "smart" agent that refused to pay here would receive $0 if Omega flipped a logical coin asking whether the first digit of pi is an odd number, while a "stupid" agent would get $1,000,000.
This was actually one of the things that inspired me to write this post. I was wondering if I could make use of the LW community to run such tests, because it would be interesting to practice these skills with consent, but trying to devise such tests stumped me. It's surprisingly hard to come up with a goal that's genuinely difficult to achieve in any not-overtly-hostile social context. Laborious, maybe, but that's not the same thing. I just kinda generalized from this that it should actually be pretty easy to pick any consciously named goal and achieve it, but that there must be some social inhibition.
The set of things that inspired me was wide and varying. It just may be reflected in how the essay was... Not as coherent as I'd have hoped.
That's a nice heuristic, but unfortunately it's easy to come up with cases where it's wrong. Say people want to play a game; I'll use chess because it comes readily to mind, not because it best exemplifies the problem. If you want to have a fun game of chess, ideally you'd hope for roughly equal matches. If 9 out of 10 players are pretty weak, just learning the rules, and want to play and have fun with it, then you, the 10th player, a strong club player and an outlier, cannot partake because you are too good. (With chess, you could maybe give up your queen or take a time handicap to make the games more interesting, but generally I feel that sort of trick still makes it less fun for all parties.)
While there might be obvious reasons to suspect bias is at play, unless you want to ban ever discussing topics that might involve bias, the best way around it that I know of is to actually focus on the topic. The point "whoa, you probably are biased if you think thoughts like this" is something I did take into consideration. I was still curious to hear LW's thoughts on this topic; the actual topic, not on whether LW thinks it's a bias-inducing topic or not. If you want me to add some disclaimer for other people, I'm open to suggestions. I was going to include one myself that basically said: "Failing socially in the way described here would at best be very, very weak evidence of you being socially gifted, intelligent, or whatever. The reasoning presented here is not peer-reviewed and may well contain errors." I did not, because I didn't want to add yet another shiny distraction from the actual point presented. I didn't think it would be needed, either.
Oh, yes, that is basically my understanding: We do social manipulation to the extent it is deemed "fair", that is, to the point it doesn't result in retaliation. But at some point it starts to result in such retaliation, and we have this "fairness"-sensor that tells us when to retaliate or watch out for retaliation.
I don't particularly care about manipulation that results in obtaining a salt shaker or a tennis partner. What I'm interested in is manipulation you can use to form alliances, make someone liable to help you with stuff you want, make them like you, make them think of you as their friend or "senpai" for lack of a better term, or make them fall in love with you. What also works is getting them to have sex with you, to reveal something embarrassing about themselves, or otherwise becoming part of something they hold sacred. Pretending to be a god would fall into this category. I'm struggling to explain why I think manipulation in those cases is iffy; I think it has to do with that kind of interaction assuming there are processes involved beyond self-regulation. With manipulation, you could bypass that, and in effect you would be lying about your alliance.
It is true many social interactions are not about anything deeper than getting the salt shaker. I kind of just didn't think of them while writing this post. I might need to clarify that point.
This I agree with completely. However, it sounding like power fantasy doesn't mean it's wrong or mistaken.
True. However, it's difficult to construct culturally neutral examples that are not obvious. The ones that pop into my mind are of the kind "it's wrong to be nice to an old, really simple-minded lady just so you can make her rewrite her will to your benefit", or "it's all right to try to make your roommate do the dishes as many times as you possibly can, as long as you're both on equal footing in this 'competition' of who can do the fewest dishes".
I'm not sure how helpful those kinds of examples are.
This strikes me as massively confused.
Keeping track of cancelled values is not required as long as you're working with a group, that is, a set (like the reals) together with an operation (like addition) that follows the kind of rules that addition on the integers and multiplication on the non-zero reals do. If you are working with a group, there's no sense in which those cancelled-out values are left dangling. Once you cancel them out, they are gone.
http://en.wikipedia.org/wiki/Group_%28mathematics%29 <- you can check the group axioms there; I won't list them here.
Then again, cancelling out, as it is procedurally done in math classes, requires each and every group axiom. That basically means it's nonsense to speak of cancelling out in structures that aren't groups. If you tried to cancel out stuff in a non-group, you'd basically be assuming stuff you know ain't true.
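For concreteness, here's a quick sketch (my own write-up, in multiplicative notation) of how a single cancellation step leans on the axioms, one after another:

```latex
\begin{align*}
a \cdot x &= a \cdot y \\
a^{-1} \cdot (a \cdot x) &= a^{-1} \cdot (a \cdot y) && \text{(an inverse $a^{-1}$ exists)}\\
(a^{-1} \cdot a) \cdot x &= (a^{-1} \cdot a) \cdot y && \text{(associativity)}\\
e \cdot x &= e \cdot y && \text{(definition of the inverse)}\\
x &= y && \text{(definition of the identity $e$)}
\end{align*}
```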
Which raises a question: what are these structures in advanced maths that you speak of?
Are you sure it wouldn't be rational to pay up? I mean, if the guy looks like he could do that for $5, I'd rather not take chances. If you pay and it turns out he didn't have all that torture equipment, you could just sue him and get that $5 back, since he defrauded you. If he starts making up rules about how you can never ever tell anyone else about this or later check the validity of his claim, or else he'll kidnap you, you should, for game-theoretic reasons, not comply, since being the kind of agent that accepts those terms makes you a valid target for such frauds. The reasons for not complying are the same as for one-boxing.
Actually, there is such a law. When you are born into this world, naked, without any sensory experiences, you cannot reasonably start out expecting that the next bit you experience is much more likely to be 1 rather than 0. And if you then encounter one hundred zillion bits and they are all 1, you still wouldn't assign a probability of 1/3^^^3 to the next bit being 0, if you're rational enough.
Of course, this is muddied by the fact that you're not born into this world without priors and all kinds of stuff that weighs on your shoulders. Evolution has done billions of years' worth of R&D on your priors to get them straight. However, the gap these evolution-set priors would have to cross to get even close to that absurd 1/3^^^3... It's a theoretical possibility that's by no stretch a realistic one.
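For a rough sense of scale (my own illustration, under the simplistic assumption of a Laplace-style rule-of-succession prior, i.e. a uniform prior over the unknown bit frequency):

```latex
% After observing N ones and zero zeroes, Laplace's rule of succession gives
P(\text{next bit} = 0 \mid N \text{ ones}) = \frac{0 + 1}{N + 2} = \frac{1}{N+2}.
% Even for an absurdly large N, say N = 10^{100}, this is about 10^{-100},
% which is still incomparably larger than 1/(3\uparrow\uparrow\uparrow 3).
```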
I don't think you need to change the domain name. For marketability, you might wanna name the parts so that stuff within your site becomes a brand in itself, so that greatplay.net becomes associated with " utilitarianism", " design", etc. Say, I read a blog by a chemist who has a series of blog posts titled "stuff i won't work with: ". I can't remember the domain name, but I know that whenever I want to read about a nasty chemical, I google that phrase.
yes. yes. i remember thinking "x + 0 =". after that it gets a bit fuzzy.
Qiaochu_Yuan already answered your question, but because he was pretty technical with his answer, I thought I should try to simplify the point a bit. The problem with division by zero is that division is essentially defined through multiplication and the existence of certain inverse elements. It's an axiom in group theory that there are inverse elements, that is, for each a there is an x such that a*x = 1. Our notation for that x is 1/a, and it's easy to see why a*(1/a) = 1. Division is defined by these inverse elements: a/b is calculated as a*(1/b), where (1/b) is the inverse of b.
But if you have both multiplication and addition, there is one interesting thing. If we assume addition is the group operation for all numbers (and we use "0" for the additive neutral element you get from adding together an element and its additive inverse, that is, a + (-a) = 0), and we want multiplication to work the way we like it to work (so that a*(x + y) = (a*x) + (a*y), that is, distributivity holds), something interesting happens.
Now, the neutral element 0 is such that x + 0 = x; this is the definition of the neutral element. Now watch the magic happen:
0*x
= (0 + 0)*x   (since 0 + 0 = 0)
= 0*x + 0*x   (by distributivity)
So 0*x = 0*x + 0*x.
We subtract 0*x from both sides (that is, we add the additive inverse of 0*x, which exists because addition forms a group), leaving us with 0*x = 0.
It doesn't matter what you multiply 0 by, you always end up with zero. So, assuming 1 and 0 are not the same number (in the zero ring they are, and 0 = 1 is also the only number in the entire zero ring), there can't be a number x such that 0*x = 1. Lacking an inverse element, there's no obvious way to define what it would mean to divide by zero. There are special situations where there is a natural way to interpret division by zero, and in those cases, go for it. However, that's separate from the division defined for other numbers.
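Put compactly (just restating the argument above in symbols):

```latex
% Assuming 1 \neq 0, no multiplicative inverse of 0 can exist:
\text{if } 0 \cdot x = 1 \text{ for some } x,\quad
\text{then } 1 = 0 \cdot x = 0,\quad
\text{contradicting } 1 \neq 0.
```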
And if you end up dividing by zero because you assumed somewhere that there actually was a number x such that 0*x = 1, well, that's just your own clumsiness.
Also, you can prove 1 = 2 if you multiply both sides by zero. 1 = 2. Proof: 1*0 = 2*0 => 0 = 0. Division and multiplication by zero work in opposite directions: multiplication gets you from "not equal" to "equal", division gets you from "equal" to "not equal".
My friend told me he wanted to see http://en.wikipedia.org/wiki/Andrei_Sakharov on this list. I must say that I don't know the guy, but based on the Wikipedia article, he was a brilliant Soviet nuclear physicist behind a few of the largest man-made explosions ever to happen, and somewhere around the 1960s he turned to political activism regarding the dangers posed by the nuclear arms race. In the political climate of the 1960s Soviet Union, that was a brave move too, and the powers that be made him lose a lot because of that choice.
Sequences contain a rational world view. Not a comprehensive one, but still, it gives some idea about how to avoid thinking stupid and how to communicate with other people that are also trying to find out what's true and what's not. It gives you words by which you can refer to problems in your world view, meta-standards to evaluate whether whatever you're doing is working, etc. I think of it as an unofficial manual to my brain and the world that surrounds me. You can just go ahead and figure out yourself what works, without reading manuals, but reading a manual before you go makes you better prepared.
The interaction between this simulated TDT agent and you is so complicated that I don't think many of the commenters here actually did the math to see how they should expect the simulated TDT agent to react in these situations. I know I didn't. I tried, and failed.
I got similar results when I tried the more nondescript "focus on your breathing; if you get lost in your thoughts, go back to breathing; try to observe what happens in your mind" style of meditation. I also got an intense feeling of euphoria on my third try, and a feeling of almost passing out under the storm of weird thoughts flowing in and out. That made me a bit scared of meditation, but this post series managed to scare me a whole lot more.
This probably doesn't interest many of you, but I'd be curious to hear any suggestions for inspiring works of fiction with hypercompetent characters in them. I watched the Bourne trilogy in the middle of reading this post, and now I want more! :)
My own ideas:
Live action:
- James Bond (Casino Royale / Quantum of Solace / Skyfall)
- House MD
- Sherlock
Anime:
- Death Note
- Golden Boy
I do think it is good to have some inspirational posts here that don't rely that much on actual argumentation but rather paint an example picture of where you could be when using rationality, of what rationality could look like. There are dangers to that, but still, I like these.
I guess the subject is a bit touchy now.
I had missed this. The original post read as really weird and hostile, but I only read it after having heard about this thread indirectly for days, mostly about how she later seemed pretty intelligent, so I dismissed what I saw and substituted what I ought to have seen. Thanks for pointing this out.
Upvoted
Is there any data supporting the idea that dvorak/colemak/some other newer keyboard layout is actually better than qwerty? Like, actual data collected by doing research on actual people who type stuff, on how their layout of choice affects their health and typing speed? I do know you get figures like "on average your fingers travel twice the distance if you type on qwerty as compared to some other layout", but is there actual data from actual typists?
I've been practicing dvorak for about a month. Not much deliberate practice since I got above 10 wpm (an hour a day for a week), but I've used it whenever there has been typing to be done. I've gotten to 40 wpm, having started with a 70 wpm qwerty speed. Incidentally, I've also forgotten how to type with qwerty.
I'd suggest you find a week when you are free to spend about an hour every day practicing dvorak and don't really need to type anything, and then maybe another week when you are not under any stress about your typing speed. After that, you should be able to type well enough to cope, but it's gonna take more time than that to get faster. If you already know some systematic touch-typing system and know how to use it, I think you might be able to retain your qwerty ability. I lost mine because my touch-typing system for qwerty was so wild and unorganized that learning this more proper style pretty much overwrote it. Also, knowing a proper touch-typing system probably helps you learn dvorak faster.
Welcome, it's fun to have you here.
So, the next thing: I think you should avoid the religion topic here. I mean, you are allowed to keep going with it, but I fear you are gonna wear yourself out by doing that. I think there are better topics to discuss, where both you and LW have a chance to learn something new and change their opinions. Learning something new is refreshing; discussions about religion rarely are.
Admittedly, I think that there is no god, but I also don't think anyone here will convince you of that. I think you actually have a higher chance of converting someone here than someone here has of converting you.
So come, share some of your thoughts about what LW is doing wrong, or just take part in whatever discussions here you find interesting. Welcome!
"Ylioppilasaukio 5"? I can't find Cafe Picnic at an address like that
I'm interested, and most likely I'll be there.
If you make a copy, then inform both the original and the copy of their states ("You're the original", "You're the first copy"), and then proceed to make a new copy of the original, informational equivalence exists only between copy number 2 and the original, making it back to 1/2, 1/4, 1/4.
Even if a majority of readers participated in these meetups every time, it wouldn't matter. Quoting the about post: ""Promoted" posts (appearing on the front page) are chosen by the editors on the basis of substantive new content, clear argument, good writing, popularity, and importance."
Meetup posts do not contain new, important, argumentative content. They are meta-level discussion, meta that is bit by bit trying to take over the whole of LW. I don't want an LW that exists for posts about LW. Meetup posts are not the only thing driving LW towards uselessness, but as far as I can tell, having those posts on the front page is by far the most visible and obvious warning sign.
So you can avoid being punished by not predicting potential punishers well enough, or by deciding to do something regardless of punishments you're about to receive? I'm not sure that's good.
Oh, thanks to more discussion today, I figured out why the dangerous idea is dangerous, and now I understand why people shouldn't seek it. More like, the actual idea is not dangerous, but it can potentially lead to dangerous ones. At least, if I understood the entire thing correctly. So, I understand that it is harmful for us to seek that idea, and if possible, it shouldn't be discussed.
I sought out the dangerous idea right after I heard about the commotion, and I was disappointed. I discussed the idea and thought about it hard. I'm still a bit unsure whether I figured out why people think of the idea as dangerous, but to me it seems just plain silly.
I don't regret knowing it. I figured right from the start that the probability of it actually being dangerous was low enough that I didn't need to care about it, and it seems my initial guess was spot on. And I really do dislike not knowing about things that everybody says are really dangerous and can cause me and my loved ones much agony, for reasons no one is allowed to tell.
Yes, but that incomplete one means his power can't override the powers others have. Even if he could, after paying attention to Allirea, understand her power, it doesn't follow from what we know of his powers so far that he could pay attention to her any more than any other person there could. Even some sort of power-detection field would reveal nothing more than "there's a vampire in that general direction that diverts attention paid to it", and that's if we assume it overrides her ability, which would make Eleazar severely handicapped in a fight anyway.
Yeah, and I wanted to say that you're treating the characters you create in an awful and cruel way. Stop that. They should be happy at least once in a while :p
Chapter 11:
Is the Allirea + Eleazar thing canon? It sure doesn't seem to follow from what we've seen before, unless Eleazar lied to Bella.
Mind explaining why? I don't see any reason it's any more true than it is false.
Oh, right, now I get it.
This isn't translatable as a function. 'Meaningful' and 'meaningless' aren't values that bivalent functions return, so they shouldn't be values in our logic.
So the sentence "The sentence 'Everything written on the board in Room 33 is either false or meaningless.' is meaningless" is not true?
Yes, humans performing outstandingly well on this sort of problem was my inspiration for this. I am not sure how far it is possible to generalize this sort of winning. Humans themselves are kinda complex machines, so if we start with a perfectly rational LW reader and a paperclip maximizer in a one-shot PD with a randomized payoff matrix, what's the smallest set of handicaps we need to give them to reach this super-optimal solution? At first I thought we could even remove the randomization altogether, but I think that makes the whole problem more ambiguous.
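As a side note, here's a minimal sketch (entirely my own construction, not from the post) of just the setup I have in mind: a one-shot PD whose payoff matrix is freshly randomized each game while keeping the usual PD ordering.

```python
import random

# Minimal sketch (my own construction) of a one-shot Prisoner's Dilemma with a
# randomized payoff matrix, keeping the usual ordering T > R > P > S for both
# players so that each randomly drawn game is still a genuine PD.

def random_pd_matrix():
    # Draw four distinct values, assigned as Sucker < Punishment < Reward < Temptation.
    s, p, r, t = sorted(random.sample(range(100), 4))
    return {
        ("C", "C"): (r, r),
        ("C", "D"): (s, t),
        ("D", "C"): (t, s),
        ("D", "D"): (p, p),
    }

matrix = random_pd_matrix()
for moves, payoffs in matrix.items():
    print(moves, "->", payoffs)
```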
Becoming a person doesn't seem like something that you can do free of cost. There seems to be a lot of complexity hidden in that "Become a person" part.
Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans. You'd need to find them in order to grasp what it means to have a being that lacks moral value, and "both ideas" refers to the distinct ways of explaining what sort of paperclip maximizer we're talking about.
But I'd think that if I only said "it doesn't have moral value in itself", you'd still have to go back through similar steps to find the property cluster to which we assign value. I tried to convey both ideas by using the word "soul" and claiming a lack of moral value.
It requires us to know what sort of utility function the other player has, at the very least, and even then the result might at best be mutual defection, or, against superrational players, mutual cooperation.
And? If you have multiple contradictory wishes about what to do next, some of them are bound to go unfulfilled. CEV or negotiation are just ways to decide which ones.
Why do you think I lose?
Because there are a lot more people with values totally different from yours, which made the CEV optimize for a future you didn't like at all. If you're negotiating with all those people, why would they give in to you any more than CEV would optimize for you?
So you're bound to end up losing in this game, anyway, right? Negotiation in itself won't bring you any additional power over the coherent extrapolated volition of humanity to change the future of the universe. If others think very much unlike you, you need to overpower them to bring your values back to the game or perish in the attempt.
The above is a caricature of 'coherence' as presented in the May 2004 document. If someone else can provide a better interpretation, that would be welcome.
It seemed accurate to me. Also, I didn't find anything in it that seemed frightening or such. Was it supposed to be problematic in some way?
Just an attempt to make it clear that we're dealing with something like an intelligent calculator here, with nothing in it that we'd find interesting or valuable in itself. Setting this up as the true PD.