Comments
shakespeare is good tho
Well, now he has another reason not to change his mind. Seems unwise, even if he's right about everything.
What does "an action that harms another agent" mean? For instance, if I threaten not to give you a chicken unless you give me $5, does "I don't give you a chicken" count as "a course of action that harms another agent"? Or does it have to be an active course, rather than an act of omission?
It's not blackmail unless, given that I don't give you $5, you would be worse off, CDT-wise, not giving me the chicken than giving me the chicken. Which is to say: you really want to give me the chicken, but you're threatening to withhold it because you think you can make $5 out of it. If I were a Don't-give-$5-bot, or just broke, you would have no reason to threaten to withhold the chicken. If you don't want to give me the chicken, but are willing to do so if I give you $5, that's just normal trade.
"Wanna see something cool?"
Bob Dylan's new album ("Tempest") is perfect. At the time of posting, you can listen to it free on the iTunes store. I suggest you do so.
On another note, I'm currently listening to all the Miles Davis studio recordings and assembling my own best-of list. It'll probably be complete by next month, and I'll be happy to share the playlist with anyone who's interested.
Thomas Bergersen is just wonderful. Also, I've been listening to a lot of Miles Davis (I'm always listening to a lot of Miles Davis, but I haven't posted in one of these threads before). I especially recommend In a Silent Way.
Murakami is still the only currently living master of magical realism
Salman Rushdie. Salman Rushdie Salman Rushdie Salman Rushdie. Salman Rushdie.
If you haven't read much other Italo Calvino, "Invisible Cities" is really, really, really great.
I have to say, as a more-or-less lifelongish fan of Oscar Wilde (first read "The Happy Prince" when I was eight or nine), that the ending to "The Importance of Being Earnest" is especially weak. I like the way he builds his house of cards in that play, and I like the dialogue, but (and I think I probably speak for a lot of Wilde fans here) the way he knocks the cards down really isn't all that clever or funny. For a smarter Wilde play, see "A Woman of No Importance", although his best works are his children's stories, "The Picture of Dorian Gray", and "The Ballad of Reading Gaol" (although it is not, in fact, the case that "each man kills the thing he loves").
(Also I should mention that I recently reread "The Code of the Woosters" and laughed myself inside-out.)
You sure about this?
Nope, not sure at all.
I don't think that question's going to give you the information you want - when, in the last couple of thousand years, if Jews had wanted to stone apostates to death, would they have been able to do it? The diasporan condition doesn't really allow it. I think Christianity really is the canonical example of the withering away of religiosity - and that happened through a succession of internal revolutions ("In Praise of Folly", Lutheranism, the English Reformation, etc.) which themselves happened for a variety of reasons, not all pure or based in rationality (Henry VIII's split with Rome, for example), but which had the effect of demystifying the church and thereby shrinking the domain of its influence. I think. Although it's hard to interpret the Enlightenment as a movement internal to Christianity, so this only gets you so far, I suppose.
I agree with pretty much everything you've said here, except:
You only cooperate if you expect your opponent to cooperate if he expects you to cooperate ad nauseam.
You don't actually need to continue this chain - if you're playing against any opponent which cooperates iff you cooperate, then you want to cooperate - even if the opponent would also cooperate against someone who cooperated no matter what. So your statement is also true without the "ad nauseam" (provided the opponent would defect if you defected).
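To make the point concrete, here's a minimal sketch (the payoff matrix is an assumed standard Prisoner's Dilemma table, my own illustration rather than anything from the discussion): against an opponent who simply mirrors your move, cooperating is the better choice, with no infinite regress of expectations required.

```python
# Assumed one-shot Prisoner's Dilemma payoffs (illustrative values):
# (my_move, their_move) -> my payoff
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def mirror_opponent(my_move):
    """An opponent that cooperates iff I cooperate."""
    return my_move

def my_payoff(my_move):
    return PAYOFF[(my_move, mirror_opponent(my_move))]

# Cooperating yields mutual cooperation (3); defecting yields mutual
# defection (1) - so cooperation wins against any mirroring opponent.
best_move = max(["C", "D"], key=my_payoff)
```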
What sort of examples can you bring up of custom marital contracts that would make people scream in horror? My guess is that people would generally feel queasy about allowing legal enforcement of what looks like slavish or abusive relationships. I think this would be a genuine cause for concern, not because I don't think that people should be able to enter whatever relationships please them in principle, but because in practice I'm concerned about people being coerced into signing contracts harmful to themselves. Not sure where I'd draw the line exactly; this is probably a Hard Problem.
Remember that "enforcing contracts" could mean two things. It could mean that the government steps in and makes the parties do what they said they would - it keeps whipping them until they follow through. It could also mean punishing the parties for damage done on the other end when they breach the contract. For example, in a world in which prostitution is legal, X proposes to pay Y for sex. Y accepts. X hands over the money. Y refuses to have sex with X. The horrific version of this is the government comes in and "enforces" the contract... by holding down Y and, well, yeah. The alternative is the government comes in, sees that Y has taken money from X by fraud, and punishes Y the same way it would punish any other thief. The second option is, I think, both more intuitive and less massively disturbing.
Thank you. I had expected the bottom to drop out of it somehow.
EDIT: Although come to think of it, I'm not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e. there would not be a difference between "kill 1 person, prevent 1000 mutilations" + "kill 1 person, prevent 1000 mutilations" and "kill 2 people, prevent 2000 mutilations"). Will have to think on it some more.
Can anyone explain what goes wrong if you say something like, "The marginal utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?
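One hypothetical way to formalise the suggestion (the saturating functional form and all the numbers here are my own assumptions, purely for illustration) is to let total disutility approach an asymptote as occurrences accumulate, with the torture asymptote far above the dust-speck one:

```python
import math

def total_disutility(n, asymptote):
    # Illustrative saturating form: approaches `asymptote` as n grows,
    # so no number of occurrences can ever exceed it.
    return asymptote * (1 - math.exp(-n))

U_TORTURE, U_SPECK = 1e9, 1.0  # assumed asymptotes: torture >> speck

one_torture = total_disutility(1, U_TORTURE)
many_specks = total_disutility(10**18, U_SPECK)  # stand-in for "very many"
# Any number of specks stays below the speck asymptote (1.0), which sits
# far below the disutility of a single torture, so this agent always
# refuses the torture - which is what the proposal is after.
```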
That last sentence didn't make sense to me when I first looked at this. Think you must mean "worse", not "better".
This variation of the problem was invented in the follow-up post (I think it was called "Sneaky strategies for TDT" or something like that):
Omega tells you that earlier he flipped a coin. If the coin came down heads, he simulated a CDT agent facing this problem; if it came down tails, he simulated a TDT agent facing this problem. In either case, if the simulated agent one-boxed, there is $1000000 in Box-B; if it two-boxed, Box-B is empty. In this case TDT still one-boxes (a 50% chance of $1000000 dominates a 100% chance of $1000), and CDT still two-boxes (because that's what CDT does). In this case, even though both agents have an equal chance of being simulated, CDT out-performs TDT (average payoffs of $501000 vs. $500000) - CDT takes advantage of TDT's prudence, and TDT suffers for CDT's lack of it. Notice also that TDT cannot do better by behaving like CDT (both would get payoffs of $1000). This shows that the class of problems we're concerned with is not so much "fair" vs. "unfair", but more like "those problems on which the best I can do is not necessarily the best anyone can do". We can call it "fairness" if we want, but it's not like Omega is discriminating against TDT in this case.
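The averages can be checked in a few lines (Python used just to spell out the arithmetic; the $1000/$1000000 stakes are the traditional ones):

```python
BOX_A, BOX_B = 1_000, 1_000_000

def expected_payoff(one_box):
    """Average payoff for the real agent, over Omega's 50/50 coin."""
    total = 0.0
    for simulated in ("TDT", "CDT"):          # each simulated w.p. 1/2
        sim_one_boxed = (simulated == "TDT")  # TDT one-boxes, CDT two-boxes
        b_contents = BOX_B if sim_one_boxed else 0
        # One-boxers take only Box-B; two-boxers take both boxes.
        total += 0.5 * (b_contents if one_box else b_contents + BOX_A)
    return total

tdt_payoff = expected_payoff(one_box=True)   # the TDT agent one-boxes
cdt_payoff = expected_payoff(one_box=False)  # the CDT agent two-boxes
```

With these stakes the one-boxer averages $500000 and the two-boxer $501000, so the two-boxer comes out $1000 ahead.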
Wait a minute, what exactly do you mean by "you"? TDT, or "any agent whatsoever"? If it's TDT alone, why? If I read you correctly, you already agree that it's not because Omega said "running TDT" instead of "running WTF-DT". If it's "any agent whatsoever", then are you really sure the simulated and real problems aren't actually the same? (I'm sure they aren't, but, just checking.)
Well, no, this would be my disagreement: it's precisely because Omega told you that the simulated agent is running TDT that only TDT could or could not be the simulation; the simulated and real problem are, for all intents and purposes, identical (Omega doesn't actually need to put a reward in the simulated boxes, because he doesn't need to reward the simulated agent, but both problems appear exactly the same to the simulated and real TDT agents).
Well, in the problem you present here TDT would 2-box, but you've avoided the hard part of the problem from the OP, in which there is no way to tell whether you're in the simulation or not (or at least there is no way for the simulated you to tell), unless you're running some algorithm other than TDT.
"Father figure" seems to me to permit either position, "father" not so much. It's always troublesome when someone declares that you can only be properly impartial by agreeing with them.
For iterated 2 player's dilemma, you cooperate when the other player cooperates, and defect when the other player defects. Always cooperating is not the best strategy; you need to respond to the other player's actions.
Actually you only cooperate if the other player would defect if you didn't cooperate. If they cooperate no matter what, defect.
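A toy iterated-PD simulation makes this concrete (the payoff matrix is an assumed standard one, not taken from the discussion above): defection strictly beats cooperation against an unconditional cooperator, while a reciprocating strategy like tit-for-tat sustains mutual cooperation.

```python
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play(my_strategy, their_strategy, rounds=10):
    """Return my total score over `rounds` simultaneous-move rounds."""
    my_hist, their_hist, score = [], [], 0
    for _ in range(rounds):
        my_move = my_strategy(their_hist)      # each sees the other's past
        their_move = their_strategy(my_hist)
        my_hist.append(my_move)
        their_hist.append(their_move)
        score += PAYOFF[(my_move, their_move)]
    return score

always_cooperate = lambda opp_hist: "C"
always_defect = lambda opp_hist: "D"
tit_for_tat = lambda opp_hist: opp_hist[-1] if opp_hist else "C"

# Over 10 rounds: always-defect scores 50 against an unconditional
# cooperator, while cooperating back only scores 30 - so against an
# opponent who cooperates no matter what, defection wins.
```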
I guess so, although looking at it now Elcenia seems to be pretty massive. It will take me a couple of weeks to catch up at least (unless it's exceptionally compelling, in which case damn you in advance for taking up all my time), and we also have to allow for the possibility that it's not just my kind of thing, in which case trying to finish it will make me miserable and I won't be much use to you anyway. But sure, I'll give it a shot.
Sent.
Okay, I wrote up my thoughts, but it's pretty long and I'm not sure it's fair to post it here (also it's too long for a PM). Do you have an email I can send it to?
What happens if you're using this method and you're offered a gamble with a 49% chance of gaining 1,000,000 utils and a 51% chance of losing 5 utils (if you don't take the deal you gain and lose nothing)? Isn't the "typical outcome" here a loss, even though we might really, really want to take the gamble? Or have I misunderstood what you propose?
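Spelling out the arithmetic of the gamble (numbers taken from the question itself):

```python
p_gain, gain = 0.49, 1_000_000   # utils
p_loss, loss = 0.51, 5           # utils

expected_value = p_gain * gain - p_loss * loss  # 489997.45 utils
# The median outcome of a single play is a 5-util loss, yet the expected
# value is hugely positive - which is exactly the tension being pointed at.
```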
I might be interested in giving a fuller critique of this at some point (but then who the hell am I), but for now I'll confine myself to just one point:
It was, of course, a highly ceremonial occasion...
The reader knows that the narrator knows more about this world than they do. The reader is okay with that. Trying to impart information by pretending that the reader already knows it seems clumsy and distracting to me. Compare with:
It was a highly ceremonial occasion, excruciatingly ritualized, and he was bored.
I think this is fine. No need to pretend you're the reader's chum.
I think the clearest and simplest version of Problem 1 is where Omega chooses to simulate a CDT agent with .5 probability and a TDT agent with .5 probability. Let's say that Value-B is $1000000, as is traditional, and Value-A is $1000. TDT will one-box for an expected value of $500000 (as opposed to $1000 if it two-boxes), and CDT will always two-box, and receive an expected $501000. Both TDT and CDT have an equal chance of playing against each other in this version, and an equal chance of playing against themselves, and yet CDT still outperforms. It seems TDT suffers for CDT's irrationality, and CDT benefits from TDT's rationality. Very troubling.
EDIT: (I will note, though, that a TDT agent still can't do any better by two-boxing - only make CDT do worse).
Hmm, if I've understood this correctly, it's the way I've always thought about decision theory for as long as I've had a concept of expected utility maximisation. Which makes me think I must have missed some important aspect of the ex post version.
I'm not sure whether it is the case that primitive cultures have a category of things they think of as "supernatural" - the pagan gods were certainly quite literal: they lived on Olympus, they mated with humans, they were birthed. I wonder whether the distinction between "natural" and "supernatural" only comes about when it becomes clear that gods don't belong in the former category.
And that's without even getting into my experiences, or those close to me.
Well, don't be coy. There's no point in withholding your strongest piece of evidence. Please, get into it.
Why is it important for a decision theory to pass fair tests but not unfair tests?
Well, on unfair tests a decision theory still needs to do as well as possible. If we had a version of the original Newcomb's problem, with the one difference that a CDT agent gets $1 billion just for showing up, it's still incumbent upon a TDT agent to walk away with $1000000 rather than $1000. The "unfair" class of problems is the class where "winning as much as possible" is distinct from "winning the most out of all possible agents".
Why are we not counting philosophers? Isn't that like saying, "Not counting physicists, where's this supposed interest in gravity?"
Well, I've had a think about it, and I've concluded that it would matter how great the difference between TDT and TDT-prime is. If TDT-prime is almost the same as TDT, but has an extra stage in its algorithm in which it converts all dollar amounts to yen, it should still be able to prove that it is isomorphic to Omega's simulation, and therefore will not be able to take advantage of "logical separation".
But if TDT-prime is different in a way that makes it non-isomorphic, i.e. it sometimes gives a different output given the same inputs, that may still not be enough to "separate" them. If TDT-prime acts the same as TDT, except when there is a walrus in the vicinity, in which case it tries to train the walrus to fight crime, it is still the case in this walrus-free problem that it makes exactly the same choice as the simulation (?). It's as if you need the ability to prove that two agents necessarily give the same output for the particular problem you're faced with, without proving what output those agents actually give, and that sure looks crazy-hard.
EDIT: I mean crazy-hard for the general case, but much, much easier for all the cases where the two agents are actually the same.
EDIT 2: On the subject of fairness, my first thoughts: A fair problem is one in which if you had arrived at your decision by a coin flip (which is as transparently predictable as your actual decision process - i.e. Omega can predict whether it's going to come down heads or tails with perfect accuracy), you would be rewarded or punished no more or less than you would be using your actual decision algorithm (and this applies to every available option).
EDIT 3: Sorry to go on like this, but I've just realised that won't work in situations where some other agent bases their decision on whether you're predicting what their decision will be, i.e. Prisoner's Dilemma.
Hmm, so TDT-prime would reason something like, "The TDT simulation will one-box because, not knowing that it's the simulation, but also knowing that the simulation will use exactly the same decision theory as itself, it will conclude that the simulation will do the same thing as itself and so one-boxing is the best option. However, I'm different to the TDT-simulation, and therefore I can safely two-box without affecting its decision." In which case, does it matter how inconsequential the difference is? Yep, I'm confused.
You can see that something funny has happened by postulating TDT-prime, which is identical to TDT except that Omega doesn't recognize it as a duplicate (e.g., it differs in some way that should be irrelevant). TDT-prime would two-box, and win.
I don't think so. If TDT-prime two boxes, the TDT simulation two-boxes, so only one box is full, so TDT-prime walks away with $1000. Omega doesn't check what decision theory you're using at all - it just simulates TDT and bases its decision on that. I do think that this ought to fall outside a rigorously defined class of "fair" problems, but it doesn't matter whether Omega can recognise you as a TDT-agent or not.
This problem is the reason why decision theories have to be non-deterministic. It comes up all the time in real life: I try to guess what safe combination you chose, try that combination, and if it works I take all your money.
Of course, you can just set up the thought experiment with the proviso that "be unpredictable" is not a possible move - in fact that's the whole point of Omega in these sorts of problems. If Omega's trying to break into your safe, he takes your money. In Nesov's problem, if you can't make yourself unpredictable, then you win nothing - it's not even worth your time to open the box. In both cases, a TDT agent does strictly as well as it possibly could - the fact that there's $100 somewhere in the vicinity doesn't change that.
Okay, so there's no such thing as jackalopes. Now I know.
At a certain point the psychological quality of life of living individuals that comes from living in a society with a certain structure and values may trump the right of individuals who thought they were dead to live once more.
This is vague. Can you pinpoint exactly why you think this would damage people's psychological quality of life?
If information cannot travel back more than six hours
This does seem to be a constraint that exclusively affects the time-turners - otherwise prophecies wouldn't be possible. It also seems like it's an artificial rule rather than a deep law of magic, because after the Stanford Prison experiment Bones tells Dumbledore that she has information from four hours in the future and asks whether he'd like to know it. That there is relevant information from four hours in the future is itself information from the future - she would not have said that if it were otherwise - so it seems there must be exemptions of that kind.
Alternative hypothesis: prophecies are jive, and Eliezer didn't think of the other thing.
Edit: In other words, I think Torture v. Specks is just a restatement of the Repugnant Conclusion.
The Repugnant Conclusion can be rejected by average-utilitarianism, whereas in Torture vs. Dustspecks average-utilitarianism still tells you to torture, because the disutility of 50 years of torture divided among 3^^^3 people is less than the disutility of 3^^^3 dustspecks divided among 3^^^3 people. That's an important structural difference to the thought experiment.
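The comparison can be spelled out with a stand-in for 3^^^3 (which is far too large to compute with; any huge N gives the same ordering, and the particular disutility numbers are my own assumptions for illustration):

```python
N = 10**30                  # stand-in for 3^^^3 people
torture_disutility = 1e6    # assumed total disutility of 50 years of torture
speck_disutility = 1e-6     # assumed disutility of one dust speck

avg_if_torture = torture_disutility / N     # one victim averaged over N people
avg_if_specks = (N * speck_disutility) / N  # every one of N people gets a speck
# avg_if_torture (~1e-24) is far below avg_if_specks (~1e-6), so the
# average utilitarian still picks torture - unlike in the Repugnant
# Conclusion, where averaging blocks the conclusion.
```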
Although I agree that he got the Amanda Knox thing right, I don't think it actually counts as a prediction - he wasn't saying whether or not the jury would find her innocent the second time round, and as far as I can tell no new information came out after Eliezer made his call.
Oh, I see your argument now (not that I think it's decisive enough to make your interpretation "clearly" the correct one, but, you know, whatever) - notice, though, that there was no way I could have guessed it from the great^3-grandparent. I would have said that's why you were downvoted initially, but looking through your comment history it's quite possible there is someone automatically downvoting your comments regardless of content, in which case I really don't know what to tell you. Sorry about that.
Okay, seriously, how strong do you think the groupthink effect could possibly be on the question of whether Harry's dark side is a piece of Voldemort's soul in HPMOR? For the record I think you were probably downvoted for claiming that something was "clearly" implied when I (and so presumably others) can't see how it's implied at all (and I still can't see it, having read the comment which is apparently supposed to make it clear, and which wasn't, incidentally, linked to in the great-grandparent), and then downvoted further when you decided to insult everyone.
1-2-3-4-5-6 is a Schelling point for overt tampering with a lottery.
I don't think that's true. If you were going to tamper with the lottery, isn't your most likely motive that you want to win it? Why, then, set it up in such a way that you have to share the prize with the thousands of other people who play those numbers?
Eliezer's article is actually quite long, and not the only article he's written on the subject on this site - it seems uncharitable to decide that "Huh?" is somehow the most crucial part of it. Also, whether or not there is widespread consensus that science can in principle say nothing about subjective phenomenology, there is certainly no such consensus amongst reductionists - it simply wouldn't be very reductionist, would it?
You happen to have carved out a small portion of the Internet, a medium that aside from porn is primarily for pirates vs. ninjas debates, and declared it's for some other purpose. That doesn't mean you're allowed to be surprised when pirates vs. ninjas debates happen.
Is he allowed to be surprised when lesswrong porn happens?
Oh right. Slightly careless reading. Sorry about that.
Did Eliezer say that Lucius interrogated Draco himself? I can't find it - I had assumed it was aurors, who in the course of investigating this particular crime would have no reason even to mention Harry's name.
And I believe he was interrogated by aurors investigating this crime - in which Harry was not involved - not by Malfoy.
Marriage at eleven is inappropriate.