Posts

Seeking education 2012-02-17T00:55:11.064Z
The hundred-room problem 2012-01-21T18:12:29.719Z
How to un-kill your mind - maybe. 2012-01-19T06:36:25.414Z

Comments

Comment by APMason on To like, or not to like? · 2017-10-27T00:36:33.045Z · LW · GW

Shakespeare is good tho

Comment by APMason on P/S/A - Sam Harris offering money for a little good philosophy · 2013-09-02T22:47:56.485Z · LW · GW

Well, now he has another reason not to change his mind. Seems unwise, even if he's right about everything.

Comment by APMason on Semi-open thread: blackmail · 2013-07-29T22:10:15.746Z · LW · GW

What does "an action that harms another agent" mean? For instance, if I threaten not to give you a chicken unless you give me $5, does "I don't give you a chicken" count as "a course of action that harms another agent"? Or does it have to be an active course, rather than an act of omission?

It's not blackmail unless, given that I don't give you $5, you would be worse off, CDT-wise, not giving me the chicken than giving me the chicken. Which is to say, you really want to give me the chicken but you're threatening to withhold it because you think you can make $5 out of it. If I were a Don't-give-$5-bot, or just broke, you would have no reason to threaten to withhold the chicken. If you don't want to give me the chicken, but are willing to do so if I give you $5, that's just normal trade.
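
To make the distinction concrete, here's a toy sketch (the function and payoff numbers are mine, purely illustrative):

```python
# Illustrative sketch: classify a threat as blackmail vs. trade by the
# threatener's payoffs conditional on NOT being paid. Names and numbers
# are hypothetical, not from any formal treatment.

def is_blackmail(payoff_withhold: float, payoff_give: float) -> bool:
    """Blackmail iff, given no payment, withholding the chicken is the
    worse option for the threatener - i.e. the threat is costly to
    carry out and only exists to extract the $5."""
    return payoff_withhold < payoff_give

# Blackmailer: really wants to hand over the chicken anyway, so the
# threat to withhold it is a bluff they'd rather not carry out.
print(is_blackmail(payoff_withhold=0, payoff_give=10))   # True

# Trader: genuinely prefers keeping the chicken unless paid - offering
# it for $5 is just normal trade.
print(is_blackmail(payoff_withhold=10, payoff_give=0))   # False
```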

Comment by APMason on AI box: AI has one shot at avoiding destruction - what might it say? · 2013-01-23T01:30:48.886Z · LW · GW

"Wanna see something cool?"

Comment by APMason on September 2012 Media Thread · 2012-09-09T03:16:45.421Z · LW · GW

Bob Dylan's new album ("Tempest") is perfect. At the time of posting, you can listen to it free on the iTunes Store. I suggest you do so.

On another note, I'm currently listening to all the Miles Davis studio recordings and assembling my own best-of list. It'll probably be complete by next month, and I'll be happy to share the playlist with anyone who's interested.

Comment by APMason on August 2012 Media Thread · 2012-08-17T01:12:11.215Z · LW · GW

Thomas Bergersen is just wonderful. Also, I've been listening to a lot of Miles Davis (I'm always listening to a lot of Miles Davis, but I haven't posted in one of these threads before). I especially recommend In a Silent Way.

Comment by APMason on August 2012 Media Thread · 2012-08-13T01:07:44.060Z · LW · GW

> Murakami is still the only currently living master of magical realism

Salman Rushdie. Salman Rushdie Salman Rushdie Salman Rushdie. Salman Rushdie.

Comment by APMason on August 2012 Media Thread · 2012-08-13T01:06:23.746Z · LW · GW

If you haven't read much other Italo Calvino, "Invisible Cities" is really, really, really great.

Comment by APMason on August 2012 Media Thread · 2012-08-13T01:02:18.246Z · LW · GW

I have to say, as a more-or-less lifelongish fan of Oscar Wilde (first read "The Happy Prince" when I was eight or nine), that the ending to Earnest is especially weak. I like the way he builds his house of cards in that play, and I like the dialogue, but (and I think I probably speak for a lot of Wilde fans here) the way he knocks the cards down really isn't all that clever or funny. For a smarter Wilde play, see "A Woman of No Importance", although his best works are his children's stories, "The Picture of Dorian Gray", and "The Ballad of Reading Gaol" (although it is not, in fact, the case that "Every man kills the thing he loves").

(Also I should mention that I recently reread "The Code of the Woosters" and laughed myself inside-out.)

Comment by APMason on The Problem Of Apostasy · 2012-07-19T21:32:06.578Z · LW · GW

> You sure about this?

Nope, not sure at all.

Comment by APMason on The Problem Of Apostasy · 2012-07-19T14:22:40.322Z · LW · GW

I don't think that question's going to give you the information you want - when in the last couple of thousand years, if Jews had wanted to stone apostates to death, would they have been able to do it? The diasporan condition doesn't really allow it. I think Christianity really is the canonical example of the withering away of religiosity - and that happened through a succession of internal revolutions ("In Praise of Folly", Lutheranism, the English Reformation, etc.) which themselves happened for a variety of reasons, not all pure or based in rationality (Henry VIII's split with Rome, for example), but which had the effect of demystifying the church and thereby shrinking the domain of its influence. I think. Although it's hard to interpret the Enlightenment as a movement internal to Christianity, so this only gets you so far, I suppose.

Comment by APMason on Real World Solutions to Prisoners' Dilemmas · 2012-07-03T14:48:06.920Z · LW · GW

I agree with pretty much everything you've said here, except:

> You only cooperate if you expect your opponent to cooperate if he expects you to cooperate ad nauseum.

You don't actually need to continue this chain - if you're playing against any opponent which cooperates iff you cooperate, then you want to cooperate - even if the opponent would also cooperate against someone who cooperated no matter what, so your statement is also true without the "ad nauseum" (provided the opponent would defect if you defected).
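
A toy calculation of the point (the payoffs are the usual illustrative Prisoner's Dilemma values, nothing canonical):

```python
# Best responses against two opponents: one who cooperates iff I
# cooperate, and one who cooperates unconditionally. Payoff numbers
# are standard illustrative PD values.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cooperates_iff_i_do(my_move):
    return my_move            # mirrors my choice

def cooperates_always(my_move):
    return "C"                # cooperates no matter what

for opponent in (cooperates_iff_i_do, cooperates_always):
    best = max(("C", "D"), key=lambda m: PAYOFF[(m, opponent(m))])
    print(opponent.__name__, "-> best response:", best)
# cooperates_iff_i_do -> best response: C  (3 beats 1)
# cooperates_always   -> best response: D  (5 beats 3)
```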

Comment by APMason on Thoughts on moral intuitions · 2012-07-01T13:50:49.543Z · LW · GW

What sort of examples can you bring up of custom marital contracts that would make people scream in horror? My guess is that people would generally feel queasy about allowing legal enforcement of what looks like slavish or abusive relationships. I think this would be a genuine cause for concern, not because I don't think that people should be able to enter whatever relationships please them in principle, but because in practice I'm concerned about people being coerced into signing contracts harmful to themselves. Not sure where I'd draw the line exactly; this is probably a Hard Problem.

Remember that "enforcing contracts" could mean two things. It could mean that the government steps in and makes the parties do what they said they would - it keeps whipping them until they follow through. It could also mean punishing the parties for damage done on the other end when they breach the contract. For example, in a world in which prostitution is legal, X proposes to pay Y for sex. Y accepts. X hands over the money. Y refuses to have sex with X. The horrific version of this is the government comes in and "enforces" the contract... by holding down Y and, well, yeah. The alternative is the government comes in, sees that Y has taken money from X by fraud, and punishes Y the same way it would punish any other thief. The second option is, I think, both more intuitive and less massively disturbing.

Comment by APMason on A (small) critique of total utilitarianism · 2012-06-27T14:29:13.027Z · LW · GW

Thank you. I had expected the bottom to drop out of it somehow.

EDIT: Although come to think of it I'm not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e. there would not be a difference between "kill 1 person, prevent 1000 mutilations" + "kill 1 person, prevent 1000 mutilations" and "kill 2 people, prevent 2000 mutilations"). Will have to think on it some more.

Comment by APMason on A (small) critique of total utilitarianism · 2012-06-27T13:41:58.506Z · LW · GW

Can anyone explain what goes wrong if you say something like, "The marginal utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?
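
To make the proposal concrete, a sketch with made-up constants (not a claim about anyone's actual utility function):

```python
import math

# Hypothetical bounded disutility curves: u(n) = A * (1 - exp(-n/k)),
# increasing in n but never exceeding the asymptote A.
def u(n, asymptote, scale=1.0):
    return asymptote * (1 - math.exp(-n / scale))

A_TORTURE, A_SPECK = 1000.0, 1.0   # torture's asymptote is much higher

n_specks = 1e300   # stand-in for 3^^^3, which overflows floats anyway
print(u(n_specks, A_SPECK))   # ~1.0 - pinned at the speck asymptote
print(u(1, A_TORTURE))        # ~632.1 - one torture already exceeds it
```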

Comment by APMason on A (small) critique of total utilitarianism · 2012-06-27T13:07:58.919Z · LW · GW

That last sentence didn't make sense to me when I first looked at this. Think you must mean "worse", not "better".

Comment by APMason on Problematic Problems for TDT · 2012-06-25T15:40:53.628Z · LW · GW

This variation of the problem was invented in the follow-up post (I think it was called "Sneaky Strategies for TDT" or something like that):

Omega tells you that earlier it flipped a coin. If the coin came down heads, it simulated a CDT agent facing this problem. If the coin came down tails, it simulated a TDT agent facing this problem. In either case, if the simulated agent one-boxed, there is $1000000 in Box-B; if it two-boxed, Box-B is empty. In this case TDT still one-boxes (a 50% chance of $1000000 dominates a 100% chance of $1000), and CDT still two-boxes (because that's what CDT does). Here, even though both agents have an equal chance of being simulated, CDT outperforms TDT (average payoffs of 501000 vs. 500000) - CDT takes advantage of TDT's prudence and TDT suffers for CDT's lack of it. Notice also that TDT cannot do better by behaving like CDT (both would get payoffs of 1000). This shows that the class of problems we're concerned with is not so much "fair" vs. "unfair", but more like "those problems on which the best I can do is not necessarily the best anyone can do". We can call it "fairness" if we want, but it's not like Omega is discriminating against TDT in this case.
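
The payoff arithmetic can be checked directly (a quick sketch, with Box-A at the standard $1000):

```python
# Expected payoffs in the coin-flip simulation problem. Box-B holds
# $1,000,000 iff the simulated agent one-boxed; Box-A always holds
# $1,000. The simulated TDT one-boxes, the simulated CDT two-boxes,
# and each is simulated with probability 0.5.

BOX_A, BOX_B = 1_000, 1_000_000
p_sim_tdt = p_sim_cdt = 0.5

def payoff(i_one_box, sim_one_boxed):
    b = BOX_B if sim_one_boxed else 0
    return b if i_one_box else b + BOX_A

# Real TDT one-boxes; Box-B is full only when the TDT sim was run.
ev_tdt = p_sim_tdt * payoff(True, True) + p_sim_cdt * payoff(True, False)
# Real CDT two-boxes regardless of anything.
ev_cdt = p_sim_tdt * payoff(False, True) + p_sim_cdt * payoff(False, False)

print(ev_tdt, ev_cdt)   # 500000.0 501000.0
```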

Comment by APMason on Problematic Problems for TDT · 2012-06-24T22:08:30.734Z · LW · GW

> Wait a minute, what exactly do you mean by "you"? TDT? or "any agent whatsoever"? If it's TDT alone, why? If I read you correctly, you already agree that it's not because Omega said "running TDT" instead of "running WTF-DT". If it's "any agent whatsoever", then are you really sure the simulated and real problem aren't actually the same? (I'm sure they aren't, but, just checking.)

Well, no, this would be my disagreement: it's precisely because Omega told you that the simulated agent is running TDT that only a TDT agent could be the simulation; the simulated and real problems are, for all intents and purposes, identical (Omega doesn't actually need to put a reward in the simulated boxes, because he doesn't need to reward the simulated agent, but both problems appear exactly the same to the simulated and real TDT agents).

Comment by APMason on Problematic Problems for TDT · 2012-06-24T20:59:33.848Z · LW · GW

Well, in the problem you present here TDT would two-box, but you've avoided the hard part of the problem from the OP, in which there is no way to tell whether you're in the simulation or not (or at least there is no way for the simulated you to tell), unless you're running some algorithm other than TDT.

Comment by APMason on Review: Selfish Reasons to Have More Kids · 2012-06-22T03:39:21.521Z · LW · GW

"Father figure" seems to me to permit either position, "father" not so much. It's always troublesome when someone declares that you can only be properly impartial by agreeing with them.

Comment by APMason on Debate between 80,000 hours and a socialist · 2012-06-08T23:52:00.097Z · LW · GW

> For the iterated 2-player Prisoner's Dilemma, you cooperate when the other player cooperates, and defect when the other player defects. Always cooperating is not the best strategy; you need to respond to the other player's actions.

Actually you only cooperate if the other player would defect if you didn't cooperate. If they cooperate no matter what, defect.

Comment by APMason on Suggestion: Less Wrong Writing Circle? · 2012-06-05T00:47:48.851Z · LW · GW

I guess so, although looking at it now Elcenia seems to be pretty massive. It will take me a couple of weeks to catch up at least (unless it's exceptionally compelling, in which case damn you in advance for taking up all my time), and we also have to allow for the possibility that it's not just my kind of thing, in which case trying to finish it will make me miserable and I won't be much use to you anyway. But sure, I'll give it a shot.

Comment by APMason on Suggestion: Less Wrong Writing Circle? · 2012-06-04T23:22:14.620Z · LW · GW

Sent.

Comment by APMason on Suggestion: Less Wrong Writing Circle? · 2012-06-04T22:46:47.209Z · LW · GW

Okay, I wrote up my thoughts, but it's pretty long and I'm not sure it's fair to post it here (also it's too long for a PM). Do you have an email I can send it to?

Comment by APMason on A plan for Pascal's mugging? · 2012-06-04T12:49:34.488Z · LW · GW

What happens if you're using this method and you're offered a gamble where you have a 49% chance of gaining 1,000,000 utils and a 51% chance of losing 5 utils (if you don't take the deal you gain and lose nothing)? Isn't the "typical outcome" here a loss, even though we might really really want to take the gamble? Or have I misunderstood what you propose?
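
To spell out the numbers (reading "typical outcome" as the median, which may not be what's proposed):

```python
# The gamble above: 49% chance of +1,000,000 utils, 51% chance of
# -5 utils. Expected value is hugely positive, but the median
# ("typical") outcome is a small loss.

outcomes = [(0.49, 1_000_000), (0.51, -5)]

expected = sum(p * v for p, v in outcomes)
median = -5   # the 50th-percentile outcome, since P(-5) = 0.51 > 0.5

print(expected)   # 489997.45
print(median)     # -5: a typical-outcome rule seems to refuse the bet
```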

Comment by APMason on Suggestion: Less Wrong Writing Circle? · 2012-06-03T20:57:44.681Z · LW · GW

I might be interested in giving a fuller critique of this at some point (but then who the hell am I), but for now I'll confine myself to just one point:

> It was, of course, a highly ceremonial occasion...

The reader knows that the narrator knows more about this world than they do. The reader is okay with that. Trying to impart information by pretending that the reader already knows it seems clumsy and distracting to me. Compare with:

> It was a highly ceremonial occasion, excruciatingly ritualized, and he was bored.

I think this is fine. No need to pretend you're the reader's chum.

Comment by APMason on Sneaky Strategies for TDT · 2012-05-27T14:13:56.583Z · LW · GW

I think the clearest and simplest version of Problem 1 is where Omega chooses to simulate a CDT agent with .5 probability and a TDT agent with .5 probability. Let's say that Value-B is $1000000, as is traditional, and Value-A is $1000. TDT will one-box for an expected value of $500000 (as opposed to $1000 if it two-boxes), and CDT will always two-box, and receive an expected $501000. Both TDT and CDT have an equal chance of playing against each other in this version, and an equal chance of playing against themselves, and yet CDT still outperforms. It seems TDT suffers for CDT's irrationality, and CDT benefits from TDT's rationality. Very troubling.

EDIT: (I will note, though, that a TDT agent still can't do any better by two-boxing - only make CDT do worse).

Comment by APMason on A novel approach to axiomatic decision theory · 2012-05-27T01:33:11.360Z · LW · GW

Hmm, if I've understood this correctly, it's the way I've always thought about decision theory for as long as I've had a concept of expected utility maximisation. Which makes me think I must have missed some important aspect of the ex post version.

Comment by APMason on Welcome to Less Wrong! (2012) · 2012-05-24T18:00:12.456Z · LW · GW

I'm not sure whether it is the case that primitive cultures have a category of things they think of as "supernatural" - the pagan gods were certainly quite literal: they lived on Olympus, they mated with humans, they were birthed. I wonder whether the distinction between "natural" and "supernatural" only comes about when it becomes clear that gods don't belong in the former category.

Comment by APMason on Welcome to Less Wrong! (2012) · 2012-05-24T17:42:42.113Z · LW · GW

> And that's without even getting into my experiences, or those close to me.

Well, don't be coy. There's no point in withholding your strongest piece of evidence. Please, get into it.

Comment by APMason on Problematic Problems for TDT · 2012-05-24T10:47:29.494Z · LW · GW

> Why is it important for a decision theory to pass fair tests but not unfair tests?

Well, on unfair tests a decision theory still needs to do as well as possible. If we had a version of the original Newcomb's problem, with the one difference that a CDT agent gets $1billion just for showing up, it's still incumbent upon a TDT agent to walk away with $1000000 rather than $1000. The "unfair" class of problems is that class where "winning as much as possible" is distinct from "winning the most out of all possible agents".

Comment by APMason on Problematic Problems for TDT · 2012-05-23T17:19:14.029Z · LW · GW

Why are we not counting philosophers? Isn't that like saying, "Not counting physicists, where's this supposed interest in gravity?"

Comment by APMason on Problematic Problems for TDT · 2012-05-23T16:22:55.801Z · LW · GW

Well, I've had a think about it, and I've concluded that it would matter how great the difference between TDT and TDT-prime is. If TDT-prime is almost the same as TDT, but has an extra stage in its algorithm in which it converts all dollar amounts to yen, it should still be able to prove that it is isomorphic to Omega's simulation, and therefore will not be able to take advantage of "logical separation".

But if TDT-prime is different in a way that makes it non-isomorphic, i.e. it sometimes gives a different output given the same inputs, that may still not be enough to "separate" them. If TDT-prime acts the same as TDT, except when there is a walrus in the vicinity, in which case it tries to train the walrus to fight crime, it is still the case in this walrus-free problem that it makes exactly the same choice as the simulation (?). It's as if you need the ability to prove that two agents necessarily give the same output for the particular problem you're faced with, without proving what output those agents actually give, and that sure looks crazy-hard.

EDIT: I mean crazy-hard for the general case, but much, much easier for all the cases where the two agents are actually the same.

EDIT 2: On the subject of fairness, my first thoughts: A fair problem is one in which if you had arrived at your decision by a coin flip (which is as transparently predictable as your actual decision process - i.e. Omega can predict whether it's going to come down heads or tails with perfect accuracy), you would be rewarded or punished no more or less than you would be using your actual decision algorithm (and this applies to every available option).

EDIT 3: Sorry to go on like this, but I've just realised that won't work in situations where some other agent bases their decision on whether you're predicting what their decision will be, e.g. the Prisoner's Dilemma.

Comment by APMason on Problematic Problems for TDT · 2012-05-23T14:39:16.763Z · LW · GW

Hmm, so TDT-prime would reason something like, "The TDT simulation will one-box because, not knowing that it's the simulation, but also knowing that the simulation will use exactly the same decision theory as itself, it will conclude that the simulation will do the same thing as itself and so one-boxing is the best option. However, I'm different to the TDT-simulation, and therefore I can safely two-box without affecting its decision." In which case, does it matter how inconsequential the difference is? Yep, I'm confused.

Comment by APMason on Problematic Problems for TDT · 2012-05-23T14:28:04.607Z · LW · GW

> You can see that something funny has happened by postulating TDT-prime, which is identical to TDT except that Omega doesn't recognize it as a duplicate (e.g., it differs in some way that should be irrelevant). TDT-prime would two-box, and win.

I don't think so. If TDT-prime two-boxes, the TDT simulation two-boxes, so only one box is full, so TDT-prime walks away with $1000. Omega doesn't check what decision theory you're using at all - it just simulates TDT and bases its decision on that. I do think that this ought to fall outside a rigorously defined class of "fair" problems, but it doesn't matter whether Omega can recognise you as a TDT agent or not.

Comment by APMason on Problematic Problems for TDT · 2012-05-23T12:28:15.552Z · LW · GW

> this problem is the reason why decision theories have to be non-deterministic. It comes up all the time in real life: I try and guess what safe combination you chose, try that combination, and if it works I take all your money.

Of course, you can just set up the thought experiment with the proviso that "be unpredictable" is not a possible move - in fact that's the whole point of Omega in these sorts of problems. If Omega's trying to break into your safe, he takes your money. In Nesov's problem, if you can't make yourself unpredictable, then you win nothing - it's not even worth your time to open the box. In both cases, a TDT agent does strictly as well as it possibly could - the fact that there's $100 somewhere in the vicinity doesn't change that.

Comment by APMason on If epistemic and instrumental rationality strongly conflict · 2012-05-12T12:26:40.440Z · LW · GW

Okay, so there's no such thing as jackalopes. Now I know.

Comment by APMason on Survey of older folks as data about one's future values and preferences? · 2012-04-28T16:53:58.958Z · LW · GW

> At a certain point the psychological quality of life of living individuals that comes from living in a society with a certain structure and values may trump the right of individuals who thought they were dead to live once more.

This is vague. Can you pinpoint exactly why you think this would damage people's psychological quality of life?

Comment by APMason on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-27T21:49:50.044Z · LW · GW

> If information cannot travel back more than six hours

This does seem to be a constraint that exclusively affects the time-turners. Otherwise prophecies wouldn't be possible. It also seems like it's an artificial rule rather than a deep law of magic, because after the Stanford Prison experiment, Bones tells Dumbledore that she has information from four hours in the future and asks whether he'd like to know it. That there is relevant information from four hours in the future is itself information from the future - she would not have said that if it were otherwise, so it seems there must be exemptions of that kind.

Alternative hypothesis: prophecies are jive, and Eliezer didn't think of the other thing.

Comment by APMason on Please Don't Fight the Hypothetical · 2012-04-20T20:05:27.815Z · LW · GW

> Edit: In other words, I think Torture v. Specks is just a restatement of the Repugnant Conclusion.

The Repugnant Conclusion can be rejected by average-utilitarianism, whereas in Torture vs. Dustspecks average-utilitarianism still tells you to torture, because the disutility of 50 years of torture divided among 3^^^3 people is less than the disutility of 3^^^3 dustspecks divided among 3^^^3 people. That's an important structural difference to the thought experiment.
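
The arithmetic behind that, as a sketch (placeholder utilities, with a merely-huge N standing in for 3^^^3):

```python
from fractions import Fraction

U_TORTURE = Fraction(10**9)   # hypothetical disutility of one torture
U_SPECK = Fraction(1)         # hypothetical disutility of one speck
N = 10**100                   # stand-in for 3^^^3 people

avg_torture = U_TORTURE / N      # one torture, averaged over everyone
avg_specks = (U_SPECK * N) / N   # one speck each, averaged over everyone

print(avg_torture < avg_specks)  # True: average utilitarianism still says torture
```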

Comment by APMason on A question about Eliezer · 2012-04-20T07:25:11.591Z · LW · GW

Although I agree that he got the Amanda Knox thing right, I don't think it actually counts as a prediction - he wasn't saying whether or not the jury would find her innocent the second time round, and as far as I can tell no new information came out after Eliezer made his call.

Comment by APMason on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T14:46:16.613Z · LW · GW

Oh, I see your argument now (not that I think it's decisive enough to make your interpretation "clearly" the correct one, but, you know, whatever) - notice though that there was no way I could have guessed it from the great^3-grandparent. I would have said that's why you were downvoted initially, but looking through your comment history it's quite possible there is someone automatically downvoting your comments regardless of content, in which case I really don't know what to tell you. Sorry about that.

Comment by APMason on Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85 · 2012-04-18T14:10:29.889Z · LW · GW

Okay, seriously, how strong do you think the groupthink effect could possibly be on the question of whether Harry's dark side is a piece of Voldemort's soul in HPMOR? For the record I think you were probably downvoted for claiming that something was "clearly" implied when I (and so presumably others) can't see how it's implied at all (and I still can't see it, having read the comment which is apparently supposed to make it clear, and which wasn't, incidentally, linked to in the great-grandparent), and then downvoted further when you decided to insult everyone.

Comment by APMason on Rationality Quotes April 2012 · 2012-04-04T11:52:35.397Z · LW · GW

> 1-2-3-4-5-6 is a Schelling point for overt tampering with a lottery.

I don't think that's true. If you were going to tamper with the lottery, isn't your most likely motive that you want to win it? Why, then, set it up in such a way that you have to share the prize with the thousands of other people who play those numbers?

Comment by APMason on Zombies! Zombies? · 2012-04-02T14:52:54.587Z · LW · GW

Eliezer's article is actually quite long, and not the only article he's written on the subject on this site - it seems uncharitable to decide that "Huh?" is somehow the most crucial part of it. Also, whether or not there is widespread consensus that science can in principle say nothing about subjective phenomenology, there is certainly no such consensus amongst reductionists - it simply wouldn't be very reductionist, would it?

Comment by APMason on SotW: Check Consequentialism · 2012-03-29T23:44:21.358Z · LW · GW

> You happen to have carved out a small portion of the Internet, a medium that aside from porn is primarily for pirates vs. ninjas debates, and declared it's for some other purpose. That doesn't mean you're allowed to be surprised when pirates vs. ninjas debates happen.

Is he allowed to be surprised when lesswrong porn happens?

Comment by APMason on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-03-29T23:19:20.765Z · LW · GW

Oh right. Slightly careless reading. Sorry about that.

Comment by APMason on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-03-29T22:10:07.042Z · LW · GW

Did Eliezer say that Lucius interrogated Draco himself? I can't find it - I had assumed it was aurors, who in the course of investigating this particular crime would have no reason even to mention Harry's name.

Comment by APMason on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-03-29T21:46:22.502Z · LW · GW

And I believe he was interrogated by aurors investigating this crime - in which Harry was not involved - not by Malfoy.

Comment by APMason on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-03-29T21:42:35.427Z · LW · GW

Marriage at eleven is inappropriate.