Comment by mavant on An overview of the mental model theory · 2015-08-24T01:22:24.839Z · score: 2 (2 votes) · LW · GW

The fact that it's the same phrasing used in the literature is really concerning, because it means the interpretation the literature gives is wrong: Many subjects may in fact be generating a mental model (based on deductive reasoning, no less!) which is entirely compatible with the problem-as-stated and yet which produces a different answer than the one the researchers expected.

One could certainly write '(Ace is present OR King is present) XOR (Queen is present OR Ace is present)', which reduces to '(King is present XOR Queen is present) AND (Ace is not present)' - but that gives the game away a bit, as perhaps it should! The fact that phrasing the knowledge formally rather than in ad-hoc English makes the correct answer so much more obvious is a strong indicator that this is a deficiency in the original researchers' grasp of idiomatic English, not in their research subjects' grasp of logic.
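The reduction is small enough to verify exhaustively. A minimal sketch in Haskell, checking all eight combinations of king/queen/ace presence (the `xor` helper is just inequality on Bools):

```haskell
-- Exhaustive truth-table check of the reduction above.
xor :: Bool -> Bool -> Bool
xor = (/=)

-- For every hand, "(king or ace) XOR (queen or ace)" holds exactly when
-- the hand contains no ace and exactly one of king or queen.
check :: Bool
check = and
  [ ((k || a) `xor` (q || a)) == ((k `xor` q) && not a)
  | k <- [False, True], q <- [False, True], a <- [False, True] ]

main :: IO ()
main = print check  -- True
```

Note the ace drops out entirely: if the ace were present, both disjunctions would be true and the XOR would fail.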

It's difficult for me to look at the problem with fresh eyes, so I can't be entirely certain whether the added 'black box' note helps. It doesn't look helpful.

What would really help is a physical situation in which the propositional-logic reading of the statements is the only correct interpretation. Luckily, there is a common silly-logic-puzzle trope that evokes this:

The dealer-robot has two heads, one of which always lies and one of which always tells the truth. You don't know which is which. After dealing the hand, but before showing it to you, the robot dealer takes a peek.

One of the robot's heads has told you that the dealt hand contains either a king or an ace (or both).

The robot's other head has told you that the dealt hand contains either a queen or an ace (or both).

Comment by mavant on Predicted corrigibility: pareto improvements · 2015-08-23T19:46:14.823Z · score: 0 (0 votes) · LW · GW

Third obvious possibility: B maximises u~Σp_iv_i, subject to the constraints E(Σp_iv_i|B) ≥ E(Σp_iv_i|A) and E(u|B) ≥ E(u|A), where ~ is some simple combining operation like addition or multiplication, or "the product of the two operands divided by their sum".

I think these possibilities all share the problem that the constraint makes it essentially impossible to choose any action other than the one A would have chosen. If A chose the action that maximized u, then B cannot choose any other action while satisfying the constraint E(u|B) ≥ E(u|A), unless multiple actions had the exact same payoff (which seems unlikely if payoff values are distributed over the reals rather than over a finite set). And the first possibility (maximizing u while respecting E(Σp_iv_i|B) ≥ E(Σp_iv_i|A)) just results in choosing the exact same action as A would have chosen, even if another action has an identical E(u) AND a higher E(Σp_iv_i).
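The pinning effect can be sketched with a toy example (all action names and payoffs here are hypothetical, and the finite action set is just for illustration): once B is constrained to E(u|B) ≥ E(u|A), only A's own u-maximizing action survives the filter when payoffs are distinct reals.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Each action carries its expected value under u and under Σp_iv_i.
actions :: [(String, Double, Double)]  -- (name, E(u), E(Σp_iv_i))
actions =
  [ ("a1", 10.0, 1.0)  -- A's pick: maximizes E(u)
  , ("a2",  9.9, 5.0)  -- slightly worse E(u), better E(Σp_iv_i)
  , ("a3",  9.5, 9.0)  -- worse still on u, much better on the v_i
  ]

-- A maximizes u outright.
aChoice :: (String, Double, Double)
aChoice = maximumBy (comparing (\(_, u, _) -> u)) actions

-- B may only consider actions satisfying E(u|B) >= E(u|A); with distinct
-- real-valued payoffs, that leaves exactly A's own action.
bCandidates :: [(String, Double, Double)]
bCandidates = [ act | act@(_, u, _) <- actions, u >= uA ]
  where (_, uA, _) = aChoice

main :: IO ()
main = print (map (\(n, _, _) -> n) bCandidates)  -- ["a1"]
```

Only an exact tie on E(u) would ever let a second action into bCandidates, which is the measure-zero case noted above.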

Comment by mavant on An overview of the mental model theory · 2015-08-23T18:51:41.358Z · score: 0 (0 votes) · LW · GW

The Ace is in both statements and both statements cannot be true as per the requirement.

No.

deal :: IO CardHand
deal = do
  x <- randomBoolean
  if x
    then generateHandsContainingEitherOrBothOf (King, Ace)
    else generateHandsContainingEitherOrBothOf (Queen, Ace)

Asking a trick question and then insisting on a particular reading does not constitute evidence of a logical fallacy being committed by the answerer.

Comment by mavant on How to escape from your sandbox and from your hardware host · 2015-08-23T18:39:25.845Z · score: 0 (0 votes) · LW · GW

1-3-2 in descending order of difficulty

Comment by mavant on No Universally Compelling Arguments in Math or Science · 2013-11-12T14:25:10.644Z · score: -2 (4 votes) · LW · GW

If Despotism failed only for want of a capable benevolent despot, what chance has Democracy, which requires a whole population of capable voters?

Comment by mavant on So You Want to Save the World · 2013-11-12T14:11:23.535Z · score: 0 (0 votes) · LW · GW

I don't really understand how this could occur in a TDT-agent. The agent's algorithm is causally dependent on '(max $5 $10), but considering the counterfactual severs that dependence. Observing a money-optimizer (let's call it B) choosing $5 over $10 would presumably cause the agent (call it A) to update its model of B to no longer depend on '(max $5 $10). Am I missing something here?

Comment by mavant on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-27T18:57:54.785Z · score: 3 (5 votes) · LW · GW

Don't know if this has been suggested before, but:

Possibility: Harry's "Father's rock" is the Resurrection Stone. Giving this one low probability, since it has thus far demonstrated no other magical properties, and just seems like a way to get Harry to grind his Transfiguration and mana stats.

Possibility: Harry's "Father's rock" is the Philosopher's Stone. Giving this one even lower probability.

Possibility: The Philosopher's Stone is actually the Resurrection stone, or a similar magical construct. Middling probability; Dumbledore refers to Flamel insisting "the Stone" be kept at Hogwarts, but never mentions the Philosopher's Stone; it seems quite plausible that all of the "Philosopher's Stone" rumors are in fact obfuscations about the true nature of the object, and that Flamel's wealth has more to do with his alchemical talents and his having had six centuries to accumulate capital than an actual ability to transmute base metals into gold.

Harry dismisses the possibility of the Philosopher's Stone far too readily, especially considering he already knows that magic, at least to some degree, works the way you (or the creator of a spell) believe(s) it will work, AND knows that fruit which seems low-hanging to him is obviously not so to the rest of the magical world. This smells a little bit idiot-ball-ish to me, even if he is correct.

Comment by mavant on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-27T18:27:51.532Z · score: 0 (2 votes) · LW · GW

Can't be Harry's blood; at age eleven he's certainly got less than 3 litres (if he weighs ~80 pounds), possibly little more than two (can't recall if HJPEV is as skinny as Canon!HP). If you cut off a limb, he might have as much as one litre "spill" out, but the rest would just sort of... dribble in spurts.
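For the back-of-envelope version (assuming the common ~70 mL of blood per kg of body weight figure; the 80 lb weight is my own guess):

```haskell
-- Rough blood-volume estimate from body weight in pounds.
poundsToKg :: Double -> Double
poundsToKg = (* 0.4536)

bloodLitres :: Double -> Double
bloodLitres lb = poundsToKg lb * 0.070  -- ~70 mL/kg = 0.070 L/kg

main :: IO ()
main = print (bloodLitres 80)  -- roughly 2.5
```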

Comment by mavant on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-27T18:08:53.936Z · score: 3 (3 votes) · LW · GW

It's a shame you retracted this, because I wanted to +1 it.

Comment by mavant on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-27T18:06:22.813Z · score: 0 (0 votes) · LW · GW

That ritual required quite a number more components... But then, it didn't WORK, so perhaps Burgess and his order meant to perform the one Quirrell meant.

This is my headcanon, now.

Comment by mavant on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-25T20:48:54.579Z · score: 2 (2 votes) · LW · GW

At least one of the definitions is applicable to any arbitrary proposition. Either (1) it can be counterfeited, implying that there's no test you can perform to determine the true state of things, or (2) it can be tested to determine the true state of things.

Comment by mavant on Group Rationality Diary, July 16-31 · 2013-07-25T19:30:43.239Z · score: 6 (6 votes) · LW · GW

Today I had the health exam for the life insurance policy associated with my cryonic suspension contract.

Then I grabbed my best friend and girlfriend and repeatedly showed them clips from the Futurama episode where Fry's dog waits for years after Fry gets frozen, and Fry misses his dog in the future, and the dog misses Fry in the past, etc. They are now both awaiting insurance policy quotes for their own suspension contracts.

Comment by mavant on Group Rationality Diary, July 1-15 · 2013-07-04T19:39:26.999Z · score: 4 (4 votes) · LW · GW

No, but Sequences-related. I finished them a couple weeks ago, and it just seemed like the only choice that still made sense.

Comment by mavant on Harry Potter and the Methods of Rationality discussion thread, part 20, chapter 90 · 2013-07-03T12:08:41.413Z · score: 1 (3 votes) · LW · GW

So much win.

Comment by mavant on Group Rationality Diary, July 1-15 · 2013-07-03T12:07:03.463Z · score: 16 (16 votes) · LW · GW

I signed up for life insurance to pay for cryonics. I'm told it'll be about six weeks from today until I'm fully covered (and CI coverage should start the same day).

Comment by mavant on Group Rationality Diary, July 1-15 · 2013-07-03T12:00:28.507Z · score: 1 (1 votes) · LW · GW

For those who use public transit, Anki on the phone is life-changing. I'd advise keeping a small notepad with you in case you think of something to look up, check, add, or edit later - those are all inconvenient on the phone, especially if you're on the subway and can't get online at all.

Comment by mavant on Cryocrastinating? Send me (or someone else) money! · 2013-06-26T18:02:56.439Z · score: 0 (0 votes) · LW · GW

Any suggestions besides Rudi Hoffman for finding insurance policies? I requested a quote from him on Monday, but haven't yet heard back.

Comment by mavant on Cryocrastinating? Send me (or someone else) money! · 2013-06-25T08:36:25.605Z · score: 1 (1 votes) · LW · GW

I recently finished the Sequences, and I'm convinced about cryopreservation (well, convinced that it's a good idea; not 100% convinced it will work...) but I'm not sure what to do next.

Is there any known reason to sign up for Alcor vs Cryonics Institute (or some other org that I'm not familiar with)? I'm young (22) and healthy, if that matters.