Comments

Comment by findis on What rationality material should I teach in my game theory course · 2014-01-20T02:12:15.169Z · LW · GW

Yep. The most common model that yields a rational agent who will choose to restrict zir own future actions is beta-delta discounting, the standard model of time-inconsistent preferences. I've had problem sets with such questions, usually involving a student procrastinating on an assignment; I don't think I can copy them, but let me know if you want me to sketch out how such a problem might look.
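
If it helps, here is a toy version of that kind of procrastination problem, with numbers I made up for illustration (not from an actual problem set):

```python
# A toy beta-delta (quasi-hyperbolic) procrastination problem -- numbers are
# made up, just to show the preference reversal that makes commitment valuable.
# From the perspective of period t, a cost paid at period s is worth:
#   -cost                          if s == t
#   -beta * delta**(s - t) * cost  if s > t

beta, delta = 0.5, 1.0           # present bias and long-run discount factor
costs = {1: 4.0, 2: 6.0}         # the assignment gets more painful the later it's done

def utility(now, do_at, cost):
    gap = do_at - now
    return -cost if gap == 0 else -beta * delta**gap * cost

# On day 0 the student plans to work on day 1 (-2.0 beats -3.0)...
print({day: utility(0, day, c) for day, c in costs.items()})
# ...but when day 1 arrives, postponing looks better (-3.0 beats -4.0),
# so without a commitment device she procrastinates.
print({day: utility(1, day, c) for day, c in costs.items()})
```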

Actually, maybe the most instrumental-rationality-enhancing topics to cover that have legitimate game theoretic aspects are in behavioral economics. Perhaps you could construct examples where you contrast the behavior of an agent who interprets probabilities in a funny way, as in Prospect Theory, with an agent who obeys the vNM axioms.
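
For instance, something along these lines -- a rough sketch where I use the Tversky-Kahneman probability-weighting function as the "funny" interpretation of probabilities, with a linear value function and a made-up lottery as simplifications:

```python
# Rough contrast between a vNM expected-utility agent and a prospect-theory
# agent who distorts small probabilities. The weighting function and
# gamma = 0.61 are from Tversky & Kahneman (1992); the linear value function
# and the particular lottery are my own simplifications.

def w(p, gamma=0.61):
    """Probability weighting: small p is overweighted, large p underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

lottery = [(0.01, 1000.0), (0.99, 0.0)]        # a 1% chance of $1,000

vnm_value = sum(p * x for p, x in lottery)     # = 10.0
pt_value = sum(w(p) * x for p, x in lottery)   # ~= 55: the 1% chance is overweighted

print(vnm_value, pt_value)
```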

Comment by findis on What rationality material should I teach in my game theory course · 2014-01-20T02:02:59.921Z · LW · GW

The standard definition of "rationality" in economics is "having complete and transitive preferences", and sometimes "having complete and transitive preferences and adhering to the von Neumann-Morgenstern axioms". Not the way it's used on Less Wrong.

I think the really cool thing about game theory is how far you can go by stating the form of a game and deriving what someone will do, or the possible paths they may take, assuming only that they have rational preferences.

Comment by findis on Philosophical Landmines · 2013-02-17T03:56:00.951Z · LW · GW

Wouldn't a rational consequentialist estimate the odds that the policy will have unpredictable and harmful consequences, and take this into consideration?

Regardless of how well it works, consequentialism essentially underlies public policy analysis and I'm not sure how one would do it otherwise. (I'm talking about economists calculating deadweight loss triangles and so on, not politicians arguing that "X is wrong!!!")

Comment by findis on Welcome to Less Wrong! (July 2012) · 2013-01-04T06:50:44.516Z · LW · GW

Why is whether your decision actually changes the boxes important to you? [....] If you argue yourself into a decision theory that doesn't serve you well, you've only managed to shoot yourself in the foot.

In the absence of my decision affecting the boxes, taking one box and leaving $1000 on the table still looks like shooting myself in the foot. (Of course if I had the ability to precommit to one-box I would -- so, okay, if Omega ever asks me this I will take one box. But if Omega asked me to make a decision after filling the boxes and before I'd made a precommitment... still two boxes.)

I think I'm going to back out of this discussion until I understand decision theory a bit better.

Comment by findis on Welcome to Less Wrong! (July 2012) · 2013-01-04T05:55:55.695Z · LW · GW

Do you choose to hit me or not?

No, I don't, since you have a time-turner. (To be clear, non-hypothetical-me wouldn't hit non-hypothetical-you either.) I would also one-box if I thought that Omega's predictive power was evidence that it might have a time turner or some other way of affecting the past. I still don't think that's relevant when there's no reverse causality.

Back to Newcomb's problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I'd two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict.

I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes... but doesn't change the boxes.

Comment by findis on Counterfactual Mugging · 2013-01-04T04:31:02.329Z · LW · GW

you will achieve a net gain of $4950*p(x) over a non-committer (a very small number admittedly given that p(x) is tiny, but for the sake of the thought experiment all that matters is that it's positive.)

Given that someone who makes such a precommitment comes out ahead of someone who doesn't - shouldn't you make such a commitment right now?

Right now, yes, I should precommit to pay the $100 in all such situations, since the expected value is p(x)*$4950.

If Omega just walked up to me and asked for $100, and I had never considered this before, the value of this commitment is now p(x)*$4950 - $100, so I would not pay unless I thought there was more than a 2% chance this would happen again.
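
For the arithmetic behind that 2% figure:

```python
# Paying the $100 now only makes sense if the expected value of being a
# committed payer exceeds the cost:
#   p(x) * 4950 > 100   =>   p(x) > 100 / 4950
print(100 / 4950)   # ~0.0202, i.e. a bit over a 2% chance of facing this again
```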

Comment by findis on Welcome to Less Wrong! (July 2012) · 2013-01-02T00:59:04.235Z · LW · GW

The difference between this scenario and the one you posited before, where Ann's mom makes her prediction by reading your philosophy essays, is that she's presumably predicting on the basis of how she would expect you to choose if you were playing Omega.

Ok, but what if Ann's mom is right 99% of the time about how you would choose when playing her?

I agree that one-boxers make more money, with the numbers you used, but I don't think that those are the appropriate expected values to consider. Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both.
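
Concretely, the dominance reasoning I have in mind (using the standard $1,000,000 and $1,000 amounts as placeholders):

```python
# Box B always holds $1,000; box A holds either $1,000,000 or $0 (standard
# amounts, used here just as placeholders). Once the contents are fixed,
# two-boxing comes out exactly $1,000 ahead in either state of the world.
for box_a in (1_000_000, 0):
    one_box = box_a
    two_box = box_a + 1_000
    print(box_a, one_box, two_box)   # two_box - one_box == 1_000 in both cases
```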

Another analogous situation would be that you walk into an exam, and the professor (who is a perfect or near-perfect predictor) announces that he has written down a list of people whom he has predicted will get fewer than half the questions right. If you are on that list, he will add 100 points to your score at the end. The people who get fewer than half of the questions right get higher scores, but you should still try to get questions right on the test... right? If not, does the answer change if the professor posts the list on the board?

I still think I'm missing something, since a lot of people have thought carefully about this and come to a different conclusion from me, but I'm still not sure what it is. :/

Comment by findis on You can't signal to rubes · 2013-01-01T19:43:45.139Z · LW · GW

I think it is worth preserving a distinction between the specific kind of signaling Patrick describes and a weaker definition, because "true signaling" explains a specific phenomenon: in equilibrium, there seems to be too much effort expended on something, but everyone is acting in their own best interest. "High-quality" people do something to prove they are high quality, and "low-quality" people imitate this behavior. If education is a signal, people seem to get "too much" education for what their jobs require.

As in an exam problem I recently heard about: Female bullfrogs prefer large male bullfrogs. Large bullfrogs croak louder. In the dark, small bullfrogs croak loudly to appear large. To signal that they are the true large frogs, large ones croak even louder. When everyone is croaking as loudly as they can, croaking quietly makes a frog look incapable of croaking loudly and therefore small. Result: swamps are really noisy at night.

Or, according to this paper, people "expect a high-quality firm to undertake ambitious investments". Investment is a signal of quality: low-quality firms invest more ambitiously to look high-quality. Then high-quality firms invest more to prove they are the true high-quality firms. Result: firms over-invest.

In this sense, you can also signal that you are serious about a friendship, job, or significant other, but only where your resources are limited. An expensive engagement ring is a good signal of your seriousness -- hence expensive diamond engagement rings instead of cubic zirconia. Or, applying to college and sending a video of yourself singing the college's fight song is a good signal that you will attend if admitted, and writing a gushing essay is a cheap imitation of that signal of devotion. Hence, high school seniors look like they spend way too much effort telling colleges how devoted they are.

So you might use signaling to explain why "too many" people get "useless" degrees studying classics, or why swamps are "too loud", or why engagement rings are "too expensive". I don't think it's true that too many people pretend to be Republicans, or that too many birthday cards are sent.
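
To pin down where the "too much" comes from, here is a bare-bones Spence-style education example, with numbers I made up purely for illustration:

```python
# Bare-bones Spence-style education signaling with made-up numbers, just to
# show where "too much" effort comes from in equilibrium.
w_high, w_low = 100.0, 60.0   # wages paid to workers believed to be high/low quality
c_high, c_low = 1.0, 2.0      # per-unit cost of education (cheaper for high types)

# Smallest education level s* that low types won't bother to imitate:
#   w_high - c_low * s <= w_low   =>   s >= (w_high - w_low) / c_low
s_star = (w_high - w_low) / c_low
print(s_star)                 # 20.0 units of education, even if the job requires none

# High types still find s_star worth acquiring (100 - 1.0 * 20 = 80 > 60),
# so in the separating equilibrium they all buy education that is pure signal.
```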

Comment by findis on [Link] Economists' views differ by gender · 2012-12-31T17:43:44.241Z · LW · GW

Differences in conformity: women may conform a bit more to widespread social views (at least, to views of "their social class") and/or compartmentalize more between what they learn about a specific topic and their general views. This would mean female scientists would be slightly less likely to be atheists in religious countries, female theology students would be slightly less likely to be fanatics in not-that-fanatical societies, etc.

We need to look at differences between men and women conditional on the fact that they've become economists, not just differences between men and women. Becoming a professional economist requires more nonconformity for a woman than for a man -- deciding to pursue a gender-atypical job, having peers and mentors who are mostly male, and delaying having children or putting a lot of time into family life until at least age 30.

Different subfields in economics: Maybe "economics" shouldn't be considered one big blob - there may be some subfields that have more in common with other social sciences (and thus have a more female student body, and a more "liberal" outlook), and some more in common with maths and business.

There are more women in fields you might expect to be more liberal, and fewer in fields like theory (http://www.cepr.org/meets/wkcn/3/3530/papers/Dolado.pdf). Women seem to be more concentrated in public economics (taxes) and in economic development, and less concentrated in theory... and in the large field of "other". When the fields are defined differently, women are especially well represented (compared to the mean) in "health, education, and welfare" and in "labour and demographic economics".

It would be interesting to see how, say, health economists view employer-provided health insurance rules.

Comment by findis on Welcome to Less Wrong! (July 2012) · 2012-12-29T20:51:23.025Z · LW · GW

To be properly isomorphic to the Newcomb's problem, the chance of the predictor being wrong should approximate to zero.

If I thought that the chance of my friend's mother being wrong approximated to zero, I would of course choose to one-box. If I expected her to be an imperfect predictor who assumed I would behave as if I were in the real Newcomb's problem with a perfect predictor, then I would choose to two-box.

Hm, I think I still don't understand the one-box perspective, then. Are you saying that if the predictor is wrong with probability p, you would take two-boxes for high p and one box for a sufficiently small p (or just for p=0)? What changes as p shrinks?

Or what if Omega/Ann's mom is a perfect predictor, but for a random 1% of the time decides to fill the boxes as if it made the opposite prediction, just to mess with you? If you one-box for p=0, you should believe that taking one box is correct (and generates $1 million more) in 99% of cases and that two boxes is correct (and generates $1000 more) in 1% of cases. So taking one box should still have a far higher expected value. But the perfect predictor who sometimes pretends to be wrong behaves exactly the same as an imperfect predictor who is wrong 1% of the time.
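
Spelling out the expected values I mean (standard Newcomb amounts, my own arithmetic):

```python
# The expected values under the one-boxer's own (correlational) way of
# counting, with the standard $1,000,000 / $1,000 amounts:
p = 0.99                    # probability the contents match your choice
M, k = 1_000_000, 1_000

ev_one_box = p * M + (1 - p) * 0           # 990,000
ev_two_box = p * k + (1 - p) * (M + k)     # 11,000
print(ev_one_box, ev_two_box)
# Whatever makes one-boxing look right at p = 1 still makes it look right at
# p = 0.99, which is why I don't see what is supposed to change as p shrinks.
```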

Comment by findis on Belief in Self-Deception · 2012-12-29T19:29:34.276Z · LW · GW

I await the eager defenses of belief in belief in the comments, but I wonder if anyone would care to jump ahead of the game and defend belief in belief in belief? Might as well go ahead and get it over with.

My boyfriend was once feeling a bit tired and unmotivated for a few months (probably mild depression), and he also wanted to stop eating dairy for ethical reasons. He felt that his illness was partly mentally generated. He decided that he was allergic to dairy, and that dairy was causing his illness. Then he stopped eating dairy and felt better!

He told me all this, and also told me that he usually believes he is actually allergic to dairy, and it is hard to remember that he is not. When someone asks how he knows he is allergic to dairy, he says something plausible and false ("The doctor ran blood tests") and believes it if he doesn't stop and think too much.

He believes he is not allergic to dairy, but he believes he believes he is allergic to dairy? Belief-in-belief. But he recognizes this and explained it to me -- so that's a belief-in-belief-in-belief? But it helped him get over his mental illness and stop eating dairy... that's winning.

In general I would say a belief-in-belief is useful if you decide some behaviors are desirable, but some false model of the world better motivates you to behave properly. Belief-in-belief-in-belief is useful if you know too much to think both "Z is true" and "I believe not-Z". Then you tell yourself you have a belief-in-belief.

Disclaimer: This is weird to me and I don't really understand how he pulls it off.

Comment by findis on How To Have Things Correctly · 2012-12-28T00:51:16.687Z · LW · GW

My rule of thumb is that I generally don't buy an X for myself unless I've tried living without it, then borrowed a friend's X and found it helpful. This mainly applies to cooking and hiking equipment. And I try really, really hard not to buy yarn (for knitting) without a project in mind.

Comment by findis on Welcome to Less Wrong! (July 2012) · 2012-12-26T06:20:13.853Z · LW · GW

Hi, I'm Liz.

I'm a senior at a college in the US, soon to graduate with a double major in physics and economics, and then (hopefully) pursue a PhD in economics. I like computer science and math too. I'm hoping to do research in economic development, but more relevantly to LW, I'm pretty interested in behavioral economics and in econometrics (statistics). Out of the uncommon beliefs I hold, the one that most affects my life is that since I can greatly help others at a small cost to myself, I should; I donate whatever extra money I have to charity, although it's not much. (see givingwhatwecan.org)

I think I started behaving as a rationalist (without that word) when I became an atheist near the end of high school. But to rewind...

I was raised Christian, but Christianity was always more of a miserable duty than a comfort to me. I disliked the music and the long services and the awkward social interactions. I became an atheist for no good reason in the beginning of high school, but being an atheist was terrible. There was no one to forgive me when I screwed up, or pray to when the world was unbearably awful. My lack of faith made my father sad. Then, lying in bed and angsting about free will one night, I had some philosophical revelation, and it seemed that God must exist. I couldn't re-explain the revelation to myself, but I clung to the result and became seriously religious for the next year or so. But objections to the major strands of theism began to creep up on me. I wanted to believe in God, and I wanted to know the truth, and I found out that (surprise) having an ideal set of beliefs isn't compatible with seeking truth. I did lots of reading (mostly old-school philosophy), slowly changed my mind, then came out as an atheist (to close friends only) once the Bible Quiz season was over. (awk.)

At that point I decided to never lie to myself again. Not just to avoid comforting half-truths, but to actively question all beliefs I held, and to act on whatever conclusions I come to. After hard practice, unrelenting honesty towards myself is a habit I can't break, but I'm not sure it's actually a good policy. For example, a few white lies would've helped me move past a situation of extreme guilt last year.

Anyway, more recently, I read HPMOR and I'm now reading Kahneman's Thinking, Fast and Slow. I'm slowly working through the Sequences too. I always appreciate new reading recommendations.


I have some thoughts on Newcomb's Paradox. (Of course I am new to this, probably way off base, etc.) I think taking two boxes is the right way to go, and it seems that the intuition towards one-boxing often comes from the idea that your decision somehow changes the contents of the boxes. (No reverse causality is supposed to be assumed, right?) Say that, instead of an infallible superintelligence, the story changes to:

"You go to visit your friend Ann, and her mom pulls you into the kitchen, where two boxes are sitting on a table. She tells you that box A has either $1 billion or $0, and box B has $1,000. She says you can take both boxes or just A, and that if she predicted you take box B she didn't put anything in A. She has done this to 100 of Anne's friends and has only been wrong for one of them. She is a great predictor because she has been spying on your philosophy class and reading your essays."

Terribly small sample size, but a friend told me this changes his answer from one box to two. As far as I can tell these changes are aesthetic and make the story clearer without changing the philosophy.


And, a question. Why is Bayes so central to this site? I use Bayesian reasoning regularly, but I learned Bayes' Theorem around the time I started thinking seriously about anything, so I'm not clear on what the alternative is. Why do y'all celebrate Bayes, rather than algebra or well-designed experiments?

Edit: Read further in Thinking, Fast and Slow; question answered.