Comments

Comment by xv15 on Rationality Quotes Thread November 2015 · 2015-11-05T23:48:58.541Z · LW · GW

The optimist fell ten stories, and at each window bar he shouted to the folks inside: 'Doing all right so far!'

Anonymous; quoted for instance in The Manager's Dilemma

Comment by xv15 on Privileging the Question · 2013-05-01T11:12:58.675Z · LW · GW

Sure.

Comment by xv15 on Privileging the Question · 2013-05-01T04:44:49.612Z · LW · GW

It's rather against the point of the article to start talking about the above examples of privileged questions...

Even so, it's worth noting that immigration policy is a rare, important question with first-order welfare effects. Relaxing border restrictions creates a free lunch in the same way that donating to the Against Malaria Foundation creates a free lunch. It costs on the order of $7 million to save an additional American life, but on the order of $2500 to save a life if you're willing to consider non-Americans.

By contrast, most of politics consists of policy debates with about as many supporters as opponents, suggesting there isn't a huge welfare difference either way. What makes immigration and international charity special is the fact that the beneficiaries of the policies have no say in our political system. Thus the benefits that accrue to them are not weighted as heavily as our benefits, which means there's a free lunch if overall welfare is what you care about.

Comment by xv15 on Rationality Quotes April 2013 · 2013-04-09T05:55:24.964Z · LW · GW

"Fairness" depends entirely on what you condition on. Conditional on the hare being better at racing, you could say it's fair that the hare wins. But why does the hare get to be better at racing in the first place?

Debates about what is and isn't fair are best framed as debates over what to condition on, because that's where most of the disagreement lies. (As is the case here, I suppose.)

Comment by xv15 on Rationality Quotes April 2013 · 2013-04-08T17:23:40.472Z · LW · GW

This is much better than my moral.

Comment by xv15 on Rationality Quotes April 2013 · 2013-04-08T05:38:53.462Z · LW · GW

I will run the risk of overanalyzing: Faced with a big wide world and no initial idea of what is true or false, people naturally gravitate toward artificial constraints on what they should be allowed to believe. This reduces the feeling of crippling uncertainty and makes the task of reasoning much simpler, and since an artificial constraint can be anything, they can even paint themselves a nice rosy picture in which to live. But ultimately it restricts their ability to align their beliefs with the truth. However comforting their illusions may be at first, there comes a day of reckoning. When the false model finally collides with reality, reality wins.

The truth is that reality contains many horrors. And they are much harder to escape from a narrow corridor that cuts off most possible avenues for retreat.

Comment by xv15 on Rationality Quotes April 2013 · 2013-04-08T05:38:31.754Z · LW · GW

"Alas", said the mouse, "the whole world is growing smaller every day. At the beginning it was so big that I was afraid, I kept running and running, and I was glad when I saw walls far away to the right and left, but these long walls have narrowed so quickly that I am in the last chamber already, and there in the corner stands the trap that I must run into."

"You only need to change your direction," said the cat, and ate it up.

-Kafka, A Little Fable

Comment by xv15 on Rationality Quotes April 2013 · 2013-04-08T03:36:06.394Z · LW · GW

Joe Pyne was a confrontational talk show host and amputee, which I say for reasons that will become clear. For reasons that will never become clear, he actually thought it was a good idea to get into a zing-fight with Frank Zappa, his guest of the day. As soon as Zappa had been seated, the following exchange took place:

Pyne: I guess your long hair makes you a girl.

Zappa: I guess your wooden leg makes you a table.

Of course this would imply that Pyne is not a featherless biped.

Source: Robert Cialdini's Influence: The Psychology of Persuasion

Comment by xv15 on Rationality Quotes April 2013 · 2013-04-08T03:14:10.293Z · LW · GW

I've always thought there should be a version where the hare gets eaten by a fox halfway through the race, while the tortoise plods along safely inside its armored mobile home.

Comment by xv15 on Bayesian Adjustment Does Not Defeat Existential Risk Charity · 2013-03-18T22:31:34.445Z · LW · GW

That is true. But there are also such things as holding another person at gunpoint and ordering them to do something. It doesn't make them the same person as you. Their preferences are different even if they seem to behave in your interest.

And in either case, you are technically not deciding the other person's behavior. You are merely realigning their incentives. They still choose for themselves what is the best response to their situation. There is no muscle now-you can flex to directly make tomorrow-you lift his finger, even if you can concoct some scheme to make it optimal for him tomorrow.

In any case, commitment devices don't threaten the underlying point because most of the time they aren't available or cost-effective, which means there will still be many instances of behavior that are best described by non-exponential discounting.

Comment by xv15 on Bayesian Adjustment Does Not Defeat Existential Risk Charity · 2013-03-17T20:03:45.669Z · LW · GW

We can't jettison hyperbolic discounting if it actually describes the relationship between today-me's and tomorrow-me's preferences. If today-me and tomorrow-me do have different preferences, there is nothing in the theory to say which one is "right." They simply disagree. Yet each may be well-modeled as a rational agent.
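To make the disagreement concrete, here is a minimal sketch (my own illustration: textbook functional forms and made-up numbers) of the preference reversal that separates hyperbolic from exponential discounting:

```python
# Illustrative only: textbook discount functions with made-up numbers.

def exponential(delay, rate=0.9):
    """Exponential discount factor: rate ** delay."""
    return rate ** delay

def hyperbolic(delay, k=1.0):
    """Hyperbolic discount factor: 1 / (1 + k * delay)."""
    return 1.0 / (1.0 + k * delay)

def prefers_larger_later(discount, now):
    """At time `now`, does the agent prefer $110 on day 31 over $100 on day 30?"""
    return 110 * discount(31 - now) > 100 * discount(30 - now)

for discount in (exponential, hyperbolic):
    print(discount.__name__,
          "| a month ahead:", prefers_larger_later(discount, now=0),
          "| on day 30:", prefers_larger_later(discount, now=30))
# exponential | a month ahead: False | on day 30: False  (one consistent agent)
# hyperbolic  | a month ahead: True  | on day 30: False  (preference reversal)
```

Neither time-slice is making a mistake at its moment of choice; under the hyperbolic rule the two simply want different things.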

The default fact of the universe is that you aren't the same agent today as tomorrow. An "agent" is a single entity with one set of preferences who makes unified decisions for himself, but today-you can't make decisions for tomorrow-you any more than today-you can make decisions for today-me. Even if today-you seems to "make" a decision for tomorrow-you, tomorrow-you can just do something else. When it comes down to it, today-you isn't the one pulling the trigger tomorrow. It may turn out that you are (approximately) an individual with consistent preferences over time, in which case it's equivalent to today-you being able to make decisions for tomorrow-you, but if so that would be a very special case.

There are evolutionary pressures that encourage agency and exponential discounting in particular. I have also seen models that tried to generate some evolutionary reason for time inconsistency, but never convincingly. I suspect that really, it's just plain hard to get all the different instances of a person to behave as a single agent across time, because that's fundamentally not what people are.

The idea that you are a single agent over time is an illusion supported by inherited memories and altruistic feelings towards your future selves. If you all happen to agree on which one of you should get to eat the donut, I will be surprised.

Comment by xv15 on MetaMed: Evidence-Based Healthcare · 2013-03-06T19:22:51.638Z · LW · GW

Another alternative is to provide doctors with a simple, easy-to-use program called Dr. Bayes. The program would take as input: (1) the doctor's initial estimate of the chance the patient has the disorder (taking into account whatever the doctor knows about various risk factors), and (2) the false positive and false negative rates of a test.

The program would spit out the probability of having the disorder given positive and negative test results.
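A minimal sketch of what the core calculation might look like (the function name and the example numbers are hypothetical):

```python
def dr_bayes(prior, false_positive_rate, false_negative_rate):
    """Posterior probability of the disorder after a positive or a negative test.

    prior               -- doctor's pre-test estimate of P(disorder)
    false_positive_rate -- P(test positive | no disorder)
    false_negative_rate -- P(test negative | disorder)
    """
    sensitivity = 1.0 - false_negative_rate          # P(positive | disorder)
    p_positive = sensitivity * prior + false_positive_rate * (1.0 - prior)
    p_negative = 1.0 - p_positive

    given_positive = sensitivity * prior / p_positive
    given_negative = false_negative_rate * prior / p_negative
    return given_positive, given_negative

# Hypothetical test: 1% prior, 5% false positive rate, 10% false negative rate.
pos, neg = dr_bayes(prior=0.01, false_positive_rate=0.05, false_negative_rate=0.10)
print(f"P(disorder | positive test) = {pos:.1%}")  # about 15.4%
print(f"P(disorder | negative test) = {neg:.1%}")  # about 0.1%
```

The doctor never touches Bayes' theorem; they only supply the three numbers they already know.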

Obviously there are already tools on the internet that will implement Bayes theorem for you. But maybe it could be sold to doctors if the interface were designed specifically for them. I could see a smart person in charge of a hospital telling all the doctors at the hospital to incorporate such a program into their diagnostic procedure.

Failing this, another possibility is to solicit the relevant information from the doctor and then do the math yourself. (Being sure to get the doctor's prior before any test results are in). Not every doctor would be cooperative...but come to think of it, refusal to give you a number is a good sign that maybe you shouldn't trust that particular doctor anyway.

Comment by xv15 on MetaMed: Evidence-Based Healthcare · 2013-03-06T02:54:34.387Z · LW · GW

Thanks, PPV is exactly what I'm after.

The alternative to giving a doctor positive & negative predictive values for each maternal age is to give false positive & negative rates for the test plus the prevalence rate for each maternal age. Not much difference in terms of the information load.

One concern I didn't consider before is that many doctors would probably resist reporting PPVs to their patients because they are currently recommending tests that, if they actually admitted the PPVs, would look ridiculous! (e.g. breast cancer screening).

Comment by xv15 on MetaMed: Evidence-Based Healthcare · 2013-03-05T19:15:56.612Z · LW · GW

"False positive rate" and "False negative rate" have strict definitions and presumably it is standard to report these numbers as an outcome of clinical trials. Could we similarly define a rigid term to describe the probability of having a disorder given a positive test result, and require that to be reported right along with false positive rates?

Seems worth an honest try, though it might be too hard to define it in such a way as to forestall weaseling.

Comment by xv15 on MetaMed: Evidence-Based Healthcare · 2013-03-05T18:10:09.908Z · LW · GW

Only one out of 21 obstetricians could estimate the probability that an unborn child had Down syndrome given a positive test

Say the doctor knows false positive/negative rates of the test, and also the overall probability of Down syndrome, but doesn't know how to combine these into the probability of Down syndrome given a positive test result.

Okay, so to the extent that it's possible, why doesn't someone just tell them the results of the Bayesian updating in advance? I assume a doctor is told the false positive and negative rates of a test. But what matters to the doctor is the probability that the patient has the disorder. So instead of telling a doctor, "Here is the probability that a patient with Down syndrome will have a negative test result," why not just directly say, "When the test is positive, here is the probability of the patient actually having Down syndrome. When the test is negative, here is the probability that the patient has Down syndrome."

Bayes theorem is a general tool that would let doctors manipulate the information they're given into the probabilities that they care about. But am I crazy to think that we could circumvent much of their need for Bayes theorem by simply giving them different (not necessarily much more) information?
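To make that concrete, here is an illustrative calculation in natural frequencies (the screening numbers are hypothetical, chosen only to show the shape of the problem, not actual statistics for any real test):

```python
# Hypothetical numbers for illustration -- not actual screening statistics.
women = 100_000
prevalence = 1 / 1000         # P(Down syndrome)
sensitivity = 0.90            # P(positive test | Down syndrome)
false_positive_rate = 0.05    # P(positive test | no Down syndrome)

affected = women * prevalence                                 # 100 women
true_positives = affected * sensitivity                       # 90
false_positives = (women - affected) * false_positive_rate    # 4,995

ppv = true_positives / (true_positives + false_positives)
print(f"P(Down syndrome | positive test) = {ppv:.1%}")        # about 1.8%
```

With numbers like these, a positive result still means a roughly 98% chance the child does not have Down syndrome -- exactly the kind of pre-digested statement a statistically literate person upstream could hand to doctors verbatim.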

There are counterpoints to consider. But it seems to me that many examples of Bayesian failure in medicine are analogously simple to the above, and could be as simply fixed. The statistical illiteracy of doctors can be offset so long as there are statistically literate people upstream.

Comment by xv15 on Rationality Quotes February 2013 · 2013-02-11T16:34:07.463Z · LW · GW

Closeness in the experiment was reasonably literal but may also be interpreted in terms of identification with the torturer. If the church is doing the torturing then the especially religious may be more likely to think the tortured are guilty. If the state is doing the torturing then the especially patriotic (close to their country) may be more likely to think that the tortured/killed/jailed/abused are guilty. That part is fairly obvious but note the second less obvious implication–the worse the victim is treated the more the religious/patriotic will believe the victim is guilty. ... Research in moral reasoning is important because understanding why good people do evil things is more important than understanding why evil people do evil things.

-Alex Tabarrok

Comment by xv15 on Rationality Quotes November 2012 · 2012-11-07T00:26:09.013Z · LW · GW

I dislike this quote because it obscures the true nature of the dilemma, namely the tension between individual and collective action. Being "not in one's right mind" is a red herring in this context. Each individual action can be perfectly sensible for the individual, while still leading to a socially terrible outcome.

The real problem is not that some genius invents nuclear weapons and then idiotically decides to incite global nuclear war, "shooting from the hip" to his own detriment. The real problem is that incentives can be aligned so that it is in everyone's interest, every step along the way, to do their part in their own ultimate destruction.

Of course, if "right mind" were defined to mean "socially optimal mind," fine, we aren't in our right minds. But I don't think that's the default interpretation.

Comment by xv15 on What Is Signaling, Really? · 2012-07-12T19:07:50.838Z · LW · GW

This post, by its contents and tone, seems to really emphasize the downside of signaling. So let me play the other side.

Enabling signaling can add or subtract a huge amount of value relative to what would happen without signaling. You can tweak your initial example to get a "rat race" outcome where everyone, including the stupid people, sends a costly signal that ends up being completely uninformative (since everyone sends it). But you can also make it prohibitively mentally painful for stupid people to go to college, versus neutral or even enjoyable for smart people (instead of there being an actual economic cost of engaging in signaling), with a huge gain to employers from being able to tell them apart.
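To make the two cases concrete, here is a toy sketch (my own numbers, purely illustrative; wages are taken as given rather than solved for, so this is a sketch of incentives, not a full equilibrium model):

```python
def who_attends(cost_smart, cost_stupid, wage_college, wage_none):
    """Which types go to college, given each type's signaling cost and the
    wages paid to college graduates vs. everyone else?"""
    attend = set()
    if wage_college - cost_smart > wage_none:
        attend.add("smart")
    if wage_college - cost_stupid > wage_none:
        attend.add("stupid")
    return attend

# Rat race: the signal costs everyone the same and everyone sends it,
# so it conveys no information -- pure waste.
print(who_attends(cost_smart=10, cost_stupid=10, wage_college=50, wage_none=20))
# both types attend -> the signal is uninformative

# Separation: college is painless for smart types and prohibitively painful
# for stupid types, so the signal is nearly free and fully informative.
print(who_attends(cost_smart=0, cost_stupid=40, wage_college=50, wage_none=20))
# only 'smart' attends -> employers can tell the types apart at little cost
```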

one can look at Nikolai Roussanov's study on how the dynamics of signaling games in US minority communities encourage conspicuous consumption and prevent members of those communities from investing in education and other important goods.

As a counterpoint to this, in other cases the signaling value of education may induce people to get more education than is individually optimal, which is actually a good thing socially if you think education has large positive externalities. And if you work hard and discover a cure for cancer, you will be paid largely through other people's opinions of you, now that you've signaled to them that you are such an intelligent and hard-working and socially-conscious person. (You were just as intelligent before you cured it, but now they know). Since you cannot possibly hope to recoup even a modest fraction of the social value you will have created, that's unambiguously good for incentives.

On any other site, I would probably get away with saying: Since invention is basically the reason for our high modern standards of living, if signaling seriously encourages it, then in the long run the positive value of signaling would seem to dwarf any losses discussed above (even the "poverty" of some minority communities is nothing compared to the poverty in all of our shared historical past). But here...well, here we are pretty worried about where our invention spree might be leading us.

Comment by xv15 on Biased Pandemic · 2012-03-14T05:12:53.389Z · LW · GW

This sounds awesome. It would be really cool if you could configure it so that identifying biases actually helps you to win by some tangible measure. For example, if figuring out a bias just meant that the person stopped playing with a bias (instead of drawing a new one), figuring out biases would be instrumental in winning. The parameters could be tweaked of course (if people typically figure out the biases quickly, you could make it so they redraw biases several times). Or you could link the drawing of additional biases to the drawing of epidemic cards?

I have this terrifying vision of a version where it is biases -- not diseases -- which spread throughout the world, and whenever a player's piece is in a city infected with a certain bias, they have to play with it...

Comment by xv15 on Can the Chain Still Hold You? · 2012-01-13T03:21:27.168Z · LW · GW

Luke, I thought this was a good post for the following reasons.

(1) Not everything needs to be an argument to persuade. Sometimes it's useful to invest your limited resources in better illuminating your position instead of illuminating how we ought to arrive at your position. Many LWers already respect your opinions, and it's sometimes useful to simply know what they are.

The charitable reading of this post is not that it's an attempted argument via cherry-picked examples that support your feeling of hopefulness. Instead I read it as an attempt to communicate your level of hopefulness accurately to people who you largely expect to be less hopeful. This is an imprecise business that necessarily involves some emotional language, but ultimately I think you are just saying: do not privilege induction with such confidence, we live in a time of change.

It might quell a whole class of complaints if you said something like that in the post. Perhaps you feel you've noticed a lot of things that made you question and revise your prior confidence about the unchangingness of the world...if so, why not tell us explicitly?

(2) I also see this post as a step in the direction of your stated goal to spend time writing well. It seems like something you spent time writing (at least relative to the amount of content it contains). Quite apart from the content it contains, it is a big step in the direction of eloquence. LWers are programmed to notice/become alarmed when eloquence is being used to build up a shallow argument, but it's the same sort of writing whether your argument is shallow or deep. This style of writing will do you a great service when it is attached to a much deeper argument. So at the least it's good practice, and evidence that you should stick with your goal.

Comment by xv15 on 2011 Survey Results · 2011-12-13T17:38:10.318Z · LW · GW

wedrifid, RIGHT. Sorry, got a little sloppy.

By "TDT reasoning" -- I know, I know -- I have been meaning Desrtopa's use of "TDT reasoning," which seems to be like TDT + [assumption that everyone else is using TDT].

I shouldn't say that TDT is irrelevant, but really that it is a needless generalization in this context. I meant that Desrtopa's invocation of TDT was irrelevant, in that it did nothing to fix the commons problem that we were initially discussing without mention of TDT.

Comment by xv15 on 2011 Survey Results · 2011-12-13T03:22:52.444Z · LW · GW

It seems like this is an example of, at best, a domain on which decisionmaking could use TDT. No one is denying that people could use TDT, though. I was hoping for you to demonstrate an example where people actually seem to be behaving in accordance with TDT. (It is not enough to just argue that people reason fairly similarly in certain domains.)

"Isomorphic" is a strong word. Let me know if you have a better example.

Anyway let me go back to this from your previous comment:

Tragedies of commons are not universally unresolvable....Simply saying "It's a tragedy of the commons problem" doesn't mean there's no chance of resolving it and therefore no use in knowing about it.

No one is claiming tragedies of the commons are always unresolvable. We are claiming that unresolved tragedies of the commons are tragedies of the commons! You seem to be suggesting that knowledge is a special thing which enables us to possibly resolve tragedies of the commons and therefore we should seek it out. But in the context of global warming and the current discussion, knowledge-collection is the tragedy of the commons. To the extent that people are underincentivized to seek out knowledge, that is the commons problem we're talking about.

If you turn around and say, "well they should be seeking out more knowledge because it could potentially resolve the tragedy"...well of course more knowledge could resolve the tragedy of not having enough knowledge, but you have conjured up your "should" from nowhere! The tragedy we're discussing is what exists after rational individuals decide to gather exactly as much information as a rational agent "should," where should is defined with respect to that agent's preferences and the incentives he faces.

Final question: If TDT reasoning did magically get us to the level of informedness on global warming that you think we rationally should be attaining, and if we are not attaining that level of informedness, does that not imply that we aren't using TDT reasoning? And if other people aren't using TDT reasoning, does that not imply that it is NOT a good idea for me to start using it? You seem to think that TDT has something to do with how rational agents "should" behave here, but I just don't see how TDT is relevant.

Comment by xv15 on 2011 Survey Results · 2011-12-11T20:00:22.969Z · LW · GW

Later, when I cease identifying with my past self too much, I would admit (at least to myself) that I have changed my opinion.

I think the default is that people change specific opinions more in response to the tactful debate style you're identifying, but are less likely to ever notice that they have in fact changed their opinion. I think explicitly noticing one's wrongness on specific issues can be really beneficial in making a person less convinced of their rightness more globally, and therefore more willing to change their mind in general. My question is how we ought to balance these twin goals.

It would be much easier to get at the first effect by experiment than the second, since the latter is a much more long-term investment in noticing one's biases more generally. And if we could get at both, we would still have to decide how much we care about one versus the other, on LW.

Personally I am becoming inclined to give up the second goal.

Comment by xv15 on 2011 Survey Results · 2011-12-11T18:08:17.392Z · LW · GW

I'd like to say yes, but I don't really know. Am I way off-base here?

Probably the most realistic answer is that I would sometimes believe it, and sometimes not. If not often enough, it's not worth it. It's too bad there aren't more people weighing in on these comments because I'd like to know how the community thinks my priorities should be set. In any case you've been around for longer so you probably know better than I.

Comment by xv15 on 2011 Survey Results · 2011-12-10T22:48:11.800Z · LW · GW

human decisionmaking is isomorphic to TDT in some domains

Maybe it would help if you gave me an example of what you have in mind here.

Comment by xv15 on 2011 Survey Results · 2011-12-10T22:28:08.897Z · LW · GW

prase, I really sympathize with that comment. I will be the first to admit that forcing people to concede their incorrectness is typically not the best way of getting them to agree on the truth. See for example this comment.

BUT! On this site we sort of have TWO goals when we argue, truth-seeking and meta-truth-seeking. Yes, we are trying to get closer to the truth on particular topics. But we're also trying to make ourselves better at arguing and reasoning in general. We are trying to step back and notice what we're doing, and correct flaws when they are exposed to our scrutiny.

If you look back over this debate, you will see me at several points deliberately stepping back and trying to be extremely clear about what I think is transpiring in the debate itself. I think that's worth doing, on lesswrong.

To defend the particular sentence you quote: I know that when I was younger, it was entirely possible for me to "escape" from a debate in a face-saving way without realizing I had actually been wrong. I'm sure this still happens from time to time...and I want to know if it's happening! I hope that LWers will point it out. On LW I think we ought to prioritize killing biases over saving faces.

Comment by xv15 on 2011 Survey Results · 2011-12-07T23:58:11.988Z · LW · GW

Unfortunately that response did not convince me that I'm misunderstanding your position.

If people are not using a TDT decision rule, then your original explicit use of TDT reasoning was irrelevant and I don't know why you would have invoked it at all unless you thought it was actually relevant. And you continue to imply at least a weaker form of that reasoning.

No one is disputing that there is correlation between people's decisions. The problem is that correlation does not imply that TDT reasoning works! A little bit of correlation does not imply that TDT works a little bit. Unless people are similar to you AND using TDT, you don't get to magically drag them along with you by choosing to cooperate.

This is a standard textbook tragedy of the commons problem, plain and simple. From where I'm standing I don't see the relevance of anything else. If you want to continue disagreeing, can you directly tell me whether you think TDT is still relevant and why?

Comment by xv15 on 2011 Survey Results · 2011-12-07T12:46:12.781Z · LW · GW

Desrtopa, can we be careful about what it means to be "different" from other agents? Without being careful, we might reach for any old intuitive metric. But it's not enough to be mentally similar to other agents across just any metric. For your reasoning to work, they have to be executing the same decision rule. That's the metric that matters here.

Suppose we start out identical but NOT reasoning as per TDT -- we defect in the prisoner's dilemma, say -- but then you read some LW and modify your decision rule so that when deciding what to do, you imagine that you're deciding for both of us, since we're so similar after all. Well, that's not going to work too well, is it? My behavior isn't going to change any, since, after all, you can't actually influence it by your own reading about TDT.

So don't be so quick to place your faith in TDT reasoning. Everyone can look very similar in every respect EXCEPT the one that matters, namely whether they are using TDT reasoning.

With this in mind, if you reread the bayesians versus barbarians post you linked to, you should be able to see that it reads more like an existence proof of a cooperate-cooperate equilibrium. It does not say that we will necessarily find ourselves in such an equilibrium just by virtue of being sufficiently similar.

Comment by xv15 on 2011 Survey Results · 2011-12-07T12:21:53.803Z · LW · GW

To me, this comment basically concedes that you're wrong but attempts to disguise it in a face-saving way. If you could have said that people should be informing themselves at the socially optimal level, as you've been implying with your TDT arguments above, you would have. Instead, you backed off and said that people ought to be informing themselves at least a little.

Just to be sure, let me rewrite your claim precisely, in the sense you must mean it given your supposed continued disagreement:

In general I think that a personal policy of not informing oneself at even a basic level about tragedies of commons where the information is readily available is not beneficial to the individual, because humans have a sufficiently developed propensity for resolving tragedies of commons to give at least the most basic information marginal benefit to the individual.

Assuming that's what you're saying, it's easy to see that even this is an overreach. The question on the table is whether people should be informing themselves about global warming. Whether the first epsilon of information one gets from "informing oneself" (as opposed to hearing the background noise) is beneficial to the individual relative to the cost of attaining it, is a question of derivatives of cost and benefit functions at zero, and it could go either way. You simply can't make a general statement about how these derivatives relate for the class of Commons Problems. But more importantly, even if you could, SO WHAT? The question is not whether people should be informing themselves a bit, the question is whether they should be informing themselves at anywhere close to the socially optimal level. And by admitting it's a tragedy of the commons, we are already ANSWERING that question.

Does that make sense? Am I misunderstanding your position? Has your position changed?

Comment by xv15 on 2011 Survey Results · 2011-12-06T15:59:07.074Z · LW · GW

I agree. Desrtopa is taking Eliezer's barbarians post too far for a number of reasons.

1) Eliezer's decision theory is, at the least, controversial, which means many people here may not agree with it.

2) Even if they agree with it, it doesn't mean they have attained rationality in Eliezer's sense.

3) Even if they have attained this sort of rationality, we are but a small community, and the rest of the world is still not going to cooperate with us. Our attempts to cooperate with them will be impotent.

Desrtopa: Just because it upholds an ideal of rationality that supports cooperation does not mean we have attained that ideal. Again, the question is not what you'd like to be true, but what's actually true. If you're still shocked by people's low confidence in global warming, it's time to consider the possibility that your model of the world -- one in which people are running around executing TDT -- is wrong.

Comment by xv15 on 2011 Survey Results · 2011-12-06T05:44:12.108Z · LW · GW

Exactly, it IS the tragedy of the commons, but that supports my point, not yours. It may be good for society if people are more informed about global warming, but society isn't what makes decisions. Individuals make decisions, and it's not in the average individual's interest to expend valuable resources learning more about global warming if it's going to have no real effect on the quality of their own life.

Whether you think it's an individual's "job" to do what's socially optimal is completely beside the point here. The fact is they don't. I happen to think that's pretty reasonable, but how we wish people would behave doesn't matter when we're trying to predict how they will behave.

Let me try to be clear, since you might be wondering why someone (not me) downvoted you: You started by noting your shock that people aren't that informed about global warming. I said we shouldn't necessarily be surprised that they aren't that informed about global warming. You responded that we're suffering from the tragedy of the commons, or the tragedy of the rationalists versus the barbarians. I respond that I agree with what you say but not with what you seem to think it means. When we unearth a tragedy of the commons, we don't go, "Aha! These people have fallen into a trap and if they saw the light, they would know to avoid it!" Casting light on the tragedy of the commons does not make it optimal for individuals to avoid it.

Casting light on the commons is a way of explaining why people would be behaving in such a socially suboptimal way, not a way of bolstering our shock over their behavior.

Comment by xv15 on 2011 Survey Results · 2011-12-06T04:43:41.826Z · LW · GW

Wait a sec. Global warming can be important for everyday life without it being important that any given individual know about it for everyday life. In the same way that matters of politics have tremendous bearing on our lives, yet the average person might rationally be ignorant about politics since he can't have any real effect on politics. I think that's the spirit in which thomblake means it's a political matter. For most of us, the earth will get warmer or it won't, and it doesn't affect how much we are willing to pay for tomatoes at the grocery store (and therefore it doesn't change our decision rule for how to buy tomatoes), although it may affect how much tomatoes cost.

(It's a bit silly, but on the other hand I imagine one could have their preferences for tomatoes depend on whether tomatoes had "genes" or not.)

This is a bit like the distinction between microeconomics and macroeconomics. Macroeconomics is the stuff of front page newspaper articles about the economy, really very important stuff. But if you had to take just one economics class, I would recommend micro, because it gives you a way of thinking about choices in your daily life, as opposed to stuff you can't have any real effect on.

Comment by xv15 on 2011 Less Wrong Census / Survey · 2011-11-01T22:56:49.345Z · LW · GW

I took the survey too. I would strongly recommend changing the Singularity question to read:

"If you don't think a Singularity will ever happen, write N for Never"

Or something like that. The fraction of people who think Never with high probability is really interesting! You don't want to lump them in with the people who don't have an opinion.

Comment by xv15 on Better Disagreement · 2011-10-25T06:06:18.201Z · LW · GW

If the goal is intellectual progress, those who disagree should aim not for name-calling but for honest counterargument.

and

DH7: Improve the Argument, then Refute Its Central Point...if you're interested in producing truth, you will fix your opponents' arguments for them. To win, you must fight not only the creature you encounter; you [also] must fight the most horrible thing that can be constructed from its corpse."

I would add that the goal of intellectual progress sometimes extends beyond you-the-rationalist, to the (potentially less than rational) person you're arguing with. The goal is not just to "produce" the truth, or to recognize the truth with your own two eyes. The goal is to both locate the truth and convince the other person that it is in fact the truth.

Often, I find myself in the following scenario: Someone says, "X and Y, therefore Z!" And off the bat, I have a good idea of what they're thinking and where the logic goes bad. But in point of fact, they are being loose with semantics, and there exist definitions of X and Y consistent with their original (loose) statements which would imply Z. I could ask them clarifying questions and get them to pin down their position further...but alternatively, I am free to say, "Surely you don't mean this one thing [which they really do mean] because here's how that would go bad. Perhaps you really meant this other thing? Am I understanding you correctly?"

This makes it much easier for people to "back down" from their original position without losing face, because they are framed as not having ever committed to that position in the first place. The reality is that we often have a choice between nailing someone in place and offering them up as a sacrifice to the logic gods -- in which case we don't really win since the logic gods can't actually touch people who don't submit to their power -- or deliberately leaving them untethered, so that they will more willingly adjust to new evidence.

Here it's not so much that I'm constructing the best argument from the corpse of their fully formed argument and striking it down. It's more like encouraging the growth of an adolescent argument in a direction that does not require it to be struck down, in the process striking down the bad argument that the original argument would have grown into, and trying to ensure that my "opponent" doesn't end up getting slain along with the bad argument.

This would be out of place in the above post, but I thought it was worthy of a discussion on Better Disagreement. Because I used to think the way to win was to pin people into logical corners, but if your goal is partly to convince people, and those people are like most people, then in my (limited) experience, this way works So. Much. Better.

Comment by xv15 on 1001 PredictionBook Nights · 2011-10-08T23:11:56.480Z · LW · GW

As we evaluate predictions for accuracy, one thing we should all be hyper-aware of is that predictions can affect behavior. It is always at least a little bit suspect to evaluate a prediction simply for accuracy, when its goal might very well have been more than accuracy.

If I bet you $100 on even odds that I'm going to lose 10 lbs this month, it doesn't necessarily indicate that I think the probability of it happening is > 50%. Perhaps this bet increases the probability of it happening from 10% to 40%, and maybe that 30% increase in probability is worth the expected $20 cost (at a 40% chance of winning, the even-odds $100 bet has expected value 0.4 x $100 - 0.6 x $100 = -$20). More generally, even with no money on the table, such a bet might motivate me, and so it cannot be inferred from the fact that I made the bet that I actually believe the numbers in the bet.

Or let's talk about the planning fallacy. Instrumentally, it might make sense to hold the belief that you'll get your project done before the deadline, if that sort of thinking motivates you to finish it earlier than you would if you were completely resigned to finishing it late from the get-go. It might even be detrimental to your project-finishing goals to become enlightened about the reality.

And of course, predictions can affect people other than the predictor. It is ludicrous to look at the public predictions of the Fed chairman and call him a fool when he turns out to be wrong.

Sometimes a prediction is a prediction. But this is definitely something to keep in mind. And gwern, given that you have all this data now, you might find it interesting to see if there are any systematic differences between predictions that do and don't have any significant effect on behavior.

Comment by xv15 on A Crash Course in the Neuroscience of Human Motivation · 2011-08-24T19:32:16.024Z · LW · GW

I really enjoyed this article. I took a few sittings to read it, but I liked the continuous format.

Let me just make a general comment on the tone here:

But really, shouldn't it have been obvious all along that humans are irrational? Perhaps it is, to everyone but neoclassical economists and Aristoteleans. (Okay, enough teasing...)

Teasing per se is fine, but this happens to reinforce a popular sentiment which I find misleading. Everyone likes to point out the differences between standard economic assumptions and actual human behavior. Pitted against other disciplines like neuroscience, whose goal here is to get an accurate reading on what makes an individual tick, economics loses.

But that's an unfair comparison, because by and large the goal of economics is not to get an accurate reading on what makes an individual tick. Economics is much more concerned with what happens when you aggregate multiple individuals, through markets or games and so forth. The behavior that emerges from their interactions, that is the main focus of economics. Of course it can get pretty complex when you throw a bunch of people together, so we typically have to make a lot of simplifying assumptions at the level of the individual.

Rationality is certainly a simplifying assumption. Rationality is a standard assumption of economics because economists are typically looking at a more complicated situation than "how an individual behaves." It's a little unfair to compare the standard assumptions of economics with the standard assumptions of other disciplines -- in a discussion of how an individual behaves -- and conclude that economists "got it wrong." Were they trying to get it right?

Thought it was worth bringing up because I encounter this sentiment so frequently, whereas I think that economists by and large are quite aware that their assumptions are simplifications in service of their models. On the one hand, this comment is irrelevant to the goal of communicating the neuroscience of human motivation. But on the other hand, economics is usually discussed from the individual angle on LW, and often with a similar sentiment, so in the long run maybe it's worthwhile taking steps to avoid consistently giving a lot of people the wrong impression about an entire field.

[You may say, but economists do sometimes discuss individual behavior, and when they do, I often see them assuming rationality! But even here, I think it's a bit misleading to criticize economists for making the assumption. For one, descriptive results aren't the sole goal; prescriptive results can be important too, and rationality is the way to best achieve your goals. For another, because economists spend so much time around rationality for good reasons, they are very comfortable with it and have a lot of solid intuition about what follows from rationality, so to the extent that they can apply those tools to areas they aren't necessarily optimized for, they may actually have something interesting and worthwhile and unique to say. For a third, in the absence of other information about what really makes people tick, sticking to rationality is a way of restricting oneself from making up just any story to explain the facts. Neuroscience -- by putting irrationality stories on solid ground -- will certainly allow us to break away from that, but I don't think it will ever be fair to say that economists were "proven wrong" about their assumptions. Most economists don't think people are actually rational. Most economists don't think they already know what neuroscience will discover in the brain. They are just making assumptions, assumptions that have served them well and so far been enormously successful as a framework for human behavior.]

Comment by xv15 on Offense versus harm minimization · 2011-04-17T15:35:47.084Z · LW · GW

There are commenters who note that the use of "ey" and other gender neutral pronouns hurts their head. You may understand this and still use "ey" as part of a larger attempt to accustom people to language that is ultimately more convenient, even if it's worse in the short run. Which is a perfect example of what I was going to say:

When you do your harm minimization calculation, you really need to include the entire path over time, and not just the snapshot. It is often true that hurting people today makes them stronger in the future, resulting in a better outcome. It could be, for instance, that gay marriage today offends more people more deeply than it benefits, but that by pushing for its spread, many of the formerly offended people end up desensitized to it (see also any number of past civil rights issues). Or, if by showing the Brits enough pictures of salmon we could actually desensitize them to the pain, in the long run we may all be better off.

A big difference between the salmon and mohammed example is that you built into the first that Brits can't adapt to the pain. But some people may be imagining a future, better world where everyone has free speech and nobody has a problem with it. And they imagine that the way to get there is by exercising that freedom now, even if it's bad in the short run.

Personally, my feeling is that retaining offendability on some topics can easily confer benefits, but I am sympathetic to people who have not realized this, and I can understand why they would feel some compulsion to wave their free speech rights in the faces of others, without necessarily being "bad" people.

Comment by xv15 on Some rationality tweets · 2010-12-31T06:39:58.538Z · LW · GW

Big agents can be more coherent than small agents, because they have more resources to spend on coherence.

Yes. Coherence, and persuasiveness.

The individual who argues against some political lobby is quick to point out that the lobby gets its way not because it is right, but because it has reason to scream louder than the dispersed masses who would oppose it. But indeed, the very arguments the lobby crafts are likely to be more compelling to the masses, because it has the resources to make them so.

The lobby screams louder and better than smaller agents, as far as convincing people goes.

Comment by xv15 on The Irrationality Game · 2010-10-05T03:27:37.571Z · LW · GW

Okay. I still suspect I disagree with whatever you mean by mere "figures of speech," but this rational truthseeker does not have infinite time or energy.

In any case, thank you for a productive and civil exchange.

Comment by xv15 on The Irrationality Game · 2010-10-04T23:35:11.454Z · LW · GW

Fair. Let me be precise too. I read your original statement as saying that numbers will never add meaning beyond what a vague figure of speech would, i.e. if you say "I strongly believe this" you cannot make your position more clear by attaching a number. That I disagree with. To me it seems clear that:

i) "Common-sense conclusions and beliefs" are held with varying levels of precision. ii) Often even these beliefs are held with a level of precision that can be best described with a number. (Best=most succinctly, least misinterpretable, etc...indeed it seems to me that sometimes "best" could be replaced with "only." You will never get people to understand 60% by saying "I reasonably strongly believe"...and yet your belief may be demonstrably closer to 60 than 50 or 70).

I don't think your statement is defensible from a normal definition of "common sense conclusions," but you may have internally defined it in such a way as to make your statement true, with a (I think) relatively narrow sense of "meaningfulness" also in mind. For instance if you ignore the role of numbers in transmission of belief from one party to the next, you are a big step closer to being correct.

Comment by xv15 on The Irrationality Game · 2010-10-04T15:09:50.568Z · LW · GW

Again, meaningless is a very strong word, and it does not make your case easy. You seem to be suggesting that NO number, however imprecise, has any place here, and so you do not get to refute me by saying that I have to embrace arbitrary precision.

In any case, if you offer me some bets with more significant digits in the odds, my choices will reveal the cutoff to more significant digits. Wherever it may be, there will still be some bets I will and won't take, and the number reflects that, which means it carries very real meaning.

Now, maybe I will hold the line at 54% exactly, not feeling any gain to thinking harder about the cutoff (as it gets harder AND less important to nail down further digits). Heck, maybe on some other issue I only care to go out to the nearest 10%. But so what? There are plenty of cases where I know my common sense belief probability to within 10%. That suggests such an estimate is not meaningless.
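To illustrate with a hypothetical agent (a made-up credence and the standard positive-expected-value acceptance rule, not a claim about how real elicitation works), a sequence of bets at finer odds pins the cutoff down as far as you care to push it:

```python
def accepts(p, stake, payout):
    """Accept a bet that risks `stake` to win `payout` if X turns out true?"""
    return p * payout - (1 - p) * stake > 0

p_true = 0.54          # hypothetical: where my betting behavior actually is
low, high = 0.0, 1.0
for _ in range(20):
    mid = (low + high) / 2
    # A bet at implied probability `mid`: risk mid to win (1 - mid).
    if accepts(p_true, stake=mid, payout=1 - mid):
        low = mid      # I took the bet, so my cutoff is above mid
    else:
        high = mid     # I refused, so my cutoff is at or below mid

print(f"revealed cutoff lies in ({low:.5f}, {high:.5f}]")  # brackets 0.54
```

The point is just that the number is operationally defined by which bets get taken, however vague the introspection behind it feels.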

Comment by xv15 on The Irrationality Game · 2010-10-04T03:56:14.810Z · LW · GW

I tell you I believe X with 54% certainty. Who knows, that number could have been generated in a completely bogus way. But however I got here, this is where I am. There are bets about X that I will and won't take, and guess what, that's my cutoff probability right there. And by the way, now I have communicated to you where I am, in a way that does not further compound the error.

Meaningless is a very strong word.

In the face of such uncertainty, it could feel natural to take shelter in the idea of "inherent vagueness"...but this is reality, and we place our bets with real dollars and cents, and all the uncertainty in the world collapses to a number in the face of the expectation operator.

Comment by xv15 on Unknown knowns: Why did you choose to be monogamous? · 2010-07-01T12:10:29.043Z · LW · GW

For people who are embedded in a social structure, it can be costly to step outside of it. Many people will justifiably choose monogamy simply because, given the equilibrium we're in, it is the best move for them...even IF they would prefer a world of polyamory or some other alternative.

To go off topic for a moment, the same could also be said of religious belief. I know the people here feel a special allegiance to the truth, and that's wonderful, but if we lived in 12th-century Europe it might not be worth rejecting religion even if we saw through it. For that matter, people in the modern day who are particularly entrenched in a religious community...may wisely choose not to even think about the possibility that they're wrong. Wise because, taking this equilibrium behavior as given -- accepting that no one else in the community will seriously consider the possibility of being wrong -- deviating will be scorned by all the people whose opinion the deviator cares about.

I applaud people who are devoted to truthseeking, but I do not condemn the rationally ignorant, or for that matter the people who choose to be monogamous simply because that's what society expects of them, rather than because it's "what they really want" or "who they really are."

Comment by xv15 on Your intuitions are not magic · 2010-06-11T10:42:15.443Z · LW · GW

You shouldn't take this post as a dismissal of intuition, just a reminder that intuition is not magically reliable. Generally, intuition is a way of saying, "I sense similarities between this problem and other ones I have worked on. Before I work on this problem, I have some expectation about the answer." And often your expectation will be right, so it's not something to throw away. You just need to have the right degree of confidence in it.

Often one has worked through the argument before and remembers the conclusion but not the actual steps taken. In this case it is valid to use the memory of the result even though your thought process is a sort of black box at the time you apply it. "Intuition" is sometimes used to describe the inferences we draw from these sorts of memories; for example, people will say, "These problems will really build up your intuition for how mathematical structure X behaves." Even if you cannot immediately verbalize the reason you think something, it doesn't mean you are stupid to place confidence in your intuitions. How much confidence depends on how frequently you tend to be right after actually trying to prove your claim in whatever area you are concerned with.