Econ/Game theory question 2011-05-11T20:17:02.677Z
Experiment Idea Thread - Spring 2011 2011-05-06T18:10:45.329Z
Hollow Adjectives 2011-05-05T03:44:41.890Z
Planning a series: discounting utility 2011-04-19T15:27:28.702Z
The Bias You Didn't Expect 2011-04-14T16:20:53.424Z
Why people reject science 2011-02-02T15:42:59.189Z
Approaching Infinity 2011-02-01T08:11:53.822Z
Assuming Nails 2010-07-05T22:26:00.586Z
Defeating Ugh Fields In Practice 2010-06-19T19:37:44.349Z
Deception and Self-Doubt 2010-03-11T02:39:06.462Z
A Much Better Life? 2010-02-03T20:01:57.431Z
Two Truths and a Lie 2009-12-23T06:34:55.204Z
Hypothetical Paradoxes 2009-09-19T06:28:06.637Z
Utilons vs. Hedons 2009-08-10T19:20:20.968Z
Not Technically Lying 2009-07-04T18:40:02.830Z
Religion, Mystery, and Warm, Soft Fuzzies 2009-05-14T23:41:06.878Z
Masochism vs. Self-defeat 2009-04-20T21:20:50.815Z


Comment by psychohistorian on The Truth and Instrumental Rationality · 2014-11-05T15:35:42.260Z · LW · GW

I echo people's comments about the impropriety of the just-so story.

The analogy is problematic. At best, it proves "there is a possible circumstance where a fairly poorly thought-out instrumentally rational belief is inferior to a true one." Such an example is fundamentally incapable of proving the universal claim that truth is always superior. It's also a bizarre and unrealistic example. On top of that, it actually ends in the optimal outcome.

The actor in the hypothetical likely made the correct utilitarian decision in the terms you assume. The moral thing to do for a drowning person is to save them. But if you saved these people, you'd all die anyways. If you don't save them, it seems like they'll almost-drown until they pass out from exhaustion, then drown. Or they'll be killed by the approaching deadly threat. So without more information, there is no realistic possibility they survive anyways. Thus, you actually did the right thing and spared yourself the emotional anguish of making a hard decision.

Comment by psychohistorian on You can't signal to rubes · 2013-01-02T01:23:35.442Z · LW · GW

My objections would indeed not apply if a new term were used. You can define a new term however you like; that's the point of making a new term. You can't just declare that a commonly used term has a specific meaning without providing some justification for abandoning its other existing meanings.

If I wanted to argue that the definition of "bachelor" is "an unmarried man," I could do so rather easily, by citing this for example. If I were arguing over what counts as "theft," I could offer an argument as to why a particular act should or should not fit under the general definition. An argument like the OP's could theoretically include evidence (of common usage, of confusion, etc.) or argumentation, but the OP's post does not really seem to do this. It declares, "The definition should be X" and then rejects certain usages as not fitting the definition. If you're using an extremely common word like "signaling," you don't get to arbitrarily redefine it.

Comment by psychohistorian on You can't signal to rubes · 2013-01-02T01:15:56.528Z · LW · GW

we're no longer talking about signalling as it was originally conceived.

The word "signal" dates back to the 14th century. The use of the word as a verb dates back to at least the 17th century. The specific meaning you are trying to use seems to have started in the mid-to-late 20th century. That's the issue. Signaling means what you say it means, but it also has a broader meaning. If I do an action that I wouldn't do were it not for the fact that others observe me doing it, it seems very likely that part of my motivation is signaling. The manager clearly qualifies for this, as she would not be "acting decisively" but for the fact that she is being observed. (I also think that Gresham's Law is the wrong one, or that you need a bit of an explanation to tie it into this behaviour, but that's beside the point; the fact that there is a more precise name for a problem does not make that the only name for the problem.)

Unenforceable precommitments are still precommitments. If I promise never to cheat on my spouse again, despite a long history of cheating, I've made a commitment. It's not a very credible commitment, but it still belongs in the set labeled "commitments." If you define "commitment" to only count "credible commitment," you've essentially created a new word.

As with any debate over definitions, this can get circular rather quickly. My point is this: if you want people to use the word "signal" to mean something very specific, and to abandon the conventional use of the word, you need to provide a viable alternative definition, and you need to explain why it would be more productive to abandon the conventional use of the term. I do not think your definition is viable, because it necessarily involves an arbitrary cost threshold. Even if your definition were viable, I don't see how you've shown that there is a problem with the conventional use of the term. Yes, there are different types of signals that differ in important ways, but I don't see why this warrants completely changing how we use the term, rather than specifying weak vs. strong signals.

Comment by psychohistorian on You can't signal to rubes · 2013-01-01T10:08:12.936Z · LW · GW

You've basically come up with four criteria that describe the use of the word "signal" in a highly specific context - traits that exist for pure signalling purposes in evolution or game theory - and then decided, arbitrarily, that this is the one true meaning of "signal." I do not think you have provided adequate evidence or argument to back this claim up.

If everyone around me is a Republican and I am not, it might make sense that I would do things that would signal that I am a Republican, even if these are very cheap and have obvious positive returns. Your definition would not allow this - if it is cheap and has obvious positive returns, it is not "signaling" to you. What you're saying is that if I send a birthday card to a coworker I hate, then I am not "signaling" that I like that person because it's too cheap to send the card.

It may make sense to speak of weak or strong signals, or reliable or unreliable or misleading signals. But you've arbitrarily said that the word applies only when a certain arbitrary threshold is crossed (your 2 and 4).

Incidentally, your theory might actually work if 4 were eliminated and 2 read "the behaviour is more likely to occur if you possess a certain characteristic than if you do not." This would cover my birthday card example - it's cheap, but I'm more likely to do it if I like the person, so it does signal liking the person. But this change would also fix the counter-productive manager. She's doing things that she is more likely to do if she is decisive and in charge. Since she's being evaluated on those criteria, and not "good manager-ness" - which is not generally observable - it would make sense that she would choose to give those signals rather than not. But revising the theory appropriately seems to nullify most or all of your objections.

Comment by psychohistorian on Replaceability as a virtue · 2012-12-11T06:01:19.050Z · LW · GW

Tl;dr - rent seeking is bad, m'kay?

This was an interesting read, but it's rather narrowly focused. If Anne were a doctor, then the greater her skill at surgery, the less replaceable she would be. For any occupation, the more skilled a person, the less replaceable she becomes. Replaceability isn't really the relevant metric. Rather, Dr. Anne may have the option to teach other people her surgical skill, increasing her replaceability and reducing theirs. But teaching people a useful skill is obviously altruistic; this doesn't turn on replaceability. Likewise, doing a good job is more altruistic than doing a bad job (when there's no reward). Hence, complex database Anne is less altruistic than friendly database Anne because she's doing her job worse. The reason replaceability isn't generally discussed is that I don't think it really adds much, especially since one should, generally, act to become more skilled and thus less replaceable.

Comment by psychohistorian on What Is Signaling, Really? · 2012-07-10T19:20:02.008Z · LW · GW

A simple, interesting, complementary fact is that the cigarette manufacturers all saw profits skyrocket when laws started banning cigarette ads on TV. All of their products are largely interchangeable, so advertising doesn't tell you anything new about the product, it just builds brand loyalty. So the ban saves everyone costly signalling.

It's also extremely difficult for new cigarette manufacturers to break into the market. It's very hard to use a really clever ad campaign to increase your market share when you're not allowed to advertise on TV. Curiously, this may actually harm consumers, in that it prevents competition from lowering cigarette prices. I suppose this analogizes to the idea that if everyone were suddenly banned from displaying their wealth, it would be very difficult to woo Helen of Troy unless you had clearly shown your wealth prior to the ban. Thus, banning signalling can lead to losses, as the wealthiest suitor may be unable to woo Helen if he came to the game too late.

Comment by psychohistorian on Fallacies as weak Bayesian evidence · 2012-03-19T21:54:52.919Z · LW · GW

I don't think this is an adequate rendition of a circular argument. A circular argument is one that contains a conclusion that is identical to a premise; it should in principle be very easy to detect, provided your argument-evaluator is capable of comprehending language efficiently.

"God exists because the Bible says so, and the Bible is the word of God," is circular, because the Bible can't be the word of God unless God exists. This is not actually the argument you evaluate, however; the one you evaluate is, "The Bible exists and claims to be the word of God; therefore it is more likely that God exists." That argument is not circular (though it is not very strong).

The other argument is just... weirdly phrased. Cloud-trails are caused by things. Significant other evidence suggests those things also have certain properties. We call those things "electrons." There's nothing circular about that. You've just managed to phrase it in an odd manner that is structurally similar to a circular argument by ignoring the vast network of premises that underlies it.

Similarly, slippery slopes simply fail because they don't articulate probabilities or they assign far higher probabilities than are justified by the evidence. "Legalizing marijuana may herald the Apocalypse" is true for certain, extremely small values of "may." If you say it will do so, then your argument should fail because it simply lacks supporting evidence. I'm not sure there's as much action here as you say.

Comment by psychohistorian on Hearsay, Double Hearsay, and Bayesian Updates · 2012-02-18T21:12:41.254Z · LW · GW

If your point is that there are a lot of people locked up for violating laws that are basically stupid, you're absolutely right.

But that issue is largely irrelevant to the subject of the primary post, which is the accuracy of courts. If the government bans pot, the purpose of evidence law is to determine whether people are guilty of that crime with accuracy.

In other words, your criticism of the normative value of the American legal system is spot-on; we imprison far more people than we should and we have a lot of stupid statutes. But since this context is a discussion of the accuracy of evidentiary rules and court procedure, your criticism is off-topic.

Comment by psychohistorian on Hearsay, Double Hearsay, and Bayesian Updates · 2012-02-18T07:50:54.755Z · LW · GW

It is self-contradictory on its face. Compare these statements:

Literally 100% of people who ever lived have done multiple things which unfriendly legal system might treat as crimes, starting from simple ones like watching youtube videos uploaded without consent of their copyright owners, making mistakes on tax forms, reckless driving, defamation, hate speech, and going as far as the legal system wants to go.

US has extraordinarily high number of prisoners per capita. Looking at crime rates alone, it does not have extraordinarily high levels of serious crime per capita. There's no way most people in prisons can be anything but innocent (or "guilty" of minor and irrelevant "crimes" pretty much everybody is "guilty" of and persecuted on legal system's whims).

Unless you believe that young black men in US are the most criminal group in history of the world, most of them who are in prisons must be innocent by pure statistics.

The first statement provides a complete alternative explanation for the second two. It is entirely possible to believe that (A) there are far too many crimes, (B) police and prosecutors are biased against black people, and this fully explains why there are so many black people in prison without a single one of them needing to be innocent. Similarly, you say that, given crime rates and incarceration rates, some people must be innocent. Again, this is undermined by the fact that you say everyone is guilty of something. You just can't argue that everyone is a criminal, and then argue that high incarceration rates must necessarily be attributable to a high rate of convicting innocent persons. They may both be true, but you have no basis to infer the latter given the former.

To the extent that you disclaim this by saying that people are "guilty" of minor "crimes," your argument becomes largely circular, and is still not supported by evidence. What percentage of thieves/murderers/rapists/etc. are actually caught? How long are sentences? Combine a higher crime rate with a higher catch rate and longer sentences and you easily get a huge prison population without innocent people being convicted. I don't claim to know if this is the case, but you do, so you need to back it up.

There are good reasons to believe few trials that happen are extremely far from any kind of fairness, and they're stacked to give persecution an advantage. Just compare massive funding of police and prosecutors with puny funding of defense attorneys.

Comment by psychohistorian on Hearsay, Double Hearsay, and Bayesian Updates · 2012-02-17T20:17:11.403Z · LW · GW

You confuse different parts of the justice system, and your criticism is internally self-contradictory. If everyone who ever lived is guilty of something, then high or racially disparate incarceration rates need not catch innocent people. The fact that this occurs is more an indictment of the laws in place and the people who prosecute them, not the courts that adjudicate them.

Put another way, if breathing were a crime, then everyone convicted of it would in fact be guilty. If there were a lot of black men convicted of it, more so than other races, it would likely be due to different rates of prosecution, given how easy the charge would be to prove. This is bad, but it is a criticism of the wrong part of the government. It would be like blaming the mayor for the ineffectiveness of the postal service (the former is the city government, the latter is federal).

Edited to clarify: I am referring to the value of our adversarial method and our rules of evidence / constitutional protections - the mechanics of how a trial works. There is an entirely separate issue of prosecutorial discretion and unequal police enforcement and overly draconian laws, which certainly lead to the problems being discussed here. But the entire purpose of evidence law and courtroom proceedings generally is to determine if the person charged is in fact guilty. It is not to determine if the prosecution is charging the right people or if the laws are just or justly enforced. So this criticism seems misplaced.

Comment by psychohistorian on Hearsay, Double Hearsay, and Bayesian Updates · 2012-02-17T14:10:56.056Z · LW · GW

I am sorry I did not manage to comment on this earlier; I did not suspect it would get promoted.

In short, your treatment of hearsay, and how the legal system addresses it, is simply wrong. Most of what you talk about is actually about the Confrontation Clause. I don't know if this is due to an intentional simplification of your examples, but the cases you use just don't work that way.

The main case you talk about, Davis v. Washington, is not a case about hearsay; just look at the wikipedia summary. It is a case about the confrontation clause. This is a clause that says that those accused of crimes have the right to confront the witnesses against them; if someone talks to the police under certain circumstances, that testimony may not be entered. It does not matter how reliable it is. See Crawford v. Washington. The "indicia of reliability test" was abandoned in Crawford, because it was completely circular - it was compared to doing away with a jury trial because the defendant was obviously guilty.

More generally, there is almost never a balancing test in hearsay. Hearsay is a series of rules that are applied systematically. Out of court statements are considered unreliable principally because the declarant is not under oath; there is no particular reason to believe they were being truthful. There is a series of rules that allow certain statements in for this purpose. The idea behind these rules is that they indicate the evidence is reliable. However, they operate purely formalistically: if something someone said was a statement for the purpose of medical diagnosis, it is admissible hearsay, even if the circumstances strongly demonstrate they were lying. The jury is permitted to figure that out.

The basic idea behind hearsay, and indeed behind evidence law generally, is that certain statements are more likely to mislead the jury than to aid in finding the truth. However, your whole discussion of "indicia of reliability" seems to me to address an obsolete doctrine on the Confrontation Clause. Hearsay, in the vast majority of circumstances, does not involve any kind of balancing test or similar determination. It either meets a rule, or it doesn't (though there is a catch-all rule that gives the court some discretion - it can actually be somewhat problematic, because courts often get things wrong).

As to the issue of double hearsay - which I am used to hearing referred to as "hearsay within hearsay" - a per se rule against a certain number of levels doesn't make a lot of sense. In the example you use, the bottom level of hearsay is very likely inadmissible; that's enough to keep it out. But the circumstances under which one could admit multi-layer hearsay are pretty limited; it would have to have an applicable exception for every level. You don't discuss any inadequacies with the exceptions, so I just don't see why it follows that their repeated application should be unreliable.

Comment by psychohistorian on Is Sunk Cost Fallacy a Fallacy? · 2012-02-04T16:36:29.521Z · LW · GW

Content aside, you should generally avoid the first person as well as qualifiers, and you should definitely avoid combining the two, e.g. "I think it is interesting." Where some qualifiers are appropriate, you often phrase them too informally; e.g. "perhaps it is more like" would read much better as "It is possible that" or "a possible explanation is." Some first-person pronouns are acceptable, but they should really only be used when the only alternative is an awkward or passive sentence.

The beginning paragraph of each subsection should give the reader a clear idea of the ultimate point of that subsection, and you would do well to include a roadmap of everything you plan to cover at the beginning.

I don't know if this is the feedback you're searching for or if the writing style is purposeful, just my two cents.

Comment by psychohistorian on Risk aversion vs. concave utility function · 2012-02-03T05:19:00.630Z · LW · GW

I don't mean obvious in the, "Why didn't I think of that?" sense. I mean obvious in the trivial sense. When I say that it is circular, I don't mean simply that the conclusion follows logically from the premises. That is the ultimate virtue of an argument. What I mean is that the conclusion is one of the premises. The definition of a rational person is one who maximizes their expected utility. Therefore, someone who is risk-averse with respect to utility is irrational; our definition of rational guarantees that this be so.

I certainly see why the overall issue leads to confusion and why people don't see the problem instantly - the language is complex, and the concept of "utilons" folds a lot of concepts into itself so that it's easy to lose track of what it really means. I don't think this post really appreciates this issue, and it seems to me to be the deepest problem with this discussion. It reads like it is analyzing an actual problem, rather than unpacking an argument to show how it is circular, and I think the latter is the best description of the actual problem.

In other words, the article makes it easy to walk away without realizing that it is impossible for a rational person to be risk averse towards utility because it contradicts what we mean by "rational person." That seems like the key issue here to me.

Comment by psychohistorian on Risk aversion vs. concave utility function · 2012-02-03T03:54:46.283Z · LW · GW

Perhaps the point being made is less obvious to some others than it is to you. The same applies to many posts.

This is like a dismissive... compliment? I'm not sure how to feel!

Seriously, though, it doesn't undermine my point. This article ultimately gets to the same basic conclusion, but does it in a very roundabout way. By the definition of "utilons," converting outcomes into utilons eliminates risk-aversion. This extensive discussion ultimately makes the point that it's irrational to be utilon risk averse, but it doesn't really hit the bigger point that utilon risk aversion is fundamentally non-sensical. The fact that people don't realize that there's circular reasoning going on is all the more reason to point out that it is happening.

Comment by psychohistorian on Risk aversion vs. concave utility function · 2012-02-01T06:15:59.375Z · LW · GW

Your claim that a risk-averse agent cannot be rational is trivially true because it is purely circular.

You've defined a risk-averse agent as someone who does not maximize their expected utilons. The meaning of "rational" around these parts is, "maximizes expected utilons." The fact that you took a circuitous route to make this point does not change the fact that it is trivial.

I'll break down that point in case it's non-obvious. Utilons do not exist in the real world - there is no method of measuring utilons. Rather, they are a theoretical construct you are employing. You've defined a rational agent as the one who maximizes the amount of utilons he acquires. You've specified a function as to how he calculates these, but the specifics of that function are immaterial. You've then shown that someone who does not rationally maximize these utilons is not a rational utilon maximizer.

Risk aversion with respect to paper clips or dollars is an empirical claim about the world. Risk aversion with respect to utilons is a claim about preference with respect to a theoretical construct that is defined by those preferences. It is not meaningful to discuss it, because the answer follows logically from the definition you have chosen.

Comment by Psychohistorian on [deleted post] 2011-11-30T20:47:50.981Z

"They are extremely rich and either do not want a prenup or offer a desirable prenup package." In that case you either get a great marriage in the long term or you get a truckload of money in the somewhat shorter term.

This is not actually how it works if you get married without a prenup. You only get income made after the marriage; if they have lots of investments and don't work, you probably get nothing. If they have a high salary, you may get a lot. If, that is, you're in a community property state. If you're not, you may not get a dime.

My phrasing was admittedly imprecise because my interest was "Will we have a stable marriage?" not "Will this marriage materially benefit me?" Obviously, "Someone credibly threatens to murder tens of thousands of people if you do not get married," might also be a great reason, but I think from the context it's obvious I wasn't discounting such creative issues. Still, your "obvious" is, as a legal matter, not correct, and therefore hopefully not obvious.

Comment by psychohistorian on Uncertainty · 2011-11-30T20:43:13.662Z · LW · GW

While the probabilistic reasoning employed in the card question is correct and fits in with your overall point, it's rather labor-intensive to actually think through.

In order to get two red cards, you need to pick the right pair of cards. Only one pair will do. There are six ways to pick a pair of cards out of a group of 4 (when, as here, order doesn't matter). Therefore, the odds are 1/6, as one out of the six possible pairs you'll pick will be the correct pair.

Similarly, we know the weatherperson correctly predicts 12.5% of days that will be rainy. We know that 20% of days will actually be rainy. That gives us "12.5/20 = 5/8" pretty quickly. Grinding our way through all the P(X | ~X) representation makes a simple and intuitive calculation look really intimidating.
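Both calculations can be checked by brute force in a few lines. This is a sketch assuming the card setup implied above (four cards, two red and two black, draw two without regard to order); the exact rates quoted (12.5% correctly-predicted rainy days, 20% rainy days) are taken from the paragraph above.

```python
from itertools import combinations
from fractions import Fraction

# Four cards, two red and two black; draw an unordered pair.
cards = ["R1", "R2", "B1", "B2"]
pairs = list(combinations(cards, 2))  # all 6 unordered pairs
both_red = [p for p in pairs if all(c[0] == "R" for c in p)]
print(Fraction(len(both_red), len(pairs)))  # -> 1/6

# Weatherperson: 12.5% of all days are rainy days correctly
# predicted as rainy; 20% of all days are rainy.
# P(predicted rain | rain) = 0.125 / 0.20
print(Fraction(125, 1000) / Fraction(20, 100))  # -> 5/8
```

The enumeration makes the "one pair out of six" reasoning concrete without any conditional-probability notation at all.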

I'm not entirely sure of your purpose in this sequence, but it seems to be to improve people's probabilistic reasoning. Explaining probabilities through this long and detailed method seems guaranteed to fail. People who are perfectly comfortable with such complex explanations generally already get their application. People who are not so comfortable throw up their hands and stick with their gut. I suspect that a large part of the explanation of mathematical illiteracy is that people aren't actually taught how to apply mathematics in any practical sense; they're given a logically rigorous and formal proof in unnecessary detail which is too complex to use in informal reasoning.

Comment by Psychohistorian on [deleted post] 2011-11-20T02:32:59.134Z

However, four months into the relationship, before much of this had happened, I proposed to her. I was always big on commitments. I felt that if you were dating someone, it was to eventually get married, assuming they were right for you.

I think a pretty good heuristic would be to never marry someone you have known less than a year. The only exceptions are if you've been married before or have been dating a rather long time and thus have a clear sense of what you're looking for. Of course, plenty of people don't know this.

Comment by psychohistorian on Rational Romantic Relationships, Part 1: Relationship Styles and Attraction Basics · 2011-11-03T23:17:38.149Z · LW · GW

The whole goth guy/ alternative look point misses a significant part of the appeal. People (particularly men) who prominently display membership in a subculture often have a strong sense of self. This kind of self-confidence is generally attractive to women, so those who aren't immediately put off by his group identity are likely attracted to that confidence and the charisma that goes with it.

Practically, this means that alternative styles only tend to work when they're genuine and you're comfortable with them. Someone who feels most natural in more conservative clothing may actually hurt themselves by trying niche appeal, because they need to belong to that niche.

Comment by psychohistorian on Amanda Knox: post mortem · 2011-10-31T05:16:54.553Z · LW · GW

Since I was randomly chosen to comment on this, I'll throw in my two cents. I haven't thought about it too much, and my first instinct was to trust whatever value judgements I had made at the time, which I thought were something like 5-5-95, but were actually 1-1-99. Since me-at-the-time was much more familiar with the case than me-right-now, I'd still probably defer to his judgement; if anything, her exoneration and other evidence should move those numbers slightly closer to the extremes.

Comment by psychohistorian on The Bias You Didn't Expect · 2011-10-30T20:34:38.456Z · LW · GW

The Israeli parole result was for a short single high-stakes decision; most hearings are not like that, I think.

... is the exact response I wanted to make.

Most legal choices are either incredibly short term - like an objection that a judge must often respond to immediately - or medium to long term - like a motion that a judge will ask the parties to provide briefs (written legal arguments) on. Parole hearings like this are one of a few legal decisions where there really is a quick decision made - another area would be bail hearings, but there the outcome isn't binary, it's a dollar amount. There isn't much money to be made in gaming either.

Comment by psychohistorian on The Bias You Didn't Expect · 2011-10-17T03:05:41.370Z · LW · GW

Very few judicial decisions are actually made entirely during a hearing; despite what you see on television, most major issues are going to be briefed and the judge (or his staff) will have already read the briefs and come to a not-too-tentative decision about how they are going to rule. For issues that are so small as not to be briefed, lawyers have pretty much no control over when these will be heard by the judge, and the stakes tend to be relatively small anyways. Moreover, where there are two parties involved, it seems impossible to predict which direction this effect would take - would the judge be wiser, less wise, less agreeable, lazier? Even if someone were paying careful attention to the data, it seems unlikely they could discern a clear trend, and no one's paying such attention because (A) no one really has an incentive to and (B) the payoff is likely very close to 0 anyways.

Comment by psychohistorian on On the Openness personality trait & 'rationality' · 2011-10-17T02:57:38.271Z · LW · GW

This is interesting, but as has been pointed out, it suffers from some extreme reliance on a rather tenuous analogy between infectious diseases and infectious memes. I think it hard to overstate how dubious and dishonest (either recklessly or negligently) this claim is. Diseases and memes are just not even close to the same thing in an evolutionary sense. There's no reason to think that mechanisms that have evolved to prevent disease infection would have any effect on meme promulgation. Even if a meme spreads "like malaria," that doesn't mean that if you have one-half of the sickle cell gene, you'll be immune to it. As other commenters have pointed out, the followup to this only gets worse - the kids who signal openness tend to be the kids who are unpopular and thus have no actual cost of signaling such.

But worse, the underlying evolutionary theory behind this seems pretty dubious. Yes, there's a correlation. That's only modest evidence. There doesn't appear to be a clear connection between the openness psychological trait and interacting with outside tribes thousands of years ago, unless such evidence simply wasn't quoted. Also, the effects of infection would tend to operate on a larger scale than the individual; I don't know if this theory would require group selection, but it wouldn't surprise me if it does to some extent. I'm not saying it's wrong, but it seems extremely carefully tailored and post hoc, and so should be at least suspicious. Piling on the dubious analogy makes this whole point pretty poorly supported.

Comment by psychohistorian on Pain · 2011-09-12T19:56:07.416Z · LW · GW

An interesting article, but completely orthogonal to my point. My point is that it isn't entirely correct to put the two types of pain on the same scale, because they're meaningfully different phenomena. That study... asks people to assume they're on the same scale and rate them accordingly.

Incidentally, it also asks people to remember pain, not to experience it. It's at least my experience that the memory of physical pain is going to be a lot different than the memory of emotional pain. Physical pain (usually) heals. Emotional pain, in many senses, does not. Emotional pain is a purely mental experience - if someone credibly told you your family died when they didn't, it'd feel just the same until you figured out they were wrong. There's nothing analogous to breaking your leg - you can't really re-create it without re-breaking your leg.

Nothing in this post endorses dualism in any way, shape, or form, lest anyone misconstrue it in that manner.

Comment by psychohistorian on Seeing Red: Dissolving Mary's Room and Qualia · 2011-06-13T05:10:22.696Z · LW · GW

Let's be clear here - I'm advocating no such thing. My position is firmly reductionist. Also, we're talking about Mary, not an AI. That counterexample is completely immaterial and is basically shifting the goalposts, at least as I understand it.

Any experience is, basically, a firing of neurons. It's not something that "emerges" from the firing of neurons; it is the firing of neurons, followed by the firing of other neurons that record the experience in one's memory. What it feels like to be a bat is a fact about a bat brain. You neither have a bat brain nor have the capacity to simulate one; therefore, you cannot know what it feels like to be a bat. Mary has never had her red-seeing neurons fired; therefore, she does not know what red looks like.

If Mary were an advanced AI, she could reason as follows: "I understand the physics of red light. And I fully understand my visual apparatus. And I know that red would stimulate my visual sensors by activating neurons 2,839,834,843 and 12,345. But I'm an AI, so I can just fire those neurons on my own. Aha! That's what red looks like!" Mary obviously has no such capacity. Even if she knows everything about the visual system and the physics of red light, even if she knows precisely which neurons control seeing red, she cannot fire them manually. Neither can she modify her memory neurons to reflect an experience she has not had. Knowing what red looks like is a fact about Mary's brain, and she cannot make her brain work that way without actually seeing red or having an electrode stimulate specific neurons. She's only human.

Of course, she could rig some apparatus to her brain that would fire them for her. If we give her that option, it follows that knowing enough about red would in fact allow her to understand what red looks like without ever seeing it.

Comment by psychohistorian on How not to move the goalposts · 2011-06-13T04:20:32.667Z · LW · GW

I think that the author here is bending over backwards and trying not to offend people. They weren't exactly successful at this, but I think that people should be charitable in interpreting this. They're new and apparently ended up over-qualifying some statements in an effort to be more agreeable.

The underlying point is actually one of the best I have read here in some time; if this receives few upvotes, I may write something closely related to this topic, if doing so is not inappropriate. There are a lot of rather significant political issues that would have been far better resolved by pointing out, "The moral framework you are applying those facts to is abhorrent," rather than, "Those facts are wrong." This is precisely because focusing on the latter makes people not want to believe the truth. Rejecting an argument on all proper grounds is a useful practice; this is particularly true when it relies on an appealing but deeply flawed moral premise.

Comment by psychohistorian on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-28T04:55:25.529Z · LW · GW

This is somewhat circular. There isn't anyone who knows everything about the visual system. Thus, we're hypothesizing that knowing everything about the visual system is insufficient to understand what red looks like... in order to prove that knowing everything about the visual system is insufficient to understand what red looks like.

Even given this, the obvious solution seems to be that "What red looks like" is a fact about Mary's brain. She needn't have seen red light to see red; properly stimulating some neurons would result in the same effect. That the experience is itself a data point that cannot be explained through other means seems obvious. One could not experience a taste by reading about it.

Maybe the best analogy is to data translation. You can have a DVD. You could memorize (let's pretend) every zero and every one in that DVD. But if you don't have a DVD player, you can never watch it. The human brain does not appear to be able to translate zeroes and ones into a visual experience. Similarly, people can't know what sex feels like for the opposite sex; you simply don't have the equipment.

DVD players do not require magic to work; why should the brain?

Comment by psychohistorian on Econ/Game theory question · 2011-05-15T04:39:27.415Z · LW · GW

As a matter of psychology, the two are neighbors. They probably work it out amiably, and A probably doesn't end up charging much because it doesn't cost him anything, and because B will get really, really angry if A insists on some high price. Also, practically, if B is so inclined, he can punish A by litigating the issue - it'll cost A money and is just an unpleasant experience. It'll cost B the same, but we know that real people are willing to pay money to punish those they find uncooperative.

If these were two competing businesses, or if this involved business more generally, I wouldn't be surprised if A did try to take advantage of his position. But the fact is that humans are not homo economicus, and will generally not bend other people over a barrel in such situations. If the costs to A were higher, it'd be a very different story.

Or perhaps I have an overly optimistic view of average human behaviour.

Comment by psychohistorian on Econ/Game theory question · 2011-05-13T18:06:04.563Z · LW · GW

If A has something that B values at that price, and that can't be gotten anywhere else, he will charge what the market will bear; and the market will bear 500k.


If B wants to buy something that A obtained at a certain cost, and that can't be sold anywhere else, he will pay what the market will bear; and the market will bear $6.

If A refuses to pay $500k, B gets nothing. If there were multiple buyers and A had the highest reservation cost, your answer would work and the problem would be boring.

But as the reversal shows, if B offers $6, A would take it, under similar reasoning. That's what it means to say it costs A $5. No one is going to make a higher competing offer, because no one else can even legally buy the product (and the product is a legal construct, so that means no one else can buy the product, period). It would make as much sense for B to pay $499,999, as it would for A to accept $6.

A has other sources of money

This is immaterial. A has no other use for the easement - he either sells it to B (losing $5), or it doesn't exist ($0). Conversely, B could simply not build a house on her property ($0). The fact that each has other things they can do with their life is immaterial to the transaction at issue, because that transaction has no alternatives - either A and B come to an agreement, or they both get nothing.
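The stuck-in-the-middle structure described here is a textbook bilateral monopoly, and one standard (if idealized) way to pick a point in the $5-to-$500k range is the symmetric Nash bargaining solution, which splits the surplus evenly over the disagreement point. A minimal sketch, treating the dollar figures from the thread as assumptions:

```python
# Symmetric Nash bargaining over the easement price.
# Assumed numbers from the thread: granting the easement costs the
# seller $5, and it is worth $500,000 to the buyer. On disagreement,
# both parties get nothing (payoffs of 0).

def nash_bargain_price(seller_cost: float, buyer_value: float) -> float:
    """Price maximizing (seller gain) * (buyer gain) over the (0, 0)
    disagreement point; with linear utilities this splits the surplus
    evenly between the two parties."""
    surplus = buyer_value - seller_cost
    return seller_cost + surplus / 2

price = nash_bargain_price(5, 500_000)
print(price)  # 250002.5 -> each side nets $249,997.50
```

This of course assumes equal bargaining power; the whole point of the question is that nothing in the setup forces the parties toward that focal point.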

Comment by psychohistorian on Econ/Game theory question · 2011-05-12T13:28:26.020Z · LW · GW

The actual solution to this in the real world, 99 times out of 100, is that B just says OK, or A insists on giving him $100 to cover the damages, or something generally amiable. The reason I asked this question is because I'm thinking about the efficiencies of injunctions (which result in bargaining) versus damage awards (which generally don't). So the only characters I care about are the ones who aren't neighborly.

Indeed, having confirmed my suspicions that this problem is insoluble, it favors a damage award in this context. B's actions are almost pure holdup. If all he were entitled to were damages and not injunctive relief, he wouldn't have nearly the same capacity for holdup, and the outcome looks more like the neighborly one (except with more bad will, perhaps).

In other words, I'm assuming that the agents are selfish and somewhat inhuman - irrational in a big picture sense - because occasionally these disputes do happen. There's a MAJOR case where a landlord sued over having to install a 1 cubic foot cable-box that increased their property value, and there's a case of a guy suing to stop someone from using an easement to get to a contiguous property (i.e. he had a right to cross A to get to B, but he was crossing A to get to B and then continuing on to C, and that was impermissible and went to court).

Comment by psychohistorian on "I know I'm biased, but..." · 2011-05-12T13:24:53.006Z · LW · GW

It's not undermining your own credibility, since "I may be wrong" is generally a truism. It's more a display of humility, which can be very helpful if (A) you're a lot smarter than the other person and they basically know it, or (B) the other person outranks you, and being directly contradicted by a subordinate would be embarrassing.

As an example, I'll often use this preface (or, "I'm confused; it was my understanding that not-X.") when asking a question in a law school class, where I think the professor may have misstated the law. Usually, I think they actually have - though I'm not always right - and this works a helluva lot better than saying, "But Professor, the law is not-X."

Comment by psychohistorian on "I know I'm biased, but..." · 2011-05-12T02:53:41.044Z · LW · GW

I actually use "I may be biased" or "I may be wrong" either humorously, as a means of softening a claim, or because I know I'm lacking information, have not thought about the matter extensively, or am less expert than the other person ("may be wrong" in all those cases).

It's funny when it's obvious, like if you're describing the talents of your child or significant other.

It's useful when you know you're right but you want the other person to be able to agree with you, rather than to force them. It's particularly useful when addressing someone of higher status who has made an error.

Comment by psychohistorian on Econ/Game theory question · 2011-05-12T02:48:18.565Z · LW · GW

As has been pointed out, it is not, because there is continuous interaction, bargaining, and perhaps the potential for pre-commitments, though as I mentioned, those could be risky.

Comment by psychohistorian on Econ/Game theory question · 2011-05-12T02:45:54.052Z · LW · GW

B could get an injunction prohibiting the crossing of his land. Easements traditionally give rise to injunctive relief. That would make A criminally liable if he or his agents crossed B's land - it wouldn't be too hard for B to prevent any construction company from working there. That the outcome of litigation is certain is stipulated to. That's actually why this problem is interesting - there is some dispute as to whether injunctions or damages are better solutions to these problems.

If you want to make it cleaner, I suppose you could say that B has put up a fence AND obtained a declaratory injunction already, and they're trying to bargain to have B invalidate the injunction. But I thought the original was clean enough.

Comment by psychohistorian on Econ/Game theory question · 2011-05-12T02:41:21.758Z · LW · GW

...but when he signs this contract, he may find out that B signed a contract refusing to accept anything less than $450,000 for the land, or else pay some large sum to D. If there's any lag in communication between the two of them, this is an extremely risky strategy.

Comment by psychohistorian on Econ/Game theory question · 2011-05-11T20:51:03.201Z · LW · GW

The problem is that if A is perfectly rational, in a sense, he can't make a credible take it or leave it offer. If he offers $10,000, B knows he would be willing to pay $11,000, so he rejects. On the other hand, A knows B would take less than $10,000, so why offer that much in the first place? That's why I suspect it's just intractable.

Do any of the elaborate decision theories popular around these parts solve this problem?
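For what it's worth, the classical (non-exotic) answer to this regress is Rubinstein's alternating-offers model: once each round of haggling costs something via time discounting, the infinite "he knows that I know" chain collapses to a unique subgame-perfect split. A sketch, with the discount factors as assumed illustrative values:

```python
# Rubinstein alternating-offers bargaining over a surplus normalized to 1.
# delta_a and delta_b are per-round discount factors (assumed values
# below). When A proposes first, A's equilibrium share is
#     (1 - delta_b) / (1 - delta_a * delta_b).

def rubinstein_shares(delta_a: float, delta_b: float) -> tuple[float, float]:
    """Unique subgame-perfect split of a unit surplus, A proposing first."""
    share_a = (1 - delta_b) / (1 - delta_a * delta_b)
    return share_a, 1 - share_a

# Equally patient players; as patience approaches 1, the split
# approaches 50/50, with a small first-mover advantage before then.
a, b = rubinstein_shares(0.9, 0.9)
print(round(a, 4), round(b, 4))  # 0.5263 0.4737
```

This doesn't rescue A's take-it-or-leave-it offer; it just relocates the problem into how patient, or how committed, each side can credibly claim to be.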

Comment by psychohistorian on Closet survey #1 · 2011-05-08T00:00:42.103Z · LW · GW

Given that substantial variance may exist between individuals, isn't birth (or within a day of birth) a rather efficient bright line? I fail to see the gain to permitting more widespread infanticide, even taking your argument as generally correct.

Comment by psychohistorian on Your Evolved Intuitions · 2011-05-06T19:40:03.115Z · LW · GW

I get the underlying theory just fine. It's a neat fictional example, but (and I'm not familiar with the underlying fiction) it would probably be extremely fitness-enhancing. A male light would probably be incredibly high status and have little difficulty producing offspring. If it were purely genetically determined, it seems like it'd be pretty hard to sustain - no one would want it for their own children. If it were recessive, it might work out better, but there still seems a substantial problem of free-loading.

Thus, this evolutionary explanation for homosexuality partly undermines itself: it's genetic, but it's not quite genetic and there's other stuff going on that determines whether or not it gets activated. So it's either genetics (actively-selected) + environmental factors or genetics (random noise) + environmental factors. That's not a very clear case for kin selection, to say the least.

My claim isn't that it couldn't possibly be related to kin selection. It's that, like many ev-psych claims, the evidence for "or something else is going on" is far too strong to make a definitive claim, particularly because the outcome is the exact opposite of what you'd expect from simpler evolutionary theory. Otherwise, you risk combining two theories in a way that can explain far too many outcomes. Any individual who fails to reproduce can divert resources to his siblings. You could just as easily say many negative traits that don't show up with absolute consistency are also advantageous. This seems like a stretch.

In retrospect I will admit that this example detracted from my overall point and was poorly chosen.

Comment by psychohistorian on Your Evolved Intuitions · 2011-05-06T18:21:38.046Z · LW · GW

Touche. It is possible to explain almost anything ex post. Moreover, it's really unclear (and what I've read does not address) how a gene that causes someone to not reproduce gets passed on. Assuming that just you have the gene and your relatives don't, it's beneficial. But if you all have the gene, that's a very different story. A gene that causes me to sacrifice myself to save my brothers is conditional - it doesn't matter unless the need arises. A gene that causes me to prefer non-procreative sex doesn't seem conditional in the same way - it simply prevents me and anyone who has it from reproducing.

In short, while one can rationalize the behaviour as advantageous ex post, it's rather hard to actually put that together cogently, and it's a very long way from getting rid of, "largely accidental" as an alternative explanation.

I will admit bias on this issue, having dated a woman with a lesbian identical twin.

Comment by psychohistorian on Experiment Idea Thread - Spring 2011 · 2011-05-06T18:12:27.958Z · LW · GW

Here's an example. I will review the comments on it and use them to develop a sort of standard structure, which I will then incorporate into the top-level post for future use. So please comment both on the idea and my expression of it, and make suggestions for what basic info should be included, either in response to this or in response to the primary article.

Area: Evolutionary Psychology

Topic: Genetic versus "cultural" influence on mate attraction.

Specific problem: There's a belief that various aspects of physical attraction are genetically determined. It is difficult to separate genetic effects from cultural effects. This is an attempt to try to control for that, to see how (in)substantial the effects of culture are. The underlying idea is that, while different cultures also have different genetic makeups, different times in the same geographic area may see different cultures with much more related genetic makeups.

The actual experiment: Sample a group of people (possibly just one sex per experiment) and obtain their views on physical attractiveness. Show them images of people, or drawings, or ask questions about what physical qualities they would find desirable in a mate. (e.g. An attractive member of the opposite sex would be taller than me - strongly agree, agree, disagree, strongly disagree). Then, and this is the expensive part, use the exact same survey on people's offspring at about the same age. It may be ideal to compare people with aunts and uncles rather than parents, as parents are likely to have a more direct non-genetic effect on preferences.

This is a rather general description, but it should be perfectly adequate for someone in the relevant field to design a very effective and insightful experiment. It could even easily be incorporated as part of a larger experiment tracking qualities between generations.

Comment by psychohistorian on Your Evolved Intuitions · 2011-05-06T17:53:52.748Z · LW · GW

Evolutionary psychology also clearly explains why a significant portion of the human population will be homosexual.

Oh, wait. It does the exact opposite of that. Hmmm. [ETA: I admit kin selection provides a basis for saying that homosexuality might kinda not totally be evidence against the genetic origin of certain traits. It's not the kind of thing that anyone would predict if they hadn't already seen it, and my main point is about ex post rationalization.]

I certainly agree with the general conclusion that natural selection and our specific history as social creatures has probably shaped our thinking in some ways. The central problem here is one of epistemology. For the claim, "X causes Y," there are two good ways to prove it - show it experimentally and reproducibly, or show a logical connection and rule out all other causes. The latter is harder and inferior, but it's all we've got. Let me apply this to some examples and the problems here should be evident.

Human skin is capable of enduring water heated to about 110F for an extended period of time without sustaining damage. Does this mean that our ancestral environment had a lot of hot pools of water that we needed to survive? Or is it perhaps mostly accidental? If there were evidence that the heat-resistant properties of skin were expensive to maintain, that'd go a long way to ruling out coincidence. Absent that evidence, the mere potential explanation of hot-tubbing ancestors is useless.

Men prefer women with a BMI of 18-23 or so. This appears true in Western society. It isn't true in other societies. If you had a purely genetic account of both tendencies, or if you could rule out cultural factors definitively, you can make a convincing ev-psych argument for it. But merely knowing that a preference exists is not strong evidence it is genetic, when alternative explanations are obvious. The correct answer is, "We don't know yet; we'd need to experiment."

By contrast, a preference for facial symmetry (and to a much lesser extent waist-hip ratio) is the kind of thing that seems well-supported as being genetic. People aren't too consciously aware of it. There isn't an explicit cultural value for it like there is for thinness. It has been correlated directly to health. Maybe it's random, but given that it's expensive to dislike people, and there's an obvious function and a lack of both conscious awareness and alternative explanations, it's pretty fair to say it's probably genetic.

In short, an overall useful point is probably being seriously muddied in a detour through ev-psych. Where the actual issue is the existence of a trait, the precise cause of it is less material. It doesn't much matter if confirmation bias was useful in the ancestral environment, or if it's an accidental tag-along with something that was. The important thing is fixing it, or at least being aware of it.

Comment by psychohistorian on Hollow Adjectives · 2011-05-06T03:28:17.208Z · LW · GW

The president makes many decisions that affect the economy.

This may be a good instance of the exact kind of thing I am objecting to. Or it may indicate that I need to refine the concept. "I make many decisions that affect the economy," is also a true statement. "In the absence of any other information" is a hole you could pilot an aircraft carrier through. This does nothing to specify what action would meet the criteria of "doing enough to fix the economy," and thus doesn't really seem at odds with my example.

The president no doubt does many things that affect health care and national health. Does it follow we should re-elect him if cancer rates are on the decline?

Comment by psychohistorian on Hollow Adjectives · 2011-05-06T03:22:01.815Z · LW · GW

Hmmm, insidious typo. It should read "seems sound if you don't actually think about it." I meant to write, "A, not B," and apparently got distracted and wrote, "A, not A." And the contraction instead of the possessive form of "who" was decidedly a typo - like I said, this is a draft. Thanks!

Comment by psychohistorian on Being Wrong about Your Own Subjective Experience · 2011-05-05T20:19:08.295Z · LW · GW

It's a bit more than a "logical possibility." Consider these two options:

  1. We actually dream in color, but we experience it as black and white, and remember it and report it correctly.

  2. We actually dream in color, but we don't remember it very well (particularly old dreams, and particularly because the memory centers of the brain do not function properly during dreaming), so our answers to questions about old dreams are inaccurate, possibly biased by television or our most recent memory or some other factor we're unaware of.

It's unclear to me that your position is logically possible, insofar as it is represented by 1. I don't know what it means for a subjective experience to be something different from how it is experienced. I know exactly how things can be misremembered, I do it all the time. So it's 2, which is not merely logically possible, but actually relies on a common and pretty unremarkable phenomenon, versus 1, which actually may not be logically possible because it doesn't seem to actually mean anything.

As for your second point - didn't say immediate, but I think you need to be a bit more specific than "certain grounding in a foundationalist epistemology." I can't disagree with you because I'm not entirely sure what you're saying. If you can point to a specific epistemological problem that arises from any of the problems you've pointed out, well, that'd make this discussion a whole lot more useful.

Comment by psychohistorian on Rationality Quotes: May 2011 · 2011-05-05T15:29:39.119Z · LW · GW

True. My entire point is that I'm curious as to which is going on here. I suspect that people down-voting this one are explaining their actions by saying it's too "political," whereas they are not applying the exact same formula to the other one. This indicates that, "It's too political" actually just means, "It's politics I disagree with."

Of course, I admit it's possible that everyone who downvoted for political reasons downvoted both, and the other was just more popular. I don't think that's likely, which is why I asked what people are doing. Could admittedly have phrased it better.

Comment by psychohistorian on Hollow Adjectives · 2011-05-05T15:24:56.461Z · LW · GW

Perhaps I shall spell this out better, but the impossibility is linguistic. A cleaner example I mention is:

Where "bachelor" means "man who is not married," God could not create a married bachelor. A married bachelor is not a thing. If you break down the definitions of circle and square, you'll see that a "square circle" is not a thing. A heavy stone that has no mass (or a heavy stone that is not heavy), or a circle that is not circular, or any other number of direct contradictions seem impossible, not as limits on power, but mostly as limits on language. That's the point I'm getting at.

Comment by psychohistorian on Rationality Quotes: May 2011 · 2011-05-05T02:56:12.543Z · LW · GW

I find it very interesting that this quote, which is also political, does not appear to have been heavily downvoted. It's doubtlessly much less contentious among the particular demographics of this site, but it's probably more politically antagonistic among the population in general - I don't know, but I suspect pacifists are at least as common as libertarians, and this is far more antagonistic towards them.

So why upvote that quote but downvote this one?

Comment by psychohistorian on Rationality Quotes: May 2011 · 2011-05-05T02:54:34.812Z · LW · GW

Unfortunately, my impression here is based on general experience and observation; there isn't a specific document that contains the libertarian view.

But in general, a lot of people who describe themselves either as libertarian, conservative, or pro-small government are opposed to welfare with near-religious fervor, but are likely unaware of the issue of occupational licensing and (in some cases) basically indifferent to charter schools or education related issues. It's just the interesting observation that even though there are numerous types of bad government interference, it's one specific one that generates ire. This suggests more of a political objection (I don't like those people!) than an ideological one (this action is inconsistent with a belief system I hold).

Ideologically, libertarians should oppose all of these things, probably in proportion to the inefficiencies they represent. As that's often not the case, it indicates irrationality.

Comment by psychohistorian on Rationality Quotes: May 2011 · 2011-05-04T19:15:58.418Z · LW · GW

I'll defend this. I think it is closely related to rationality, and I find it ironic that the "Politics is the mind-killer" is such a popular response to an unpopular quote - it makes that point.

A rather basic fallacy is: A, B, and C lead to D. We must stop D. Therefore, we must stop A. The error, of course, is that without further premises, you could equally well stop B or C. Stopping A is merely sufficient, not necessary.

Libertarianism is usually more of an ideology than a politics (just as liberal and conservative are ideologies, to Democratic and Republican politics). This quote shows how it tends to be shaped into a politics. When there are clearly many things to be done, it is in fact bizarre that people focus heavily on one of them, particularly given the above structure.

People are very willing to believe that the market is unfree in ways that unfairly benefit others. People are not nearly as willing or interested when the market is unfree in ways that harm others or benefit themselves. I can see why people are concerned with this being excessively political, but it does seem accurate. Of course, there may be additional factors or explanations that the speaker was not crediting, but I'm not really aware of any.

Inconsistently applying an ideology is kind of the essence of politics being the mind killer, and this seems to be a good point about that.

Comment by psychohistorian on Being Wrong about Your Own Subjective Experience · 2011-04-27T19:07:42.517Z · LW · GW

True. But if in 1930 80% of people reported eating chicken at their last meal, and in 1990 80% said they had pork at their last meal, we would not assume that there was an error in their first-person experience without significant additional evidence. That's precisely what is missing here.