Posts

Learn a foreign language to reduce bias? 2012-04-22T15:37:09.300Z
Suggestions needed: good articles for a meetup discussion 2012-04-14T22:31:56.604Z
Experience with Lumosity? 2012-03-18T22:51:47.151Z

Comments

Comment by AShepard on Crossing the History-Lessons Threshold · 2014-10-18T17:02:24.803Z · LW · GW

Seconded. Or, more generally, a framework for how to put together a good reading list would be extremely helpful.

Comment by AShepard on Knightian Uncertainty: Bayesian Agents and the MMEU rule · 2014-08-04T15:50:08.287Z · LW · GW

I think the analysis in this post (and the others in the sequence) has all been spot on, but I don't know that it is actually all that useful. I'll try to explain why.

This is how I would steel man Sir Percy's decision process (stipulating that Sir Percy himself might not agree):

Most bets are offered because the person offering expects to make a profit. And frequently, they are willing to exploit information that only they have, so they can offer bets that will seem reasonable to me but which are actually unfavorable.

When I am offered a bet where there is some important unknown factor (e.g. which way the coin is weighted, or which urn I am drawing from), I am highly suspicious that the person offering the bet knows something that I don't, even if I don't know where they got their information. Therefore, I will be very reluctant to take such bets.

When faced with this kind of bet, a perfect Bayesian would calculate p(bet is secretly unfair | ambiguous bet is offered) and use that as an input into their expected utility calculations. In almost every situation one might come across, that probability is going to be quite high. Therefore, the general intuition of "don't mess with ambiguous bets - the other guy probably knows something you don't" is a pretty good one.

Of course you can construct thought experiments where p(bet is secretly unfair) is actually 0 and the intuition breaks down. But those situations are very unlikely to come up in reality (unless there are actually a lot of bizarrely generous bookies out there, in which case I should stop typing this and go find them before they run out of money). So while it is technically true that a perfect Bayesian would actually calculate p(bet is secretly unfair | ambiguous bet was offered) in every situation with an ambiguous bet, it seems like a very reasonable shortcut to just assume that probability is high in every situation and save one's cognitive resources for higher impact calculations.
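To make that shortcut concrete, here's a rough sketch in Python (the prior and payoffs are invented for illustration, not taken from the post or the sequence):

```python
# Illustrative numbers only: the point is that even a modest prior that the
# bet is rigged flips the expected value of an otherwise fair-looking
# ambiguous bet.

def expected_value(p_rigged, ev_if_fair, ev_if_rigged):
    """Expected value of taking the bet, mixing the fair and rigged cases."""
    return (1 - p_rigged) * ev_if_fair + p_rigged * ev_if_rigged

# A bet that looks like an even-money coin flip for $10 (EV = 0 if genuinely
# fair), but loses about $8 on average if the offerer has stacked it.
print(expected_value(p_rigged=0.0, ev_if_fair=0.0, ev_if_rigged=-8.0))  # 0.0: fine to take
print(expected_value(p_rigged=0.7, ev_if_fair=0.0, ev_if_rigged=-8.0))  # -5.6: walk away
```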

Comment by AShepard on Rationality Quotes July 2013 · 2013-07-02T00:02:16.678Z · LW · GW

We readily inquire, 'Does he know Greek or Latin?' 'Can he write poetry and prose?' But what matters most is what we put last: 'Has he become better and wiser?' We ought to find out not merely who understands most but who understands best. We work merely to fill the memory, leaving the understanding and the sense of right and wrong empty. Just as birds sometimes go in search of grain, carrying it in their beaks without tasting to stuff it down the beaks of their young, so too do our schoolmasters go foraging for learning in their books and merely lodge it on the tip of their lips, only to spew it out and scatter it on the wind.

Michel de Montaigne, Essays, "On schoolmasters' learning"

Comment by AShepard on Rationality Quotes July 2013 · 2013-07-01T23:56:09.576Z · LW · GW

If (as those of us who make a study of ourselves have been led to do) each man, on hearing a wise maxim immediately looked to see how it properly applied to him, he would find that it was not so much a pithy saying as a whiplash applied to the habitual stupidity of his faculty of judgment. But the counsels of Truth and her precepts are taken to apply to the generality of men, never to oneself; we store them up in our memory not in our manners, which is most stupid and unprofitable.

Michel de Montaigne, Essays, "On habit"

Comment by AShepard on How to Build a Community · 2013-05-15T17:57:06.333Z · LW · GW

Good and important questions. I find it interesting, and indicative of a broader tendency at LessWrong, that books are the first place you looked for an answer. The academic approach has its place, but if you're looking for advice you can actually put into practice, it would be more helpful to find some people who have successfully built communities and ask them what they did. Talking to a few thoughtful people who have built successful mid-size businesses, community organizations, or online forums from the ground up is going to be a lot more useful on the margin than thinking more about a public goods game.

Comment by AShepard on Learn A New Language! · 2012-05-20T17:05:05.024Z · LW · GW

My suggestion would be to add an introduction. There are many more things to be read than time to read. It's incumbent on you as a writer to convince people that what you have to say is worth the time investment. And you need to make that case clearly, convincingly, and concisely right at the beginning.

For this particular article, you need to establish two things:

  • Why the reader should care about learning a foreign language. You take this as given, but I submit that it's not as obvious as you might think. It sometimes seems like everyone else in the world is trying to learn English - why shouldn't I let them do all the work?
  • Why the reader should listen to your advice. As far as we know, you're just some random person on the internet. Even if I am interested in learning a foreign language, why should I trust your suggestions?

A paragraph or two addressing those two points would go a long way towards convincing your potential readers that your article is worth their time to read.

Comment by AShepard on [deleted post] 2012-04-30T23:54:39.745Z

From your introductory paragraphs, it appears that you have a genuine desire to respond to feedback but are significantly underestimating the degree of change required to do so. Perhaps a good old-fashioned dose of Strunk and White would help. Especially this (note both the content and the style):

Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts. This requires not that the writer make all his sentences short, or that he avoid all detail and treat his subjects only in outline, but that every word tell.

Comment by AShepard on Prisoner's Dilemma on game show Golden Balls · 2012-04-22T15:48:38.547Z · LW · GW

To be even more technical, "Prisoner's Dilemma" is actually used as a generic term in game theory. It refers to the set of two-player games with this kind of payoff matrix (see here). The classic prisoner's dilemma also adds in the inability to communicate (as well as a bunch of backstory which isn't relevant to the math), but not all prisoner's dilemmas need to follow that pattern.
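For concreteness, here's a small sketch of the defining condition, using the usual textbook payoff values (the specific numbers are illustrative, not from the linked post):

```python
# Standard convention: T (temptation) > R (reward) > P (punishment) > S (sucker's payoff).
# Any symmetric two-player game whose payoffs satisfy this ordering is a
# prisoner's dilemma, whatever the backstory and whether or not the players can talk.

T, R, P, S = 5, 3, 1, 0  # common textbook values, purely illustrative

payoffs = {  # (row move, column move) -> (row payoff, column payoff)
    ("C", "C"): (R, R),
    ("C", "D"): (S, T),
    ("D", "C"): (T, S),
    ("D", "D"): (P, P),
}

assert T > R > P > S   # the defining ordering
assert 2 * R > T + S   # usual extra condition: mutual cooperation beats taking turns exploiting
```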

Comment by AShepard on Attention control is critical for changing/increasing/altering motivation · 2012-04-11T03:43:07.788Z · LW · GW

I’m guessing these are very familiar to most readers here, but let’s cover them briefly just in case.

I, for one, was not familiar with the terms, so I appreciated the explanation.

Comment by AShepard on SotW: Be Specific · 2012-04-03T16:12:08.413Z · LW · GW

I was reminded of something similar by AspiringKnitter's post below. There is an event in Science Olympiad called Write It Do It. One person is given a constructed object made out of LEGO, K'Nex, or similar. They write a set of instructions for how to reproduce the object. These are then given to a teammate who hasn't seen the original object, who must use the instructions to reconstruct the original object. Seems fairly simple to adapt to a group setting - you could just split the group into two rooms and have them first write their own instructions and then try to follow the instructions of a partner in the other room.

This exercise and the malicious idiot exercise differ in the "when" and "by whom". With a malicious idiot, your errors are pointed out immediately and by somebody else. When writing instructions, your errors don't come to light until your partner's object doesn't look like yours, and neither of you might notice until that point. It's important to notice a lack of specificity both in others (so they don't lead you astray) and in yourself (so you don't lead yourself astray), so it would probably be useful to do both kinds of exercises.

Comment by AShepard on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-31T21:11:21.628Z · LW · GW

Another thing you could do is measure in a more granular way - ask for NPS about particular sessions. You could do this after each session or at the end of each day. This would help you narrow down what sessions are and are not working, and why.

You do have to be careful not to overburden people by asking them for too much detailed feedback too frequently, otherwise they'll get survey fatigue and the quality of responses will markedly decline. Hence, I would resist the temptation to ask more than 1-2 questions about any particular session. If there are any that are markedly well/poorly received, you can follow up on those later.

Comment by AShepard on Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 · 2012-03-30T02:00:05.209Z · LW · GW

I'd suggest measuring the Net Promoter Score (NPS) (link). It's used in business as a better measure of customer satisfaction than more traditional metrics; see here for evidence (sorry for the non-free link). It's based on two questions:

  1. "On a scale of 0-10, how likely would you be to recommend the minicamp to a friend or colleague?"
  2. "What is the most important reason for your recommendation?

To interpret, split the responses into 3 groups:

  • 9-10: Promoter - people who will be active advocates.
  • 7-8: Passive - people who are generally positive, but aren't going to do anything about it.
  • 0-6: Detractor - people who are lukewarm (which will turn others off) or will actively advocate against you.

NPS = [% who are Promoters] - [% who are Detractors]. Good vs. bad NPS varies by context, but +20-30% is generally very good. The follow-up question is a good way to identify key strengths and high-priority areas to improve.
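For concreteness, a quick sketch of the arithmetic with made-up responses:

```python
# Minimal sketch of the NPS calculation described above; the scores are invented.

def net_promoter_score(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), over all responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

responses = [10, 9, 9, 8, 7, 7, 6, 10, 5, 9]  # hypothetical minicamp responses
print(net_promoter_score(responses))  # 50% promoters - 20% detractors = 30.0
```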

Comment by AShepard on Hearsay, Double Hearsay, and Bayesian Updates · 2012-02-17T00:22:08.791Z · LW · GW

I think it's just the standard "a thing, another thing, and yet one more additional thing". A common species, of which "lies, damned lies, and statistics" is another example.

Comment by AShepard on Rationality meditation theory. · 2012-01-11T05:52:53.353Z · LW · GW

This is an odd post. It starts out with a suggestion for how to structure group brainstorming, then veers into an argument for why cannabis use enhances creativity. I think you would be better served splitting those arguments into separate posts.

Comment by AShepard on The Value (and Danger) of Ritual · 2011-12-31T20:47:08.482Z · LW · GW

Addendum: This is apparently a known issue with the LW website.

Comment by AShepard on The Value (and Danger) of Ritual · 2011-12-30T16:07:12.770Z · LW · GW

Off-topic: in a number of places where you've used italics, the spaces separating the italicized words from the rest of the text seem to have been lost (e.g. "helpedanyone at all."). Might just be me though?

Comment by AShepard on Uncertainty · 2011-12-01T21:03:09.431Z · LW · GW

I'm having difficulties with your terminology. You've given special meanings to "distinction", "prospect", and "deal" that IMO don't bear any obvious relationship to their common usage ("event" makes more sense). Hence, I don't find those terms helpful in evoking the intended concepts. Seeing "A deal is a distinction over prospects" is roughly as useful to me as seeing "A flim is a fnord over grungas". In both cases, I have to keep a cheat-sheet handy to understand what you mean, since I can't rely on an association between word and concept that I've already internalized. Maybe this is accepted terminology that I'm not aware of?

Comment by AShepard on Drawing Less Wrong: Observing Reality · 2011-11-22T06:04:06.657Z · LW · GW

It looks like a couple of footnotes got cut off.

Comment by AShepard on Calibrate your self-assessments · 2011-10-09T21:54:52.107Z · LW · GW

Interesting that your debate predictions tend to be too low. In my debate experience, nearly everyone consistently overestimated their likelihood of winning a given round. This bias tended to increase the better the debaters perceived themselves to be.

Comment by AShepard on Morality is not about willpower · 2011-10-09T15:45:11.193Z · LW · GW

Upvoted for introducing the very useful term "effective belief".

Comment by AShepard on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-19T13:30:13.741Z · LW · GW

I think if we tabooed (taboo'd?) "arbitrary", we would all find ourselves in agreement about our actual predictions.

Comment by AShepard on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-19T06:01:22.116Z · LW · GW

but because it is the standard value, you can be more confident that they didn't "shop around" for the p value that was most convenient for the argument they wanted to make. It's the same reason people like to see quarterly data for a company's performance - if a company is trying to raise capital and reports its earnings for the period "January 6 - April 12", you can bet that there were big expenses on January 5 and April 13 that they'd rather not include. This is much less of a worry if they are using standard accounting periods.

Comment by AShepard on A potentially great improvement to minimum wage laws to handle both economic efficiency as well as poverty concerns · 2011-07-26T14:14:53.359Z · LW · GW

All good points. To clarify, 50% is the marginal tax rate from the OP's system alone. A major reason that effective marginal tax rates can be so high is that programs like (to be US-centric) food stamps and Medicaid are means tested, so they phase out or go away entirely as you make more income. If the OP's system retained those kinds of programs, their contribution to the marginal tax rate would come on top of the 50% cited above. The net effect of enacting this system would depend on which parts of the current bundle of social insurance programs it would displace (in the US, presumably the EITC and TANF, at least).

Comment by AShepard on A potentially great improvement to minimum wage laws to handle both economic efficiency as well as poverty concerns · 2011-07-26T00:47:29.252Z · LW · GW

I don't think that's quite right. The marginal tax rate is going to be 50% no matter the value of x, given your formula. Your social security payment is half the difference between your income and the x threshold, so each additional dollar you earn below that threshold loses you 0.5 dollars of social security. This is true whether the threshold is $10,000 or $100,000.
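To make that arithmetic concrete, here's a quick sketch assuming the payment is 0.5 * (x - income) for income below the threshold (my reading of the proposal; the $10,000 threshold is arbitrary):

```python
# Sketch of the implicit marginal tax rate, assuming a payment of
# 0.5 * (x - earned) for earnings below the threshold x. The 50% result
# is the same whatever value x takes.

def total_income(earned, x):
    """Earned income plus the social security payment under the assumed formula."""
    payment = 0.5 * max(x - earned, 0.0)
    return earned + payment

x = 10_000
print(total_income(4_000, x) - total_income(3_999, x))  # 0.5: each extra dollar earned keeps 50 cents
```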

You are right, though, that there will be a correspondence between the minimum wage and the level of x. I don't think this is causal, but popular notions about the ideal levels of the minimum wage and 'x' will probably both reflect underlying notions about an "acceptable standard of living". If there's a correspondence between the level of the minimum wage and the fraction of people who give up working because of this system, I think it would be chiefly because of this correlation (in addition to the employment effect of having a minimum wage at all).

Comment by AShepard on A potentially great improvement to minimum wage laws to handle both economic efficiency as well as poverty concerns · 2011-07-25T23:07:43.599Z · LW · GW

Interesting idea. It's in the same family as the Earned Income Tax Credit and the Negative Income Tax.

The immediate potential downside I see is that this would effectively institute a very high marginal tax rate on income below 'x'. For every additional dollar that someone who makes less than x earns, they lose 0.5 dollars of social security. That's a 50% implicit marginal tax rate, on top of whatever the official marginal tax rate is. By comparison, the highest marginal tax rate for federal income taxes in the United States is 35%, which is only applied to household earnings beyond $370,000 (source). The implication of standard economic theory is that many people would simply choose not to work and earn .75x.

Comment by AShepard on Welcome to Less Wrong! (2010-2011) · 2011-07-03T04:18:53.565Z · LW · GW

Long-time reader, only occasional commenter. I've been following LW since it was on Overcoming Bias, which I found via Marginal Revolution, which I found via the Freakonomics Blog, which I found when I read and was fascinated by Freakonomics in high school. When I read the sequences, it all clicked and struck me as intuitively true. Although my "mistrust intuition" instinct is a little uncomfortable with that, it all seems to hold up so far.

In the spirit of keeping my identity small I don't strongly identify with too many groups or adjectives. However, I've always self-identified as "smart" (whatever that means). If you were modeling my utility function using one variable, I'm most motivated by a desire to learn and know more (like Tsuyoku Naritai, except without the fetish for unnecessary Japanese). I've spent most of my life alternately trying to become the smartest person in the room and looking for a smarter room.

I just graduated from college and am starting work at a consulting firm in Chicago soon, which I anticipate will be the next step in my search for a smarter room. My degree is in economics, a discipline I enjoy because it is pretty good at translating incorrect premises into useful conclusions. I also dabbled fairly widely, realizing spring of my senior year that I should have started taking computer science earlier.

I've been a competitive debater since high school, which has helped me develop many useful skills (public speaking, analyzing arguments, brainstorming pros/cons rapidly, etc.). I was also exposed to some bad habits (you can believe whatever you want if no one can beat your arguments, the tendency to come to genuinely believe that your arbitrarily assigned side is correct). Reading some of the posts here, especially your strength as a rationalist, helped me crystallize some of these downsides, though I still rate the experience as strongly positive.

I am a male and a non-theist, although I've grown up in an area where many of my family members and acquaintances have real and powerful Christian beliefs (not belief in belief, the real deal). This has left me with a measure of reverence for the psychological and rhetorical power of religion. I don't have particularly strong feelings on cryonics or the singularity, probably because I just don't find them that interesting. Perhaps I should care about them more, given how important they could be, but I haven't made any effort to do so thus far. It makes me wonder if "interestingness bias" is a real phenomenon.

My participation here over the years has been limited to reading, lurking, and an infrequent comment here and there. I've had a couple ideas for top level posts (including one on my half-baked notion that "rationalists" should consider following virtue ethics), but I have not yet overcome my akrasia and written them. Just recently, I have started using Anki to really learn the sequences. I am also using it to memorize basically useless facts that I can pull out in pub trivia contests, which I enjoy probably more than I should.

Comment by AShepard on Reasons for being rational · 2011-06-26T16:51:13.532Z · LW · GW

I simply care a lot about the truth and I care comparatively less about what people think (in general and also about me), so I'm often not terribly concerned about sounding agreeable.

Can you clarify this statement? As phrased, it doesn't quite mesh with the rest of your self-description. If you truly did not care about what other people thought, it wouldn't bother you that they think untrue things. A more precise formulation would be that you assign little or no value to untrue beliefs. Furthermore, you assign very little value to any emotions that, for the person, are bound up in holding that belief.

The untrue belief and the attached emotions are not the same thing, though they are obviously related. It does not follow from "untrue beliefs deserve little respect" that "emotions attached to untrue beliefs deserve little respect". The emotions are real after all.

Comment by AShepard on Memory, Spaced Repetition and Life · 2011-06-14T15:26:32.876Z · LW · GW

Downloaded and set up with a couple of Divia's decks. How many decks do you recommend working through at one time? For reference, I'm currently doing one deck on the default settings, which works out to ~40 cards a day (20 new, ~20 review) and takes 5-7 minutes.

Comment by AShepard on Nature: Red, in Truth and Qualia · 2011-05-30T01:39:33.288Z · LW · GW

I haven't read the post yet, but the title is awesome.

Comment by AShepard on Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields · 2011-02-15T21:18:05.350Z · LW · GW

I'm surprised that you don't mention the humanities as a really bad case where there is little low-hanging fruit and high ideological content. Take English literature for example. Barrels of ink have been spilled in writing about Hamlet, and genuinely new insights are quite rare. The methods are also about as unsound as you can imagine. Freud is still heavily cited and applied, and postmodern/poststructuralist/deconstructionist writing seems to be accorded higher status the more impossible to read it is.

Ideological interest is also a big problem. This seems almost inevitable, since the subject of the humanities is human culture, which is naturally bound up with human ideals, beliefs, and opinions. Academic disciplines are social groups, so they have a natural tendency to develop group norms and ideologies. It's unsurprising that this trend is reinforced in those disciplines that have ideologies as their subject matter. The result is that interpretations which do not support the dominant paradigm (often a variation on how certain sympathetic social groups are repressed, marginalized, or "otherized"), are themselves suppressed.

One theory of why the humanities are so bad is that there is no empirical test for whether an answer is right or not. Incorrect science leads to incorrect predictions, and even incorrect macroeconomics leads to suboptimal policy decisions. But it's hard to imagine what an "incorrect" interpretation of Hamlet even looks like, or what the impact of having an incorrect interpretation would be. Hence, there's no pressure towards correct answers that offsets the natural tendency for social communities to develop and enforce social norms.

I wonder if "empirical testability" should be included alongside the low-hanging fruit heuristic.

Comment by AShepard on Rationality & Criminal Law: Some Questions · 2010-06-20T22:20:13.119Z · LW · GW

I think you can get some useful insights into the reasons why punishments might differ based on moral luck if you take an ex ante rather than an ex post view. I.e. consider what effect the punishment has in expectation at the time that Alice and Yelena are deciding whether to drive home drunk or not, and how recklessly to drive if they do.

Absent an extremely large and pervasive surveillance system, most incidents of drunk driving will go undetected. In order to achieve optimal deterrence of drunk driving, those that do get caught have to be punished much more. While drunk drivers will face different punishments ex post, the expected punishment they face ex ante will be the same. If there are in fact factors that make your drunk driving less dangerous (less drunk, more skilled driver, slower speed, etc.), these will decrease the expected punishment.

So basically, the ex ante expected punishment for a particular dangerous act does not differ based on moral luck. Ex post punishment does differ, and that is a cost, but I think the countervailing benefit of not having a costly and intrusive surveillance system outweighs it.
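A toy illustration of the deterrence arithmetic (the detection probability and dollar figures are invented):

```python
# Toy numbers, not real statistics: if only a small fraction of drunk-driving
# incidents is ever detected, the penalty for those who are caught must be
# scaled up to keep the ex ante expected punishment at the deterrence target.

def required_penalty(target_expected_punishment, p_detected):
    """Penalty each caught offender must face so that p_detected * penalty hits the target."""
    return target_expected_punishment / p_detected

# Suppose we want every would-be drunk driver to face an expected $500 cost.
print(required_penalty(500, 1.00))  # 500.0   if every incident were caught
print(required_penalty(500, 0.01))  # 50000.0 if only 1 in 100 incidents is caught
```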

Notes:

  1. This is the second time I've linked this recently, but Gary Becker's Crime and Punishment: An Economic Approach is a very useful way to think through these issues.

  2. This argument applies much less in the attempted murder / murder case, because the chance that an attempted murderer is caught and prosecuted is much higher, probably even higher than the probability that a murderer is caught (because the victim usually has lots of relevant information).

  3. For the purposes of this comment, I considered drunk people to be rational actors. They are not, but this is a nonissue, because drunk selves are only allowed to exist at the discretion of sober selves.

Comment by AShepard on How to always have interesting conversations · 2010-06-16T04:58:39.531Z · LW · GW

You are certainly correct, and I think what you say reinforces the point. Building comfort is a social function rather than an information exchange function, which is why you don't particularly care whether or not your conversation leads to more accurate predictions for tomorrow's weather.

Comment by AShepard on How to always have interesting conversations · 2010-06-14T03:32:45.303Z · LW · GW

Let me try a Hansonian explanation: conversation is not about exchanging information. It is about defining and reinforcing social bonds and status hierarchies. You don't chit-chat about the weather because you really want to consider how recent local atmospheric patterns relate to long-run trends, you do it to show that you care about the other person. If you actually cared about the weather, you would excuse yourself and consult the nearest meteorologist.

Written communication probably escapes this mechanism - the mental machinery for social interaction is less involved, and the mental machinery for analytical judgment has more room to operate. This probably happens because there was no written word in the evolutionary context, so we didn't evolve to apply our social interaction machinery to it. A second reason is that written communication is relatively easily divorced from the writer - you can encounter a written argument over vast spatial or temporal separation - so the cues that kick the social brain into gear are absent or subdued. The result, as you point out, is that it is easier to critically engage with a written argument than with a spoken one.

Comment by AShepard on How to always have interesting conversations · 2010-06-14T02:02:43.791Z · LW · GW

This seems like something that natural conversationalists already do intuitively. They have a broad range of topics about which they can talk comfortably (either because they are knowledgeable about the specific subjects or because they have enough tools to carry on a conversation even in areas with which they are unfamiliar), and they can steer the conversation around these topics until they find one that their counterpart can also talk comfortably about. Bad conversationalists either aren't comfortable talking about many subjects, are bad at transitioning from one subject to another, or can't sense or don't care when their counterpart doesn't care about a given topic.

The flip side of this is that there are 3 ways of improving one's conversational ability: learning more about more subjects, practicing transitions between various topics, and learning the cues for when one's counterpart is bored or uninterested in the current topic. Kaj focuses on the second of these, but I think the other two strategies ought not be forgotten. It's no use learning to steer the conversation when there are no areas of overlapping interest to steer to, or when you can't recognize whether you are in one or not.

Comment by AShepard on Less Wrong Book Club and Study Group · 2010-06-09T19:29:38.583Z · LW · GW

I'm in. Started reading through it this past winter but stopped. Hopefully this group will provide some motivation.

Comment by AShepard on Diseased thinking: dissolving questions about disease · 2010-05-31T20:45:54.143Z · LW · GW

You might check out Gary Becker's writings on crime, most famously Crime and Punishment: An Economic Approach. He starts from the notion that potential criminals engage in cost-benefit analysis and comes to many of the same conclusions you do.

Comment by AShepard on lessmeta · 2009-12-22T18:41:20.738Z · LW · GW

I agree that applied rationality is important, but I'm not sure that there needs to be another site for that to happen. This recent post, for example, seems like an example of exactly what the OP wants to see. Perhaps what should be done is creating an official "Applied Rationality" tag for all such posts and an easy way to filter them. That way, if a bad scenario happens where new readers more interested in politicized fighting than rationality are drawn to this site because there's a discussion on gun control, they can be easily quarantined. But if this site maintains its high signal/noise ratio, the community benefits from trying out its tools in action.