Rational Reading: Thoughts On Prioritizing Books 2011-03-27T19:54:33.210Z
The danger of living a story - Singularity Tropes 2010-11-14T22:39:06.691Z
Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality 2010-09-14T16:17:55.826Z
Pascal's Pyramid Scheme 2010-01-31T18:56:54.557Z
Anti-Akrasia Technique: Structured Procrastination 2009-11-12T19:35:45.496Z
The Onion Goes Inside The Biased Mind 2009-05-30T17:33:15.038Z
Image vs. Impact: Can public commitment be counterproductive for achievement? 2009-05-28T23:18:28.564Z
Book: Psychiatry and the Human Condition 2009-03-23T19:14:41.401Z
Individual Rationality Is a Matter of Life and Death 2009-03-21T19:22:02.606Z


Comment by patrissimo on I've had it with those dark rumours about our culture rigorously suppressing opinions · 2014-02-10T18:07:03.923Z · LW · GW

As a former Evangelical Polyamorist, now a born-again Monogamist, I enthusiastically endorse items 1 & 2 in this comment.

It can be thought of as the cultural equivalent of Algernon's Law: any small cultural change is a net evolutionary disadvantage. I might add "previously accessible to our ancestors", since the same principle doesn't apply to newly accessible changes, which weren't previously available for cultural optimization. That exemption applies to organizing via websites. It does not apply to polyamory (except inasmuch as birth control, STD prevention, and paternity testing may have affected the relevant tradeoffs, though only to the degree that our reactions are hardwired and relevant).

Comment by patrissimo on Eliezer Yudkowsky Facts · 2011-08-06T03:44:52.776Z · LW · GW

Eliezer Yudkowsky's keyboard only has two keys: 1 and 0.

Comment by patrissimo on Eliezer Yudkowsky Facts · 2011-08-06T03:44:37.611Z · LW · GW

The speed of light used to be much lower before Eliezer Yudkowsky optimized the laws of physics.

Comment by patrissimo on Eliezer Yudkowsky Facts · 2011-08-06T03:44:11.578Z · LW · GW

Eliezer Yudkowsky doesn't have a chin; underneath his beard is another brain.

Comment by patrissimo on Reasons for being rational · 2011-07-09T17:47:10.654Z · LW · GW

As a contrarian rationalist, I can assure you that my attitudes are the results of my personality & upbringing, not some bold brave conscious decision. I was always different, enough that conforming wouldn't have worked, so finding true & interesting & positive-attention-capturing ways to be different was my best path. The result is that I'm biased towards contrarian theses, which I think is useful for improving group rationality in most cases, but still isn't rational.

Comment by patrissimo on Rational Reading: Thoughts On Prioritizing Books · 2011-03-29T20:47:22.988Z · LW · GW

"I am starting to believe that Patri is motivated by status and worldly accomplishment much more than by learning or curiosity, and if Patri is indeed (as this article suggests) forgoing opportunities to take pleasure in learning for the sake of optimizing his increases in status or accomplishment, well, then even though Patri certainly is a fine and commendable young man, that is a mistake"

Yes, I am indeed attempting to choose my reading based on how it supports my consciously chosen goals, rather than simply the vague non-goal of "learning" or short-term hedonic utility ("pleasure"). There is a name for this - it's called "instrumental rationality", and I'm rather surprised to find an LW commenter calling it a mistake! I thought I could count on it as a shared assumption.

Now, the question of what I'm motivated by & whether that's good is totally separate. I frankly admit that one of my goals is to climb the status ladder, and I can understand why some people might not see that as desirable. On the other hand, I'm again surprised to find "worldly accomplishment" characterized negatively - isn't accomplishing things in the world the point of...everything?

Curiosity is fun for kids, but the world ain't gonna save itself.

Comment by patrissimo on Rational Reading: Thoughts On Prioritizing Books · 2011-03-28T18:36:57.315Z · LW · GW

I use audio books / podcasts some, but I don't run, have a minimal commute, and so don't end up getting much time in.

Comment by patrissimo on Rational Reading: Thoughts On Prioritizing Books · 2011-03-28T18:36:26.336Z · LW · GW

I'm pretty good at getting rid of the worst things, still trying to figure out what the best things are.

Comment by patrissimo on Rational Reading: Thoughts On Prioritizing Books · 2011-03-27T21:06:56.621Z · LW · GW

I see, that makes sense. I find it easiest to prioritize within a domain like "books", vs. among all possible skill-increasing activities. Also, when it comes to "generally increasing my knowledge / improving my map", that is something that I think it makes sense to allocate a fixed bucket of time to, although one should also compare alternatives like documentaries, blogs, and conversations as ways of doing it.

Comment by patrissimo on Verifying Rationality via · 2011-03-27T20:40:43.300Z · LW · GW

I personally know many people who have made those figures in the past, although high-stakes online poker has gotten much tougher in the past few years and it takes extremely high skill to make that much now.

I have personally made about $240/hr at online poker ($200 NLH SNGs on Party Poker back before the UIGEA). But I couldn't make anywhere near that nowadays.

Comment by patrissimo on Verifying Rationality via · 2011-03-27T20:37:45.402Z · LW · GW

200 hours is 1 month of 50 hour weeks, or 2 months of 25 hour weeks. Is it really that big a deal for your results to only matter month to month rather than day to day? I mean, yeah, it can be frustrating during a bad week, but it's not like the long run takes years.

Comment by patrissimo on Verifying Rationality via · 2011-03-27T20:35:42.744Z · LW · GW

I agree with most of this, but I don't think it dilutes the brand to focus on our comparative advantage, namely highlighting the aspects of poker most relevant to rationality training.

Thanks for mentioning Tommy - I should ask him if he wants to make any guest posts.

Comment by patrissimo on Rational Reading: Thoughts On Prioritizing Books · 2011-03-27T20:34:52.811Z · LW · GW

Once a prioritization system is set up, it's then trivial to decide whether to read the top book or do something else, based on how your estimate of the value of doing so compares to your alternative activities. Without a prioritization system, it doesn't matter whether you have a fixed amount of time or not - there are vastly more books than anyone has time to read, even with all 8,760 hours in a year, so you must prioritize.

So prioritization gets you flexible reading time, but flexible reading time doesn't get you prioritization - so I don't see how pointing it out is relevant. Prioritization is an independent need. Please explain.

More generally, you seem to be assuming that one can instantly evaluate, without conscious prioritization, what the optimal activity is at any given time. I know that is not even slightly true for me, and I highly doubt it is true for you.

Comment by patrissimo on Verifying Rationality via · 2011-03-27T20:14:43.244Z · LW · GW

It seems odd that you are criticizing the site for not replicating the specific hand discussion which is done so well elsewhere, while simultaneously wondering how we will differentiate.

Obviously, we will differentiate by not writing about the topics which are written about elsewhere ad nauseam, and instead adding new thoughts, not often written about, and likely to be of interest to this audience - namely the connections between poker & becoming more rational. Perhaps these new thoughts are not as fundamental for learning how to win at poker, but they are different and, we believe, useful.

Comment by patrissimo on Towards a Bay Area Less Wrong Community · 2011-03-23T00:59:43.803Z · LW · GW

FYI: Rumor says the plan for the South Bay meetup is to experiment with a variety of icebreakers, rationality games, and other themed evenings & see what works.

Comment by patrissimo on Less Wrong at Burning Man 2011 · 2011-03-18T04:19:09.107Z · LW · GW

Would be awesome if you can be in Playagon - or near us!

Comment by patrissimo on Less Wrong NYC: Case Study of a Successful Rationalist Chapter · 2011-03-18T04:16:48.564Z · LW · GW

I'm interested in hearing a bit more on meeting structure ("Meetup Topics" heading), as well as how it relates to time progression (what types of activities work best for forming the tribe vs. later maintaining it).

Comment by patrissimo on Less Wrong NYC: Case Study of a Successful Rationalist Chapter · 2011-03-18T04:15:54.085Z · LW · GW

This post is wonderful! The general category of "codified knowledge about best practices on how to do something important gained from doing it for hundreds of hours" is way underrepresented on LW. The density of practical experience makes it harder to write than finding a study or bias and musing about it, but it also makes it a lot more useful.

I look forward to helping replicate these practices in the Bay Area. Although achieving gender balance here is going to be a pretty significant challenge...

Comment by patrissimo on Fun and Games with Cognitive Biases · 2011-03-03T08:03:47.608Z · LW · GW

"Feed confirmatory evidence to others, give them tests to run which you know beforehand are confirmatory"

This is not a way to take advantage of confirmation bias. Confirmation bias means that others look for confirming evidence for their true theories, and ignore disconfirming evidence. This process is not much affected by you adding extra confirmatory evidence - they can find plenty on their own. Instead, it is a way to fool rational people - for example, Bayesians who update based on evidence will update wrong if fed biased evidence. Which doesn't really fit here.

The way to actually use confirmation bias to convince people of things is to present beliefs you want to transmit to them as evidence for things they already believe. Then confirmation bias will lead them to believe this new evidence without question, because they wish to believe it to confirm their existing beliefs.
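The distinction can be made concrete with a toy simulation (all numbers invented for illustration): an honest Bayesian who is only ever shown the confirming half of the evidence updates correctly at every step and still ends up confidently wrong - whereas confirmation bias proper needs no outside help.

```python
import random

random.seed(0)

# Toy model: a fair coin. Observer A sees every flip; observer B updates
# honestly but is only ever shown the heads (the "fed biased evidence"
# case above). Both start from a Beta(1,1) prior with conjugate updates.
flips = [random.random() < 0.5 for _ in range(1000)]

heads = sum(flips)
tails = len(flips) - heads

post_a = (1 + heads) / (2 + heads + tails)  # posterior mean P(heads), all data
post_b = (1 + heads) / (2 + heads)          # posterior mean, tails filtered out

print(round(post_a, 2))  # close to the true 0.5
print(round(post_b, 2))  # near 1.0: correct updating, wrong conclusion
```

Observer B's failure is in the evidence stream, not the update rule - which is why this trick targets rational updaters rather than exploiting anyone's confirmation bias.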

Comment by patrissimo on Optimal Employment · 2011-02-22T07:32:29.719Z · LW · GW

Step 0: Get a time machine Step 1: Go back in time and tell yourself not to waste time on a degree, but to go invent Google or Facebook or something useful Step 2: Profit!

Comment by patrissimo on Procedural Knowledge Gaps · 2011-02-21T07:31:29.097Z · LW · GW

Couldn't it just be an erroneous application of (an intuited version of) Newton's law of cooling, which says that heat transfer is linearly proportional to heat difference? They assume that the thermostat temperature is setting the temperature of the heating element, and then apply their intuited Newton's Law.

Seems pretty rational to me.
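The two mental models can be contrasted in a toy simulation (all constants invented): a real bang-bang thermostat runs the furnace at fixed power, so cranking the setpoint higher never heats the room faster - it only changes when the furnace shuts off.

```python
def time_to_reach(target_c, setpoint_c, start_c=10.0, heat_rate=0.5, dt=1.0):
    """Minutes until the room reaches target_c with the dial at setpoint_c."""
    assert target_c <= setpoint_c  # otherwise the furnace shuts off too early
    temp, minutes = start_c, 0.0
    while temp < target_c:
        # Bang-bang control: below the setpoint, the furnace is simply on
        # at full, fixed power - the heating rate does not depend on how
        # far the dial is turned up.
        temp += heat_rate * dt
        minutes += dt
    return minutes

# Cranking the dial to 30 gets the room to 20 C no faster than setting 20:
print(time_to_reach(20, setpoint_c=20))  # 20.0
print(time_to_reach(20, setpoint_c=30))  # 20.0
```

Under the intuited Newton's-law model, by contrast, heat flow would scale with the temperature gap, so a higher setpoint would heat faster - a perfectly reasonable inference from the wrong model of the device.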

Comment by patrissimo on Some rationality tweets · 2011-01-04T06:01:33.230Z · LW · GW

"Lately I've been thinking about if and how learning math can improve one's thinking in seemingly unrelated areas."

This seems like a classic example of the standard fallacious defense of undirected research (that it might and sometimes does create serendipitous results)?

Yes, learning something useless/nonexistent might help you learn useful things about stuff that exists, but it seems awfully implausible that it helps you learn more useful things about existence than studying the useful and the existing. Doing the latter will also improve your thinking in seemingly unrelated areas...while having the benefit of not being useless.

If instead of learning the clever tricks of combinatorics as an undergraduate, I had learned useful math like statistics or algorithms, I think I would have had just as much mental exercise benefit and gotten a lot more value.

Comment by patrissimo on Efficient Charity: Do Unto Others... · 2011-01-04T05:52:52.898Z · LW · GW

Ok, great, I'm glad I misunderstood.

Comment by patrissimo on Efficient Charity: Do Unto Others... · 2011-01-02T07:04:23.803Z · LW · GW

Completely agree with your general point on marginal analysis (although I'm a TDT skeptic), and am a fan of GiveWell, but this is trivially wrong:

"It is not possible for everyone to behave this way in elections: no voter is able to consider the existing distribution of votes before casting their own."

This seems to assume away information about the size of the electorate as well as any predictive power about the outcome. Surely the marginal benefit of a Presidential vote in a small swing state is massively higher than in a large solidly Democratic state, for example. And in addition to historical results, there is polling data in advance of the election to improve predictions.

Besides this being theoretically true, we can see it empirically from the spending patterns of both Presidential campaigns and political parties on Congressional races. They allocate money to the states / races where they believe it will do the most marginal good, which is often a very unequal distribution. Thus they do, in fact, "consider the existing distribution of votes before casting" their advertising dollars.
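The swing-state point can be sketched with a toy binomial model (electorate sizes and vote shares invented for illustration): the other voters each back your candidate independently with some probability, and your vote is decisive only on an exact tie. Probabilities are returned as log10 because the safe-state case underflows a float.

```python
import math

def log10_pivot_prob(n_voters, p_support):
    """log10 P(your vote is decisive) in a toy binomial model: each of the
    other n_voters (even) backs your candidate with prob p_support, and
    your vote matters only on an exact tie."""
    k = n_voters // 2
    log_pmf = (math.lgamma(n_voters + 1) - math.lgamma(k + 1)
               - math.lgamma(n_voters - k + 1)
               + k * math.log(p_support)
               + (n_voters - k) * math.log(1 - p_support))
    return log_pmf / math.log(10)

# Toy numbers: a small 50/50 swing state vs. a big 45/55 safe state.
print(log10_pivot_prob(100_000, 0.50))    # about -2.6: long odds, but real
print(log10_pivot_prob(1_000_000, 0.45))  # thousands of orders of magnitude smaller
```

Even in this crude model, the marginal value of a vote varies enormously with electorate size and closeness - which is exactly the information polling and historical results provide in advance.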

Comment by patrissimo on Efficient Charity: Do Unto Others... · 2011-01-02T06:51:29.972Z · LW · GW

At the risk of provoking defensiveness I will say that it really sounds like you are trying to rationalize your preferences as being rational when they aren't.

I say this because the examples that you were giving (local food kitchen, public radio), when compared to truly efficient charities (save lives, improve health, foster local entrepreneurship), are nothing like "save 9 kids + some other benefits" vs. "save 10 kids and nothing else". It's more like "save 0.1 kids that you know are in your neighborhood" vs. "save 10 kids that you will never meet" (and that's probably an overestimate on the local option). Your choice of a close number is suspicious because it is so wrong and so appealing (by justifying the giving that makes you happy).

The amount of happiness that you create through local first world charities is orders of magnitude less than third world charities. Therefore, if you are choosing local first world charities that help "malnourished" kids who are fabulously nourished by third world standards, we can infer that the weight you put on "saving the lives of children" (and with it, "maximizing human quality-adjusted life years") is basically zero. Therefore, you are almost certainly buying warm fuzzies. That's consumption, not charity. I'm all for consumption, I just don't like people pretending that it's charity so they can tick their mental "give to charity" box and move on.

Comment by patrissimo on Working hurts less than procrastinating, we fear the twinge of starting · 2011-01-02T06:26:17.858Z · LW · GW

I agree extremely on the issue of procrastination not being restful; this is a standard theme in modern productivity writing. Procrastination (like reading blogs / tweets / etc) is a sort of worst of both worlds: it is neither useful nor restful; it passes the time and avoids immediate pain without providing pleasure or renewal.

That's why The Energy Project, Pomodoro, Zen Habits, etc. recommend that you schedule renewal breaks into your day - at a minimum midmorning, lunch, and midafternoon. I think the deliberate practice literature recommends breaks every 90 minutes. Taking a walk outside & exercise are oft-recommended, but really, just being conscious of the goal of renewal and experimenting to find things that will work is all you need. It's helped me be more productive.

Social conversations with co-workers are also good, but it's important that they be relaxed & guilt-free. One of the secrets of renewal is that it works much better if accepted as a need; for some reason, guilty renewal doesn't renew. Renewal requires relaxation while guilt prevents it, something like that.

Glad to hear that you're learning (and writing about) basic productivity hacks like this, LW will get its instrumental rationality black belt yet :).


Comment by patrissimo on Tallinn-Evans $125,000 Singularity Challenge · 2011-01-01T22:28:43.972Z · LW · GW

Wow, SIAI has succeeded in monetizing Less Wrong by selling karma points. This is either a totally awesome blunder into success or sheer Slytherin genius.

Comment by patrissimo on Some rationality tweets · 2011-01-01T22:22:55.682Z · LW · GW

"Learning math sure isn't useless, and it seems to mostly consist of thinking about useless or nonexistent things."

I learned a lot of math (undergraduate major), and while it entertained me, it has been almost completely useless in my life. And the forms of math I believe to be most useful and wish I'd learned instead (statistics) are useful because they are so directly applicable to the real world.

What useful math have you learned that doesn't involve reference to useful or existent things?

Comment by patrissimo on New Year's Resolutions · 2011-01-01T22:11:06.807Z · LW · GW

I worry that new year's resolutions are a Schelling point for failed self-improvement that, by using a fundamentally flawed approach, tend to fail and then discourage people from future attempts at positive change.

Can we try to switch to the meme of "Annual retreat & reflect about one's life, goals, and habits", rather than these so frequently failed "resolutions", whose very name implies that the solution is more "resolve", and thus the problem is insufficient "resolve", rather than insufficient experimentation, knowledge about habit formation, realism about achievable change, or any of the other numerous actual reasons?

I mean, it's 2010, and we know we lose weight through hacks, not the application of more willpower - same goes for anything else.

Comment by patrissimo on What I've learned from Less Wrong · 2010-12-15T05:03:44.535Z · LW · GW

"That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally)."

Also how great propaganda works.

If you are going to describe a "great argument" I think you need to put more emphasis on it being tied to the truth rather than being agreeable. I would say truly great arguments tend not to be agreeable, b/c the real world is so complex that descriptions without lots of nuance and caveats are pretty much always wrong. Whereas simplicity is highly appealing and has a low cognitive processing cost.

Comment by patrissimo on Rationality Quotes: December 2010 · 2010-12-15T04:55:44.543Z · LW · GW

I found this quote brilliant solely because of the incongruous "like" in there. It makes the whole thing turn into a Deep Mystery instead of a Deep Saying.

After all, wouldn't someone who does the important things also stick to the most important words, ie those with content, unlike "like"? If so, how delightful is the erroneous arrogance of this quote! If not, what a fascinating challenge to my assumptions about the implications of language pattern!

Comment by patrissimo on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-15T04:50:40.425Z · LW · GW

Apparently this community really values the combination of wit, brevity & correctness, which are all good things.

Unfortunately, since your brief witty correct remark was about something irrelevant, that means we are rewarding entertainment that wins status/appreciation without contributing to meaningful discussion, relative to deep and/or thoughtful insights. Quite understandable, but I can see why you were horrified - one expects better of LWers.

I interpret this as evidence against the correctness of the elitism strain in LW culture. We are all monkeys, the great thing about LW is that we know it and want to change it - not that we have.

Comment by patrissimo on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-15T04:45:26.178Z · LW · GW

I don't think this is true. I know people who "assume good faith", and they are amazing and a pleasure to debate with - it never becomes an argument. But I have not found this to be correlated with analytical thinking - if anything, the opposite.

Rather, my experience with analytical people (incl. myself) is that they just don't see the emotional subtext. They see the argument, the logical points, and they don't even think about the status implications, who challenged whose authority, and so forth. It's not as pleasant to think of us non-neurotypicals as oblivious rather than charitable, but it seems more accurate to me.

For example, the idea that all that matters is whether my argument is good is so natural to me and core to my family upbringing that it's taken me many years to unlearn it. To learn that people care how an argument is phrased, how openly you suggest they are wrong, and who the authority figure is (ie whether the challenger is of low status in that context).

In some ways, my obliviousness was very powerful for me, because ignoring status cues is a mark of status, as are confidence and being at ease with high-status people - all of which flow from my focus on ideas over people or their status. Yet as I've moved from more academic/intellectual circles to business/wealth circles, it's become crucial to learn that extra social subtext, because most of those people get driven away if you don't have those extra layers of social sense and display it in your conversational maneuvering.

Comment by patrissimo on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-15T04:12:24.508Z · LW · GW

I have also found that being able to speak bluntly and off the top of my head about what I believe to be true is enormously valuable for me in truth-seeking. Having friends and forums where that is the culture is immensely valuable. Yet learning how to not do that - how to use my "polite pen" - has also been immensely valuable to me in getting my ideas across to a broader audience.

Each has its place, and I think what most LWers need to hear is the point in this post, but I think it would have been clearer if all the examples were from the workplace / regular life. Then it wouldn't have had this challenge to LW culture you perceived.

Comment by patrissimo on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-15T04:07:27.995Z · LW · GW

If we're going to talk about the cognitive framing effects of language, as the original post did, how about your use of the word "Mundane"?

To me, it seems actively harmful to accurate thinking, happiness, and your chance of doing good in the world. The implication is characterizing most humans as a separate lower class, with the suggestion of contempt and/or disgust for those inferior beings, which has empirically led to badness (historically: genocide; in my personal experience: it has been poisonous to Objectivism and various atheist groups I've been in).

I'd like to hear some examples where framing most people as both "lesser" and "other" has led to good for the world, because all the ones I'm pullin' up are pretty awful...

Comment by patrissimo on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-15T04:01:02.529Z · LW · GW

This is an awesome response and extension, although it doesn't invalidate the point that we should learn what signals our words will give and choose them consciously. It's basically always better to understand and use the subtext. Whether using it to make sure you don't accidentally press the emotional buttons of a good-willed collaborator, or understanding when others are using it to exploit you.

In my experience, relentless politeness + authenticity (don't give up your basic point, but phrase it very nicely) is a great help at defeating setups. In the presentation case, sure, the questioner has upgraded the idea. But he has still pointed out its core flaw! A less adept questioner might either a) not question at all, knowing that it looks like a rude challenge, or b) question rudely because he doesn't know how to be polite. Either one would make it more likely for the bad idea to pass unchallenged.

The key is authenticity: politeness shouldn't stop you from putting the knife into something that should die, it should just make it so smooth that it hurts the minimum and shows everyone that you are acting in the common interest. It's an empowering tool so that you can play the game of fighting back against bad gaming without looking like a gamer or a fighter.

Anyway, I have a sunny disposition so I don't share your negative framing of this, but your meta-point about how others can use these rules for evil and/or selfishness is great (although maybe at too high a level of Slytherin to be really useful to most LWers).

Comment by patrissimo on Defecting by Accident - A Flaw Common to Analytical People · 2010-12-15T03:44:43.721Z · LW · GW

You seem to be assuming that what you want to hear is how people should be learning to communicate ("I'd prefer they skip it"), but part of the point is that we are not like most people. If you want to communicate effectively with the broader population, then you have to focus on what they like to hear, not judge communication suggestions based on whether you would like hearing it.

Also, I love brevity, but I charitably assumed that the politeness examples were exaggerated to make the point. Exaggerated examples, while they often bother analytical types who already get the point ("but that's too far the other way!"), are (IMHO) quite useful at helping get across new ideas by magnifying them.

And compactness is hard, as is habit change. So developing compact politeness seems harder than developing politeness and then polishing it with brevity and clarity. Maybe too hard for some people - one habit at a time is often easier.

Comment by patrissimo on Applause Lights · 2010-12-07T04:55:53.760Z · LW · GW

Yeah, but you'd get lots of applause!

Comment by patrissimo on How to Save the World · 2010-12-02T00:34:22.944Z · LW · GW

I think this is missing the primary advice of "work on instrumental rationality." The art of accomplishing goals is useful for the goal of saving the world - and still useful if you change your goal later! (say, to destroying the world, or moving to a new one :) )

So while this is a great list of ways to be instrumentally rational specifically for philanthropy, I think the general tools of instrumental rationality are also useful too (like: have concrete goals, hypothesize how to achieve them, try methods, evaluate them and change based on results, find mentors who have succeeded at what you are trying to do, make sure you talk to people who think differently from you, be conscious about where to spend limited willpower...)

Comment by patrissimo on How to Save the World · 2010-12-02T00:29:42.373Z · LW · GW

Love almost all of this. I worry that (3) is making the common rationalist mistake of basing a strategy on the type of person you wish you were rather than the type you are. (Striding toward Unhappiness, we might call it).

So, you wish that your passion for a cause were more strongly correlated with the utilitarian benefit of that cause, and game the instinct to work on what feels good with small gifts while putting most of your effort towards what you think is optimal. But if the result is working on something you aren't as passionate and excited about, you may work less effectively, burn out on helping the world, or just be miserable. Your taste for a cause is what it is, not what you want it to be. It matters whether you feel good about what you do.

(4) compensates for this to some degree - you will tend to try to find reasons to value & love whatever you do, so to some extent you can pick a cause first and fall in love with it later. But this doesn't always work, and can result in demotivated team members who demoralize others. A passionate & excited team is a high-performing team.

Comment by patrissimo on The danger of living a story - Singularity Tropes · 2010-11-16T00:41:49.795Z · LW · GW

Anything can be mapped to tropes, but not all tropes are the same. It matters what tropes your life, mission, or organization are mapped to! To skillfully navigate the world (I guess the LW term is "to win") you must know what tropes are being mapped to you, and what tropes your brain sees your identity as fitting into. That way you can manipulate others' perception of you (what stories are they telling about you? How are they telling those stories? Do they gain you status and resources), as well as making sure you aren't fooling yourself.

Comment by patrissimo on The danger of living a story - Singularity Tropes · 2010-11-16T00:39:32.794Z · LW · GW

"So part of winning is being able to deal with human susceptibility to think in stories."

Exactly! It is especially relevant if you are trying to grow a following around an idea, which SIAI is. Winning requires wearing your Slytherin hat sometimes, and an effective Slytherin will manipulate the stories that they tell and the stories that are told about them.

Comment by patrissimo on The danger of living a story - Singularity Tropes · 2010-11-16T00:36:27.823Z · LW · GW

Actually, I will comment (for the purposes of authenticity and from the belief that being more transparent about my motivations will increase mutual truth-finding) that while I'm not arguing "against" SIAI, this post is to some degree emerging from me exploring the question of SIAI's organizational instrumental rationality. I have the impression from a variety of angles/sources that it's pretty bad. Since I care about SIAI's success, it's one of the things I think about in the background - why, and how you could be more effective.

Comment by patrissimo on The danger of living a story - Singularity Tropes · 2010-11-16T00:28:40.124Z · LW · GW

First, I'm not claiming a connection between truth and tropism, but this idea that everything is equally tropish seems wrong. Not everyone has the role of a protagonist fighting for humanity against a great inhuman evil that only they foresee, and struggling to gather allies and resources before time runs out. Yet Eliezer has that role.

Second, even though tropes apply to everyone's lives to some degree, it matters which tropes they are. For example, someone who sees themselves as a fundamentally misunderstood genius who deserves much more than society has given them is also living a trope - but it's a very different trope with very different results. Identifying the tropes you are living is useful - it helps in your personal branding, can teach you lessons about strategies for achieving your goal, and may show you pitfalls.

For example, I live a very similar trope set to Eliezer, which is why I notice it, and it poses many challenges in being effective, because it's tempting to (as Nick alluded to above) play the role rather than doing the work.

Comment by patrissimo on The danger of living a story - Singularity Tropes · 2010-11-16T00:22:36.570Z · LW · GW

This is so common as to be an adage: "Never attribute to malice that which is adequately explained by stupidity." (Hanlon's razor)

Comment by patrissimo on The danger of living a story - Singularity Tropes · 2010-11-16T00:21:09.175Z · LW · GW

I can see how for your audience, the story-like qualities would be a minus. On the other hand, I think the story bias has to do with how people cognitively process information and arguments. If you can't tell your mission & strategy as a story, it's a lot harder to get across your ideas, whatever your audience.

The battle was meant to be metaphorical - the battle to ensure that AI is Friendly rather than Unfriendly. And I didn't say anything about hostile humans - the problem is indifferent humans not giving you resources.

Also, I'm not arguing against SIAI, I just find it amusing how well the futurist sector maps onto a story outline - various protagonists passionate about fighting some great evil that others don't see and trying to build alliances and grow resources before time runs out. You can squiggle, but that's who you are. Instrumental rationality means figuring out how to make best positive use of it and avoid it biasing you.

Comment by patrissimo on Error detection bias in research · 2010-09-30T02:51:30.038Z · LW · GW

Write several pieces of analysis code, ideally in different languages, and check that the results are the same? Even better, have someone else replicate your analysis code. That way you have a somewhat independent source of confirmation.

Also, use practices like tons of unit testing which minimize the chance for bugs in your code. All this must be done before you see the results, of course.
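As a sketch of that cross-checking practice (statistic and data invented for illustration): compute the same quantity two algebraically independent ways and assert agreement, before any real results are in sight.

```python
# Two independent implementations of sample variance, written before
# looking at the real data, so neither can be tuned toward an expected
# answer. The assertion is the unit test: the implementations must agree.

def variance_two_pass(xs):
    """Textbook two-pass sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def variance_sum_of_squares(xs):
    """Algebraically equivalent one-pass sum-of-squares formula."""
    n = len(xs)
    s, s2 = sum(xs), sum(x * x for x in xs)
    return (s2 - s * s / n) / (n - 1)

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
a, b = variance_two_pass(data), variance_sum_of_squares(data)
assert abs(a - b) < 1e-9  # cross-check: a shared bug is now much less likely
print(round(a, 2))  # 4.57
```

Replicating the same analysis in a second language, or having a colleague reimplement it from the written method, extends the same idea: independent derivations sharing only the specification, not the code.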

Is this confirmation bias really that bad in practice? Scientists get credit for upsetting previous consensus, so this may lead potentially disruptive research to happen slightly less often. But it remains the case that a "surprising" result that changes the consensus will still arrive eventually - from someone who doesn't question the surprising result, or who questions it but thoroughly reviews their code and stands by it. So evidence for change will come slightly less often than it could, but the changes will still be correct. Doesn't seem like a big deal.

Science got the charge on an electron right, even after Millikan's mistake.

Comment by patrissimo on Politics as Charity · 2010-09-29T17:43:39.115Z · LW · GW

And this argument has what to do with my personal decision to vote?

My choice does not determine the choices of others who believe like me, unless I'm a lot more popular than I think I am.

After saying voting is irrational, the next step for someone who truly cares about political change is to go figure out what the maximal political change they can get for their limited resources is - what's the most efficient way to translate time or dollars into change. I believe that various strategies have different returns that vary by many orders of magnitude.

So ordinary people doing the stupid obvious thing (voting, collecting signatures, etc.) might easily have 1/1000th the impact per unit time of someone who just works an extra 5 hours a week and donates the money to a carefully chosen advocacy organization. If these rationals are > 0.1% of the population, they have the greater impact. And convincing someone to become one of these anti-voting rationals has 1,000 times the impact of convincing someone to vote.

Comment by patrissimo on Politics as Charity · 2010-09-29T17:36:11.658Z · LW · GW

Wow, so it is accurate for the same reason as The Wire (based on a study of reality) - that's awesome.

Comment by patrissimo on Politics as Charity · 2010-09-29T17:34:46.156Z · LW · GW

This is my worldview as well.