Comments

Comment by imaxwell on Proofs, Implications, and Models · 2012-11-01T15:50:36.040Z · LW · GW

In fancy math-talk, we can say apples are a semimodule over the semiring of natural numbers.

  • You can add two bunches of apples through the well-known "glomming-together" operation.
  • You can multiply a bunch of apples by any natural number.
  • Multiplication distributes over both natural-number addition and glomming-together.
  • Multiplication-of-apples is associative with multiplication-of-numbers.
  • 1 is an identity with regard to multiplication-of-apples.

You could quibble that there is a finite supply of apples out there, so that (3 apples) + (all the apples) is undefined, but this model ought to work well enough for small collections of apples.
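For reference, here are the same bullets written out symbolically (my own transcription of the list above, nothing added): let a and b be bunches of apples, m and n natural numbers, and ⊕ the glomming-together operation. Then:

$$
\begin{aligned}
n \cdot (a \oplus b) &= n \cdot a \oplus n \cdot b, \\
(m + n) \cdot a &= m \cdot a \oplus n \cdot a, \\
(m n) \cdot a &= m \cdot (n \cdot a), \\
1 \cdot a &= a.
\end{aligned}
$$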

Comment by imaxwell on Rationality Quotes September 2012 · 2012-09-05T17:28:45.446Z · LW · GW

True, and I hope no one thinks it is. So we can conclude that doing bad shows at first is not a strong indicator of whether you have a future as a showman.

I guess I see the quote as being directed at people who are so afraid of doing a bad show that they'll never get in enough practice to do a good show. Or they practice by, say, filming themselves telling jokes in their basement and getting critiques from their friends who will not be too mean to them. In either case, they never get the amount of feedback they would need to become good. For such a person to hear "Yes, you will fail" can be oddly liberating, since it turns failure into something accounted for in their longer-term plans.

Comment by imaxwell on Rationality Quotes September 2012 · 2012-09-03T22:01:52.552Z · LW · GW

The only road to doing good shows, is doing bad shows.

  • Louis C.K., on Reddit

Comment by imaxwell on Timeless Physics · 2012-04-24T07:48:52.915Z · LW · GW

Well sure, if you parametrize with a time factor the result will be a periodic function. But you can still de-parametrize and simply have a closed loop described relationally. A parametrization of a circle usually consists of periodic functions, but that doesn't mean the circle itself is periodic. It's just there.

Also remember that "exactly the same configuration" means exactly the same configuration, of everything, including for instance your calendar, your watch, and your brain and its stored memories. So pretty much by definition there would be no record of such a thing happening. We wouldn't need another variable to encode it because we wouldn't need to encode it in the first place.

Comment by imaxwell on Hearsay, Double Hearsay, and Bayesian Updates · 2012-02-20T20:31:35.970Z · LW · GW

I agree with most of what you say, but I'm not so sure about the last two. As others have pointed out, there are many, many cases where the primary suspect of a crime is never prosecuted. Given a choice, prosecutors will usually choose "easy" cases. So an alternative explanation for America's high prison population, and its disproportionately black prison population, is that

  • more criminals are prosecuted and convicted in America, and
  • jurors are biased and black criminals are therefore easier to convict; and/or prosecutors are biased and therefore prosecute more black criminals.

Now, since I don't think it's actually optimal for everyone who ever breaks a law to be punished, I have no problem saying, for example, "More criminals are prosecuted and convicted here, and that's too bad."

Comment by imaxwell on Timeless Physics · 2012-02-20T19:32:26.209Z · LW · GW

This is off the top of my head, so it may be total bullshit. I find the idea of memory in a timeless universe slippery myself, and can only occasionally believe I understand it. But anyway...

If you want to implement a sort of memory in your 2D space with one particle, then for each point (x0,y0) in space you can add a coordinate n(x0,y0), and a differential relation

dn(x0,y0) = δ(x-x0,y-y0) sqrt(dx^2 + dy^2)

where δ is the Dirac delta. Each n(x0,y0) can be thought of as an observer at the point (x0,y0), counting the number of times the particle passes through. There is no reference to a time parameter in this equation, and yet there is a definite direction-of-time, because by moving the particle along a path you can only increase all n(x0,y0) for points (x0,y0) along that path.

A point in this configuration space consists of a "current" point (x,y), along with a local history at each point. If you don't make any other requirements, these local histories won't give you a unique global history, because the points could have been visited in any order. But if you impose smoothness requirements on x and y, and your local histories are consistent with those smoothness requirements, then you will have only one possible global history, or at most a finite number.
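Here is a minimal discrete sketch of the counter idea in code (my own toy construction; the grid, the function name, and the example path are all made up for illustration, standing in for the continuous n(x0,y0) above):

```python
from collections import defaultdict

def record_path(path):
    """Return the local visit counts n[(x, y)] for a sequence of grid points."""
    n = defaultdict(int)
    for point in path:
        n[point] += 1          # the "observer" at this point ticks up by one
    return dict(n)

# The counters record how often each point was visited, with no global time
# parameter stored anywhere; moving the particle can only increase them,
# which is the only direction-of-time in this toy model.
path = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0)]
print(record_path(path))       # {(0, 0): 2, (1, 0): 2, (1, 1): 1, (0, 1): 1}
```

As the paragraph above says, the counts alone do not determine the order of the visits; that is what the extra smoothness requirements are for.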

Comment by imaxwell on Timeless Physics · 2012-02-20T18:25:52.685Z · LW · GW

Super-late answer!

If you ask about a configuration X, "Where does this configuration come from?" I will point at a configuration W for which the flow from W to X is very high. If you ask, "Well, where does W come from?" I will point to a configuration V for which the flow from V to W is very high. We can play this game for a long time, but at each iteration I will almost certainly be pointing to a lower-entropy configuration than the last. Finally I may point to A, the one-point configuration. If you ask, "Where does A come from?" I have to say, "There is nowhere it comes from with any significant probability." At best I can give you a uniform distribution over all configurations with epsilon entropy. But all this means is that no configuration has A in its likely future.

The thing is, it doesn't make sense to ask what the probability of a configuration like A is, external to the universe itself: you can only ask the probability that a sufficiently long path passing through some specific configuration or set of configurations will have A in its future or in its past. The probability of the former is probably 0, so we don't expect a singularity in the future. That of the latter is probably 1, so we do expect a singularity in the past.

Comment by imaxwell on OPERA Confirms: Neutrinos Travel Faster Than Light · 2011-11-18T16:58:25.052Z · LW · GW

It sounded like a bad idea at first, but if the bet is 1 upvote / 1 downvote vs. 89 upvotes/89 downvotes, it could actually be a good use of the karma system. The only way to get a lot of karma would be to consistently win these bets, which is probably as good an indicator for "person worth paying attention to" as making good posts.

Comment by imaxwell on How can humans make precommitments? · 2011-09-15T03:53:01.173Z · LW · GW

The most obvious solution is to coerce your future self by creating a future downside of not following through that is worse than the future downside of following through. Nuclear deterrence is a tough one, but in principle this is no different from coercing someone else. (I guess one could ask if it's any more ethical, at that...)

Comment by imaxwell on Open Thread: September 2011 · 2011-09-14T07:24:54.971Z · LW · GW

Hmm... I'm not sure. I'd take the word of someone with experience on an admissions committee, if you can get it.

If you do it, I think you'd be better off talking just a little about the character and much more about the community you found. Writing to the prompt is not really important for this sort of thing. (Usually one of the prompts is pretty much "Other," confirming that.)

Comment by imaxwell on Singularity Institute now accepts donations via Bitcoin · 2011-05-13T02:15:18.528Z · LW · GW

Update: 14.01 bitcoins are "in the mail" (read: sitting in ClearCoin waiting for the transaction to expire). At current exchange rates, that's around $100.

Comment by imaxwell on Timeless Physics · 2011-05-05T16:05:29.467Z · LW · GW

Years after first reading this, I think I've internalized its central point in a clear-to-me way, and I'd like to post it here in case it's useful to someone else with a similar bent to their thinking.

Without worrying about the specific nature of the Schrödinger equation, we can say the universe is governed by a set of equations of the form x[i] = f[i](t), where each x[i] is some variable in the universe's configuration space, each f[i] is some continuous function, and t is a parameter representing time. This would be true even in a classical universe---the configuration space would just look more like the coordinates for a bunch of particles, and less like parameters of a waveform. All this is really saying is that the universe has some configuration at every time.

Now, one thing you can do with parametric equations is eliminate the parameter. If we have, say, 1000 parametric equations relating x[1] through x[1000] to t, we can convert these to 999 equations relating x[1] through x[1000] to one another, and "cut out the middleman" so to speak. Your new equations will define the same curve in configuration space, and you can determine the relative order of events just by tracing along that curve (as long as there are no "singularities"---points where two different values of t gave you the same point in the configuration space).
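A standard textbook instance of the same move, for concreteness (my example, not part of the original comment): the parametric circle

$$
x_1 = \cos t, \qquad x_2 = \sin t
\qquad\Longrightarrow\qquad
x_1^2 + x_2^2 = 1,
$$

which picks out exactly the same curve with no mention of t, at the price of no longer saying "when" each point is reached.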

Moreover, from inside the universe there's no way to tell the difference between these two situations. "Two hours ago" can mean either "at t - 2hr" or it can mean "at the point on this curve in configuration space where the clocks all say it's 7:00 instead of 9:00", and there's no experimental distinction to be made between these meanings. So positing a fundamental thing called "time" doesn't actually have any explanatory power!

From this understanding, timeless physics is better viewed as a more parsimonious way to frame any theory, rather than a part of quantum theory specifically. We could just as well explain Newtonian physics timelessly.

Comment by imaxwell on Singularity Institute now accepts donations via Bitcoin · 2011-05-04T14:11:39.883Z · LW · GW

There's a bitcoin escrow service called ClearCoin that includes the option of sending escrowed funds to one of their listed nonprofits, rather than back to the payer, if the escrow expires. I asked the site owner to include the SI donation address as an option and have been using it since.

Unfortunately for SI, everyone I've dealt with has been honest so far! I'll have to actually donate on purpose instead of incidentally, it seems.

Comment by imaxwell on Rationality Quotes: February 2011 · 2011-05-02T12:01:15.291Z · LW · GW

I just had to comment on this, it's too perfect. Thanks.

Comment by imaxwell on Singularity Institute now accepts donations via Bitcoin · 2011-03-16T15:41:50.389Z · LW · GW

If one other person will make an offer to round out the 70 BTC, I can create a new BPM account and handle the distribution myself, returning the principal to Kevin and whomever else and sending the rest to SIAI.

Edited to point out: a higher bid will still win.

Comment by imaxwell on Singularity Institute now accepts donations via Bitcoin · 2011-03-16T04:47:36.451Z · LW · GW

If you want to donate to SIAI, I have an offer that will probably allow you to donate more!

My Radeon 5970 should generate close to 125 BTC per core per month at current difficulty levels. For an up-front fee of 70 BTC (about $60) or more, I will direct one core at the server of your choice (I recommend a pooling service like that at mining.bitcoin.cz) for one calendar month. I will gain the certainty of paying my huge electric bill, and you will gain a better-than-80% expected profit in only a month.

If you're interested in this offer, message me with a bid. Preference will be given to higher bids and to those who plan to donate some or all of their profit to SIAI---the higher the promised donation, the better.

If you're interested but don't know how to set up a mining server, I will walk you through it if your bid is accepted. It's really quite easy.

Comment by imaxwell on Singularity Institute now accepts donations via Bitcoin · 2011-03-15T15:08:41.597Z · LW · GW

I got 0.05 BTC from the Bitcoin Faucet. It looks like there's nothing meaningful I can do with 0.05 BTC, and I'm not certain I'll ever have more, so I'm donating it to SIAI instead of back to the Faucet.

I'm really just posting this so that the 0.05 BTC donation doesn't come off as the signalling equivalent of a 10-cent tip. If I do make more BTC, I'll donate at least some of that as well. (Actually, donating all of it sounds like a good way to avoid annoying tax questions, so I'm considering that.)

Comment by imaxwell on Rationality Quotes: February 2011 · 2011-02-04T15:30:08.695Z · LW · GW

I never thought of this quote outside the context of programming before reading it here, but it does seem pretty generally applicable. The force behind premature optimization is the force that causes me to spend so much time comparison shopping that the time lost eventually outweighs the price difference; or to fail to give money to charity at all because there may be a better charity to give it to. (I've recently started donating the dollar to Vague Good Cause at stores and restaurants when asked, because it's all well and good to say "SIAI is better," but that defense only works if I then actually give the dollar to SIAI.)

Comment by imaxwell on Optimal Employment · 2011-01-31T15:38:44.628Z · LW · GW

Is overqualification a concern? That is: if I'm already working toward a Ph.D. and I decide to complete that first, will it work against me in finding hospitality work? (I'd guess such jobs have sufficiently high turnover anyway that the answer is no.)

Do you know if the situation is equally good for more "career-like" jobs? (I.e. instead of making good money without too much strain, can I bust my ass to make even more money?)

Even if both answers are the less desirable ones, I'm going to discuss this seriously with my wife.

Comment by imaxwell on The Santa deception: how did it affect you? · 2010-12-20T23:50:04.336Z · LW · GW

I don't think learning the truth really affected my development one way or the other. One day when I was I-don't-remember-how-old, I asked to be told the truth and I was. I do remember that I wasn't very good at maintaining the conspiracy for my younger siblings, and almost let the truth slip a few times---and I'm still not very good at it and have almost let the truth slip to children in my family.

I am wary of whether habitually lying to kids is really as good for them as we rationalize, but the real reason I'm inclined not to spread this myth to my hypothetical children is that I am so uncomfortable with lies myself---I can barely even bring myself to do it for the sake of a surprise party or something innocuous like that. I'd rather tell it as what it is: a fun story, like Cinderella or Harry Potter.

Comment by imaxwell on A sense of logic · 2010-12-13T04:55:42.983Z · LW · GW

Upvoted mostly for the self-honesty. I wonder sometimes if I'm more 'forgiving' of bad arguments for positions I already agree with. (Answer: probably, but unless I know how much, it'll be hard to correct for.)

I do find it pretty unpleasant when people hold my opinion for reasons that are... lacking, but I think this may be more of an allergy to cliché than to bad logic. I get the same sensation when I hear people intone individualist or liberal catch-phrases in full sincerity, regardless of how much I might agree with the sentiment.

Comment by imaxwell on Reductionism · 2010-11-08T04:17:14.071Z · LW · GW

You can go one step further. If folks like Barbour are correct that time is not fundamental, but rather something that emerges from causal flow, then it ought to be that our universe can be simulated in a timeless manner as well. So a model of this universe need not actually be "executed" at all---a full specification of the causal structure ought to be enough.

And once you've bought that, why should the medium for that specification matter? A mathematical paper describing the object should be just as legitimate as an "implementation" in magnetic patterns on a platter somewhere.

And if it doesn't matter what the medium is, why should it matter whether there's a medium at all? Theorems don't become true because someone proves them, so why should our universe become real because someone wrote it down?

If I understand Max Tegmark correctly, this is actually the intuition at the core of his mathematical universe hypothesis (Wikipedia, but with some good citations at the bottom), which basically says: "We perceive the universe as existing because we are in it." Dr. Tegmark says that the universe is one of many coherent mathematical structures, and in particular it's one that contains sentient beings, and those sentient beings necessarily perceive themselves and their surroundings as "real". Pretty much the only problem I have with this notion is that I have no idea how to test it. The best I can come up with is that our universe, much like our region of the universe, should turn out to be almost but not quite ideal for the development of nearly-intelligent creatures like us, but I've seen that suggested of models that don't require the MUH as well. Aside from that, I actually find it quite compelling, and I'd be a bit sad to hear that it had been falsified.

Interestingly enough, a version of the MUH showed up in Dennis Paul Himes' [An Atheist Apology](http://www.cookhimes.us/dennis/aaa.htm) (as part of the "contradiction of omnipotent agency" argument), written just a few years after Dr. Tegmark started writing about these ideas. Mr. Himes' essay was very influential on me as a teenager, and yet I never did hear of the "mathematical universe hypothesis" by that name until a few years ago. In past correspondence, he wrote that the argument was original to him as far as he knew, and at least one of his commenters claimed to also have developed it independently, so it may be a more intuitively plausible idea than it seems to be at first glance.

Comment by imaxwell on Reductionism · 2010-11-07T17:43:48.490Z · LW · GW

Probably no one will ever see this comment, but.

"I wish I knew where reality got its computing power."

If reality had less computing power, what differences would you expect to see? You're part of the computation, after all; if everything stood still for a few million meta-years while reality laboriously computed the next step, there's no reason this should affect what you actually end up experiencing, any more than it should affect whether planets stay in their orbits or not. For all we know, our own computers are much faster (from our perspective) than the machines on which the Dark Lords of the Matrix are simulating us (from their perspective).

Comment by imaxwell on Rationality quotes: June 2010 · 2010-06-08T15:36:35.079Z · LW · GW

I would prefer to say that conforming your thoughts to reality is science, and conforming reality to your thoughts is engineering...

Comment by imaxwell on Ugh fields · 2010-04-19T04:55:16.206Z · LW · GW

I love this post.

The really depressing thing about my "ugh fields" is how they stop me from doing things that really aren't even difficult or time-consuming, like filling out a one-page form and putting it in the mail. On the other hand, this is just as likely to come up with things that may be difficult or time-consuming, but are in some sense 'fun' for me: for instance, I've found it steadily harder to buckle down and study as I've progressed through school, in a topic (mathematics) that I started in because it was fun for me.

I hope that having the phrase "ugh field" to associate with this will remind me to do something about it. I'd like to say it will improve my life, but I'll have to go live some of my life for a while and see how improved it is.

I guess what I should do is make some sort of rule: When I encounter an Ugh Field, I should immediately do whatever thing it is I'm feeling the 'ugh' about.

Comment by imaxwell on Attention Lurkers: Please say hi · 2010-04-19T04:47:09.650Z · LW · GW

Hi.

It's been quite a while since I posted here, so long that I initially couldn't remember my username. I rarely have much to add, and even though "I agree with this post" posts are, I think, slightly more accepted here than in some places, just agreeing doesn't by itself motivate me to say so most of the time.

Comment by imaxwell on You Be the Jury: Survey on a Current Event · 2009-12-10T19:32:11.117Z · LW · GW

The main reason my estimate is so high is that

  • I know my information came from two heavily biased sites, and

  • I found the "innocent" site a lot easier to follow and therefore paid more attention to it, so I know my information is particularly biased in that direction.

That said, I did consider a more-arrogant probability of 0.25 or so. My caution in this case isn't on general principle, but because I have something of an old history of embracing cause celebre cases like this only to decide on further reading that the person I'm defending is guilty as hell.

Comment by imaxwell on You Be the Jury: Survey on a Current Event · 2009-12-10T19:21:54.990Z · LW · GW

Given the information you're going on, that's not a bad estimate. Actually reading on the case may change your opinion dramatically, though; why not try it?

Comment by imaxwell on You Be the Jury: Survey on a Current Event · 2009-12-10T18:18:52.792Z · LW · GW

Do you really find it equally likely that Knox/Sollecito are guilty as you do that Guede is guilty? It seems like most of the weight should be given to Knox-Sollecito-Guede and Guede as possibilities, so unless you think the probability of Guede acting alone is very close to zero, this is sort of bizarre. In particular, it indicates that your P(KSG | G) is very close to 100%.
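To spell out the arithmetic behind that last sentence (my own gloss, using the comment's abbreviations KSG and G): since the Knox-Sollecito-Guede scenario entails that Guede is guilty,

$$
P(\mathrm{KSG} \mid \mathrm{G}) = \frac{P(\mathrm{KSG} \wedge \mathrm{G})}{P(\mathrm{G})} = \frac{P(\mathrm{KSG})}{P(\mathrm{G})},
$$

so assigning nearly equal probabilities to KSG and G pushes this conditional probability up toward 1.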

Comment by imaxwell on You Be the Jury: Survey on a Current Event · 2009-12-10T17:55:08.300Z · LW · GW

I didn't know anything about this case until I read this post.

My first impression: Wow, there sure is a lot of information on what a nice person Meredith Kercher was on that one site, and what a nice person Amanda Knox is on that other site. (The latter is at least potentially relevant, if you think a nasty person is more likely to kill someone.)

Anyway: Knox: ~35%; Sollecito: ~35%; Guede: ~95%. And I'd say it's about 60% that all of these probabilities are on the same side of 50% as yours.

(I had a bunch of information here on how I came to my opinion, but I don't think it's really important. Suffice it to say that the evidence I've seen against Ms. Knox and Mr. Sollecito is pretty terrible, but the evidence I haven't seen may be much better.)

Anyway, if those are the best, clearest sources available to me then I am pretty doubtful of my ability to form a useful opinion on this case at all.

Comment by imaxwell on Less Wrong Q&A with Eliezer Yudkowsky: Ask Your Questions · 2009-11-14T01:31:25.981Z · LW · GW

Previously, in Ethical Injunctions and related posts, you said that, for example,

You should never, ever murder an innocent person who's helped you, even if it's the right thing to do; because it's far more likely that you've made a mistake, than that murdering an innocent person who helped you is the right thing to do.

It seems like you're saying you will not and should not break your ethical injunctions because you are not smart enough to anticipate the consequences. Assuming this interpretation is correct, how smart would a mind have to be in order to safely break ethical injunctions?

Comment by imaxwell on Marketing rationalism · 2009-04-15T17:08:02.011Z · LW · GW

It took me a long time to respond to this because I found the question resistant to analysis. My immediate impulse is to shout, "But, dammit, my rational argument is really truly actually valid and your bible quotation isn't!" This is obviously irrelevant since, by hypothesis, my goal is to be convincing rather than correct.

After thinking about it, I've decided that the reason the question was hard to analyze is that this hypothesis is so false for me. You haven't placed any constraints at all; in particular, you haven't said that my goal is

  • to convince others to be more rational via a correct argument, or
  • to convince others to be more rational, provided that this is true, or
  • to convince others to be more rational, as long as I don't have to harm anyone to do it, or
  • to convince others to be more rational, as long as I can maintain my integrity in the process.

If I take "convince others to be more rational" as a supergoal, then of course I should lie and brain-hack and do whatever is in my power to turn others into rationalists. But in reality, many of my highest values have less to do with the brain-states of others, than with what comes out of my own mouth. Turning others into rationalists at the price of becoming a charlatan myself would not be such a great tradeoff.

I regularly "lose" debates because I'm not willing to use rhetoric I personally find unconvincing. (Though I'm probably flattering myself to suppose that I would "win" otherwise.) To give a specific example, I am deeply opposed to drug prohibition, while openly predicting that more people will be addicted to drugs if they are legally available. This is a very difficult position to quickly relay to someone who doesn't already agree with me, but any simplification would be basically dishonest. I could invent instrumental reasons why I shouldn't use a basically dishonest argument in this case, but the truth is that I just hate lying to people, even more than I hate letting them walk around with false, deadly ideas.

I imagine Eliezer and Robin run into this themselves, when they say that a certain unusual medical procedure only has a small probability of success, but should be undergone anyway because the payoff is so high. Many people will hear "low probability of success" and stop listening, and many of those people will therefore die unnecessarily. Does this mean Eliezer and Robin should start saying that there is a high probability of success after all, in order to better save lives?

Now maybe your point here is that yes, we all should be lying for the sake of our goals---that we should throw out our rules of engagement and wage a total war on epistemic wrongness. I have considered this myself, and honestly I don't have a good rebuttal. I can only say that I'm not ready to be that kind of warrior.

Comment by imaxwell on The uniquely awful example of theism · 2009-04-12T16:29:51.436Z · LW · GW

Isn't that pretty much what CronoDAS said, though? The stated premise of banning certain drugs is that they are harmful (either individually or socially). So, again, drug prohibition fails on its own terms, because the people doing the banning are not even choosing the most harmful substances to ban.

A serious attempt to ban harmful substances would start with things like rat poison and cocaine, and work its way down as far as things like tobacco and alcohol and cheesecake, but probably leave out things like psilocybin and THC.

But of course I'm presuming there is a serious attempt to ban harmful substances. In reality there is a serious attempt to ban getting high, which is not the same thing at all.

Comment by imaxwell on Where are we? · 2009-04-03T02:30:19.150Z · LW · GW

I attend the University of New Hampshire in Durham, NH, and live on campus.

I am in Massachusetts about every other weekend on average, and I considered posting this in "Massachusetts." I could easily make a MA meetup.

Comment by imaxwell on Slow down a little... maybe? · 2009-03-07T04:23:15.549Z · LW · GW

I've given up on more than one message board because it grew to the point where I could no longer reasonably stay up-to-date. It would be nice if LW didn't develop the same problem.

That said, this seems like a hard rule to enforce, unless we happen to have fewer than three people writing posts. I was about to say "how do we decide which three posts should appear" and then I remembered that that's what voting is for.

If this 'explosion' is temporary, I have no problem dealing with it for a few days or weeks. This is the time when LW is still new and exciting and everyone will be willing to read two dozen posts a day. By the time the novelty of reading has worn off, maybe the novelty of writing will as well.

Comment by imaxwell on Slow down a little... maybe? · 2009-03-07T04:13:34.212Z · LW · GW

Agree with this comment. It doesn't seem that OB is going to shut down altogether, and I expect most of us intend to keep reading it.

Comment by imaxwell on Kinnaird's truels · 2009-03-06T04:18:53.807Z · LW · GW

This is a really good question.

As others have pointed out, the real issue is not competence but perceived competence. But as others haven't really pointed out, as one deals with more perceptive rivals, the difference between competence and perceived competence approaches zero (if only asymptotically).

As for the last question---what should we do in a truel-like situation---the answer is, I guess, "That depends." If we're talking about a situation where one can falsify incompetence, or perhaps form an unbreakable contract to act incompetently, then the old chestnut applies: rational agents should win. In a literal truel, you could do this by, say, agreeing to fire roughly one-third of your shots (chosen by die roll) straight into the air, provided there were a way of holding you to it. In other cases, as some people pointed out, maybe you could just get really good at convincing people of your incompetence (a.k.a. "nonchalance").
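For anyone who wants to play with the die-roll commitment just described, here is a rough Monte Carlo sketch (entirely my own construction: the hit probabilities, the fixed shooting order, and the rule that everyone targets the living rival with the highest perceived accuracy are hypothetical assumptions, not claims about real truels):

```python
import random

def simulate(true_acc, perceived_acc, wasted_fraction, trials=20_000):
    """Estimate each player's survival frequency in a repeated toy truel."""
    players = list(true_acc)
    survivals = {p: 0 for p in players}
    for _ in range(trials):
        alive = set(players)
        for _ in range(50):                      # more than enough rounds
            for shooter in players:
                if shooter not in alive or len(alive) == 1:
                    continue
                if random.random() < wasted_fraction.get(shooter, 0.0):
                    continue                     # this shot goes into the air
                # aim at the most dangerous-looking living rival
                target = max((p for p in alive if p != shooter),
                             key=lambda p: perceived_acc[p])
                if random.random() < true_acc[shooter]:
                    alive.discard(target)
            if len(alive) == 1:
                break
        for p in alive:
            survivals[p] += 1
    return {p: round(survivals[p] / trials, 3) for p in players}

acc = {"A": 0.9, "B": 0.8, "C": 0.7}                 # hypothetical accuracies
print(simulate(acc, acc, {}))                        # A is seen as the top threat
print(simulate(acc, {**acc, "A": 0.6}, {"A": 1/3}))  # A precommits to wasting 1/3
```

Comparing the two printed survival rates shows whether the commitment helps the strongest shooter under these particular assumptions.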

But in a situation where this is impossible? Where competence and perceived competence are one? Then there is no strategy, any more than there was a strategy for passing as white in the last century. You will be punished for being good (unless you're so good that you win anyway).

Regarding the evolution of mediocrity: In some cases, Evolution selects for people who are good at convincing others that they are X, and, by a not-so-stunning coincidence, ends up with people who actually believe they are X, or even really are X. I don't know if "competence" is the sort of thing this works for, though, since it is in itself a genetic advantage almost by definition. Self-perceived incompetence is just so much better a strategy than actual incompetence, and self-delusion such a commonly evolved trait, that I have trouble believing even a dolt like Evolution would fail to think of it.

Comment by imaxwell on That You'd Tell All Your Friends · 2009-03-02T03:22:19.665Z · LW · GW

  • Firstly, the combined ideas of "something to protect" and "rational agents should win" and "joy in the merely real." The idea that you should want to be rational not because it's inherently virtuous or important, but because it will allow you to get what you ultimately want, whatever that might be. The person I want to give this book to, currently believes that rationality is defined by cold, bloodless disinterest in the world, and thus has no interest in it.

  • Secondly, the combined ideas of "writing the bottom line first" and "guessing the teacher's password" and "your ability to be more confused by fiction". That you cannot first choose an opinion from the ether, and then figure out how to argue for it. In the effort to correlate your beliefs with reality, it is only your beliefs that you are capable of changing. The person I want to give this book to, currently believes that it is better to "win" a debate than to "lose", and thus defends his beliefs against all comers without first finding out where they come from.

  • Thirdly, the idea that nature is allowed to set impossibly high standards and flunk you even if you do everything humanly possible to meet them. The person I want to give this book to, currently thinks that "I'm doing the best I can" is a viable excuse for failure.

Of course there should be advice on how, specifically, to be more rational, and the many failure modes possible. These three ideas are primarily the motivational ones: that rationality is necessary to anything you want to accomplish, and yet so difficult that it will take you a lifetime of effort to maintain a fighting chance of doing so. They are the ideas that, if internalized, will make people really want to try harder.

Comment by imaxwell on The Most Frequently Useful Thing · 2009-03-01T03:42:20.283Z · LW · GW

A few ideas:

  • the difference between Nobly Confessing One's Limitations and actually preparing to be wrong. I was pretty guilty of the former in the past. I think I'm probably still pretty guilty of it, but I am on active watch for it.

  • the idea that one should update on every piece of evidence, however slightly. This is something that I "knew" without really understanding its implications. In particular, I used to habitually leave debates more sure of my position than when I went in---yet this can't possibly be right, unless my opposition were so inept as to argue against their own position. So there's one bad habit I've thrown out. I've gone from being "open-minded" enough to debate my position, to being actually capable of changing my position.

  • That I should go with the majority opinion unless I have some actual reason to think I can do better. To be fair, on the matters where I actually had a vested interest, I followed this advice before receiving it; so perhaps this shouldn't be under 'useful' per se, although I've improved my predictions drastically by just parroting InTrade. (I don't bet on InTrade because I've never believed I could do better.)

  • Sticking your neck out and making a prediction, so that you have the opportunity to say "OOPS" as soon as possible.