Posts

Truth: It's Not That Great 2014-05-04T22:07:54.354Z
Effective Effective Altruism Fundraising and Movement-Building 2014-03-28T21:05:09.726Z
Self-Congratulatory Rationalism 2014-03-01T08:52:13.172Z
Steelmanning Young Earth Creationism 2014-02-17T07:17:19.181Z
White Lies 2014-02-08T01:20:29.528Z
Volunteering programmer hours / discussing how to improve LessWrong's software 2014-01-26T22:47:01.960Z
Things I Wish They'd Taught Me When I Was Younger: Why Money Is Awesome 2014-01-16T07:27:47.032Z
I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription 2014-01-11T10:39:04.856Z
Critiquing Gary Taubes, Final: The Truth About Diets and Weight Loss 2014-01-04T05:16:23.985Z
Critiquing Gary Taubes, Part 4: What Causes Obesity? 2013-12-31T22:04:47.781Z
What is the Main/Discussion distinction, and what should it be? 2013-12-30T05:09:54.547Z
Critiquing Gary Taubes, Part 3: Did the US Government Give Us Absurd Advice About Sugar? 2013-12-30T00:58:31.461Z
Critiquing Gary Taubes, Part 2: Atkins Redux 2013-12-30T00:58:23.884Z
Donating to MIRI vs. FHI vs. CEA vs. CFAR 2013-12-27T03:43:04.752Z
Critiquing Gary Taubes, Part 1: Mainstream Nutrition Science on Obesity 2013-12-25T18:27:32.219Z
The Statistician's Fallacy 2013-12-09T04:48:18.532Z
The Limits of Intelligence and Me: Domain Expertise 2013-12-07T08:23:47.600Z
Open Thread, December 2-8, 2013 2013-12-03T05:10:53.909Z
According to Dale Carnegie, You Can't Win an Argument—and He Has a Point 2013-11-30T06:23:33.609Z
Meetup : San Francisco / App Academy meetup [LOCATION CHANGE] 2013-11-29T17:21:04.485Z
On Walmart, And Who Bears Responsibility For the Poor 2013-11-27T05:08:14.668Z
Links: so-called "knockout game" a "myth" and a "bogus trend." 2013-11-25T16:21:54.856Z
Mainstream Epistemology for LessWrong, Part 1: Feldman on Evidentialism 2013-11-16T16:16:21.749Z
AI Policy? 2013-11-11T19:40:07.830Z
The Evolutionary Heuristic and Rationality Techniques 2013-11-10T07:30:54.279Z
Academic Cliques 2013-11-08T04:27:43.075Z
Yes, Virginia, You Can Be 99.99% (Or More!) Certain That 53 Is Prime 2013-11-07T07:45:07.565Z
Is the orthogonality thesis at odds with moral realism? 2013-11-05T20:47:52.979Z
No Universally Compelling Arguments in Math or Science 2013-11-05T03:32:42.920Z
Lone Genius Bias and Returns on Additional Researchers 2013-11-01T00:38:40.868Z
Bayesianism for Humans 2013-10-29T23:54:14.890Z
Why didn't people (apparently?) understand the metaethics sequence? 2013-10-29T23:04:25.408Z
Is it worth your time to read a lot of self help and how to books? 2013-10-28T02:25:20.296Z
Replicating Douglas Lenat's Traveller TCS win with publicly-known techniques 2013-10-26T15:47:03.222Z
Only You Can Prevent Your Mind From Getting Killed By Politics 2013-10-26T13:59:59.544Z
What Can We Learn About Human Psychology from Christian Apologetics? 2013-10-21T22:02:19.113Z
Looking for opinions of people like Nick Bostrom or Anders Sandberg on current cryo techniques 2013-10-17T20:36:53.567Z
Trusting Expert Consensus 2013-10-16T20:22:10.507Z
A Voting Puzzle, Some Political Science, and a Nerd Failure Mode 2013-10-10T02:10:48.519Z
Torture vs. Shampoo 2013-09-23T04:34:25.417Z
How sure are you that brain emulations would be conscious? 2013-08-26T06:21:17.996Z
Learning programming: so I've learned the basics of Python, what next? 2013-06-17T23:31:37.478Z
Who thinks quantum computing will be necessary for AI? 2013-05-28T22:59:57.039Z
Could Robots Take All Our Jobs?: A Philosophical Perspective 2013-05-24T22:06:54.688Z
Can somebody explain this to me?: The computability of the laws of physics and hypercomputation 2013-04-21T21:22:48.208Z
Willing gamblers, spherical cows, and AIs 2013-04-08T21:30:24.813Z
What's your #1 reason to care about AI risk? 2013-01-20T21:52:00.736Z
Quote on Nate Silver, and how to think about probabilities 2012-11-02T04:29:26.247Z
The basic argument for the feasibility of transhumanism 2012-10-14T08:04:08.557Z
Rigorous academic arguments on whether AIs can replace all human workers? 2012-08-29T07:30:55.073Z

Comments

Comment by ChrisHallquist on The Best Textbooks on Every Subject · 2015-07-26T23:38:52.837Z · LW · GW

On philosophy, I think it's important to realize that most university philosophy classes don't assign textbooks in the traditional sense. They assign anthologies. So rather than read Russell's History of Western Philosophy or The Great Conversation (both of which I've read), I'd recommend something like The Norton Introduction to Philosophy.

Comment by ChrisHallquist on Harry Potter and the Methods of Rationality discussion thread, January 2015, chapter 103 · 2015-01-29T11:17:20.209Z · LW · GW

Link(s)?

Comment by ChrisHallquist on Harry Potter and the Methods of Rationality discussion thread, January 2015, chapter 103 · 2015-01-29T11:16:48.818Z · LW · GW

OH MY GOD. THAT WAS IT. THAT WAS VOLDEMORT'S PLAN. RATIONAL!VOLDEMORT DIDN'T TRY TO KILL HARRY IN GODRIC'S HOLLOW. HE WAITED ELEVEN YEARS TO GIVE HARRY A GRADE IN SCHOOL SO THAT ANY ASSASSINATION ATTEMPT WOULD BE IN ACCORDANCE WITH THE PROPHECY.

Comment by ChrisHallquist on 2014 Less Wrong Census/Survey · 2014-10-25T03:54:55.143Z · LW · GW

Duplicate comment, probably should be deleted.

Comment by ChrisHallquist on 2014 Less Wrong Census/Survey · 2014-10-25T03:53:57.102Z · LW · GW

Agreed. I actually looked up tax & spending for UK vs. Scandinavian countries, and they aren't that different. It may not be a good distinction.

Comment by ChrisHallquist on 2014 Less Wrong Census/Survey · 2014-10-25T03:52:12.440Z · LW · GW

I thought of this last year after I completed the survey, and rated anti-agathics less probable than cryonics. This year I decided cryonics counted, and rated anti-agathics 5% higher than cryonics. But it would be nice for the question to be clearer.

Comment by ChrisHallquist on 2014 Less Wrong Census/Survey · 2014-10-25T03:45:59.728Z · LW · GW

Done, except for the digit ratio, because I do not have access to a photocopier or scanner.

Comment by ChrisHallquist on Non-standard politics · 2014-10-25T02:50:10.561Z · LW · GW

Liberal here; I think my major heresy is being pro-free trade.

Also, I'm not sure if there's actually a standard liberal view of zoning policy, but it often feels like the standard view is that we need to keep restrictive zoning laws in place to keep out those evil gentrifiers, in which case my support for looser zoning regulations is another major heresy.

You could argue I should call myself a libertarian, because I agree with the main thrust of Milton Friedman's book Capitalism and Freedom. However, I suspect a politician running on Friedman's platform today would be branded a socialist if a Democrat, and a RINO if a Republican.

(Friedman, among other things, supported a version of guaranteed basic income. To which today's GOP mainstream would probably say, "but if we do that, it will just make poor people even lazier!")

Political labels are weird.

Comment by ChrisHallquist on question: the 40 hour work week vs Silicon Valley? · 2014-10-25T02:43:25.558Z · LW · GW

and anyone smart has already left the business since it's not a good way of making money.

Can you elaborate? The impression I've gotten from multiple converging lines of evidence is that there are basically two kinds of VC firms: (1) a minority that actually know what they're doing, make money, and don't need any more investors and (2) the majority that exist because lots of rich people and institutions want to be invested in venture capital, can't get in on investing with the first group, and can't tell the two groups apart.

A similar pattern appears to occur in the hedge fund industry. In both cases, the industry-wide stats look terrible, but that doesn't mean that people like Peter Thiel or George Soros, who are still in the game, aren't smart.

Comment by ChrisHallquist on Could Robots Take All Our Jobs?: A Philosophical Perspective · 2014-09-17T11:30:07.690Z · LW · GW

Hi! Welcome to LessWrong! A lot of people on LessWrong are worried about the problem you describe, which is why the Machine Intelligence Research Institute exists. In practice, the problem of getting an AI to share human values looks very hard. But, given that human values are implemented in human brains, it looks like it should be possible in principle to implement them in computer code as well.

Comment by ChrisHallquist on Announcing The Effective Altruism Forum · 2014-08-27T05:29:37.642Z · LW · GW

I think the "Well-kept gardens die by pacifism" advice is cargo culted from a Usenet world where there weren't ways to filter by quality aside from the binary censor/don't censor.

Ah... you just resolved a bit of confusion I didn't know I had. Eliezer often seems quite wise about "how to manage a community" stuff, but also strikes me as a bit too ban-happy at times. I had thought it was just overcompensation in response to a genuine problem, but it makes a lot more sense as coming from a context where more sophisticated ways of promoting good content aren't available.

Comment by ChrisHallquist on [link] [poll] Future Progress in Artificial Intelligence · 2014-07-10T01:23:46.935Z · LW · GW

So regarding MIRI, you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years.

I should note that it's not obvious what the experts responding to this survey thought "greatly surpass" meant. If "do everything humans do, but at x2 speed" qualifies, you might expect AI to "greatly surpass" human abilities in 2 years even on a fairly unexciting Robin Hansonish scenario of brain emulation + continued hardware improvement at roughly current rates.

Comment by ChrisHallquist on Consider giving an explanation for your deletion this time around. "Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild" · 2014-07-08T16:01:43.098Z · LW · GW

I like the idea of this fanfic, but it seems like it could have been executed much better.

EDIT: Try re-writing later? As the saying goes, "Write drunk; edit sober."

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-07-05T23:41:11.895Z · LW · GW

So I normally defend the "trust the experts" position, and I went to grad school for philosophy, but... I think philosophy may be an area where "trust the experts" mostly doesn't work, simply because with a few exceptions the experts don't agree on anything. (Fuller explanation, with caveats, here.)

Comment by ChrisHallquist on [moderator action] Eugine_Nier is now banned for mass downvote harassment · 2014-07-05T06:35:29.240Z · LW · GW

Have you guys given any thought to doing pagerankish stuff with karma?

Can you elaborate more? I'm guessing you mean people with more karma --> their votes count more, but it isn't obvious how you do that in this context.

Comment by ChrisHallquist on [moderator action] Eugine_Nier is now banned for mass downvote harassment · 2014-07-04T15:59:15.135Z · LW · GW

Everyone following the situation knew it was Eugine. At least one victim named him publicly. Sometimes he was referred to obliquely as "the person named in the other thread" or something like that, but the people who were following the story knew what that meant.

Comment by ChrisHallquist on [moderator action] Eugine_Nier is now banned for mass downvote harassment · 2014-07-03T20:28:38.990Z · LW · GW

I'm glad this was done, if only to send a signal to the community that something is being done, but you have a point that this is not an ideal solution and I hope a better one is implemented soon.

Comment by ChrisHallquist on Downvote stalkers: Driving members away from the LessWrong community? · 2014-07-03T04:13:33.634Z · LW · GW

I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."

If that's the intended message, I submit that when someone is a public figure who writes and speaks about controversial subjects and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.

Comment by ChrisHallquist on Downvote stalkers: Driving members away from the LessWrong community? · 2014-07-02T05:31:11.305Z · LW · GW

The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).

My guess is that he cares not nearly as much about LW in general now as he used to...

This. Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major reason why this comment is the first thing of any kind I've posted on LessWrong in well over a month.

I know a number of people have been working on launching a LessWrong-like forum dedicated to Effective Altruism, which is supposedly going to launch very soon. Here's hoping it takes off—because honestly, I don't have much hope for LessWrong at this point.

Comment by ChrisHallquist on Truth: It's Not That Great · 2014-05-06T21:31:32.719Z · LW · GW

...my motivation has been "I see people around me succeeding by these means where I have failed, and I want to be like them".

Seems like noticing yourself wanting to imitate successful people around you should be an occasion for self-scrutiny. Do you really have good reasons to think the things you're imitating them on are the cause of their success? Are the people you're imitating more successful than other people who don't do those things, but who you don't interact with as much? Or is this more about wanting to affiliate with the high-status people you happen to be in close proximity to?

Comment by ChrisHallquist on LessWrong as social catalyst · 2014-04-29T04:27:29.285Z · LW · GW

I love how understated this comment is.

Comment by ChrisHallquist on Request for concrete AI takeover mechanisms · 2014-04-29T04:26:23.540Z · LW · GW

People voluntarily hand over a bunch of resources (perhaps to a bunch of different AIs) in the name of gaining an edge over their competitors, or possibly for fear of their competitors doing the same thing to gain such an edge. Or just because they expect the AI to do it better.

Comment by ChrisHallquist on Open Thread April 8 - April 14 2014 · 2014-04-09T20:03:31.660Z · LW · GW

Maximizing your chances of getting accepted: Not sure what to tell you. It's mostly about the coding questions, and the coding questions aren't that hard—"implement bubble sort" was one of the harder ones I got. At least, I don't think that's hard, but some people would struggle to do that. Some people "get" coding, some don't, and it seems to be hard to move people from one category to another.
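
For a sense of what "not that hard" means here, below is a minimal bubble sort sketch in Python. It's purely illustrative on my part, not an actual App Academy question or model solution.

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs until no swaps are needed."""
    items = list(items)  # sort a copy rather than mutating the caller's list
    for pass_end in range(len(items) - 1, 0, -1):
        swapped = False
        for i in range(pass_end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # list is already sorted; stop early
            break
    return items

assert bubble_sort([3, 1, 4, 1, 5]) == [1, 1, 3, 4, 5]
```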

Maximizing value given that you are accepted: Listen to Ned. I think that was the main piece of advice people from our cohort gave people in the incoming cohort. Really. Ned, the lead instructor, knows what he's doing, and really cares about the students who go through App Academy. And he's seen what has worked or not worked for people in the past.

(I might also add, based on personal experience, "don't get cocky about the assessments." Also "get enough sleep," and should you end up in a winter cohort, "if you go home for Christmas, fly back a day earlier than necessary.")

Comment by ChrisHallquist on Effective Effective Altruism Fundraising and Movement-Building · 2014-03-29T04:08:30.117Z · LW · GW

Presumably. The question is whether we should accept that belief of theirs.

Comment by ChrisHallquist on A few remarks about mass-downvoting · 2014-03-17T20:51:55.312Z · LW · GW

And the solution to how not to catch false positives is to use some common sense. You're never going to have an automated algorithm that can detect every instance of abuse, but even an instance that is not detectable by automatic means can be detectable if someone with sufficient database access takes a look when it is pointed out to them.

Right on. The solution to karma abuse isn't some sophisticated algorithm. It's extremely simple database queries, in plain English along the lines of "return list of downvotes by user A, and who was downvoted," "return downvotes on posts/comments by user B, and who cast the vote," and "return list of downvotes by user A on user B."
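
To make that concrete, here's a rough sketch of those three queries in Python with SQLite. The `votes` table and its columns are hypothetical stand-ins I'm inventing for illustration, not LessWrong's actual schema.

```python
import sqlite3

# Assumed (hypothetical) schema: votes(voter_id, target_author_id, direction, created_at),
# where direction is -1 for a downvote and +1 for an upvote.

def downvotes_cast_by(conn: sqlite3.Connection, voter_id: int):
    """List of downvotes by user A, grouped by who was downvoted."""
    return conn.execute(
        "SELECT target_author_id, COUNT(*) FROM votes "
        "WHERE voter_id = ? AND direction = -1 "
        "GROUP BY target_author_id ORDER BY COUNT(*) DESC",
        (voter_id,),
    ).fetchall()

def downvotes_received_by(conn: sqlite3.Connection, author_id: int):
    """Downvotes on posts/comments by user B, grouped by who cast the vote."""
    return conn.execute(
        "SELECT voter_id, COUNT(*) FROM votes "
        "WHERE target_author_id = ? AND direction = -1 "
        "GROUP BY voter_id ORDER BY COUNT(*) DESC",
        (author_id,),
    ).fetchall()

def downvotes_from_a_on_b(conn: sqlite3.Connection, voter_id: int, author_id: int):
    """Count of downvotes by user A specifically on user B."""
    return conn.execute(
        "SELECT COUNT(*) FROM votes "
        "WHERE voter_id = ? AND target_author_id = ? AND direction = -1",
        (voter_id, author_id),
    ).fetchone()[0]
```

A moderator eyeballing the output of the first two queries for a lopsided pattern is exactly the "common sense" step described above.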

Comment by ChrisHallquist on Rationality Quotes March 2014 · 2014-03-10T02:23:10.762Z · LW · GW

Ah, of course, because it's more important to signal one's pure, untainted epistemic rationality than to actually get anything done in life, which might require interacting with outsiders.

This is a failure mode I worry about, but I'm not sure ironic atheist re-appropriation of religious texts is going to turn off anyone we had a chance of attracting in the first place. Will reconsider this position if someone says, "oh yeah, my deconversion process was totally slowed down by stuff like that from atheists," but I'd be surprised.

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-08T19:05:05.526Z · LW · GW

Nutrition scientists disagree. Politicians and political scientists disagree. Psychologists and social scientists disagree. Now that we know we can be looking for high-quality contrarians in those fields, how do we sort out the high-quality ones from the lower-quality ones?

What's your proposal for how to do that, aside from just evaluating the arguments the normal way? Ignore the politicians, and we're basically talking about people who all have PhDs, so education can't be the heuristic. You also proposed IQ and rationality, but admitted we aren't going to have good ways to measure them directly, aside from looking for "statements that follow proper logical form and make good arguments." I pointed out that "good arguments" is circular if we're trying to decide who to read charitably, and you had no response to that.

That leaves us with "proper logical form," about which you said:

Proper logical form comes cheap, but a surprising number of people don't bother even with that. Do you frequently see people appending "if everything I've said so far is true, then my conclusion is true" to screw with people who judge arguments based on proper logical form?

In response to this, I'll just point out that this is not an argument in proper logical form. It's a lone assertion followed by a rhetorical question.

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-04T05:48:16.396Z · LW · GW

Skimming the "disagreement" tag in Robin Hanson's archives, I found a few posts that I think are particularly relevant to this discussion:

Comment by ChrisHallquist on The sin of updating when you can change whether you exist · 2014-03-04T03:26:33.357Z · LW · GW

Username explicitly linked to torture vs. dust specks as a case where it makes sense to use torture as an example. Username is just objecting to using torture for general decision theory examples where there's no particular reason to use that example.

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-04T02:08:18.122Z · LW · GW

But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain than you are there, I admit I am very skeptical of them.

With philosophy, I think the easiest, most important thing for non-experts to notice is that (with a few arguable exceptions that are independently pretty reasonable) philosophers basically don't agree on anything. In the case of e.g. Plantinga specifically, non-experts can notice that few other philosophers think the modal ontological argument accomplishes anything.

The crackpot warning signs are good (although it's interesting how often basically correct people end up displaying some of them because they get angry at having their ideas rejected and so start acting out...

Examples?

We are so fascinated by the man-bites-dog cases of very intelligent people believing stupid things that it's hard to remember that stupid things are still much, much likelier to be believed by stupid people.

(possible exceptions in politics, but politics is a weird combination of factual and emotive claims, and even the wrong things smart people believe in politics are in my category of "deserve further investigation and charitable treatment".)

I don't think "smart people saying stupid things" reaches anything like man-bites-dog levels of surprisingness. Not only do you have examples from politics, but also from religion. According to a recent study, a little over a third of academics claim that "I know God really exists and I have no doubts about it," which is perhaps a lower rate than among the general public but still a sizeable minority (and the same study found many more academics take some sort of weaker pro-religion stance). And in my experience, even highly respected academics, when they try to defend religion, routinely make juvenile mistakes that make Plantinga look good by comparison. (Remember, I used Plantinga in the OP not because he makes the dumbest mistakes per se but as an example of how bad arguments can signal high intelligence.)

So when I say I judge people by IQ, I think I mean something like what you mean when you say "a track record of making reasonable statements", except basing "reasonable statements" upon "statements that follow proper logical form and make good arguments" rather than ones I agree with.

Proper logical form comes cheap: just add a premise which says, "if everything I've said so far is true, then my conclusion is true." "Good arguments" is much harder to judge, and seems to defeat the purpose of having a heuristic for deciding who to treat charitably: if I say "this guy's arguments are terrible," and you say, "you should read those arguments more charitably," it doesn't do much good for you to defend that claim by saying, "well, he has a track record of making good arguments."

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-03T18:09:51.370Z · LW · GW

I question how objective these objective criteria you're talking about are. Usually when we judge someone's intelligence, we aren't actually looking at the results of an IQ test, so that's subjective. Ditto rationality. And if you were really that concerned about education, you'd stop paying so much attention to Eliezer or people who have a bachelor's degree at best and pay more attention to mainstream academics who actually have PhDs.

FWIW, actual heuristics I use to determine who's worth paying attention to are:

  • What I know of an individual's track record of saying reasonable things.
  • Status of them and their ideas within mainstream academia (but because everyone knows about this heuristic, you have to watch out for people faking it).
  • Looking for other crackpot warning signs I've picked up over time, e.g. a non-expert claiming the mainstream academic view is not just wrong but obviously stupid, or being more interested in complaining that their views are being suppressed than in arguing for those views.

Which may not be great heuristics, but I'll wager that they're better than IQ (wager, in this case, being a figure of speech, because I don't actually know how you'd adjudicate that bet).

It may be helpful, here, to quote what I hope will be henceforth known as the Litany of Hermione: "The thing that people forget sometimes, is that even though appearances can be misleading, they're usually not."

You've also succeeded in giving me second thoughts about being signed up for cryonics, on the grounds that I failed to consider how it might encourage terrible mental habits in others. For the record, it strikes me as quite possible that mainstream neuroscientists are entirely correct to be dismissive of cryonics—my biggest problem is that I'm fuzzy on what exactly they think about cryonics (more here).

Comment by ChrisHallquist on A few remarks about mass-downvoting · 2014-03-03T16:55:01.220Z · LW · GW

Oh, I see now. But why would Eliezer do that? Makes me worry this is being handled less well than Eliezer's public statements indicate.

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-03T08:05:06.152Z · LW · GW

Plantinga's argument defines God as a necessary being, and assumes it's possible that God exists. From this, and the S5 axioms of modal logic, it follows that God exists. But you can just as well argue, "It's possible the Goldbach Conjecture is true, and mathematical truths, if true, are necessarily true, therefore the Goldbach Conjecture is true." Or even "Possibly it's a necessary truth that pigs fly, therefore pigs fly."

(This is as much as I can explain without trying to give a lesson in modal logic, which I'm not confident in my ability to do.)
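
For anyone who wants the bare skeleton anyway, here is a sketch of the S5 step, with G standing for "God (a necessary being) exists":

```latex
\begin{align*}
&1.\ \Diamond \Box G && \text{premise: it is possible that } G \text{ is necessarily true} \\
&2.\ \Diamond \Box G \rightarrow \Box G && \text{a theorem of S5: whatever is possibly necessary is necessary} \\
&3.\ \Box G && \text{from 1 and 2} \\
&4.\ \Box G \rightarrow G && \text{axiom T: whatever is necessary is true} \\
&5.\ G && \text{from 3 and 4}
\end{align*}
```

The Goldbach and flying-pigs parodies work by plugging any proposition you like in for G at step 1, which is why granting the "possibility" premise does all the work.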

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-03T07:58:45.096Z · LW · GW

People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.

My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That's the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his "Why I Am A Rationalist", for example, Russell identifies as a rationalist but also says, "We are not yet, and I suppose men and women never will be, completely rational."

This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using "rationalist" when talking about Aumann's agreement theorem—use "ideal rationalists" or "perfect rationalists". I also tend to use phrases like "members of the online rationalist community," but that's more to indicate I'm not talking about Russell or Dawkins (much less Descartes).

Comment by ChrisHallquist on A few remarks about mass-downvoting · 2014-03-03T07:25:03.540Z · LW · GW

His assertion that there is no way to check seems to me a better outcome than these posts shouting into the wind that don't get any response.

Did he assert that, exactly? The comment you linked to sounds more like "it's difficult to check." Even that puzzles me, though. Is there a good reason for the powers that be at LessWrong not to have easy access to their own database?

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-03T04:34:36.128Z · LW · GW

The right rule is probably something like, "don't mix signaling games and truth seeking." If it's the kind of thing you'd expect in a subculture that doesn't take itself too seriously or imagine its quirks are evidence of its superiority to other groups, it's probably fine.

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-03T04:21:33.026Z · LW · GW

You're right, being bad at signaling games can be crippling. The point, though, is to watch out for them and steer away from harmful ones. Actually, I wish I'd emphasized this in the OP: trying to suppress overt signaling games runs the risk of driving them underground, forcing them to be disguised as something else, rather than doing them in a self-aware and fun way.

Comment by ChrisHallquist on Rationality Quotes February 2014 · 2014-03-03T03:29:28.257Z · LW · GW

Abuse of the karma system is a well-known problem on LessWrong, which the admins appear to have decided not to do anything about.

Update: actually, it appears Eliezer has looked into this and not been able to find any evidence of mass-downvoting.

Comment by ChrisHallquist on Lifestyle interventions to increase longevity · 2014-03-03T03:11:00.359Z · LW · GW

How much have you looked into potential confounders for these things? With the processed meat thing in particular, I've wondered what could be so bad about processing meat, and whether this could be one of those things where education and wealth are correlated with health, so that anything wealthy, well-educated people start doing becomes correlated with health too. In this case, processed meat is cheap, and therefore eaten more by poor people, while steak tends to be expensive.

(This may be totally wrong, but it seems like an important concern to have investigated.)

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-03T00:42:28.863Z · LW · GW

So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.

I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.

The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm definitely right; this other person has nothing to teach me."

So we have two opposite failure modes to avoid here. The first failure mode is the one where we fetishize the specific IQ number even when our own rationality tells us something is wrong - like Plantinga being apparently a very smart individual, but his arguments being terribly flawed. The second failure mode is the one where we're too confident in our own instincts, even when the numbers tell us the people on the other side are smarter than we are. For example, a creationist says "I'm sure that creationism is true, and it doesn't matter whether really fancy scientists who use big words tell me it isn't."

I guess I need to clarify that I think IQ is a terrible proxy for rationality, and that the correlation is weak at best. And your suggested heuristic will do nothing to stop high IQ crackpots from ignoring the mainstream scientific consensus. Or even low IQ crackpots who can find high IQ crackpots to support them. This is actually a thing that happens with some creationists—people thinking "because I'm an expert in my own field, I can see those evolutionary biologists are talking nonsense." Creationists would do better to attend to the domain expertise of evolutionary biologists. (See also: my post on the statistician's fallacy.)

I'm also curious as to how much of your willingness to agree with me in dismissing Plantinga is based on him being just one person. Would you be more inclined to take a sizeable online community of Plantingas seriously?

Unless you are way way way more charitable than I am, I have a hard time believing that you are anywhere near the territory where the advice "be less charitable" is more helpful than the advice "be more charitable".

As I said above, you can try to pinpoint where to apply this advice. You don't need to be charitable to really stupid people with no knowledge of a field. But once you've determined someone is in a reference class where there's a high prior on them having good ideas - they're smart, well-educated, have a basic commitment to rationality - advising that someone be less charitable to these people seems a lot like advising people to eat more and exercise less - it might be useful in a couple of extreme cases, but I really doubt it's where the gain for the average person lies.

On the one hand, I dislike the rhetoric of charity as I see it happen on LessWrong. On the other hand, in practice, you're probably right that people aren't too charitable. In practice, the problem is selective charity—a specific kind of selective charity, slanted towards favoring people's in-group. And you seem to endorse this selective charity.

I've already said why I don't think high IQ is super-relevant to deciding who you should read charitably. Overall education also doesn't strike me as super-relevant either. In the US, better educated Republicans are more likely to deny global warming and think that Obama's a Muslim. That appears to be because (a) you can get a college degree without ever taking a class on climate science and (b) more educated conservatives are more likely to know what they're "supposed" to believe about certain issues. Of course, when someone has a Ph.D. in a relevant field, I'd agree that you should be more inclined to assume they're not saying anything stupid about that field (though even that presumption is weakened if they're saying something that would be controversial among their peers).

As for "basic commitment to rationality," I'm not sure what you mean by that. I don't know how I'd turn it into a useful criterion, aside from defining it to mean people I'd trust for other reasons (e.g. endorsing standard attitudes of mainstream academia). It's quite easy for even creationists to declare their commitment to rationality. On the other hand, if you think someone's membership in the online rationalist community is a strong reason to treat what they say charitably, yeah, I'm calling that self-congratulatory nonsense.

And that's the essence of my reply to your point #5. It's not people having self-congratulatory attitudes on an individual level. It's the self-congratulatory attitudes towards their in-group.

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-02T23:02:42.303Z · LW · GW

Saying

Interesting point. I'm not entirely clear how you arrived at that position. I'd like to look up some detail questions on that. Could you provide references I might look at?

sort of implies you're updating towards the other's position. If you not only disagree but are totally unswayed by hearing the other person's opinion, it becomes polite but empty verbiage (not that polite but empty verbiage is always a bad thing).

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-02T22:55:08.766Z · LW · GW

There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.

Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.

I won't try to comment on the formal argument (my understanding of that literature comes mostly from what Robin Hanson has said about it), but intuitively, this seems wrong. It seems like two people trading probability estimates shouldn't need to deduce exactly what the other has observed; they just need to make inferences along the lines of, "wow, she wasn't swayed as much as I expected by me telling her my opinion, she must think she has some pretty good evidence." At least that's the inference you would make if you both knew you trusted each other's rationality. More realistically, of course, the correct inference is usually "she wasn't swayed by me telling her my opinion, because she doesn't just trust me to be rational."

Consider what would have to happen for two rationalists who knowingly trust each other's rationality to have a persistent disagreement. Because of conservation of expected evidence, Alice has to think her probability estimate would on average remain the same after hearing Bob's evidence, and Bob must think the same about hearing Alice's evidence. That seems to suggest they both must think they have better, more relevant evidence to the question at hand. And it might be perfectly reasonable for them to think that at first.

But after several rounds of sharing their probability estimates and seeing the other not budge, Alice will have to realize Bob thinks he's better informed about the topic than she is. And Bob will have to realize the same about Alice. And if they both trust each other's rationality, Alice will have to think, "I thought I was better informed than Bob about this, but it looks like Bob thinks he's the one who's better informed, so maybe I'm wrong about being better informed." And Bob will have to have the parallel thought. Eventually, they should converge.
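
As a toy illustration of the "very simple situations" case from the quoted passage, here's a sketch (entirely my own construction, with made-up numbers) where two agents privately observe coin flips, announce posteriors once, recover each other's evidence, and immediately agree:

```python
from math import comb

PRIOR = 0.5  # shared prior probability that the coin's bias is 0.7 (vs. 0.3)

def posterior(heads: int, flips: int) -> float:
    """P(bias = 0.7 | heads out of flips), starting from the common prior."""
    like_hi = comb(flips, heads) * 0.7**heads * 0.3**(flips - heads)
    like_lo = comb(flips, heads) * 0.3**heads * 0.7**(flips - heads)
    return PRIOR * like_hi / (PRIOR * like_hi + (1 - PRIOR) * like_lo)

def infer_heads(announced: float, flips: int) -> int:
    """Recover the other agent's head count from their announced posterior."""
    return min(range(flips + 1), key=lambda h: abs(posterior(h, flips) - announced))

n = 10
alice_heads, bob_heads = 7, 2           # private observations
alice_post = posterior(alice_heads, n)  # what Alice announces
bob_post = posterior(bob_heads, n)      # what Bob announces

# Each deduces the other's data from the announcement and updates on the pooled evidence.
alice_final = posterior(alice_heads + infer_heads(bob_post, n), 2 * n)
bob_final = posterior(infer_heads(alice_post, n) + bob_heads, 2 * n)
assert alice_final == bob_final         # they agree after one exchange
```

In messier real-world cases the deduction step is exactly what breaks down, which is the point of the quoted objection; the intuitive "she must think she has pretty good evidence" reasoning above is the fallback.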

Comment by ChrisHallquist on Self-Congratulatory Rationalism · 2014-03-02T22:35:33.241Z · LW · GW

Personally, I am entirely in favor of the "I don't trust your rationality either" qualifier.

Comment by ChrisHallquist on White Lies · 2014-02-24T20:25:23.361Z · LW · GW

Upvoted for publicly changing your mind.

Comment by ChrisHallquist on White Lies · 2014-02-23T19:51:48.048Z · LW · GW

Further, the idea that the tribe of Honest Except When I Benefit is the vast majority while Always Honest is a tiny minority is not one that I'll accept without evidence.

Here's one relevant paper: Lying in Everyday Life

Comment by ChrisHallquist on Rationality Quotes February 2014 · 2014-02-21T23:28:32.295Z · LW · GW

We can't forecast anything so let's construct some narratives..?

I think the point is more "good forecasting requires keeping an eye on what your models are actually saying about the real world."

Comment by ChrisHallquist on White Lies · 2014-02-21T00:55:12.105Z · LW · GW

In addition to mistakes other commenters have pointed out, it's a mistake to think you can neatly divide the world into "defectors" and "non-defectors," especially when you draw the line in a way that classifies the vast majority of the world as defectors.

Comment by ChrisHallquist on Rationality Quotes February 2014 · 2014-02-19T02:50:32.026Z · LW · GW

Oops, sorry.

Comment by ChrisHallquist on Rationality Quotes February 2014 · 2014-02-18T21:32:20.949Z · LW · GW

"Much of real rationality is learning how to learn from others."

Robin Hanson

Comment by ChrisHallquist on Rationality Quotes February 2014 · 2014-02-18T20:52:54.174Z · LW · GW

I once talked to a theorist (not RBC, micro) who said that his criterion for serious economics was stuff that you can’t explain to your mother. I would say that if you can’t explain it to your mother, or at least to your non-economist friends, there’s a good chance that you yourself don’t really know what you’re doing.

--Paul Krugman, "The Trouble With Being Abstruse"