Responses to Tyler Cowen on Rationality

post by Zvi · 2017-04-04

In a recent episode of Ezra Klein's podcast (recommended), Ezra asked Tyler Cowen his views on many things. One of them was the rationality community, and Tyler put his response up on Marginal Revolution as its own post.

Here is the back and forth:

Ezra Klein

The rationality community.

Tyler Cowen

Well, tell me a little more what you mean. You mean Eliezer Yudkowsky?

Ezra Klein

Yeah, I mean Less Wrong, Slate Star Codex. Julia Galef, Robin Hanson. Sometimes Bryan Caplan is grouped in here. The community of people who are frontloading ideas like signaling, cognitive biases, etc.

Tyler Cowen

Well, I enjoy all those sources, and I read them. That’s obviously a kind of endorsement. But I would approve of them much more if they called themselves the irrationality community. Because it is just another kind of religion. A different set of ethoses. And there’s nothing wrong with that, but the notion that this is, like, the true, objective vantage point I find highly objectionable. And that pops up in some of those people more than others. But I think it needs to be realized it’s an extremely culturally specific way of viewing the world, and that’s one of the main things travel can teach you.

Julia Galef published a response to this, pushing back on the idea that rationalism is just another ethos or even another religion. Her response:

My quick reaction:

Basically all humans are overconfident and have blind spots. And that includes self-described rationalists.

But I see rationalists actively trying to compensate for those biases at least sometimes, and I see people in general do so almost never. For example, it’s pretty common for rationalists to solicit criticism of their own ideas, or to acknowledge uncertainty in their claims.

Similarly, it’s weird for Tyler to accuse rationalists of assuming their ethos is correct. Everyone assumes their own ethos is correct! And I think rationalists are far more likely than most people to be transparent about the premises of their ethos, instead of just treating those premises as objectively true, as most people do.

For example, you could accuse rationalists of being overconfident that utilitarianism is the best moral system. Fine. But you think most people aren’t confident in their own moral views?

At least rationalists acknowledge that their moral judgments are dependent on certain premises, and that if someone doesn’t agree with those premises then it’s reasonable to reach different conclusions. There’s an ability to step outside of their own ethos and discuss its pros and cons relative to alternatives, rather than treating it as self-evidently true.

(It’s also common for rationalists to wrestle with flaws in their favorite normative systems, like utilitarianism, which I don’t see most people doing with their moral views.)

So: while I certainly agree rationalists have room for improvement, I think it’s unfair to accuse them of overconfidence, given that that’s a universal human bias and rationalists are putting in a rare amount of effort trying to compensate for it.

Bryan Caplan, on the border of the rationality community as noted in the question, also offered a response:

Here’s how I would have responded:


The rationality community is one of the brightest lights in the modern intellectual firmament.  Its fundamentals – applied Bayesianism and hyper-awareness of psychological bias – provide the one true, objective vantage point.  It’s not “just another kind of religion”; it’s a self-conscious effort to root out the epistemic corruption that religion exemplifies (though hardly monopolizes).  On average, these methods pay off: The rationality community’s views are more likely to be true than any other community I know of.

Unfortunately, the community has two big blind spots.

The first is consequentialist (or more specifically utilitarian) ethics.  This view is vulnerable to many well-known, devastating counter-examples.  But most people in the rationality community hastily and dogmatically reject them.  Why?  I say it’s aesthetic: One-sentence, algorithmic theories have great appeal to logical minds, even when they fit reality very poorly.

The second blind spot is credulous openness to what I call “sci-fi” scenarios.  Claims about brain emulations, singularities, living in a simulation, hostile AI, and so on are all classic “extraordinary claims requiring extraordinary evidence.”  Yes, weird, unprecedented things occasionally happen.  But we should assign microscopic prior probabilities to the idea that any of these specific weird, unprecedented things will happen.  Strangely, though, many people in the rationality community treat them as serious possibilities, or even likely outcomes.  Why?  Again, I say it’s aesthetic.  Carefully constructed sci-fi scenarios have great appeal to logical minds, even when there’s no sign they’re more than science-flavored fantasy.

P.S. Ezra’s list omits the rationality community’s greatest and most epistemically scrupulous mind: Philip Tetlock.  If you want to see all the strengths of the rationality community with none of its weaknesses, read Superforecasting and be enlightened.

This provoked a Twitter reaction from Bryan’s good friend and colleague Robin Hanson:

@RobinHanson: @bryan_caplan says group w/ views “more likely to be true” is too open to sf. To him is an axiom that weird stuff can never be foreseen (!)

@RobinHanson: Seems to me “that scenario seems weird” just MEANS “that scenario seems unlikely to me”. Its a restatement of claim, not an argument for it.

@bryan_caplan: How about “seems unlikely to almost all people who aren’t fans of science fiction”?

@bryan_caplan: I said “weird AND unprecedented,” not just “weird.” And replace “never” with “almost never.”

@RobinHanson: if it were “stuff re computers that seems less weird to computer experts”, that would be argument in its favor. Same if computer -> physics

@RobinHanson: Almost everything in tech is unprecedented on long timescale. That’s way too low a bar to matter much.

Now, my response to all of the above.

The first thing to note is that Tyler's and Bryan's responses are both high praise. They then offer constructive criticism that deserves a considered response. If some of the reason for that criticism is to balance out the praise, to avoid associating too closely with the rationality community or seeming to endorse it too much, that does not make the criticism invalid, nor does it overwhelm or invalidate the praise. On a fundamental level, I'll take it all.

I think that Tyler's criticism goes too far when it uses the term 'just another kind of religion,' just as it is not correct when others call atheism a religion. That is not what religion means. Tyler knows this, and he was speaking in an interview rather than in a written piece. I believe 'a different set of ethoses,' however, is perfectly fair, and the claim that we represent something like 'the true, objective vantage point' does tend to get rather obnoxious at times. As Tyler notes, some of us are worse offenders at this than others.

Julia and Bryan both push back hard, and push back well, but I think they are a little too quick to take an adversarial stance. To say that our fundamental principles 'provide the one true, objective vantage point,' as Bryan does, goes too far. Yes, Bayesianism is simply how probability and the universe work, and hyper-awareness of psychological bias is key to properly implementing Bayesianism. Any effort that is not Bayesian, or that is unaware of our psychological biases, is doomed to fail hard outside of its training set.
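(As a quick refresher, since this whole paragraph leans on it: Bayes' rule says that for a hypothesis $H$ and evidence $E$,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

so your confidence in $H$ after seeing $E$ scales with how strongly $H$ predicted $E$, weighted by how plausible $H$ was beforehand. 'Applied Bayesianism' is, roughly, the attempt to approximate this update in everyday reasoning.)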

That does not mean that we have it right. On the contrary, we have it wrong. We know we have it wrong! Our community blog is called Less Wrong. When we are being careful, we call ourselves Aspiring Rationalists, since no one can be fully rational. This is why Tyler's suggestion of calling it the irrationality community did not strike me as so crazy. We basically do that already! We are the community of people who know and acknowledge our irrationality and seek to minimize it. The name rationalist is one we have debated for a long time. It has big advantages and big disadvantages. At this point, we are stuck with it.

Julia responds, quite reasonably, that at least we are trying to do something about the problem whereas basically no one else is even doing that. Among those who believe there is a right answer, we may be the least overconfident ones around. Yes, there are those whose models treat all perspectives as equal/legitimate, and thus are self-refuting, but I do not feel the need to take that perspective seriously. The fact that you do not know which parts of which perspectives are right, and which ones are better or worse than others, is a fact about you, not about the universe.

What it does mean, and I will stand by this, is that we are claiming that anyone who is not Bayesian and aware of bias has it wrong, too. Other calculation methods won't get the right answer outside their training sets. That does not mean that other traditions cannot have great metis. They absolutely can, and we have much to learn from them. One series I keep planning to start is called 'the models,' covering the various models of the world and/or humans and/or social dynamics that I have in my head and pull out when they are appropriate. All of them are, of course, wrong, but they are also useful. You want to get inside everyone's head and figure out what they know that you do not.

This is where travel, and being culturally specific, come in as well. I think these are definite areas for improvement. Tyler is a huge fan of travel, and judges harshly those who stay in one place. I had the opportunity to travel around the world a lot back in the day, and no question it helped with perspective, although I did not investigate the places I was going as much as I should have. Even living in different American cities (I have resided in Denver, Renton and Belmont, in addition to New York City), or renting houses in other cities to try them out, can provide perspective.

Wherever you live, the area seeps into your brain. You take in its memes, its values, its ambitions, its industry. Living in Denver, working for a gaming start-up with people from a different social background, on no money, meant everything felt different. Working at Wizards in Renton for seven months was another such shift. Living in a suburb of Boston, working out of an apartment and trying to find connection from that base, was yet another, even if it did not last so long. Tyler says the year he spent in Germany was the year of his life in which he learned the most, and I believe him.

This is a lot of why I feel our concentration in the Bay Area has been a mistake. Yes, I have a personal bias here: I have watched many of my best friends, the core of my community, leave me behind, each pulled there by the last person who decided they couldn't hack it in the Big Apple, and so moved to the Bay and pretended it was their dream all along (or just wanted to hang out with a lot of rationalists and work at a start-up). They're even about to do it again! Yes, I fought that every step of the way. And yes, I find the Bay to be a toxic intellectual environment – again, different places have different attitudes.

What I am objecting to here, however, is not the particular unfortunate choice of the Bay. I am objecting to our choice to concentrate at all! By placing so many of us in the same place, we get to interact more cheaply, which is great, but we also steep most of our best people in the same culture, the same ideas and memes, the same reality tunnel. We need to maintain different perspectives and to draw from different cultures. Moving us all together to a different place would free us from the Bay, as well as its expense, but it would not solve the deeper problem.

We are, as Julia notes, excellent (but far from perfect!) at soliciting and considering criticism and new ideas, but if you keep looking in the same places, you lack perspective. We need, as a wise member of our community put it, to eat dirt: to find the things we do not realize we need.

Tyler Cowen is holding us to an unreasonably high standard. And that is great! This is exactly what we are trying to do: uncover the truth, overcome our biases, get the right answers. We are not trying to be less wrong than others. We are trying to be less wrong than yesterday, the least wrong we can be. Nature does not grade on a curve, and Tyler is challenging us to do even better, to incorporate all the wisdom out there and not only our narrow reality tunnel. Challenge accepted!

Bryan gives us even higher praise than Tyler does, but points to what he sees as two blind spots. On one, I somewhat agree, and on the other I strongly disagree.

His first objection is to Utilitarian/Consequentialist Ethics. Ethics is certainly a hard problem, and I have thought about it a lot but not as much as I would like to. I am certainly not confident I have it right! As Julia notes, we wrestle with the flaws, and there are certainly flaws, although calling them ‘devastating’ counter-examples does not seem right to me. I also think that being able to describe a system in one sentence is a quite legitimate advantage; we should judge simpler theories as more likely to be correct.
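(For those who want that simplicity preference made concrete, one standard sketch, offered here as illustration rather than as anything the community is committed to, is to give a hypothesis $H$ a prior probability that shrinks with its description length:

$$P(H) \propto 2^{-L(H)},$$

where $L(H)$ is the length of the shortest description of $H$. Simpler theories start with more probability mass, so they need less evidence to beat their rivals.)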

I read Bryan's link, and these objections are good and worth thinking about, but they do not seem 'devastating.' They mostly amount to saying "not so fast!" and "solving these problems properly is hard!" If there is one group that would readily agree with both of those propositions, we'd be it. Yes, figuring out what actions would have the best consequences is hard! We spend most of our time trying to figure that out, and have to use approximations and best guesses, and rightfully so. That's what I'm up to right now. Yes, figuring out how good various things are is hard too. Working on that one as well, and again going with my best guesses. Yes, you have to define utility in a way that incorporates distributive justice if you care about distributive justice. Yes, people who act according to different principles will tend to produce different future levels of utility. Yes, you need to figure out how to honor obligations to those close to you. The response that other systems reduce to utility seems right for that last challenge.
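(For concreteness, the calculation act utilitarianism asks us to approximate is something like picking the action $a$ that maximizes expected utility over outcomes $o$:

$$\mathbb{E}[U \mid a] = \sum_{o} P(o \mid a)\, U(o).$$

The difficulties above are exactly that we do not know $P(o \mid a)$ or $U(o)$ and must estimate both, hence the approximations and best guesses.)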

The reason I somewhat agree with Bryan is that I do more than lack confidence in utilitarianism. I think utilitarianism is the right way to start thinking about things, but I also think act utilitarianism is wrong. I do my best to follow Functional Decision Theory, and I believe in virtue ethics, because I believe this is the way to maximize utility both for oneself and for all. I even view many of the problems we face in the world, as a community, and with technology as following from people and systems that use act utilitarianism and/or causal decision theory while leaving key considerations out of their equations, resulting in poor optimization targets run amok, failure to consider higher-order and hard-to-measure effects, and inability to cooperate. I think this is really, really bad, and fixing it is ridiculously important. I will keep working on how to make this case more effectively, and on how to fix it.

I am very interested in talking about such matters further, but this is not the place, so moving on to Bryan's second objection. I wish Bryan would give us a little more credit here. I think Robin's answers are strong, but if anything too kind. These are not 'carefully constructed sci-fi scenarios.' In many cases, quite the opposite: if you do not have Hostile AI in your sci-fi world, you need to explain why (and the real reason is likely that 'it would ruin a good story')! These are, for the most part, generic technological extrapolations from what exists today. Rationalists tend to think about the future more than others do, and to consider what technologies might arrive. Those who are not familiar with sci-fi at all largely do not think about potential future technological developments, and would consider any new technology 'weird.' Going further, most people on Earth (and certainly most people who have ever lived) would consider many current, existing technologies 'weird.' It seems very strange to base one's prior on whether artificial intelligence would sound weird to those people.

I see Bryan rejecting these possibilities, ironically, for exactly the aesthetic reasons he accuses us of falling for: they are weird to him. That does not mean we do not need evidence for such claims, or that the priors should start out high! It is easy to forget all the evidence and analysis one has already put in. Everyone who is interested should look carefully at the arguments and the evidence, and reach their own conclusions. Whether or not you consider us to have met our burden to provide extraordinary evidence, we do indeed work to provide exactly that. Many of us are happy to discuss these matters at length, myself included. In particular, I would be interested in hearing Bryan's reasons why AI is so unlikely, or why AI is unlikely to be hostile, and would hope the answer is not just 'that sounds like sci-fi, so I don't have to take the possibility seriously.'

While I was writing this, a third response came in from Noah Smith, called Are Rationalists Dense?

It starts out describing the discussion as an "interesting food-fight," which again does not match my view of what is going on – we are not perfect, there is nothing wrong with calling us out where we need improvement, and it is good and right to push back when we feel the criticism has gone too far. He speculates on why we may be 'rubbing others the wrong way' and comes up with three hypotheses: the name, the leaders and the fans.

The name, as has been noted here and many times before, is in some ways unfortunate. I wish it did not carry the implication that those not explicitly in our community are irrational while we are already rational; we do not actually believe this, or at least I hope most of us do not. We also, I hope, do not believe that one must care about Effective Altruism and/or A.I. Risk in order to be rational; rather, we hope to spread rational thinking in the hope that some of those who receive it will then realize such causes are important. But the name also got, and gets, us attention, focus and a rallying cry, which is why a lot of groups end up with names that rub some people the wrong way. (He notes Black Lives Matter, Pro-Life and Reality-Based Community, all of whose names provide attention, focus and a rallying cry, and all of whose names understandably piss off some others.)

The leaders have been known to rub some people the wrong way, for sure. Yudkowsky, Hanson and Alexander are not the most conflict-free bunch. What he calls the fans are often not that friendly, and can come off poorly in online interactions. None of that is unusual.

If I had to add a fourth reason, it would be simply that we are, as Noah put it, a 'seemingly innocuous online community mostly populated by shoe-gazing nerds,' and people generally do not like nerds, especially nerds who are making claims. We do not need super-complex explanations!

Noah's suggestions are definitely good areas for us to work on if we seek broader acceptance and a better image – work on the image of our leaders, work on our general interactions, consider describing ourselves in softer language somehow. To some extent, these things are worth seeking, and I certainly strive to improve in this area. What I am even more interested in is making us better.
