Exploring Democratic Dialogue between Rationality, Silicon Valley, and the Wider World

post by Remmelt (remmelt-ellen) · 2021-08-20T16:04:44.683Z · LW · GW · 19 comments


I was asked to review a draft that critiques the broader rationality movement for a tendency to engineer elegant solutions to fix the world in ways that bypass 'inefficient' democratic dialogue. 

However, the authors worried that their explanations would trigger ingroup-outgroup conflict amongst readers rather than engage rationalists in discussing and collaborating on reforms. They decided to shelve this version and not publish it in the media.

IMO though, they made some solid points about the broad sociology of the rationalist movement. So I selected and edited excerpts for clarity and constructiveness (skipping the intro that got into the already much discussed controversy between New York Times journalist Cade Metz and Scott Alexander), posted below with the authors’ permission. 

Do ask friends for their two cents, but please share the link in private. Understanding this piece requires some context on how our community works, and throwing out a message on Twitter, Reddit, and the like is a poor choice for sharing that context.

I hope that you will find some insightful nuggets amongst the commentary. 
What's your sense of what the article describes accurately, and what it fails to capture?

Edit: I should have said more plainly that, yes, this draft has a bunch of commentary that is vague and antipathetic, and I hope you'll get something out of it anyway.


The deep connections between Silicon Valley, rationalism, and the anti-democratic right are more complicated, and more revealing, than either Metz or his critics allow for. If Silicon Valley simply harbored a virulent minority of semi-closeted white supremacists it would be … exactly like much of America, as many have woken up to in the last decade. What is more important is what specifically links SV’s apparent rationalism to NRx attitudes. Given the way technologists’ dreams are increasingly shaping our future, we have a right to know what these dreams hold.
 

SV’s mythos and Rationalism start from the optimistic premise that technology and reason can fix the world, helping people understand what they have in common. NRx instead fixates on the misanthropic principle that democracy is always a ruse for the rule of the powerful, and that those who rule should be “worthy” (viz. brilliant CEO-godkings). Standard democracy is, in their minds, simply a cover under which the mediocre and corrupt displace the brilliant and able.
 

But both these ways of thinking about the world have a common origin in the engineer’s faith that logic and reason could transform the complex, messy challenges of the world into solvable problems. This faith inspires the civic religion of Silicon Valley, which preaches that new technologies spread liberal values and allow consumers to build in common. It leads Rationalists to argue that individual human reason can transform the world once it is purged of thinking errors. Equally, it inspires neo-reactionaries to argue that, when most people don’t follow this path, it shows they don’t know what is good for them, and would be better off if they were ruled by those who do. In their heretical interpretation, the secret gospel of the Valley isn’t that technology frees consumers, but that it empowers the founders and CEOs who have the brilliance and ruthlessness to do what is necessary.
 

Both the bright and dark versions of the religion of the engineers are dismayed by the pluralistic messiness of democracy, the inevitability of disagreement and talking past each other between people with different cultures, values and backgrounds. Both want to substitute something cleaner and simpler for this disorder, whether it be the bland corporate civicism of the social network, or the enforced order of the absolutist. 
 

And this frames the political challenge for those who want a different vision of technology. If the great machineries of Silicon Valley are ever to serve democracy rather than undermine it, the people who are building and rebuilding them will need to come to a proper understanding of the virtues that are inseparable from its disorder, and embrace them. This would start from a much more humble understanding of rationality, one that recognizes the messiness and diversity of the ways in which people think about the world as a strength, not a weakness. A better understanding of how human reasoning actually works would help inoculate the makers of technology against both the blithe faith that they can build a world without disagreement, and its distorted shadow version justifying tyranny.
 

SV is a microcosm of what the world would look like if it were run by engineers. Engineers take complicated situations and formalize them mathematically, abstracting away as much of the mess as they can. Their models can often be used to figure out what is going wrong or what could be improved, discovering how complex and creaking machineries can be replaced with simpler, cleaner mechanisms. Engineers, then, tend to think of their work as a series of “optimization problems,” transforming apparently complex situations into an “objective function,” which ranks possible solutions, given the tradeoffs and constraints, to find the best one and implement it.  
 

This is the mainspring of Silicon Valley culture - the faith that complicated problems can be made mathematically tractable and resolved, and that a focus on doing so is the path to a much brighter world. The first step is to identify what you want (figuring out some quantity that you want to maximize or minimize). The second is to use whatever data you have to provisionally identify the resources that you can employ to reach that goal, and the hard constraints that stand in your way. The third is to identify the best possible solution, given those resources and constraints, and try to implement it. And the fourth is to keep updating your understanding of the resources and constraints, as you gather more information and try to solve the problem better. This way of thinking is rational (it closely resembles how economists model decision making), and it is decidedly goal oriented. 
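To make that loop concrete, here is a minimal sketch (not from the draft; the scenario, channel names, and numbers are invented for illustration) of how the four steps look once written down as code, using scipy's general-purpose optimizer:

```python
# Toy version of the engineer's loop described above: pick an objective,
# encode constraints, solve, then re-solve as estimates are revised.
# All quantities here are illustrative assumptions, not real data.
from scipy.optimize import minimize

budget = 100.0                              # hard constraint: total spend
payoff = {"ads": 1.2, "content": 0.9}       # step 2: estimated return per unit spend

def solve(payoff):
    # Steps 1 and 3: maximize estimated payoff (minimize its negative),
    # assuming diminishing returns on each channel.
    objective = lambda x: -(payoff["ads"] * x[0] ** 0.5 + payoff["content"] * x[1] ** 0.5)
    constraint = [{"type": "eq", "fun": lambda x: budget - x[0] - x[1]}]
    return minimize(objective, x0=[budget / 2, budget / 2],
                    bounds=[(0, budget), (0, budget)], constraints=constraint).x

plan = solve(payoff)        # initial allocation
payoff["content"] = 1.4     # step 4: new data revises an estimate...
plan = solve(payoff)        # ...and the problem is simply solved again
```

The point is not the particular solver but the shape of the workflow: everything that matters has to be squeezed into the objective, the bounds, and the constraints before the machinery can run.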
 

This approach underlies what used to be exuberant and refreshing about SV, and very often still is. Engineers, almost by definition, want to get things done. They are impatient with people who seek to immerse themselves in and integrate different perspectives on issues rather than just solve the “primary” problem laid out in front of them. The closer a problem statement is to something formally optimizable, the more excited they are to engage with it.
 

And when engineers unleashed their energies on big social problems, it turned out that a lot of things could and did get done. Many of the great achievements of the modern age are the product of this kind of ingenuity. Google search – a means for combing through a vast, distributed repository of the world’s information and providing useful results within a fraction of a second – would have seemed like a ludicrous impossibility only three decades ago. Google’s founders used a set of mathematical techniques to leverage the Internet’s own latent information structures, ranking online resources in terms of their likely usefulness and unleashing a knowledge revolution. More recently, faster semiconductors have allowed the application of a wide variety of machine learning techniques – many of them based around optimization – to social needs and (often not at all the same thing) business models.
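Since PageRank comes up again just below, a stripped-down sketch may help show what “leveraging the Internet’s own latent information structures” means in practice. This is the textbook power-iteration version on a made-up four-page link graph, with the commonly cited 0.85 damping factor; it is an illustration, not Google’s production algorithm:

```python
# Simplified PageRank: a page is important if important pages link to it.
# The link graph below is invented for illustration.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
nodes = list(links)
rank = {n: 1 / len(nodes) for n in nodes}
damping = 0.85

for _ in range(50):  # iterate until the ranks settle
    new_rank = {n: (1 - damping) / len(nodes) for n in nodes}
    for source, targets in links.items():
        for target in targets:
            new_rank[target] += damping * rank[source] / len(targets)
    rank = new_rank

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # "c" ends up ranked highest here
```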
 

Many people have written about how this idealistic vision has been dragged down into a slough of despond by predatory profit models, monopolistic tendencies, and the deficiencies of algorithmic capitalism. Less attention has been paid to the vision’s own internal blind spots, even though some of the wiser leaders in the community recognized them early. For example, Terry Winograd (2006), the PhD advisor of Sergey Brin and Larry Page when they invented Google’s PageRank algorithm, famously highlighted the dangers of rationalism. While he recognized the power and attraction of formalism, he saw how such formalism could easily destroy the very values it aimed to formalize and serve, especially in areas of social and economic life we understand very poorly.
 

This is why Silicon Valley does so badly at comprehending politics, where people not only disagree over what the best solutions are to problems or the precise setting of valuation parameters, but clash over the fundamental terms in which the problem is conceptualized. Is racism a characteristic of individual preferences or an arrangement of social forces? Is fairness a property of a whole society or of a particular algorithm? Is human flourishing captured by economic output? What do “power” and its “decentralization” mean?
 

The millenarian bet of SV was that these problems would dissipate when confronted with the power of optimization, so that advances in measurement and computational capacity would finally build a Tower of Babel that could reach the heavens. Facebook’s corporate religion held that cooperation would blossom as its social network drew the world together. Google’s founder Sergey Brin argued that the politicians who won national elections should “withdraw from [their] respective parties and govern as independents in name and in spirit.” These truths seemed so obvious that they barely needed to be defended. Reasonable people, once they got away from artificial disagreement, would surely converge upon the right solutions. The engineer’s model of how to think about a complex world became their model of how everyone ought and would think about a complex world, once technology had enabled them. Everyone was an engineer at heart, even if some didn’t know it yet. 
 

As the historian Margaret O’Mara has documented, this faith made it hard for Silicon Valley to understand its own workings. The technically powerful frameworks that engineers created often failed because they weren’t responsive to what people actually wanted, or sometimes succeeded in unexpected ways. Companies with mediocre solutions that went viral could triumph over companies with much better products. 
 

As O’Mara shows, success often depended less on technical prowess than on the ability to tell compelling stories about the products that were being sold. Social networks spun out from Stanford University, from the nascent venture capital industry and from other nexuses, shaping who got funded and who did not. Silicon Valley had a dirty intellectual secret: its model of entrepreneurial success depended significantly on the unequal social connections and primate grooming rituals that it loudly promised to replace. A culture that trumpeted its separation from traditional academic hierarchies recreated its own self-perpetuating networks of privilege – graduating from Y Combinator was every bit as much an elite credential as getting tapped for Yale’s Skull and Bones.
 

The growing gap between how the Valley worked and how it told itself it worked generated extravagant tapestries of myth to cover the fissures. Founders like Steve Jobs and Mark Zuckerberg were idolized for their brilliance, even though others who were as bright or brighter had failed through bad luck (e.g. General Magic’s 1990s pioneering of what became the iPhone), through discrimination (e.g. the first programmers were women and minorities, as celebrated in Hidden Figures), or through being so innovative that they were ahead of their time (e.g. Xerox PARC). Those who constructed and sold these myths, often women from very different backgrounds from the men they helped build cults around, were written out of the success stories as well. Bargain basement ideologies, such as the new Stoicism or Girard’s mimetic theory, provided justification for why things were as they were, or lent a spurious intellectual luster to an economy built as much around gladhanding as intellectual flair. And always, the master-myth was the notion that the Valley helped people to cooperate to make the world a better place.
 

When things took a turn for the worse, Silicon Valley companies clung to these myths. In a 2016 internal memo that later became notorious, senior Facebook executive Andrew Bosworth argued that Facebook’s power “to connect people” was a global mission of transformation, one that justified questionable privacy practices and even the occasional lives lost when people were bullied into suicide or terrorist attacks were organized via social media. Connecting people together via Facebook was “de facto good,” unifying a world that was divided by borders and languages.
 

This rhetoric wore thin, as it became obvious that Facebook and other Silicon Valley platforms could amplify profound social divides, enabling the persecution of Rohingya minorities in Myanmar, allowing India’s BJP party to spur on ethnic hatred towards fellow citizens without repercussions, and magnifying the influence of America’s far right. As the writer Anna Wiener showed, the street found its own uses for things. Platforms such as GitHub, originally built for programmers to collaborate on coding open source software, unexpectedly provided a place for the far right to organize. 
 

The machineries of optimization that SV built weren’t the only cause of this polarization, but they likely helped it spread and deepen. Social media giants devised algorithms that would learn to optimize “engagement” by pushing out content that made their consumers keep clicking and scrolling through the interface, and look at the profit-making ads that popped up. Users often engaged most with posts or videos that shocked or surprised them. Enticed down an individualized rabbit-hole, each would enter a land of dark wonder where agreed facts were obvious lies, and logic was turned upside down. Instead of providing a rational alternative to divisive politics, SV’s products deepened the divisions.
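As a caricature of the mechanism being described, an engagement-optimizing feed can be as simple as a ranker that keeps boosting whatever got clicked before. The toy sketch below (post names and click rates are invented) is only meant to show why such a loop drifts toward the most clickable material:

```python
# Toy engagement loop: show the post with the best observed click-through rate,
# update the estimate from what gets clicked, repeat. Content that provokes more
# clicks ends up shown far more often, reinforcing itself.
import random

true_ctr = {"calm explainer": 0.02, "outrage bait": 0.08}   # invented click probabilities
clicks = {p: 1 for p in true_ctr}
shows = {p: 2 for p in true_ctr}

for _ in range(10_000):
    if random.random() < 0.05:                      # occasional exploration
        post = random.choice(list(true_ctr))
    else:                                           # otherwise, greedy "optimization"
        post = max(true_ctr, key=lambda p: clicks[p] / shows[p])
    shows[post] += 1
    if random.random() < true_ctr[post]:
        clicks[post] += 1

print(shows)   # the higher-engagement post ends up dominating the feed
```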

 

**********

 

Rationalism was deeply entwined with SV thinking and replicated its flaws on a more intimate scale. Rationalists didn’t see themselves as a cult (though they did know that many outsiders saw them that way, and even joked about it now and then). Instead, they believed that they were pioneering a transformative and universalizable approach to reasoning. Human beings could be washed clean of the original sin of unbridled cognitive bias by being bathed in rational thinking. Maladaptive bias could gradually be overcome, making its followers “less wrong.” As a Rationalist learned over time, she would more closely approximate a true understanding of the world. And as others too followed the same learning path, they would all converge more closely together.
 

Viewed from one angle, Rationalism was optimization turned into a philosophy of personal self-improvement. Viewed from another, it was a complex intellectual amalgam of evolutionary cognitive psychology, Bayesian statistics, and mathematical game theory, with bits and pieces of epistemology (philosophical inquiry into how we know things) thrown in. 
 

Evolutionary psychology explained man’s fallen state. Evolutionary forces had shaped human thinking to make it prone to a variety of cognitive biases – mental shortcuts that didn’t make much sense in the modern world. For example, we are all prone to double down on mediocre projects we have committed to, unwilling to write off sunk costs we can’t recover. Or we stick with the beliefs we or others close to us hold dear, even when the evidence starts showing that those beliefs are false.
 

But redemption was possible, thanks to a theorem that provided a mathematical method for updating how confident you were about a statement (or its negation) being true, as you discovered new evidence. As you continuously tested and retested your beliefs, Bayes’ theorem would guide you on a path towards rationality. And as more disciples embraced Bayesian reasoning, game theory suggested that they should converge on a common understanding of the truth.
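The theorem is Bayes’ rule, P(H|E) = P(H) · P(E|H) / P(E). A worked toy example (numbers invented) of the kind of update being described:

```python
# Bayes' rule with made-up numbers: start 10% confident in a claim, then observe
# evidence that is three times likelier if the claim is true than if it is false.
prior = 0.10
p_evidence_if_true = 0.60
p_evidence_if_false = 0.20

posterior = (p_evidence_if_true * prior) / (
    p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
)
print(round(posterior, 2))  # 0.25: more confident, but far from certain after one observation
```

Repeating this update as each new piece of evidence arrives is the “path towards rationality” the draft refers to.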
 

These ideas spurred a gospel of self-improvement, whose evangelists sought new converts on online discussion forums such as LessWrong, and blogs such as Overcoming Bias. Their ideas had enormous indirect consequences. Rationalists took up the philosopher Peter Singer’s nascent ideas about charity and helped turn them into “effective altruism” – a way to think systematically and rationally about charity, weighing up the value of goals (in increasing human happiness) and the relative effectiveness of various means of reaching those goals. This approach spread to SV, reshaping philanthropy through organizations such as GiveWell and (less directly) the Gates Foundation, each seeking to discover the most effective organizations that you could give money to. 
 

The fundamental Rationalist bet – that clear logical thinking would allow you to better understand the world – was validated by the work of Philip Tetlock and his colleagues, who set out to turn “forecasting” into a science. Tetlock found that conventional experts tied to a particular domain were overrated in their ability to predict the future. Instead, open-minded generalists could train to become “superforecasters,” able to forecast a future event relatively accurately by thinking logically about similar cases in the past (e.g. how frequently such events occurred, and what caused them to happen).
 

However, sometimes Rationalism didn’t counter the biases of its practitioners or speed their convergence on wisdom available in other traditions; instead, it took them on strange and often circular intellectual journeys. Many rationalists became obsessed by threats to and opportunities for the long-term future of humanity, driven by the notion that small changes in our society’s trajectory today might dramatically alter the life prospects of our distant descendants. George Mason University economist Tyler Cowen used this notion in his case for maximizing the economic growth rate, arguing that if corporations make sustained improvements in how much more they produce and sell relative to years before, the benefits to consumers will compound substantially over the long run. Others, such as Oxford-based philosophers Toby Ord and Nick Bostrom, instead argued for minimizing existential threats (e.g. the risk that a pandemic, an asteroid, or a powerfully optimizing machine eliminates all humans).
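The compounding argument attributed to Cowen rests on simple arithmetic; the growth rates below are illustrative, not figures from his work:

```python
# Small differences in sustained annual growth compound into large differences
# in output over a century (rates chosen purely for illustration).
for rate in (0.01, 0.02, 0.03):
    print(f"{rate:.0%} growth for 100 years -> {(1 + rate) ** 100:.1f}x starting output")
# 1% -> ~2.7x, 2% -> ~7.2x, 3% -> ~19.2x
```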
 

Eliezer Yudkowsky, a blogger so prominent early on that Alexander jokingly referred to the community’s central tenet as his being the “rightful caliph”, sounded the alarm about the possibility that an ultra-powerful machine intelligence would run wild. However, a key conclusion from the line of “AI safety” research he later stimulated was that machines must be hardcoded to be uncertain about which objectives to optimize for, and thus kept constantly dependent on feedback from their human overseers. Effectively, this conclusion undermines Yudkowsky’s original imperative to model autonomous machines, returning them to their humdrum role as tools and aids to human cooperation and communication (areas already researched for decades in non-AI-focused fields like human-computer interaction).
 

Rationalists gathered in such intellectual culs-de-sac because they were convinced they had found a better way to think than the groups around them, almost by definition. Yudkowsky saw Rationalism as a revolt against conformity:  no one could become a true Rationalist until “their parents have failed them, their gods are dead, and their tools have shattered in their hand.” They needed to have their “trust broken in the sanity of the people around them.” Even standard approaches to science were inadequate, having formed long before the role of bias was understood properly. Such an isolating posture naturally leads adherents to be impatient with established but unfamiliar traditions of knowledge, or the slow, pluralistic social understanding that results from coalition and consensus-building.
 

This spawned a movement that was sometimes inward-looking and self-congratulatory, and gave rise to its own special shibboleths (as Cowen has politely but firmly suggested, Rationalism had some notably irrational elements, and was best considered as “just another kind of religion”). It also primed its members to believe that there were stark inequalities in humans’ ability to reason, and that they were on the right end of the inequality. 
 

The notion of natural and enduring inequalities of reason might explain why rationalism has had little appeal to the broader public. Few outsiders seemed interested in grand proposals to remake politics along rationalist lines. The proposal for a “futarchy” by Robin Hanson, blog author of Overcoming Bias and Cowen’s university colleague, would rebuild the political system around betting markets. Yet speculation on world events, even within today’s heavily regulated betting pools and derivative markets, is frequently taken by the public as an unscrupulous form of misappropriation that accelerates inequality and financial instability.
 

Contests for epistemic supremacy as the foundation of a better future made sense to many Rationalists, but not to many others. But Rationalists were well used to believing in things such as cryonics and the ‘many worlds’ theory of quantum physics that the public scoffed at. Therefore, they took such scoffing as a sign of the public’s irrationality or disingenuousness, rather than a suggestion that ideas ought to be re-examined. This theme loomed large in Hanson’s later work, which suggested that most communication in and around democracy was a kind of social show, concealing deeper power struggles and dynamics.
 

Such cynicism made it easier for a minority of Rationalists to drift gradually towards a darker kind of politics. If there was a crucial distinction between a small cognitive elite that was capable of reasoning and thinking well, and a much larger population that was so deranged by cognitive bias that it could barely think at all, then why should you believe in democracy in the first place? It would be better to allow those who could think clearly to rule over those who could not. If democratic communication was always overrun by disingenuous signaling, the best we could hope for is to be ruled by our epistemic betters.
 

This was the point where Rationalism, NRx and SV joined together. Where Rationalism went sour, it inclined towards a dogmatic elitism, suggesting that the few who could think clearly ought to re-engineer society for the many who could not. The reactionary argument that democracy was a fundamental mistake seemed quite attractive to some Rationalists, especially those who had seen their ideas wither in the face of public indifference.
 

And SV – together with authoritarian societies such as China, and quasi-democratic societies such as Singapore – provided a model of how this could be done. Curtis Yarvin, who had played a significant role in the early days of LessWrong, advocated an idiosyncratic model of rule that combined absolutist monarchy with Silicon Valley founder-worship. CEO-kings, like the Roman emperor Septimius Severus in Gibbon’s Decline and Fall, would recognize that the “true interest of an absolute monarch generally coincides with that of his people” and govern to provide stability and prosperity.
 

In short, Rationalist debate was pulled backwards and forwards between two points of attraction. The more powerful one was optimistic about the possibility of making the world more rational. The less powerful concluded that most people were incapable of being converted, and moreover didn’t deserve it. Rationalists weren’t simply more tolerant of contrarian arguments about, for example, racial or gendered differences in intelligence or capacity to learn and work. They were fascinated by these arguments, which spoke to the core divide in their community: could all find the truth of Rationalism and be saved, or was salvation reserved for a tiny Elect, condemning everyone else to perdition?
 

These core disagreements reflected and helped shape how SV thought about politics. Again, a majority held to their version of the liberal creed, anticipating that technology and connection would make the world into a great thrumming hive of thought and activity. Individual differences of class, creed and interest would fade into insignificance. And yet again, a small but influential minority sought to speed the return of the Outer Gods and their bloody handed servants. The leaders of the two factions mingled together, serving on the same boards, investing in the same firms, and arguing with each other in common terminology over contrary ends.
 

When democratic crisis came, the former had no good way to react, or even, initially, to understand what was happening. How could their tools, which were designed to draw people and societies together, instead be helping to tear them apart? The latter, although they often despised Trump and those around him as useful idiots, saw his victory as justifying their darkest hopes and beliefs. Trump was no-one’s idea of a ruthlessly competent CEO-priestking – but he might clear the path for one by tearing away the organizing myths of liberal equality. The result was a kind of doublethink. SV finds it hard to think about helping to fix democracy, because it has a hard time thinking about democracy itself. 
 

19 comments


comment by Ben Wōden (ben-w-den-1) · 2021-08-21T17:37:01.168Z · LW(p) · GW(p)

I have a few problems with this, but most of them stem from one particular big gap in the argument, so I'm going to sort of start in on that, and then bring in the other issues as and when they feel relevant. I've gone back and forth on whether to perhaps structure this review more in a 'going through the article' way, or a sort of 'here's my list of problems' way, but neither really seemed satisfactory so I've gone for this sort of 'hub and spoke' model, which isn't so suited to linear writing but there we go.

This ended up getting long as hell, so I've put some headings in.

Core Issue: Democracy is not based on the assumption of popular rationality, so there's no particular reason rationalism should threaten it.

The core issue here is the lack of any attempt to demonstrate that anti-democracy ideas flow from the loose group of ideas popular around here they call 'rationalism' (I don't think this term is particularly endorsed by the community but I'll use it for want of a better alternative), either in terms of their being logically implied by rationalism or commonly held by those who subscribe to rationalism. It's just sort of assumed that, because rationalists believe you can be more or less rational, and some people are more rational than others, they must therefore be against democracy, at least once they fully own up to their beliefs and follow them to their logical conclusions. It is assumed that democracy must be backed up by suppositions that all people are equally rational or equally capable of governing, and that therefore swiping at this core assertion must logically weaken the case for democracy and strengthen the case for dictatorship.

This isn't factually true in terms of what people actually believe, but neither is it the case that rationalist democrats are hypocritical and/or delusional, and just emotionally resisting becoming neo-reactionaries. Belief in unequal distribution of ability to govern doesn't imply anti-democratic positions, because democracy doesn't rest on the idea that all people are equally (and highly) skilled at statecraft or rationality or governing or whatever. If anything, it's quite the reverse!

Indeed, if people were perfectly rational, altruistic, and knowledgeable, the case for democracy would collapse! What would be the point if you could just pick one person (maybe at random, maybe on a hereditary system just for simplicity of transfer of power) and have them rule benevolently and rationally? Democracy is a bulwark against the irrationality and ignorance of the individual, based on never giving any one person so much power that they can do huge damage, and 'smearing out' the effects of bias and ignorance by sampling over a huge number of people and reducing the variance in the overall results. Given the asymmetric payoff landscape, reducing variance is brilliant!

There's of course a lot of complexity I'm not going into here. Democracy isn't the only way of combining information on preferences and judgements; markets are also part of that and they do better at some things and worse at others, and democracies also delegate decision-making away from the populace and their representatives and to an expert class in all sorts of cases, and the ideal relationship between democracy, markets, and expertise can be argued back and forth forever. Classical public choice economics is, in my view, well challenged by recent work by Caplan, Hanson, and others which casts doubt on just how strongly democracy can turn individual irrationality into group rationality, and I don't want to get into that at this point.

The reason I bring it up is to show that the authors here don't get into it at all, and just seem to assume that the case for democracy rests on a strong belief in a high and somewhat equally distributed level of rationality and competence on the part of the general public, implying that the rationalism's insistence that in fact most people are extremely biased and irrational must logically swipe hard at that case and imply anti-democratic sentiment. This is the reverse of the truth!

Rationality is not Intelligence, and indeed this is a core Rationalist idea.

Branching out from this fundamental issue with the piece are a few other related problems. The piece seems to assume that rationalism's focus on the idea that most people are irrational and that becoming rational and unbiased is difficult, deliberate work and few manage it very well implies some sort of deference to the intelligent and/or academically elite (both of which are in high supply in tech/SV, therefore people should be more deferent to tech/SV/engineers/clever-clogs-types, seems to be the authors' implicit understanding of the rationalist position). But this is also not just wrong but backwards.

You can see this by looking at what leading/famous rationalists have to say about credentialed experts and clever people. One of the key ideas of rationalism is that even, and indeed especially, high-IQ and academically elite people can be and often are very irrational and biased. The idea is that rationality isn't something you automatically get by being clever/educated, but something mostly orthogonal that one has to work to cultivate separately. The writings of notable rationalists like Yudkowsky, Mowshowitz, and Alexander are full of examples of how clever, educated people are frequently making exactly the same errors of rationality that we're all happy to point out and laugh at in anti-vaxxers, creationists, religious fundamentalists, and other such low-status people, but to much greater harm because their words carry authority.

This has been more obvious than ever during the pandemic, as rationalists in general have been relentlessly critical of clever and educated people and argued for a much more decentralised, much more fundamentally democratic attitude to prevail in which everyone should think through the implications of the data for themselves, and not just defer to the authority of intelligent, educated people. I doubt there are many people at the top of the CDC with an IQ below 130 or without a PhD, but I don't think any organisation has come in for more criticism from rationalists over the last 18 months.

Rationalism is absolutely not about deference to the clever and educated, but about recognising that even the cleverest and most educated are often highly irrational, and working hard to learn what we can from experts while having our eyes open to this fact, thinking for ourselves, and also understanding that we ourselves are also naturally highly irrational, and need to work to overcome this internally as well as just noticing the mistakes of others. In that sense, I see it as far more rhetorically democratic than the prevailing attitude in most liberal democracies, and a world away from the kind of faith in the elite that would be required to support further centralisation of decision-making.

The dreaded race issue

I shall now be addressing the race bit. If, like me, your main reaction is a resigned 'oh, my god, not this shit again, please', then please do skip forward to the next bold bit. I'm responding because I'm already writing a response and so, for completeness, feel I should address it, but I wouldn't do so if it appeared alone because I also think it's just not really worth discussing.

This confusion of intelligence and rationality also ruins the authors' attempts to slide in the dreaded race issue. The authors note that rationalists realise that some people are more rational than others, then note that there are also people who think people of some races are more intelligent than others, and so put 2 and 2 together and get 22, concluding that perhaps all these rationalists would be racists if they were honest with themselves.

This fails on so many levels it's difficult to focus on them one by one, but I'll try to do so. First off there's no evidence offered that these are the same people. Some people think there are genetically-mediated racial intelligence differences. The only two I can think of are Charles Murray and Garett Jones, and even then I'm not actually sure if either of them actually does (I'm not hugely interested in race/IQ and so haven't looked into it that much). But note neither of those people identify as rationalists. So how do we know even that the supposed pre-requisite beliefs for this racist downstream belief are even present in the same people?

EDIT: Based on comments, I withdraw this preceding criticism as not fully explored and somewhat contradictory with a later criticism.

Even assuming they are, the issue then falls down because intelligence isn't the same as rationality. Even if our straw rationalist did think there were genetically mediated racial intelligence differences, this wouldn't imply similar rationality differences.

Even assuming that it did, and that this straw rationalist is actually a straw intelligencist who thinks IQ is the sole determiner of ability to make rational decisions, then even buying all the arguments of the race/IQ chaps hook, line, and sinker (which, to reiterate, neither I nor any rationalist I've ever spoken to or read actually does), the differences are tiny and absolutely dwarfed by within-race variance anyway, so just wouldn't make much difference to anything.

EDIT: I also withdraw this above paragraph based on comments, as it's somewhat contradictory with my first (also withdrawn) paragraph, and paragraphs 2 and 4 (below) of my criticism are enough anyway.

And then there's the fact that even if we bought the argument all the way up to this point, it would still fall down on the 'X group of people isn't very rational so should be denied a voice in democracy' step, because, as I addressed first and foremost in this comment, arguments for democracy in no way rest on the assumption that the people voting are paragons of rationality - indeed quite the reverse.

So this just seems like another lazy, failed attempt to tar the world of rationalism with the brush of scientific racism, and to be honest I'm getting irritated with the whole thing by now and even half-hearted allusions to it such as this are starting to really affect my ability to take people seriously.

NRx isn't rationalism, not even a bitty bit.

But then, okay, so I think the response I'd get to my arguments so far, which really all add up to 'none of these anti-democratic/NRx beliefs actually logically flow from the tenets of rationalism in any way', would be for someone to say okay but what about the actual NRx lads - they do actually exist right?

This is, to some extent, fair enough. There are NRx people. Not many, but then there aren't many rationalists either. But yeah there are people who think we should abolish democracy and have some combination of monarchs or CEOs ruling societies a bit like feudal kingdoms, or perhaps Prussia, or perhaps Tesla, or something. Idk man, put two NRx chaps in a room and you'll have 3 different fundamental plans for the re-organisation of society in a way almost precisely engineered to sound like hell on earth to normie libs.

But it's not like these NRx lads are 'rationalists gone bad' or anything of the sort. I've only read Curtis Yarvin and Nick Land 1st-hand, and Michael Anissimov and one other I can't remember the name of 2nd-hand in the context of Scott Alexander ripping chunks out of them for 50k words (which I'm not claiming gives me the same level of insight), and they really aren't just starting with Yudkowsky and Bayes' Theorem then taking a few extra steps most rationalists are too chickenshit to make and ending up at Zombie James I as God-Emperor-CEO.

Land is straight-up continental philosophy, which is about as close to the opposite of rationalism à la Yudkowsky as I can think of, and generally viewed with a mixture of bafflement and derision by every rationalist I've come across. Yarvin occasionally mentions Bayes' Theorem, but in the vague sort of offhand way that loads of Philosophers of Science with all sorts of non-rationalist, non-Bayesian views do. Loads of people understand Bayes' Theorem as a trivially true and useful bit of probability maths without making it the central pillar of their epistemology in the way that rationalists do. Yarvin seems to be a rationalist in so far as he has a basic grasp of probability maths and doesn't make a few basic reasoning errors that most people do if they don't think about probability too deeply. That doesn't make him a rationalist any more than the fact that my mate Wes is really keen on not falling victim to the sunk cost fallacy makes him one (he's not - the man is a fanatical frequentist).

These NRx lads aren't just not rationalists, they're one of the rationalists' favourite foils. Scott Alexander is (reluctantly, it seems) one of the most famous rationalists going, and he sort of got famous for basically writing a PhD thesis about how NRx is nuts. NRx chaps are banned from his discussion forums and from LessWrong, which is a level of exclusion the rationalist community rarely reaches for.

So, shorn of an argument that NRx/anti-democratic sentiment flows naturally from rationalist principles, and without any evidence that many rationalists become NRx, this article just falls back on the most ludicrous guilt-by-association nonsense. Apparently rationalists and NRx 'sit on the same boards' at some companies. Okay, well, I'm sure there are plenty of mainstream Democrats on those boards as well (for all the focus on the oddities, the political culture of SV/tech is overwhelmingly mainstream Democrat), so are all those mainstream Democrats also somehow secret NRx or something?

Apparently rationalists and NRx 'argue using common terminology over contrary ends', which is about the weakest claim of alignment imaginable. It's a really wordy and confusing way of saying the two groups disagree about what is desirable but are capable of communicating that disagreement in a mutually intelligible way. Fanatical pro-life and fanatical pro-choice protestors arguing over whether some particular type of abortion should be legal or illegal can 'argue using common terminology over contrary ends'. This is a statement of no substance, dripping with implied conspiracy but actually claiming basically nothing.

I will admit that NRx and rationalists sometimes talk to each other. Apparently the rationalists don't like doing it as much as the NRx do, so much so that NRx as a subject is banned from a bunch of rationalist discussion spaces, but yeah, they've expressed their disagreements in words that each other understood. Rationalists and NRx often have a decent understanding of each other's positions, and can express where they disagree. I'm just not sure how I'm supposed to step from that to thinking they're secretly in league or whatever the hell the authors are implying here.

Rationalism isn't optimisation.

The article seems to lean pretty hard on the idea that a core tenet of rationalism is reducing things to one quantity that can then be min/maxed to get the desired result. This couldn't be more wrong. One of the most important things I've learned from reading rationalists is how doing this can lead to huge problems. Alexander and Mowshowitz never stop banging on about how relentless optimisation pressure destroys value all over the place all the time, and Bostrom and Yudkowsky have basically been arguing for two decades that optimisation pressure may well destroy all value in the universe if we aren't careful.

I've learned more about the dangers of optimisation from rationalism and rationalism-adjacent authors than anywhere else. Of course, optimisation can also be really good! More of a good thing is trivially good! Finding ways to optimise manufacturing has done amazing amounts of good all over the world! Optimisation can be awesome, but can also be incredibly destructive. I don't know any group of thinkers who are more sceptical of blind optimisation, or who spend more time carefully teasing out conditions in which optimisation tends to be helpful and those where it tends to be harmful, or who are more dedicated to promoting care and precision around how we define what we optimise, lest we do huge damage.

This area is probably the main part of my own thinking where I've found rationalism and rationalists the most helpful, and it's all been in the direction of dragging me towards being less fond of just piling everything into a measure and min/maxing it, so I really don't quite know what to do with a criticism of the movement that claims we're the trigger-happy optimisers.

Using precise language doesn't mean ignoring imprecisely-phrased concerns

One section laments the fact that rationalists focus on problems that can be precisely expressed, and neglect those that can't, or perhaps aren't. There's then a list of examples of things that salt-of-the-earth democratic types talk about that us rationalists apparently arrogantly ignore because they don't fit our standards for what counts as a problem that's useful to solve.

The problem is that topics covered by the list of things rationalists supposedly don't think merit discussion could basically be a contents list for SSC/ACX, the single most popular rationalist blog there is. As far as I can see, there's tonnes of rationalist discussion on these issues.

One thing that there isn't really is a corpus of rationalist comment on linguistic confusions like "Is racism X or is it Y?" Generally rationalists are particularly good at noticing when the same term is being used to describe a bunch of different things and where this is causing people to talk past each other and not grasp each other's positions. Again, SSC/ACX is full of this sort of stuff.

But attempting to bring clarity to terminologically-confused pseudo-debates by bringing in a more linguistically precise approach that doesn't get everyone hung up on "Is the issue one thing or another" when in fact both things are issues worth discussing and a piece of the eventual solution isn't ignoring the issue! Indeed, I'd argue that it's embracing and discussing the issue more fully than either "side" of the "is it X or Y" false dichotomy is doing so, because the rhetorical outcome of taking a firm position on that 'question' tends to be the loss of the ability to even discuss the other half of the issue.

These 'intellectual cul-de-sacs' don't half seem awesome

One of the supposed costs of rationalism's alleged obsession with optimisation and ignorance of the real political issues of the day is the wasting of intellectual resources in a number of 'intellectual cul-de-sacs' like AI risk and effective charity. No definition of an intellectual cul-de-sac is ever given, and no argument made that these research areas meet that non-existent definition, beyond the fact that they're niche fields of study whose results don't command wide popular support. Like Quantum Mechanics in 1920, or Climate Science in 1980.

Obviously it's a bit unfair of me, with the benefit of hindsight, to pick two areas that were once extremely niche fields of study whose main results were not endorsed at large, but are now both hugely consequential and widely endorsed. I can do that looking backwards. Obviously not every minority position ends up becoming mainstream, and not every niche field ends up yielding huge amounts of important knowledge. But the authors don't offer even the slightest justification for why the existence of these fields is a bad thing or why they are a 'cul-de-sac', so I don't really have much of a response except to note that there's no real argument here.

Funnily enough, when a group of people comes up with a sort of fundamental philosophy that's a bit different from the norm, that generally leads them to a few downstream beliefs that aren't held by the majority. This is how intellectual progress happens, but also how random spurs into nowhere happen (I guess this is what they mean by cul-de-sacs, but they don't really say). The fact that these downstream beliefs are non-mainstream doesn't help you tell whether the upstream philosophy is right or not, or whether you're looking at the early phase of a paradigm shift or just a mistake. At this point, those two things look the same.

Tying this up

So I think I've mostly covered my main objections.

A focus on rationality and the notion that most people are not particularly rational does not undermine democracy in the slightest; this argument seems to be based on an assumed justification for democracy that is almost precisely backwards.

The authors conflate intelligence with rationality to make some of their points, when in fact the near-orthogonality of these is core rationalist doctrine.

The race argument is silly as usual.

NRx aren't rationalists, at least not in a central enough way for any of the arguments the authors want to make, so they're left with silly guilt-by-association nonsense no different from the NYT debacle.

Rationalism isn't based around min/max optimisation, and indeed is one of the core communities resisting and warning about the growth of such a way of thinking and working.

Using precise language and refusing to be drawn into false dichotomies that rest on confusing language doesn't count as ignoring the issues addressed by that confusing language, and in fact plenty of rationalists talk about these things all the time.

And yes, rationalists are often involved in minority concerns like AI risk or EA, but that proves nothing about rationality unless you can actually demonstrate that these things are mistakes/bad, rather than just unpopular, which the authors don't do.

There are some other feints at arguments in the piece, but I don't really know how to respond to them as they're mostly just sort of negative-sounding padding. Apparently Tyler Cowen (who seemed to count as a rationalist himself in another part of the piece) reckons rationalism is a 'religion', but there's no explanation as to why, or whether he might be right, or whether whatever he might be right about might imply anything bad. It's just ominous-sounding. When I said I was an atheist to someone once, they said something like "but isn't that just having a religious faith in Dawkins, so aren't you just another sort of fundamentalist" and that wasn't really an argument either.

There's a bunch of similarly ominous-sounding-but-non-specific stuff about how, actually, the first programmers were women and minorities, but no real explanation of why this has anything to do with rationalism, so I don't know how to address it. There's a lot of imagining that Facebook and whatnot are run by rationalists, when this is obviously untrue, and so a lot of the sins of the tech industry in general end up getting transplanted onto a group of people who, as far as I can see, have been doing the hardest work of anyone sounding the alarms about those exact same issues. I'm sure it would be news to the notable rationalist who wrote 'Against Facebook', that Facebook is run by people just like him and on principles he'd endorse, and therefore he's somewhat responsible for the results of their actions.

Doubtless the authors would say that's not what they meant, but the text is so meandering and full of this sort of not-quite-saying-the-thing-they're-clearly-implying-and-want-us-to-feel-is-true that it's hard to pin down what claims they're actually making. Silicon Valley culture (as one commenter points out, the whole idea of SV as the hub of tech is perhaps a bit outdated by now anyway), NRx, rationalism, LW, SSC/ACX, IQ researchers, Venture Capitalism, and more are just sort of seamlessly blended into one morass that can be pointed at as kinda bad. I was hoping something clearly signposting itself as an attempt to look into 'dark side of rationalism' concerns seriously would do better than this.

If I've misunderstood this or any of this is unfair/wrong, then do point this out. I must say I found it a bit annoying so I am probably not firing on all rationality cylinders myself, though I've left a big gap between reading and then commenting and checking back to make sure I'm remembering right to hopefully keep the kind of 'triggered ingroup/outgroup fighting' that the authors have stated they want to avoid at bay, but I can only do so much.

Replies from: Zack_M_Davis, remmelt-ellen
comment by Zack_M_Davis · 2021-08-22T02:39:13.688Z · LW(p) · GW(p)

even buying all the arguments of the race/IQ chaps hook, line, and sinker (which, to reiterate, neither I nor any rationalist I've ever spoken to or read actually does)

In public. Absent some kind of epic infosec fail (like a disgruntled acquaintance leaking private emails), you can't know what beliefs people might have that they're incentivized not to talk about. To this you might reply, "Absence of evidence is evidence of absence: without a reason to think that people are holding back, you could just as baselessly speculate that maybe rationalists are secretly Satanists or flat-earthers."

Is there any such reason? I don't know. There is an interesting Scott Alexander post, "Kolmogorov Complicity And The Parable Of Lightning", about the pragmatic necessity for intellectuals to avoid contradicting powerful orthodoxies. But doesn't that seem like a weirdly specific abstract topic for Scott to write about, unless he had some concrete contemporary example in mind? What do you think he was thinking of?

the differences are tiny and absolutely dwarfed by within-race variance anyway, so just wouldn't make much difference to anything.

Sort of. Statistical group differences can be large enough to have social consequences in at least some contexts, while also being small enough such that the correct Bayesian update about an individual based on group membership is small (not zero, but small). For cognitive ability measures, the current black–white gap in the U.S. is about 0.85 standard deviations (Cohen's d), and the white–Asian gap is at about d ≈ 0.3. Is that "tiny"? I don't know; depends on how you want to use words. For comparison, the male–female height gap is about d ≈ 1.5. Physical sex differences like the fact that men are taller than women on average seem like the kind of background common knowledge that is sometimes decision-relevant (e.g., to product designers) even if tall women exist and you can notice someone's height independently of their sex (um, almost; empirically, your brain does do a Bayesian group-membership base rate adjustment without your conscious awareness).

Replies from: lsusr
comment by lsusr · 2021-08-22T03:02:56.223Z · LW(p) · GW(p)

If intelligence is a Gaussian distribution then small differences in the mean result in massive differences in relative representation at the tails. A small difference in average performance can result in overwhelming differences within elite groups while simultaneously being unimportant to most people.

Replies from: ben-w-den-1
comment by Ben Wōden (ben-w-den-1) · 2021-08-22T11:26:39.707Z · LW(p) · GW(p)

I think both of these are reasonable responses. I brushed over a lot of nuance in this section because I just didn't want this part of the discussion to dominate. I realise that the raw population differences (in the US) are quite chunky (though d=0.83 is higher than I have heard), but what I glossed over completely is that the only not-obviously-silly investigations of the issue I've seen do admit that clearly there are environmental confounders, and then claim that those confounders only account for some of that difference, and that some genetic component remains. These estimations of genetic component (which, to my vague memory, were in the d=0.1-0.3 range) are what I was calling 'tiny'.

However I now realise I was being a bit inconsistent in also saying that I've never seen rationalists endorse this, because actually I have seen rationalists endorse that tiny effect, but never the absurd-on-its-face idea that the entire observed gap is genetic. So I'm using one version of the hypothesis when refuting one step in the argument, and another version in refuting another step. This was wrong of me.

Perhaps even calling the smaller estimate 'tiny' is still a bit harsh because, as lsusr says, this is still enough for the far ends of the distribution to look very different. So I think the best thing is that I drop this entire size-related part of my argument.

Zack is also right that there is a bit of hard-to-observe going on in my 'no rationalists actually believe this' argument as well. I think I'm on solid ground in saying that I don't think more than a negligible number of rationalists buy the hardcore all-the-observed-differences-are-genetic interpretation, but perhaps many are privately convinced there's some genetic component - I wouldn't know. I don't think Kolmogorov Complicity is about that, I think it's about gender issues in CS. But I think the whole point is one can't really tell, so this is also a fair point.

So, on reflection, I drop the 'no rationalists buy these claims' and 'the differences are tiny anyway' parts of my argument as they're somewhat based on different assumptions about what the claims are, and both have their own problems. I will rest my entire 'the race bit is silly' position upon the much more solid grounds of 'but intelligence isn't rationality' and 'even if it were, it wouldn't imply what you imply it implies about who should have political power', which are both problems I have independently with the rest of the piece.

I think a very long, drawn-out argument could still rescue something from my other two points and show that this part of the piece is even weaker than would be implied from just those two errors, but I don't really want to bother because it would be complicated and difficult, the failure of the argument is overdetermined anyway, and talking about it makes me irritable and upset and poses non-negligible risk to the entire community so I just don't see it as worth it in this case.

comment by Remmelt (remmelt-ellen) · 2021-08-24T17:01:59.624Z · LW(p) · GW(p)

There's a bunch I agree on, like that democracy is just one system for aggregating preferences and ideas, that AI safety has become more accepted as a field by academia over time, that the article makes vaguely pejorative associations, and that Yarvin doesn't appear to particularly argue from LW premises (nor to have played any key role in this community, as Christian pointed out). 

Hope you don't mind that I play devil's advocate on your elaborate and thoughtful comment. Will keep it brief, but can expound later.
 

seem to assume that the case for democracy rests on a strong belief in a high and somewhat equally distributed level of rationality and competence on the part of the general public

So my interpretation of the article (haven't asked the original authors) is more that it's saying rationalists rate the worth of an idea, or score someone's thinking competence, across too narrow a set of dimensions. And in that, they fail to integrate outsiders' perspectives in ways that could take the community out of local decision-making optima.

Using precise language doesn't mean ignoring imprecisely-phrased concerns

IMO, some comments on this article itself are strong examples of people failing to pick up on imprecisely-phrased concerns. 

As far as I can see, there's tonnes of rationalist discussion on these issues.

I have also seen rationalist discussions of issues mentioned in this article – e.g. social media recommendation algorithms and filter bubbles. But discussing those issues ourselves is different from interpreting outsiders' concerns about them.
 

NRx isn't rationalism, not even a bitty bit.

I'm trying to triangulate between people's impressions here, since I've only briefly visited Berkeley (so not geographically SV, but I think the authors were really referring to technologists around the San Francisco Bay Area, broadly construed).

Two counterarguments:
1. NRx is a tiny community, so the fact that many rationalists even know about them, have apparently sometimes shared board seats with them, occasionally read their work, and have actively tried to keep them out of the forum indicates that neo-reactionaries are much more connected with the rationalist community than with most other communities out there.

2. Some rationalist figures show a similarly undemocratic mindset of pushing for system changes premised on their own reasoning steps, e.g. Hanson on implementing prediction markets / a futarchy system, while arguing that others don't take his arguments seriously because doing so would go against signalling to their tribe.
 

I really don't quite know what to do with a criticism of the movement that claims we're the trigger-happy optimisers.

So the writing does seem to attribute naive ways of optimising stuff to rationality. But that wasn't the core of the criticism. The core critique was that we tend to distill societal issues into optimisation problems, whether or not we see optimisation as good or bad (e.g. thinking about how to do the most good; worrying about Goodharting or an instrumentally converging AGI). And this view has its own implicit set of assumptions baked in.

Similarly, we talk quite a lot about what drives human (and, come to think of it, artificial) intelligence (also in this comment thread), including genetic factors, even though the discussions I've read seemed a lot more nuanced and socially considerate than the authors let on. Edit: I also agree that the article conflates g factor with what we would call epistemic rationality, and that although the community also prizes the former to some extent, rationalists do clearly distinguish the two, encouraging diligent practice in refining judgement for the latter.
 

comment by Gordon Seidoh Worley (gworley) · 2021-08-20T18:56:09.553Z · LW(p) · GW(p)

This feels very much like an outsider's take on both Bay-area tech culture and LW rationality. As much as anything, they give this away by referring to it as Silicon Valley all the time (locals sometimes talk that way, but the dynamic center of the industry has moved north to San Francisco, so that Silicon Valley feels more like the stodgy place you go to ask for money), and by constantly using "rationalism" interchangeably with "rationality", without seeming to realize that the LW rationality movement basically never says "rationalism", to avoid the name collision with traditional rationalism in philosophy. This immediately makes me doubt the author is going to have much of interest to say, because they don't seem to have engaged enough with the thing they're analyzing to actually understand it.

I also don't see a lot of evidence that the author's interpretation of events is even a good summary of what happened. It feels much more like they looked at some headlines, tried to tie it into a narrative that fit their preconceived notions of how the world works, and then spun it as some lesson they could give back to folks in the Bay area and the rationalist community. Even if they are accidentally right, I'm unconvinced because their method appears flawed.

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2021-08-21T04:13:47.610Z · LW(p) · GW(p)

To be fair, I was reviewing their doc, and even I didn't pick up on how 'rationalism' could be confused with its philosophical counterpart (although 'rationality' did intuitively click better for me). I would say I'm medium-involved in the rationality community, having attended a CFAR workshop and other gatherings, read a bunch of LW and SSC posts, etc.

Do you happen to have ideas on a different methodology? (No worries if you haven't; I don't want to put you on the spot here!)

comment by Dagon · 2021-08-20T18:49:12.089Z · LW(p) · GW(p)

I think the authors are right that there's probably no way to share this broadly that doesn't induce ingroup/outgroup reactions and non-nuanced "debate" at high volumes. However, I think it ALSO doesn't go far enough to be useful in smaller-group debates or analysis. It tiptoes around some of the elements that actively need to be debated and understood - specifically, the very concepts of individual liberty and resource ownership in a world where none of us are all that rational or altruistic.

A lot of vitriol against SV is around the idea and implementation of "meritocracy", and whether it's just a mechanism to perpetuate social and gender disparities or a powerful tool for optimization. I think "meritocracy" is similar to "socialism" in that there are examples of it working well and of horrific excesses, and some aspects on some margins are improvements, but it's not a useful roadmap for ... anything. But boy howdy do people get heated up over debating them. This is likely because both ideas have embedded but unexamined (or at least not universally agreed) theories about individual vs. group valuation of resource control.

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2021-08-21T04:09:54.254Z · LW(p) · GW(p)

This resonates, particularly re: 'the very concepts of individual liberty and resource ownership'. 

I appreciate this nuanced perspective.

comment by CraigMichael · 2021-08-22T00:18:02.412Z · LW(p) · GW(p)

At the risk of losing karma, let me see if I can spin some of anon's points into gold… err… silver? Maybe bronze? I think this is a bronze-worthy observation. I hope. :)

Consider the distinction between two disciplines. The first is Science Communication, which is exactly what it sounds like: communicating the results of science to a general audience (https://en.m.wikipedia.org/wiki/Science_communication).

The second is the Science of Science Communication, which is about how to make Science Communication more effective, because there are many non-obvious ways it can go wrong (like the tendency of highly numerate people not only to be unpersuaded by evidence that contradicts their world view, but to become even more polarized in that view despite the contradictory evidence): https://www.pnas.org/content/114/36/9587 and https://www.vox.com/2014/4/6/5556462/brain-dead-how-politics-makes-us-stupid

Perhaps there is a lesson to be taken here: assuming good faith on the part of the authors, it may be beneficial for us to take steps to avoid being misunderstood in such common ways so frequently. Perhaps we're bad at communicating rationality to a general audience?

So I’m thinking something along the lines of a “rationale of rationality communication” to bring a sensitivity to the state of mind of those who are likely to mischaracterize aspiring rationalists, to help prevent actual mischaracterization.

It may be worth taking something like what Ben wrote and adding some examples and making something like “An Aspiring Rationalist FAQ” to dispel these? At least if they came up again we could say “look, LessWrong has a list of ways that rationality is commonly misunderstood” and it would hopefully reduce the mischaracterizations and make the dialogue on these points more productive?

P.S. One exchange that comes to mind — I don’t think we’re like Dawkins, but I think Neil’s point from 2006 here is always worth meditating on regarding having a sensitivity to the state of mind of one’s audience. https://youtu.be/-_2xGIwQfik

(Edits for clarity)

Replies from: ChristianKl, remmelt-ellen
comment by ChristianKl · 2021-08-22T20:32:58.841Z · LW(p) · GW(p)

Perhaps we’re bad at communicating rationality to a general audience?

Our community has generally not tried to communicate rationality to a general audience. 

Julia Galef's book "The Scout Mindset" is an example of communicating rationality to a general audience. I don't think that someone who reads it without prejudice would come up with the above article.

When it comes to the rationality community, it's also worth noting that we generally hold positions that are very complex. A normal person is not going to understand how physicists think about physics by reading a FAQ, and they won't understand how rationalists think about thinking by reading a FAQ either. The positions are just too complex to get from a FAQ.

comment by Remmelt (remmelt-ellen) · 2021-08-24T16:25:50.422Z · LW(p) · GW(p)

In the cases where rationalists are communicating rationalist thought to outsiders, I agree that it's important to get better at presenting those thoughts in ways that help outsiders understand and don't leave an instinctive bad taste in their mouths.

This seems to rest on being able to understand outsiders' perspectives better and trying to explain rationalist ideas starting from their end. IMO, we are short on field builders who can do that intent listening and interpretation work.

TBH, I also worry that this framing is a red herring. There are cases where

  1. outsiders really do get some core parts of rationalist mindsets,
  2. they try to point out thinking traps there in vague or provocative-sounding ways that don't connect for rationalists, and
  3. rationalists jump to confident conclusions about what the outsiders meant, but fail to step outside of their own paradigms to interpret the key points those outsiders are trying to convey.

comment by ChristianKl · 2021-08-21T13:01:44.334Z · LW(p) · GW(p)

What is more important is what specifically links SV’s apparent rationalism to NRx attitudes. Given the way technologists’ dreams are increasingly shaping our future, we have a right to know what these dreams hold.

Having open discussions on LessWrong seems to me a very democratic way of going about discussing dreams of the future. There's some hurdle of needing a certain amount of intellectual capacity to engage with it, but it's very different from doing things in a closed way.

It's quite ironic that the authors of this article don't stand by it with their own names, which tells you a lot about what value they put on transparency in practice.

Tetlock found that conventional experts tied to a particular domain were overrated in their ability to predict the future. Instead, open-minded generalists could train to become “superforecasters” enabling them to forecast a future event relatively accurately by thinking logically about similar cases in the past.

Tetlock's work seems to me to have a very democratizing conclusion. Tetlock found that it doesn't take deep subject matter experience or Mensa-level intelligence to make good predictions. GJOpen is in itself a very democratic endeavour.

Tetlock did find that thinking about similar cases in the past is part of being good at forecasting, but I don't know what the word "logically" does in that sentence.

If you wanted to strawman Tetlock, I'm not sure there's a way to do it better than what this article does.

I have the impression that the author thinks that "a normal person can learn to forecast better than domain experts" somehow implies being undemocratic, because they see "democracy" as being about listening to them and their domain-expert friends.

Rationalists weren’t simply more tolerant of contrarian arguments about, for example, racial or gendered differences in intelligence or capacity to learn and work.  [...] These core disagreements reflected and helped shape how SV thought about politics. 

To me that looks like either purposeful strawmanning or incompetence. Lawrence Summers focused on those over at Harvard. SV political discourse in recent years was mainly about differences in motivation, not gendered differences in intelligence or capacity to learn and work.

Engineers, then, tend to think of their work as a series of “optimization problems,” transforming apparently complex situations into an “objective function,” which ranks possible solutions, given the tradeoffs and constraints, to find the best one and implement it.  

I don't see that anywhere. Even GiveWell doesn't have a charity ranking, but uses other ways to present their charity evaluations.

As a whole, rationalists are much more hackers than they are engineers. Part of being a hacker is constantly switching between layers of abstraction rather than staying with one layer and focusing on just that.

More in the Silicon Valley space, there's a reason that YC focuses on telling people to talk to their users. Having metrics is important, but a good Silicon Valley startup isn't built on optimizing metrics alone.

An analysis that pretends that there are no hackers in Silicon Valley and completely ignores the hacking ideology is going to do a very poor job at capturing what Silicon Valley is about. 

To go back to our rationality community, the idea that it focuses on optimization completely misunderstands it. We don't have clear metrics for rationality that we could optimize. I think it would be great if we had such a metric to test our rationality training, but so far we haven't pursued rationality training as an optimization problem.

Inadequate Equilibria [? · GW] came to the conclusion that most of the bigger problems we have in the world aren't optimization problems, but are about finding ways to align incentives between different agents or building coalitions.

Curtis Yarvin, who had played a significant role in the early days of LessWrong

That would be news to me and I have been around LessWrong for a long time.

However, sometimes Rationalism didn’t counter the biases of its practitioners or speed their convergence on learning wisdom available in other traditions…

This is wrong. You find plenty of posts about Circling or meditation on LessWrong. Rationalists are one of the communities that did manage to adopt hand signs for approval after it turned out that this is very useful cultural technology. 

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2021-08-21T17:20:01.254Z · LW(p) · GW(p)

Thanks for the points.

The ‘hackers rather than engineers’ point is interesting - what are some strong examples of people in the community acting like the former rather than the latter?

I actually think GiveWell analysts' approach of filling in their estimates for each charity in a spreadsheet (or Charity Entrepreneurship's intervention selection processes) looks a lot like ranked optimisation. It's not as robotic as the authors made it sound, though (e.g. the cluster thinking approach).

Agreed re: how we pursue rationality training.

The point on aligning incentives between different stakeholders is a good one, though I actually think the community has neglected that kind of multi-agent analysis in the past (e.g. see Critch's recent post), and the arguments I have seen seem premised on abstractions and metrics of success that make obvious sense to us (as far as I can tell, they involved few real-life human-to-human conversations trying to interpret how other stakeholder groups perceive the world differently, or building a broader consensus with them).

For the other points (e.g. LessWrong conversations being open, save some intellectual capacity needed to engage), it might be better to chat back and forth about them in a call some time! To me, most of them seem to capture a narrowly relevant perspective, but then jump to a ‘therefore this written sentence is clearly wrong’ conclusion from there.

One clarification – I myself decided not to mention the authors' names, so that is on me. Two reasons: it might have provoked more instinctual negative reactions (and assumptions) about the authors, and I had also made edits that the authors didn't have the spare time to look through properly.

Replies from: ChristianKl
comment by ChristianKl · 2021-08-22T06:23:01.132Z · LW(p) · GW(p)

GiveWell currently recommends nine charities. I don't think their numbering of those asserts a ranking.

If you ranked the charities by an optimization metric, the difference between the 9th- and 10th-ranked charities would likely be similar to the difference between the 4th- and 5th-ranked ones.

Rather, GiveWell gives a list of charities that they consider worthy of donations, and then it's up to the donors to pick from that list the ones that feel best to them. Among the available ways of recommending charities to give to, that's not the one that favors ranks. They easily could have chosen ranks but decided against it.

I don't remember any significant post on LessWrong that ranks solutions to a problem, or one that recommends ranking solutions.

The ‘hackers rather than engineers’ point is interesting - what are some strong examples of people in the community acting like the former rather than the latter?

Most of the AI safety discourse comes from the hacking perspective as it grows out of general security thinking where you have to think through all the layers of a problem and optimizing any single point of failure doesn't bring you safety.

A lot of the rest of the Xrisk debate is also more from a safety perspective where the paradigms come out of the hacker movement.

GiveWell might be more in the middle between the two poles, but as far as I understand, a GiveWell report tries to tell you everything that's worthwhile to know about a given charity, not just the things captured by the metrics.

I don't think there's anyone I would see as a core part of the rationalist community who's more engineer than hacker.

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2021-08-24T16:04:22.152Z · LW(p) · GW(p)

I don't think their numbering of those asserts a ranking.

I think defining outputs in technically precise ways isn't very useful in this case (I think it obfuscates the thrust of the argument). I'm saying something more like 'these charity selection procedures look roughly like ranking charities on weighted criteria and getting the top ones funded, even though GiveWell staff don't literally mention that the top charity on their webpage is their no. 1 charity.'
 

Most of the AI safety discourse comes from the hacking perspective as it grows out of general security thinking where you have to think through all the layers of a problem and optimizing any single point of failure doesn't bring you safety.

Nice! This was clarifying, thanks.
 

Replies from: ChristianKl
comment by ChristianKl · 2021-08-24T17:46:24.638Z · LW(p) · GW(p)

GiveWell doesn't have a metric based on which it decides whether charity 1 is ranked above or below charity 2. 

Its charity recommendations are instead based on the assumption that different people value different things. Some people might want the more uncertain intervention of deworming, which potentially has a very large effect, while others want bed nets, where the effects are much clearer.

That's exactly the alternative to weighting all the criteria explicitly and ranking the charities. It leaves the weighting of the different criteria implicit and allows the reader to do their own weighting.

I do think that having an explicit weighting versus a more intuitive, implicit one is a significantly different approach.
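To make the contrast concrete, here is a minimal sketch with made-up charities, criteria, weights, and scores (not GiveWell's actual model or numbers), showing explicit weighted ranking versus publishing per-criterion scores and leaving the weighting to the reader:

```python
# Hypothetical per-criterion scores for three made-up charities.
charities = {
    "Charity A": {"cost_effectiveness": 8, "evidence_strength": 9, "room_for_funding": 5},
    "Charity B": {"cost_effectiveness": 6, "evidence_strength": 7, "room_for_funding": 9},
    "Charity C": {"cost_effectiveness": 9, "evidence_strength": 4, "room_for_funding": 7},
}

# Approach 1: explicit weights collapse everything into one score and a ranking.
weights = {"cost_effectiveness": 0.5, "evidence_strength": 0.3, "room_for_funding": 0.2}

def weighted_score(scores):
    return sum(scores[criterion] * weight for criterion, weight in weights.items())

ranking = sorted(charities, key=lambda name: weighted_score(charities[name]), reverse=True)
print("Explicit ranking:", ranking)

# Approach 2: publish the raw per-criterion scores and let each reader apply
# their own (possibly implicit) weighting when deciding where to donate.
for name, scores in charities.items():
    print(name, scores)
```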

Replies from: remmelt-ellen
comment by Remmelt (remmelt-ellen) · 2021-08-24T20:47:29.014Z · LW(p) · GW(p)

Sounds like a fair distinction. At the very least, GiveWell has been leaving room for their analysts and readers to personally plug in how they value trade-offs between different desiderata (e.g. increases in yearly income vs. DALYs), as well as ways of judging the effectiveness of a charity and its intervention. Just scanning through their sheet again: https://docs.google.com/spreadsheets/d/16XOOB1oWse1ICbF0OVXUYtwWwpvG3mxAAQ6LYAAndQU/edit

Having said that, there are hundreds of charities that haven't passed through their vetting, which I assume in many cases involved checking whether those charities measured up in cost-effectiveness against some set of metrics.

Even acknowledging the nuances you point out, it's hard for me not to see the shared analytical processes that GiveWell staff use as softened versions of ranked optimisation.

Replies from: ChristianKl
comment by ChristianKl · 2021-08-25T06:15:02.299Z · LW(p) · GW(p)

'Ranked' is a word with a clear meaning, and it's not what they are doing.

GiveWell writes long charity evaluations and then, after having made them, evaluates the charities by taking all the quantitative and qualitative information from their reports together and making a judgement about which charities perform better on their criteria.

If you look at the scores in the linked document, you see that GiveDirectly scores an order of magnitude worse in the "against cash" results tab. Yet GiveDirectly is still one of the recommended charities. It wouldn't be if charities were simply ranked by the metric in the results tab.

Most of the time, when you make a strawman out of something, the real thing is a softened version of the strawman. I don't think that excuses strawmanning in any way when you claim to want a constructive discussion.

The extreme transparency that GiveWell practices, in order to notice when they make mistakes, is also a part of their operating philosophy that matters when you are talking about enabling democratic dialogue. It's again quite different from this article, which eschews any transparency. Of course you or the authors can say "we care about other things more than transparency", but that's still a value judgement that puts transparency lower on the list of priorities. I also think that authors who allow you to publish their work without attribution share responsibility for their work being published without attribution.