Dominic Cummings: how the Brexit referendum was won
post by The_Jaded_One · 2017-01-12T21:26:02.639Z · LW · GW · Legacy · 69 comments
This is a link post for http://blogs.spectator.co.uk/2017/01/dominic-cummings-brexit-referendum-won/
Comments sorted by top scores.
comment by moridinamael · 2017-01-12T21:31:00.460Z · LW(p) · GW(p)
Holy shit.
We evolved to make sense of this nonlinear and unpredictable world with stories. These stories are often very powerful. On one hand the work of Kahneman et al on ‘irrationality’ has given an exaggerated impression. The fact that we did not evolve to think as natural Bayesians does not make us as ‘irrational’ as some argue. We evolved to avoid disasters where the probability of disaster X happening was unknowable but the outcome was fatal. Rationality is more than ‘Bayesian updating’. On the other hand our stories do often obscure the branching histories of reality and they remain the primary way in which history is told. The mathematical models that illuminate complex reality in the physical sciences do not help us much with history yet. Only recently has reliable data science begun to play an important role in politics.
This was not the content of an article I expected to be written by the mind behind Brexit.
Replies from: waveman, The_Jaded_One, username2, MrMind
↑ comment by waveman · 2017-01-12T22:46:49.397Z · LW(p) · GW(p)
And
Generally the better educated are more prone to irrational political opinions and political hysteria than the worse educated far from power. Why? In the field of political opinion they are more driven by fashion, a gang mentality, and the desire to pose about moral and political questions all of which exacerbate cognitive biases, encourage groupthink, and reduce accuracy. Those on average incomes are less likely to express political views to send signals; political views are much less important for signalling to one’s immediate in-group when you are on 20k a year. The former tend to see such questions in more general and abstract terms, and are more insulated from immediate worries about money. The latter tend to see such questions in more concrete and specific terms and ask ‘how does this affect me?’. The former live amid the emotional waves that ripple around powerful and tightly linked self-reinforcing networks. These waves rarely permeate the barrier around insiders and touch others.
Something for LWers to think about. Being smart can make you more susceptible to some biases.
Replies from: niceguyanon
↑ comment by niceguyanon · 2017-01-13T14:37:28.567Z · LW(p) · GW(p)
Being smart can make you more susceptible to some biases.
Agree but Dominic is making a much stronger claim in this excerpt, and I wish he would provide more evidence. It is a big claim that
- the more educated are prone to irrational political opinions
- those on average incomes are less likely to express political opinions to send signals.
These are great anecdotes but have there been any studies indicating a link between social status and willingness to express political views?
Replies from: bogus
↑ comment by bogus · 2017-01-13T17:00:38.890Z · LW(p) · GW(p)
the more educated are prone to irrational political opinions
I'm quite sure that this is wrong, actually - more educated folks still have better opinions about policy, but only weakly so. Bryan Caplan has pointed this out in his work on the irrationality of the common voter. It becomes right, though, when you control for rational judgment in private, non-political contexts - education greatly improves that, and you would expect it to have the same effect on political judgments. But it doesn't, really.
those on average incomes are less likely to express political opinions to send signals.
It's often pointed out that lower-income folks tend to be politically apathetic, so to the extent that they do have opinions on policy you would expect these to be less influenced by signaling dynamics. But signaling is not the only source of error (involving both random noise and persistent bias) in political judgments!
Replies from: waveman
↑ comment by waveman · 2017-01-13T23:42:04.686Z · LW(p) · GW(p)
more educated folks still have better opinions
In many cases yes I agree.
He argues that very few people, educated or not, actually have any strong factual or logical basis for their opinions.
But I think the more important point is that educated people do have some specific failure modes. From other sources:
- Hate to admit they are wrong.
- Over-complicate things.
- Tend to privilege theory over observation and simple heuristics.
- Focus on being right versus winning.
- Deny the existence of things they don't understand.
- Fail to communicate with people of average intelligence (typical mind fallacy).
↑ comment by The_Jaded_One · 2017-01-12T21:40:29.843Z · LW(p) · GW(p)
It seems like Brexit was basically a small group of rationalists hijacking history. A Remain victory was overwhelmingly likely without the competence of the Leave campaign. Pretty impressive.
Of course I'm sure there's another side to this story, so take it with a pinch of salt.
Replies from: whpearson, MrMind, username2
↑ comment by whpearson · 2017-01-13T09:10:44.110Z · LW(p) · GW(p)
They are rationalists in the sense that they won through the application of their intellect. I don't think that they were rationalists in the sense that they want to raise the rationality waterline.
I was hoping for the rational reason why Britain should have left (I'm only a portion of the way through), instead of the lies about spending the money currently given to the EU on the NHS (money they had no way of influencing, and they made no plan for the current projects that would lose EU funding).
Replies from: username2, The_Jaded_One, bogus, The_Jaded_One, Douglas_Knight
↑ comment by username2 · 2017-01-13T09:39:27.397Z · LW(p) · GW(p)
They are rationalists in the sense that they won through the application of their intellect. I don't think that they were rationalists in the sense that they want to raise the rationality waterline.
While we shouldn't get hung up on definitions, I'm pretty sure the most common meaning of "rationalist" in this community is the former, not the latter.
Replies from: whpearson
↑ comment by whpearson · 2017-01-13T18:55:33.449Z · LW(p) · GW(p)
Things may have changed in 8 years. I'm not sure if you noticed that I got the phrase "raising the sanity waterline" from this site's founder.
I'm a bit sad that this has been lost if it has.
Replies from: philh, username2
↑ comment by philh · 2017-01-16T11:07:11.345Z · LW(p) · GW(p)
He says "rationalists have a lot of work to do", but I don't think that implies "people who don't want to do this work are not rationalists".
If someone uses their brainpower to reliably win, but isn't interested in helping others do the same, I think you could say something like "they are rationalists, but not our kind of rationalists". That would be totally reasonable. But I don't think you could say "they aren't rationalists". The argument that exploiting others' irrationality is net bad in the long run is fairly specific and not obviously true for all sets of values and beliefs.
(This is a separate question from whether Cummings and Vote Leave are such people.)
Replies from: username2, whpearson
↑ comment by username2 · 2017-01-18T10:38:19.778Z · LW(p) · GW(p)
If someone uses their brainpower to reliably win, but isn't interested in helping others do the same, I think you could say something like "they are rationalists, but not our kind of rationalists".
I'm not sure this is even reasonable. There's a quiet majority of people on this site and other rationality blogs and in the real world (including Dominic Cummings, apparently) who learn these techniques and use their rationalist knowledge to "win." And they don't give back, other than their actions on the world stage. And personally, I think that's okay. Not everyone needs to take on the role of teacher.
Replies from: philh
↑ comment by whpearson · 2017-01-16T21:31:32.701Z · LW(p) · GW(p)
I consider what constitutes a modern rationalist to be up for our own definition. My wife gets annoyed that LW-style rationalists aren't like philosophical rationalists.
I don't know of any other community apart from this one that uses rationalist to mean someone who uses their brainpower to reliably win.
Cummings probably doesn't consider himself a rationalist (he used the term pejoratively in the article). So I read The_Jaded_One's comment describing them as rationalists as akin to saying they were part of the in-group, someone to be admired/emulated.
I'm an uneasy sometime member of the "rationality" community; I've been to a few LW meetups. So I'm interested in what people mean when they say someone is a rationalist. Is that the sort of person I will be hanging out with if I go again?
Replies from: philh
↑ comment by philh · 2017-01-17T11:03:56.822Z · LW(p) · GW(p)
So, the thing I want to emphasize is that the community is not about the community. Other communities are about nothing more than themselves, and that's fine, but this community has purpose. We can't define a rationalist as being a member of the community, or that purpose gets lost.
So we have to be able to ask whether someone is a rationalist without talking about the community. We might decide later that yes, they're a rationalist, but all the same we don't think they're a good person and we don't want them in the community; but those have to be separate questions.
(Other communities use the word "rationalist" differently to us, and that's fine too. I don't claim there's an objective definition of the word. Just that we need to use a definition that doesn't talk about us.)
Similarly, rationality can't be about rationality. If the goal of rationality is merely to spread rationality, then rationality might as well be herpes. If the goal is "to win, and also to spread rationality", you have to ask what happens if those two goals conflict. Maybe you go for something like "the goal of rationality is to win, on the condition that part of winning means spreading rationality", but that seems like an unnatural carving of concept-space. And I question what the point is; if the point is simply that there are certain types of people who win but whom we don't like very much, then we're going about it wrong. Instead of excluding them from the definition of "rationalists", we can just exclude them from the community.
All that said, I personally wouldn't exclude Cummings from the community, not based on this post. I don't think you're likely to meet many people like him at LW meetups, but as far as I'm concerned he'd be welcome at the London group.
Replies from: whpearson
↑ comment by whpearson · 2017-01-17T20:25:03.738Z · LW(p) · GW(p)
And I question what the point is; if the point is simply that there are certain types of people who win but who we don't like very much, then we're going about it wrong. Instead of excluding them from the definition of "rationalists", we can just exclude them from the community.
But the community is defined as a rationalist community, right? Not a specific type of rationalist, just plain simple rationalist. If we can't explain why some people would be excluded from it, then the community seems ill-defined and likely to drift and fall apart. Why do we even have one?
We could define rationalists as any of the following:
- people who want to use their brain meats to make humanity win (i.e. not lose and go extinct)
- people who want to have correct and useful beliefs about the world and spread those beliefs and the methods of generating them, aka epistemic rationalists.
Either of those would fit a large part of the people on LessWrong and capture bits of the spirit of CFAR.
A community that is truly only about "people that use their brain to win" has members with very little useful to say to each other. Under many goal/belief systems I should hide my goals and beliefs so that people can't interfere with them. I should actively mess up other people's goal and belief systems so that they are ineffectual agents.
You could, for example, use user research and marketing to generate highly persuasive materials to convince people to join an evangelical church and get lots of money from those people. If your goal was simply to get lots of money, would you count as a rationalist?
Replies from: philh
↑ comment by username2 · 2017-01-14T05:26:44.691Z · LW(p) · GW(p)
If the point of "rationality" is evangelism, count me out. But anyway if you want to point to EY quotes, then consider "rationalists win" or the 12 virtues of rationality (which are about winning, not evangelizing).
Replies from: whpearson
↑ comment by whpearson · 2017-01-14T09:40:59.795Z · LW(p) · GW(p)
It is not about evangelising for me. It is about not using tool sets that rely on other people being irrational. If your incentives are to keep people uninformed so that they will do what you want and you "win", then you are reinforcing the status quo of a world of misinformation, fraud and spin. This, I think, will cause us all to lose long term.
↑ comment by The_Jaded_One · 2017-01-13T09:48:21.832Z · LW(p) · GW(p)
If you read the whole thing (quite an ask, I know!) then Cummings does go into how he thinks we can fix politics.
He also gives his argument as to why leave was the right choice, but that section is fairly brief.
↑ comment by bogus · 2017-01-13T19:21:56.005Z · LW(p) · GW(p)
lies about spending the money currently given to the EU on the NHS
What they said is that in the longer run, money that used to go to the EU could be redirected to domestic priorities, including the NHS. And many current destinations of "EU funding" are quite silly indeed - do you think paying wealthy English landowners to mismanage their land is a good use of funding, whether "EU" or otherwise?
Replies from: whpearson
↑ comment by whpearson · 2017-01-13T20:36:05.716Z · LW(p) · GW(p)
I'm cynical enough to think that big landowners will still get paid to mismanage their land. They managed to get the EU to do it, I suspect they'll manage to get Britain outside the EU to do it.
I'm intrigued to find out Cummings's solution to the political classes; I've not found it in all the verbiage yet, though.
↑ comment by The_Jaded_One · 2017-01-13T10:14:20.902Z · LW(p) · GW(p)
ctrl+f "Why do it?" and "The political media and how to improve it" in the article
↑ comment by Douglas_Knight · 2017-01-20T21:03:04.982Z · LW(p) · GW(p)
I think Cummings wants to "raise the sanity waterline." But rather than argue about that, I think a better definition of "rationalist" is someone who writes about how to think and how to win, particularly in a way comprehensible to LW. He certainly fits that definition.
(I would like to exclude Scott Adams who claims to write about these subjects, and from whom I do learn, but who does not write precisely.)
↑ comment by MrMind · 2017-01-13T15:41:08.313Z · LW(p) · GW(p)
Maybe Leave won regardless of or even despite my ideas. Maybe I’m fooling myself like Cameron. Some of my arguments below have as good an empirical support as is possible in politics (i.e. not very good objectively) but most of them do not even have that. Also, it is clear that almost nobody agrees with me about some of my general ideas. It is more likely that I am wrong than 99% of people who work in this field professionally.
He himself warns against being construed as too influential. In this case Scott's caveat applies: elections that are won by a slim margin don't say much of significance.
Replies from: The_Jaded_One, NatashaRostova
↑ comment by The_Jaded_One · 2017-01-13T15:55:59.164Z · LW(p) · GW(p)
His argument is that although Leave won by a small majority, it should have lost by a very large majority (for various reasons, particularly that the status quo has an advantage in these things) and that that is the large difference we should be thinking about.
Replies from: Jiro
↑ comment by Jiro · 2017-01-13T16:35:56.433Z · LW(p) · GW(p)
I'm pretty sure that in Trump vs. Clinton, Clinton would have won by a large majority if Trump didn't campaign. But it would be silly to say "Trump should have lost by a large majority" on that basis.
Saying "one side should have lost because of X" implies that X has outsized effect on one side compared to the other. But telling political stories is, like campaigning, something that both sides do and which they pretty much have to do to have a reasonable chance at winning.
Replies from: The_Jaded_One, The_Jaded_One
↑ comment by The_Jaded_One · 2017-01-13T17:07:28.724Z · LW(p) · GW(p)
I think the comparison in the case of Cummings and Brexit is to what other pro-Leave campaigns would have done, rather than to no campaign at all.
Replies from: Jiro
↑ comment by Jiro · 2017-01-13T21:29:33.839Z · LW(p) · GW(p)
The point is that saying "they wouldn't have won if they didn't do X", in a context where you are trying to say something useful, implies that X is some special thing that was only done by them, not that X is something that everyone does. Nobody says "Trump would have lost if he had failed to breathe", because everyone running a campaign needs to breathe and saying that you don't win if you don't breathe is obvious, trivial, and tells you nothing special about Trump.
And "the pro-Brexit campaign did special things which the anti-Brexit campaign did not also do" has not been well-supported here.
Replies from: The_Jaded_One
↑ comment by The_Jaded_One · 2017-01-13T21:48:47.382Z · LW(p) · GW(p)
Well according to the article, he and his team did do special things. Of course you may not believe that, but he presents a plausible narrative.
↑ comment by The_Jaded_One · 2017-01-13T17:10:01.419Z · LW(p) · GW(p)
Clinton would have won by a large majority if Trump didn't campaign
I wonder what would have happened if Trump had run a very boring, straight-laced campaign though?
↑ comment by NatashaRostova · 2017-01-13T22:37:19.335Z · LW(p) · GW(p)
I think it's fair to argue that elections that are won by a slim margin don't say much of significance about discrete narrative changes in the weeks leading up to the election. That could be false though, if for example we view Trump winning the election as a 'treatment' effect, which gives him a new discrete ability to change the narrative.
But more generally, I think an election such as Brexit does give us a significant story, not necessarily for the week leading up to it, but for the changing preferences of a population in the year or two leading up to it and the invocation of the election itself.
↑ comment by username2 · 2017-01-13T09:37:37.245Z · LW(p) · GW(p)
An argument for embracing, not avoiding "mind killing" politics?
Replies from: The_Jaded_One
↑ comment by The_Jaded_One · 2017-01-13T10:09:50.668Z · LW(p) · GW(p)
The problem with politics is that discussion of it tends to devolve into a toxic mess that serves no useful purpose, doesn't inform anyone, and doesn't make the site better.
Sure, there are benefits to be had from discussing politics on a rationality site, but I can see the argument against it: previous attempts have devolved into the toxic mess instead of yielding any insight.
Replies from: username2
↑ comment by username2 · 2017-01-13T16:15:44.866Z · LW(p) · GW(p)
This thread seems to not fit that pattern. The only annoying content is related to moderation.
Replies from: Manfred, The_Jaded_One
↑ comment by Manfred · 2017-01-13T18:41:47.258Z · LW(p) · GW(p)
This thread doesn't fit that pattern largely because LW users are aware of the problems with talking about politics and are more likely to stay on the meta-level as a response to that. There is, in fact, not a single argument for/against brexit in this thread, which I think is a shining advertisement for LW comment culture. On the other hand, I think this article is also particularly well-suited for not immediately inspiring object-level argument, at least as long as it's not posted on /r/news or similar.
Replies from: 9eB1
↑ comment by 9eB1 · 2017-01-14T02:27:49.268Z · LW(p) · GW(p)
Part of the reason is also that this is a UK issue and most LessWrong readers are not from there, so people have a little bit more of an outsider's or non-tribalist perspective on it (although almost all LW commenters would certainly have voted for Remain).
↑ comment by The_Jaded_One · 2017-01-13T16:22:07.699Z · LW(p) · GW(p)
Yeah, I mean I think there are successes and failures, and I personally think that LW should try to talk more about "real" issues like politics.
↑ comment by username2 · 2017-01-13T09:35:24.713Z · LW(p) · GW(p)
This was not the content of an article I expected to be written by the mind behind Brexit.
Why? Rationalists are more likely to embrace weird or counterintuitive positions supported by chains of reasoning. I don't mean this as a bad thing. I would think the probability of a rationalist being behind a weird and unconventional position is higher than baseline.
Replies from: plethora
↑ comment by plethora · 2017-01-14T22:15:47.380Z · LW(p) · GW(p)
Right, and he addresses this in the article:
This lack of motivation is connected to another important psychology – the willingness to fail conventionally. Most people in politics are, whether they know it or not, much more comfortable with failing conventionally than risking the social stigma of behaving unconventionally. They did not mind losing so much as being embarrassed, as standing out from the crowd. (The same phenomenon explains why the vast majority of active fund management destroys wealth and nobody learns from this fact repeated every year.)
We plebs can draw a distinction between belief and action, but political operatives like him can't. For "failing conventionally", read "supporting the elite consensus".
Now, 'rationalists', at least in the LW sense (as opposed to the broader sense of Kahneman et al.), have a vague sense that this is true, although I'm not sure if it's been elaborated on yet. "People are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing" (e.g. "political actors are more interested in going through the conventional symbolic motions of working out which side they ought to be on than in actually working it out") is widespread enough in the community that it's been blamed for the failure of MetaMed. (Reading that post, it sounds to me like it failed because it didn't have enough sales/marketing talent, but that's beside the point.)
Something worth noting: the alternate take on this is that, while most people are more interested in going through the conventional symbolic motions of doing a thing than they are in actually doing the thing, conventional symbolic motions are still usually good enough. Sometimes they aren't, but usually they are -- which allows the Burkean reading that the conventional symbolic motions have actually been selected for effectiveness to an extent that may surprise the typical LW reader.
It should also be pointed out that, while we praise people or institutions that behave unconventionally to try to win when it works (e.g. Eliezer promoting AI safety by writing Harry Potter fanfiction, the Trump campaign), we don't really blame people or institutions that behave conventionally and lose. So going through the motions could be modeled purely by calculation of risk, at least in the political case: if you win, you win, but if you support an insurgency and lose, that's a much bigger deal than if you support the consensus and lose -- at least for the right definition of 'consensus'. But that can't be a complete account of it, because MetaMed.
↑ comment by MrMind · 2017-01-13T08:47:57.432Z · LW(p) · GW(p)
We evolved to avoid disasters where the probability of disaster X happening was unknowable but the outcome was fatal.
Most definitely not.
If the probability of something is unknowable, we die. We might avoid things that we don't know how to calculate exactly, so we buffer with loss aversion. But we most definitely do not have a grasp on ungraspable things.
There's a big difference between 'unknowable' and 'unknowable with precision'.
↑ comment by moridinamael · 2017-01-13T15:24:39.682Z · LW(p) · GW(p)
I think that's what he's trying to say. We evolved to be risk averse and specifically to avoid things that sounded really bad even if we didn't know how common they were.
I don't think he's saying that we evolved to avoid disasters that we couldn't possibly see coming. Because we clearly didn't.
Replies from: Tyrin
↑ comment by Tyrin · 2017-01-16T22:13:21.934Z · LW(p) · GW(p)
But it is not clear at all why stories do not approximate Bayesian updating. Stories do allow us to reach far into the void of space which cannot be mapped immediately from sensory data, but stories also mutate and get forgotten based on how useful they are, which at least resembles Bayesian updating. The question is whether this kind of filtering throws off the approximation so far that it is qualitatively a different computation.
Replies from: moridinamael
↑ comment by moridinamael · 2017-01-16T23:04:18.306Z · LW(p) · GW(p)
I don't think we can say that the mutation or loss of stories is very close to Bayesian updating. It may be a form of natural selection, and maybe sometimes the trait being selected for is "truth", but very often it's going to be something other than truth. Memes mutate in order to be more viral, and may lose truth on the way.
Stories about big, shocking, horrible events are more memetically contagious and will thus look more probable, if you're assuming that their memetic availability reflects their likelihood.
Replies from: Tyrin
↑ comment by Tyrin · 2017-01-17T14:56:03.506Z · LW(p) · GW(p)
Even if stories are selected for plausibility, truth, and whatever else leads most directly to maximal reward only once in a while, that would probably still be equivalent to Bayesian updating, just interfered with by an enormous amount of noise.
Natural selection is Bayesian updating too: http://math.ucr.edu/home/baez/information/information_geometry_8.html
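To make that correspondence concrete, here is a minimal sketch in Python (the function names and numbers are mine, not from the linked post): the discrete replicator update and Bayes' rule are the same computation, with fitness playing the role of the likelihood.

```python
import numpy as np

def bayes_update(prior, likelihood):
    # Bayes' rule: posterior is the prior reweighted by the likelihood,
    # then renormalized.
    posterior = prior * likelihood
    return posterior / posterior.sum()

def replicator_step(freqs, fitness):
    # Discrete replicator dynamics: next-generation frequencies are the
    # current frequencies reweighted by fitness, then renormalized.
    nxt = freqs * fitness
    return nxt / nxt.sum()

p = np.array([0.5, 0.3, 0.2])  # priors over hypotheses / type frequencies
w = np.array([1.0, 2.0, 0.5])  # likelihoods / fitnesses

# Term for term, the two updates coincide.
assert np.allclose(bayes_update(p, w), replicator_step(p, w))
```

The linked post develops this correspondence more rigorously, via relative entropy, for the continuous-time case.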
Replies from: moridinamael
↑ comment by moridinamael · 2017-01-17T15:17:52.917Z · LW(p) · GW(p)
I don't think you can justify using the word "equivalent" like that. I think maybe you mean "evolution and memetics are similar to Bayesian updating in some ways". That is not the same thing as "equivalence". It is not really helpful to take a very specific thing and say that it is "equivalent" to other very very different things, especially if such a comparison does not help you make any predictions.
My culture has a story in it that the Creator of the Universe is going to come down in the form of a man and destroy the world if people do too many things that are said to be bad by a certain book. There is no plausible way in which the process by which this meme has propagated can be explained by Bayesian updating on truth value.
Replies from: Tyrin
↑ comment by Tyrin · 2017-01-17T23:32:57.049Z · LW(p) · GW(p)
I didn't mean 'similar'. I meant that it is equivalent to Bayesian updating with a lot of noise. The great thing about recursive Bayesian state estimation is that it can recover from noise by processing more data. Because of this, noisy Bayes is a strict subset of noise-free Bayes, meaning pure rationality is basically noise-free Bayesian updating. That contradicts the linked article's claim that rationality is somehow more than that.
There is no plausible way in which the process by which this meme has propagated can be explained by Bayesian updating on truth value.
An approximate Bayesian algorithm can temporarily get stuck in local minima like that. Remember also that the underlying criterion for updating is not truth, but reward maximization. It just happens to be the case that truth is extremely useful for reward maximization. Evolution did not manage to structure our species in a way that makes it obvious for us how to balance social, aesthetic, …, near-term, long-term rewards to get a really good overall policy in our modern lives (or really in any human life beyond multiplying our genes in groups of people in the wilderness). Because of this, people get stuck all the time in conformity, envy, fear, etc., when there are actually ways of suppressing ancient reflexes and emotions to achieve much higher levels of overall and lasting happiness.
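To illustrate the "recover from noise by processing more data" point, here is a minimal sketch assuming a simple discretized hypothesis space (all names and numbers are illustrative): any single observation is nearly uninformative, but the recursively updated posterior still concentrates on the hidden value.

```python
import numpy as np

rng = np.random.default_rng(0)
true_bias = 0.7                      # hidden quantity behind the noisy data
grid = np.linspace(0.01, 0.99, 99)   # discretized hypothesis space
posterior = np.full(grid.shape, 1.0 / grid.size)  # uniform prior

for t in range(1, 1001):
    heads = rng.random() < true_bias           # one noisy observation
    likelihood = grid if heads else 1.0 - grid
    posterior = posterior * likelihood         # recursive Bayes update
    posterior = posterior / posterior.sum()    # renormalize
    if t in (10, 100, 1000):
        estimate = float((grid * posterior).sum())  # posterior mean
        print(f"{t} observations -> estimate {estimate:.3f}")
```

The printed estimates converge toward 0.7 as data accumulate, which is the sense in which accumulating evidence washes the noise out.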
Replies from: moridinamael
↑ comment by moridinamael · 2017-01-19T15:01:36.443Z · LW(p) · GW(p)
Let's taboo "equivalent".
In the limit of time and information, natural selection, memetic propagation, and Bayesian inference all converge on the same result. (Probably(?))
In reality, in observable timeframes, given realistic conditions, neither natural selection nor memetic propagation will converge on Bayesian inference; if you try to model evolution or memetic propagation with Bayesian inference, you will usually be badly wrong, and sometimes catastrophically so; if you expect to be able to extract something like a Bayes score by observing the movement of a meme or gene through a population, the numbers you extract will be badly inaccurate most of the time.
Both of the above are true. I think you are saying the first one, while I am focusing on the second one. Do you agree? If so, our disagreement is a boring semantic one.
Replies from: Tyrin
↑ comment by Tyrin · 2017-01-20T08:01:54.952Z · LW(p) · GW(p)
the numbers you extract will be badly inaccurate most of the time
As is the case with a myopic view of any Bayesian inference process that involves a lot of noise. The question is just whether rationality is about removing the noise, or whether it is about something else; whether "rationality is more than 'Bayesian updating'". I do not think we can answer this question very satisfyingly yet.
I tend to think what Cummings says is akin to saying something like: "Optimal evolution is not about adapting according to Bayes' rule, because look at just how complicated gene expression is! See, evolution works by stories encoded in G, A, C and T, and most of them get passed on even though they do not immediately help the individual!"
comment by chaosmage · 2017-01-13T17:32:28.388Z · LW(p) · GW(p)
This is a particularly instructive article, worth in-depth study. Thanks for posting it.
Replies from: The_Jaded_One
↑ comment by The_Jaded_One · 2017-01-13T17:38:48.473Z · LW(p) · GW(p)
Well, I did get it from Reddit SSC; tbh I was surprised that it wasn't here already.
Anyway, appreciated.
Replies from: 9eB1
comment by Elo · 2017-01-12T21:54:07.482Z · LW(p) · GW(p)
I am uneasy about this link being here because Brexit was politics. I am not removing it yet.
Replies from: James_Miller, moridinamael, username2, Gram_Stone, MrMind
↑ comment by James_Miller · 2017-01-13T00:12:53.784Z · LW(p) · GW(p)
Please keep it. "Politics is the mind-killer" mostly comes into play when debating which side is morally right, not when trying to figure out why one side won.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-01-13T04:12:05.668Z · LW(p) · GW(p)
It comes into play quite a bit when talking about why one side won as well, since you keep seeing people say, "We didn't win because we weren't faithful enough to our principles," when it is obvious from the beginning that political parties will tend to lose because they are too faithful to their principles, i.e. not centrist enough.
Replies from: gjm, James_Miller
↑ comment by gjm · 2017-01-13T13:13:26.553Z · LW(p) · GW(p)
That may be "obvious from the beginning" but it's far from clear to me that it's correct. Here are some reasons why.
- The appeal of a political party to a given voter is not simply a matter of computing some measure of similarity between its principles and the voter's. Some of the other things it depends on -- e.g., how trustworthy the party's people seem, whether the party succeeds in arousing enthusiasm rather than mere consent, whether the party's statements are clear and vivid enough to get through to the voter -- may well favour more extreme positions.
- To win elections, a party must not only get voters on side but also get them out of their houses and into the polling stations on election day. Again, this is a matter of enthusiasm as well as consent, and may favour more extreme positions.
- In many countries, political success is not a matter of a simple nation-wide majority vote. There are constituencies and electoral colleges and the like. This means that political success may depend on identifying particular spatially-correlated groups of people and appealing to them, and there is no guarantee that this looks anything like appealing to the nationwide median voter.
- When there are more than two candidates, or more than two parties, you can win by appealing to a reasonably-sized minority, and their preferences may be some way away from the "centre".
- When multiple issues are at play, you can't just arrange parties on a linear scale and ask where the centre is. Aiming for the centre on every issue may result in every voter finding your party mediocre and preferring another party that's extreme according to their highest-priority issue, and success may depend on finding a bunch of specific issues and adopting specific (perhaps "extreme") positions on them.
- It may be worth noting that one of Cummings's claims is exactly that looking at everything on a left/right axis and aiming for the centre is a big mistake and misunderstands what issues people are actually concerned about, and that many positions widely regarded as "extreme" in very different directions actually coexist in the minds of a large fraction of the electorate.
↑ comment by James_Miller · 2017-01-13T15:17:32.574Z · LW(p) · GW(p)
To test this, go to a hyper-partisan news service that holds political views you disagree with but which is also trying to appeal to high-IQ people. (The Weekly Standard would work if you are on the left.) You will find the website's policy analysis difficult to take, but you will probably agree with, or at least find reasonable, its analysis of why one side won or lost a particular political battle.
↑ comment by moridinamael · 2017-01-12T22:38:44.463Z · LW(p) · GW(p)
I feel that there is more than enough rationality-specific content that this link is appropriate. He talks about Superforecasters!
↑ comment by username2 · 2017-01-13T09:42:21.500Z · LW(p) · GW(p)
I am extremely uneasy about that being your basis for moderation, if you act on it. The fine article is explicitly about applying rationalist knowledge to effect real-world change. If that is not on topic, what are we doing here? Internet philosophy hour?
↑ comment by Gram_Stone · 2017-01-12T22:24:57.450Z · LW(p) · GW(p)
Genuine question: Did the Apolitical Guideline become an Apolitical Rule? Or have I always been mistaken about it being a guideline?
Replies from: Elo
↑ comment by Elo · 2017-01-12T22:37:45.277Z · LW(p) · GW(p)
Always a guideline. I am still uneasy about the link being here, and would prefer to make it clear, rather than be silent.
Replies from: Gram_Stone
↑ comment by Gram_Stone · 2017-01-12T22:49:57.269Z · LW(p) · GW(p)
Thanks for clarifying. It was easy for me to forget that as well as being a moderator, you're also just another user with a stake in what happens to LW.
comment by James_Miller · 2017-01-13T00:10:09.609Z · LW(p) · GW(p)
I suspect that in general big mistakes cause defeat much more often than excellent moves cause victory. There is some reason to suspect this is true from recent statistical analysis of human and computer decisions in chess.
I wonder how much of life outcome (after accounting for genetics and your parents' wealth) is determined by your mistakes.
Replies from: Viliam, Viliam
↑ comment by Viliam · 2017-01-13T13:24:39.548Z · LW(p) · GW(p)
I wonder how much of life outcome (after accounting for genetics and your parents' wealth) is determined by your mistakes.
Life is horribly imbalanced; small mistakes can cause insanely disproportionate damage. It takes literally a few seconds of time and really bad luck to get killed or injured forever. I know a few people who have serious health problems originating with "when I was a small kid, I was doing [a perfectly innocent activity all kids do all the time], and at some moment I fell down and something broke, and at first everyone thought it would heal okay, but since then at random moments I keep feeling horrible pain in [a body part], and it's been like this for decades, and doctors have no idea how to fix it properly". Or, while it can take only a few minutes to get insights like "eating healthy food and exercising regularly should become one of my top priorities, because it makes life longer and more pleasant", you still have to take everyday actions for months and years to actually achieve this. And then one unlucky fall may break your spine, and you may end up in a wheelchair forever.
One moment of depression is enough to commit suicide, but years of health care cannot cure cancer. Signing one bad contract can cost you a lot of money. It is easy to damage property, but more difficult to fix it. Etc. Even speaking of rationality, good ideas typically additionally require a lot of work, but bad ideas can ruin your life in a few minutes easily.
Sometimes there are opposite situations, for example one could spend years in an abusive relationship, and then end it in an afternoon. Or it may take a while to apply for a great job that requires skills you already happen to have. Making a good friend can significantly improve your life afterwards. -- But it still feels like these are rare exceptions, while the opportunities to ruin your life are there all the time, we just usually avoid them.
It could be interesting to look at one's own life, and try to classify things that had nontrivial impact, by two criteria: "good decision" vs "bad decision", and "one-time decision" vs "repeated decision". But there is a problem that "mistakes we didn't make" are quite invisible. For example, it would be easy to forget things like "not doing crime" or "not taking drugs" in the list of good repeated decisions, but it probably has a big impact. I am not making this list right now, because it would take too much time, but maybe I will do it later privately.
↑ comment by Viliam · 2017-01-13T11:18:27.643Z · LW(p) · GW(p)
Related: Debiasing as Non-Self-Destruction
Replies from: MrMind
It seems to me that how to be smart varies widely between professions. (...) Yet such concepts as "be willing to admit you lost", or "policy debates should not appear one-sided", or "plan to overcome your flaws instead of just confessing them", seem like they could apply to many professions. And all this advice is not so much about how to be extraordinarily clever, as, rather, how to not be stupid. Each profession has its own way to be clever, but their ways of not being stupid have much more in common. And while victors may prefer to attribute victory to their own virtue, my small knowledge of history suggests that far more battles have been lost by stupidity than won by genius.
Debiasing is mostly not about how to be extraordinarily clever, but about how to not be stupid. Its great successes are disasters that do not materialize, defeats that never happen, mistakes that no one sees because they are not made. Often you can't even be sure that something would have gone wrong if you had not tried to debias yourself. You don't always see the bullet that doesn't hit you.
The great victories of debiasing are exactly the lottery tickets we didn't buy - the hopes and dreams we kept in the real world, instead of diverting them into infinitesimal probabilities. The triumphs of debiasing are cults not joined; optimistic assumptions rejected during planning; time not wasted on blind alleys. It is the art of non-self-destruction. Admittedly, none of this is spectacular enough to make the evening news.
comment by The_Jaded_One · 2017-01-12T21:27:24.707Z · LW(p) · GW(p)
We had some technical problems with this linkpost; for some reason it started changing the link to point to itself instead of the article.
Please feel free to re-comment.