Posts

Three thoughts on Deplatforming 2021-01-14T12:40:01.710Z

Comments

Comment by betulaster (raman-malykhin) on Three thoughts on Deplatforming · 2021-02-01T06:14:50.093Z · LW · GW

What? You expected to be downvoted and you had the audacity to post anyway?

No, prior to your comment I had no idea that posts like this got straight-up downvoted. I knew it wasn't going to cause a furore of any sort (first-time post, political topic, not a lot of hard analysis and abstraction, not a lot of sources linked to), but I expected that posts like this get one or two upvotes, maybe a single comment, and nobody notices them.

 

No idea how to do that on Twitter (the amount of data there is insane), it just reminded me of a "cancel watch" for higher education.

I wonder how they're doing it. They do request that readers email them with lists of cases, but that's a prudent step whatever methodology you use - there will always be cases you haven't noticed. If they're just listing suggestions from readers and relying on their own networks and Twitter feeds, that's too bad: there's no good way to scale that up.

Comment by betulaster (raman-malykhin) on Three thoughts on Deplatforming · 2021-01-31T20:54:20.759Z · LW · GW

Congratulations on making your first Less Wrong post about a relatively current political topic and not being downvoted! You may be the first person in history who achieved this!

Hah, thanks. At the risk of stroking my ego one too many times - can I ask you to speculate on why that might be the case?
What I mean is - I'm sure what I wrote has some merit (I would've kept it to myself otherwise), but I expected this post to do about as well as comparable posts do on LW (first-time post, political topic, not a lot of hard analysis and abstraction, not a lot of sources linked to). Hearing that this isn't the case is surprising.

I agree with Ericf that the Americans you meet (as a non-American) are not a representative sample. In my opinion, there is still a chance your observation might be valid, only less strongly.

Possibly yes, I agree. But as I noted in my response to Ericf, some (American) decisionmaking seems to be driven by exactly the same sampling-bias error I made. Indeed, the Twitter letter to Dorsey that apparently spurred him to act on deplatforming Trump was reported as signed by 300 people - Twitter employed around 4,600 as of 2019. Cancel culture seems to follow the same pattern at least sometimes - Kevin Spacey definitely got tried in the court of public opinion (and boy did he lose) faster than any accusations made it into a court of law.
I wonder if it's the sampling bias again, though - that is, perhaps only the minority of cases where people actually got cancelled by the minority of politically active Americans gets reported. I guess to verify this, it would be useful to have a "cancel watch" that traces a large number of shitstorms on Twitter and sees which ones were followed by some real-world action. But that would be a lot of manual work. Any idea how this could be verified more elegantly?

I guess my question is whether the proper adjective for the "Activistocracy" should be "American" or rather "non-post-Soviet" or... what?

Yeah, that's a good one. It doesn't seem to me that Western Europe is like that, but I don't have good exposure to that culture. I think Hobbes had more influence on European politics, with European governments being a lot more socially oriented (public healthcare, education, transportation...), "big" and leviathanish compared to the US. The leviathan-ness is a very heavy factor in post-USSR politics - I wrote a pretty long comment on my model of it here. So that would leave a lot less space for such citizen activism in both cases.
Canada and the UK could be interesting cases to verify - Canadian and UK governments have somewhat more public initiatives, I think. Have there been any high-profile Canadian/British cancellations?

The problem is not normies coming to internet, but normies judging the entire internet.

I'm not convinced these are different things, to be honest, not in the American (non-post-Soviet?) case. When normies believe it is upon them to take a moral stand about everything they see and do, because their government won't, whatever they come to will be judged. The internet included.

Which feels uncomfortable, but it kinda makes sense: you cannot have something 100% user friendly and keep the public away.

I agree - this maps onto some of my ideas about that. From what I know, Tor has become somewhat useless for "truly" illegal activity (pretty much all drug trade in Russia happens through it, and drug-related imprisonment rates are insane anyway), but it might become "the next internet" by virtue of being hard enough to configure and navigate for the general public. But obviously, that frontier is ultimately gonna get colonized too - so yeah, possibly it's gonna be invite-only, or we're just going to be passing emails across heavily filtered mailing lists.

The "Bible Belt Valley" sounds interesting, but first they need their own credit card company, then their own ISPs, and only afterwards their web pages will be sustainable in long term

A kind-of-sustainable alternative to this seems to exist, and it usually involves ISPs in countries that are ideologically opposed to whatever place is cancelling you.
Consider The Daily Stormer, a neo-nazi blog that came under fire after Unite the Right. After hopping registrars and hosts for a year or so, they settled with a Chinese registrar/ISP and have stayed up since.
Parler seems to be following suit in some sense: it has moved to a Russian host. Reuters reports that whatever is left of 8chan, 8kun, is also hosted there.
Obviously, this only works if whatever free speech you disseminate on the platform doesn't affect the internal politics of the country you're hosting in, so that's a factor. Here's The Daily Stormer's Andrew Anglin acknowledging pretty much that:


After the CDN ban and the registrar ban – which are both coming soon for Parler and a bunch of other MAGA related sites – they are going to be in “CHINA PLZ HALP” territory.

That was easy enough for me, as I’ve always been relatively pro-PRC, or at least not anti-PRC. I think these sites that have been promoting all of these idiotic theories about China being behind the problems in the US are going to be met with a lot less friendliness than I was.

 

Which leads to the question: what are the forces that drive Visa and MasterCard to "the right side of history"?

Here's what seems interesting. Both Eranet, which hosts the Daily Stormer, and DDoS-Guard, which hosts 8kun and Parler, are established "clear" companies, and while I don't know for sure, I imagine they can work with mainstream payment providers. At the very least, Visa and Mastercard work very well in China and Russia in general, so it's not like the ideological opposition would cut you off per se.
I think it could be the absence of an opposing voice. There are enough people to coordinate in threatening to stop using their Visas or MasterCards if [insert badwrong company] is allowed to use them, but there aren't enough people to threaten the same if [same company] is deplatformed. And yes, that potentially implies a vicious circle - to establish a platform to coordinate on, you need a platform to coordinate on. I think the kinda-sustainable platforms I mentioned above - Tor, or hosting in ideologically opposed countries - could serve as ways to break that circle.

Comment by betulaster (raman-malykhin) on Three thoughts on Deplatforming · 2021-01-16T07:31:41.607Z · LW · GW

In the end there was a tradeoff. Democracies have constant struggle, as power is prevented from being entrenched.

This is interesting. I'm not familiar with Montesquieu, but does he come from any political tradition that was significantly opposed to Hobbism and socially contracted "leviathans"? Because this sounds like something Hobbes would abhor - almost like a gigantic middle finger to Hobbes.

Comment by betulaster (raman-malykhin) on Three thoughts on Deplatforming · 2021-01-16T07:21:34.107Z · LW · GW

Actually, good point! I do agree I'm probably exposed only to an overrepresented fraction.

But at the same time, it looks to me like a lot of decision-making processes are based on the same sampling-bias error. Kevin Spacey lost pretty much every acting contract he had the moment he got cancelled, and Harvey Weinstein only a bit slower.

Comment by betulaster (raman-malykhin) on Is corruption a valuable antidote to overregulation? · 2020-11-09T09:55:08.508Z · LW · GW

There is a Russian saying (it's frequently ascribed to Saltykov-Shchedrin, and sounds to me like something he'd write, but I'm not able to properly source this) to precisely that effect... but not in the way you imagine. Roughly translated, it goes like this: "In Russia, the severity of the laws is compensated by the non-necessity to obey them."

(I don't have too much time for this, so apologies for shoddy sources down in the answer. Please let me know if you'd like more proper ones, I'll be sure to come back to that later.)

My personal model of Russian corruption maps very well onto Banfield's amoral familism. In a nutshell, the model is that every single individual is morally incentivized to increase the utility of his small social ingroup (usually his family, but it may also be a group of otherwise associated friends - part of Vladimir Putin's inner circle can be traced back to a housing co-op in the 90s), to the detriment of both his own and society's total utility.

One addition from my own anecdotal experience, which I think Banfield never made, is that an amorally familistic society needs a tyrannical/absolutist ruler to oversee it, lest it collapse into a Hobbesian war of all against all - or at least, that is the usual Russian perspective on it, IMO. The tyrant sets the boundaries that make sure the ingroups don't slaughter each other completely in the struggle for utility, and enforces them with terrible, ruthless power. He is, in that sense, elevated above the earthly struggle for utility by that duty - which is why it's not really possible to blame him for any social woes; instead the blame falls on those who carry out his decisions (and who are "earthly" and subject to familism like all others). There is another, IMO popular, saying to that effect: "the tsar is good, the boyars are bad".

(Ostensibly, this contradicts the saying from the very first paragraph - wasn't it possible to get around draconian laws through corruption? Yes, it was. The idea is that while the tsar has his own directly-controlled enforcers - the oprichnina or the Rosgvardiya - and god help you if you cross the law and they notice, he also has a vastly more expansive hierarchy of appointed administrators and law enforcers (whom the average member of society is vastly more likely to encounter in his affairs), who are much more human, hence amorally familistic, hence corruptible.)

Regarding how this maps onto progress, it would probably be useful to consider Acemoglu/Robinson's "Why Nations Fail", keeping in mind two things.
First, Russia is not a progressive country by any means, empirically. 
Second, the idea that economic growth equals broadly understood progress is IMO defensible, but not trivial. But since Acemoglu and Robinson are institutionalist growth economists who see technological/scientific progress as the driver of economic growth, it's still not useless.

The WNF take on progress is that people are incentivized to create/adopt disruptive technologies that drive progress only if they have faith in the social/political institutions that will ensure they can extract utility (for themselves, or whomever they care about) from doing so. If they instead know that the institutions care about preserving pre-existing mono/oligopolies, any disruption will be quashed, and progress will be sporadic and unsustainable in the long run.

The Russian case appears to me to be almost that - the ruler is more concerned with maintaining social order than with maintaining the economic status quo (although those two may be connected), but since any large economic entity (like a large corporation running on a technology that is about to get disrupted) is by default more powerful than its disruptive challenger, it works out the same. Amoral familism begets a ruler concerned with enforcing boundaries and the status quo, and disruption just goes against the grain of both.

To conclude, yes, corruption does get used as a way out of overregulation, but not necessarily to the benefit of long-term progress.

(Acemoglu and Robinson also make a more direct connection by claiming that you necessarily need a liberal democracy to establish institutions conducive to progress and disruption. I'm hesitant about accepting that head-on, but if you choose to agree with it, the case is even easier - amoral familism sees democracy as simply ridiculous, and liberalism as weak. If I'm amorally familistic, I only care about satisfying the desires of my ingroup - why would I ever want to live in a society that equates my desires with those of outgroups? And if I choose to have a ruler who enforces social order and prevents an all-out collapse, how would he be able to enforce it with terrible punishments if he's limited by some "freedoms"?)

P.S. I have to note that Acemoglu and Robinson are concerned with long-term sustainable growth only. In the short term, however, it would probably be a lot easier for the ruler, or for currently endowed players (like oligopolies/oligarchs) to get things done by getting around red tape. 

Comment by betulaster (raman-malykhin) on Nuclear war is unlikely to cause human extinction · 2020-11-07T09:34:28.586Z · LW · GW

One thing that seems important to note: nuclear warfare need not occur in a vacuum. If countries possessing nuclear weapons are trading all-out strikes, as in your model, they probably are in a state of (World?) war already, and either have fought with other weapons prior to the nuclear exchange, or plan to continue to do so after it. This may include use of non-nuclear weapons with high collateral damage, like chemical or biological agents, or saturation bombardment targeting high-population areas. I wonder if that skews the assessment of damage in any meaningful way.

Comment by betulaster (raman-malykhin) on Reasons that discussions might feel hopeless · 2020-10-28T23:20:40.781Z · LW · GW

This is not really erisology in any way, but I think specific topics of discussion/interactions with specific people may very well become Ugh Fielded if you have an initially bad experience.

Comment by betulaster (raman-malykhin) on Some criticisms of polling · 2020-10-28T13:40:57.771Z · LW · GW

Since you address "how likely meeting a certain politically charged event would be", I assume your question is focussed on what I've called "Polling 2", which concerns itself with predicting future events. 

Yes, you're right, and I should have been more clear - thanks for pointing that out.

The best way to put the matter into quantitative terms may be to ask the interviewee what odds he would give in a bet on the event occurring

I don't know if I'm convinced that would work. I think that most people fall into two camps regarding betting odds. Camp A is not familiar with probability theory/calculus and doesn't think in probabilistic terms in their daily life - they are only familiar with bets and odds as a rhetorical device ("the odds were stacked against him", "against all odds", etc). Camp B are people who bet/gamble as a pastime frequently, and are actually familiar with betting odds as an operable concept. 
If you ask an A about their odds on an event related to a cause they feel strongly about, they will default to their understanding of odds as a rhetorical device and signal allegiance instead of giving you a usable estimate. 
If you ask a B about their odds on the same event, they will start thinking about it in monetary terms, since habitual gamblers usually bet and gamble money. But, as you point out, putting a price on faith/belief/allegiance is seen as an immoral/dishonorable act, and would cause even more incentive to allegiance-signal instead of truly estimating probabilities.
So this approach only works either for surveying people with good rationality/probability skills, or for surveying people about events they don't have strong feelings about.
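For camp B, at least, the translation between stated odds and a probability estimate is mechanical. A minimal sketch of that conversion (the function name is mine, purely illustrative):

```python
def implied_probability(odds_for: float, odds_against: float) -> float:
    """Convert stated betting odds into the implied probability of the
    event occurring. '3 to 1 against' means odds_for=1, odds_against=3."""
    return odds_for / (odds_for + odds_against)

# Even odds (1:1) imply a 50% chance.
print(implied_probability(1, 1))   # 0.5
# "3 to 1 against" implies a 25% chance.
print(implied_probability(1, 3))   # 0.25
```

The hard part, as argued above, isn't the arithmetic - it's whether the interviewee's stated odds reflect a genuine estimate rather than allegiance-signaling.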

However, there are two pitfalls to this argument, and that's why I'm not stating this with complete certainty. 
First, this is still speculation - I have no solid data on how familiar an aggregate person (I'm not sure "average" is a good term here, given that mathematicians probably understand this concept very well while being relatively scarce) is with the concept of betting odds. Actually, do tell if you know of any survey data that would allow verifying this - a gauge of how familiar people are with a given concept seems like data useful enough to exist.
Second, this may be cultural. I'm not American and have never stayed in America long-term - and based on you mentioning baseball and the time of your reply, I assume that you are - so potentially the concept of betting is more ingrained into the culture there, and I'm just defaulting to my prior knowledge.

The rather convoluted question I came up with to assess an interviewee's resistance to change of intent has the disadvantage of generating a discrete (non-continuous) answer, and I worry it might also confuse some interviewees

Yup. I didn't see the point in highlighting that, since you mention yourself that the measure is imperfect, but this echoes my concern about betting. At the risk of sounding like an intellectualist snob, I think even the probabilistic concepts that most lesswrongers would see as basic are somewhat hard for the general public to imagine and operate with, save as rhetorical devices.

Comment by betulaster (raman-malykhin) on Some criticisms of polling · 2020-10-26T21:08:25.135Z · LW · GW

I don't have good data to back this up, but I have a feeling that people are thinking in more binary terms than you expect. More specifically, I conjecture that if you were to ask someone for how likely meeting a certain politically charged event would be, they would parse your question as a binary one and answer either "almost certainly" or "very unlikely" - and when pressed for a number, would give you either between 90-100%, or 0-10% respectively. 

Comment by betulaster (raman-malykhin) on Should we use qualifiers in speech? · 2020-10-23T15:11:10.525Z · LW · GW

I don't have a full answer, but here's what seems important to consider - in my experience, the baseline for the level of confidence in speech that is associated with competence and authority is a lot lower in intellectual circles like LessWrong, compared to the general public. 

This is because exposure to rationality and science usually impresses on someone that making mistakes is "fine" and an unavoidable component of learning, and that while science has made very impressive progress, there is still a lot to learn and understand about the world. The real world and social opinion, on the other hand, usually associate mistakes very closely with failure and the ensuing moral penalties and lowered status.

Based on that, if you rely a lot on qualifiers while speaking to someone who's not as exposed to or interested in intellectual thought, they may write you off as confused or unsure - they will expect "smarter" people to give them definite verdicts. So if you're trying to socially maneuver someone into agreeing with your reasoning based on competence, forgoing qualifiers is probably a good idea.

Comment by betulaster (raman-malykhin) on Stupid Questions October 2020 · 2020-10-21T14:35:31.828Z · LW · GW

How do people read LessWrong? I subscribe to the RSS feed of the front page, but that tends to be suboptimal, as some posts aren't that well-aligned with my interests or are questions/discussion starters as opposed to being mid/longform reads that I'd mostly want to read LW for.

Comment by betulaster (raman-malykhin) on What are some beautiful, rationalist artworks? · 2020-10-17T10:45:54.932Z · LW · GW

Illustration by Michael Haddad for Wired. It was originally commissioned for an article about biohackers, but I find that it captures a spirit of agency and self-improvement that is well-aligned with some rationalist values.

Comment by betulaster (raman-malykhin) on What are some beautiful, rationalist artworks? · 2020-10-17T10:43:41.886Z · LW · GW
Comment by betulaster (raman-malykhin) on Is Stupidity Expanding? Some Hypotheses. · 2020-10-16T00:34:07.653Z · LW · GW

I may attempt a more comprehensive analysis to suggest some tests later (although I'm not sure that would be very successful - my rationality skills feel nascent at best), but from a superficial read, it seems to me that points A13 and B10 are essentially the same - both deal with stupidity becoming more widespread as a matter of consumerist/capitalist politics/market forces. That could be opposed by noticing that B10 deals with actually smart individuals who pretend to be stupid to reap the benefits, while A13 deals with actually stupid individuals - which in turn is opposable by the classic Ben Kenobi argument. But at the very least, if A13 deals with the actions of level-0 actors, B10 would then be the level-1 response to that.

Comment by raman-malykhin on [deleted post] 2020-09-27T23:07:15.628Z

Of course, you have to modulate that by the possibility that allowing people to live off their UBI or blow it on frivolous spending will cancel out those good effects. That question is beyond my pay grade, and I suspect nobody really knows

This makes me think of something. Can't we look at what people who experienced windfall gains spent their newfound money on? Lottery winners seem like an easy-enough sample to obtain, although not an unproblematic one - it takes a certain kind of person to participate in a lottery in the first place. But if we can break that sample down by demographic factors - cultural background, education level, or something like that - maybe a pattern emerges that tells us how many of those people, and which ones, will go "fish", or use the money to propel themselves out of the poverty trap, or blow it on frivolous spending.
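The breakdown itself is simple once you have the sample. A minimal sketch of the analysis, where every record, field name, and value is made up purely for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical survey of lottery winners; "outcome" is how the winner
# mainly used the money. All rows here are invented placeholders.
winners = [
    {"education": "secondary", "outcome": "frivolous"},
    {"education": "college",   "outcome": "invested"},
    {"education": "college",   "outcome": "invested"},
    {"education": "secondary", "outcome": "frivolous"},
    {"education": "graduate",  "outcome": "invested"},
]

def outcome_shares(records, factor):
    """Share of each spending outcome within each level of a demographic
    factor - the kind of emergent pattern described above."""
    counts = defaultdict(Counter)
    for record in records:
        counts[record[factor]][record["outcome"]] += 1
    return {
        level: {out: n / sum(c.values()) for out, n in c.items()}
        for level, c in counts.items()
    }

print(outcome_shares(winners, "education"))
```

The hard part, of course, is not this arithmetic but collecting an unbiased sample and choosing which demographic factors to condition on.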

Comment by betulaster (raman-malykhin) on Up close and personal with the world · 2020-09-25T13:58:16.921Z · LW · GW

This is interesting. I wonder how this would apply to the literature on investing mistakes (esp. amateur investors like people who get burned on Robinhood). 

If your idea is correct, IMO people must then be thinking about investing money using priors, concepts, and feelings from their own spending experience. Maybe some patterns of systematic bias/mistakes associated with trying to use the "feeling for what N dollars buys and feels like to spend" can be teased out.

Comment by betulaster (raman-malykhin) on The Axiological Treadmill · 2020-09-20T09:59:31.891Z · LW · GW

Eventually - sure. But for that eventuality to take place, the "electrical shock tyranny" would have to be more resilient than any political faction we've known of and persist for thousands of years. I doubt that would be possible.

Comment by betulaster (raman-malykhin) on The Axiological Treadmill · 2020-09-18T05:04:27.754Z · LW · GW

Sorry if I wasn't clear enough. My critique refers to your point that scenarios where humans evolve to like a dystopia are not applicable because, if they were, suffering should be a rare occurrence - if I understand you correctly, you're stating that if we could evolve to like dystopias, by this point in time we would have evolved to either avoid or like any source of suffering. My counterpoint is that there is a massive subset of sources of suffering that do not affect evolution in any way, because they are too transient to exert any serious selection pressure.

Comment by betulaster (raman-malykhin) on The Axiological Treadmill · 2020-09-16T20:26:02.292Z · LW · GW
You could perhaps engineer scenarios where humans will genuinely evolve to like a dystopia

I think this kind of misrepresents the scale on which evolution happens - it's not one generation or two, it's hundreds and thousands, and it has taken relatively good care of the sources of suffering that are fundamental enough to persist and keep the selection pressure on across that time frame - we're pretty good at not eating things that are toxic, at breeding, at avoiding predators, and so on. The problem with evolution is that a significant number of sources of suffering are persistent enough to have a detrimental impact on an individual's life, but transient enough not to affect selection across generations.

Comment by betulaster (raman-malykhin) on nostalgebraist: Recursive Goodhart's Law · 2020-08-29T04:28:08.794Z · LW · GW

This post reminds me of an insight from one of my uni professors.

Early on at university, I was very frustrated that the skills taught to us did not seem immediately applicable to the real world. That frustration was strong enough to snuff out most of the interest I had in studying genuinely (that is, in truly understanding and internalizing the concepts taught to us). Still, studying was expensive, dropping out was not an option, and I had to pass exams. So very early on I started to game the system, in what seemed to me a classic instance of Goodhart: test banks were widely circulated among students, and for the classes with no test banks, there were past exams, which you could go through, trace out some kind of pattern for which topics and kinds of problems the prof puts on exams, and focus only on studying those. I didn't know it was called "Goodhart" back then, but the significance of this was not lost on me - I felt that by pivoting away from learning subjects and towards learning to pass exams in subjects, I was intellectually cheating. Sure, I was not hiding crib sheets in my sleeves or going to the restroom to look something up on my phone, but it was still gaming the system.

Later on, when I got rather friendly with one of my profs, and extremely worn down by pressures from my probability calculus course, I admitted to him that this was what I was doing, that I felt guilty, and didn't feel able to pass any other way and felt like a fake. He said something to the effect of "Do you think we don't know this? Most students study this way, and that's fine. The characteristic of a well-structured exam isn't that it does not allow cheating, it's that it only allows cheating that is intelligent enough that a successful cheater would have been able to pass fairly."

What he said was essentially a claim that a sufficiently high-quality proxy resists Goodhart's Law. I think this might be relevant to the case you're dealing with here as well. Your "true" global optimum probably is a proxy, but if it's a well-chosen one, it need not be vulnerable to Goodhart.

Comment by betulaster (raman-malykhin) on Inoculating against Psychedelic Woo · 2020-08-21T18:46:24.256Z · LW · GW

I understand that this may be well outside the scope of your writing, but still - any chance you could actually post some epistemic defense decks for Anki? Or are there any good ones already available?

(Apologies if the question is stupid, I'm somewhat new to LW)

Comment by betulaster (raman-malykhin) on Why haven't we celebrated any major achievements lately? · 2020-08-18T11:54:01.746Z · LW · GW

Disclaimer: this comment includes a lot of speculation on philosophy and art movements that I myself don't have an in-depth understanding of. Please take this with a grain of salt. If anyone reading this understands the matter better and sees me saying BS, please correct me.

I think one thing that can be helpful to examine is postmodernism. As Jean-François Lyotard originally described it in the late 70s, it is "incredulity towards metanarratives". For Lyotard this meant rejecting the idea that the world is described, or describable, by some unified model. He criticized both capitalists and Marxists - the two sides of the big ideological struggle of his time - for asserting that there is a single, universally true vision of society, money, power, and how to handle them. He also criticized science, which he said had turned from "truly producing knowledge" to "performativity" and churning out self-contradictory results, which to him meant that the "metanarrative" of science explaining the world was fake and discredited.

Now, Lyotard later rejected some of these ideas and admitted he did not have a very good understanding of the sciences he was trying to criticize, but his work was influential and the notion of skepticism towards big universal narratives (including scientism) took hold in the French intellectual sphere and birthed the entire movement of postmodernism.

These are things I'm relatively (see disclaimer) confident in. From here, we have to make two assumptions to get somewhere relevant to your question. First, assume that the portrayal of an event in art influences its public perception, including whether the event gets celebrated (I'm quite confident in this assumption). Second, assume Lyotard's ideas influenced not only philosophy but also art. (I'm less confident in this - in fact, Lyotard borrowed the term "postmodernism" from art critics, who were using it before him, and I'm not familiar with anything about postmodernism in art that would scream rejection of science and technology - aside, maybe, from the fact that postmodern artists are broadly understood as rejecting modern ones, and Bauhaus, an art movement obsessed with glorifying technology, is broadly understood as modernist.)

If these assumptions hold, we have something like an answer to your question - if the artist starts with the abstract notion that there is no universal rhyme or reason to the world that humans could discover, they will not create works that glorify discovery or science, and thus will not spur public celebration.

Comment by betulaster (raman-malykhin) on Generalized Efficient Markets in Political Power · 2020-08-14T21:13:52.369Z · LW · GW

Thanks for the reply and sorry I couldn't get to this for some time! Hope you're still interested in the discussion.

I expect that politics in most places, and US Congressional politics especially, is usually much more heavily focused on special interests than the overall media narrative would suggest

This is really interesting, and you probably have a good point. Do you think there's a more reliable way (for an outsider like myself, who's not able to, I dunno, go ask people in a dive bar what they think) to get the lay of the political land at a particular point in space (and time)? Maybe some centralized kind of poll repository?


Every media outlet in the country (including 538) wanted to run stories about how race was super-important to the election, because those stories got tons of clicks, but that's very different from actually playing a role.

On a side note, I can imagine this kind of perspective, when taken to an unmitigated extreme, leading to a very Cartesian-demon view of the world. Most people who publish their thoughts are incentivized to make you, the reader, like or be interested in them. Mass media obviously so; bloggers, analysts, or think tanks less obviously so, but still. If, when faced with the choice of writing about (a) things that are real but dull vs. (b) things that are not real but get clicks, no one has an incentive to do (a), how do you form a view of the world?

(Had I not known about publish-or-perish and read Gelman/Falkovich on p-hacking, I myself would give the traditionally cartesian answer of "by reading scientific papers in peer-reviewed journals", but... yeah).


So we just ignore the gullible people, and apply the discussion from the post to everybody else.

I think we've made an important move in the argument here - we've started to introduce the possibility that voters differ in whether they believe the lies/bullshit or not. And if we do - that is, if we allow the voter to consider some of the politician's commitments to a future policy Schelling point not genuine - we also open the possibility of the voter speculating on what the politician's true policies are.

Say Alice runs for president on a conservative, jobs-centric platform, commits to outlaw work visas and expel all foreign workers, wins the race, and says she's working hard to achieve that goal. Bob is a total supporter: he's sure she's doing exactly that and thinks deportations are only a couple of days away. Charlie is skeptical of the promise, because it sounded very radical and campaign-y; he thinks she'll probably cut work visas but won't be able to expel foreigners already in the country. Dave agrees with Charlie that the promise was radical and won't be followed through on fully, but disagrees on what will actually happen - he thinks Alice will be able to expel the foreigners already present, but will never risk the political turmoil of removing work visas. Erin is a total skeptic and thinks Alice is merely exploiting the voters and is actually doing nothing about the foreigners. Finally, Frank is a conspiracy theorist who thinks Alice is secretly working with a cabal of globalists to bring even more foreigners in (maybe even illegally!) while bullshitting him.

All of these people have different Schelling points! If Zack, a foreigner, asks Bob to loan him money, Bob is going to refuse, because he thinks Zack will be kicked out of the country tomorrow and he's not getting his money back. If he asks Erin, she's likely to agree, because she doesn't believe Zack is going anywhere.

Now, sure, there are only so many ways to interpret a single campaign promise, and there are bound to be groups within the voter base that agree on what Alice will actually do - the Schelling point will work for them. But since Alice is incentivized to make a lot of focused-benefit-diffuse-cost promises, voters who agree on what her actions on one policy will be may disagree on what her actions on a different policy will be. So... when nobody agrees on where the Schelling point is, does it, for all intents and purposes, exist?

Comment by betulaster (raman-malykhin) on My paper was signalling the whole time - Robin Hanson wins again · 2020-08-14T11:58:15.471Z · LW · GW

So... it's possible that there is something about Middle Eastern politics that I don't understand, and it would be cool if you could clarify. If I understand you correctly, you write that farms in the South are owned by rich people. At the same time, you write that farms in the North are somehow connected to the ruling coalition, and because of this the government had to signal loyalty to them.

I was under the impression that in monarchic/autocratic countries it was near-impossible to be rich while not being connected to the ruling group (= not being the kind of agent the ruling group would need to signal loyalty to). The farmers in the South seem to contradict that. How does this work?

Comment by betulaster (raman-malykhin) on Generalized Efficient Markets in Political Power · 2020-08-04T01:21:33.847Z · LW · GW

I'm probably missing something obvious, but I don't trivially see how this

Interestingly, this suggests that a leader can get high value from a group whose preferences are orthogonal to their own; pursue power in groups which care about different things than you!

follows from this

A leader’s power is high when group members all want to coordinate their choices, but care much less about which choice is made, so long as everyone “matches”. Then the leader can just choose anything they please, and everyone will go along with it.

Could you please elaborate?


Also, I have an outsider's view of American (or, indeed, Western in general) politics, so I can be wrong, but I think an argument from empirics could be made against this:

It’s special-interest politics: look for policies with focused benefits and diffuse costs. Pile many such policies together, and you have a winning coalition.

At least in the two most recent American elections (2016 and then the 2018 midterms), it seems like people were very much not racing for the most focused benefits and most diffuse costs, but rather for the most efficient way to galvanize their voters, cost be damned. Think of the wall on the Mexican border - it would probably be exorbitantly expensive, including to those who voted for it, but it was a very powerful symbol that people who felt strongly about the issue could rally behind.

538 here do a kind of literature review - and find, amongst other things, that

racial attitudes mattered more in 2016 than in any recent election — even 2008, when the presence of an African-American candidate shaped the political conversation.

Unless I misunderstand the idea, I don't think issues of race have a narrowly focused scope of benefits, or costs diffuse enough to go unnoticed by other voters.


I also think this point

Would-be leaders make promises: they precommit to certain policies, thereby cutting off certain options if they win (i.e. sacrificing potential power), but gaining more support for their Schelling point in the process.

makes an assumption that voters are more-or-less perfectly informed about what the Schelling point (the policies and laws) actually is. What if a leader could get elected by pre-committing to certain policies, but then not actually act on them, while managing to convince the voters that they are, in fact, doing their best to implement those policies but are failing for some (probably not very falsifiable) reason? Or does the model already support this in a way I'm not noticing?