Against "Context-Free Integrity"

post by Ben Pace (Benito) · 2021-04-14T08:20:44.368Z · LW · GW · 28 comments

Sometimes when I talk to people about how to be a strong rationalist, I get the impression they are making a specific error.

The error looks like this: they think that good thinking is good thinking irrespective of environment [LW · GW]. If they just learn to avoid rationalization [LW · GW] and setting the bottom-line [LW · GW] first, then they will have true beliefs about their environment, and if there's something that's true and well-evidenced, they will come to believe it in time.

Let me give an extreme example.

Consider what a thoughtful person today thinks of a place like the Soviet Union under Stalin. This was a nation with evil running through its streets. People were disappeared in the night, whole communities starved to death, the information sources were controlled by the powerful, and many other horrendous things happened every day.

Consider what a strong rationalist would have been like in such a place, if they were to succeed at keeping sane. 

(In reality a strong rationalist would have found their way out of such places [LW · GW], but let us assume they lived there and couldn't escape [LW · GW].)

I think such a person would be deeply paranoid (at least Mad-Eye Moody level), understanding that the majority of their world was playing power games and trying to control them. They'd spend perhaps the majority of their cognition understanding the traps around them (e.g. what games they were being asked to play by their bosses, what sorts of comments their friends would report them for, etc.) and trying to build some space with enough slack to occasionally think straight about the narratives they had to live out every day. It's kind of like living in The Truman Show, where everyone around you is living out a narrative and punishes or disbelieves you when you deviate. (Except with much worse consequences than what happened in that show.)

Perhaps this is too obvious to need elaborating on, but the cognition of a rationalist today who aims to come to true beliefs about the Soviet Union, and the cognition of a rationalist in the Soviet Union who aims to come to true beliefs about the Soviet Union, are not the same. They're massively different. The latter is operating in an environment where basically every force of power around them is trying to distort their beliefs on that particular topic – their friends, their coworkers, the news, the police, the government, the rest of the world.

(I mean, certainly there are still many distortionary forces today concerning that era. I'm sure the standard history books are altered in many ways, and for reasons novel to our era, but I think qualitatively there are some pretty big differences.)

No, coming to true beliefs about your current environment, especially if it is hostile, is very different from coming to true beliefs about many other subjects like mathematics or physics. Being inside the environment you're trying to understand can be especially toxic to your epistemics, depending on the properties of that environment and what relationship you have to it.

By analogy, I sometimes feel like the person I'm talking to thinks that if they just practice enough Fermi estimates [? · GW] and calibration training [? · GW] and notice rationalization in themselves and practice the principle of charity, then they'll probably have a pretty good understanding of the environment they live in and be able to take positive, directed action in it, even if they don't think carefully about the political forces acting upon them.

And man, that feels kinda naive to me.

Here's a related claim: you cannot come to true beliefs about which actions are good to take in your environment without good accounting and good record-keeping.

Suppose you're in a company that has an accounting department that tells you who is spending money and how. This is great: you can reward or punish people for things like being more or less cost-effective.

But suppose you learn that one of the accounting people is undercounting the expenses of their spouse in the company. Okay, you need to track that. (Assume you can't fire them for political reasons.) Suppose another person is randomly miscounting expenses depending on which country the money is being spent in. Okay, you need to track that. Suppose some people are filing personal expenses as money they spent supporting the client. Okay, now you need to distrust certain people's reports more.

At some point, to have accurate beliefs here, it is again not sufficient to avoid rationalization and be charitable and be calibrated. You need to build a whole accounting system for yourself to track reality.

[A]s each sheep passes out of the enclosure, I drop a pebble into a bucket nailed up next to the door. In the afternoon, as each returning sheep passes by, I take one pebble out of the bucket. When there are no pebbles left in the bucket, I can stop searching and turn in for the night. It is a brilliant notion. It will revolutionize shepherding.

The Simple Truth [LW · GW]
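
To make the "build your own accounting system for yourself" idea concrete, here is a minimal sketch, in Python, of what privately correcting the books for the three distortions above might look like. The reporter names, countries, categories, and correction factors are all hypothetical, made up purely for illustration; this is not a claim about how any real system does its accounting.

```python
from dataclasses import dataclass

@dataclass
class ExpenseReport:
    reporter: str   # the accountant who filed the report
    spender: str    # whose spending it describes
    country: str
    category: str   # e.g. "client_support", "travel"
    amount: float   # the amount as officially reported

# Your private model of the known distortions (all names and numbers are made up).
SPOUSE_OF = {"accountant_a": "spender_b"}    # accountant_a undercounts spender_b's expenses
COUNTRY_ERROR = {"freedonia": 1.15}          # reports from this country run ~15% low
PADS_CLIENT_SUPPORT = {"spender_c"}          # files personal spending as client support

def corrected_amount(report: ExpenseReport) -> float:
    """Apply your model of the known distortions to estimate the true figure."""
    amount = report.amount
    # Distortion 1: an accountant undercounting their spouse's expenses.
    if SPOUSE_OF.get(report.reporter) == report.spender:
        amount *= 1.30                       # assume roughly a 30% undercount
    # Distortion 2: systematic miscounting depending on country.
    amount *= COUNTRY_ERROR.get(report.country, 1.0)
    # Distortion 3: discount inflated "client support" filings from known offenders.
    if report.spender in PADS_CLIENT_SUPPORT and report.category == "client_support":
        amount *= 0.70                       # treat ~30% of it as personal spending
    return amount

def true_spend_estimate(reports: list[ExpenseReport]) -> float:
    """Your estimate of real spending, as opposed to what the official books say."""
    return sum(corrected_amount(r) for r in reports)
```

The point isn't the specific numbers; it's that the corrections live in your own model, like the pebbles in the bucket, while the official ledger goes on reporting the distorted figures.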

I sometimes see quite thoughtful and broadly moral people interact with systems I know to have many power games going on internally. Moral Mazes [? · GW], to one extent or another. The system outputs arguments and trades, and the person sometimes engages with the arguments and sometimes engages in the trades, and thinks things are going well. But I feel like, if they knew the true internal accounting mechanisms in that entity, then they would be notably more disgusted with the parts of that system they interacted with.

(Imagine someone reading a scientific paper on priming [? · GW], and seeking deep wisdom in how science works from the paper, and then reading about the way science rewards replications. [LW · GW])

Again, I occasionally talk to such a person, and they can't "see" anything wrong with the system, and if they introspect they don't find a trace of any rationalization local to the situation. And if they've practiced their calibration and Fermis and charity, they think they've probably come to true beliefs and should expect that their behavior was net positive for the world. And yet there are instances where I feel it clearly wasn't.

Sometimes I try to tell the people what I can see, and that doesn't always go well. I'm not sure why. Sometimes they have a low prior on that level of terrible accounting, so they don't believe me (slash think it's more likely that I'm attempting to deceive them). This is the overly-naive mistake.

More often I think they're just not that interested in building that detailed a personal accounting system for something they only engage with some of the time and that isn't hurting them very much. It's more work than it's worth to them, so they get kind of tired of talking about it. They'd rather believe the things around them are pretty good rather than kinda evil. Evil means accounting, and accounting is boooring. This is the apathetic mistake.

Anyway. All this is me trying to point to an assumption that I suspect some people make, an assumption I call "Context-Free Integrity", where someone believes they can interact with complex systems, and as long as they themselves are good and pure, their results will be good and pure. But I think you have to actually build your own models of the internals of the complex systems before you can assess this claim.

...writing that down, I notice it's too strong. Eliezer recommends empirical tests [? · GW], and I think you can get a broad overall sense of the morality of a system with much less cost than something like "build a full-scale replica accounting model of the system in Google Sheets". You can run simple checks to see what sorts of morality the people in the system have (do they lie often? do they silence attempts to punish people for bad things? do they systematically produce arguments that the system is good, rather than trying to simply understand the system?) and also just look at its direct effects in the world.

(In my mind, Zvi Mowshowitz is the standard-bearer on 'noping out' of a bad system as soon as you can tell it's bad. The first time was with Facebook, where he came to realize what was evil about it way in advance of me.)

Though of course, the more of a maze the system is, the more it will actively obscure a lot of these checks, which itself should be noted and listed as a major warning. Just as many scientific papers will not give you their data, only their conclusions, many moral mazes will not let you see their results, or will only tell you metrics that are confusing and clearly goodharted [? · GW] (again on science, see citation count).

I haven't managed to fully explain the title of this post, but essentially I'm going to associate all the things I'm criticizing with the name "Context-Free Integrity". 

Context-Free Integrity (noun): The notion that you can have true beliefs about the systems in your environment you interact with, without building (sometimes fairly extensive) models of the distortionary forces within them.

28 comments

Comments sorted by top scores.

comment by Richard_Ngo (ricraz) · 2021-04-14T17:34:01.233Z · LW(p) · GW(p)

There's something that's been bugging me lately about the rationalist discourse on moral mazes, political power structures, the NYT/SSC kerfuffle, etc. People are making unusually strong non-consequentialist moral claims without providing concomitantly strong arguments, or acknowledging the ways in which this is a judgement-warping move.

I don't think that being non-consequentialist is always wrong. But I do think that we have lots of examples of people being blinded by non-consequentialist moral intuitions, and it seems like rationalists around me are deliberately invoking risk factors. Some of the risk factors: strong language, tribalism, deontological rules, judgements about the virtue of people or organisations, and not even trying to tell a story about specific harms.

Your post isn't a central example of this, but it seems like your argument is closely related to this phenomenon, and there are also a few quotes from your post which directly showcase the thing I'm criticising:

they would be notably more disgusted with the parts of that system they interacted with

And:

They'd rather believe the things around them are pretty good rather than kinda evil. Evil means accounting, and accounting is boooring.

And:

The first time was with Facebook, where he was way in advance of me coming to realize what was evil about it.

"Evil" is one of the most emotionally loaded words in the english language. Disgust is one of the most visceral and powerful emotions. Neither you nor I nor other readers are immune to having our judgement impaired by these types of triggers, especially when they're used regularly. (Edit: to clarify, I'm not primarily worried about worst-case interpretations; I'm worried about basically everyone involved.)

Now, I'm aware of major downsides of being too critical [LW · GW] of strong language and bold claims. But being careful not to gratuitously use words like "evil" and "insane" and "Stalinist" isn't an unusually high bar; even most people on Twitter manage it.

Other closely-related examples: people invoking anti-media tribalism in defence of SSC; various criticisms of EA for not meeting highly scrupulous standards of honesty (using words like "lying" and "scam"); talking about "attacks" and "wars"; taking hard-line views on privacy and the right not to be doxxed; etc.

Oh, and I should also acknowledge that my calls for higher epistemic standards [LW(p) · GW(p)] are driven to a significant extent by epistemically-deontological intuitions. And I do think this has warped my judgement somewhat, because those intuitions lead to strong reactions to people breaking the "rules". I think the effect is likely to be much stronger when driven by moral (not just epistemic) intuitions, as in the cases discussed above.

Replies from: NicholasKross, mr-hire, Benito
comment by NicholasKross · 2021-04-15T01:02:42.524Z · LW(p) · GW(p)

I've noticed a thing happening (more? lately? just in my reading sample?) similar to what you describe, where the emphasis goes more onto the social/community side of rationality as opposed to... the rest of rationality.

The Moral Mazes examples are related to that. Also topics like reputation [LW · GW], and virtues 'n' norms [LW · GW], and what other people think of you [LW · GW].

At some point, a person's energy and resources are finite. They can try to win at anything, but maybe the lesson from recent writings is "winning at social anything is hard enough (for a LW-frequenting personality) to be a notable problem".

Some thoughts on this issue:

  • Codify, codify, codify. Most people in the LW community are lacking in some social skills (relative to both non-members and the professional-politician standard). Those who have those skills: please make long detailed checklists and email-extensions of what works. That way, the less-socially-skilled among us can avoid losing-at-social without turning into Mad-Eye Moody and losing our energy.

  • Is there a trend where communities beat around the bush more over time? Many posts do what I've heard called "subtweeting". "Imagine a person X, having to do thing Y, and problem Z happens...". Yes, social game theory exists and reputation exists, but at least consider just telling people the details.

Common/game-theory/vague/bad: "Let's say somebody goes to $ORG, but they do something bad. We should consider $ORG and everyone there to be infected with The Stinky."

Better/precise/detailed/good: "Hey, Nicholas Kross went to MIRI and schemed to build a robot that outputs anti-utils. How do we prevent this in the future, and can we make a preventative checklist?" [1]

If you are totally financially/legally dependent on an abusive organization or person, obviously writing a call-out post with details is game-theoretically bad for you. In that case, don't leave in those details. For everyone else: either write a postmortem [? · GW] or say "I'm under NDA, but...".

(If your AGI-will-give-us-Slack timeline is shorter than a community-Slack-project, how much should you really worry about long-term politics-style social/reputational-game-theoretic threats to the community's Slack?)

Interested in more thoughts on this.


  1. This is a fictional example. Plus, it's not even slyly alluding to any situations! (Well, as far as I know.) ↩︎

comment by Matt Goldenberg (mr-hire) · 2021-04-15T11:06:14.067Z · LW(p) · GW(p)

I unironically think this is a great example of correctly doing the thing the OP is pointing at.

Replies from: interstice
comment by interstice · 2021-04-15T20:35:29.674Z · LW(p) · GW(p)

Indeed. Quis custodiet ipsos custodes?

comment by Ben Pace (Benito) · 2021-04-14T20:02:43.083Z · LW(p) · GW(p)

There are many forces and causes that lead the use of deontology and virtue ethics to be misunderstood and punished on Twitter, and this is part of the reason that I have not participated on Twitter these past 3-5 years. But don't confuse those with the standards for longer form discussions and essays. Trying to hold your discussions to Twitter standards is a recipe for great damage to your ability to talk, and your ability to think.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-04-14T21:03:51.048Z · LW(p) · GW(p)

I'm saying we should strive to do better than Twitter on the metric of "being careful with strongly valenced terminology", i.e. being more careful. I'm not quite sure what point you're making - it seems like you think it'd be better to be less careful?

In any case, the reference to Twitter was just a throwaway example; my main argument is that our standards for longer form discussions on Lesswrong should involve being more careful with strongly valenced terminology than people currently are.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-04-14T21:24:30.732Z · LW(p) · GW(p)

You should totally be less careful. On Twitter, if you say something that can be misinterpreted, sometimes over a million people see it and someone famous tells them you're an awful person. I say sometimes, I more mean "this is the constant state and is happening thousands of times per day". Yes, if you're not with your friends and allies and community, if you're in a system designed to take the worst interpretation of what you say and amplify it in the broader culture with all morality set aside, be careful.

Here on LW, I don't exercise that care to anything like the same degree. I try to be honest and truthful, and worry less about the worst interpretation of what I write. In hiring, there's a maxim: hire for strengths, don't hire for lack-of-weaknesses. It's there to push against the failure mode of hiring-by-committee (typically the board), where everyone can agree on obvious weaknesses but standout strengths get rewarded less. Similarly in writing, I aim more to say valuable truths that aren't said elsewhere or can be succinctly arrived at alongside other LWers, rather than aiming for lack of mistakes or lack of things-that-can-be-misinterpreted by anyone.

Replies from: ricraz, Raemon
comment by Richard_Ngo (ricraz) · 2021-04-15T09:06:27.999Z · LW(p) · GW(p)

I think we're talking past each other a little, because we're using "careful" in two different senses. Let's say careful1 is being careful to avoid reputational damage or harassment. Careful2 is being careful not to phrase claims in ways that make it harder for you or your readers to be rational about the topic (even assuming a smart, good-faith audience).

It seems like you're mainly talking about careful1. In the current context, I am not worried about backlash or other consequences from failure to be careful1. I'm talking about careful2. When you "aim to say valuable truths that aren't said elsewhere", you can either do so in a way that is careful2 to be nuanced and precise, or you can do so in a way that is tribalist and emotionally provocative and mindkilling [LW · GW]. From my perspective, the ability to do the former is one of the core skills of rationality.

In other words, it's not just a question of the "worst" interpretation of what you write; rather, I think that very few people (even here) are able to dispassionately evaluate arguments which call things "evil" and "disgusting", or which invoke tribal loyalties. Moreover, such arguments are often vague because they appeal to personal standards of "evil" or "insane" without forcing people to be precise about what they mean by it (e.g. I really don't know what you actually mean when you say facebook is evil). So even if your only goal is to improve your personal understanding of what you're writing about, I would recommend being more careful2.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-04-15T17:06:28.643Z · LW(p) · GW(p)

I don’t know why you want ‘dispassion’; emotions are central to how I think and act and reason, and this is true for most rationalists I know. I mean, you say it’s mindkilling, and of course there’s that risk, but you can’t just cut off the domain of emotion, and I will not pander to readers who cannot deal with their own basic emotions.

When I say Facebook is evil, I straightforwardly mean that it is trying to hurt people. It is intentionally aiming to give millions of people an addiction that makes their lives worse and their communities worse. Zuckerberg’s early pitch decks described Facebook’s users as addicted and made this a selling point of the company, analogous to how a drug dealer would pitch that their product got users addicted and that this was a selling point for investing in the company. The newsfeed is an explicitly designed sparse reward loop that teaches you to constantly spend your time on it, reading through content you do not find interesting, to find the sparks of valuable content once in a while, instead of giving you the content it knows you want up front, all in order to keep you addicted. They are explicitly trying to take up as much of your free time as they can with an unhelpful addiction, and they do not care about the harm it will cause to the people they hurt. This is what I am calling evil. I give a link in the OP to where Zvi explains a lot of the basic reasoning; the interested reader can learn all this from there. You might disagree about whether Facebook is evil, but I am using the word centrally, and I do not accept your implied recommendation to stop talking about evil things.

You’re saying things like ‘provocative’ and ‘mindkilling’ and ‘invoking tribal loyalties’, but you’ve not made any arguments relating that to my writing. My sense is you‘re arguing that all of my posts should kind of meet the analytic philosophy level of dryness where we can say that something is disgusting only in sentences like

Let us call an act that is morally wrong but does not cause direct harm “morally-disgusting-1”. Let us call an act that is not morally wrong but causes humans to feel disgust anyway “morally-disgusting-2”. Let us explore the space between these two with a logical argument in three steps.

Whereas I want to be able to write posts with sentences like

I had been studying psychology for several years and built my worldview around it, and then after I discovered the replication crisis I felt betrayed. The scientists whose work I revered made me angry, and the fake principles I cherished now fill me with disgust. I feel I was lied to and tricked for 8 years.

With little argument it seems (?) like you’ve decided the latter is ‘mindkilling’ and off-limits and shouldn’t be talked about.

There are absolutely more mature and healthy and rational ways of communicating about emotions and dealing with them, and I spend a massive amount of my time reflecting on my feelings and how I communicate with the people I talk to. If I think I might miscommunicate, I sometimes explicitly acknowledge things like “This is how I’m feeling as I say this, but I don’t mean this is my reflectively endorsed position” or “I’m angry at you about this and I’m being clear about that, but it’s not a big deal to me on an absolute scale and we can move on if it’s also not a big deal to you” or “I want to be clear that while I love <x> this doesn’t mean I think it’s obvious that you should love <x>” or “I want to be clear that you and I do not have the sort of relationship where I get to demand time from you about this, and if you leave I won’t think that reflects poorly on you”. Most emotional skills aren’t at all explicit, like having an emotion rise in me, reflecting on it, and realizing it just isn’t actually something I want to bring up. Perhaps I’m being too reactionary and should try to be more charitable (or I’m being too charitable and should try to be more reactionary). There are lots of tools and skills here. But you’re not talking about these skills (perhaps you’d like to?), you’re mostly saying (from where I’m sitting) that using emotive or deontological language at all is to-be-frowned-on, which I can’t agree with.

I think that very few people (even here) are able to dispassionately evaluate arguments which call things "evil" and "disgusting",

I disagree. I think they have fairly straightforward meanings, and people can understand my claims without breaking their minds, though I am quite happy to answer requests for clarification [LW · GW]. I agree it involves engaging with your own emotions, but we’re not Spock and I’m not writing for him.

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2021-04-15T19:16:45.377Z · LW(p) · GW(p)

You’re saying things like ‘provocative’ and ‘mindkilling’ and ‘invoking tribal loyalties’, but you’ve not made any arguments relating that to my writing

I should be clear here that I'm talking about a broader phenomenon, not specifically your writing. As I noted above, your post isn't actually a central example of the phenomenon. The "tribal loyalties" thing was primarily referring to people's reactions to the SSC/NYT thing. Apologies if it seemed like I was accusing you personally of all of these things. (The bits that were specific to your post were mentions of "evil" and "disgust".)

Nor am I saying that we should never talk about emotions; I do think that's important. But we should try to also provide argumentative content which isn't reliant on the emotional content. If we make strong claims driven by emotions, then we should make sure to also defend them in less emotionally loaded ways, in a way which makes them compelling to someone who doesn't share these particular emotions. For example, in the quotation you gave, what makes science's principles "fake" just because they failed in psychology? Is that person applying an isolated demand for rigour because they used to revere science? I can only evaluate this if they defend their claims more extensively elsewhere.

On the specific example of facebook, I disagree that you're using evil in a central way. I think the central examples of evil are probably mass-murdering dictators. My guess is that opinions would be pretty divided about whether to call drug dealers evil (versus, say, amoral); and the same for soldiers, even when they end up causing a lot of collateral damage.

Your conclusion that facebook is evil seems particularly and unusually strong because your arguments are also applicable to many TV shows, game producers, fast food companies, and so on. Which doesn't make those arguments wrong, but it means that they need to meet a pretty high bar, since either facebook is significantly more evil than all these other groups, or else we'll need to expand the scope of words like "evil" until they refer to a significant chunk of society (which would be quite different from how most people use it).

(This is not to over-focus on the specific word "evil", it's just the one you happened to use here. I have similar complaints about other people using the word "insane" gratuitously; to people casually comparing current society to Stalinist Russia or the Cultural Revolution [LW · GW]; and so on.)

Replies from: Benito
comment by Ben Pace (Benito) · 2021-04-17T01:40:15.706Z · LW(p) · GW(p)

If we make strong claims driven by emotions, then we should make sure to also defend them in less emotionally loaded ways, in a way which makes them compelling to someone who doesn't share these particular emotions.

Restating this in the first person, this reads to me as ”On the topics where we strongly disagree, you’re not supposed to say how you feel emotionally about the topic if it’s not compelling to me.” This is a bid you get to make and it will be accepted/denied based on the local social contract and social norms, but it’s not a “core skill of rationality”. 

You don’t understand what all my words mean. I’m not writing for everyone, so it’s mostly fine from where I’m sitting, and as I said I’m happy to give clarifications to the interested reader. This thread hasn’t been very productive so far, though, so I’ll drop it. Except I’ll add, which perhaps you’ll appreciate, that I did indeed link to an IMO pretty extensive explanation of the reasons behind the ways I think Facebook is evil, and I don’t expect I would have written it that way had I not known there was an extensive explanation written up. The inferential gap would’ve been too big, but I can say it casually because I know that the interested reader can cross the gap using the link.

comment by Raemon · 2021-04-14T21:35:46.612Z · LW(p) · GW(p)

(I have some disagreements with this. I think there's a virtue Ben is pointing at (and which Zvi and others are pointing at), which is important, but I don't think we have the luxury of living in the world where you get to execute that virtue without also worrying about the failure modes Richard is worried about)

Replies from: ricraz, Ruby
comment by Richard_Ngo (ricraz) · 2021-04-15T09:19:41.424Z · LW(p) · GW(p)

Whether I agree with this point or not depends on whether you're using Ben's framing of the costs and benefits, or the framing I intended [LW(p) · GW(p)]; I can't tell.

Replies from: Raemon
comment by Raemon · 2021-04-15T19:50:05.505Z · LW(p) · GW(p)

I think I mostly have a deep disagreement with Ben here, which is important but not urgent to resolve and would take a bunch of time. (I think I might separately have different deep disagreements with you, but I haven't evaluated that)

comment by Ruby · 2021-04-15T05:20:41.238Z · LW(p) · GW(p)

+1

comment by Ruby · 2021-04-14T16:12:07.263Z · LW(p) · GW(p)

I think if I were to title this post, it'd be something like "It's not enough to model internal distortionary forces, you've got to model external ones too." (The current title sounds cool but sans explanation, I don't see how it matches the content.)

And I'd frame the argument as:

When it comes to believing true things about the world, there are distortionary forces both without and within. External people want you to believe things for the sake of their agenda and will optimize aggressively against you for their own interests. At the same time, you too are a political animal and your own mind will optimize against you (and others) for the sake of your own near-term political expediency (cf. Elephant in the Brain and arguments for the primacy of self-deception). To reach truth, you have to model and account for each distortionary environment. Not just one. That means tracking both your own motivated cognition and others' motivated cognition too.

A person who only models their own mind (the naive, inwards-focused, bias-correcting Rationalist) will allow others to manipulate their map. A person who only maps the external adversarial environments allows their own mind to manipulate them (especially if they can justify things with reference to external enemies; cf. the playbook of oppressive regimes). You must account for and attend to both.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-04-14T18:37:11.316Z · LW(p) · GW(p)

That's a good summary.

comment by seed · 2021-04-14T13:22:47.169Z · LW(p) · GW(p)

>> Sometimes I try to tell the people what I can see, and that doesn't always go well. I'm not sure why.
Can you describe a concrete example? Without looking at a few examples, it is hard to tell whether a "context-free integrity" fallacy is to blame, or whether you are just making bad arguments, or something else.
 

comment by Yoav Ravid · 2021-04-14T09:42:55.246Z · LW(p) · GW(p)

Related to rationalists in Stalinist Russia: Kolmogorov Complicity and The Parable of Lightning

comment by FeepingCreature · 2021-04-14T08:29:05.589Z · LW(p) · GW(p)

This sounds like an instance of https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science [LW · GW] .

It was the notion that you could actually in real life follow Science and fail miserably, that Eliezer₁₈ didn't really, emotionally believe was possible.

Oh, of course he said it was possible. Eliezer₁₈ dutifully acknowledged the possibility of error, saying, "I could be wrong, but..."

But he didn't think failure could happen in, you know, real life. You were supposed to look for flaws, not actually find them.

...

No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-04-14T18:38:01.361Z · LW(p) · GW(p)

I hadn't thought of that. Not sure whether it's the same thing, but thanks for the comment. 

comment by Hazard · 2021-04-14T14:03:18.494Z · LW(p) · GW(p)

I generally agree with this post.

And man, that feels kinda naive to me.

Is there something you wanted to communicate here that was more than "that feels wrong/not true"? All usages and explications of "naive" that I've encountered have seemed to focus on "the thing here that is bad or shameful is that we experienced people know this and you don't, get with the program".

Replies from: Benito
comment by Ben Pace (Benito) · 2021-04-14T19:51:34.844Z · LW(p) · GW(p)

I wanted to convey (my feeling of) the standard use of the word.

(of a person or action) showing a lack of experience, wisdom, or judgment.

"the rather naive young man had been totally misled"

I actually can imagine a LWer making that same argument but not out of naivete, because LWers argue earnestly for all sorts of wacky ideas. But what I meant was it also feels to me like the sort of thing I might've said in the past when I had not truly seen the mazes in the world, not had my hard work thrown in my face, or some other experience like that where my standard tools had failed me [LW · GW].

Replies from: Hazard
comment by Hazard · 2021-04-15T02:29:28.860Z · LW(p) · GW(p)

Dope, it was nice to check and see that contrary to what I expect, it's not always being used that way :)

Some idle musings on using naive to convey specific content.

Sometimes I might want to communicate that I think someone's wrong, and I also think they're wrong in a way that's only likely to happen if they lack experience X. Or similar, they are wrong because they haven't had experience X. That's something I can imagine being relevant and something I'd want to communicate. Though I'd specifically want to mention the experience that I think they're lacking. Otherwise it feels like I'm asserting "there just is this thing that is being generally privy to how things work" and you can be privy or not, which feels like it would pull me away from looking at specific things and understanding how they work, and instead towards trying to "figure out the secret". (This is less relevant to your post, because you are actually talking about things one can do)

There's another thing which is in between what I just mentioned and "naive" as a pure, intentional put-down. It's something like "You are wrong, you are wrong because you haven't had experience X, and everyone who has had experience X is able to tell that you are wrong and haven't had experience X." The extra piece here is the assertion that "there are many people who know you are wrong". Maybe those many people are "us", maybe not. I'm having a much harder time thinking of an example where that's something that's useful to communicate, and it's too close to asserting group pressure for my liking.

comment by Ben Pace (Benito) · 2021-04-14T08:26:26.559Z · LW(p) · GW(p)

Just after posting this on "Context-Free Integrity", I checked Marginal Revolution and saw Tyler's latest post was on "Free-Floating Credibility". These two terms feel related...

The kabbles are strong tonight.

comment by Eric Raymond (eric-raymond) · 2021-04-15T13:43:35.076Z · LW(p) · GW(p)

Terminological point: I don't think you can properly describe your hypothetical rationalist in Stalinist Russia as "paranoid".  His belief that he is surrounded by what amounts to a conspiracy out to subjugate and destroy him is neither fixated nor delusional; it is quite correct, even if many of the conspiracy's members would choose to defect from it if they believed they could do so without endangering themselves.

I also note that my experience of living in the US since around 2014 has been quite similar in kind, if not yet in degree.  I pick out 2014 because of the rage-mobbing of Brendan Eich; that was the point at which "social justice" began presenting to me as an overtly serious threat to free speech.  Six years later, political censorship and the threat from cancel culture have escalated to the point where, while we may not yet have achieved Soviet levels of repression, we're closing in fast on East Germany's.

Replies from: Benito, Benito
comment by Ben Pace (Benito) · 2021-04-15T16:12:16.139Z · LW(p) · GW(p)

"Constant vigilance, eh, lad?" said the man.

"It's not paranoia if they really are out to get you," Harry recited the proverb.

The man turned fully toward Harry; and insofar as Harry could read any expression on the scarred face, the man now looked interested.

Though, my point is that, just like Moody, a person who is (correctly) constantly looking out for power-plays and traps will end up seeing many that aren’t there, because it’s a genuinely hard problem to figure out whether specific people are plotting against you.

comment by Ben Pace (Benito) · 2021-04-15T16:13:59.098Z · LW(p) · GW(p)

I had an interesting conversation with Zvi about which societies made it easiest to figure out whether the major societal narratives were false. It seemed like there were only a few major global narratives back then, whereas today I feel like there are a lot more narratives flying around me.