Meetup : Garden grove meetup 2012-05-15T02:17:14.042Z
26 March 2011 Southern California Meetup 2011-03-20T18:29:16.231Z
October 2010 Southern California Meetup 2010-10-18T21:28:17.651Z
Localized theories and conditional complexity 2009-10-19T07:29:34.468Z
How to use "philosophical majoritarianism" 2009-05-05T06:49:45.419Z
How to come up with verbal probabilities 2009-04-29T08:35:01.709Z
Metauncertainty 2009-04-10T23:41:52.946Z


Comment by jimmy on Everything Okay · 2021-01-24T20:13:17.458Z · LW · GW

"G" fits my own understanding best: "Not Okay" is a generalized alarm state, and the ambiguity is a feature, not a bug.

(Generally) we have an expectation that things are supposed to be "Okay" so when they're not, this conflict is uncomfortable and draws attention to the fact that "something is wrong!". What exactly it takes to provoke this alarm into going off depends on the person/context/mindset because it depends on (what they realize) they haven't already taken into account, and that's kinda the point. For example, if you're on a boat and notice that you're on a collision course with a rock you might panic a bit and think "We have to change course!!!", which is an example of "things not being okay". However, the driver might already see the rock and is Okay because the "trajectory" he's on includes turning away from the rock so there's no danger. And of course, other passengers may be in Okay Mode because they fail to see the rock or because they kinda see the rock but they are averse to being Not Okay and therefore try to ignore it as long as possible.

In that light, "Everything is Okay" is reassurance that the alarm can be dismissed. Maybe it's because the driver already sees the rock. Maybe it's because our "boat" is actually a hovercraft which will float right over the rock without issue. Maybe we actually will hit the rock, but there's nothing we can do to not hit the rock, and the damages will be acceptable. Getting people back into Okay Mode is an exercise in getting people to believe that one of these is true, and you don't necessarily have to specify which one if they trust you; if the details are important, that's what the rest of the conversation is for.

The best way to get the benefits of ‘okay’ in avoiding giant stress balls, while still retaining the motivation to act and address problems or opportunities, is to "just" engage with the situation without holding back.

Okay, so we're headed for a rock, now what? If that's alarming then it's alarming. Are we actually going to hit it if we simply dismiss the alarm and go back to autopilot? If so, would that be more costly than the cost of the stress needed to avert it? What can we actually do to stop it? Can we just talk to the driver? Is that likely to work?

If that's likely to work and you're on track to doing that, then "can we sanely go back to autopilot?" can evaluate as "yes" again and we can go back to Okay Mode -- at least, until the driver doesn't listen and we no longer expect our autopilot to handle the situation satisfactorily. You get to go back to Okay Mode as soon as you've taken the new information into account and gotten back on a track you're willing to accept over the costs of stressing more.

"The Kensho thing", as I see it, is the recognition that these alarms aren't "fundamental truths" where the meaning resides. They're momentary alarms that call for the redirection of one's attention, and the ultimate place that everything resolves to after doing your homework and integrating all the information is back to a state which calls for no alarms. That's why it's not "nothing matters, everything is equally good" or "you'll feel good no matter what once you're enlightened" -- it's just "Things are okay; on a fundamental level alarms are not called for, behaviors are, and it's my job to figure out which. If I'm not okay with them, that signals a problem with me in that I have not yet integrated all the information available and gotten back on my best-possible-track". So when your friend dies or you realize that humanity is going to be obliterated, it's not "Lol, that's fine" -- it's room to keep not only a drive to do something about it, but also a drive to stare reality in the face as much as you can manage, to regulate how much you stare at painful truths so that you keep your responses productive, and a desire to improve one's ability to handle unpleasant conflict.

How should one react to those who are primarily optimizing for being in Okay Mode at the expense of other concerns?

Fundamentally, it's a problem of aversion to unpleasant conflict. Sometimes they won't actually see the problem here so it can be complicated by their endorsement of avoidance, but even in those cases it's probably most productive to ignore their own narratives and instead directly address the thing that's causing them to want to avoid.

Shoving in their face more reasons to be Not Okay is likely to trigger more avoidance, so instead of arguing "Here's how closing your eyes means you're more likely to fail to avoid the rock, and therefore kill everyone. Can you imagine how unfun drowning will be?" (which I would expect to lead to more rationalizations/avoidance), I'd focus on helping them be comfortable. More "Yeah, it's super unfun for things to be Not Okay, and I can't blame you for not wanting to do it more than necessary"/"Yes, it's super important to be able to regulate one's own level of Okayness, since being an emotional wreck often makes things worse, and it's good that you don't fail in that way".

Of course, you don't want to just make them comfortable staying in Okay Mode because then there's no motivation to switch, so when there's a little more room to introduce unpleasant ideas without causing folding, you can shift a little more emphasis onto how completely avoiding stress isn't ideal or consequence free either.

It's a bit of a balancing act, and more easily said than done. You have to be able to pull off sincerity when you reassure them that you get where they're coming from and that their choice is actually better than the alternative they fear, and without "Not Okaying" at them by pushing "It's Not Okay that you feel Okay!". It's a lot easier when you can be Okay with them being in Okay Mode because they're Not Okay with being Not Okay, partly because externalizing one's alarms as a flinch is rarely the most helpful way of doing things, but also because if you're Okay you can "go first" and give them a proof of concept and reference example for what it looks like to stare at the uncomfortable thing (or uncomfortable things in general) and stay in Okay Mode. It helps them know "Hey, this is actually possible", and feel like you might even be able to help them get closer to it.

or those who are using Okay as a weapon?

Again, I'd just completely disregard their narratives on this one. They're implying that if you're Not Okay, then it's a "you problem". So what? Make sure they're wrong and demonstrate it.

"God, it's just a little fib. Are you okay??"

"Not really. I think honesty about these kinds of things is actually extremely important, and I'm still trying to figure out where I went wrong expecting not to have that happen"


"Yeah, no, I'm fine. I just want to make sure that these people know your history when deciding how much to trust you".

Comment by jimmy on In Defense of Twitter's Decision to Ban Trump · 2021-01-12T02:04:11.836Z · LW · GW

"Content moderation" is not always a bad thing, but you can't jump directly from "Content moderation can be important" to "Banning Trump, on balance, will not be harmful". 

The important value behind freedom of association is not in conflict with the important value behind freedom of speech, and it's possible to decline to associate with someone without it being a violation of the latter principle. If LW bans someone because they're [perceived to be] a spammer that provides no value to the forum, then there's no freedom of speech issue. If LW starts banning people for proposing ideas that are counter to the beliefs of the moderators because it's easier to pretend you're right if you don't have to address challenging arguments, then that's bad content moderation and LW would certainly suffer for it.

The question isn't over whether "it's possible for moderation to be good", it's whether the ban was motivated in part or full by an attempt to avoid having to deal with something that is more persuasive than Twitter would like it to be. If this is the case, then it does change the ultimate point.

What would you expect the world to look like if that weren't at all part of the motivation? 

What would you expect the world to look like if it were a bigger part of the motivation than Twitter et al would like to admit?

Comment by jimmy on Motive Ambiguity · 2020-12-16T06:58:37.757Z · LW · GW

The world would be better if people treated more situations like the first set of problems, and less situations like the second set of problems. How to do that?


It sounds like the question is essentially "How to do hard mode?".

On a small scale, it's not super intimidating. Just do the right thing and take your spouse to the place you both like. Be someone who cares about finding good outcomes for both of you, and marry someone who sees it. There are real gains here, and with the annoyance you save yourself by not sacrificing for the sake of showing sacrifice, you can maintain motivation to sacrifice when the payoff is actually worth it -- and to find opportunities to do so. When you can see that you don't actually need to display that costly signal, it's usually a pretty easy choice to make.

Forging a deeper and more efficient connection does require allowing potential for conflict so that you can distinguish yourself from the person who is only doing things for shallow/selfish reasons. Distinguish yourself by showing willingness to entertain such accusations, knowing that the truth will show through. Invite those conflicts when you have enough slack to turn it into play, and keep enough slack that you can. "Does this dress make my ass look fat?" -- can you pull off "The *dress* doesn't, no" and get a laugh, or are you stuck where there's only one acceptable answer? If you can, demonstrate that it's okay to suggest the "unthinkable" and keep poking until you can find the edge of the envelope. If not, or when you've reached the point where you can't, then stop and ask why. Address the problem. Rinse and repeat with the next harder thing, as you become ready to.

On a larger scale, it gets a lot harder. You can no longer afford to just walk away from anyone who doesn't already mostly get it, and you don't have as much time and attention to work with. There are things you can do, and I don't want to suggest that it's "not doable". You can start to presuppose the framings that you've worked hard to create and justify in the past, using stories from past experience and social proof to support them in the cases where you're challenged -- which might be less than you think, since the ability to presuppose such things without preemptively flinching defensively can be powerful subcommunication. You can start to build social groups/communities/institutions to scale these principles, and spread to the extent that your extra ability to direct motivation towards good outcomes allows you to out-compete the alternatives.

I just don't get the impression that there's any "easy" answer. If you want people to donate to your political campaign even though you won't play favorites like the other guy will, I think you genuinely have to be able to expect that your donors will be more personally rewarded by the larger total pie and the recognition of doing the right thing than they would be in the alternative where they donate to have someone fight to give them more of a smaller pie -- and are perceived however you let that be perceived.

Comment by jimmy on Number-guessing protocol? · 2020-12-07T18:30:34.454Z · LW · GW

This answer is great because it takes the problem with the initial game (one person gets to update and the other doesn't) and returns the symmetry by allowing both players to update. The end result shows who is better at Aumann updating and should get you closer to the real answer.

If you'd rather know who has the best private beliefs to start with, you can resolve the asymmetry in the other direction and make everyone commit to their numbers before hearing anyone else's. This adds a slight bit of complexity if you can't trust the competitors to be honest, but it's easily solved by either paper/pencil or everyone texting their answer to the person who is going to keep their phone in their pocket and say their answer first.
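The commit-before-reveal version is easy to run even without paper or a trusted first-speaker, via hash commitments. Here's a minimal sketch in Python (the scheme and function names are my own illustration, not anything from the thread):

```python
import hashlib
import secrets

def commit(guess: float) -> tuple[str, str]:
    """Return (commitment, salt). Publish the commitment; keep guess and salt private."""
    salt = secrets.token_hex(16)  # random salt prevents brute-forcing small guesses
    digest = hashlib.sha256(f"{guess}:{salt}".encode()).hexdigest()
    return digest, salt

def verify(commitment: str, guess: float, salt: str) -> bool:
    """Anyone can check a revealed (guess, salt) pair against the earlier commitment."""
    return hashlib.sha256(f"{guess}:{salt}".encode()).hexdigest() == commitment

# Each player posts their commitment before hearing anyone else's number,
# then reveals guess and salt afterward.
c, s = commit(120.0)
assert verify(c, 120.0, s)       # an honest reveal checks out
assert not verify(c, 125.0, s)   # a guess changed after the fact is caught
```

This gets you the "everyone commits first" symmetry without needing to trust anyone to go first honestly.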

Comment by jimmy on Covid 11/19: Don’t Do Stupid Things · 2020-11-20T19:38:04.829Z · LW · GW

The official recommendations are crazy low. Zvi's recommendation here of 5000IU/day is the number I normally hear from smart people who have actually done their research. 

The RCT showing vitamin D to help with covid used quite a bit. This converter from mg to IU suggests that the dose is at least somewhere around 20k IU on the first day and a total of 40k IU over the course of the week. The form they used (calcifediol) is also more potent, and if I'm understanding the following comment from the paper correctly, that means the actual number is closer to 200k/400k. (I'm a bit rushed on this, so it's worth double checking here)

In addition, calcifediol is more potent when compared to oral vitamin D3 [43]. In subjects with a deficient state of vitamin D, and administering physiological doses (up to 25 μg or 1000 IU daily), approximately 1 in 3 molecules of vitamin D appears as 25OHD; the efficacy of conversion is lower (about 1 in 10 molecules) when pharmacological doses of vitamin D/25OHD are used. [42]
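To make that arithmetic easy to double check, here's the back-of-the-envelope version. The specific trial doses (0.532 mg on day 1, 0.266 mg on days 3 and 7) and the 40,000 IU/mg conversion are my assumptions about the study, so verify them against the paper before relying on the numbers:

```python
# Rough sanity check of the doses discussed above.
IU_PER_MG = 40_000               # standard conversion for vitamin D

day1_mg = 0.532                  # assumed day-1 calcifediol dose
week_mg = 0.532 + 2 * 0.266      # day 1 plus days 3 and 7

day1_iu = day1_mg * IU_PER_MG    # ~21,280 IU
week_iu = week_mg * IU_PER_MG    # ~42,560 IU

# If calcifediol is ~10x as potent as oral D3 (per the 1-in-3 vs 1-in-10
# conversion efficiencies quoted above), the D3-equivalent doses land
# near 200k/400k:
print(round(day1_iu * 10), round(week_iu * 10))   # 212800 425600
```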

I've always been confused why the official recommendations for vitamin D are so darn low, but it seems that there might be an answer that is fairly straightforward (and not very flattering to those coming up with the recommended values). It looks like it might be a simple conflation between "standard error of the mean" and "standard deviation" of the population itself.

Comment by jimmy on Simpson's paradox and the tyranny of strata · 2020-11-20T17:21:50.012Z · LW · GW

(If you're worried about the difference being due to random chance, feel free to multiply the number of animals by a million.)


They vary from these patterns, but never enough that they are flying the same route on the same day at the same time at the same time of year. If you want to compare, you can group flights by cities or day or time or season, but not all of them.


The problem you're using Simpson's paradox to point at does not have this same property of "multiplying the size of the data set by arbitrarily large numbers doesn't help". If you can keep taking data until random chance is no issue, then they will end up having sufficient data in all the same subgroups, and you can just read the correct answer off the last million times they both flew in the same city/day/time/season simultaneously.

The problem you're pointing at fundamentally boils down to not having enough data to force your conclusions, and therefore needing to make judgment calls about how important season is compared to time of day, so that you can determine when conditioning on more factors will help relevance more than it will hurt by adding more noise.
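For concreteness, here's a made-up numerical example of the underlying phenomenon: each subgroup comparison can point one way while the pooled comparison points the other, purely because one side flies the hard routes more often. All numbers are invented for illustration.

```python
# (airline, route) -> (on_time flights, total flights)
data = {
    ("A", "easy"): (90, 100),    # A: 90% on time on easy routes
    ("A", "hard"): (300, 500),   # A: 60% on time on hard routes
    ("B", "easy"): (430, 500),   # B: 86% on time on easy routes
    ("B", "hard"): (50, 100),    # B: 50% on time on hard routes
}

def rate(airline, route=None):
    """On-time rate for an airline, optionally restricted to one route type."""
    pairs = [v for (a, r), v in data.items()
             if a == airline and (route is None or r == route)]
    on_time = sum(p[0] for p in pairs)
    total = sum(p[1] for p in pairs)
    return on_time / total

assert rate("A", "easy") > rate("B", "easy")   # 0.90 > 0.86
assert rate("A", "hard") > rate("B", "hard")   # 0.60 > 0.50
assert rate("A") < rate("B")                   # 0.65 < 0.80 -- pooled order flips
```

With enough data in every cell, the stratified comparison is simply correct; the trouble starts when some cells are too thin to estimate.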

Comment by jimmy on Covid 9/24: Until Morale Improves · 2020-09-24T19:43:12.034Z · LW · GW

Hypothetically, what would the right response be if you noticed that one of the main vaccine trials has really terrible blinding (e.g. participants are talking about how to tell whether you get the placebo in the waiting room)?

It seems like it would really mess up the data, probably resulting in the people who got the vaccine taking extra risk and leading the study to understate the effectiveness.  Ideally, "tell the researchers" would be the obvious right answer, but are there perverse incentives at play that make the best response something else?

If I didn’t have people thanking me every week for doing these, it would be difficult to keep going.

Thanks Zvi. The effort is definitely appreciated.

Comment by jimmy on Covid 9/10: Vitamin D · 2020-09-11T03:46:41.824Z · LW · GW
There were 50 patients in the treatment group. None were admitted to the ICU. There were 26 patients in the control group. Half of them, 13 out of 26, were admitted to the ICU. So 13/26 vs. 0/50.

That's not what the paper says:

Of 50 patients treated with calcifediol, one required admission to the ICU (2%),

The conclusions still hold, of course.

Comment by jimmy on Do you vote based on what you think total karma should be? · 2020-08-26T18:31:39.220Z · LW · GW

Adjusting in the other direction seems useful as well. If someone Strong Upvotes ten times less frequently than average I would want to see their strong upvote as worth somewhat more.

Comment by jimmy on Do you vote based on what you think total karma should be? · 2020-08-24T17:32:49.958Z · LW · GW

Voting based on current karma is a good thing.

Without that, a post that is unanimously barely worth upvoting will get an absurd number of upvotes while another post which is recognized as earth-shatteringly important by 50% of voters will fail to stand out. Voting based on current karma gives you a measure of the *magnitude* of people's like for a comment as well as the direction, and you don't want to throw that information out.

If everyone votes based on what they think the total karma should be, then a post's karma reflects [a weighted average of opinions on what the post's total karma should be] rather than [a weighted average of opinions on the post].

This isn't true.

If people vote based on what the karma should be, the final value you get is the median of what people think the karma should be -- i.e. a median of people's opinion of the post. If you force people to ignore the current karma, you don't actually get a weighted average of opinions on the post because there's very little flexibility in how strongly you upvote a post. In order to get that magnitude signal back, you'd have to dilute your voting with dither, and while that will no doubt happen to some extent (people might be too lazy to upvote slightly-good posts, but will make sure to upvote great ones), you will get an overestimate of the value of slightly-good posts.

This is bad, because the great posts hold a disproportionate share of the value, and we very much want them to rise to the top and stand out above the rest.
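A toy simulation of the "vote toward your target" dynamic, under the simplifying assumption that voters arrive repeatedly in random order and each nudges the score toward their preferred total. The target numbers are invented for illustration:

```python
import random
import statistics

random.seed(0)
targets = [1, 2, 2, 3, 50]   # hypothetical voters' preferred totals; one
                             # voter thinks the post is earth-shattering
karma = 0
history = []
for _ in range(2000):
    t = random.choice(targets)    # a random voter looks at the post
    if karma < t:
        karma += 1                # upvote: score is below their target
    elif karma > t:
        karma -= 1                # downvote: score is above their target
    history.append(karma)

settled = statistics.median(history[-500:])
# The score settles near the *median* target (2), nowhere near the
# mean (11.6) -- the one strong opinion barely moves the result.
assert 2 <= settled <= 3
```

This is the mechanism behind "the final value you get is the median of what people think the karma should be": under target-voting, intensity of preference is mostly discarded.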

Comment by jimmy on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-23T07:15:46.234Z · LW · GW
You are very much in the minority if you want to abolish norms in general.

There's a parallel here with the fifth amendment's protection from self incrimination making it harder to enforce laws and laws being good on average. This isn't paradoxical because the fifth amendment doesn't make it equally difficult to enforce all laws. Actions that harm other people tend to have other ways of leaving evidence that can be used to convict. If you murder someone, the body is proof that someone has been harmed and the DNA in your van points towards you being the culprit. If you steal someone's bike, you don't have to confess in order to be caught with the stolen bike. On the other hand, things that stay in the privacy of your own home with consenting adults are *much* harder to acquire evidence for if you aren't allowed to force people to testify against themselves. They're also much less likely to be things that actually need to be sought out and punished.

If it were the case that one coherent agent were picking all the rules with good intent, then it wouldn't make sense to create rules that make enforcement of other rules harder. There isn't one coherent agent picking all the rules and intent isn't always good, so it's important to fight for meta rules that make it selectively hard to enforce any bad rules that get through.

You can try to argue that preventing blackmail isn't selective *enough* (or that it selects in the wrong direction), but you can't just equate blackmail with "norm enforcement [applied evenly across the board]".

Comment by jimmy on What counts as defection? · 2020-07-16T06:27:15.059Z · LW · GW
I actually don't think this is a problem for the use case I have in mind. I'm not trying to solve the comparison problem. This work formalizes: "given a utility weighting, what is defection?". I don't make any claim as to what is "fair" / where that weighting should come from. I suppose in the EGTA example, you'd want to make sure eg reward functions are identical.

This strikes me as a particularly large limitation. If you don't have any way of creating meaningful weightings of utility between agents then you can't get anything meaningful out. If you're allowed to play with that free parameter then you can simply say "I'm not a utility monster, this genuinely impacts me more than you [because I said so!]" and your actual outcomes aren't constrained at all.

Defection doesn't always have to do with the Pareto frontier - look at PD, for example. (C,C), (C,D), (D,C) are usually all Pareto optimal. 

That's why I talk about "in the larger game" and use scare quotes on "defection". I think the word has too many different connotations and needs to be unpacked a bit.

The dictionary definition, for example, is:

n. A lack; a failure; especially, failure in the performance of duty or obligation.
n. The act of abandoning a person or a cause to which one is bound by allegiance or duty, or to which one has attached himself; a falling away; apostasy; backsliding.

This all fits what I was talking about, and the fact that the options in the prisoner's dilemma are traditionally labeled "Cooperate" and "Defect" doesn't mean they fit the definition. It smuggles in these connotations when they do not necessarily apply.

The idea of using tit for tat to encourage cooperation requires determining what one's "duty" is and what "failing" this duty is, and "doesn't maximize total utility" does not actually work as a definition for this purpose because you still have to figure out how to do that scaling.

Using the Pareto frontier allows you to distinguish between cooperative and non-cooperative behavior without having to make assumptions/claims about whose preferences are more "valid". This is really important for any real world application, because you don't actually get those scalings on a silver platter, and therefore need a way to distinguish between "cooperative" and "selfishly destructive" behavior as separate from "trying to claim a higher weight to one's own utility".

Comment by jimmy on What counts as defection? · 2020-07-13T18:33:20.963Z · LW · GW

As others have mentioned, there's an interpersonal utility comparison problem. In general, it is hard to determine how to weight utility between people. If I want to trade with you but you're not home, I can leave some amount of potatoes for you and take some amount of your milk. At what ratio of potatoes to milk am I "cooperating" with you, and at what level am I a thieving defector? If there's a market down the street that allows us to trade things for money then it's easy to do these comparisons and do Coasian payments as necessary to coordinate on maximizing the size of the pie. If we're on a deserted island together it's harder. Trying to drive a hard bargain and ask for more milk for my potatoes is a qualitatively different thing when there's no agreed upon metric you can use to say that I'm trying to "take more than I give".

Here is an interesting and hilarious experiment about how people play an iterated asymmetric prisoner's dilemma. The reason it wasn't more pure cooperation is that due to the asymmetry there was a disagreement between the players about what was "fair". AA thought JW should let him hit "D" some fraction of the time to equalize the payouts, and JW thought that "C/C" was the right answer to coordinate towards. If you read their comments, it's clear that AA thinks he's cooperating in the larger game, and that his "D"s aren't anti-social at all. He's just trying to get a "fair" price for his potatoes, and he's mistaken about what that is. JW, on the other hand, is explicitly trying to use his Ds to coax AA into cooperation. This conflict is better understood as a disagreement over where on the Pareto frontier ("at which price") to trade than it is about whether it's better to cooperate with each other or defect.

In real life problems, it's usually not so obvious what options are properly thought of as "C" or "D", and when trying to play "tit for tat with forgiveness" we have to be able to figure out what actually counts as a tit to tat. To do so, we need to look at the extent to which the person is trying to cooperate vs trying to get away with shirking their duty to cooperate. In this case, AA was trying to cooperate, and so if JW could have talked to him and explained why C/C was the right cooperative solution, he might have been able to save the lossy Ds. If AA had just said "I think I can get away with stealing more value by hitting D while he cooperates", no amount of explaining what the right concept of cooperation looks like will fix that, so defecting as punishment is needed.

In general, the way to determine whether someone is "trying to cooperate" vs "trying to defect" is to look at how they see the payoff matrix, and figure out whether they're putting in effort to stay on the Pareto frontier or to go below it. If their choice shows that they are being diligent to give you as much as possible without giving up more themselves, then they may be trying to drive a hard bargain, but at least you can tell that they're trying to bargain. If their chosen move is conspicuously below (their perception of) the Pareto frontier, then you can know that they're either not-even-trying, or they're trying to make it clear that they're willing to harm themselves in order to harm you too.

In games like real life versions of "stag hunt", you don't want to punish people for not going stag hunting when it's obvious that no one else is going either and they're the one expending effort to rally people to coordinate in the first place. But when someone would have been capable of nearly assuring cooperation if they did their part and took an acceptable risk when it looked like it was going to work, then it makes sense to describe them as "defecting" when they're the one that doesn't show up to hunt the stag because they're off chasing rabbits.

"Deliberately sub-Pareto move" I think is a pretty good description of the kind of "defection" that means you're being tatted, and "negligently sub-Pareto" is a good description of the kind of tit to tat.
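The Pareto-optimality claim about the prisoner's dilemma quoted earlier can be checked mechanically. Here's a sketch using one textbook payoff matrix (the specific payoff numbers are the standard convention, not anything from this thread):

```python
# Payoffs as (row player, column player) for each pair of moves.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def pareto_optimal(outcome):
    """An outcome is Pareto optimal if no other outcome makes one player
    better off without making the other worse off."""
    a, b = payoffs[outcome]
    return not any(
        x >= a and y >= b and (x, y) != (a, b)
        for x, y in payoffs.values()
    )

assert pareto_optimal(("C", "C"))
assert pareto_optimal(("C", "D"))
assert pareto_optimal(("D", "C"))
assert not pareto_optimal(("D", "D"))   # the only sub-Pareto outcome
```

Only (D,D) sits below the frontier, which is why "sub-Pareto move" picks out mutual-harm behavior without requiring any interpersonal utility scaling.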

Comment by jimmy on Noise on the Channel · 2020-07-04T17:47:07.105Z · LW · GW

To the extent that the underlying structure doesn't matter and can't be used, I agree that technically non-random "noise" behaves similarly and that this can be a reasonable use of the term. My objection to the term "noise" as a description of conversational landmines isn't just that they're "technically not completely random", but that the information content is actually important and relevant. In other words, it's not noise, it's signal.

The "landmines" are part of how their values are actually encoded. It's part of the belief structure you're looking to interact with in the first place. They're just little pockets of care which haven't yet been integrated in a smooth and stable way with everything else. Or to continue the metaphor, it's not "scary dangerous explosives to try to avoid", it's "inherently interesting stores of unstable potential energy which can be mined for energetic fuel". If someone is touchy around the subject you want to talk about, that is the interesting thing itself. What is in here that they haven't even finished explaining to themselves, and why is it so important to them that they can't even contain themselves if you try to blow past it?

It doesn't even require slow and cautious approach if you shift your focus appropriately. I've had good results starting a conversation with a complete stranger who was clearly insecure about her looks by telling her that she should make sure her makeup doesn't come off because she's probably ugly if she's that concerned about it. Not only did she not explode at me, she decided to throw the fuse away and give me a high bandwidth and low noise channel to share my perspective on her little dilemma, and then took my advice and did the thing her insecurity had been stopping her from doing.

The point is that you only run into problems with landmines as noise if you mistake landmines for noise. If your response to the potential of landmines is "Gah! Why does that unimportant noise have to get in the way of what I want to do!? I wonder if I can get away with ignoring them and marching straight ahead", then yeah, you'll probably get blowed up if you don't hold back. On the other hand, if your response is closer to "Ooh! Interesting landmine you got here! What happens if I poke it? Does it go off, or does the ensuing self reflection cause it to just dissolve away?", then you get to have engaging and worthwhile high bandwidth low noise conversations immediately, and you will more quickly get what you came for.

Comment by jimmy on Noise on the Channel · 2020-07-02T18:14:24.673Z · LW · GW

I think it's worth making a distinction between "noise" and "low bandwidth channel". Your first examples of "a literal noisy room" or "people getting distracted by shiny objects passing by" fit the idea of "noise" well. Your last two examples of "inferential distance" and "land mines" don't, IMO.

"Noise" is when the useful information is getting crowded out by random information in the channel, but land mines aren't random. If you tell someone their idea is stupid and then you can't continue telling them why because they're flipping out at you, that's not a random occurrence. Even if such things aren't trivially predictable in more subtle cases, it's still a predictable possibility and you can generally feel out when such things are safe to say or when you must tread a bit more carefully.

The "trying to squeeze my ideas through a straw" metaphor seems much more fitting than "struggling to pick the signal out of the noise floor" metaphor, and I would focus instead on deliberately broadening the straw until you can just chuck whatever's on your mind down that hallway without having to focus any of your attention on the limitations of the channel.

There's a lot to say on this topic, but I think one of the more important bits is that you can often get the same sense of "low noise conversation" if you pivot from focusing on ideas which are too big for the straw to focusing on the straw itself, and how its limitations might be relaxed. This means giving up on trying to communicate the object level thing for a moment, but it wasn't going to fit anyway so you just focus on what is impeding communication and work to efficiently communicate about *that*. This is essentially "forging relationships" so that you have the ability to communicate usefully in the future. Sometimes this can be time consuming, but sometimes knowing how to carry oneself with the right aura of respectability and emotional safety does wonders for the "inferential distance" and "conversational landmines" issues right off the bat.

When the problem is inferential distance, the question comes down to what extent it makes sense to trust someone to have something worth listening to over several inferences. If our reasonings differ several layers deep then offering superficial arguments and counterarguments is a waste of time because we both know that we can both do that without even being right. When we can recognize that our conversation partner might actually be right about even some background assumptions that we disagree on, then all of a sudden the idea of listening to them describe their world view and looking for ways that it could be true becomes a lot more compelling. Similarly, when you can credibly convey that you've thought things through and are likely to have something worth listening to, they will find themselves much more interested in listening to you intently with an expectation of learning something.

When the problem is "land mines", the question becomes whether the topic is one where there's too much sensitivity to allow for nonviolent communication and whether supercritical escalation to "violent" threats (in the NonViolent Communication sense) will necessarily displace invitations to cooperate. Some of the important questions here are "Am I okay enough to stay open and not lash out when they are violent at me?" and the same thing reflected towards the person you're talking to. When you can realize "No, if they snap at me I'm not going to have an easy time absorbing that" you can know to pivot to something else (perhaps building the strength necessary for dealing with such things), but when you can notice that you can brush it off and respond only to the "invitation to cooperate" bit, then you have a great way of demonstrating for them that these things are actually safe to talk about because you're not trying to hurt them, and it's even safe to lash out unnecessarily before they recognize that it's safe. Similarly, if you can sincerely and without hint of condescension ask the person whether they're okay or whether they'd like you to back off a bit, often that space can be enough for them to decide "Actually, yeah. I can play this way. Now that I think about it, it's clear that you're not out to get me".

There's a lot more to be said about how to do these things exactly and how to balance between pushing on the straw to grow and relaxing so that it can rebuild, but the first point is that it can be done intentionally and systematically, and that doing so can save you from the frustration of inefficient communication and replace it with efficient communication on the topic of how to communicate efficiently over a wider channel that is more useful for everything you might want to communicate.

Comment by jimmy on Fight the Power · 2020-06-25T03:33:36.012Z · LW · GW

In general, if you're careful to avoid giving unsolicited opinions you can avoid most of these problems even with rigid ideologues. You wouldn't inform a random stranger that they're ugly just because it's true, and if you find yourself expressing or wishing to express ideas which people don't want to hear from you, it's worth reflecting on why that is and what you are looking to get out of saying it.

Comment by jimmy on [deleted post] 2020-06-17T03:32:11.597Z

I think I get the general idea of the thing you and Vaniver are gesturing at, but not what you're trying to say about it in particular. I think I'm less concerned though, because I don't see inter-agent value differences and the resulting conflict as some fundamental inextricable part of the system.

Perhaps it makes sense to talk about the individual level first. I saw a comment recently where the person making it was sorta mocking the idea of psychological "defense mechanisms", because "*obviously* evolution wouldn't select for those who 'defend' from threats by sticking their heads in the sand!" -- as if the problem of wireheading were as simple as competition between a "gene for wireheading" and a gene against. Evolution is going to select for genes that make people flinch away from injuring themselves with hot stoves. It's also going to select for people who cauterize their wounds when necessary to keep from bleeding out. Designing an organism that does *both* is not trivial. If sensitivity to pain is too low, you get careless burns. If it's too high, you get refusal to cauterize. You need *some* mechanism to distinguish between effective flinches and harmful flinches, and a way to enact mostly the former. "Defense mechanisms" arise not out of mysterious propagation of fitness-reducing genes, but rather the lack of a solution to the hard problem of separating the effective flinches from the ineffective -- and sometimes even the easiest solution to these ineffective flinches is hacked together out of more flinches, such as screaming and biting down on a stick when having a wound cauterized, or choosing to take pain killers.

The solution of "simply noticing that the pain from cauterizing a serious bleed isn't a *bad* thing and therefore not flinching from it" isn't trivial. It's *doable*, and to be aspired to, but there's no such thing as "a gene for wise decisions" that is already "hard coded in DNA".

Similarly, society is incoherent and fragmented and flinches and cooperates imperfectly. You get petty criminals and cronyism and censorship of thought and expression, and all sorts of terrible stuff. This isn't proof of some sort of "selection for shittiness" any more than it is to notice individual incoherence and the resulting dysfunction. It's not that coherence is impossible or undesirable, just that you're fighting entropy to get there, and succeeding takes work.

The desire to eat marshmallows succeeds more if it can cooperate and willingly lose for five minutes until the second marshmallow comes. The individual succeeds more if they are capable of giving back to others as a means to foster cooperation. Sometimes the system is so dysfunctional that saying "no thanks, I can wait" will get you taken advantage of, and so the individually winning thing is impulsive selfishness. Even then, the guy failing to follow through on promises of second marshmallows likely isn't winning by disincentivizing cooperation with him, and it's likely more of a "his desire to not feel pain is winning, so he bleeds" sort of situation. Sometimes the system really is so dysfunctional that not only is it winning to take the first marshmallow, it's also winning to renege on your promises to give the second. But for every time someone wins by shrinking the total pie and taking a bigger piece, there's an allocation of the more cooperative pie that would give this would-be-defector more pie while still having more for everyone else too. And whoever can find these alternatives can get themselves more pie.

I don't see negative sum conflict between the individual and society as *inevitable*, just difficult to avoid. It's negotiation that is inevitable, and done poorly it brings lossy conflict. When Vaniver talks about society saying "shut up and be a cog", I see a couple things happening simultaneously to one degree or another. One is a dysfunctional society hurting themselves by wasting individual potential that they could be profiting from, and would love to if only they could see how and implement it. The other is a society functioning more or less as intended and using "shut up and be a cog" as a shit test to filter out the leaders who don't have what it takes to say "nah, I think I'll trust myself and win more", and lead effectively. Just like the burning pain, it's there for a reason and how to calibrate it so that it gets overridden at only and all the right times is a bit of an empirical balancing act. It's not perfect as is, but neither is it without function. The incentive for everyone to improve this balancing is still there, and selection on the big scale is for coherence.

And as a result, I don't really feel myself being pulled between a conflict of "respect society's stupid beliefs/rules" and "care about other people". I see people as a combination of *wanting* me to pass their shit tests and show them a better replacement for their stupid beliefs/rules, being afraid and unsure of what to do if I succeed, and selfishly trying to shrink the size of the pie so that they can keep what they think will be the bigger piece. As a result, it makes me want to rise to the occasion and help people face new and more accurate beliefs, and also to create common knowledge of defection when it happens and rub their noses in it to make it clear that those who work to make the pie smaller will get less pie. Sometimes it's more rewarding and higher leverage to run off and gain some momentum by creating and then expanding a small bubble where things actually *work*, but there's no reason to go from "I can't yet be effective in the broader community because I can't yet break out of their 'cog' mold for me, so I'm going to focus on the smaller community where I can" to "fuck them all". There's still plenty of value in reengaging when capable, and pretending there isn't is not the good, functional thing we're striving for. It's not like we can *actually* form a bubble and reject the outside world, because the outside world will still bring you pandemics and AI, and from even a selfish perspective there's plenty of incentive to help things go well for everyone.

Comment by jimmy on Simulacra Levels and their Interactions · 2020-06-16T05:49:37.392Z · LW · GW
Whereas, if things are too forsaken, one loses the ability to communicate about the lion at all. There is no combination of sounds one can make that makes people think there is an actual lion across an actual river that will actually eat them if they cross the river.

Hm. This sounds like a challenge.

How about this:

Those "popular kids" who keep talking about fictitious "lions" on the other side of the river are actually losers. They try to pretend that they're simply "the safe and responsible people" and pat themselves on the back over it, but really they're just a bunch of cowards who wouldn't know what to do if there were a lion, and so they can't even look across the river and will just shame you for being "reckless" if you doubt the existence of lions that they "just know" are there. I hate having to say something that could lump me with these deplorable fools, and never before has there actually been a lion on the other side of the river, but this time there is. This time it's real, and I'm not saying we can't cross if need be, but if we're going to cross we need to be armed and prepared.

I can see a couple potential failure modes. One is if "Those guys are just crying wolf, but I am legit saving you [and therefore am cool in the way they pretend they are]" itself becomes a cool kid thing to say. The other is that if your audience is motivated to see you as "one of them" to the point of being willing to ignore the evidence in front of them, they will do so despite you having credibly signaled that this is not true. Translating to actual issues I can think of, I think it would mostly actually work though.

It becomes harder if you think those guys are actually cool, but that shouldn't really be a problem in practice. Either a) there actually has been a lion every single time it is claimed, in which case it's kinda hard for "there's a lion!" to indicate group membership because it's simply true. Or b) they've actually been wrong, in which case you have something to distance yourself from.

If the truth is contentious and even though there has always been a lion, they've never believed you, then you have a bigger problem than simply having your assertions mistaken for group membership slogans; you simply aren't trusted to be right. I'd still say there's things that can be done there, but it does become a different issue.

Comment by jimmy on [deleted post] 2020-06-11T19:05:14.154Z
I described what happened to the other post here.

Thanks, I hadn't seen the edit.

I'm having the same dilemma right now where my genuine comments are getting voted into the negative and I'm starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people's time with content they think is low quality (yes yes, I know that that doesn't mean it is low quality per se, but it is a close enough heuristic that I'm mostly willing to stick to it). But the downvotes are very clear so while I'm disappointed that we couldn't talk through this issue, I will no longer be eating up people's time.

The only comments of yours that I see downvoted into the negative are the two prior conversations in this thread. Were there others that are now positive again?

While I generally support the idea that it's better to stop posting than to continue to post things which will predictably be net-negative karma, I don't think that's necessary here. There's plenty of room on LW for things other than curated posts sharing novel insights, and I think working through one's own curiosity can be good not just for the individual in question, but for any other lurkers who might have the same curiosities and for the community, as bringing people up to speed is an important part of helping them learn to interact best with the community.

I think the downvotes are about something else which is a lot more easily fixable. While I'm sure they were genuine, some of your comments strike me as not particularly charitable. In order to hold a productive conversation, people have to be able to build from a common understanding. The more work you put into understanding where the other person is coming from and how it can be a coherent and reasonable stance to hold, the less effort it takes for them to communicate something that is understood. At some point, if you don't put enough effort in, you start to miss valid points which would have been easy for you to find and prohibitively difficult for them to word in a way that you wouldn't miss.

As an example, you responded to Richard_Kenneway as if he thought you were lying despite the fact that he explicitly stated that he was not imputing any dishonesty. I'm not sure whether you simply missed that part or whether you don't believe him, but either way it is very hard to have a conversation with someone who doesn't engage with points like this at least enough to say why they aren't convinced. I think, with a little more effort put into understanding how your interlocutors might be making reasonable, charitable, and valid points, you will be able to avoid the downvotes in the future. That's not to say that you have to believe that they're being reasonable/charitable/etc, or that you have to act like you do, but it's nice to at least put in some real effort to check and give them a chance to show when they are. Because the tendency for people to err on the side of "insufficiently charitable" is really really strong, and even when the uncharitable view is the correct one (not that common on LW), the best way to show it is often to be charitable and have it visibly not fit.

It's a very common problem that comes up in conversation, especially when pushing into new territory. I wouldn't sweat it.

Comment by jimmy on [deleted post] 2020-06-11T18:11:24.513Z
I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I'm working around enough of it for this to still be useful.]

This is a really cool declaration. It doesn’t bleed through in any obvious way, but thanks for letting me know and I’ll try to be cautious of what I say/how I say them. Lemme know if I’m bumping into anything or if there’s anything I could be doing differently to better accommodate.

I think you're interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.

I’m not really sure what you mean here, but I can address what you say below. I’m not sure if it’s related?

“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms?

Depends on how you go about it and what type of risk you’re trying to avoid. When I first started playing with this stuff I taught someone how to “turn off” pain, and in her infinite wisdom she used this new ability to make it easier to be stubborn and run on a sprained ankle. There’s no foolproof solution to make this never happen (in my infinite wisdom I’ve done similar things even with the pain), but the way I go about it now is explicitly mindful of the risks and uses that to get more reliable results. With the swelling, for example, part of my indignant reaction was “it doesn’t have to swell up, I just won’t move it”.

When you’ve seen something happen with your own eyes multiple times, I think that’s beyond the level where you should be foolish for thinking that it might be possible. When you see that the thing that is stopping other people from doing it too is ignorance of the possibility rather than an objection that it shouldn’t be done, then “thinking it through and making your reasoned best guess” isn’t going to be right all the time, but according to your own best guess it will be right more often than the alternative.

Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.

It seems that this bit is your main concern?

It can be a real concern. More than once I've had people express concern about how it has become harder to relate with their old friends after spending a lot of time with me. It's not because of stuff like "I can consciously prevent a lot of swelling, and they don't know how to engage with that" but rather because of stuff like "it's hard to be supportive of what I now see as clearly bad behavior that attempts to shirk reality to protect feelings and inevitably ends up hurting everyone involved". In my experience, it's a consequence of being able to see the problems in the group before being able to see what to do about it.

I don’t seem to have that problem anymore, and I think it’s because of the thought that I’ve put into figuring out how to actually change how people organize their minds. Saying “here, let me use math and statistics to show you why you’re definitely completely wrong” can work to smash through dumb ideas, but then even when you succeed you’re left with people seeing their old ideas (and therefore the ideas of the rest of their social circle) as “dumb” and hard to relate to. When you say “here, let me empathize and understand where you’re coming from, and then address it by showing how things look to me”, and go out of your way to make their former point of view understandable, then you no longer get this failure mode. On top of that, by showing them how to connect with people who hold very different (and often less well thought out) views than you, it gives them a model to follow that can make connecting with others easier. My friend in the above example, for instance, went from sort of a “socially awkward nerd” type to someone who can turn that off and be really effective when she puts her mind to it. If someone is depressed and not even his siblings can get him to talk, he’ll still talk to her.

If there’s a group of people you want to be able to relate to effectively, you can’t just dissociate off into your own little world where you give no thought to their perspectives. But neither can you just melt in and let your own perspective become that social consensus, because if you don’t retain enough separation to at least have your own thoughts and think about whether they might be better and how best to merge them with the group, then you’re just shirking your leadership responsibilities, and if enough people do this the whole group can become detached from reality and led by whoever wants to command the mob. This doesn’t tend to lead to great things.

Does that address what you’re saying?

Comment by jimmy on [deleted post] 2020-06-10T20:08:21.060Z

It's not an attack, and I would recommend not taking it as one. People make that mistake all the time, and there's no shame in that. Heck, maybe I'm even wrong and what I'm perceiving as an error actually isn't one. Learning from mistakes (if it turns out to be one) is how we get stronger.

I try to avoid making that mistake, but if you feel like I'm erring, I would rather you be comfortable pointing out what you see instead of fearing that I will take it as an attack. Conversations (philosophical and otherwise) work much more efficiently this way.

I'm sorry if it hasn't been sufficiently clear that I'm friendly and not attacking you. I tried to make it clear by phrasing things carefully and using a smiley face, but if you can think of anything else I can do to make it clearer, let me know.

Secondly I would also like to hear an actual counterargument to the argument I made

Which one? The "it was only studying IBS" one was only studying IBS, sure. It still shows that you can do placebos without deception in the cases they studied. It's always going to be "in the cases they've studied" and it's always conceivable that if you only knew to find the right use of placebos to test, you'll find one where it doesn't work. However, when placebos work without deception in every case you've tested, the default hypothesis is no longer "well, they require deception in every case except these two weird cases that I happen to have checked". The default hypothesis should now be "maybe they just don't require deception at all, and if they do maybe it's much more rare than I thought".

I'm not sure what point the existence of nocebo makes for you, but the same principles apply there too. I've gotten a guy to punch a cactus right after he told me "don't make me punch the cactus" simply by making him expect that if I told him to do it he would. Simply replace "because drugs" with "because of the way your mind works" and you can do all the same things and more.

I'm not sure how many more times I'll be willing to address things like this though. I'm willing to move on to further detail of how this stuff works, or to address counterarguments that I hadn't considered and are therefore surprisingly strong, but if you still just don't buy into the general idea as worth exploring then I can agree to disagree.

And thirdly I have never deleted a comment, but you appear to have double posted, shall I delete one of them?

Yeah, it didn't submit properly the first time and then didn't seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I'd have deleted one if I could have.

Speaking of deleting things, what happened to your other post?

Comment by jimmy on [deleted post] 2020-06-10T07:49:50.487Z

There's no snark in my comment, and I am entirely sincere. I don't think you're going to get a good understanding of this subject without becoming more skeptical of the conclusions you've already come to and becoming more curious about how things might be different than you think. It simply raises the barrier to communication high enough so as to make reaching agreement not worthwhile. If that's not a perspective you can entertain and reason about, then I don't think there's much point in continuing this conversation.

If you can find another way to convey the same message that would be more acceptable to you, let me know.

Comment by jimmy on [deleted post] 2020-06-08T19:30:29.622Z

1) Isomorphic to my "what if you know you'll do something stupid if you learn that your girlfriend has cheated on you" example. To reiterate, any negative effects of learning are caused by false beliefs. Prioritize over which way you're going to be wrong until you become strong enough to just not be predictably wrong, sure. But become stronger so that you can handle the truths you may encounter.

2) This clearly isn't a conflict between epistemic and instrumental rationality. This is a question about arming your enemies vs not doing so, and the answer there is obvious. To reiterate what I said last time, this stuff all falls apart once you realize that these are two entirely separate systems both with their own beliefs and values and you posit that the subsystem in control is not the subsystem that is correct and shares your values. Epistemic rationality doesn't mean giving your stalker your new address.

3) "Unfortunately studies have shown that in this case the deception is necessary, and the placebo effect won't take hold without it". This is assuming your conclusion. It's like saying "Unfortunately, in my made up hypothetical that doesn't actually exist, studies have shown that some bachelors are married, so now what do you say when you meet a married bachelor!". I say you're making stuff up and that no such thing exists. Show me the studies, and I'll show you where they went wrong.

You can't just throw a blanket over a box and say "now that you can no longer see the gears, imagine that there's a perpetual motion machine in there!" and expect it to have any real world significance. If someone showed me a black box that put out more energy than went into it and persisted longer than known energy storage/conversion mechanisms could do, I would first look under the box for any shenanigans that a magician might try to pull. Next I would measure the electromagnetic energy in the room and check for wireless power transfer. Even if I found none of those, I would still expect that this guy is a better magician than I am anti-magician, and would not begin to doubt the physics. Even if I became assured that it wasn't magician trickery and it really wasn't sneaking energy in somehow, I would then start to suspect that he managed to build a nuclear reactor smaller than I thought possible, or otherwise discovered new physics that makes this possible. I would then proceed to tear the box apart and find out what assumptions I'm missing. At the point where it became likely that it wasn't new physics but rather incorrect old physics, I would continually reference the underlying justifications of the laws of thermodynamics and see if I could start to see how one of the founding assumptions could be failing to hold.

Not until I had done all that would I even start to believe that it is genuinely what it claims to be. The reasons to believe in the laws of thermodynamics are simply so much stronger than the reason to believe people claiming to have perpetual motion machines that if your first response isn't to challenge the hypothetical hard, then you're making a mistake.

"Knowing more true things without knowing more false things leads to worse results by the values of the system that is making the decision even when the system is working properly" is a similarly extraordinary claim that calls for extraordinary evidence. The first thing to look for, besides a complete failure to even meet the description, is for false beliefs being smuggled in. In every case you've given, it's been one or the other of these, and that's not likely to change.

If you want to challenge one of the fundamental laws of rationality, you have to produce a working prototype, and it has to be able to show where the founding assumptions went wrong. You can't simply cast a blanket over the box and declare that it is now "possible" since you "can't see" that it is not impossible. Endeavor to open black boxes and see the gears, not close your eyes to them and deliberately reason out of ignorance. Because when you do, you'll start to see the path towards making both your epistemic and your instrumental rationality work better.

4) Throw it away like all spam. Your attention is precious, and you should spend it learning the things that you expect to help you the most, not about seagulls. If you want though, you can use this as an exercise in becoming more resilient and/or about learning about the nature of human psychological frailty.

It's worth noticing though, that you didn't use a real world example and that there might be reasons for this.

5) This is just 2 again.

6) Maybe? As stated, probably not. There are a few different possibilities here though, and I think it makes more sense to address them individually.

a) The torture is physically damaging, like peeling one's skin back or slowly breaking every bone in one's body.

In this case, obviously not. I'm also curious what it feels like to be shot in the leg, but the price of that information is more than I'm willing to spend. If I learn what that feels like, then I don't get to learn what I would have been able to accomplish if I could still walk well. There's no conflict between epistemic and instrumental rationality here.

b) The "torture" is guaranteed to be both safe and non physically damaging, and not keep me prisoner too long when I could be doing other things.

When I learned about tarantula hawks and that their sting was supposedly both debilitatingly painful and also perfectly non-damaging and safe, I went pretty far out of my way to acquire them and provoke them to sting me. Fear of non-damaging things is a failing to be stamped out. When you accept that the scary thing truly is sufficiently non-dangerous, fear just becomes excitement anyway.

If these mysterious white room people think they can bring me a challenge while keeping things sufficiently safe and non-physically-damaging I'd probably call their bluff and push that button to see what they got.

c) This "torture" really is enough to push me sufficiently past my limits of composure that there will be lasting psychological damage.

I think this is actually harder than you think unless you also cross the lines on physical damage, risk, or get to spend a lot of time at it. However, it is conceivable and so in this case we're back to being another example of number one. If I'm pretty sure it won't be any worse than this, I'd go for it.

This whole "epistemic vs instrumental rationality" thing really is just a failure to do epistemic rationality right, and when you peek into the black box instead of intentionally keeping it covered you can start to see why.

Comment by jimmy on [deleted post] 2020-06-08T17:53:06.883Z
I'm very glad that you managed to train yourself to do that but this option is not available for everyone.

Do you have any evidence for this statement? That seems like an awfully quick dismissal given that twice in a row you cited things as if they countered my point when they actually missed the point completely. Both epistemically and instrumentally, it might make sense to update the probability you assign to "maybe I'm missing something here". I'm not asking you to be more credulous or to simply believe anything I'm saying, mind you, but maybe be a bit more skeptical and a little less credulous of your own ideas, at least until that stops happening.

Because you do have that option available to you. In my experience, it's simply not true that attempts at self deception ever give better results than simply noticing false beliefs and then letting them go once you do, or that anyone ever says "that's a great idea, let's do that!" and then mysteriously fails. The idea that it's "not available" is one more false belief that gets in the way of focusing on the right thing.

Don't get me wrong, I'm not saying that it's always trivial. Epistemic rationality is not trivial. It's completely possible to try to organize one's mind into coherence and still fail to get the results because you don't realize where you're missing something. Heck, in the last example I gave, my friend did just that. Still, at the end of the day, she got her results, and she is a much happier and more competent person than she was years back when her mind was still caught up on more well-meaning self-deceptions.

I don't see a lot of engaging in the least convenient possible world

Well, if I don't think any valid examples exist, all I can do is knock over the ones you show me. Perhaps you can make your examples a little less convenient to knock over and put me to a better test then. ;)

I'll take a look at your new post.

Comment by jimmy on [deleted post] 2020-06-08T05:13:44.826Z

Placebo doesn't require deception.

Just like with sports, you can get all the same benefits of placebo by simply pointing your attention correctly without predicating it on nonsense beliefs, and it's actually the nonsense beliefs that are getting in the way and causing the problem in the first place. A "placebo" is just an excuse to stop falsely believing that you can't do whatever it is you need to do without a pill.

And I don't say this as some matter of abstract "theory" that sounds good until you try to put it into practice; it's a very real thing that I actually practice somewhat regularly. I'll give you an example.

One day I sprained my ankle pretty badly. I was frustrated with myself for getting injured and didn't want it to swell up so I indignantly decided "screw this, my ankle isn't going to swell". It was a significant injury and took a month to recover, but it didn't swell. The next several times I got injured I kept this attitude and nothing swelled, including dropping a 50lb chunk of wood on my finger in a way that I was sure would swell enough to keep me from bending that finger... until I remembered that it doesn't have to be that way, and made the difficult decision to actually expect it to not swell. It didn't, and I retained complete finger mobility.

I told a friend of mine about this experience of mine, and while she definitely remained skeptical and it seemed "crazy" to her, she had also learned that even "crazy" things coming out of my mouth had a high probability of turning out to be true, and therefore didn't rule it out. Next time she got injured, she felt a little weird "pretending" that she could just influence these things but figured "why not?" and decided that her injury wasn't going to swell either. It didn't. A few injuries go by, and things aren't swelling so much. Finally, she inadvertently tells someone "Oh, don't worry, I don't need to ice my broken thumb because I just decided that it won't swell". The person literally could not process what she said because it was so far from what he was expecting, and she felt foolish for saying it. Her injury then swelled up, even though it had already been a while since the break. I called her and talked to her later that night, pointed out what had happened with her mental state, and helped her fix her silly limiting (and false) beliefs, and when she woke up in the morning the swelling had largely subsided again.

The size of the effect was larger than I've ever gotten with ibuprofen, let alone fake ibuprofen. "I have no ability to prevent my body from swelling up" is factually wrong, and being convinced of this falsehood prevents people from even trying. You can lie to yourself and take a sugar pill if you want, but it really is both simpler and more effective to just stop believing false things.

Comment by jimmy on [deleted post] 2020-06-07T21:22:34.580Z
What something is worth is not an objective belief but a subjective value.

Would you say "this hot dog is worth eating" is similarly "a subjective value" and not "an objective belief"? Because if it turns out that the hot dog had been sitting out for too long and you end up puking your guts out, I think it's pretty unambiguous to say that "worth eating" was clearly false.

The fact that the precise meaning may not be clear does not make the statement immune from "being wrong". A really good start on this problem is "if you were able to see and emotionally integrate all of the consequences, would you regret this decision or be happy that you made it?".

This is not how human psychology works. Optimism does lead to better results in sports.

You have to be able to distinguish between "optimism" (which is good) and "irrational confidence" (which is bad). What leads to good results in sports is an ability to focus on putting the ball where it needs to go, and pessimism (but not accurate beliefs) impedes that.

If you want a good demonstration of that, watch Conor McGregor's rise to stardom. He gained a lot of interest for his "trash talk" which was remarkably accurate. Instead of saying "I'M GONNA KNOCK HIM OUT FIRST ROUND!" every time, he actually showed enough humility to say "My opponent is a tough guy, and it probably won't be a first round knockout. I'll knock him out in the second". It turned out in that case that he undersold himself, but that did not prevent him from getting the first round knockout. When you watch his warm up right before the fights, what his body language screams is that he has no fear, and that's what's important because fear impedes fluid performance. When he finally lost, his composure in defeat showed that his lack of fear came not from successful delusion but from acceptance of the possibility of losing. This is peak performance, and is what we should all be aspiring to.

In general, "not how human psychology works" is a red flag for making excuses for those with poorly organized minds. "You have to expect to win!" is a refrain for a reason; the people who say this falsehood probably would engage in pessimism if they thought they were likely to lose. However, that does not mean that one cannot aspire to do better. Other people don't fall prey to this failure mode, and those people can put on impressive performances that shock even themselves.

Comment by jimmy on [deleted post] 2020-06-06T20:17:05.089Z
These two options do not always coincide. Sometimes you have to choose.

I'll go even further than Zack and flat out reject the idea that this even applies to humans.

The most famous examples are: Learning knowledge that is dangerous for humanity (e.g how to build an unaligned Superintelligence in your garage), knowledge that is dangerous to you (e.g Infohazards)

This kind of problem can only happen with an incoherent system ("building and running a superintelligence in one's garage is a bad thing to do"+"I should build and run a superintelligence in my garage!") where you posit that the subsystem in control is not the subsystem that is correct. If you don't posit incoherence of "a system", then this whole thing makes no sense. If garage AIs are bad, don't build them and try to stop others from building them. If garage AIs are good, then build it. Both sides find instrumental and epistemic rationality to be aligned. It's just that my idea of truth doesn't always line up with your idea of best action because you might have a different idea of what the truth is.

It can be more confusing when it happens within one person, but it's the same thing.

If learning that your girlfriend is cheating on you would cause you to think "life isn't worth living" and attempt suicide even though life is still worth living, then the problem isn't that true beliefs ("she cheated on me") are leading to bad outcomes, it's that false beliefs ("life isn't worth living") are leading to bad outcomes, and that your truth finding is so out of whack that you can already predict that true beliefs will lead to false beliefs.

In these cases you have a few options. One is to notice this and say "Huh, if life would still be worth living, why would I feel like it isn't?" and explore that until your thoughts and feelings merge into agreement somewhere. In other words, fix your shit so that true beliefs no longer predictably lead to false beliefs. Another is to put off the hard work of having congruent (and hopefully true) beliefs and feelings, and say "my feelings about life being worth living are wrong, so I will not act on them". Another, if you feel like you can't trust your logical self to retain control over your emotional impulses, is to say "I realize that my belief that my girlfriend isn't cheating on me might not be correct, but my resulting feelings about life would be incorrect in a worse way, and since I am not yet capable of good epistemics, I'm at least going to be strategic about which falsehoods I believe so that my bad epistemics harm me the least".

The worst thing you can do is go full "Epistemics don't matter when my life is on the line" and flat out believe that you're not being cheated on. Because if you do that, then there's nothing protecting you from stumbling upon evidence and being forced between a choice of "unmanaged false beliefs about life's worth" or "detaching from reality yet further".

or trusting false information to increase your chances of achieving your goals (e.g Being unrealistically optimistic about your odds of beating cancer because optimistic people have higher chances of survival).

True beliefs aren't the culprit here either. If you have better odds when you're optimistic, then be optimistic. "The cup isn't completely empty! It's 3% full, and even that may be an underestimate!" is super optimistic, even when "I'm almost certainly going to die" is also true.

This is very similar to the mistaken sports idea that "you have to believe you will win". No you don't. You just have to put the ball through the hoop more than the other guy does, or whatever other criteria your sport has. Yes, you're likely to not even try if you're lying to yourself and saying "it's not even possible to win" because "I shouldn't even try" follows naturally from that. However, if you keep your mind focused on "I can still win this, even if it's unlikely" or even just "Put the ball in the hoop. Put the ball in the hoop", then that's all you need.

In physics, if you think you've found a way to get free energy, that's a good sign that your understanding of the physics is flawed and the right response is to think "okay, what is it that I don't understand about gravity/fluid dynamics/etc that is leading me to this false conclusion?". Similarly, the idea that epistemic rationality and instrumental rationality are in conflict is a major red flag about the quality of your epistemic rationality, and the solution on both fronts is to figure out what you're doing wrong that is leading you to this obvious falsehood.

Comment by jimmy on [deleted post] 2020-06-06T19:39:22.403Z

That's an interesting hypothesis, and seems plausible as a partial explanation to me. I don't buy it as a full explanation for a couple reasons. One is that it is inherently harder to read and follow rather than being an equally valid aesthetic. It may also function as a signal that you are on team Incoherent Thought, and there may occasionally be reasons to fake a disability, but generally genuine shortcomings don't become attractive things to signal. Even the king of losers is a loser, and generally the impression that I get is that these people did wish they had more mainstream acceptance and would take it in a heartbeat if they could get it at the level that they feel like they deserve. That doesn't mean that they won't flout it when they can, but the signs are there. They spend a lot more time talking about "the establishment" than the establishment spends talking about them, for example.

The main point holds though. If your target audience sees formal attire as a sign of "conformism and closed mindedness" rather than a sign that you are able to shave and afford pricey clothing, then the honest thing to do is to show that you don't have to conform by not wearing a suit when you meet with them. When you're meeting the people who do want to make sure you can shave and put on fancy clothes, it's honest to show that you can do that too.

Comment by jimmy on [deleted post] 2020-06-01T19:27:55.278Z

If your website looks like this people don't need to read your content in order to tell that you're a crazy person who is out of touch with how he comes off and doesn't have basic competencies like "realize that this is terrible, hire a professional". Just scroll through without reading any of it, and with your defense against the dark arts primed and ready, tell me how likely you feel that the content is some brilliant insight into the nature of time itself. It's a real signal that credibly conveys information about how unlikely this person is to have something to say which is worth listening to. Signalling that you can't make a pretty website when you can is dishonest, and the fact that you would be hindering yourself by doing so makes it no better.

When you know what you're doing, there's nothing "dark" about looking like it.

Comment by jimmy on Updated Hierarchy of Disagreement · 2020-05-30T20:17:20.568Z · LW · GW
a "steel man" is an improvement of someone's position or argument that is harder to defeat than their originally stated position or argument.

This seems compatible with both, to me. "You're likely to underestimate the risks, and you can die even on a short trip" is a stronger argument than "You should always wear your seat belt because it is NEVER safe to be in a car without a seat belt", and cannot be so easily defeated by saying "Parked in the garage. Checkmate".

Reading through the hyperbole to the reasonable point underneath is still an example of addressing "the best form of the other person's argument", and it's not the one they presented.

Comment by jimmy on Is fake news bullshit or lying? · 2020-05-30T20:12:46.731Z · LW · GW

I think the conflicting narratives tend to come from different sides of the conflict, and that people generally want the institutions that they're part of (and which give them status) to remain high status. It just doesn't always work.

What I'm talking about is more like... okay, so Chael Sonnen makes a great example here both because he's great at it and because it makes for a non-political example. Chael Sonnen is a professional fighter who intentionally plays the role of the "heel". He'll say ridiculous things with a straight face, like telling the greatest fighter in the world that he "absolutely sucks", or telling a story that a couple of Brazilian fighters (the Nogueira brothers) mistook a bus for a horse and tried to feed it a carrot, and then sticking to it.

When people try to "fact check" Chael Sonnen, it doesn't matter because not only does he not care that what he's saying is true, he's not even bound by any expectation of you believing him. The bus/carrot story was his way of explaining that he didn't mean to offend any Brazilians, and the only reason he said that offensive stuff online is that he was unaware that they had computers in Brazil. The whole point of being a heel is to provoke a response, and in order to do that all he has to do is have the tiniest sliver of potential truth there and not break character. The bus/carrot story wouldn't have worked if the fighters were from a clearly more technologically advanced country than him, even though it's pretty darn far from "they actually think buses are horses, and it's plausible that Chael didn't know they have computers". If your attempt to call Chael out on his BS is to "fact check" whether he was even there to see a potential bus/horse confusion, or to point out that if anything they're more likely to mistake a bus for a llama, you're missing the entire point of the BS in the first place. The only way to respond is the way Big Nog actually did, which is to laugh it off as the ridiculous story it is.

The problem is that while you might be able to laugh off a silly story about how you mistook a bus for a horse, people like Chael (if they're any good at what they do) will be able to find things you're sensitive about. You can't so easily "just laugh off" him saying that you absolutely suck even if you're the best in the world, because he was a good enough fighter that he nearly won that first match. Bullshitters like Chael will find the things that are difficult for you to entertain as potentially true and make you go there. If there's any truth there, you'll have to admit to it or end up making yourself look like a fool.

This brings up the other type of non-truthtelling that commonly occurs, which is the counterpart to this. Actually expecting to be believed means opening yourself to the possibility of being wrong and demonstrating that you're not threatened by this. If I say it's raining outside and expect you to actually believe me, I have to be able to say "hey, I'll open the door and show you!", and I have to look like I'll be surprised if you don't believe me once you get outside. If I start saying "How DARE you insinuate that I might be lying about the rain!" and generally take the bait that BSers like Chael leave, I show that it's not that I want you to genuinely believe me so much as I want you to shut your mouth and not challenge my ideas. It's a 2+2=5 situation now, and that's a whole nother thing to expect. In these cases there still isn't the same pressure to conform to the truth that's needed if you expect to be believed, and your real constraint is how much power you have to pressure the other person into silence/conformity.

The biggest threat to truth, as I see it, is that when people get threatened by ideas that they don't want to be true, they try to 2+2=5 at it. Sometimes they'll do the same thing even when the belief they're trying to enforce is actually the correct one, and it causes just as many problems, because you can't trust someone saying "Don't you DARE question" even when they follow it up with "2+2=4", and unless you can do the math yourself you can't know what to believe. To give a recent example, I found a document written by a virologist PhD about why the COVID pandemic is very unlikely to have come from a lab, and it was more thorough and covered more possibilities than I had yet seen anyone cover, which was really cool. The problem is that when I actually checked his sources, they didn't all say what he said they said. I sent him a message asking whether I was missing something in a particular reference, and his response was basically "Ah, yeah. It's not in that one, it's in another one from China that has been deleted and doesn't exist anymore." He then went on to cite the next part of his document as if there's nothing wrong with making blatantly false implications that the sources one gives support the point one made, and as if the only reason I could even be asking about it is that I hadn't read the following paragraph about something else. When I pointed out that conspiracy minded people are likely to latch on to any little reason to not trust him, and that in order to be persuasive to his target audience he should probably correct it and note the change, he did not respond and did not correct his document. And he wonders why we have conspiracy theories.

Bullshitters like Chael can sometimes lose (or fail to form) their grip on reality and let their untruths actually start to impact things in a negative way, and that's a problem. However, it's important to realize that the fuel that sustains these people is the over-reaching attempts to enforce "2+2=what I want you to say it does", and if you just do the math and laugh it off when he says with a straight face that 2+2=22, there's no more oppressive bullshit for him to eat and fuel his trolling bullshit.

Comment by jimmy on Updated Hierarchy of Disagreement · 2020-05-29T20:14:08.635Z · LW · GW
You don't want your interlocutor to feel like you are either misrepresenting or humiliating him. Improving an argument is still desirable, but don't sour the debate.

There are a couple different things I sometimes see conflated together under the label "steel man".

As an example, imagine you're talking to the mother of a young man who was killed by a drunk driver on the way to the corner store, and whose life could likely have been saved if he had been wearing a seat belt. This mom might be a bit emotional when she says "NEVER get in a car without your seat belt on! It's NEVER safe!", and interpreted completely literally it is clearly bad advice based on a false premise.

One way to respond would be to say "Well, that's pretty clearly wrong, since sitting in a car in your garage isn't dangerous without a seat belt on. If you were to make a non-terrible argument for wearing seat belts all the time, you might say that it's good to get in the habit so that you're more likely to do it when there is real danger", and then respond to the new argument. The mother in this case is likely to feel both misrepresented and condescended to. I wouldn't call this steel manning.

Another thing you could do is to say "Hm. Before I respond, let me make sure I'm understanding you right. You're saying that driving without a seat belt is almost always dangerous (save for obvious cases like "moving the car from the driveway into the garage") and that the temptation to say "Oh, that rarely happens!"/"it won't happen to me!"/"it's only a short trip!" is so dangerously dismissive of real risk that it's almost never worth trusting that impulse when the cost of failure is death and the cost of putting a seat belt on is negligible. Is that right?". In that case, you're more likely to get a resounding "YES!" in response, even though that not only isn't literally what she said, it also contradicts the "NEVER" in her statement. It's not "trying to come up with a better argument, because yours is shit", it's "trying to understand the actual thing you're trying to express, rather than getting hung up on irrelevant distractions when you don't express it perfectly and/or literally". Even if you interpret wrong, you're not going to get bad reactions because you're checking for understanding rather than putting words in their mouth, and you're responding to the thing they are actually trying to communicate. This is the thing I think was being pointed at in the original description of "steel man", and is something worth striving for.

Comment by jimmy on Is fake news bullshit or lying? · 2020-05-27T18:03:07.743Z · LW · GW

I think another distinction worth making here is whether the person "bullshitting"/"lying" even expects or intends to be believed. It's possible to have "not care whether the things he says describe reality correctly" and still be saying it because you expect people to take you seriously and believe you, and I'd still call that lying.

It's quite a different thing when that expectation is no longer there.

Comment by jimmy on Reflective Complaints · 2020-05-24T20:20:49.045Z · LW · GW

I used "flat earthers" as an exaggerated example to highlight the dynamics the way a caricature might highlight the shape of a chin, but the dynamics remain and can be important even and especially in relationships which you'd like to be close simply because there's more reason to get things closer to "right".

The reason I brought up "arrogance"/"humility" is because the failure modes you brought up of "not listening" and "having obvious bias without reflecting on it and getting rid of it" are failures of arrogance. A bit more humility makes you more likely to listen and to question whether your reasoning is sound. As you mention though, there is another dimension to worry about which is the axis you might label "emotional safety" or "security" (i.e. that thing that drives guarded/defensive behavior when it's not there in sufficient amounts).

When you get defensive behavior (perhaps in the form of "not listening" or whatever), cooperative and productive conversation requires that you back up and get the "emotional safety" requirements fulfilled before continuing on. Your proposed response assumes that the "safety" alarm is caused by an overreach on what I'd call the "respect" dimension. If you simply back down and consider that you might be the one in the wrong this will often satisfy the "safety" requirement because expecting more relative respect can be threatening. It can also be epistemically beneficial for you if and only if it was a genuine overreach.

My point isn't "who cares about emotional safety, let them filter themselves out if they can't handle the truth [as I see it]", but rather that these are two separate dimensions, and while they are coupled they really do need to be regulated independently for best results. Any time you try to control two dimensions with one lever you end up having a 1d curve that you can't regulate at all, and therefore is free to wander without correction.

While people do tend to mirror your cognitive algorithm so long as it is visible to them, it's not always immediately visible and so you can get into situations where you *have been* very careful to make sure that you're not the one that is making a mistake and since it hasn't been perceived you can still get "not listening" and the like anyway. In these kinds of situations it's important to back up and make it visible, but that doesn't necessarily mean questioning yourself again. Often this means listening to them explain their view and ends up looking almost the same, but I think the distinctions are important because of the other possibilities they help to highlight.

The shared cognitive algorithm I'd rather end up in is one where I put my objections aside and listen when people have something they feel confident in, and one where when I have something I'm confident in they'll do the same. It makes things run a lot more smoothly and efficiently when mutual confidence is allowed, rather than treated as something that has to be avoided at all costs, and so it's nice to have a shared algorithm that can gracefully handle these kinds of things.

Comment by jimmy on Reflective Complaints · 2020-05-22T03:06:37.049Z · LW · GW
It seems to me that I'm explaining something reasonable, and they're not understanding it because of some obvious bias, which should be apparent to them. 
But, in order for them to notice that, from inside the situation, they'd have to run the check of:
TRIGGER: Notice that the other person isn't convinced by my argument
ACTION: Hmm, check if I might be mistaken in some way. If I were deeply confused about this, how would I know?

The fact that the other person isn’t convinced by your argument is only evidence that you’re mistaken to the extent you’d expect this other person would be convinced by good arguments. For your friends and people who have earned your respect this action is a good response, but in the more general case it might be hard to get yourself to apply it faithfully because really, when the flat earther isn’t convinced are you honestly going to consider whether you’re actually the one that’s wrong?
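To make that first sentence quantitative, here is a toy Bayes update in Python. All of the numbers and the simplifying assumption (that a genuinely mistaken argument never convinces anyone) are invented purely for illustration, not taken from the discussion above:

```python
# Toy model: how much should "they weren't convinced" raise P(I'm mistaken)?
# All numbers below are invented for illustration.

def posterior_mistaken(prior_mistaken, p_convinced_if_right):
    """Bayes update on observing that the other person was not convinced.

    prior_mistaken: P(I'm mistaken) before the conversation.
    p_convinced_if_right: P(they are convinced | my argument is actually good).
    Simplifying assumption: they are never convinced when I'm actually mistaken.
    """
    p_not_convinced_if_mistaken = 1.0
    p_not_convinced_if_right = 1.0 - p_convinced_if_right
    numerator = prior_mistaken * p_not_convinced_if_mistaken
    denominator = numerator + (1.0 - prior_mistaken) * p_not_convinced_if_right
    return numerator / denominator

# A trusted friend who reliably follows good arguments not being convinced
# is strong evidence; a flat earther not being convinced is barely any.
friend = posterior_mistaken(0.10, p_convinced_if_right=0.90)        # ~0.53
flat_earther = posterior_mistaken(0.10, p_convinced_if_right=0.05)  # ~0.105
```

The same observation ("they weren't convinced") moves a 10% prior past 50% when the listener reliably updates on good arguments, and leaves it almost untouched when they don't, which is exactly the sense in which the evidence depends on what you expect of the other person.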

The more general approach is to refuse to engage in false humility/false respect and make yourself choose between being genuinely provocative and inviting (potentially accurate) accusations of arrogance or else finding some real humility. For the trigger you give, I’d suggest the tentative alternate action of “stick my neck out and offer for it to be chopped off”, and only if that action makes you feel a bit uneasy do you start hedging and wondering “maybe I’m overstepping”.

For example, maybe you’re arguing politics and they scoffed at your assertion that policy X is better than policy Y or whatever, and it strikes you as arrogant for them to just dismiss out of hand ideas which you’ve thought very hard about. You could wonder whether you’re the arrogant one, and that you really should have thought harder before presenting such scoffable ideas and asked for their expertise before forming an opinion — and in some cases that’ll be the right play. In other cases though, you can can be pretty sure that you’re not the arrogant one, and so you can say “you think I’m being arrogant by thinking I can trust my thinking here to be at least worth addressing?” and give them the chance to say “Yes”.

You can ask this question because "I'm not sure if I am being arrogant here, and I want to make sure not to overstep", but you can also ask because it's so obvious what the answer is that when you give them an opening and invite their real belief they'll have little option but to realize "You're right, that's arrogant of me. Sorry". It can't be a statement disguised as a question and you really do have to listen to their answer and take it in whatever it is, but you don't have to pretend to be uncertain of what the answer is or what they will believe it to be under reflection. "Hey, so I'm assuming you're just acting out of habit and if so that's fine, but you don't really think it's arrogant of me to have an opinion here, do you?" or "Can you honestly tell me that I'm being arrogant here?". It doesn't really matter whether you say it because "you want to point out to people when they aren't behaving consistently with their beliefs", or because "I want to find out whether they really believe that this behavior is appropriate", or because "I want to find out whether I'm actually the one in the wrong here". The important point is conspicuously removing any option you have for weaseling out of noticing when you're wrong, so that even when you are confident that it's the other guy in the wrong, should your beliefs make false predictions it will come up and be absolutely unmissable.

Comment by jimmy on Consistent Glomarization should be feasible · 2020-05-09T04:57:51.059Z · LW · GW
With close friends or rationalist groups, you might agree in advance that there's a "or I don't want to tell you about what I did" attached to every statement about your life, or have a short abbreviation equivalent to that.

This already exists, and the degree of “or I’m not telling the truth” is communicated nonverbally.

For example, when my wife was early in her pregnancy, we attended the wedding of one of her friends, and a friend noticed that she wasn't drinking "her" drink and asked "Oh my gosh, are you pregnant!?". My wife's response was to smile and say "yep" and then take a sip of beer. The reason this worked for both 1) causing her friend to conclude that she [probably] wasn't pregnant and 2) not feeling like her trust was betrayed later is that the response was given "jokingly", which means "don't put too much weight into the seriousness of this statement". A similar response could be "No, don't you think I'd have told you immediately if I were pregnant?", again, said jokingly so as to highlight the potential for "no, I suppose you might not want to share if it's that early". It still communicates "No, or else I have a good reason for not wanting to tell you".

If you want to be able to feel betrayed when their answer is misleading, you have to get a sincere sounding answer first, and “refuses to stop joking and be serious” is one way that people communicate their reluctance to give a real answer. Pushing for a serious answer after this is clear is typically seen as bad manners, and so it’s easy to go from joking around to a flat “don’t pry” when needed without seeming like you have anything to hide. Because after all, if they weren’t prying they’d have just accepted the joking/not-entirely-serious answer as good enough.

Comment by jimmy on Meditation skill: Surfing the Urge · 2020-05-08T18:16:03.995Z · LW · GW
Understand that the urge to breath is driven by the body’s desire to rid itself of carbon dioxide (CO2)--not (as some assume) your body's desire to take in oxygen (O2).

Interestingly enough, this isn't entirely true. If you get a pulse oximeter and a bottle of oxygen you can have some fun with it.

Because of the nonlinearity in the oxygen dissociation curve, oxygen saturation tends to hold pretty steady for a while and then really tank quickly, whereas CO2 discomfort builds more uniformly. In my experience, when I get that really "panicked" feeling and start breathing again, the pulse oximeter on my finger shows my saturation tank shortly after (there's a bit of a delay, which is useful here for knowing that it's not the numbers on the display causing the distress).

If it were just CO2 causing the urge to breathe, CO2 contractions and the urge to breathe should come on in the exact same way when breathing pure oxygen, and this is not the case. Instead of coming on at ~2-2.5min and being quite uncomfortable, they didn't start until four minutes and were very very mild. I've broken five minutes when I was training more, and it was psychologically quite difficult. Comparatively speaking, 5 minutes on pure O2 was downright trivial, and at 7 minutes it wasn't any harder. The only reason I stopped the experiment then is that I started feeling narcosis from the CO2 and figured I should do some more research about hypercapnia (too much CO2) before pushing further.

Along those same lines, rebreather divers sometimes drown when they pass out from hypercapnia, and while you'd think it'd be way too uncomfortable to miss, this doesn't seem to (always) be the case. In my own experiments, rebreathing a scrubberless bag of oxygen did get uncomfortable quickly, but when a blind study was done on it, five out of twenty people failed to notice within 5 minutes that no CO2 was being removed.

At the same time, a scrubbed bag with no oxygen replacement is completely comfortable even as the lights go out, so low O2 alone isn't enough to trigger that panic.

Comment by jimmy on Meditation skill: Surfing the Urge · 2020-05-08T17:57:28.209Z · LW · GW

Certainly not in any obvious way like people that suffer repeated blows to the head. There's some debate over whether loss of motor control (they call it "samba" because it's kinda like you start dancing involuntarily) can cause damage that makes it more likely to happen again in the future, but I haven't been able to find any evidence that there is any damage at all in normal training and even the former seems to be controversial.

Comment by jimmy on On Platitudes · 2020-04-22T19:36:47.911Z · LW · GW

This is a big topic and I think both slider's "Part of the problem about such tidbits of wisdom that they are about big swath of experience/information and kind of need that supporting infrastructure." and Christian's "It seems to me that the skillset towards which you are pointing is a part of hypnosis" are important parts of it. In particular, hypnotists like Milton Erickson have put a lot of time into figuring out how to best convey the felt sense that there is a big swath of experience/information in there that needs to be found, and how to give pointers in the right direction. Hypnotized people can forget their own name without understanding any of the supporting theory about how this is even possible, and religious people can live on commandments even though they do not grasp or have an ability to convey the wisdom upon which they rest. Knowing who to trust and how to believe things that one does not yet understand are very important life skills, and they don't come naturally for those of us who like to "think for ourselves".

The reason Peterson can be so powerful in how he expresses these "platitudes" is that to him they aren't platitudes. He actually did the work and developed the wisdom necessary for these things to stand on their own and not drift away as a "Yeah, nice thought, heard that before". When you see the effects of people breaking the relevant commandments enough that you start to get a gut level appreciation of what it would be like if you were to allow yourself to make that mistake, it starts to have the same intrinsic revulsion that you get when trying to eat Chinese food after it gave you food poisoning the time before. It's a different thing that way.

If you look at someone who makes a living spouting feel-good platitudes that they do not themselves live by or understand, how do they respond when challenged? How would you respond if you had tried to tell people to "clean their rooms" as if it were a solution for everything up to and including global warming, only to have BS called on you? Here's how Peterson responds. He does not falter and lose confidence. He does not back away into more platitudes to prevent engagement. He actually goes forward and begins to expound on the underpinnings of why "clean your room" is shorthand for a very important principle (in his view, at least, and mine as well) about how social activism is best done. He does it without posturing about how clean his room is and without accusing his accuser of having an unclean room herself. This part is a bit subtle as he makes no apologies for her behavior and his models do suggest unflattering motivations, but he doesn't go so far as to make it about her or about deflecting criticism from himself. He keeps his focus on the importance of cleaning one's room so that one can do good in this world and not be led astray by psychological avoidance and ignorance, and this is exactly what you would expect from someone who is actually onto something real and who means what they say. This engagement is crucial.

Even if "clean your room" isn't terribly informative or novel itself, his two minute explanation is more. Even though that's not enough, he does have books and lectures where he spells it all out in more detail. When even a book or two isn't enough, there's clearly a lifetime of experience and practice under there beyond immediate reach. You can get started with a YouTube video or a book, but back to slider's point, there's a big ass iceberg under there and you have to piece the bulk of it together yourself. The YouTube videos and books are as much an advertisement as they are a pointer. "Here are [short descriptions of] the rules he endeavors to live by, and the results are there to judge for yourself". When people see someone who practices what they preach and whose results they like at least in part, it creates that motivation to learn more of what is underneath and, in the meantime, to accept some of what they can't understand on their own when they can see that the results are there to back it up.

You can't just say "She's happier now in heaven" and expect words that are meaningless to you to convey any meaning. But when "She wouldn't have wanted you to be unhappy" is true and relevant and not just a pretense in an attempt to avoid the real hurt of real loss... because the suffering they're going through isn't just plain grieving but also beating oneself up out of some mistaken idea that it's what a "good" husband would do... then absolutely those words can be powerful. Because they actually mean something, and you would know it.

When the meaning is there, and you know it, and you are willing to engage and stand up to the potential challenges of people who might want to push away from your advice, then even simple and "non-novel" words can be a very novel and compelling thing. Because while they may have heard someone spout that platitude before, they likely have never heard anyone stand behind it and really mean it.

Comment by jimmy on Reflections on Arguing about Politics · 2020-04-14T18:51:57.126Z · LW · GW
>really want to change the other’s mind
Which is very zero-sum, and indicates that to the extent a discussion is productive for one, it's counter-productive for the other. I recommend NOT HAVING those arguments. If you're going in with goals of understanding their position, changing your own mind, or better modeling the universe (and those in it), then you might actually be productive.

Not quite. If my goal is to change your mind and I succeed, you don't lose, and therefore it's not zero-sum. If I succeed, it's probably because I'm right, or at least because in your estimation I seem more likely to be right than your old position was. This holds true even if you went into it really wanting to change my mind as well -- it would just mean that you'd have had to change your mind about whether that was a good goal once you started seeing that I might be right.

The real problem is going in not wanting to be convinced. If you do that, and keep attachment to your belief that you're right, then you're attaching a penalty to a win condition, which makes it hard to reach. So long as you go in willing and happy to be convinced, you can productively go in with the main goal being to change their mind if you expect that to be more likely than them having something to say which could change your mind. In cases where you don't already understand their position, this comes down to the same thing you describe, where you work towards goals like "understanding their position" and "changing your own mind", but when you think you already get their side then putting that to the test and seeing if you can change their mind is a very valid goal. You just come at it in a very different way when you're open to their viewpoints than when you're not.

Comment by jimmy on Hanson & Mowshowitz Debate: COVID-19 Variolation · 2020-04-14T18:34:55.690Z · LW · GW

It depends on what you mean by "unpopular". If you mean that someone is going to ignore the lives that would be saved and accuse you of being uncaring, then that's certainly true and you would need to be ready to deal with that.

On the other hand, if you mean that everyone would actually be against this idea then I think you're wrong. I've been floating the idea every time I end up in a discussion about this virus, and while my conversations can't be taken as completely representative, it's worth noting that not once have I had anyone say it's a bad idea. The most negative I've gotten was "that's interesting", and most of the other people I've talked to have said that they would do it right now and that so would a lot of their friends.

In a situation where the risks for healthy young people are low and eventual infection is likely anyway, "If people are going to get sick anyway, let them do it on their terms so that it can be as safe as possible" is not a hard argument to win, and the people who need to be convinced are likely far more sympathetic to such ideas than you think.

We just need to create the common knowledge that such ideas are thinkable and don't have to be a politically losing stance. A lot of "common knowledge" stances have turned out to be wrong and to flip overnight, and you'd be offering people a chance to be ahead of the curve and the first to jump on the winning team that saved the day. It'd have to be done deliberately and carefully, but if you do it right people will take it.

Comment by jimmy on Discussion about COVID-19 non-scientific origins considered harmful · 2020-04-06T18:37:03.456Z · LW · GW
>I linked to a Bulletin of Atomic Scientists article about why this debunked idea still keeps coming up and the harms associated with it. Just printing articles and pointing people to them wasn't enough. I don't have more to say about your specific arguments because I think they're covered pretty well by the article I linked.

That article is just a list of a bunch of opinions people have and it is nothing more than a gossip piece. Literally all it does is repeat things like:

>Mahmoud Ahmadinejad, the former Iranian president who seems unable to resist a good opportunity to propagate falsehoods (even Al-Qaida once asked him to stop making things up), also got in on the coronavirus conspiracy action. In an open letter to the UN secretary-general, he wrote that it was clear that the virus was “produced in laboratories … by the warfare stock houses of biologic war belonging to world hegemonic powers.”


>Lentzos worries that the parade of prominent figures promoting the bioweapons conspiracy theory could weaken the global taboo against possessing bioweapons—making biological weapon research appear to be widespread.

It does nothing to even begin commenting on why these ideas keep spreading, just that they are and who is spreading them. Likewise, exactly nothing in that article responds to anything I've said.

It's no wonder that linking to trash like this doesn't convince anyone. To even get started you need to be able to link to things like this. Then you need to have people who can understand why that is credible explain it to their social circle who respect them and wouldn't understand it on their own. And that means you need an army of people who are capable of empathizing with the very real concerns that these "conspiracy theorists" have instead of falling into the trap of arrogance to hide from their own difficulties in being persuasive and credible.

Yes, it's hard. Let's get to work.

Comment by jimmy on Discussion about COVID-19 non-scientific origins considered harmful · 2020-04-05T18:27:14.766Z · LW · GW

"Considered harmful" is what Wikipedia refers to as "weasel words". By whom? Why do we care what they think? It's much better to make the case directly than to attempt to weasel in a (false) sense of consensus. Doing the latter damages your credibility, and you're going to need that.

If you're concerned about conspiracy theories "failing to be debunked", what you need are honest and credible experts. The public can't evaluate the claims themselves. Heck, I'm a pretty smart guy and I can't evaluate the evidence based on looking at the genome itself or even from evaluating the object level claims of people who have. But I and many many others are smart enough to notice a big coincidence when we see one, and smart enough to know that many many people like to think it's a good idea to censor, distort, or otherwise lie about things to paint their preferred viewpoint. Honesty and openness are critical if you want to persuade anyone of anything. If you say "The bio-weapon theory has been debunked as just a conspiracy theory", what am I supposed to take from that? That you are very open to this theory and would speak publicly about all evidence you find in its favor, if only it existed? Or that you want to silence "information hazards" with weasel phrases and like to use terms like "debunked" or maybe even "conspiracy theory" to discredit ideas without even diving into whether or not they might be true? When the latter is a strong possibility, we can't just take things like "X has been debunked" on faith.

I actually don't think it's a bio-weapon and I do believe it has been debunked. But the reason for this is that when I look to people who are able to evaluate the object level claims themselves, the ones who are capable of honestly considering and stating "yep, this sure looks like a bio-weapon" are actually saying "yeah, I considered that hypothesis myself because it's a totally reasonable thing to check, but it turns out that this one looks natural (and here's why, if you want to check my work)". That is the only way you can debunk these things, since everyone can't become virology experts overnight and you can't declare yourself into credibility by fiat.

You're right that now is not the time to be starting wars, and I think there is a very persuasive case to be made for that. Fighting and posturing are last and second to last resorts respectively, and not ones we want to hastily resort to ever, let alone in difficult times where there might be a flinch to do so. It is a bad idea to pick fights and start wars, especially when the ability to cooperate globally is most important, especially especially without thinking these things through very very carefully. However, this all holds true even if it were a bio-weapon, or escaped from a lab due to negligence, or spread worldwide due to attempted cover-ups/etc. Instead of removing your voice from the conversation that will happen anyway and has to happen anyway, use your voice to say what needs to be said. "Yes, it's very reasonable to suspect that it might be a bio-weapon, and that's why we checked. It doesn't look like it is". "Yes, it is very important for people and organizations to be held accountable for their actions. It is also very important to first make sure we know what those actions actually are and to give people the benefit of the doubt both on what they did and their willingness to take responsibility voluntarily first. Now is the time to cooperate with one another to fight this pandemic. Later is the time to sort through the mistakes we made and make sure we're all working honestly to avoid repeating them for personal gain or otherwise".

Comment by jimmy on Taking Initial Viral Load Seriously · 2020-04-01T23:59:37.871Z · LW · GW
>Looking at a 50% low risk, 50% high risk scenario, we can only save 50% of what we could save if we started in a 50% high risk scenario.

I don’t think this is right.

It’s worth noting that the 14x difference between the risk for the first kid in the house and the second is a (noisy) lower bound on the degree to which the risk depends on dose. For example, this data is consistent with the toy model that the risk vs dose is a step function going from 0% to 100% at a certain point; it would just require that the kid bringing the disease into the house is 14x less likely to have crossed that threshold. With intentional inoculation we don’t know how much lower the risk can be driven, nor do we know how much of the exceptionally bad cases are simply due to exceptionally high initial dose of the virus. It’s entirely possible that even in your “50% low risk 50% high risk” scenario, all or most of the low risk people that die are because of unusually high viral loads given their risk grouping, and that with careful titration of dosage we can do much better than shifting people from one crude grouping to another.
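To make the step-function point concrete, here's a toy Monte Carlo. The dose distributions and threshold below are made up purely for illustration (they are not fit to any data): an all-or-nothing dose response can still produce a ~14x risk ratio if household contacts simply get systematically bigger doses than index cases.

```python
import random

random.seed(0)

THRESHOLD = 1.0  # hypothetical dose: infection is certain above it, impossible below it

def infected(dose):
    return dose >= THRESHOLD

# Hypothetical lognormal dose distributions: index cases pick up small,
# scattered doses outside; household contacts get prolonged close exposure.
index_doses = [random.lognormvariate(-1.65, 1.0) for _ in range(100_000)]
contact_doses = [random.lognormvariate(0.5, 1.0) for _ in range(100_000)]

p_index = sum(map(infected, index_doses)) / len(index_doses)
p_contact = sum(map(infected, contact_doses)) / len(contact_doses)
risk_ratio = p_contact / p_index  # roughly ~14x despite a pure step function
```

The point isn't that risk really is a step function, only that the household data alone can't distinguish a steep dose-response curve from a shallow one.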

I’m also quite skeptical that we should find the absence of more evidence to be particularly damning. Where would it come from? Ethics boards aren’t going to be happy about intentionally infecting people with deadly diseases, and it’s hard to get more than a very crude guess at the initial dose of cases caught “in the wild”. Furthermore, if you’re going to intentionally inject people with non-potent virus in order to build antibodies, you’d normally want to go all the way and do an actual vaccine. How many people have been thinking about what to do in case there’s a pandemic where you can’t wait for a real vaccine, and how many of them have been studying variolation? I wouldn’t think many.

To me, taking volunteers to empty cruise ships sounds like an easy and potentially big win. There are plenty of young people who aren’t concerned and aren’t at high risk to begin with, and you can offer them a lower risk (because they’re likely to get it anyway), a free party, and a way to feel like they’re helping instead of hurting other people. In return we get data, a step towards herd immunity, and workers who can safely treat other patients or run nursing homes. Once we need to scale up we can start thinking about how to triage people who want to take this risk. For now, it seems like we just want to rush to get this idea accepted and tried somewhere.

Comment by jimmy on mind viruses about body viruses · 2020-03-28T18:54:09.407Z · LW · GW

When you're dealing with a threat that doubles in size every few days, you do not have the luxury of excess caution. Inverted pendulums have an exponentially growing error as well, and no matter what you do (or don't do) to react, if your control system doesn't do it faster than the instability grows, you lose. Period. If you try to move slowly in the act of balancing, then you will fall off the tight rope no matter how sure you later become of what the right action would have been. It is fundamentally necessary to be able to react and then correct for errors later (so yes, pre-frame this in your communication so that you don't over-commit to something you will need to later change).
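A toy calculation makes the cost of delay explicit (the three-day doubling time and starting count are assumed purely for illustration):

```python
def cases(days, doubling_days=3.0, initial=100):
    """Unchecked exponential growth: cases double every `doubling_days`."""
    return initial * 2 ** (days / doubling_days)

# Waiting six extra days for certainty means facing 4x the problem;
# acting early and correcting course later is far cheaper.
ratio = cases(18) / cases(12)  # = 4.0
```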

It's also worth noting that "literally everyone on earth" only starts trying to solve the problem once they know that it's a problem, and at the time that Pueyo's first essay came out, that was absolutely not the case. At that time, I was still scrambling trying to figure out how to best leverage my credibility and communication skills to convey the exact same point about "Why you must act now" because people around me had not yet come to realize how serious an issue this was going to be. Sure, they'd have heard it without me too. But they would have heard it later with less time to act, and might not have taken it as seriously without my credibility behind it. If enough people like me took your advice they might not have heard it at all because it could be outcompeted by other less useful memes.

It's just that, as you manage to find alternative takes (perhaps by credentialed experts, perhaps not) that find flaws in the memes you've been spreading, you spread those too. I would say "correct for your mistakes" except that it's not even a "mistake" necessarily, just "a clarification of oversimplification" or "the next control input given the newest estimate of the state".

As we get deeper into this mess and people start mobilizing, then "do something in this general direction!" becomes less important. At some point you have to wonder whether the pendulum has swung too far, or if we need to be acting in a different direction, or something else. When everyone in the world is thinking about it we now have a very different problem, and instead of simply requiring an ability to take back-of-the-envelope models seriously when they are outside the "socially accepted reality", you actually need more detailed analyses.

Still, public opinion will need to get on board with whatever is necessary, and in the absence of your input the memes don't just stop and wait for science, and neither does the coronavirus. If you try to say "but I can pick nits! This isn't credentialed and perfect!" and try to replace useful first steps with inaction, then you blow your credibility and with it your ability to help shape things for the better. Let's not do that.

Yes, it is important to not initiate or signal-boost bad information at the expense of good information. Yes, it is important to look for people who are (actually) experts. But it's also important to provide a path from the real experts to the layfolk, since that doesn't and cannot happen on its own. The public in general not only can't evaluate the object level arguments about epidemiology and must defer to authority, they can't even evaluate object level arguments about who is the real authority -- that's why you get antivaxxers listening to crackpots. It's appeals to authority (mixed in with justifications) all the way up. If you can't create the best ideas but you can distinguish between the best ideas and those which merely look good to the untrained eye, it is your job to pass the best ideas down to those who are less able to make that distinction. If you can't make that distinction yourself but you can at least distinguish between people who can and posers, then it is your job as the next link in the chain to pass this information from those more able to discern to those who are less able to discern than you. This goes all the way down to the masses watching the news, and you better hope you can get the news to get their shit together. I still know people who are in denial because mainstream news told them to be and then failed to appropriately correct for their earlier mistakes. Let's work to fix that.

Exponential memetic spread does not pathology make. Yes, it's possible for overactive or mistargeted immune systems to fail to prevent things or to do more harm than good. Yes, Dunning-Kruger applies and humility is as necessary as ever. However, so is the courage to be bold and to take action when it is called for instead of hiding in false humility. This "intellectual curve" is a part of our collective immune response to an actual virus which is killing people and threatening to kill exponentially more. Do not flatten the wrong curve. Find a role that allows you to guide it in the right direction, and then guide.

Comment by jimmy on Authorities and Amateurs · 2020-03-26T21:34:25.905Z · LW · GW

Here's my answer:

There is an important distinction between "object level arguments" and "appeals to authority". Contrary to how it's normally spoken about, appeal to authority is not really fallacious and is at times absolutely necessary. If I am unable to parse the object level arguments myself, I have to defer to experts. The only issue is whether I have the self awareness and integrity to say "I'm not capable of evaluating this myself, so unfortunately I have to defer to the people I trust to get these things right. Maybe you're right and I'm just not smart enough to see it". However, this must ground out somewhere. If you listen to people who only appeal to authority (whether it is their own or others') and there are never any attempts to ground things in object level arguments, then there is nothing this trust is founded on, and so your beliefs can float away with no connection to reality.

What I do is take into consideration all object level arguments which I am not personally qualified to evaluate, and then weigh my trust in the various "authorities" based on how capable they seem in actually getting into the object level and making at least as much sense as the people they're arguing against. As it applies here, the amateurs linked to actually got into the object level and made very plausible sounding arguments. I didn't see any major holes in the main premise, even if I could pick less important nits. I never saw any credentialed authority engaging in the object level and making even plausibly correct counterarguments which negated the main point of these amateur models. There were a lot of "don't worry, nothing to see here", but there weren't any that were backed up by concrete models that didn't have visible holes.

The people I'm going to listen to (regardless of how capable I personally am of evaluating the object level arguments) are those who 1) have been willing to stick their neck out and make actual arguments, and 2) haven't had their neck chopped off by people pointing out identifiable mistakes in ways that are either personally verifiable or agreed upon by a more compelling network of "authority".

I think this heuristic worked pretty well in this case.

Comment by jimmy on Advice on reducing other risks during Coronavirus? · 2020-03-25T17:28:25.422Z · LW · GW

I'm not so sure the recommendation for walking over driving holds up. According to the CDC "Per trip, pedestrians are 1.5 times more likely than passenger vehicle occupants to be killed in a car crash."

Comment by jimmy on [deleted post] 2020-03-23T06:42:53.441Z

Strong disagree. Anyone who knows how to operate their weapon and is willing to use it is a formidable threat to all but the most trained and determined invaders. The level of accuracy needed to hit a man-sized target inside a house with a long gun is really low. Low enough that if you miss, the problem isn’t that you aren’t yet skilled in the art of aiming, it’s that you didn’t make sure to aim at all before you pulled the trigger.

The bigger barrier is psychological. If you can’t get yourself to take deliberate aim at another human and pull the trigger knowing what will happen, then a firearm might not be useful. If you can do that, though, the mechanics won't be a problem except in the most difficult cases.

Comment by jimmy on March Coronavirus Open Thread · 2020-03-20T19:23:15.627Z · LW · GW

Right, I got that it was them doing the math correction, not you. Still, they did the math and gave an age breakdown of the passengers, and a crude sanity check gives a number within about 30% of what they report.