Posts

Meetup: Garden Grove meetup 2012-05-15T02:17:14.042Z
26 March 2011 Southern California Meetup 2011-03-20T18:29:16.231Z
October 2010 Southern California Meetup 2010-10-18T21:28:17.651Z
Localized theories and conditional complexity 2009-10-19T07:29:34.468Z
How to use "philosophical majoritarianism" 2009-05-05T06:49:45.419Z
How to come up with verbal probabilities 2009-04-29T08:35:01.709Z
Metauncertainty 2009-04-10T23:41:52.946Z

Comments

Comment by jimmy on The reverse Goodhart problem · 2021-06-09T21:23:38.570Z · LW · GW


This is one of the times it helps to visualize things to see what's going on.


Let's take target shooting as an example, since it's easy to picture and makes for a good metaphor. The goal is to get as close as possible to the bullseye, and for each inch you miss by, you score one less point. Visually, you see a group of concentric "rings" around the bullseye which score fewer and fewer points as they get bigger. Simplifying to one dimension for a moment, V = -abs(x).

However, it's not easy to point the rifle right at the bullseye. You do your best, of course, and it's much much closer to the bullseye than any random orientation would be, but maybe you end up aiming one inch to the right, and the more accurate your ammo is, the closer you get to this aim point of x=1. This makes U = -abs(1-x), or -abs(1-x)+constant or whatever. It doesn't really matter, but if we pick -abs(1-x)+1, then U = V when you miss sufficiently far to the left, so it fits nicely with your picture.

When we plot U, V, and 2U-V, we can see that your mathematical truth holds, and it looks immediately suspicious. Going back to two dimensions, instead of having nice concentric rings around the actual target, you're pointing out that if the bullseye had instead been placed exactly where you ended up aiming, and if the rings were distorted and non-concentric in just the right way, then this redrawn V would actually increase twice as fast as U.
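For anyone who wants to see the picture, here's a minimal plotting sketch of that one-dimensional version (just the toy functions defined above, plotted with numpy/matplotlib, nothing beyond them):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 4, 500)
V = -np.abs(x)            # true utility: bullseye at x = 0
U = -np.abs(1 - x) + 1    # proxy: aimed one inch to the right, shifted so U = V far to the left

plt.plot(x, V, label="V")
plt.plot(x, U, label="U")
plt.plot(x, 2 * U - V, label="2U - V")
plt.xlabel("x (inches from the bullseye)")
plt.legend()
plt.show()
```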

But this sorta misses the point. For one, the absolute scaling is fairly meaningless in the first place, since it brings you towards the same place anyway; more importantly, you don't get the luxury of drawing your bullseye after you shoot. If you had been aiming for V' in the first place, you almost certainly wouldn't have managed to pull off a proxy as perfect as U. (In general V' and U don't have to line up in the exact same spot like this, but in those cases you still wouldn't have happened to miss V' in this particular way.)


Goodhart has nothing to do with human values being "funny"; it has to do with the fundamental difficulty of setting your sights in just the right place. Once you're within roughly the distance between your proxy and your actual goal, it's no longer guaranteed that getting closer to the proxy gets you closer to your goal, and it can actually bring you further away -- and if it brings you further away, that's bad. If you did a good job on all axes, maybe you end up hitting the 9 ring and that's good enough.

The thing that makes it "inevitable disaster" rather than just "suboptimal improvement" is when you forget to take a whole dimension into account. Say, you aim your rifle well in azimuth and elevation, but instead of telling the bullet to stop at a certain distance, you tell it to keep going in that direction forever, and it manages to succeed well beyond the target range.

Comment by jimmy on For mRNA vaccines, is (short-term) efficacy really higher after the second dose? · 2021-05-02T17:07:09.026Z · LW · GW

(Technical point: the phase 3 trials were still randomized controlled trials; they just weren't double-blind. But double-blind is the relevant characteristic when asking whether the different results are due to partying Israelis, so that's fine.)

 

Yeah, the part I was objecting to there was "the placebo group was given a fake injection and everything". Not only did they do far less than "everything" that is supposed to go with giving fake injections, they also failed to give me a fake injection! My second "placebo" was a real vaccine and my dad's second "vaccine" was a placebo!

Comment by jimmy on For mRNA vaccines, is (short-term) efficacy really higher after the second dose? · 2021-04-29T02:22:13.181Z · LW · GW

Shame on them for misreporting. It was not double-blind.

I wouldn't put it past this guy to not have known anyway, but he was 2 for 2 in accidentally hinting at the right thing (one vaccine, one placebo).

Comment by jimmy on For mRNA vaccines, is (short-term) efficacy really higher after the second dose? · 2021-04-25T23:07:41.513Z · LW · GW

The issue is that this number captures efficacy starting on the day you receive the vaccine (or sometimes 7 days later)

 

Do you know how the efficacy on a given day is defined? I'm assuming it's going by the date of first reporting symptoms (because you can't always know when the exposure was), but it makes a big difference if you're thinking "when am I safe to expose myself to covid?".

But it's equally important to note that the phase 3's were true randomized controlled trials - the placebo group was given a fake injection and everything

 

I did the Pfizer phase 3 trial, and this isn't really true. 

The side effects are clear enough that without an active placebo, calling it "blind" is kinda a joke in the first place. On top of that, people in the waiting room were talking about how you can tell if you're getting the real vaccine by looking at the syringe. And on top of that, the doctor who gave me the injection basically told me that I got the real thing ("Keep wearing your mask, we don't know yet if these work"), and said something equally revealing to at least one other person I know who did the trial.

Comment by jimmy on Homeostatic Bruce · 2021-04-20T17:53:29.835Z · LW · GW

Could you clarify how those things are selected for in training? I am actually struggling to imagine how they could be selected for in a BUD/S context — so sharing would be helpful!

(Army special forces, not SEALs)

Scrupulosity: They had some tough navigation challenges where they were presented with opportunities to attempt to cheat, such as using flashlights or taking literal shortcuts, and several were weeded out there.

Reliability: They had peer reviews, where the people who couldn't work well with a team got booted. Depends on what exactly you mean by "reliability", but "we can't rely on this guy" sounded like a big part of what people got dinged for there.

"Viewing life as a series of task- oriented challenges" seems like a big part of the attitude that my friend had that helped him do very well there, even if a lot of it comes through as persistence. Some of it is significantly different though, like in SERE training where the challenge for him wasn't so much "don't quit" so much as it was "Stop giving your 'captors' attitude you dummy. Play to win.".

I'm confused — it sounds like your friend enjoyed effects both to that magnitude and in that direction. Am I misunderstanding?

Yeah, that was poorly explained, sorry about that. The "magnitude" is less than it seems at a glance for a couple of reasons. He wasn't a "pot smoking slacker" because he lacked motivation to do anything; he was a "pot smoking slacker" because he didn't have respect for the games he was expected to play. When you looked at him as a 12-year-old kid, you wouldn't have pictured him joining the military and waking up early with a buzz cut and saying "Yes sir!". But when you hear he joined the special forces in particular, it's not "Wow! To think he could grow up to excel and take things seriously!", it's "Hm. The military aspect is a bit of a twist, but it makes sense. Kid's definitely the right kind of crazy".

He was always a bit extreme, it's just that the way it came out changed -- and the military training was at least as much an effect of the change as it was a cause. It didn't come out in studying hard for straight As in college or anything that externally obvious, but there were some big changes before he joined the military. For example, he ended up deciding that there was something to the Christian values his parents tried (seemingly in vain) to instill in him, and took a hard swing from being kinda a player to deciding that he wasn't going to have sex again until he found the woman he was going to marry and have children with (I laughed at him at the time, but he was right).

The reason I say "not necessarily in that direction" is that they weren't simply trying to push in a consistent direction to maximize traits they deemed desirable. One of the things they told him they liked about the results of his personality test was that he had a bit of a rebellious "fuck authority" streak -- but also that in his case, he should probably tone it down a bit because he was over the top (and he seemed to agree). The only part of the training I can think of that's directly relevant to this is the SERE thing, and that was more of "At least learn what it's like to be obedient when you need to be" than anything else (and certainly wasn't "do it unthinkingly as a terminal good").

Also, if he did enjoy such effects as you describe, do you have any hypotheses for the mechanism? Given that such radical changes are quite rare naturally, we'd expect there to be something at play here right?


I feel like a lot of the changes have to do with "growing up and figuring out what he wants to do with his life", and a lot of the rest following more or less naturally from valuing things differently once he knew what he was actually aiming for and what it was going to take. If you wanted to run marathons for a living, and you had to run a marathon in a certain time in order to qualify for the job, "how much of a runner you are" would probably change overnight because you would train in anticipation.

That's not to say that the training itself wasn't necessary or didn't exert more force too. There's a particular moment he told me about when things were approaching maximum shittiness. He was somewhat hurt from earlier training, carrying more than his share of the weight, already fatigued with much left to do, no guarantee of success and all that, and to top it off it started raining unexpectedly. It's moments like that which are hard to properly prepare for in advance, and which really make you question your choices and whether this is actually what you want to do with your life. Because it's not just a test you have to pass to get a comfy job; that is what the job is. So the question the training shoved his nose in and forced him to answer honestly was "This is what the job you're asking for is really like. Do you want this?". At the point he realized that, he started laughing, because for him the answer was "Yes. I want this miserable shit".

I think the mechanism is best understood as giving people a credible and tangible requirement to grapple with so they can't fail to motivate themselves and can't fail to understand what's needed -- and of course, selecting only for the people who can make it through. Throw someone in the same training camp when they don't want to be there, and I don't think you get positive results. Take people who can't meet requirements and I think you're likely to end up teaching the wrong thing there too. But if your whole culture enforces "No dating until you wear the bullet ant gloves without whining", then I think you get a bunch of men who can handle physical pain without breaking down because there was never a choice to not suck it up and figure it out.

Comment by jimmy on Homeostatic Bruce · 2021-04-13T04:57:05.243Z · LW · GW

I suspect it would only really be compelling to those who personally witnessed the rapid shift in personality consequent to elite military training in an acquaintance.

 

I kinda fit that. I know someone who went from a "pot smoking slacker" to "elite and conscientious SOF badass", which kinda looks like what you're talking about from afar. 

However, my conclusions from actually talking to him about it all before, during (iirc?), and after are very different. The training seems to be very very much about selection, everyone who got traumatized was weeded out, and things like "reliable, and scrupulous, viewing life as a series of task-oriented challenges" were all selected for.

The training did have some effects, but not to that magnitude, not by that mechanism, and not necessarily even in that direction.

Comment by jimmy on On Changing Minds That Aren't Mine, and The Instinct to Surrender. · 2021-03-14T19:46:57.947Z · LW · GW

Originally, I had earned a reputation on the server for my patience, my ability to defuse heated disagreements and give everyone every single chance to offer a good reason for why they held their positions. That slowed and stopped. I got angrier, ruder, more sarcastic, less willing to listen to people. Why should I, in the face of a dozen arguments that ended without any change?  [...] What’s the point of getting people mad? [...] what’s the point in listening to someone who might be scarcely better than noise


This seems like the core of it right here.

You started out decently enough when you had patience and were willing to listen, but your willingness to listen was built on expectations of how easily people would change their minds or offer up compelling reason to change your own, and those expectations aren't panning out. You don't want to have to give up on people -- especially those who set out to become rational -- being able to change their minds, or on your own ability to be effective in this way. Yet you can't help but notice that your expectations aren't being fulfilled, and this brings up some important questions on what you're doing and whether it's even worth it. You don't want to "just give up", yet you're struggling to find room for optimism, and so you're finding yourself just frustrated and doing neither "optimism" nor "giving up" well.

Sound like a fair enough summary?

There is, of course, the classic solution: Get stronger. If I could convince them I was right or get convinced that they’re right, that would nicely remove the dissonance.

The answer is in part this, yes.

It is definitely possible to intentionally steer things such that people either change your mind or change their own. It is not easy.

It is not easy in two different ways. One is that people's beliefs generally aren't built on the small set of "facts" that they give to support them. They're built on a whole lot more than that, a lot of it isn't very legible, and a lot of the time people aren't very aware of or honest about what their beliefs are actually built on. This means that even when you're doing things perfectly* and making steady progress towards converging on beliefs, it will probably take longer than you'd think, and this can be discouraging if you don't know to expect it.

The other way it's hard is that you have to regulate yourself in tricky ways. If you're getting frustrated, you're doing something wrong. If you're getting frustrated and not instantly pivoting to the direction that alleviates the frustration, you're doing that wrong too. It's hard to even know what direction to pivot sometimes. Getting this right takes a lot of self-observation and correction so as to train yourself to balance the considerations better and better. Think of it as a skill to be trained.

* "Perfectly" as in "Not wasting motion". Not slipping the clutch and putting energy into heat rather than motion. You might still be in the wrong gear. Even big illegible messes can be fast when you can use "high gear" effectively. In that case it's Aumann agreement about whose understanding to trust how far, rather than conveying the object level understanding itself.

And, of course, there is the psychological option- just get over it.

The answer is partly this too, though perhaps not in the way you'd think.

It's (usually) not about just dropping things altogether, but rather about integrating the unfortunate information into your worldview so that it stops feeling like an alarm and starts feeling like a known issue to be solved.

Hardly seemed appropriate to be happy about anything when it came to politics. Everyone is dreadfully underinformed, and those with the greatest instincts towards kindness and systemic changes may nevertheless cause great harm

This, for example, isn't an "Oh, whatever, NBD". You know how well things could go if people could be not-stupid about things. If people could listen to each other, and could say things worth listening to. If people who were all about "kindness" knew they had to do their homework and ensure good outcomes before they could feel good about themselves for being "kind". And you see a lot of not that. It sucks.

It's definitely a problem to be solved rather than one to be accepted as "just how things are". However, it is also currently how things are, and it's not the kind of problem that can be solved by flinching at it until it no longer exists to bother us -- the way we might be able to flinch away from a hot stove and prevent "I'm burning" from being a true thing we need to deal with.

We have to mourn the loss of what we thought we had, just as we have to when someone we care about doesn't get the rest of the life we were hoping for. There's lots of little "Aw, and that means this won't get to happen either", and a lot of "But WHY?" until we've updated our maps and we're happy that we're no longer neglecting to learn lessons that might come back to bite us again.

Some people aren't worth convincing, and aren't worth trying to learn from. It's easier to let those slide when you know exactly what you're aiming for, and what exact cues you'd need to see before it'd be worth your time to pivot.


With Trump in office, I struggled to imagine how anyone could possibly change their view. If you like him, any argument against him seems motivated by hatred and partisanship to the point of being easily dismissed. If you don’t, then how could you possibly credit any idea or statement of himself or his party as worthwhile in the face of his monumental evils.

Let's use this for an example.

Say I disagreed with your take on Trump because I thought you liked him too much. I don't know you and you don't know me, so I can't rest on having built a reputation for not being a hateful partisan and instead thinking things through. With that in mind, I'd probably do my best to pace where you're coming from. I'll show you exactly how cool all of the cool things Trump has done are (or on the other side, exactly how uncool all the uncool things are), and when I'm done, I'll ask you if I'm missing anything. And I'll listen. Maybe I'm actually missing something about how (un)cool Trump is, even if I think it's quite unlikely. Maybe you'll teach me something about how you (and people like you) think, and maybe I care about that -- I am choosing to engage with you, after all.

After I have proven to your satisfaction that I not only get where you're coming from but also don't downplay the importance of what you see at all, do you really believe that you'd still see me as "a hateful partisan" -- or, on the other side, as "too easily looking past Trump's monumental evils"? If you do slip into that mode of operation, and I notice and stop to address it with an actual openness to seeing why you might see me that way, do you think you'd be able to keep holding the "he's just a hater" frame without kinda noticing to yourself that you're wrong about this, and without your ability to keep hold of that pretense weakening if it keeps happening?

Or do you see it as likely that you might be curious about how I can get everything you do, not dismiss any of it, and still think you're missing something important? Might you even consider it meaningful that I don't come to the same conclusion before you understand what my reasoning is well enough that I'd sign off on it?

You still probably aren't going to flip your vote in a twenty minute conversation, but what if it were more? Do you think you could hang out with someone like that for a week without weakening some holds on things you were holding onto for less than fully-informed-and-rational reasons? Do you think that maybe, if the things you were missing turned out to be important and surprising enough, you might even change your vote despite still hating all the things about the other guy that you hated going in?


The question is just whether the person is worth the effort. Or perhaps, worth practicing your skills with.

Comment by jimmy on Making Vaccine · 2021-02-06T19:10:51.513Z · LW · GW

Being as charitable as the facts allow is great. Starting to shy away from some of the facts so that one can be more charitable than they allow isn't.

The whole point is that this moderator's actions aren't justifiable. If they had a "/r/neoliberal isn't the place for medicine, period" stance, that would be justifiable. If the mod deleted the post and said "I don't know how to judge these well so I'm deleting it to be safe, but it's important if true so please let me know why I should approve it", then that would be justifiable as well, even if he ultimately made the wrong call there too.

What that mod actually did, if I'm reading correctly, is to make an active claim that the link is "misinformation" and then ban the person who posted it without giving any avenue to be proven wrong. Playing doctor by asserting truths about medical statements when one is not competent or qualified to do so, getting it wrong when getting it wrong is harmful, and then shutting down the avenues where your mistakes can be shown, is not justifiable behavior. It's shameful behavior, and that mod ought to feel very bad about themselves until they correct their mistakes and stop harming people out of their own hubris. The charity that there is room for is along the lines of "Maybe the line about misinformation was an uncharitable paraphrase rather than a direct quote" and "Hey, everyone makes mistakes, and even mistakes of hubris can be atoned for" -- not justifying the [if the story is what it seems to be] clearly very bad behavior itself.

Comment by jimmy on Making Vaccine · 2021-02-04T19:45:35.463Z · LW · GW

I think this is inaccurately charitable. It's never the case that a moderator has "no way" to know whether it checks out or not. If "Hey, this sounds like it could be dangerous misinfo, how can I know it's not so that I can approve your post?" is too much work and they can't tell the good from bad within the amount of work they're willing to put in, then they are a bad moderator -- at least, with respect to this kind of post. Even if you can't solve all or even most cases, leaving a "I could be wrong, and I'm open to being surprised" line on all decisions is trivial and can catch the most egregious moderation failures.

Maybe that's acceptable from a neoliberal moderator since it's not the core topic, but the test is "When confronted with evidence that they can correctly evaluate as showing them to have been wrong, do they say 'oops!' and update accordingly, or do they double down and make excuses for doing the wrong thing and not update". I don't know the mod in question, but the former answer is the exception and the latter is the rule. If the rejection note was "Medical stuff isn't allowed because I'm not qualified to sort the good from the bad", then I'd say "fair enough". But actively claiming "Spreading dangerous misinfo!" is rarely done with epistemic humility out of necessity and almost always done out of the kind of epistemic hubris that has gotten us into this mess by denying that there's an upcoming pandemic, denying that masks work and are important, and now denying that we can and should dare to vaccinate in ways that deviate from the phase 3 clinical trials. This kind of behavior is hugely destructive and is largely the result of enabled laziness, so it's really not something we ought to be making excuses for.

Comment by jimmy on Covid 1/28: Muddling Through · 2021-01-29T18:13:21.483Z · LW · GW

Running some quick numbers on that Israel "Stop living in fear after being vaccinated" thing: it looks like Israel's current 7-day average is about 8000 cases/day, so with a population of 9 million we should expect about 110 cases/day out of 125k vaccinated people if vaccines did nothing and people didn't change their behavior. What they actually got was 20... over what time period? Vaccines clearly work to a wonderful extent, but is it really to the "Don't think twice about going out partying and then visiting immunocompromised and unvaccinated grandma" level?
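For reference, here's the back-of-the-envelope arithmetic behind that 110/day figure, taking the quoted Israeli numbers at face value rather than re-checking them:

```python
cases_per_day = 8_000        # Israel's 7-day average, as quoted above
population = 9_000_000
vaccinated = 125_000

# Expected cases/day among the vaccinated group if vaccines did nothing
# and behavior didn't change.
print(cases_per_day * vaccinated / population)  # ~111
```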

On an unrelated note, aren't these mRNA vaccines supposed to produce a lot more antibodies than a COVID infection does? Shouldn't that show up on a COVID antibody test? Because in my experience they did not.

Comment by jimmy on Everything Okay · 2021-01-24T20:13:17.458Z · LW · GW

"G" fits my own understanding best: "Not Okay" is a generalized alarm state, and the ambiguity is a feature, not a bug.

(Generally) we have an expectation that things are supposed to be "Okay" so when they're not, this conflict is uncomfortable and draws attention to the fact that "something is wrong!". What exactly it takes to provoke this alarm into going off depends on the person/context/mindset because it depends on (what they realize) they haven't already taken into account, and that's kinda the point. For example, if you're on a boat and notice that you're on a collision course with a rock you might panic a bit and think "We have to change course!!!", which is an example of "things not being okay". However, the driver might already see the rock and is Okay because the "trajectory" he's on includes turning away from the rock so there's no danger. And of course, other passengers may be in Okay Mode because they fail to see the rock or because they kinda see the rock but they are averse to being Not Okay and therefore try to ignore it as long as possible.

In that light, "Everything is Okay" is reassurance that the alarm can be dismissed. Maybe it's because the driver already sees the rock. Maybe it's because our "boat" is actually a hovercraft which will float right over the rock without issue. Maybe we actually will hit the rock, but there's nothing we can do to not hit the rock, and the damages will be acceptable. Getting people back into Okay Mode is an exercise in getting people to believe that one of these is true, and you don't necessarily have to specify which one if they trust you; if the details are important, that's what the rest of the conversation is for.

The best way to get the benefits of ‘okay’ in avoiding giant stress balls, while still retaining the motivation to act and address problems or opportunities is to "just" engage with the situation without holding back.

Okay, so we're headed for a rock, now what? If that's alarming then it's alarming. Are we actually going to hit it if we simply dismiss the alarm and go back to autopilot? If so, would that be more costly than the cost of the stress needed to avert it? What can we actually do to stop it? Can we just talk to the driver? Is that likely to work?

If that's likely to work and you're on track to do that, then "can we sanely go back to autopilot?" can evaluate as "yes" again and we can go back to Okay Mode -- at least, until the driver doesn't listen and we no longer expect our autopilot to handle the situation satisfactorily. You get to go back to Okay Mode as soon as you've taken the new information into account and gotten back on a track you're willing to accept over the costs of stressing more.


"The Kensho thing", as I see it, is the recognition that these alarms aren't "fundamental truths" where the meaning resides. They're momentary alarms that call for the redirection of one's attention, and the ultimate place that everything resolves to after doing your homework and integrating all the information is back to a state which calls for no alarms. That's why it's not "nothing matters, everything is equally good" or "you'll feel good no matter what once you're enlightened" -- it's just "Things are okay,  on a fundamental level alarms are not called for, behaviors are, and it's my job to figure out which. If I'm not okay with them that signals a problem with me in that I have not yet integrated all the information available and gotten back on my best-possible-track". So when your friend dies or you realize that humanity is going to be obliterated, it's not "Lol, that's fine", it's room to keep a drive to not only do something about it, a drive to stare reality in the face as much as you can manage, to regulate how much you stare at painful truths so that you keep your responses productive, and a desire to up one's ability to handle unpleasant conflict.

 How should one react to those who are primarily optimizing for being in Okay Mode at the expense of other concerns

Fundamentally, it's a problem of aversion to unpleasant conflict. Sometimes they won't actually see the problem here so it can be complicated by their endorsement of avoidance, but even in those cases it's probably most productive to ignore their own narratives and instead directly address the thing that's causing them to want to avoid.

Shoving more reasons to be Not Okay in their face is likely to trigger more avoidance, so instead of arguing "Here's how closing your eyes means you're more likely to fail to avoid the rock, and therefore kill everyone. Can you imagine how unfun drowning will be?" (which I would expect to lead to more rationalizations/avoidance), I'd focus on helping them be comfortable. More "Yeah, it's super unfun for things to be Not Okay, and I can't blame you for not wanting to do it more than necessary"/"Yes, it's super important to be able to regulate one's own level of Okayness, since being an emotional wreck often makes things worse, and it's good that you don't fail in that way".

Of course, you don't want to just make them comfortable staying in Okay Mode, because then there's no motivation to switch. So when there's a little more room to introduce unpleasant ideas without causing folding, you can place a little less emphasis on "it's good that you don't fail in that way" and a little more on how completely avoiding stress isn't ideal or consequence-free either.

It's a bit of a balancing act, and more easily said than done. You have to be able to pull off sincerity when you reassure them that you get where they're coming from and that what they're doing actually is better than the failure mode they're afraid of, and do it without "Not Okaying" at them by pushing "It's Not Okay that you feel Okay!". It's a lot easier when you can be Okay that they're in Okay Mode because they're Not Okay with being Not Okay, partially just because externalizing one's alarms as a flinch is rarely the most helpful way of doing things. But also because if you're Okay, you can "go first" and give them a proof of concept and reference example for what it looks like to stare at the uncomfortable thing (or uncomfortable things in general) and stay in Okay Mode. It helps them know "Hey, this is actually possible", and feel like you might even be able to help them get closer to it.


or those who are using Okay as a weapon?

Again, I'd just completely disregard their narratives on this one. They're implying that if you're Not Okay, then it's a "you problem". So what? Make sure they're wrong and demonstrate it.

"God, it's just a little fib. Are you okay??"

"Not really. I think honesty about these kinds of things is actually extremely important, and I'm still trying to figure out where I went wrong expecting not to have that happen"

Or

"Yeah, no, I'm fine. I just want to make sure that these people know your history when deciding how much to trust you".

Comment by jimmy on In Defense of Twitter's Decision to Ban Trump · 2021-01-12T02:04:11.836Z · LW · GW

"Content moderation" is not always a bad thing, but you can't jump directly from "Content moderation can be important" to "Banning Trump, on balance, will not be harmful". 

The important value behind freedom of association is not in conflict with the important value behind freedom of speech, and it's possible to decline to associate with someone without it being a violation of the latter principle. If LW bans someone because they're [perceived to be] a spammer that provides no value to the forum, then there's no freedom of speech issue. If LW starts banning people for proposing ideas that are counter to the beliefs of the moderators because it's easier to pretend you're right if you don't have to address challenging arguments, then that's bad content moderation and LW would certainly suffer for it.

The question isn't over whether "it's possible for moderation to be good", it's whether the ban was motivated in part or full by an attempt to avoid having to deal with something that is more persuasive than Twitter would like it to be. If this is the case, then it does change the ultimate point.

What would you expect the world to look like if that weren't at all part of the motivation? 

What would you expect the world to look like if it were a bigger part of the motivation than Twitter et al would like to admit?

Comment by jimmy on Motive Ambiguity · 2020-12-16T06:58:37.757Z · LW · GW

The world would be better if people treated more situations like the first set of problems, and less situations like the second set of problems. How to do that?

 

It sounds like the question is essentially "How to do hard mode?".

On a small scale, it's not super intimidating. Just do the right thing and take your spouse to the place you both like. Be someone who cares about finding good outcomes for both of you, and marry someone who sees it. There are real gains here, and with the annoyance you save yourself by not sacrificing for the sake of showing sacrifice, you can maintain motivation to sacrifice when the payoff is actually worth it -- and to find opportunities to do so. When you can see that you don't actually need to display that costly signal, it's usually a pretty easy choice to make.

Forging a deeper and more efficient connection does require allowing potential for conflict so that you can distinguish yourself from the person who is only doing things for shallow/selfish reasons. Distinguish yourself by showing willingness to entertain such accusations, knowing that the truth will show through. Invite those conflicts when you have enough slack to turn it into play, and keep enough slack that you can. "Does this dress make my ass look fat?" -- can you pull off "The *dress* doesn't, no" and get a laugh, or are you stuck where there's only one acceptable answer? If you can, demonstrate that it's okay to suggest the "unthinkable" and keep poking until you can find the edge of the envelope. If not, or when you've reached the point where you can't, then stop and ask why. Address the problem. Rinse and repeat with the next harder thing, as you become ready to.

On a larger scale, it gets a lot harder. You can no longer afford to just walk away from anyone who doesn't already mostly get it, and you don't have as much time and attention to work with. There are things you can do, and I don't want to suggest that it's "not doable". You can start to presuppose the framings that you've worked hard to create and justify in the past, using stories from past experience and social proof to support them in the cases where you're challenged -- which might be less often than you think, since the ability to presuppose such things without preemptively flinching defensively can be powerful subcommunication. You can start to build social groups/communities/institutions to scale these principles, and spread to the extent that your extra ability to direct motivation towards good outcomes allows you to out-compete the alternatives.

I just don't get the impression that there's any "easy" answer. If you want people to donate to your political campaign even though you won't play favorites like the other guy will, I think you genuinely have to be able to expect that your donors will be more personally rewarded by the larger total pie and by the recognition of doing the right thing than they would be in the alternative where they donate to have someone fight to give them more of a smaller pie -- and are perceived however you let that be perceived.
 

Comment by jimmy on Number-guessing protocol? · 2020-12-07T18:30:34.454Z · LW · GW

This answer is great because it takes the problem with the initial game (one person gets to update and the other doesn't) and restores the symmetry by allowing both players to update. The end result shows who is better at Aumann updating and should get you closer to the real answer.

If you'd rather know who has the best private beliefs to start with, you can resolve the asymmetry in the other direction and make everyone commit to their numbers before hearing anyone else's. This adds a slight bit of complexity if you can't trust the competitors to be honest, but it's easily solved by either paper/pencil or everyone texting their answer to the person who is going to keep their phone in their pocket and say their answer first.

Comment by jimmy on Covid 11/19: Don’t Do Stupid Things · 2020-11-20T19:38:04.829Z · LW · GW

The official recommendations are crazy low. Zvi's recommendation here of 5000IU/day is the number I normally hear from smart people who have actually done their research. 

The RCT showing vitamin D to help with covid used quite a bit. This converter from mg to IU suggests that the dose is at least somewhere around 20k on the first day and a total of 40k over the course of the week. The form they used (calcifediol) is also more potent, and if I'm understanding the following comment from the paper correctly, that means the actual number is closer to 200k/400k. (I'm a bit rushed on this, so it's worth double checking here)

In addition, calcifediol is more potent when compared to oral vitamin D3 [43]. In subjects with a deficient state of vitamin D, and administering physiological doses (up to 25 μg or 1000 IU daily), approximately 1 in 3 molecules of vitamin D appears as 25OHD; the efficacy of conversion is lower (about 1 in 10 molecules) when pharmacological doses of vitamin D/25OHD are used. [42]
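Here's the rough conversion chain behind those numbers, taking the 20k/40k IU figures above at face value and reading the quoted passage as roughly a 10x potency factor for calcifediol (both assumptions worth double checking against the paper):

```python
UG_PER_IU = 0.025                    # standard equivalence: 1 IU of vitamin D3 = 0.025 ug

day1_iu, week_iu = 20_000, 40_000    # the IU figures estimated above from the trial's dosing
print(day1_iu * UG_PER_IU / 1000)    # ~0.5 mg of D3-equivalent on day 1

# Reading "1 in 10 molecules converted" vs. giving 25OHD directly as a ~10x potency factor:
print(day1_iu * 10, week_iu * 10)    # ~200k / ~400k IU-equivalent
```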

I've always been confused why the official recommendations for vitamin D are so darn low, but it seems that there might be an answer that is fairly straightforward (and not very flattering to those coming up with the recommended values). It looks like it might be a simple conflation between the "standard error of the mean" and the "standard deviation" of the population itself.

Comment by jimmy on Simpson's paradox and the tyranny of strata · 2020-11-20T17:21:50.012Z · LW · GW

(If you're worried about the difference being due to random chance, feel free to multiply the number of animals by a million.)

[...]

They vary from these patterns, but never enough that they are flying the same route on the same day at the same time at the same time of year. If you want to compare, you can group flights by cities or day or time or season, but not all of them.

 

The problem you're using Simpson's paradox to point at does not have this same property of "multiplying the size of the data set by arbitrarily large numbers doesn't help". If you can keep taking data until random chance is no issue, then they will end up having sufficient data in all the same subgroups, and you can just read the correct answer off the last million times they both flew in the same city/day/time/season simultaneously.

The problem you're pointing at fundamentally boils down to not having enough data to force your conclusions, and therefore needing to make judgement calls about how important season is compared to time of day, so that you can determine when conditioning on more factors will help relevance more than it will hurt by adding noise.

Comment by jimmy on Covid 9/24: Until Morale Improves · 2020-09-24T19:43:12.034Z · LW · GW

Hypothetically, what would the right response be if you noticed that one of the main vaccine trials has really terrible blinding (e.g. participants are talking about how to tell whether you get the placebo in the waiting room)?

It seems like it would really mess up the data, probably resulting in the people who got the vaccine taking extra risk and leading the study to understate the effectiveness. Ideally, "tell the researchers" would be the obvious right answer, but are there perverse incentives at play that make the best response something else?

If I didn’t have people thanking me every week for doing these, it would be difficult to keep going.

Thanks Zvi. The effort is definitely appreciated.

Comment by jimmy on Covid 9/10: Vitamin D · 2020-09-11T03:46:41.824Z · LW · GW
There were 50 patients in the treatment group. None were admitted to the ICU. There were 26 patients in the control group. Half of them, 13 out of 26, were admitted to the ICU. So 13/26 vs. 0/50.

That's not what the paper says:

Of 50 patients treated with calcifediol, one required admission to the ICU (2%),

The conclusions still hold, of course.

Comment by jimmy on Do you vote based on what you think total karma should be? · 2020-08-26T18:31:39.220Z · LW · GW

Adjusting in the other direction seems useful as well. If someone Strong Upvotes ten times less frequently than average I would want to see their strong upvote as worth somewhat more.

Comment by jimmy on Do you vote based on what you think total karma should be? · 2020-08-24T17:32:49.958Z · LW · GW

Voting based on current karma is a good thing.

Without that, a post that is unanimously barely worth upvoting will get an absurd number of upvotes, while another post which is recognized as earth-shatteringly important by 50% of voters will fail to stand out. Voting based on current karma gives you a measure of the *magnitude* of people's like for a comment as well as the direction, and you don't want to throw that information out.

If everyone votes based on what they think the total karma should be, then a post's karma reflects [a weighted average of opinions on what the post's total karma should be] rather than [a weighted average of opinions on the post].

This isn't true.

If people vote based on what the karma should be, the final value you get is the median of what people think the karma should be -- i.e. a median of people's opinion of the post. If you force people to ignore the current karma, you don't actually get a weighted average of opinions on the post because there's very little flexibility in how strongly you upvote a post. In order to get that magnitude signal back, you'd have to dilute your voting with dither, and while that will no doubt happen to some extent (people might be too lazy to upvote slightly-good posts, but will make sure to upvote great ones), you will get an overestimate of the value of slightly-good posts.

This is bad, because the great posts hold a disproportionate share of the value, and we very much want them to rise to the top and stand out above the rest.
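To make that concrete, here's a toy simulation of the "vote toward what you think the total should be" rule. The target distribution is made up: most voters think the post is worth a few points, a minority think it's hugely important.

```python
import random

random.seed(0)
targets = [3] * 80 + [50] * 20   # hypothetical opinions of what the karma "should be"

karma = 0
for _ in range(10_000):          # a long stream of arriving voters
    t = random.choice(targets)
    if karma < t:
        karma += 1               # "should be higher" -> upvote
    elif karma > t:
        karma -= 1               # "should be lower" -> downvote

print(karma)   # hovers near the median target (~3-4), nowhere near the mean (~12.4)
```

The minority who think the post is earth-shattering can only drag the total a point or two above the median, which is exactly the magnitude information getting thrown out.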

Comment by jimmy on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-23T07:15:46.234Z · LW · GW
You are very much in the minority if you want to abolish norms in general.

There's a parallel here with the Fifth Amendment's protection from self-incrimination making it harder to enforce laws, and laws being good on average. This isn't paradoxical, because the Fifth Amendment doesn't make it equally difficult to enforce all laws. Actions that harm other people tend to leave other kinds of evidence that can be used to convict. If you murder someone, the body is proof that someone has been harmed and the DNA in your van points towards you being the culprit. If you steal someone's bike, you don't have to confess in order to be caught with the stolen bike. On the other hand, things that stay in the privacy of your own home with consenting adults are *much* harder to acquire evidence for if you aren't allowed to force people to testify against themselves. They're also much less likely to be things that actually need to be sought out and punished.

If it were the case that one coherent agent were picking all the rules with good intent, then it wouldn't make sense to create rules that make enforcement of other rules harder. There isn't one coherent agent picking all the rules and intent isn't always good, so it's important to fight for meta rules that make it selectively hard to enforce any bad rules that get through.

You can try to argue that preventing blackmail isn't selective *enough* (or that it selects in the wrong direction), but you can't just equate blackmail with "norm enforcement [applied evenly across the board]".

Comment by jimmy on What counts as defection? · 2020-07-16T06:27:15.059Z · LW · GW
I actually don't think this is a problem for the use case I have in mind. I'm not trying to solve the comparison problem. This work formalizes: "given a utility weighting, what is defection?". I don't make any claim as to what is "fair" / where that weighting should come from. I suppose in the EGTA example, you'd want to make sure eg reward functions are identical.

This strikes me as a particularly large limitation. If you don't have any way of creating meaningful weightings of utility between agents then you can't get anything meaningful out. If you're allowed to play with that free parameter then you can simply say "I'm not a utility monster, this genuinely impacts me more than you [because I said so!]" and your actual outcomes aren't constrained at all.
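A tiny made-up example of how that free parameter swallows the conclusion: rescale one player's utilities and the "total utility maximizing" outcome flips.

```python
# Hypothetical payoffs: outcome -> (my utility, your utility). Numbers are invented.
outcomes = {"split evenly": (5, 5), "I take most": (8, 1)}

def best(weights):
    """Return the outcome maximizing the weighted sum of utilities."""
    return max(outcomes, key=lambda o: sum(w * u for w, u in zip(weights, outcomes[o])))

print(best((1, 1)))  # 'split evenly' (total 10 vs 9)
print(best((2, 1)))  # 'I take most'  (total 17 vs 15), just by my claiming double weight
```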

Defection doesn't always have to do with the Pareto frontier - look at PD, for example. (C,C), (C,D), (D,C) are usually all Pareto optimal. 

That's why I talk about "in the larger game" and use scare quotes on "defection". I think the word has too many different connotations and needs to be unpacked a bit.

The dictionary definition, for example, is:

A lack; a failure; especially, failure in the performance of duty or obligation.
n. The act of abandoning a person or a cause to which one is bound by allegiance or duty, or to which one has attached himself; a falling away; apostasy; backsliding.
n. Act of abandoning a person or cause to which one is bound by allegiance or duty, or to which one has attached himself; desertion; failure in duty; a falling away; apostasy; backsliding.

This all fits what I was talking about, and the fact that the options in the prisoner's dilemma are traditionally labeled "Cooperate" and "Defect" doesn't mean they fit the definition. It smuggles in these connotations when they do not necessarily apply.

The idea of using tit for tat to encourage cooperation requires determining what one's "duty" is and what "failing" this duty is, and "doesn't maximize total utility" does not actually work as a definition for this purpose because you still have to figure out how to do that scaling.

Using the Pareto frontier allows you to distinguish between cooperative and non-cooperative behavior without having to make assumptions/claims about whose preferences are more "valid". This is really important for any real world application, because you don't actually get those scalings on a silver platter, and therefore need a way to distinguish between "cooperative" and "selfishly destructive" behavior as separate from "trying to claim a higher weight to one's own utility".

Comment by jimmy on What counts as defection? · 2020-07-13T18:33:20.963Z · LW · GW

As others have mentioned, there's an interpersonal utility comparison problem. In general, it is hard to determine how to weight utility between people. If I want to trade with you but you're not home, I can leave some amount of potatoes for you and take some amount of your milk. At what ratio of potatoes to milk am I "cooperating" with you, and at what level am I a thieving defector? If there's a market down the street that allows us to trade things for money then it's easy to do these comparisons and do Coasian payments as necessary to coordinate on maximizing the size of the pie. If we're on a deserted island together it's harder. Trying to drive a hard bargain and ask for more milk for my potatoes is a qualitatively different thing when there's no agreed upon metric you can use to say that I'm trying to "take more than I give".


Here is an interesting and hilarious experiment about how people play an iterated asymmetric prisoner's dilemma. The reason it wasn't more pure cooperation is that, due to the asymmetry, there was a disagreement between the players about what was "fair". AA thought JW should let him hit "D" some fraction of the time to equalize the payouts, and JW thought that "C/C" was the right answer to coordinate towards. If you read their comments, it's clear that AA thinks he's cooperating in the larger game, and that his "D"s aren't anti-social at all. He's just trying to get a "fair" price for his potatoes, and he's mistaken about what that is. JW, on the other hand, is explicitly trying to use his Ds to coax AA into cooperation. This conflict is better understood as a disagreement over where on the Pareto frontier ("at which price") to trade than it is about whether it's better to cooperate with each other or defect.

In real life problems, it's usually not so obvious what options are properly thought of as "C" or "D", and when trying to play "tit for tat with forgiveness" we have to be able to figure out what actually counts as a tit to tat. To do so, we need to look at the extent to which the person is trying to cooperate vs trying to get away with shirking their duty to cooperate. In this case, AA was trying to cooperate, and so if JW could have talked to him and explained why C/C was the right cooperative solution, he might have been able to save the lossy Ds. If AA had just said "I think I can get away with stealing more value by hitting D while he cooperates", no amount of explaining what the right concept of cooperation looks like will fix that, so defecting as punishment is needed.

In general, the way to determine whether someone is "trying to cooperate" vs "trying to defect" is to look at how they see the payoff matrix, and figure out whether they're putting in effort to stay on the Pareto frontier or to go below it. If their choice shows that they are being diligent to give you as much as possible without giving up more themselves, then they may be trying to drive a hard bargain, but at least you can tell that they're trying to bargain. If their chosen move is conspicuously below (their perception of) the Pareto frontier, then you can know that they're either not-even-trying, or they're trying to make it clear that they're willing to harm themselves in order to harm you too.
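As a sketch of that test, here's a check for "conspicuously below the (perceived) Pareto frontier" on a made-up payoff table; the numbers are just the standard PD payoffs, not anything from the post:

```python
# outcome -> (their payoff, my payoff), as they see the game
payoffs = {"C/C": (3, 3), "C/D": (0, 5), "D/C": (5, 0), "D/D": (1, 1)}

def pareto_dominated(outcome):
    """True if some other outcome is at least as good for both and better for one."""
    a, b = payoffs[outcome]
    return any(x >= a and y >= b and (x, y) != (a, b) for x, y in payoffs.values())

for o in payoffs:
    print(o, "below the frontier" if pareto_dominated(o) else "on the frontier")
# Only D/D is dominated (by C/C): the kind of conspicuously sub-Pareto choice that
# reads as "not even trying to cooperate" rather than "driving a hard bargain".
```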

In games like real life versions of "stag hunt", you don't want to punish people for not going stag hunting when it's obvious that no one else is going either and they're the one expending effort to rally people to coordinate in the first place. But when someone would have been capable of nearly assuring cooperation if they did their part and took an acceptable risk when it looked like it was going to work, then it makes sense to describe them as "defecting" when they're the one that doesn't show up to hunt the stag because they're off chasing rabbits.

"Deliberately sub-Pareto move" I think is a pretty good description of the kind of "defection" that means you're being tatted, and "negligently sub-Pareto" is a good description of the kind of tit to tat.

Comment by jimmy on Noise on the Channel · 2020-07-04T17:47:07.105Z · LW · GW

To the extent that the underlying structure doesn't matter and can't be used, I agree that technically non-random "noise" behaves similarly and that this can be a reasonable use of the term. My objection to the term "noise" as a description of conversational landmines isn't just that they're "technically not completely random", but that the information content is actually important and relevant. In other words, it's not noise, it's signal.

The "landmines" are part of how their values are actually encoded. It's part of the belief structure you're looking to interact with in the first place. They're just little pockets of care which haven't yet been integrated in a smooth and stable way with everything else. Or to continue the metaphor, it's not "scary dangerous explosives to try to avoid", it's "inherently interesting stores of unstable potential energy which can be mined for energetic fuel". If someone is touchy around the subject you want to talk about, that is the interesting thing itself. What is in here that they haven't even finished explaining to themselves, and why is it so important to them that they can't even contain themselves if you try to blow past it?

It doesn't even require slow and cautious approach if you shift your focus appropriately. I've had good results starting a conversation with a complete stranger who was clearly insecure about her looks by telling her that she should make sure her makeup doesn't come off because she's probably ugly if she's that concerned about it. Not only did she not explode at me, she decided to throw the fuse away and give me a high bandwidth and low noise channel to share my perspective on her little dilemma, and then took my advice and did the thing her insecurity had been stopping her from doing.

The point is that you only run into problems with landmines as noise if you mistake landmines for noise. If your response to the potential of landmines is "Gah! Why does that unimportant noise have to get in the way of what I want to do!? I wonder if I can get away with ignoring them and marching straight ahead", then yeah, you'll probably get blowed up if you don't hold back. On the other hand, if your response is closer to "Ooh! Interesting landmine you got here! What happens if I poke it? Does it go off, or does the ensuing self reflection cause it to just dissolve away?", then you get to have engaging and worthwhile high bandwidth low noise conversations immediately, and you will more quickly get what you came for.

Comment by jimmy on Noise on the Channel · 2020-07-02T18:14:24.673Z · LW · GW

I think it's worth making a distinction between "noise" and "low bandwidth channel". Your first examples of "a literal noisy room" or "people getting distracted by shiny objects passing by" fit the idea of "noise" well. Your last two examples of "inferential distance" and "land mines" don't, IMO.

"Noise" is when the useful information is getting crowded out by random information in the channel, but land mines aren't random. If you tell someone their idea is stupid and then you can't continue telling them why because they're flipping out at you, that's not a random occurrence. Even if such things aren't trivially predictable in more subtle cases, it's still a predictable possibility and you can generally feel out when such things are safe to say or when you must tread a bit more carefully.

The "trying to squeeze my ideas through a straw" metaphor seems much more fitting than "struggling to pick the signal out of the noise floor" metaphor, and I would focus instead on deliberately broadening the straw until you can just chuck whatever's on your mind down that hallway without having to focus any of your attention on the limitations of the channel.

There's a lot to say on this topic, but I think one of the more important bits is that you can often get the same sense of "low noise conversation" if you pivot from focusing on ideas which are too big for the straw to focusing on the straw itself, and how its limitations might be relaxed. This means giving up on trying to communicate the object level thing for a moment, but it wasn't going to fit anyway so you just focus on what is impeding communication and work to efficiently communicate about *that*. This is essentially "forging relationships" so that you have the ability to communicate usefully in the future. Sometimes this can be time consuming, but sometimes knowing how to carry oneself with the right aura of respectability and emotional safety does wonders for the "inferential distance" and "conversational landmines" issues right off the bat.

When the problem is inferential distance, the question comes down to what extent it makes sense to trust someone to have something worth listening to over several inferences. If our reasonings differ several layers deep then offering superficial arguments and counterarguments is a waste of time because we both know that we can both do that without even being right. When we can recognize that our conversation partner might actually be right about even some background assumptions that we disagree on, then all of a sudden the idea of listening to them describe their world view and looking for ways that it could be true becomes a lot more compelling. Similarly, when you can credibly convey that you've thought things through and are likely to have something worth listening to, they will find themselves much more interested in listening to you intently with an expectation of learning something.

When the problem is "land mines", the question becomes whether the topic is one where there's too much sensitivity to allow for nonviolent communication and whether supercritical escalation to "violent" threats (in the NonViolent Communication sense) will necessarily displace invitations to cooperate. Some of the important questions here are "Am I okay enough to stay open and not lash out when they are violent at me?" and the same thing reflected towards the person you're talking to. When you can realize "No, if they snap at me I'm not going to have an easy time absorbing that" you can know to pivot to something else (perhaps building the strength necessary for dealing with such things), but when you can notice that you can brush it off and respond only to the "invitation to cooperate" bit, then you have a great way of demonstrating for them that these things are actually safe to talk about because you're not trying to hurt them, and it's even safe to lash out unnecessarily before they recognize that it's safe. Similarly, if you can sincerely and without hint of condescension ask the person whether they're okay or whether they'd like you to back off a bit, often that space can be enough for them to decide "Actually, yeah. I can play this way. Now that I think about it, its clear that you're not out to get me".

There's a lot more to be said about how to do these things exactly and how to balance between pushing on the straw to grow and relaxing so that it can rebuild, but the first point is that it can be done intentionally and systematically, and that doing so can save you from the frustration of inefficient communication and replace it with efficient communication on the topic of how to communicate efficiently over a wider channel that is more useful for everything you might want to communicate.

Comment by jimmy on Fight the Power · 2020-06-25T03:33:36.012Z · LW · GW

In general, if you're careful to avoid giving unsolicited opinions you can avoid most of these problems even with rigid ideologues. You wouldn't inform a random stranger that they're ugly just because it's true, and if you find yourself expressing or wishing to express ideas which people don't want to hear from you, it's worth reflecting on why that is and what you are looking to get out of saying it.

Comment by jimmy on [deleted post] 2020-06-17T03:32:11.597Z

I think I get the general idea of the thing you and Vaniver are gesturing at, but not what you're trying to say about it in particular. I think I'm less concerned though, because I don't see inter-agent value differences and the resulting conflict as some fundamental, inextricable part of the system.

Perhaps it makes sense to talk about the individual level first. I saw a comment recently where the person making it was sorta mocking the idea of psychological "defense mechanisms", because "*obviously* evolution wouldn't select for those who 'defend' from threats by sticking their heads in the sand!" -- as if the problem of wireheading were as simple as competition between a "gene for wireheading" and a gene against. Evolution is going to select for genes that make people flinch away from injuring themselves with hot stoves. It's also going to select for people who cauterize their wounds when necessary to keep from bleeding out. Designing an organism that does *both* is not trivial. If sensitivity to pain is too low, you get careless burns. If it's too high, you get refusal to cauterize. You need *some* mechanism to distinguish between effective flinches and harmful flinches, and a way to enact mostly the former. "Defense mechanisms" arise not out of mysterious propagation of fitness reducing genes, but rather the lack of solution to the hard problem of separating the effective flinches from the ineffective -- and sometimes even the easiest solution to these ineffective flinches is hacked together out of more flinches, such as screaming and biting down on a stick when having a wound cauterized, or choosing to take pain killers.

The solution of "simply noticing that the pain from cauterizing a serious bleed isn't a *bad* thing and therefore not flinching from it" isn't trivial. It's *doable*, and to be aspired to, but there's no such thing as "a gene for wise decisions" that is already "hard coded in DNA".

Similarly, society is incoherent and fragmented and flinches and cooperates imperfectly. You get petty criminals and cronyism and censorship of thought and expression, and all sorts of terrible stuff. This isn't proof of some sort of "selection for shittiness" any more than it is to notice individual incoherence and the resulting dysfunction. It's not that coherence is impossible or undesirable, just that you're fighting entropy to get there, and succeeding takes work.

The desire to eat marshmallows succeeds more if it can cooperate and willingly lose for five minutes until the second marshmallow comes. The individual succeeds more if they are capable of giving back to others as a means to foster cooperation. Sometimes the system is so dysfunctional that saying "no thanks, I can wait" will get you taken advantage of, and so the individually winning thing is impulsive selfishness. Even then, the guy failing to follow through on promises of second marshmallows likely isn't winning by disincentivizing cooperation with him, and it's likely more of a "his desire to not feel pain is winning, so he bleeds" sort of situation. Sometimes the system really is so dysfunctional that not only is it winning to take the first marshmallow, it's also winning to renege on your promises to give the second. But for every time someone wins by shrinking the total pie and taking a bigger piece, there's an allocation of the more cooperative pie that would give this would-be-defector more pie while still having more for everyone else too. And whoever can find these alternatives can get themselves more pie.
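To make that last claim concrete, here is a toy numeric sketch (all numbers invented purely for illustration): whenever defection shrinks the total pie enough, there is a split of the larger cooperative pie that pays the would-be defector more than defecting did and still leaves more for everyone else.

```python
# Toy sketch with made-up numbers: if cooperation grows the pie enough,
# there is always a split that beats defection for everyone, including
# the would-be defector.

defection_pie = 10          # total value produced when the defector grabs what they can
defector_share = 0.6        # defector takes the biggest slice of the smaller pie
defector_payoff = defector_share * defection_pie          # 6.0
others_payoff = defection_pie - defector_payoff           # 4.0

cooperative_pie = 20        # total value produced when everyone cooperates
defector_offer = defector_payoff + 1                      # 7.0, strictly better for the defector
others_offer = cooperative_pie - defector_offer           # 13.0, strictly better for the rest

assert defector_offer > defector_payoff
assert others_offer > others_payoff
print(f"defector: {defector_payoff} -> {defector_offer}, others: {others_payoff} -> {others_offer}")
```

The point of the sketch is only that whenever defection shrinks the total, there is slack left over to buy the defector's cooperation and still leave everyone else better off; finding and negotiating that allocation is the hard part.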

I don't see negative sum conflict between the individual and society as *inevitable*, just difficult to avoid. It's negotiation that is inevitable, and done poorly it brings lossy conflict. When Vaniver talks about society saying "shut up and be a cog", I see a couple things happening simultaneously to one degree or another. One is a dysfunctional society hurting themselves by wasting individual potential that they could be profiting from, and would love to if only they could see how and implement it. The other is a society functioning more or less as intended and using "shut up and be a cog" as a shit test to filter out the leaders who don't have what it takes to say "nah, I think I'll trust myself and win more", and lead effectively. Just like the burning pain, it's there for a reason and how to calibrate it so that it gets overridden at only and all the right times is a bit of an empirical balancing act. It's not perfect as is, but neither is it without function. The incentive for everyone to improve this balancing is still there, and selection on the big scale is for coherence.

And as a result, I don't really feel myself being pulled between "respect society's stupid beliefs/rules" and "care about other people". I see people as a combination of *wanting* me to pass their shit tests and show them a better replacement for their stupid beliefs/rules, being afraid and unsure of what to do if I succeed, and selfishly trying to shrink the size of the pie so that they can keep what they think will be the bigger piece. As a result, it makes me want to rise to the occasion and help people face new and more accurate beliefs, and also to create common knowledge of defection when it happens and rub their noses in it to make it clear that those who work to make the pie smaller will get less pie. Sometimes it's more rewarding and higher leverage to run off and gain some momentum by creating and then expanding a small bubble where things actually *work*, but there's no reason to go from "I can't yet be effective in the broader community because I can't yet break out of their 'cog' mold for me, so I'm going to focus on the smaller community where I can" to "fuck them all". There's still plenty of value in reengaging when capable, and pretending there isn't is not the good, functional thing we're striving for. It's not like we can *actually* form a bubble and reject the outside world, because the outside world will still bring you pandemics and AI, and from even a selfish perspective there's plenty of incentive to help things go well for everyone.

Comment by jimmy on Simulacra Levels and their Interactions · 2020-06-16T05:49:37.392Z · LW · GW
Whereas, if things are too forsaken, one loses the ability to communicate about the lion at all. There is no combination of sounds one can make that makes people think there is an actual lion across an actual river that will actually eat them if they cross the river.

Hm. This sounds like a challenge.

How about this:

Those "popular kids" who keep talking about fictitious "lions" on the other side of the river are actually losers. They try to pretend that they're simply "the safe and responsible people" and pat themselves on the back over it, but really they're just a bunch of cowards who wouldn't know what to do if there were a lion, and so they can't even look across the river and will just shame you for being "reckless" if you doubt the existence of lions that they "just know" are there. I hate having to say something that could lump me with these deplorable fools, and never before has there actually been a lion on the other side of the river, but this time there is. This time it's real, and I'm not saying we can't cross if need be, but if we're going to cross we need to be armed and prepared.

I can see a couple potential failure modes. One is if "Those guys are just crying wolf, but I am legit saving you [and therefore am cool in the way they pretend they are]" itself becomes a cool kid thing to say. The other is that if your audience is motivated to see you as "one of them" to the point of being willing to ignore the evidence in front of them, they will do so despite you having credibly signaled that this is not true. Translating to actual issues I can think of, I think it would mostly actually work though.

It becomes harder if you think those guys are actually cool, but that shouldn't really be a problem in practice. Either a) there actually has been a lion every single time it is claimed, in which case it's kinda hard for "there's a lion!" to indicate group membership because it's simply true. Or b) they've actually been wrong, in which case you have something to distance yourself from.

If the truth is contentious and even though there has always been a lion, they've never believed you, then you have a bigger problem than simply having your assertions mistaken for group membership slogans; you simply aren't trusted to be right. I'd still say there's things that can be done there, but it does become a different issue.

Comment by jimmy on [deleted post] 2020-06-11T19:05:14.154Z
I described what happened to the other post here.

Thanks, I hadn't seen the edit.

I'm having the same dilemma right now where my genuine comments are getting voted into the negative and I'm starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up peoples time with content they think is low quality (yes yes, I know that that doesn't mean it is low quality per se, but it is a close enough heuristic that I'm mostly willing to stick to it). But the downvotes are very clear so while I'm disappointed that we couldn't talk through this issue, I will no longer be eating up peoples time.

The only comments of yours that I see downvoted into the negative are the two prior conversations in this thread. Were there others that are now positive again?

While I generally support the idea that it's better to stop posting than to continue to post things which will predictably end up net negative in karma, I don't think that's necessary here. There's plenty of room on LW for things other than curated posts sharing novel insights, and I think working through one's own curiosity can be good not just for the individual in question, but also for any other lurkers who might have the same curiosities, and for the community, as bringing people up to speed is an important part of helping them learn to interact best with the community.

I think the down votes are about something else which is a lot more easily fixable. While I'm sure they were genuine, some of your comments strike me as not particularly charitable. In order to hold a productive conversation, people have to be able to build from a common understanding. The more work you put in to understanding where the other person is coming from and how it can be a coherent and reasonable stance to hold, the less effort it takes for them to communicate something that is understood. At some point, if you don't put enough effort in you start to miss valid points which would have been easy for you to find and would be prohibitively difficult to word in a way that you wouldn't miss.

As an example, you responded to Richard_Kenneway as if he thought you were lying despite the fact that he explicitly stated that he was not imputing any dishonesty. I'm not sure whether you simply missed that part or whether you don't believe him, but either way it is very hard to have a conversation with someone that doesn't engage with points like this at least enough to say why they aren't convinced. I think, with a little more effort put into understanding how your interlocutors might be making reasonable, charitable, and valid points, you will be able to avoid the down votes in the future. That's not to say that you have to believe that they're being reasonable/charitable/etc, or that you have to act like you do, but it's nice to at least put in some real effort to check and give them a chance to show when they are. Because the tendency for people to err on the side of "insufficiently charitable" is really really strong, and even when the uncharitable view is the correct one (not that common on LW), the best way to show it is often to be charitable and have it visibly not fit.

It's a very common problem that comes up in conversation, especially when pushing into new territory. I wouldn't sweat it.

Comment by jimmy on [deleted post] 2020-06-11T18:11:24.513Z
I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I'm working around enough of it for this to still be useful.]

This is a really cool declaration. It doesn’t bleed through in any obvious way, but thanks for letting me know and I’ll try to be cautious of what I say and how I say it. Lemme know if I’m bumping into anything or if there’s anything I could be doing differently to better accommodate.

I think you're interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.

I’m not really sure what you mean here, but I can address what you say below. I’m not sure if it’s related?

“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms?

Depends on how you go about it and what type of risk you’re trying to avoid. When I first started playing with this stuff I taught someone how to “turn off” pain, and in her infinite wisdom she used this new ability to make it easier to be stubborn and run on a sprained ankle. There’s no foolproof solution to make this never happen (in my infinite wisdom I’ve done similar things even with the pain), but the way I go about it now is explicitly mindful of the risks and uses that to get more reliable results. With the swelling, for example, part of my indignant reaction was “it doesn’t have to swell up, I just won’t move it”.

When you’ve seen something happen with your own eyes multiple times, I think that’s beyond the level where you should be foolish for thinking that it might be possible. When you see that the thing that is stopping other people from doing it too is ignorance of the possibility rather than an objection that it shouldn’t be done, then “thinking it through and making your reasoned best guess” isn’t going to be right all the time, but according to your own best guess it will be right more often than the alternative.

Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.

It seems that this bit is your main concern?

It can be a real concern. More than once I’ve had people express concern about how it has become harder to relate with their old friends after spending a lot of time with me. It’s not because of stuff like “I can consciously prevent a lot of swelling, and they don’t know how to engage with that” but rather because of stuff like “it’s hard to be supportive of what I now see as clearly bad behavior that attempts to shirk reality to protect feelings and inevitably ends up hurting everyone involved”. In my experience, it’s a consequence of being able to see the problems in the group before being able to see what to do about it.

I don’t seem to have that problem anymore, and I think it’s because of the thought that I’ve put into figuring out how to actually change how people organize their minds. Saying “here, let me use math and statistics to show you why you’re definitely completely wrong” can work to smash through dumb ideas, but then even when you succeed you’re left with people seeing their old ideas (and therefore the ideas of the rest of their social circle) as “dumb” and hard to relate to. When you say “here, let me empathize and understand where you’re coming from, and then address it by showing how things look to me”, and go out of your way to make their former point of view understandable, then you no longer get this failure mode. On top of that, by showing them how to connect with people who hold very different (and often less well thought out) views than you, it gives them a model to follow that can make connecting with others easier. My friend in the above example, for instance, went from sort of a “socially awkward nerd” type to someone who can turn that off and be really effective when she puts her mind to it. If someone is depressed and not even his siblings can get him to talk, he’ll still talk to her.

If there’s a group of people you want to be able to relate to effectively, you can’t just dissociate off into your own little world where you give no thought to their perspectives, but neither can you just melt in and let your own perspective become that social consensus, because if you don’t retain enough separation that you can at least have your own thoughts and think about whether they might be better and how best to merge them with the group, then you’re just shirking your leadership responsibilities, and if enough people do this the whole group can become detached from reality and led by whoever wants to command the mob. This doesn’t tend to lead to great things.

Does that address what you’re saying?

Comment by jimmy on [deleted post] 2020-06-10T20:08:21.060Z

It's not an attack, and I would recommend not taking it as one. People make that mistake all the time, and there's no shame in that. Heck, maybe I'm even wrong and what I'm perceiving as an error actually isn't faulty. Learning from mistakes (if it turns out to be one) is how we get stronger.

I try to avoid making that mistake, but if you feel like I'm erring, I would rather you be comfortable pointing out what you see instead of fearing that I will take it as an attack. Conversations (philosophical and otherwise) work much more efficiently this way.

I'm sorry if it hasn't been sufficiently clear that I'm friendly and not attacking you. I tried to make it clear by phrasing things carefully and using a smiley face, but if you can think of anything else I can do to make it clearer, let me know.

Secondly I would also like to hear an actual counterargument to the argument I made

Which one? The "it was only studying IBS" one was only studying IBS, sure. It still shows that you can do placebos without deception in the cases they studied. It's always going to be "in the cases they've studied" and it's always conceivable that if you only knew to find the right use of placebos to test, you'll find one where it doesn't work. However, when placebos work without deception in every case you've tested, the default hypothesis is no longer "well, they require deception in every case except these two weird cases that I happen to have checked". The default hypothesis should now be "maybe they just don't require deception at all, and if they do maybe it's much more rare than I thought".

I'm not sure what point the existence of nocebo makes for you, but the same principles apply there too. I've gotten a guy to punch a cactus right after he told me "don't make me punch the cactus" simply by making him expect that if I told him to do it he would. Simply replace "because drugs" with "because of the way your mind works" and you can do all the same things and more.

I'm not sure how many more times I'll be willing to address things like this though. I'm willing to move on to further detail of how this stuff works, or to address counterarguments that I hadn't considered and are therefore surprisingly strong, but if you still just don't buy into the general idea as worth exploring then I can agree to disagree.

And thirdly I have never deleted a comment, but you appear to have double posted, shall I delete one of them?

Yeah, it didn't submit properly the first time and then didn't seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I'd have deleted one if I could have.

Speaking of deleting things, what happened to your other post?

Comment by jimmy on [deleted post] 2020-06-10T07:49:50.487Z

There's no snark in my comment, and I am entirely sincere. I don't think you're going to get a good understanding of this subject without becoming more skeptical of the conclusions you've already come to and becoming more curious about how things might be different than you think. It simply raises the barrier to communication high enough so as to make reaching agreement not worthwhile. If that's not a perspective you can entertain and reason about, then I don't think there's much point in continuing this conversation.

If you can find another way to convey the same message that would be more acceptable to you, let me know.

Comment by jimmy on [deleted post] 2020-06-08T19:30:29.622Z

1) Isomorphic to my "what if you know you'll do something stupid if you learn that your girlfriend has cheated on you" example. To reiterate, any negative effects of learning are caused by false beliefs. Prioritize over which way you're going to be wrong until you become strong enough to just not be predictably wrong, sure. But become stronger so that you can handle the truths you may encounter.

2) This clearly isn't a conflict between epistemic and instrumental rationality. This is a question about arming your enemies vs not doing so, and the answer there is obvious. To reiterate what I said last time, this stuff all falls apart once you realize that these are two entirely separate systems both with their own beliefs and values and you posit that the subsystem in control is not the subsystem that is correct and shares your values. Epistemic rationality doesn't mean giving your stalker your new address.

3) "Unfortunately studies have shown that in this case the deception is necessary, and the placebo effect won't take hold without it". This is assuming your conclusion. It's like saying "Unfortunately, in my made up hypothetical that doesn't actually exist, studies have shown that some bachelors are married, so now what do you say when you meet a married bachelor!". I say you're making stuff up and that no such thing exists. Show me the studies, and I'll show you where they went wrong.

You can't just throw a blanket over a box and say "now that you can no longer see the gears, imagine that there's a perpetual motion machine in there!" and expect it to have any real world significance. If someone showed me a black box that put out more energy than went into it and persisted longer than known energy storage/conversion mechanisms could do, I would first look under the box for any shenanigans that a magician might try to pull. Next I would measure the electromagnetic energy in the room and check for wireless power transfer. Even if I found none of those, I would first expect that this guy is a better magician than I am anti-magician, and would not begin to doubt the physics. Even if I became assured that it wasn't magician trickery and it really wasn't sneaking energy in somehow, I would then start to suspect that he managed to build a nuclear reactor smaller than I thought possible, or otherwise discovered new physics that makes this possible. I would then proceed to tear the box apart and find out what assumptions I'm missing. At the point where it became likely that it wasn't new physics but rather incorrect old physics, I would continually reference the underlying justifications of the laws of thermodynamics and see if I could start to see how one of the founding assumptions could be failing to hold.

Not until I had done all that would I even start to believe that it is genuinely what it claims to be. The reasons to believe in the laws of thermodynamics are simply so much stronger than the reason to believe people claiming to have perpetual motion machines that if your first response isn't to challenge the hypothetical hard, then you're making a mistake.

"Knowing more true things without knowing more false things leads to worse results by the values of the system that is making the decision even when the system is working properly" is a similarly extraordinary claim that calls for extraordinary evidence. The first thing to look for, besides a complete failure to even meet the description, is for false beliefs being smuggled in. In every case you've given, it's been one or the other of these, and that's not likely to change.

If you want to challenge one of the fundamental laws of rationality, you have to produce a working prototype, and it has to be able to show where the founding assumptions went wrong. You can't simply cast a blanket over the box and declare that it is now "possible" since you "can't see" that it's impossible. Endeavor to open black boxes and see the gears, not close your eyes to them and deliberately reason out of ignorance. Because when you do, you'll start to see the path towards making both your epistemic and your instrumental rationality work better.

4) Throw it away like all spam. Your attention is precious, and you should spend it learning the things that you expect to help you the most, not about seagulls. If you want though, you can use this as an exercise in becoming more resilient and/or about learning about the nature of human psychological frailty.

It's worth noticing though, that you didn't use a real world example and that there might be reasons for this.

5) This is just 2 again.

6) Maybe? As stated, probably not. There are a few different possibilities here though, and I think it makes more sense to address them individually.

a) The torture is physically damaging, like peeling one's skin back or slowly breaking every bone in one's body.

In this case, obviously not. I'm also curious what it feels like to be shot in the leg, but the price of that information is more than I'm willing to spend. If I learn what that feels like, then I don't get to learn what I would have been able to accomplish if I could still walk well. There's no conflict here between epistemic and instrumental rationality.

b) The "torture" is guaranteed to be both safe and non physically damaging, and not keep me prisoner too long when I could be doing other things.

When I learned about tarantula hawks and that their sting was supposedly both debilitatingly painful and also perfectly non-damaging and safe, I went pretty far out of my way to acquire them and provoke them to sting me. Fear of non-damaging things is a failing to be stamped out. When you accept that the scary thing truly is sufficiently non-dangerous, fear just becomes excitement anyway.

If these mysterious white room people think they can bring me a challenge while keeping things sufficiently safe and non-physically-damaging I'd probably call their bluff and push that button to see what they got.

c) This "torture" really is enough to push me sufficiently past my limits of composure that there will be lasting psychological damage.

I think this is actually harder than you think unless you also cross the lines on physical damage, risk, or get to spend a lot of time at it. However, it is conceivable and so in this case we're back to being another example of number one. If I'm pretty sure it won't be any worse than this, I'd go for it.


This whole "epistemic vs instrumental rationality" thing really is just a failure to do epistemic rationality right, and when you peak into the black box instead of intentionally keeping it covered you can start to see why.

Comment by jimmy on [deleted post] 2020-06-08T17:53:06.883Z
I'm very glad that you managed to train yourself to do that but this option is not available for everyone.

Do you have any evidence for this statement? That seems like an awfully quick dismissal given that twice in a row you cited things as if they countered my point when they actually missed the point completely. Both epistemically and instrumentally, it might make sense to update the probability you assign to "maybe I'm missing something here". I'm not asking you to be more credulous or to simply believe anything I'm saying, mind you, but maybe a bit more skeptical and a little less credulous of your own ideas, at least until that stops happening.

Because you do have that option available to you. In my experience, it's simply not true that attempts at self deception ever give better results than simply noticing false beliefs and then letting them go once you do, or that anyone ever says "that's a great idea, let's do that!" and then mysteriously fails. The idea that it's "not available" is one more false belief that gets in the way of focusing on the right thing.

Don't get me wrong, I'm not saying that it's always trivial. Epistemic rationality is not trivial. It's completely possible to try to organize one's mind into coherence and still fail to get the results because you don't realize where you're missing something. Heck, in the last example I gave, my friend did just that. Still, at the end of the day, she got her results, and she is a much happier and more competent person than she was years back when her mind was still caught up on more well-meaning self deceptions.

I don't see a lot of engaging in the least convenient possible world

Well, if I don't think any valid examples exist, all I can do is knock over the ones you show me. Perhaps you can make your examples a little less convenient to knock over and put me to a better test then. ;)

I'll take a look at your new post.

Comment by jimmy on [deleted post] 2020-06-08T05:13:44.826Z

Placebo doesn't require deception.

Just like with sports, you can get all the same benefits of placebo by simply pointing your attention correctly without predicating it on nonsense beliefs, and it's actually the nonsense beliefs that are getting in the way and causing the problem in the first place. A "placebo" is just an excuse to stop falsely believing that you can't do whatever it is you need to do without a pill.

And I don't say this as some matter of abstract "theory" that sounds good until you try to put it into practice; it's a very real thing that I actually practice somewhat regularly. I'll give you an example.

One day I sprained my ankle pretty badly. I was frustrated with myself for getting injured and didn't want it to swell up so I indignantly decided "screw this, my ankle isn't going to swell". It was a significant injury and took a month to recover, but it didn't swell. The next several times I got injured I kept this attitude and nothing swelled, including dropping a 50lb chunk of wood on my finger in a way that I was sure would swell enough to keep me from bending that finger... until I remembered that it doesn't have to be that way, and made the difficult decision to actually expect it to not swell. It didn't, and I retained complete finger mobility.

I told a friend of mine about this experience, and while she definitely remained skeptical and it seemed "crazy" to her, she had also learned that even "crazy" things coming out of my mouth had a high probability of turning out to be true, and therefore didn't rule it out. Next time she got injured, she felt a little weird "pretending" that she could just influence these things but figured "why not?" and decided that her injury wasn't going to swell either. It didn't. A few injuries go by, and things aren't swelling so much. Finally, she inadvertently tells someone "Oh, don't worry, I don't need to ice my broken thumb because I just decided that it won't swell". The person literally could not process what she said because it was so far from what he was expecting, and she felt foolish for saying it. Her injury then swelled up, even though it had already been a while since the break. I called her and talked to her later that night and pointed out what had happened with her mental state and helped her fix her silly limiting (and false) beliefs, and when she woke up in the morning the swelling had largely subsided again.

The size of the effect was larger than I've ever gotten with ibuprofen, let alone fake ibuprofen. "I have no ability to prevent my body from swelling up" is factually wrong, and being convinced of this falsehood prevents people from even trying. You can lie to yourself and take a sugar pill if you want, but it really is both simpler and more effective to just stop believing false things.

Comment by jimmy on [deleted post] 2020-06-07T21:22:34.580Z
What something is worth is not an objective belief but a subjective value.

Would you say "this hot dog is worth eating" is similarly "a subjective value" and not "an objective belief"? Because if it turns out that the hot dog had been sitting out for too long and you end up puking your guts out, I think it's pretty unambiguous to say that "worth eating" was clearly false.

The fact that the precise meaning may not be clear does not make the statement immune from "being wrong". A really good start on this problem is "if you were able to see and emotionally integrate all of the consequences, would you regret this decision or be happy that you made it?".

This is not how human psychology works. Optimism does lead to better results in sports.

You have to be able to distinguish between "optimism" (which is good) and "irrational confidence" (which is bad). What leads to good results in sports is an ability to focus on putting the ball where it needs to go, and pessimism (but not accurate beliefs) impedes that.

If you want a good demonstration of that, watch Conor McGregor's rise to stardom. He gained a lot of interest for his "trash talk" which was remarkably accurate. Instead of saying "I'M GONNA KNOCK HIM OUT FIRST ROUND!" every time, he actually showed enough humility to say "My opponent is a tough guy, and it probably won't be a first round knockout. I'll knock him out in the second". It turned out in that case that he undersold himself, but that did not prevent him from getting the first round knockout. When you watch his warm up right before the fights, what his body language screams is that he has no fear, and that's what's important because fear impedes fluid performance. When he finally lost, his composure in defeat showed that his lack of fear came not from successful delusion but from acceptance of the possibility of losing. This is peak performance, and is what we should all be aspiring to.

In general, "not how human psychology works" is a red flag for making excuses for those with poorly organized minds. "You have to expect to win!" is a refrain for a reason; the people who say this falsehood probably would engage in pessimism if they thought they were likely to lose. However, that does not mean that one cannot aspire to do better. Other people don't fall prey to this failure mode, and those people can put on impressive performances that shock even themselves.

Comment by jimmy on [deleted post] 2020-06-06T20:17:05.089Z
These two options do not always coincide. Sometimes you have to choose.

I'll go even further than Zack and flat out reject the idea that this even applies to humans.

The most famous examples are: Learning knowledge that is dangerous for humanity (e.g how to build an unaligned Superintelligence in your garage), knowledge that is dangerous to you (e.g Infohazards)

This kind of problem can only happen with an incoherent system ("building and running a superintelligence in one's garage is a bad thing to do"+"I should build and run a superintelligence in my garage!") where you posit that the subsystem in control is not the subsystem that is correct. If you don't posit incoherence of "a system", then this whole thing makes no sense. If garage AIs are bad, don't build them and try to stop others from building them. If garage AIs are good, then build them. Both sides find instrumental and epistemic rationality to be aligned. It's just that my idea of truth doesn't always line up with your idea of best action because you might have a different idea of what the truth is.

It can be more confusing when it happens within one person, but it's the same thing.

If learning that your girlfriend is cheating on you would cause you to think "life isn't worth living" and attempt suicide even though life is still worth living, then the problem isn't that true beliefs ("she cheated on me") are leading to bad outcomes, it's that false beliefs ("life isn't worth living") are leading to bad outcomes, and that your truth finding is so out of whack that you can already predict that true beliefs will lead to false beliefs.

In these cases you have a few options. One is to notice this and say "Huh, if life would still be worth living, why would I feel like it isn't?" and explore that until your thoughts and feelings merge into agreement somewhere. In other words, fix your shit so that true beliefs no longer predictably lead to false beliefs. Another is to put off the hard work of having congruent (and hopefully true) beliefs and feelings, and say "my feelings about life being worth living are wrong, so I will not act on them". Another, if you feel like you can't trust your logical self to retain control over your emotional impulses, is to say "I realize that my belief that my girlfriend isn't cheating on me might not be correct, but my resulting feelings about life would be incorrect in a worse way, and since I am not yet capable of good epistemics, I'm at least going to be strategic about which falsehoods I believe so that my bad epistemics harm me the least".

The worst thing you can do is go full "Epistemics don't matter when my life is on the line" and flat out believe that you're not being cheated on. Because if you do that, then there's nothing protecting you from stumbling upon evidence and being forced between a choice of "unmanaged false beliefs about life's worth" or "detaching from reality yet further".

or trusting false information to increase your chances of achieving your goals (e.g Being unrealistically optimistic about your odds of beating cancer because optimistic people have higher chances of survival).

True beliefs aren't the culprit here either. If you have better odds when you're optimistic, then be optimistic. "The cup isn't completely empty! It's 3% full, and even that may be an underestimate!" is super optimistic, even when "I'm almost certainly going to die" is also true.

This is very similar to the mistaken sports idea that "you have to believe you will win". No you don't. You just have to put the ball through the hoop more than the other guy does, or whatever other criteria your sport has. Yes, you're likely to not even try if you're lying to yourself and saying "it's not even possible to win" because "I shouldn't even try" follows naturally from that. However, if you keep your mind focused on "I can still win this, even if it's unlikely" or even just "Put the ball in the hoop. Put the ball in the hoop", then that's all you need.

In physics, if you think you've found a way to get free energy, that's a good sign that your understanding of the physics is flawed and the right response is to think "okay, what is it that I don't understand about gravity/fluid dynamics/etc that is leading me to this false conclusion?". Similarly, the idea that epistemic rationality and instrumental rationality are in conflict is a major red flag about the quality of your epistemic rationality, and the solution on both fronts is to figure out what you're doing wrong that is leading you to perceive this obvious falsehood.

Comment by jimmy on [deleted post] 2020-06-06T19:39:22.403Z

That's an interesting hypothesis, and seems plausible as a partial explanation to me. I don't buy it as a full explanation for a couple reasons. One is that it is inherently harder to read and follow rather than being an equally valid aesthetic. It may also function as a signal that you are on team Incoherent Thought, and there may occasionally be reasons to fake a disability, but generally genuine shortcomings don't become attractive things to signal. Even the king of losers is a loser, and generally the impression that I get is that these people did wish they had more mainstream acceptance and would take it in a heartbeat if they could get it at the level that they feel like they deserve. That doesn't mean that they won't flout it when they can, but the signs are there. They spend a lot more time talking about "the establishment" than the establishment spends talking about them, for example.

The main point holds though. If your target audience sees formal attire as a sign of "conformism and closed mindedness" rather than a sign that you are able to shave and afford pricey clothing, then the honest thing to do is to show that you don't have to conform by not wearing a suit when you meet with them. When you're meeting the people who do want to make sure you can shave and put on fancy clothes, it's honest to show that you can do that too.

Comment by jimmy on [deleted post] 2020-06-01T19:27:55.278Z

If your website looks like this, people don't need to read your content in order to tell that you're a crazy person who is out of touch with how he comes off and doesn't have basic competencies like "realize that this is terrible, hire a professional". Just scroll through without reading any of it, and with your defense against the dark arts primed and ready, tell me how likely you feel that the content is some brilliant insight into the nature of time itself. It's a real signal that credibly conveys information about how unlikely this person is to have something to say which is worth listening to. Signalling that you can't make a pretty website when you can is dishonest, and the fact that you would be hindering yourself by doing so makes it no better.

When you know what you're doing, there's nothing "dark" about looking like it.

Comment by jimmy on Updated Hierarchy of Disagreement · 2020-05-30T20:17:20.568Z · LW · GW
a "steel man" is an improvement of someone's position or argument that is harder to defeat than their originally stated position or argument.

This seems compatible with both, to me. "You're likely to underestimate the risks, and you can die even on a short trip" is a stronger argument than "You should always wear your seat belt because it is NEVER safe to be in a car without a seat belt", and cannot be so easily defeated as saying "Parked in the garage. Checkmate".

Reading through the hyperbole to the reasonable point underneath is still an example of addressing "the best form of the other person's argument", and it's not the one they presented.

Comment by jimmy on Is fake news bullshit or lying? · 2020-05-30T20:12:46.731Z · LW · GW

I think the conflicting narratives tend to come from different sides of the conflict, and that people generally want the institutions that they're part of (and which give them status) to remain high status. It just doesn't always work.

What I'm talking about is more like.. okay, so Chael Sonnen makes a great example here both because he's great at it and because it makes for a non-political example. Chael Sonnen is a professional fighter who intentionally plays the role of the "heel". He'll say ridiculous things with a straight face, like telling the greatest fighter in the world that he "absolutely sucks" or telling a story that a couple Brazilian fighters (the Nogueira brothers) mistook a bus for a horse and tried to feed it a carrot and sticking to it.

When people try to "fact check" Chael Sonnen, it doesn't matter because not only does he not care that what he's saying is true, he's not even bound by any expectation of you believing him. The bus/carrot story was his way of explaining that he didn't mean to offend any Brazilians, and the only reason he said that offensive stuff online is that he was unaware that they had computers in Brazil. The whole point of being a heel is to provoke a response, and in order to do that all he has to do is have the tiniest sliver of potential truth there and not break character. The bus/carrot story wouldn't have worked if the fighters from a clearly more technologically advanced country than him, even though it's pretty darn far from "they actually think buses are horses, and it's plausible that Chael didn't know they have computers". If your attempt to call Chael out on his BS is to "fact check" whether he was even there to see a potential bus/horse confusion or to point out that if anything, they're more likely to mistake a bus for a Llama, you're missing the entire point of the BS in the first place. The only way to respond is the way Big Nog actually did, which is to laugh it off as the ridiculous story it is.

The problem is that while you might be able to laugh off a silly story about how you mistook a bus for a horse, people like Chael (if they're any good at what they do) will be able to find things you're sensitive about. You can't so easily "just laugh off" him saying that you absolutely suck even if you're the best in the world, because he was a good enough fighter that he nearly won that first match. Bullshitters like Chael will find the things that are difficult for you to entertain as potentially true and make you go there. If there's any truth there, you'll have to admit to it or end up making yourself look like a fool.

This brings up the other type of non-truthtelling that commonly occurs which is the counterpart to this. Actually expecting to be believed means opening yourself to the possibility of being wrong and demonstrating that you're not threatened by this. If I say it's raining outside and expect you to actually believe me, I have to be able to say "hey, I'll open the door and show you!", and I have to look like I'll be surprised if you don't believe me once you get outside. If I start saying "How DARE you insinuate that I might be lying about the rain!" and generally take the bait that BSers like Chael leave, I show that it's not that I want you to genuinely believe me so much as I want you to shut your mouth and not challenge my ideas. It's a 2+2=5 situation now, and that's a whole nother thing to expect. In these cases there still isn't the same pressure to conform to the truth needed if you expect to be believed, and your real constraint is how much power you have to pressure the other person into silence/conformity.

The biggest threat to truth, as I see it, is that when people get threatened by ideas that they don't want to be true, they try to 2+2=5 at it. Sometimes they'll do the same thing even when the belief they're trying to enforce is actually the correct one, and it causes just as many problems because you can't trust someone saying "Don't you DARE question" even when they follow it up with "2+2=4", and unless you can do the math yourself you can't know what to believe. To give a recent example, I found a document written by a virologist PhD about why the COVID pandemic is very unlikely to have come from a lab and it was more thorough and covered more possibilities I hadn't yet seen anyone cover, which was really cool. The problem is that when I actually checked his sources, they didn't all say what he said they said. I sent him a message asking whether I was missing something in a particular reference, and his response was basically "Ah, yeah. It's not in that one it's in another one from China that has been deleted and doesn't exist anymore." and went on to cite the next part of his document as if there's nothing wrong with making blatantly false implications that the sources one gives support the point one made, and the only reason I could even be asking about it is that I hadn't read the following paragraph about something else. When I pointed out that conspiracy-minded people are likely to latch on to any little reason to not trust him and that in order to be persuasive to his target audience he should probably correct it and note the change, he did not respond and did not correct his document. And he wonders why we have conspiracy theories.

Bullshitters like Chael can sometimes lose (or fail to form) their grip on reality and let their untruths actually start to impact things in a negative way, and that's a problem. However, it's important to realize that the fuel that sustains these people is the over-reaching attempts to enforce "2+2=what I want you to say it does", and if you just do the math and laugh it off when he straight face says that 2+2=22, there's no more oppressive bullshit for him to eat and fuel his trolling bullshit.

Comment by jimmy on Updated Hierarchy of Disagreement · 2020-05-29T20:14:08.635Z · LW · GW
You don't want your interlocutor to feel like you are either misrepresenting or humiliating him. Improving an argument is still desirable, but don't sour the debate.


There are a couple different things I sometimes see conflated together under the label "steel man".

As an example, imagine you're talking to the mother of a young man who was killed by a drunk driver on the way to the corner store, and whose life could likely have been saved if he had been wearing a seat belt. This mom might be a bit emotional when she says "NEVER get in a car without your seat belt on! It's NEVER safe!", and interpreted completely literally it is clearly bad advice based on a false premise.

One way to respond would be to say "Well, that's pretty clearly wrong, since sitting in a car in your garage isn't dangerous without a seat belt on. If you were to make a non-terrible argument for wearing seat belts all the time, you might say that it's good to get in the habit so that you're more likely to do it when there is real danger", and then respond to the new argument. The mother in this case is likely to feel both misrepresented and condescended to. I wouldn't call this steel manning.

Another thing you could do is to say "Hm. Before I respond, let me make sure I'm understanding you right. You're saying that driving without a seat belt is almost always dangerous (save for obvious cases like "moving the car from the driveway into the garage") and that the temptation to say "Oh, that rarely happens!"/"it won't happen to me!"/"it's only a short trip!" is so dangerously dismissive of real risk that it's almost never worth trusting that impulse when the cost of failure is death and the cost of putting a seat belt on is negligible. Is that right?". In that case, you're more likely to get a resounding "YES!" in response, even though that not only isn't literally what she said, it also contradicts the "NEVER" in her statement. It's not "trying to come up with a better argument, because yours is shit", it's "trying to understand the actual thing you're trying to express, rather than getting hung up on irrelevant distractions when you don't express it perfectly and/or literally". Even if you interpret wrong, you're not going to get bad reactions because you're checking for understanding rather than putting words in their mouth, and you're responding to the thing they are actually trying to communicate. This is the thing I think was being pointed at in the original description of "steel man", and is something worth striving for.

Comment by jimmy on Is fake news bullshit or lying? · 2020-05-27T18:03:07.743Z · LW · GW

I think another distinction worth making here is whether the person "bullshitting"/"lying" even expects or intends to be believed. It's possible to have "not care whether the things he says describe reality correctly" and still be saying it because you expect people to take you seriously and believe you, and I'd still call that lying.

It's quite a different thing when that expectation is no longer there.

Comment by jimmy on Reflective Complaints · 2020-05-24T20:20:49.045Z · LW · GW

I used "flat earthers" as an exaggerated example to highlight the dynamics the way a caricature might highlight the shape of a chin, but the dynamics remain and can be important even and especially in relationships which you'd like to be close simply because there's more reason to get things closer to "right".

The reason I brought up "arrogance"/"humility" is because the failure modes you brought up of "not listening" and "having obvious bias without reflecting on it and getting rid of it" are failures of arrogance. A bit more humility makes you more likely to listen and to question whether your reasoning is sound. As you mention though, there is another dimension to worry about which is the axis you might label "emotional safety" or "security" (i.e. that thing that drives guarded/defensive behavior when it's not there in sufficient amounts).

When you get defensive behavior (perhaps in the form of "not listening" or whatever), cooperative and productive conversation requires that you back up and get the "emotional safety" requirements fulfilled before continuing on. Your proposed response assumes that the "safety" alarm is caused by an overreach on what I'd call the "respect" dimension. If you simply back down and consider that you might be the one in the wrong this will often satisfy the "safety" requirement because expecting more relative respect can be threatening. It can also be epistemically beneficial for you if and only if it was a genuine overreach.

My point isn't "who cares about emotional safety, let them filter themselves out if they can't handle the truth [as I see it]", but rather that these are two separate dimensions, and while they are coupled they really do need to be regulated independently for best results. Any time you try to control two dimensions with one lever you end up having a 1d curve that you can't regulate at all, and therefore is free to wander without correction.

While people do tend to mirror your cognitive algorithm so long as it is visible to them, it's not always immediately visible and so you can get into situations where you *have been* very careful to make sure that you're not the one that is making a mistake and since it hasn't been perceived you can still get "not listening" and the like anyway. In these kinds of situations it's important to back up and make it visible, but that doesn't necessarily mean questioning yourself again. Often this means listening to them explain their view and ends up looking almost the same, but I think the distinctions are important because of the other possibilities they help to highlight.

The shared cognitive algorithm I'd rather end up with is one where I put my objections aside and listen when people have something they feel confident in, and one where, when I have something I'm confident in, they'll do the same. It makes things run a lot more smoothly and efficiently when mutual confidence is allowed, rather than treated as something that has to be avoided at all costs, and so it's nice to have a shared algorithm that can gracefully handle these kinds of things.

Comment by jimmy on Reflective Complaints · 2020-05-22T03:06:37.049Z · LW · GW
It seems to me that I'm explaining something reasonable, and they're not understanding it because of some obvious bias, which should be apparent to them. 
But, in order for them to notice that, from inside the situation, they'd have to run the check of:
TRIGGER: Notice that the other person isn't convinced by my argument
ACTION: Hmm, check if I might be mistaken in some way. If I were deeply confused about this, how would I know?

The fact that the other person isn’t convinced by your argument is only evidence that you’re mistaken to the extent that you’d expect this other person to be convinced by good arguments. For your friends and people who have earned your respect this action is a good response, but in the more general case it might be hard to get yourself to apply it faithfully, because really, when the flat earther isn’t convinced, are you honestly going to consider whether you’re actually the one that’s wrong?
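
To put that "only evidence to the extent..." claim in numbers, here is a minimal Bayes sketch (the specific probabilities are invented purely for illustration, not taken from anything above):

```python
def p_wrong_given_unconvinced(prior_wrong, p_unconvinced_if_right,
                              p_unconvinced_if_wrong=0.95):
    """P(I'm wrong | they're unconvinced), by Bayes' rule."""
    p_unconvinced = (prior_wrong * p_unconvinced_if_wrong
                     + (1 - prior_wrong) * p_unconvinced_if_right)
    return prior_wrong * p_unconvinced_if_wrong / p_unconvinced

# A flat earther stays unconvinced whether or not I'm right, so their
# lack of agreement barely moves my estimate that I'm the one who's wrong:
print(p_wrong_given_unconvinced(prior_wrong=0.05, p_unconvinced_if_right=0.90))  # ~0.05

# A friend who reliably follows good arguments would rarely stay unconvinced
# if I were right, so the very same reaction should move me a lot:
print(p_wrong_given_unconvinced(prior_wrong=0.05, p_unconvinced_if_right=0.10))  # ~0.33
```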

The more general approach is to refuse to engage in false humility/false respect and make yourself choose between being genuinely provocative and inviting (potentially accurate) accusations of arrogance, or else finding some real humility. For the trigger you give, I’d suggest the tentative alternate action of “stick my neck out and offer for it to be chopped off”, and only if that action makes you feel a bit uneasy do you start hedging and wondering “maybe I’m overstepping”.

For example, maybe you’re arguing politics and they scoffed at your assertion that policy X is better than policy Y or whatever, and it strikes you as arrogant for them to just dismiss out of hand ideas which you’ve thought very hard about. You could wonder whether you’re the arrogant one, and that you really should have thought harder before presenting such scoffable ideas and asked for their expertise before forming an opinion — and in some cases that’ll be the right play. In other cases though, you can be pretty sure that you’re not the arrogant one, and so you can say “you think I’m being arrogant by thinking I can trust my thinking here to be at least worth addressing?” and give them the chance to say “Yes”.

You can ask this question because “I’m not sure if I am being arrogant here, and I want to make sure not to overstep”, but you can also ask because it’s so obvious what the answer is that when you give them an opening and invite their real belief they’ll have little option but to realize “You’re right, that’s arrogant of me. Sorry”. It can’t be a statement disguised as a question and you really do have to listen to their answer and take it in, whatever it is, but you don’t have to pretend to be uncertain of what the answer is or what they will believe it to be under reflection. “Hey, so I’m assuming you’re just acting out of habit and if so that’s fine, but you don’t really think it’s arrogant of me to have an opinion here, do you?” or “Can you honestly tell me that I’m being arrogant here?”. It doesn’t really matter whether you say it because “I want to point out to people when they aren’t behaving consistently with their beliefs”, or because “I want to find out whether they really believe that this behavior is appropriate”, or because “I want to find out whether I’m actually the one in the wrong here”. The important point is conspicuously removing any option you have for weaseling out of noticing when you’re wrong, so that even when you are confident that it’s the other guy in the wrong, should your beliefs make false predictions it will come up and be absolutely unmissable.

Comment by jimmy on Consistent Glomarization should be feasible · 2020-05-09T04:57:51.059Z · LW · GW
With close friends or rationalist groups, you might agree in advance that there's a "or I don't want to tell you about what I did" attached to every statement about your life, or have a short abbreviation equivalent to that.

This already exists, and the degree of “or I’m not telling the truth” is communicated nonverbally.

For example, when my wife was early in her pregnancy we attended the wedding of one of her friends, and a friend noticed that she wasn’t drinking “her” drink and asked “Oh my gosh, are you pregnant!?”. My wife’s response was to smile and say “yep” and then take a sip of beer. The reason this worked for both 1) causing her friend to conclude that she [probably] wasn’t pregnant and 2) her friend not later feeling like her trust had been betrayed is that the response was given “jokingly”, which means “don’t put too much weight into the seriousness of this statement”. A similar response could be “No, don’t you think I’d have told you immediately if I were pregnant?”, again, said jokingly so as to highlight the potential for “no, I suppose you might not want to share if it’s that early”. It still communicates “No, or else I have a good reason for not wanting to tell you”.

If you want to be able to feel betrayed when their answer is misleading, you have to get a sincere sounding answer first, and “refuses to stop joking and be serious” is one way that people communicate their reluctance to give a real answer. Pushing for a serious answer after this is clear is typically seen as bad manners, and so it’s easy to go from joking around to a flat “don’t pry” when needed without seeming like you have anything to hide. Because after all, if they weren’t prying they’d have just accepted the joking/not-entirely-serious answer as good enough.

Comment by jimmy on Meditation skill: Surfing the Urge · 2020-05-08T18:16:03.995Z · LW · GW
Understand that the urge to breathe is driven by the body’s desire to rid itself of carbon dioxide (CO2)--not (as some assume) your body's desire to take in oxygen (O2).

Interestingly enough, this isn't entirely true. If you get a pulse oximeter and a bottle of oxygen you can have some fun with it.

Because of the nonlinearity in the oxygen dissociation curve, oxygen saturation tends to hold pretty steady for a while and then tank quite quickly, whereas CO2 discomfort builds more uniformly. In my experience, when I get that really "panicked" feeling and start breathing again, the pulse oximeter on my finger shows my saturation tanking shortly after (there's a bit of a delay, which is useful here for knowing that it's not the numbers on the display causing the distress).
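
For a rough sense of that nonlinearity, here's a small sketch using the standard Hill-equation approximation of the oxyhemoglobin dissociation curve (textbook-typical parameters, assumed here for illustration; exact values vary by person and conditions):

```python
# Hill-equation approximation of the oxyhemoglobin dissociation curve.
# P50 is the oxygen partial pressure at 50% saturation; n is the Hill
# coefficient. The values below are typical adult textbook numbers (assumed).
P50_MMHG = 26.8
HILL_N = 2.7

def sao2(po2_mmhg: float) -> float:
    """Approximate hemoglobin O2 saturation (0-1) at a given PO2 in mmHg."""
    return po2_mmhg**HILL_N / (po2_mmhg**HILL_N + P50_MMHG**HILL_N)

for po2 in (100, 80, 60, 40, 30, 20):
    print(f"PO2 = {po2:3d} mmHg -> SaO2 ~ {sao2(po2):.0%}")
# Saturation barely moves between ~100 and ~60 mmHg, then falls off a cliff
# below that -- which is why the number on the oximeter "tanks" so suddenly.
```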

If it were just CO2 causing the urge to breathe, CO2 contractions and the urge to breathe should come on in exactly the same way when breathing pure oxygen, and this is not the case. Instead of coming on at ~2-2.5 min and being quite uncomfortable, the contractions didn't start until four minutes in and were very, very mild. I've broken five minutes when I was training more, and it was psychologically quite difficult. Comparatively speaking, 5 minutes on pure O2 was downright trivial, and at 7 minutes it wasn't any harder. The only reason I stopped the experiment then is that I started feeling narcosis from the CO2 and figured I should do some more research about hypercapnia (too much CO2) before pushing further.

Along those same lines, rebreather divers sometimes drown when they pass out due to hypercapnia, and while you'd think it'd be way too uncomfortable to miss, this doesn't always seem to be the case. In my own experiments, rebreathing a scrubberless bag of oxygen did get uncomfortable quickly, but in a blind study on it, five out of twenty people failed to notice within 5 minutes that no CO2 was being removed.

At the same time, a scrubbed bag with no oxygen replacement is completely comfortable even as the lights go out, so low O2 alone isn't enough to trigger that panic.

Comment by jimmy on Meditation skill: Surfing the Urge · 2020-05-08T17:57:28.209Z · LW · GW

Certainly not in any obvious way like people that suffer repeated blows to the head. There's some debate over whether loss of motor control (they call it "samba" because it's kinda like you start dancing involuntarily) can cause damage that makes it more likely to happen again in the future, but I haven't been able to find any evidence that there is any damage at all in normal training and even the former seems to be controversial.

Comment by jimmy on On Platitudes · 2020-04-22T19:36:47.911Z · LW · GW

This is a big topic and I think both slider's "Part of the problem about such tidbits of wisdom that they are about big swath of experience/information and kind of need that supporting infrastructure." and Christian's "It seems to me that the skillset towards which you are pointing is a part of hypnosis" are important parts of it. In particular, hypnotists like Milton Erickson have put a lot of time into figuring out how to best convey the felt sense that there is a big swath of experience/information in there that needs to be found, and how to give pointers in the right direction. Hypnotized people can forget their own name without understanding any of the supporting theory about how this is even possible, and religious people can live on commandments even though they do not grasp or have an ability to convey the wisdom upon which they rest. Knowing who to trust and how to believe things that one does not yet understand can be very important life skills, and they don't come naturally for those of us who like to "think for ourselves".

The reason Peterson can be so powerful in how he expresses these "platitudes" is that to him they aren't platitudes. He actually did the work and developed the wisdom necessary for these things to stand on their own and not drift away as a "Yeah, nice thought, heard that before". When you see the effects of people breaking the relevant commandments enough that you start to get a gut level appreciation of what it would be like if you were to allow yourself to make that mistake, it starts to have the same intrinsic revulsion that you get when trying to eat Chinese food after it gave you food poisoning the time before. It's a different thing that way.

If you look at someone who makes a living spouting feel-good platitudes that they do not themselves live by or understand, how do they respond when challenged? How would you respond if you had tried to tell people to "clean their rooms" as if it were a solution for everything up to and including global warming, only to have BS called on you? Here's how Peterson responds. He does not falter and lose confidence. He does not back away into more platitudes to prevent engagement. He actually goes forward and begins to expound on the underpinnings of why "clean your room" is shorthand for a very important principle (in his view, at least, and mine as well) about how social activism is best done. He does it without posturing about how clean his room is and without accusing his accuser of having an unclean room herself. This part is a bit subtle as he makes no apologies for her behavior and his models do suggest unflattering motivations, but he doesn't go so far as to make it about her or about deflecting criticism from himself. He keeps his focus on the importance of cleaning one's room so that one can do good in this world and not be led astray by psychological avoidance and ignorance, and this is exactly what you would expect from someone who is actually onto something real and who means what they say. This engagement is crucial.

Even if "clean your room" isn't terribly informative or novel itself, his two minute explanation is more. Even though that's not enough, he does have books and lectures where he spells it all out in more detail. When even a book or two isn't enough, there's clearly a lifetime of experience and practice under there beyond immediate reach. You can get started with a YouTube video or a book, but back to slider's point, there's a big ass iceberg under there and you have to piece the bulk of it together yourself. The YouTube videos and books are as much an advertisement as they are a pointer. "Here are [short descriptions of] the rules he endeavors to live by, and the results are there to judge for yourself". When people see someone who practices what they preach and whose results they like at least in part, it creates that motivation to learn more of what is underneath and, in the meantime, to accept some of what they can't understand on their own when they can see that the results are there to back it up.

You can't just say "She's happier now in heaven" and expect words that are meaningless to you to convey any meaning. But when "She wouldn't have wanted you to be unhappy" is true and relevant and not just a pretense in an attempt to avoid the real hurt of real loss... because the suffering they're going through isn't just plain grieving but also beating themselves up out of some mistaken idea that it's what a "good" husband would do... then absolutely those words can be powerful. Because they actually mean something, and you would know it.

When the meaning is there, and you know it, and you are willing to engage and stand up to the potential challenges of people who might want to push away from your advice, then even simple and "non-novel" words can be a very novel and compelling thing. Because while they may have heard someone spout that platitude before, they likely have never heard anyone stand behind it and really mean it.