Posts

Meetup: Garden Grove meetup 2012-05-15T02:17:14.042Z
26 March 2011 Southern California Meetup 2011-03-20T18:29:16.231Z
October 2010 Southern California Meetup 2010-10-18T21:28:17.651Z
Localized theories and conditional complexity 2009-10-19T07:29:34.468Z
How to use "philosophical majoritarianism" 2009-05-05T06:49:45.419Z
How to come up with verbal probabilities 2009-04-29T08:35:01.709Z
Metauncertainty 2009-04-10T23:41:52.946Z

Comments

Comment by jimmy on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-18T17:28:16.427Z · LW · GW

"Isn't that what you're being paid to do, Miss DiAngelo?"

Comment by jimmy on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-18T17:25:27.019Z · LW · GW

To reiterate my point, it's entirely fair to notice that this "grandma" has an awfully long snout and to distrust her. I'm with you on that. I pick up on the same patterns as you. It's a real problem.

And still, big leap between there and an unqualified "This is insanity wolf".

 

I doubt if a conversation with DiAngelo would get very far. 

It's not "a" conversation, as if "conversation" were one thing and the way you go about it doesn't matter. If you were to go about it the way you're going about it here, with presumption of guilt, it wouldn't go far and it wouldn't be her fault. 

If you were to go about it in a way optimized for success, actually giving her the largest possible opening to see anything she might be doing wrong and to persuade you of good will, then it's not so clear.

There is nothing that a white person can say, including what I've said here, that her scheme cannot classify as "White Fragility" and therefore deem invalid.

There's nothing that can't be classified that way by the scheme which you assert to be hers.  It's possible, if she really is nothing but 100% this scheme, that nothing a white person can say would get through. 

However it's also possible that your bald presupposition that there's nothing else to her could be wrong, and that if you were careful enough in picking what you said, you could find something to say that gets her to deviate from this scheme.

As a general rule, asserting "Nothing can be done" is suspicious -- especially when nothing has been tried. It's suspiciously convenient, and too absolute to plausibly be literally true. The times when a belief would be convenient for you are the last times you should be playing loose with the truth and dismissing known falsehoods as "rounding errors", since that's when your motivated thinking can slip in and pull you away from the truth.

 

There's probably someone right now reading this whole discussion and mocking the White Fragility on display. 

Sure, that kind of thing definitely exists and is bad. It's also not the only thing that exists.

Comment by jimmy on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-07T19:42:45.287Z · LW · GW

This is the Gospel According to Insanity Wolf,

If you encounter an idea for which,
however watertight the argument leading to it,
you hear it in the voice of Insanity Wolf,
screaming at you,
a voice that absolutely will not stop, ever,
until you are dead,
then maybe you should reject that idea,
even if you do not have a refutation of it.

Anyone speaking in that voice,
even if outwardly quiet and reasonable,
wants something
that you should not give.

 

There's definitely a real point in there, in that suspicion is warranted and "Little Red Riding Hood" is a cautionary tale. "Roll over and believe whenever asked to" is not the right play.

At the same time, that "maybe" is critically important. Without it, you end up becoming Insanity Wolf yourself, snapping at your actual grandma and at any vaguely wolf-shaped clouds. Baring teeth is a display of weakness, and should be avoided as long as possible in favor of something closer to "No Chad", so that it's easier to separate the truth from the power plays.

Notice the presupposition worked in at the start: "you have been socialised into racism." Therefore your opinions are invalid. Your thoughts are invalid. Your reaction to being told this is invalid. Every objection is invalid. You are invalid. Anything but immediate subservience is invalid. You must say “of course I was; I’m glad I finally found out about it so I can change.” No other response is valid.

All of the objectionable conclusions there are in your own words, but you're responding to them as if she had actually said them rather than just fit the caricature despite trying to sound good. It's fair enough to notice that this "grandma" has an awfully long snout and to distrust her, but there's still a big leap between there and an unqualified "This is insanity wolf".

There are other explanations that fit [this summary of] the book. It's also conceivable that she just has a sincere belief that white people have been socialized into racism (even if they don't realize it yet) and a lack of awareness of the caricatures she's fitting. People usually aren't aware of the caricatures they fit, and often have beliefs that they think are more self-evident-upon-examination than they really are, so it doesn't seem like a big stretch at all.

This caricature fitting can sometimes be due to simply poor communication, but it can also be due to harboring a bit more of the caricature than they realize and would accept if they knew. You kinda have to point it out (and in a non-hostile way) before you can distinguish between "sloppy communication", "imperfection that escaped notice", and "endorsed malintent". "Innocent until proven guilty" is super important, so give her the rope and wait to see if she'll hang herself with it or pull herself out of the caricature.

In particular, you predict that "No other response [will be seen as] valid". But what if the response were "Well shoot. I hope I'm not doing anything racist. Can you explain to me what exactly is racist about it so that I can make sure not to do it?" and additional sincere questions about any bit that doesn't make sense about her story? If you were to do that without a hint of bared teeth yourself, what response do you think you'd get, exactly?

Condescension is the immediate response I'd anticipate for sure, and probably attempts to disengage before getting anywhere interesting, but that's not what I'm talking about. Assuming she doesn't actually make a convincing point, and that you don't allow things to turn confrontational, what happens when you finally get to the point where the flaws in her reasoning start to get hard to avoid? Sociopathic willful lying is definitely a possibility, and if you get that, then yeah, insanity wolf confirmed. But I also wouldn't be surprised if she were to get genuinely befuddled and not know what her own response is going to be, because no one has actually challenged her and made her think like that. Do you think you have a way of showing that this latter option isn't on the table?

Comment by jimmy on What weird treatments should people who lost their taste from COVID try? · 2021-07-30T22:35:02.041Z · LW · GW

reddit.com/r/ivermectin talks about it for this purpose (1 2 3 4). 

To add an anecdote, I know someone who started taking ivermectin two weeks after getting covid, and their sense of smell returned after ~55hr (3 doses).

Comment by jimmy on A reasonable debate about Ivermectin · 2021-07-30T17:45:36.342Z · LW · GW

Not a 1 in 7 chance that Christianity is right, a 1 in 7 chance that there is a God.

If there is a god and we're simplifying complex things to the point of "Christianity is right" or "Christianity is wrong", then Christianity is right and Dawkins is wrong. The point stands that Dawkins has not been acting consistently with the idea that there's a 1 in 7 chance that he's wrong about the big one.

 

The assumption here is that someone should have 100% certainty that they have a nose on their face

No, the point is the opposite. 

Not only should people not have 100% certainty that they have a nose, they also don't have that certainty -- even when they make unqualified statements like "I have a nose" in response to "What could convince you that you don't?"

That's why they'll say things like "A mirror would show me to have a nose" rather than "Even if a mirror shows no nose, I'll still know I have one".

So my problem is that she’s treating something that’s still being researched with this same level of certainty.

Yes, I see that. And you could be right that she's overconfident here.

However, ironically, you're being overconfident here. The fact that it's still being researched is not enough to prove a thing to be unknowable to those who have looked at the data and know how to properly analyze it. Read That Alien Message for an intuition pump about how far things can be taken in principle.

Her confidence being higher than you think should be possible means one of two things (or some combination). Either she's irrationally confident, or she's calibrated and better at discerning the truth than you realize can even be done. If you jump straight from "(S)he is more confident than I'd expect someone to be" to concluding "They're being irrational" without first examining and ruling out "They know things I don't", you are going to systematically throw away the perspectives that matter most.

Comment by jimmy on Covid 7/22: Error Correction · 2021-07-30T00:37:16.834Z · LW · GW

If you see a plane crash reported and don’t know that crashes always get reported, it’s good news because you learn crashes are rare enough to be news.

[...]

Then again, there was also this, in the comments last week:

 

For the record, the wedding I went to did end up in the news. 
 

Comment by jimmy on A reasonable debate about Ivermectin · 2021-07-29T22:26:29.323Z · LW · GW

But effectively assigning a 100% probability to big questions (there's no chance I'm wrong, so there's nothing that will change my mind) is a huge red flag for me. Bill Nye isn't 100% certain that creationism is wrong. Richard Dawkins isn't 100% certain that God doesn't exist

 

 

 

That's not what "assigning 100% probability" looks like. Look at anticipated behaviors, rather than what people claim to believe in.

Imagine, if instead of talking about a (seemingly, to you) difficult question like "Does ivermectin work?", it was a painfully obvious one like "Do you have a nose?". If she had been asked "What would convince you that you don't have a nose?" and she responded "I have a nose", would not the subtext obviously be "and I'm not going to entertain your motivated attempts to gaslight me into questioning what I can see clear as day"? Would not that "Fuck you, you need to show me something surprising before I even take you seriously enough to engage with your (seemingly) nonsense hypotheticals" response seem fitting?

It's not "assigning 100% probability" just because they don't take your ideas seriously. It's "assigning 100% probability" when no matter the evidence, the needle on their belief doesn't move. If you were to say "What if I gave you a mirror and it showed you to have no nose?", would she say "It wouldn't show that", or would she say "I would believe it's fancy CGI on a screen disguised as a mirror"? Because only the latter is claiming to ignore the evidence; the former is just predicting that the test won't show that. If you were actually hand her a mirror so that she could see her missing nose and the bleeding wound where it was, would stick to that rationalization or would she exclaim "OMG what happened to my nose!"? Because only the latter is actually ignoring the evidence, and the state of mind that produces "I have a nose" in almost everyone when asked that question would also give "OMG!" if it manages to be shown to be wrong.


She still might be overconfident, but you can't jump from "She thinks the chance of being wrong is too small to continue thinking about" to "She's *over*confident" until you can demonstrate what her proper level of confidence should be. And while you might not buy her arguments, I do expect that she has thought about it and would be ready to argue the case that the proper level of confidence is high.

Absent evidence that she hasn't actually thought things through, it strikes me less as a "red flag" and more as "not significant evidence" -- or rather, "evidence that either the question is overdetermined OR that she's crazy". It really looks like a failure of communication on both sides here.

She should have explained herself better: The equivalent of "Well, the thing is that all of the things which could prove to me that I have no nose are already tests I've done and they've all come back showing that I have a nose. I looked in the mirror right before I came here, for example. I'm feeling it with my fingers right now, as you can see. Theoretically it's possible to be wrong about anything, but it'd have to somehow be in a way that causes my nose to consistently show up where I expect it to be, and I can't imagine any realistic way for that to happen, can you?"

On the other side, they could have understood that this is what she (most likely) meant, responded to "I have a nose" with "So you would then expect to see it if I handed you a mirror, right?", and tried to do the work themselves to find a possibility that she hasn't already thought through, helping clarify what her model actually is, what it's based on, and what predictions it makes.

It's also worth noting that Dawkins claiming he "doesn't absolutely know" and is "a six out of seven" isn't really demonstrative of the virtue of humility either. "Nothing is 100% certain" is what you're supposed to say when you're on team science, but it doesn't mean you're actively tracking that remaining uncertainty or doing anything other than giving lip service to the right ideas for that crowd. Watch any Dawkins debate and ask yourself whether he's really acting the way you'd expect him to act if he thought the betting odds on him being wrong were really as bad as 1 in 7. If *I* thought there was anywhere near a 1 in 7 chance of Christianity being more right than I was, I'd sure as hell be a little more respectful of it than Dawkins is!

Comment by jimmy on Covid 7/15: Rates of Change · 2021-07-18T06:55:18.250Z · LW · GW

and then only two of those continuing to pass it on at all

 

I'm not sure this is the case. I think there was significant stagger between when the girls started showing symptoms, and I don't know how many have gotten sick since then or who will get sick soon. My wife just started showing symptoms today, for example (though that's not evidence of vaccinated->vaccinated transfer because kid).

If I had to bet, I'd guess that it was sub-replacement among the vaccinated; I just don't have all the data in front of me yet.

Comment by jimmy on Covid 7/15: Rates of Change · 2021-07-18T06:25:16.004Z · LW · GW

I got my first shot of Pfizer in September and the second in February. I don't know the exact answer for everyone else, but at least one J&J and I think mostly mRNA, probably at about the time it became available to 30 year olds in the states. 

Comment by jimmy on Covid 7/15: Rates of Change · 2021-07-16T02:34:32.497Z · LW · GW

Any given observation can of course be squared with any given level of effectiveness, because randomness, and also because superspreaders are a thing - so the outcomes can be highly correlated.

Yeah, depends on what you mean by "hard" exactly, and what your hypotheses are. If the idea is that the vaccine makes 93% of people completely immune and leaves 7% effectively unvaccinated, then getting at least 6 of 14 should be pretty rare -- especially if you expect those 7% to be disproportionately old and unhealthy people and the 14 are all fairly young and healthy. Not impossible, but I'm squinting my eyes and double checking the methodology on anything that implies it was "just a fluke".
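To make "pretty rare" concrete, here's a minimal back-of-the-envelope sketch (my own code, under deliberately generous assumptions that aren't in the original: all 14 attendees were heavily exposed, and only the assumed 7% of non-responders can get infected at all):

```python
# Hypothetical model: the vaccine fully protects 93% of recipients and
# leaves 7% "effectively unvaccinated"; assume exposure was heavy enough
# that every unprotected attendee got infected.
from math import comb

n, p = 14, 0.07  # 14 attendees, assumed 7% non-responder rate

# P(at least 6 of the 14 are non-responders) -- an upper bound on
# P(at least 6 infections) under this model
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6, n + 1))
print(f"P(>=6 unprotected out of {n}) ~= {tail:.1e}")  # ~2.2e-4
```

Even with every assumption stacked in the model's favor, the observed 6-of-14 outcome comes out at roughly a 1-in-5000 event under that model, which is why "just a fluke" deserves the squint.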

Explanations that assume some correlation are easier to buy, but I'm not sure what it could be correlated with that wouldn't have also shown up enough pre-delta that we'd have heard of some superspreader event where vaccinated people were passing covid to other vaccinated people.

Hope no one got seriously ill. From the lack of mention of any adverse effects, I am guessing everyone involved is fine? 

Too early to tell, but I think everyone will be fine. So far no one high risk has gotten sick.

 

EDIT: I have some more anecdotal evidence. My cousin just told me that his friend has nearly the exact same story with his wife's bachelorette party. 6/15 so far with symptoms and positive tests, all vaccinated. There's some selection bias there since I likely wouldn't have heard about it if it were only 1/15, but not enough to make it expected under a "just a fluke" model.

Comment by jimmy on Covid 7/15: Rates of Change · 2021-07-15T22:52:48.484Z · LW · GW

My wife and I went to a wedding last week, and 6 of the 14 girls who went to the bachelorette party (all vaccinated) have since tested positive for covid. I and another one of the guys (also vaccinated) got it a few days later, most likely from one of those girls (and not our partners, who both tested negative).

It's just a few cases, but nonetheless seems hard to square with the lower bound of vaccines still working super well to prevent symptomatic infection.

Comment by jimmy on How do you deal with people cargo culting COVID-19 defense? · 2021-07-07T21:13:56.381Z · LW · GW

You make it sound like I'm not doing anything that's stressful.

 

I'm not commenting on what fights you pick or how stressful they are, and wouldn't be as presumptuous as to think I know which fights you should be picking. 

 

Let's back up a bit. Your original post asks:

How do you deal emotionally with [people cargo culting COVID-19 defense]? Do you become cynic?

And in a comment further up this chain you say:

I don't see what's so hard about saying: "Please only speak in public transport when necessary to reduce the chance of infecting other people"

This reads more like an expression of frustration about the lack of such messaging, rather than an expression of curiosity about why that message doesn't get pushed by the government.

The answer to your question is that the way I emotionally deal with things like this is to try to notice when I'm getting frustrated and whether getting frustrated is actually going to get me what I want. 

I don't see expressions of indignation as a useful tool for improving governance (in this context, at least), so when I think forward about what it's going to achieve, it kinda kills my motivation to be frustrated at the government. It does require accepting that the government kinda sucks relative to what I would like to see, but they do, and I don't see it changing on its own, so it seems worth accepting.

I'd much rather ask "Why" and be curious. When I do, the answer I get is "Oh yeah, it's not actually trivial. Here are the difficulties involved". 

To the extent that it's really difficult, it helps explain why the government doesn't "just" do that, which helps to alleviate any sense that the government "shouldn't be fucking up easy things".

To the extent that I realize it's hard for other people but easy (or just achievable) for me, I try to actually go do it and teach others how to do it -- because that's what needs to be done, and apparently there haven't been enough people teaching and doing these things.

To the extent that it seems like it'd actually be easy for other people too and they're still not doing it, then the thread of curiosity has to go deeper and you have to figure out what's causing people to not do things they could and "should" do.

In short, frustration works best as a transient state, and as a sign that something isn't working -- much like tires slipping in a car would be. The way I emotionally handle this kind of thing, to the extent that I handle it well, is by noticing frustrations as signals that what I'm doing isn't working, and redirecting that into curiosity about why things actually are the way they are and how I would like to respond.

That's not true. There are many ways to change government policy without getting directly elected.

"Directly change", not "directly elected". You can certainly influence government policy without getting elected, but I would consider those to be "indirect".

I'm not extroverted and pick different fights than you, but it's not like I'm just doing nothing. Given my resources, I don't think there's a fight about people speaking in trains that I can effectively fight.

I'm not very extroverted either, so I absolutely get where you're coming from. If that's not a fight you can effectively fight, then it's not a fight you can effectively fight. No pressure from me.

If you're still feeling a conflict between "this should be easy" and "the government isn't doing it", then trying it yourself (or at least figuring out what you'd have to do in order to be effective) might help you figure out why other people aren't doing it effectively either, and that tends to make things emotionally easier.

Maybe it's because you feel like it should be easy for them but not for you?

Comment by jimmy on How do you deal with people cargo culting COVID-19 defense? · 2021-07-06T23:21:24.933Z · LW · GW

You can't (directly) change government policy without getting elected, but you can work to shape social norms around you. You're not in a universally recognized position of authority, but neither is the government, and you have earned some respect and know how to earn more from people around you.


When the pandemic was first kicking off and people weren't yet taking it seriously, I was actively giving social permission to friends to prepare for a pandemic, and to medical professionals to start wearing N-95 masks at work. It was clear to me that no one wanted to be the weirdo "freaking out" and "overreacting", and social permission was needed, so I tried to give it to anyone I thought I could reach, and to give them permission and motivation to extend the permission further. It's hard to tell how much effect I really had, but it basically seemed to work on the scale I could manage. With people close to me, I *know* their attitude and behaviors changed as a direct result of talking to me. I know at least one doctor started taking the need for N-95 masks more seriously after talking to someone I persuaded, and I get the impression that many other healthcare workers were given a good nudge in that direction from other people that I talked to.


It just wasn't trivial or stress free.

Comment by jimmy on How do you deal with people cargo culting COVID-19 defense? · 2021-07-06T18:09:35.275Z · LW · GW

You don't have to convince the people who are annoyed by other people's phone calls not to talk. You have to convince the people having the phone calls not to have them. And you have to convince the people silently watching in annoyance to speak up and tell people to get off the phone. 

If you think it's easy, give it a shot.

Comment by jimmy on How do you deal with people cargo culting COVID-19 defense? · 2021-06-24T19:26:47.933Z · LW · GW

It seems like in your model, what happens is that the authorities switch from "Please wear masks" to "Please wear masks and avoid unnecessary talking", people nod along in unison, and a new social norm is created which functions similarly to the norms against talking too much in a library or movie theater.

I don't think it's that simple. For one, I don't think people would get it. I don't think many people are going to say "Oh yeah, from that one simple sentence I now understand exactly how much talking increases the risk of spreading covid, how important it is, what level we should tolerate, how we should punish it, and how we should deal with people who are too lax/harsh in punishing others/etc". We're still struggling to coordinate on "How big a deal is covid?", on norms for wearing masks, and on enforcing mask wearing. Given how much more inconvenient staying silent is, I wouldn't expect norms against talking to be easier to coordinate on.

If you remember back to the beginning of the pandemic, no one knew what to make of this thing, and so everyone was slow, trying to wait and see what other people said before deciding what to think. This has all the obvious problems, but it's also worth noting that when people try to "think for themselves" you don't get a bunch of good answers, you get varied answers and dumb answers. A good example of this is Eddie Bravo, who is sometimes described as a brilliant jiu jitsu mind and has added a lot to the sport, but at the same time believes obviously crazy/dumb things. When you look at the debates he gets into about flat earth stuff, he actually makes better arguments than his round earth opponents, because he's actually thought things through (albeit poorly) and his opponents are stuck trying to rationalize on the fly. Thinking things through from first principles only works when social consensus is less thought through than that, and while it worked for him in a new and niche sport, it's not that great for problems as tricky as "Is the earth round?", let alone "How should we handle a pandemic?". This sort of "follow the herd" mentality is necessary.

I don't know about you, but I find creating this sort of social consensus (and defying any existing consensus) to be stressful and I expect that most others do as well. In the beginning, for example, I remember really not liking having to be the one taking people from the comfortable mental space where they wanted to be and conveying to them that there's very likely a full blown pandemic coming that the entire world is unprepared for, and that they should probably start thinking ahead and planning accordingly. There were plenty of people I didn't even bother telling, because I didn't feel like I had enough social credit for it to be worth the effort. Once "We're in a pandemic, duh" had been established, it became trivial to convey to these same people "Keeping a door open and a fan on is probably much more important than washing surfaces", but that's because it's a smaller deviation from the accepted narrative and one that is emotionally "cheap" for them to consider.

When I put myself in the shoes of the authority having to say "Please avoid unnecessary talking", I anticipate getting a lot of pushback. I anticipate people trying to frame me as a scaremonger, trying to ruin people's social lives for my own political ends. I anticipate other people agreeing with me, either silently or in a way that further polarizes things rather than helps. I anticipate it feeling qualitatively more similar to breaking through the ice to go swimming in a frozen lake than settling into a nice warm hot tub.

Ice swims can still feel good with the right incentives and mindsets, and it would definitely be awesome if our leadership were more competent and motivated to find out what is the actual best course of action and communicate it credibly and understandably to a partially hostile population without raising their own defensive shields and mucking things up. At the same time, I think it's pretty understandable why even trivial things like "Please don't talk more than necessary" don't get asserted/communicated.

Comment by jimmy on The reverse Goodhart problem · 2021-06-09T21:23:38.570Z · LW · GW


This is one of the times it helps to visualize things to see what's going on.


Let's pick target shooting, for example, since it's easy to picture and makes for a good metaphor. The goal is to get as close as possible to the bullseye, and for each inch you miss by, you score one less point. Visually, you see a group of concentric "rings" around the bullseye which score fewer and fewer points as they get bigger. Simplifying to one dimension for a moment, V = -abs(x).

However, it's not easy to point the rifle right at the bullseye. You do your best, of course, and it's much, much closer to the bullseye than any random orientation would be, but maybe you end up aiming one inch to the right, and the more accurate your ammo is, the closer you get to this aim point of x=1. This makes U = -abs(1-x), or -abs(1-x)+constant or whatever. It doesn't really matter, but if we pick -abs(1-x)+1, then U = V when you miss sufficiently far to the left, so it fits nicely with your picture.

When we plot U, V, and 2U-V, we can see that your mathematical truth holds, and it looks immediately suspicious. Going back to two dimensions, instead of having nice concentric rings around the actual target, you're pointing out that if the bullseye had instead been placed exactly where you ended up aiming, and if the rings were distorted and non-concentric in this particular way, then V' would actually increase twice as fast as U.
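For concreteness, here's a minimal sketch of those curves (my own code; it just plugs in the -abs(x) and -abs(1-x)+1 forms chosen above, with V' = 2U - V being the construction from the post):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 3, 500)
V = -np.abs(x)           # true value: bullseye at x = 0
U = -np.abs(1 - x) + 1   # proxy: the aim point ended up at x = 1
Vp = 2 * U - V           # the post's constructed V' = 2U - V

for y, label in [(V, "V"), (U, "U"), (Vp, "V' = 2U - V")]:
    plt.plot(x, y, label=label)
plt.legend(); plt.xlabel("x"); plt.show()
```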

But it's sorta missing the point. For one, the absolute scaling is fairly meaningless in the first place, because it brings you towards the same place anyway. More importantly, you don't get the luxury of drawing your bullseye after you shoot. If you had been aiming for V' in the first place, you almost certainly wouldn't have managed to pull off a proxy as perfect as U. (In general, V' and U don't have to line up in the exact same spot like this, but in those cases you still wouldn't have happened to miss V' in this particular way.)


Goodhart has nothing to do with human values being "funny"; it has to do with the fundamental difficulty of setting your sights in just the right place. Once you're within range of the distance between your proxy and your actual goal, it's no longer guaranteed that getting closer to the proxy gets you closer to your goal, and it can actually take you further away -- and if it takes you further away, that's bad. If you did a good job on all axes, maybe you end up hitting the 9 ring, and that's good enough.

The thing that makes it an "inevitable disaster" rather than just a "suboptimal improvement" is when you forget to take a whole dimension into account. Say, you aim your rifle well in azimuth and elevation, but instead of telling the bullet to stop at a certain distance, you tell it to keep going in that direction forever, and it manages to succeed well beyond the target range.

Comment by jimmy on For mRNA vaccines, is (short-term) efficacy really higher after the second dose? · 2021-05-02T17:07:09.026Z · LW · GW

(Technical point: the phase 3's still were randomized controlled trials, they just weren't double-blind. But double-blind is the relevant characteristic when asking whether the different results are due to partying Israelis, so that's fine.)

 

Yeah, the part I was objecting to there was "the placebo group was given a fake injection and everything". Not only did they do far less than "everything" that is supposed to go with giving fake injections, they also failed to give me a fake injection! My second "placebo" was a real vaccine and my dad's second "vaccine" was a placebo!

Comment by jimmy on For mRNA vaccines, is (short-term) efficacy really higher after the second dose? · 2021-04-29T02:22:13.181Z · LW · GW

Shame on them for misreporting. It was not double blind. 

I wouldn't put it past this guy not to have known anyway, but he was 2 for 2 in accidentally hinting at the right thing (one vaccine, one placebo).

Comment by jimmy on For mRNA vaccines, is (short-term) efficacy really higher after the second dose? · 2021-04-25T23:07:41.513Z · LW · GW

The issue is that this number captures efficacy starting on the day you receive the vaccine (or sometimes 7 days later)

 

Do you know how the efficacy on a given day is defined? I'm assuming it's going by the date of first reporting symptoms (because you can't always know when the exposure was), but it makes a big difference if you're thinking "when am I safe to expose myself to covid?".

But it's equally important to note that the phase 3's were true randomized controlled trials - the placebo group was given a fake injection and everything

 

I did the Pfizer phase 3 trial, and this isn't really true. 

The side effects are clear enough that without an active placebo, calling it "blind" is kinda a joke in the first place. On top of that, people in the waiting room were talking about how you can tell if you're getting the real vaccine by looking at the syringe. And on top of that, the doctor who gave me the injection basically told me that I got the real thing ("Keep wearing your mask, we don't know yet if these work"), and said something equally revealing to at least one other person I know who did the trial.

Comment by jimmy on Homeostatic Bruce · 2021-04-20T17:53:29.835Z · LW · GW

Could you clarify how those things are selected for in training? I am actually struggling to imagine how they could be selected for in a BUD/S context — so sharing would be helpful!

(Army special forces, not SEALs)

Scrupulosity: They had some tough navigation challenges where they were presented with opportunities to attempt to cheat, such as using flashlights or taking literal shortcuts, and several were weeded out there.

Reliability: They had peer reviews, where the people who couldn't work well with a team got booted. Depends on what exactly you mean by "reliability", but "we can't rely on this guy" sounded like a big part of what people got dinged for there.

"Viewing life as a series of task- oriented challenges" seems like a big part of the attitude that my friend had that helped him do very well there, even if a lot of it comes through as persistence. Some of it is significantly different though, like in SERE training where the challenge for him wasn't so much "don't quit" so much as it was "Stop giving your 'captors' attitude you dummy. Play to win.".

I'm confused — it sounds like your friend enjoyed effects both to that magnitude and in that direction. Am I misunderstanding?

Yeah, that was poorly explained, sorry about that. The "magnitude" is less than it seems at a glance for a couple reasons. He wasn't a "pot smoking slacker" because he lacked motivation to do anything, he was a "pot smoking slacker" because he didn't have respect for the games he was expected to play. When you look at him as a 12 year old kid, you wouldn't think of him joining the military and waking up early with a buzz cut and saying "Yes sir!". But when you hear he joined the special forces in particular, it's not "Wow! To think he could grow up to excel and take things seriously!", it's "Hm. The military aspect is a bit of a twist, but it makes sense. Kid's definitely the right kind of crazy".

He was always a bit extreme, it's just that the way it came out changed -- and the military training was at least as much an effect of the change as it was a cause. It didn't come out in studying hard for straight As in college or anything that externally obvious, but there were some big changes before he joined the military. For example, he ended up deciding that there was something to the Christian values his parents tried (seemingly in vain) to instill in him, and took a hard swing from being kinda a player to deciding that he wasn't going to have sex again until he found the woman he was going to marry and have children with (I laughed at him at the time, but he was right).

The reason I say "not necessarily in that direction" is that they weren't simply trying to push in a consistent direction to maximize traits they deem desirable. One of the things they told him they liked about the results of his personality test was that he had a bit of a rebellious "fuck authority" streak -- but also that in his case, he should probably tone it down a bit because he was over the top (and he seemed to agree). The only part of the training I can think of that's directly relevant to this is the SERE thing, and that was more of "At least learn what it's like to be obedient when you need to be" than anything else (and certainly wasn't "do it unthinkingly as a terminal good").

Also, if he did enjoy such effects as you describe, do you have any hypotheses for the mechanism? Given that such radical changes are quite rare naturally, we'd expect there to be something at play here right?


I feel like a lot of the changes have to do with "growing up and figuring out what he wants to do with his life", and a lot of the rest following more or less naturally from valuing things differently once he knew what he was actually aiming for and what it was going to take. If you wanted to run marathons for a living, and you had to run a marathon in a certain time in order to qualify for the job, "how much of a runner you are" would probably change overnight because you would train in anticipation.

That's not to say that the training itself wasn't necessary or didn't exert more force too. There's a particular moment he told me about, when things were approaching maximum shittiness. He was somewhat hurt from earlier training, carrying more than his share of the weight, already fatigued with much left to do, no guarantee of success and all that, and to top it off, it started raining unexpectedly. It's moments like that which are hard to properly prepare for in advance, and which really make you question your choices and whether this is actually what you want to do with your life. Because it's not just a test you have to pass to get a comfy job; that is what the job is. So the question the training shoved his nose in and forced him to answer honestly was "This is what the job you're asking for is really like. Do you want this?". At the point he realized that, he started laughing, because for him the answer was "Yes. I want this miserable shit".

I think the mechanism is best understood as giving people a credible and tangible requirement to grapple with so they can't fail to motivate themselves and can't fail to understand what's needed -- and of course, selecting only for the people who can make it through. Throw someone in the same training camp when they don't want to be there, and I don't think you get positive results. Take people who can't meet requirements and I think you're likely to end up teaching the wrong thing there too. But if your whole culture enforces "No dating until you wear the bullet ant gloves without whining", then I think you get a bunch of men who can handle physical pain without breaking down because there was never a choice to not suck it up and figure it out.

Comment by jimmy on Homeostatic Bruce · 2021-04-13T04:57:05.243Z · LW · GW

I suspect it would only really be compelling to those who personally witnessed the rapid shift in personality consequent to elite military training in an acquaintance.

 

I kinda fit that. I know someone who went from a "pot smoking slacker" to "elite and conscientious SOF badass", which kinda looks like what you're talking about from afar. 

However, my conclusions from actually talking to him about it all before, during (iirc?), and after are very different. The training seems to be very, very much about selection: everyone who got traumatized was weeded out, and things like being "reliable, and scrupulous, viewing life as a series of task-oriented challenges" were all selected for.

The training did have some effects, but not to that magnitude, not by that mechanism, and not necessarily even in that direction.

Comment by jimmy on On Changing Minds That Aren't Mine, and The Instinct to Surrender. · 2021-03-14T19:46:57.947Z · LW · GW

Originally, I had earned a reputation on the server for my patience, my ability to defuse heated disagreements and give everyone every single chance to offer a good reason for why they held their positions. That slowed and stopped. I got angrier, ruder, more sarcastic, less willing to listen to people. Why should I, in the face of a dozen arguments that ended without any change?  [...] What’s the point of getting people mad? [...] what’s the point in listening to someone who might be scarcely better than noise


This seems like the core of it right here.

You started out decently enough when you had patience and were willing to listen, but your willingness to listen was built on expectations of how easily people would change their minds or offer up compelling reason to change your own, and those expectations aren't panning out. You don't want to have to give up on people -- especially those who set out to become rational -- being able to change their minds, or on your own ability to be effective in this way. Yet you can't help but notice that your expectations aren't being fulfilled, and this brings up some important questions on what you're doing and whether it's even worth it. You don't want to "just give up", yet you're struggling to find room for optimism, and so you're finding yourself just frustrated and doing neither "optimism" nor "giving up" well.

Sound like a fair enough summary?

There is, of course, the classic solution: Get stronger. If I could convince them I was right or get convinced that they’re right, that would nicely remove the dissonance.

The answer is in part this, yes.

It is definitely possible to intentionally steer things such that people either change your mind or change their own. It is not easy.

It is not easy in two different ways. One is that people's beliefs generally aren't built on the small set of "facts" that they give to support them. They're built on a whole lot more than that, a lot of it isn't very legible, and a lot of the time people aren't very aware of or honest about what their beliefs are actually built on. This means that even when you're doing things perfectly* and making steady progress towards converging on beliefs, it will probably take longer than you'd think, and this can be discouraging if you don't know to expect it.

The other way it's hard is that you have to regulate yourself in tricky ways. If you're getting frustrated, you're doing something wrong. If you're getting frustrated and not instantly pivoting to the direction that alleviates the frustration, you're doing that wrong too. It's hard to even know what direction to pivot sometimes. Getting this right takes a lot of self-observation and correction so as to train yourself to balance the considerations better and better. Think of it as a skill to be trained.

* "Perfectly" as in "Not wasting motion". Not slipping the clutch and putting energy into heat rather than motion. You might still be in the wrong gear. Even big illegible messes can be fast when you can use "high gear" effectively. In that case it's Aumann agreement about whose understanding to trust how far, rather than conveying the object level understanding itself.

And, of course, there is the psychological option- just get over it.

The answer is partly this too, though perhaps not in the way you'd think.

It's (usually) not about just dropping things altogether, but rather integrating the unfortunate information into your worldview so that it stops feeling like an alarm and more like a known-issue to be solved.

Hardly seemed appropriate to be happy about anything when it came to politics. Everyone is dreadfully underinformed, and those with the greatest instincts towards kindness and systemic changes may nevertheless cause great harm

This, for example, isn't an "Oh, whatever, NBD". You know how well things could go if people could be not-stupid about things. If people could listen to each other, and could say things worth listening to. If people who were about "kindness" knew they had to do their homework and ensure good outcomes before they could feel good about themselves for being "kind". And you see a lot of not that. It sucks.

It's definitely a problem to be solved rather than one to be accepted as "just how things are". However, it is also currently how things are, and it's not the kind of problem that can be solved by flinching at it until it no longer exists to bother us -- the way we might be able to flinch away from a hot stove and prevent "I'm burning" from being a true thing we need to deal with.

We have to mourn the loss of what we thought we had, just as we have to when someone we cared about doesn't get the rest of the life we were hoping for. There's lots of little "Aw, and that means this won't get to happen either", and a lot of "But WHY?", until we've updated our maps and we're happy that we're no longer neglecting to learn lessons that might come back to bite us again.

Some people aren't worth convincing, and aren't worth trying to learn from. It's easier to let those slide when you know exactly what you're aiming for, and what exact cues you'd need to see before it'd be worth your time to pivot.


With Trump in office, I struggled to imagine how anyone could possibly change their view. If you like him, any argument against him seems motivated by hatred and partisanship to the point of being easily dismissed. If you don’t, then how could you possibly credit any idea or statement of himself or his party as worthwhile in the face of his monumental evils.

Let's use this for an example.

Say I disagreed with your take on Trump because I thought you liked him too much. I don't know you and you don't know me, so I can't rest on having built a reputation for not being a hateful partisan and instead thinking things through. With that in mind, I'd probably do my best to pace where you're coming from. I'll show you exactly how cool all of the cool things Trump has done are (or on the other side, exactly how uncool all the uncool things are), and when I'm done, I'll ask you if I'm missing anything. And I'll listen. Maybe I'm actually missing something about how (un)cool Trump is, even if I think it's quite unlikely. Maybe you'll teach me something about how you (and people like you) think, and maybe I care about that -- I am choosing to engage with you, after all.

After I have proven to your satisfaction that not only do I get where you're coming from, I don't downplay the importance of what you see at all, do you really believe that you'd still see me as "a hateful partisan" -- or on the other side, "too easily looking past Trump's monumental evils"? If you do slip into that mode of operation and I notice and stop to address it with an actual openness to seeing why you might see me that way, do you think you'd be able to continue holding the "he's just a hater" frame without kinda noticing to yourself that you're wrong about this and weakening your ability to keep hold of this pretense if it keeps happening?

Or do you see it as likely that you might be curious about how I can get everything you do, not dismiss any of it, and still think you're missing something important? Might you even consider it meaningful that I don't come to the same conclusion before you understand what my reasoning is well enough that I'd sign off on it?

You still probably aren't going to flip your vote in a twenty minute conversation, but what if it were more? Do you think you could hang out with someone like that for a week without weakening some holds on things you were holding onto for less than fully-informed-and-rational reasons? Do you think that maybe, if the things you were missing turned out to be important and surprising enough, you might even change your vote despite still hating all the things about the other guy that you hated going in?


The question is just whether the person is worth the effort. Or perhaps, worth practicing your skills with.

Comment by jimmy on Making Vaccine · 2021-02-06T19:10:51.513Z · LW · GW

Being as charitable as the facts allow is great. Starting to shy away from some of the facts so that one can be more charitable than they allow isn't.

The whole point is that this moderator's actions aren't justifiable. If they have a "/r/neoliberal isn't the place for medicine, period" stance, that would be justifiable. If the mod deleted the post and said "I don't know how to judge these well, so I'm deleting it to be safe, but it's important if true, so please let me know why I should approve it", then that would be justifiable as well, even if he ultimately made the wrong call there too.

What that mod actually did, if I'm reading correctly, is make an active claim that the link is "misinformation" and then ban the person who posted it without giving any avenue to be proven wrong. Playing doctor by asserting truths about medical statements when one is not competent or qualified to do so, getting it wrong when getting it wrong is harmful, and then shutting down the avenues where your mistakes could be shown, is not justifiable behavior. It's shameful behavior, and that mod ought to feel very bad about themselves until they correct their mistakes and stop harming people out of their own hubris. The charity that there is room for is along the lines of "Maybe the line about misinformation was an uncharitable paraphrase rather than a direct quote" and "Hey, everyone makes mistakes, and even mistakes of hubris can be atoned for" -- not justifying the [if the story is what it seems to be] clearly very bad behavior itself.

Comment by jimmy on Making Vaccine · 2021-02-04T19:45:35.463Z · LW · GW

I think this is inaccurately charitable. It's never the case that a moderator has "no way" to know whether it checks out or not. If "Hey, this sounds like it could be dangerous misinfo, how can I know it's not so that I can approve your post?" is too much work and they can't tell the good from bad within the amount of work they're willing to put in, then they are a bad moderator -- at least, with respect to this kind of post. Even if you can't solve all or even most cases, leaving a "I could be wrong, and I'm open to being surprised" line on all decisions is trivial and can catch the most egregious moderation failures.

Maybe that's acceptable from a neoliberal moderator since it's not the core topic, but the test is "When confronted with evidence that they can correctly evaluate as showing them to have been wrong, do they say 'oops!' and update accordingly, or do they double down and make excuses for doing the wrong thing and not update". I don't know the mod in question, but the former answer is the exception and the latter is the rule. If the rejection note was "Medical stuff isn't allowed because I'm not qualified to sort the good from the bad", then I'd say "fair enough". But actively claiming "Spreading dangerous misinfo!" is rarely done with epistemic humility out of necessity and almost always done out of the kind of epistemic hubris that has gotten us into this mess by denying that there's an upcoming pandemic, denying that masks work and are important, and now denying that we can and should dare to vaccinate in ways that deviate from the phase 3 clinical trials. This kind of behavior is hugely destructive and is largely the result of enabled laziness, so it's really not something we ought to be making excuses for.

Comment by jimmy on Covid 1/28: Muddling Through · 2021-01-29T18:13:21.483Z · LW · GW

Running some quick numbers on that Israel "Stop living in fear after being vaccinated" thing: it looks like Israel's current 7-day average is about 8000 cases/day, so with a population of 9 million, we should expect about 110 cases/day out of 125k vaccinated if vaccines did nothing and people didn't change their behavior. What they actually got was 20... over what time period? Vaccines clearly work to a wonderful extent, but is it really to the "Don't think twice about going out partying and then visiting immune-compromised and unvaccinated grandma" level?
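For reference, the arithmetic behind that ~110/day baseline (a minimal sketch using only the figures quoted above; it assumes the vaccinated group would otherwise get infected at the same per-capita rate as the country overall):

```python
# Expected daily cases among the 125k vaccinated if vaccines did nothing
# and behavior didn't change, at Israel's national per-capita rate.
cases_per_day = 8_000
population = 9_000_000
vaccinated = 125_000

expected = cases_per_day / population * vaccinated
print(f"~{expected:.0f} expected cases/day among the vaccinated")  # ~111
```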

On an unrelated note, aren't these mRNA vaccines supposed to produce a lot more antibodies than COVID? Shouldn't that show up on a COVID antibody test? Because in my experience they did not.

Comment by jimmy on Everything Okay · 2021-01-24T20:13:17.458Z · LW · GW

"G" fits my own understanding best: "Not Okay" is a generalized alarm state, and the ambiguity is a feature, not a bug.

(Generally) we have an expectation that things are supposed to be "Okay", so when they're not, this conflict is uncomfortable and draws attention to the fact that "something is wrong!". What exactly it takes to provoke this alarm into going off depends on the person/context/mindset, because it depends on what (they realize) they haven't already taken into account, and that's kinda the point. For example, if you're on a boat and notice that you're on a collision course with a rock, you might panic a bit and think "We have to change course!!!", which is an example of "things not being okay". However, the driver might already see the rock and is Okay because the "trajectory" he's on includes turning away from the rock, so there's no danger. And of course, other passengers may be in Okay Mode because they fail to see the rock, or because they kinda see the rock but are averse to being Not Okay and therefore try to ignore it as long as possible.

In that light, "Everything is Okay" is reassurance that the alarm can be dismissed. Maybe it's because the driver already sees the rock. Maybe it's because our "boat" is actually a hovercraft which will float right over the rock without issue. Maybe we actually will hit the rock, but there's nothing we can do to avoid it, and the damages will be acceptable. Getting people back into Okay Mode is an exercise in getting them to believe that one of these is true. You don't necessarily have to specify which one if they trust you, and if the details are important, that's what the rest of the conversation is for.

The best way to get the benefits of ‘okay’ in avoiding giant stress balls, while still retaining the motivation to act and address problems or opportunities is to "just" engage with the situation without holding back.

Okay, so we're headed for a rock, now what? If that's alarming then it's alarming. Are we actually going to hit it if we simply dismiss the alarm and go back to autopilot? If so, would that be more costly than the cost of the stress needed to avert it? What can we actually do to stop it? Can we just talk to the driver? Is that likely to work?

If that's likely to work and you're on track to doing that, then "Can we sanely go back to autopilot?" can evaluate as "yes" again and we can go back to Okay Mode -- at least until the driver doesn't listen and we no longer expect our autopilot to handle the situation satisfactorily. You get to go back to Okay Mode as soon as you've taken the new information into account and gotten back onto a track you're willing to accept over the costs of stressing more.


"The Kensho thing", as I see it, is the recognition that these alarms aren't "fundamental truths" where the meaning resides. They're momentary alarms that call for the redirection of one's attention, and the ultimate place that everything resolves to after doing your homework and integrating all the information is back to a state which calls for no alarms. That's why it's not "nothing matters, everything is equally good" or "you'll feel good no matter what once you're enlightened" -- it's just "Things are okay,  on a fundamental level alarms are not called for, behaviors are, and it's my job to figure out which. If I'm not okay with them that signals a problem with me in that I have not yet integrated all the information available and gotten back on my best-possible-track". So when your friend dies or you realize that humanity is going to be obliterated, it's not "Lol, that's fine", it's room to keep a drive to not only do something about it, a drive to stare reality in the face as much as you can manage, to regulate how much you stare at painful truths so that you keep your responses productive, and a desire to up one's ability to handle unpleasant conflict.

 How should one react to those who are primarily optimizing for being in Okay Mode at the expense of other concerns

Fundamentally, it's a problem of aversion to unpleasant conflict. Sometimes they won't actually see the problem here so it can be complicated by their endorsement of avoidance, but even in those cases it's probably most productive to ignore their own narratives and instead directly address the thing that's causing them to want to avoid.

Shoving more reasons to be Not Okay in their face is likely to trigger more avoidance, so instead of arguing "Here's how closing your eyes means you're more likely to fail to avoid the rock, and therefore kill everyone. Can you imagine how unfun drowning will be?" (which I would expect to lead to more rationalizations/avoidance), I'd focus on helping them be comfortable. More "Yeah, it's super unfun for things to be Not Okay, and I can't blame you for not wanting to do it more than necessary" / "Yes, it's super important to be able to regulate one's own level of Okayness, since being an emotional wreck often makes things worse, and it's good that you don't fail in that way".

Of course, you don't want to just make them comfortable staying in Okay Mode, because then there's no motivation to switch. So when there's a little more room to introduce unpleasant ideas without causing folding, you can place a little more emphasis on "it's good to sometimes fail at staying Okay", and on how completely avoiding stress isn't ideal or consequence-free either.

It's a bit of a balancing act, and more easily said than done. You have to be able to pull off sincerity when you reassure them that you get where they're coming from and that what they're doing really is better than the failure mode they fear, without "Not Okaying" at them by pushing "It's Not Okay that you feel Okay!". It's a lot easier when you can be Okay with them being in Okay Mode because they're Not Okay with being Not Okay -- partially just because externalizing one's alarms as a flinch is rarely the most helpful way of doing things, but also because if you're Okay you can "go first" and give them a proof of concept and reference example of what it looks like to stare at the uncomfortable thing (or uncomfortable things in general) and stay in Okay Mode. It helps them know "Hey, this is actually possible", and feel like you might even be able to help them get closer to it.


…or those who are using Okay as a weapon?

Again, I'd just completely disregard their narratives on this one. They're implying that if you're Not Okay, then it's a "you problem". So what? Make sure they're wrong and demonstrate it.

"God, it's just a little fib. Are you okay??"

"Not really. I think honesty about these kinds of things is actually extremely important, and I'm still trying to figure out where I went wrong expecting not to have that happen"

Or

"Yeah, no, I'm fine. I just want to make sure that these people know your history when deciding how much to trust you".

Comment by jimmy on In Defense of Twitter's Decision to Ban Trump · 2021-01-12T02:04:11.836Z · LW · GW

"Content moderation" is not always a bad thing, but you can't jump directly from "Content moderation can be important" to "Banning Trump, on balance, will not be harmful". 

The important value behind freedom of association is not in conflict with the important value behind freedom of speech, and it's possible to decline to associate with someone without it being a violation of the latter principle. If LW bans someone because they're [perceived to be] a spammer that provides no value to the forum, then there's no freedom of speech issue. If LW starts banning people for proposing ideas that are counter to the beliefs of the moderators because it's easier to pretend you're right if you don't have to address challenging arguments, then that's bad content moderation and LW would certainly suffer for it.

The question isn't over whether "it's possible for moderation to be good", it's whether the ban was motivated in part or full by an attempt to avoid having to deal with something that is more persuasive than Twitter would like it to be. If this is the case, then it does change the ultimate point.

What would you expect the world to look like if that weren't at all part of the motivation? 

What would you expect the world to look like if it were a bigger part of the motivation than Twitter et al would like to admit?

Comment by jimmy on Motive Ambiguity · 2020-12-16T06:58:37.757Z · LW · GW

The world would be better if people treated more situations like the first set of problems, and less situations like the second set of problems. How to do that?

 

It sounds like the question is essentially "How to do hard mode?".

On a small scale, it's not super intimidating. Just do the right thing and take your spouse to the place you both like. Be someone who cares about finding good outcomes for both of you, and marry someone who sees it. There are real gains here, and with the annoyance you save yourself by not sacrificing for the sake of showing sacrifice, you can maintain motivation to sacrifice when the payoff is actually worth it -- and to find opportunities to do so. When you can see that you don't actually need to display that costly signal, it's usually a pretty easy choice to make.

Forging a deeper and more efficient connection does require allowing potential for conflict so that you can distinguish yourself from the person who is only doing things for shallow/selfish reasons. Distinguish yourself by showing willingness to entertain such accusations, knowing that the truth will show through. Invite those conflicts when you have enough slack to turn it into play, and keep enough slack that you can. "Does this dress make my ass look fat?" -- can you pull off "The *dress* doesn't, no" and get a laugh, or are you stuck where there's only one acceptable answer? If you can, demonstrate that it's okay to suggest the "unthinkable" and keep poking until you can find the edge of the envelope. If not, or when you've reached the point where you can't, then stop and ask why. Address the problem. Rinse and repeat with the next harder thing, as you become ready to.

On a larger scale, it gets a lot harder. You can no longer afford to just walk away from anyone who doesn't already mostly get it, and you don't have so much time and attention to work with. There are things you can do, and I don't want to suggest that it's "not doable". You can start to presuppose the framings that you've worked hard to create and justify in the past, using stories from past experience and social proof to support them in the cases where you're challenged -- which might happen less often than you think, since the ability to presuppose such things without preemptively flinching defensively can be powerful subcommunication. You can start to build social groups/communities/institutions to scale these principles, and spread to the extent that your extra ability to direct motivation towards good outcomes allows you to out-compete the alternatives.

I just don't get the impression that there's any "easy" answer. If you want people to donate to your political campaign even though you won't play favorites like the other guy will, I think you genuinely have to be able to expect that your donors will be more personally rewarded by the larger total pie and the recognition of doing the right thing than they would be in the alternative where they donate to have someone fight to give them more of a smaller pie -- and are perceived however you let that be perceived.
 

Comment by jimmy on Number-guessing protocol? · 2020-12-07T18:30:34.454Z · LW · GW

This answer is great because it takes the problem with the initial game (one person gets to update and the other doesn't) and returns the symmetry by allowing both players to update. The end result shows who is better at Aumann updating and should get you closer to the real answer.

If you'd rather know who has the best private beliefs to start with, you can resolve the asymmetry in the other direction and make everyone commit to their numbers before hearing anyone else's. This adds a slight bit of complexity if you can't trust the competitors to be honest, but it's easily solved by either paper/pencil or everyone texting their answer to the person who is going to keep their phone in their pocket and say their answer first.
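To make the commitment step concrete -- a minimal sketch of one way to do it digitally, using a hash commitment (my own substitution here; the paper/pencil and texting versions above work just as well):

```python
import hashlib
import secrets

def commit(answer: str) -> tuple[str, str]:
    """Return (commitment, nonce). Publish the commitment; keep the nonce secret."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{nonce}:{answer}".encode()).hexdigest()
    return digest, nonce

def reveal_ok(commitment: str, answer: str, nonce: str) -> bool:
    """Verify that a revealed (answer, nonce) pair matches the earlier commitment."""
    return hashlib.sha256(f"{nonce}:{answer}".encode()).hexdigest() == commitment

# Everyone publishes their commitment before anyone reveals...
c, n = commit("42")
# ...then reveals, and anyone can check that no answer changed after the fact.
assert reveal_ok(c, "42", n)
```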

Comment by jimmy on Covid 11/19: Don’t Do Stupid Things · 2020-11-20T19:38:04.829Z · LW · GW

The official recommendations are crazy low. Zvi's recommendation here of 5000 IU/day is the number I normally hear from smart people who have actually done their research.

The RCT showing vitamin D to help with covid used quite a bit. This converter from mg to IU suggests that the dose is at least somewhere around 20k IU on the first day and a total of 40k IU over the course of the week. The form they used (calcifediol) is also more potent, and if I'm understanding the following comment from the paper correctly, that means the actual D3-equivalent is closer to 200k/400k. (I'm a bit rushed on this, so it's worth double checking here.)

In addition, calcifediol is more potent when compared to oral vitamin D3 [43]. In subjects with a deficient state of vitamin D, administering physiological doses (up to 25 μg or 1000 IU daily), approximately 1 in 3 molecules of vitamin D appears as 25OHD; the efficacy of conversion is lower (about 1 in 10 molecules) when pharmacological doses of vitamin D/25OHD are used. [42]
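For what it's worth, the back-of-the-envelope version of that adjustment, using only the numbers above (the 1-to-1 conversion for calcifediol is my assumption, on the grounds that calcifediol already is 25OHD):

```python
# Rough arithmetic for the calcifediol -> D3-equivalent adjustment.
day1_iu = 20_000   # ~first-day dose, from the straight mg -> IU conversion
week_iu = 40_000   # ~total over the first week

d3_fraction = 1 / 10        # ~1 in 10 molecules of pharmacological D3 appears as 25OHD
calcifediol_fraction = 1.0  # calcifediol already *is* 25OHD (my assumption)

potency = calcifediol_fraction / d3_fraction  # ~10x

print(day1_iu * potency)  # ~200,000 IU D3-equivalent on day 1
print(week_iu * potency)  # ~400,000 IU D3-equivalent over the week
```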

I've always been confused about why the official recommendations for vitamin D are so darn low, but it seems there might be an answer that is fairly straightforward (and not very flattering to those coming up with the recommended values). It looks like it might be a simple conflation between the "standard error of the mean" and the "standard deviation" of the population itself.
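To illustrate the shape of that mistake (toy numbers of my own, purely to show the direction and rough size of the error): a recommendation meant to cover ~97.5% of individuals should sit near mean + 2·SD of individual requirements, but substituting the standard error of the mean shrinks the safety margin by a factor of √n:

```python
import math

mean_req = 3000  # hypothetical mean individual requirement, IU/day
sd = 2000        # hypothetical standard deviation across individuals
n = 400          # sample size behind the estimate of the mean

sem = sd / math.sqrt(n)  # standard error of the mean: 100

print(mean_req + 2 * sd)   # 7000 IU/day: covers ~97.5% of individuals
print(mean_req + 2 * sem)  # 3200 IU/day: only bounds the *average* person's need
```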

Comment by jimmy on Simpson's paradox and the tyranny of strata · 2020-11-20T17:21:50.012Z · LW · GW

(If you're worried about the difference being due to random chance, feel free to multiply the number of animals by a million.)

[...]

They vary from these patterns, but never enough that they are flying the same route on the same day at the same time at the same time of year. If you want to compare, you can group flights by cities or day or time or season, but not all of them.

 

The problem you're using Simpson's paradox to point at does not have this same property of "multiplying the size of the data set by arbitrarily large numbers doesn't help". If you can keep taking data until random chance is no issue, then they will end up having sufficient data in all the same subgroups, and you can just read the correct answer off the last million times they both flew in the same city/day/time/season simultaneously.

The problem you're pointing at fundamentally boils down to not having enough data to force your conclusions, and therefore needing to make judgement calls about how important season is compared to time of day, so that you can determine when conditioning on more factors will help relevance more than it will hurt by adding noise.
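A toy version of that point, with all numbers invented: once you're allowed to multiply the data, every stratum gets populated for both "airlines", and the within-stratum comparison simply reads off the answer:

```python
import random
from collections import defaultdict

random.seed(0)

# Delay = stratum difficulty - airline skill + noise. Airline A is genuinely better.
skill = {"A": 3, "B": 1}
difficulty = {("LAX", "winter"): 20, ("LAX", "summer"): 5,
              ("JFK", "winter"): 30, ("JFK", "summer"): 10}

samples = defaultdict(list)
for _ in range(100_000):  # "feel free to multiply the number of flights by a million"
    stratum = random.choice(list(difficulty))
    for airline in ("A", "B"):
        samples[(airline, stratum)].append(
            difficulty[stratum] - skill[airline] + random.gauss(0, 5))

for stratum in difficulty:
    a = sum(samples[("A", stratum)]) / len(samples[("A", stratum)])
    b = sum(samples[("B", stratum)]) / len(samples[("B", stratum)])
    print(stratum, round(a - b, 2))  # ~ -2.0 in every stratum: A wins outright
```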

Comment by jimmy on Covid 9/24: Until Morale Improves · 2020-09-24T19:43:12.034Z · LW · GW

Hypothetically, what would the right response be if you noticed that one of the main vaccine trials has really terrible blinding (e.g. participants are talking about how to tell whether you get the placebo in the waiting room)?

It seems like it would really mess up the data, probably resulting in the people who got the vaccine taking extra risk and leading the study to understate the effectiveness. Ideally, "tell the researchers" would be the obvious right answer, but are there perverse incentives at play that make the best response something else?

If I didn’t have people thanking me every week for doing these, it would be difficult to keep going.

Thanks Zvi. The effort is definitely appreciated.

Comment by jimmy on Covid 9/10: Vitamin D · 2020-09-11T03:46:41.824Z · LW · GW
There were 50 patients in the treatment group. None were admitted to the ICU. There were 26 patients in the control group. Half of them, 13 out of 26, were admitted to the ICU. So 13/26 vs. 0/50.

That's not what the paper says

Of 50 patients treated with calcifediol, one required admission to the ICU (2%),

The conclusions still hold, of course.
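As a quick sanity check on that (my own sketch; requires scipy), the corrected numbers remain lopsided enough that the statistical picture doesn't budge:

```python
from scipy.stats import fisher_exact

# ICU admissions with the corrected numbers: treatment 1/50 vs control 13/26.
table = [[1, 49],    # calcifediol group: ICU, no ICU
         [13, 13]]   # control group:     ICU, no ICU

odds_ratio, p_value = fisher_exact(table)
print(p_value)  # far below any conventional threshold, so the conclusion stands
```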

Comment by jimmy on Do you vote based on what you think total karma should be? · 2020-08-26T18:31:39.220Z · LW · GW

Adjusting in the other direction seems useful as well. If someone Strong Upvotes a tenth as often as average, I would want their strong upvote to be worth somewhat more.
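One way to express that adjustment, as a sketch of my own (the damping exponent and cap are arbitrary knobs, not a proposal of the actual mechanism):

```python
def adjusted_strong_upvote(base_weight: float, user_rate: float,
                           site_rate: float, cap: float = 3.0) -> float:
    """Scale a strong upvote by how rarely this user strong-upvotes,
    relative to the site average, with damping and a cap to limit gaming."""
    rarity = site_rate / max(user_rate, 1e-9)
    return base_weight * min(rarity ** 0.5, cap)  # sqrt: "somewhat" more, not 10x

# A user who strong-upvotes a tenth as often as average:
print(adjusted_strong_upvote(2.0, user_rate=0.01, site_rate=0.10))  # 6.0 (capped at 3x)
```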

Comment by jimmy on Do you vote based on what you think total karma should be? · 2020-08-24T17:32:49.958Z · LW · GW

Voting based on current karma is a good thing.

Without that, a post that is unanimously barely worth upvoting will get an absurd number of upvotes, while another post which is recognized as earth-shatteringly important by 50% of voters will fail to stand out. Voting based on current karma gives you a measure of the *magnitude* of people's liking for a comment as well as the direction, and you don't want to throw that information out.

If everyone votes based on what they think the total karma should be, then a post's karma reflects [a weighted average of opinions on what the post's total karma should be] rather than [a weighted average of opinions on the post].

This isn't true.

If people vote based on what the karma should be, the final value you get is the median of what people think the karma should be -- i.e. a median of people's opinions of the post. If you force people to ignore the current karma, you don't actually get a weighted average of opinions on the post, because there's very little flexibility in how strongly you can upvote a post. In order to get that magnitude signal back, you'd have to dilute your voting with dither, and while that will no doubt happen to some extent (people might be too lazy to upvote slightly-good posts, but will make sure to upvote great ones), you will get an overestimate of the value of slightly-good posts.

This is bad, because the great posts hold a disproportionate share of the value, and we very much want them to rise to the top and stand out above the rest.
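A toy simulation of the median claim, assuming (my framing) that each voter upvotes whenever current karma sits below their target and downvotes whenever it sits above:

```python
import random

def settle(targets, steps=10_000, seed=0):
    """Karma under 'vote toward your own target' dynamics with random voter arrivals."""
    rng = random.Random(seed)
    karma = 0
    for _ in range(steps):
        target = rng.choice(targets)
        if karma < target:
            karma += 1
        elif karma > target:
            karma -= 1
    return karma

targets = [1, 2, 4, 6, 40]  # one voter thinks the post is earth-shattering
print(settle(targets))       # hovers around 4, the median of the targets
```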

Comment by jimmy on Highlights from the Blackmail Debate (Robin Hanson vs Zvi Mowshowitz) · 2020-08-23T07:15:46.234Z · LW · GW
You are very much in the minority if you want to abolish norms in general.

There's a parallel here with the fifth amendment's protection from self incrimination making it harder to enforce laws and laws being good on average. This isn't paradoxical because the fifth amendment doesn't make it equally difficult to enforce all laws. Actions that harm other people tend to have other ways of leaving evidence that can be used to convict. If you murder someone, the body is proof that someone has been harmed and the DNA in your van points towards you being the culprit. If you steal someone's bike, you don't have to confess in order to be caught with the stolen bike. On the other hand, things that stay in the privacy of your own home with consenting adults are *much* harder to acquire evidence for if you aren't allowed to force people to testify against themselves. They're also much less likely to be things that actually need to be sought out and punished.

If it were the case that one coherent agent were picking all the rules with good intent, then it wouldn't make sense to create rules that make enforcement of other rules harder. There isn't one coherent agent picking all the rules and intent isn't always good, so it's important to fight for meta rules that make it selectively hard to enforce any bad rules that get through.

You can try to argue that preventing blackmail isn't selective *enough* (or that it selects in the wrong direction), but you can't just equate blackmail with "norm enforcement [applied evenly across the board]".

Comment by jimmy on What counts as defection? · 2020-07-16T06:27:15.059Z · LW · GW
I actually don't think this is a problem for the use case I have in mind. I'm not trying to solve the comparison problem. This work formalizes: "given a utility weighting, what is defection?". I don't make any claim as to what is "fair" / where that weighting should come from. I suppose in the EGTA example, you'd want to make sure eg reward functions are identical.

This strikes me as a particularly large limitation. If you don't have any way of creating meaningful weightings of utility between agents then you can't get anything meaningful out. If you're allowed to play with that free parameter then you can simply say "I'm not a utility monster, this genuinely impacts me more than you [because I said so!]" and your actual outcomes aren't constrained at all.

Defection doesn't always have to do with the Pareto frontier - look at PD, for example. (C,C), (C,D), (D,C) are usually all Pareto optimal. 

That's why I talk about "in the larger game" and use scare quotes on "defection". I think the word has too many different connotations and needs to be unpacked a bit.

The dictionary definition, for example, is:

A lack: a failure; especially, failure in the performance of duty or obligation.
n.The act of abandoning a person or a cause to which one is bound by allegiance or duty, or to which one has attached himself; a falling away; apostasy; backsliding.

This all fits what I was talking about, and the fact that the options in the prisoner's dilemma are traditionally labeled "Cooperate" and "Defect" doesn't mean they fit the definition. The labels smuggle in these connotations even when they do not necessarily apply.

The idea of using tit for tat to encourage cooperation requires determining what one's "duty" is and what "failing" that duty is, and "doesn't maximize total utility" does not actually work as a definition for this purpose, because you still have to figure out how to do that scaling.

Using the Pareto frontier allows you to distinguish between cooperative and non-cooperative behavior without having to make assumptions/claims about whose preferences are more "valid". This is really important for any real world application, because you don't actually get those scalings on a silver platter, and therefore need a way to distinguish between "cooperative" and "selfishly destructive" behavior as separate from "trying to claim a higher weight to one's own utility".

Comment by jimmy on What counts as defection? · 2020-07-13T18:33:20.963Z · LW · GW

As others have mentioned, there's an interpersonal utility comparison problem. In general, it is hard to determine how to weight utility between people. If I want to trade with you but you're not home, I can leave some amount of potatoes for you and take some amount of your milk. At what ratio of potatoes to milk am I "cooperating" with you, and at what level am I a thieving defector? If there's a market down the street that allows us to trade things for money then it's easy to do these comparisons and do Coasian payments as necessary to coordinate on maximizing the size of the pie. If we're on a deserted island together it's harder. Trying to drive a hard bargain and ask for more milk for my potatoes is a qualitatively different thing when there's no agreed upon metric you can use to say that I'm trying to "take more than I give".


Here is an interesting and hilarious experiment about how people play an iterated asymmetric prisoner's dilemma. The reason it wasn't purer cooperation is that, due to the asymmetry, there was a disagreement between the players about what was "fair". AA thought JW should let him hit "D" some fraction of the time to equalize the payouts, and JW thought that "C/C" was the right answer to coordinate towards. If you read their comments, it's clear that AA thinks he's cooperating in the larger game, and that his "D"s aren't anti-social at all. He's just trying to get a "fair" price for his potatoes, and he's mistaken about what that is. JW, on the other hand, is explicitly trying to use his "D"s to coax AA into cooperation. This conflict is better understood as a disagreement over where on the Pareto frontier ("at which price") to trade than as one about whether to cooperate with each other or defect.

In real life problems, it's usually not so obvious what options are properly thought of as "C" or "D", and when trying to play "tit for tat with forgiveness" we have to be able to figure out what actually counts as a tit to tat. To do so, we need to look at the extent to which the person is trying to cooperate vs trying to get away with shirking their duty to cooperate. In this case, AA was trying to cooperate, and so if JW could have talked to him and explained why C/C was the right cooperative solution, he might have been able to save the lossy Ds. If AA had just said "I think I can get away with stealing more value by hitting D while he cooperates", no amount of explaining what the right concept of cooperation looks like will fix that, so defecting as punishment is needed.

In general, the way to determine whether someone is "trying to cooperate" vs "trying to defect" is to look at how they see the payoff matrix, and figure out whether they're putting in effort to stay on the Pareto frontier or to go below it. If their choice shows that they are being diligent to give you as much as possible without giving up more themselves, then they may be trying to drive a hard bargain, but at least you can tell that they're trying to bargain. If their chosen move is conspicuously below (their perception of) the Pareto frontier, then you can know that they're either not-even-trying, or they're trying to make it clear that they're willing to harm themselves in order to harm you too.

In games like real life versions of "stag hunt", you don't want to punish people for not going stag hunting when it's obvious that no one else is going either and they're the one expending effort to rally people to coordinate in the first place. But when someone would have been capable of nearly assuring cooperation if they did their part and took an acceptable risk when it looked like it was going to work, then it makes sense to describe them as "defecting" when they're the one that doesn't show up to hunt the stag because they're off chasing rabbits.

"Deliberately sub-Pareto move" I think is a pretty good description of the kind of "defection" that means you're being tatted, and "negligently sub-Pareto" is a good description of the kind of tit to tat.

Comment by jimmy on Noise on the Channel · 2020-07-04T17:47:07.105Z · LW · GW

To the extent that the underlying structure doesn't matter and can't be used, I agree that technically non-random "noise" behaves similarly and that this can be a reasonable use of the term. My objection to the term "noise" as a description of conversational landmines isn't just that they're "technically not completely random", but that the information content is actually important and relevant. In other words, it's not noise, it's signal.

The "landmines" are part of how their values are actually encoded. It's part of the belief structure you're looking to interact with in the first place. They're just little pockets of care which haven't yet been integrated in a smooth and stable way with everything else. Or to continue the metaphor, it's not "scary dangerous explosives to try to avoid", it's "inherently interesting stores of unstable potential energy which can be mined for energetic fuel". If someone is touchy around the subject you want to talk about, that is the interesting thing itself. What is in here that they haven't even finished explaining to themselves, and why is it so important to them that they can't even contain themselves if you try to blow past it?

It doesn't even require slow and cautious approach if you shift your focus appropriately. I've had good results starting a conversation with a complete stranger who was clearly insecure about her looks by telling her that she should make sure her makeup doesn't come off because she's probably ugly if she's that concerned about it. Not only did she not explode at me, she decided to throw the fuse away and give me a high bandwidth and low noise channel to share my perspective on her little dilemma, and then took my advice and did the thing her insecurity had been stopping her from doing.

The point is that you only run into problems with landmines as noise if you mistake landmines for noise. If your response to the potential of landmines is "Gah! Why does that unimportant noise have to get in the way of what I want to do!? I wonder if I can get away with ignoring them and marching straight ahead", then yeah, you'll probably get blowed up if you don't hold back. On the other hand, if your response is closer to "Ooh! Interesting landmine you got here! What happens if I poke it? Does it go off, or does the ensuing self reflection cause it to just dissolve away?", then you get to have engaging and worthwhile high bandwidth low noise conversations immediately, and you will more quickly get what you came for.

Comment by jimmy on Noise on the Channel · 2020-07-02T18:14:24.673Z · LW · GW

I think it's worth making a distinction between "noise" and "low bandwidth channel". Your first examples of "a literal noisy room" or "people getting distracted by shiny objects passing by" fit the idea of "noise" well. Your last two examples of "inferential distance" and "land mines" don't, IMO.

"Noise" is when the useful information is getting crowded out by random information in the channel, but land mines aren't random. If you tell someone their idea is stupid and then you can't continue telling them why because they're flipping out at you, that's not a random occurrence. Even if such things aren't trivially predictable in more subtle cases, it's still a predictable possibility and you can generally feel out when such things are safe to say or when you must tread a bit more carefully.

The "trying to squeeze my ideas through a straw" metaphor seems much more fitting than "struggling to pick the signal out of the noise floor" metaphor, and I would focus instead on deliberately broadening the straw until you can just chuck whatever's on your mind down that hallway without having to focus any of your attention on the limitations of the channel.

There's a lot to say on this topic, but I think one of the more important bits is that you can often get the same sense of "low noise conversation" if you pivot from focusing on ideas which are too big for the straw to focusing on the straw itself, and how its limitations might be relaxed. This means giving up on trying to communicate the object level thing for a moment, but it wasn't going to fit anyway so you just focus on what is impeding communication and work to efficiently communicate about *that*. This is essentially "forging relationships" so that you have the ability to communicate usefully in the future. Sometimes this can be time consuming, but sometimes knowing how to carry oneself with the right aura of respectability and emotional safety does wonders for the "inferential distance" and "conversational landmines" issues right off the bat.

When the problem is inferential distance, the question comes down to what extent it makes sense to trust someone to have something worth listening to over several inferences. If our reasonings differ several layers deep then offering superficial arguments and counterarguments is a waste of time because we both know that we can both do that without even being right. When we can recognize that our conversation partner might actually be right about even some background assumptions that we disagree on, then all of a sudden the idea of listening to them describe their world view and looking for ways that it could be true becomes a lot more compelling. Similarly, when you can credibly convey that you've thought things through and are likely to have something worth listening to, they will find themselves much more interested in listening to you intently with an expectation of learning something.

When the problem is "land mines", the question becomes whether the topic is one where there's too much sensitivity to allow for nonviolent communication and whether supercritical escalation to "violent" threats (in the NonViolent Communication sense) will necessarily displace invitations to cooperate. Some of the important questions here are "Am I okay enough to stay open and not lash out when they are violent at me?" and the same thing reflected towards the person you're talking to. When you can realize "No, if they snap at me I'm not going to have an easy time absorbing that" you can know to pivot to something else (perhaps building the strength necessary for dealing with such things), but when you can notice that you can brush it off and respond only to the "invitation to cooperate" bit, then you have a great way of demonstrating for them that these things are actually safe to talk about because you're not trying to hurt them, and it's even safe to lash out unnecessarily before they recognize that it's safe. Similarly, if you can sincerely and without hint of condescension ask the person whether they're okay or whether they'd like you to back off a bit, often that space can be enough for them to decide "Actually, yeah. I can play this way. Now that I think about it, its clear that you're not out to get me".

There's a lot more to be said about how to do these things exactly and how to balance between pushing on the straw to grow and relaxing so that it can rebuild, but the first point is that it can be done intentionally and systematically, and that doing so can save you from the frustration of inefficient communication and replace it with efficient communication on the topic of how to communicate efficiently over a wider channel that is more useful for everything you might want to communicate.

Comment by jimmy on Fight the Power · 2020-06-25T03:33:36.012Z · LW · GW

In general, if you're careful to avoid giving unsolicited opinions you can avoid most of these problems even with rigid ideologues. You wouldn't inform a random stranger that they're ugly just because it's true, and if you find yourself expressing or wishing to express ideas which people don't want to hear from you, it's worth reflecting on why that is and what you are looking to get out of saying it.

Comment by jimmy on [deleted post] 2020-06-17T03:32:11.597Z

I think I get the general idea of the thing you and Vaniver are gesturing at, but not what you're trying to say about it in particular. I think I'm less concerned though, because I don't see inter-agent value differences and the resulting conflict as some fundamental, inextricable part of the system.

Perhaps it makes sense to talk about the individual level first. I saw a comment recently where the person making it was sorta mocking the idea of psychological "defense mechanisms", because "*obviously* evolution wouldn't select for those who 'defend' from threats by sticking their heads in the sand!" -- as if the problem of wireheading were as simple as competition between a "gene for wireheading" and a gene against. Evolution is going to select for genes that make people flinch away from injuring themselves with hot stoves. It's also going to select for people who cauterize their wounds when necessary to keep from bleeding out. Designing an organism that does *both* is not trivial. If sensitivity to pain is too low, you get careless burns. If it's too high, you get refusal to cauterize. You need *some* mechanism to distinguish between effective flinches and harmful flinches, and a way to enact mostly the former. "Defense mechanisms" arise not out of mysterious propagation of fitness reducing genes, but rather the lack of solution to the hard problem of separating the effective flinches from the ineffective -- and sometimes even the easiest solution to these ineffective flinches is hacked together out of more flinches, such as screaming and biting down on a stick when having a wound cauterized, or choosing to take pain killers.

The solution of "simply noticing that the pain from cauterizing a serious bleed isn't a *bad* thing and therefore not flinching from it" isn't trivial. It's *doable*, and to be aspired to, but there's no such thing as "a gene for wise decisions" that is already "hard coded in DNA".

Similarly, society is incoherent and fragmented and flinches and cooperates imperfectly. You get petty criminals and cronyism and censorship of thought and expression, and all sorts of terrible stuff. This isn't proof of some sort of "selection for shittiness" any more than individual incoherence and the resulting dysfunction is. It's not that coherence is impossible or undesirable, just that you're fighting entropy to get there, and succeeding takes work.

The desire to eat marshmallows succeeds more if it can cooperate and willingly lose for five minutes until the second marshmallow comes. The individual succeeds more if they are capable of giving back to others as a means to foster cooperation. Sometimes the system is so dysfunctional that saying "no thanks, I can wait" will get you taken advantage of, and so the individually winning thing is impulsive selfishness. Even then, the guy failing to follow through on promises of second marshmallows likely isn't winning by disincentivizing cooperation with him, and it's likely more of a "his desire to not feel pain is winning, so he bleeds" sort of situation. Sometimes the system really is so dysfunctional that not only is it winning to take the first marshmallow, it's also winning to renege on your promises to give the second. But for every time someone wins by shrinking the total pie and taking a bigger piece, there's an allocation of the more cooperative pie that would give this would-be-defector more pie while still having more for everyone else too. And whoever can find these alternatives can get themselves more pie.

I don't see negative sum conflict between the individual and society as *inevitable*, just difficult to avoid. It's negotiation that is inevitable, and done poorly it brings lossy conflict. When Vaniver talks about society saying "shut up and be a cog", I see a couple things happening simultaneously to one degree or another. One is a dysfunctional society hurting themselves by wasting individual potential that they could be profiting from, and would love to if only they could see how and implement it. The other is a society functioning more or less as intended and using "shut up and be a cog" as a shit test to filter out the leaders who don't have what it takes to say "nah, I think I'll trust myself and win more", and lead effectively. Just like the burning pain, it's there for a reason and how to calibrate it so that it gets overridden at only and all the right times is a bit of an empirical balancing act. It's not perfect as is, but neither is it without function. The incentive for everyone to improve this balancing is still there, and selection on the big scale is for coherence.

And as a result, I don't really feel myself being pulled between a conflict of "respect society's stupid beliefs/rules" and "care about other people". I see people as a combination of *wanting* me to pass their shit tests and show them a better replacement for their stupid beliefs/rules, being afraid and unsure of what to do if I succeed, and selfishly trying to shrink the size of the pie so that they can keep what they think will be the bigger piece. As a result, it makes me want to rise to the occasion and help people face new and more accurate beliefs, and also to create common knowledge of defection when it happens and rub their noses in it, to make it clear that those who work to make the pie smaller will get less pie. Sometimes it's more rewarding and higher leverage to run off and gain some momentum by creating and then expanding a small bubble where things actually *work*, but there's no reason to go from "I can't yet be effective in the broader community because I can't yet break out of their 'cog' mold for me, so I'm going to focus on the smaller community where I can" to "fuck them all". There's still plenty of value in reengaging when capable, and pretending there isn't is not the good, functional thing we're striving for. It's not like we can *actually* form a bubble and reject the outside world, because the outside world will still bring you pandemics and AI, and even from a selfish perspective there's plenty of incentive to help things go well for everyone.

Comment by jimmy on Simulacra Levels and their Interactions · 2020-06-16T05:49:37.392Z · LW · GW
Whereas, if things are too forsaken, one loses the ability to communicate about the lion at all. There is no combination of sounds one can make that makes people think there is an actual lion across an actual river that will actually eat them if they cross the river.

Hm. This sounds like a challenge.

How about this:

Those "popular kids" who keep talking about fictitious "lions" on the other side of the river are actually losers. They try to pretend that they're simply "the safe and responsible people" and pat themselves on the back over it, but really they're just a bunch of cowards who wouldn't know what to do if there were a lion, and so they can't even look across the river and will just shame you for being "reckless" if you doubt the existence of lions that they "just know" are there. I hate having to say something that could lump me with these deplorable fools, and never before has there actually been a lion on the other side of the river, but this time there is. This time it's real, and I'm not saying we can't cross if need be, but if we're going to cross we need to be armed and prepared.

I can see a couple potential failure modes. One is if "Those guys are just crying wolf, but I am legit saving you [and therefore am cool in the way they pretend they are]" itself becomes a cool kid thing to say. The other is if your audience is motivated to see you as "one of them" to the point of being willing to ignore the evidence in front of them, they will do so despite you having credibly signaled that this is not true. Translating to actual issues I can think of, I think it would mostly actually work though.

It becomes harder if you think those guys are actually cool, but that shouldn't really be a problem in practice. Either a) there actually has been a lion every single time it is claimed, in which case it's kinda hard for "there's a lion!" to indicate group membership because it's simply true. Or b) they've actually been wrong, in which case you have something to distance yourself from.

If the truth is contentious and even though there has always been a lion, they've never believed you, then you have a bigger problem than simply having your assertions mistaken for group membership slogans; you simply aren't trusted to be right. I'd still say there's things that can be done there, but it does become a different issue.

Comment by jimmy on [deleted post] 2020-06-11T19:05:14.154Z
I described what happend to the other post here.

Thanks, I hadn't seen the edit.

I'm having the same dilemma right now where my genuine comments are getting voted into the negative and I'm starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up peoples time with content they think is low quality (yes yes, I know that that doesn't mean it is low quality per se, but it is a close enough heuristic that I'm mostly willing to stick to it). But the downvotes are very clear so while I'm disappointed that we couldn't talk through this issue, I will no longer be eating up peoples time.

The only comments of yours that I see downvoted into the negative are the two prior conversations in this thread. Were there others that are now positive again?

While I generally support the idea that it's better to stop posting than to continue posting things which will predictably be net-negative karma, I don't think that's necessary here. There's plenty of room on LW for things other than curated posts sharing novel insights, and I think working through one's own curiosity can be good not just for the individual in question, but for any other lurkers who might share the same curiosities, and for the community, since bringing people up to speed is an important part of helping them learn to interact well with the community.

I think the downvotes are about something else which is a lot more easily fixable. While I'm sure they were genuine, some of your comments strike me as not particularly charitable. In order to hold a productive conversation, people have to be able to build from a common understanding. The more work you put in to understanding where the other person is coming from and how their stance can be coherent and reasonable, the less effort it takes for them to communicate something that is understood. If you don't put enough effort in, at some point you start to miss valid points which would have been easy for you to find, and which would be prohibitively difficult to word in a way that you couldn't miss.

As an example, you responded to Richard_Kennaway as if he thought you were lying despite the fact that he explicitly stated that he was not imputing any dishonesty. I'm not sure whether you simply missed that part or whether you don't believe him, but either way it is very hard to have a conversation with someone who doesn't engage with points like this at least enough to say why they aren't convinced. I think, with a little more effort put into understanding how your interlocutors might be making reasonable, charitable, and valid points, you will be able to avoid the downvotes in the future. That's not to say that you have to believe that they're being reasonable/charitable/etc, or that you have to act like you do, but it's nice to at least put in some real effort to check and give them a chance to show when they are. The tendency for people to err on the side of "insufficiently charitable" is really, really strong, and even when the uncharitable view is the correct one (not that common on LW), the best way to show it is often to be charitable and have it visibly not fit.

It's a very common problem that comes up in conversation, especially when pushing into new territory. I wouldn't sweat it.

Comment by jimmy on [deleted post] 2020-06-11T18:11:24.513Z
I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I'm working around enough of it for this to still be useful.]

This is a really cool declaration. It doesn't bleed through in any obvious way, but thanks for letting me know, and I'll try to be mindful of what I say and how I say it. Lemme know if I'm bumping into anything or if there's anything I could be doing differently to better accommodate.

I think you're interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.

I’m not really sure what you mean here, but I can address what you say below. I’m not sure if it’s related?

“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms?

Depends on how you go about it and what type of risk you’re trying to avoid. When I first started playing with this stuff I taught someone how to “turn off” pain, and in her infinite wisdom she used this new ability to make it easier to be stubborn and run on a sprained ankle. There’s no foolproof solution to make this never happen (in my infinite wisdom I’ve done similar things even with the pain), but the way I go about it now is explicitly mindful of the risks and uses that to get more reliable results. With the swelling, for example, part of my indignant reaction was “it doesn’t have to swell up, I just won’t move it”.

When you've seen something happen with your own eyes multiple times, I think you're beyond the level where it's foolish to think it might be possible. When you can see that the thing stopping other people from doing it too is ignorance of the possibility, rather than an objection that it shouldn't be done, then "thinking it through and making your reasoned best guess" isn't going to be right all the time, but according to your own best guess it will be right more often than the alternative.

Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.

It seems that this bit is your main concern?

It can be a real concern. More than once I've had people express concern about how it has become harder to relate to their old friends after spending a lot of time with me. It's not because of stuff like "I can consciously prevent a lot of swelling, and they don't know how to engage with that", but rather stuff like "it's hard to be supportive of what I now see as clearly bad behavior that attempts to shirk reality to protect feelings, and inevitably ends up hurting everyone involved". In my experience, it's a consequence of being able to see the problems in the group before being able to see what to do about them.

I don't seem to have that problem anymore, and I think it's because of the thought that I've put into figuring out how to actually change how people organize their minds. Saying "here, let me use math and statistics to show you why you're definitely completely wrong" can work to smash through dumb ideas, but then even when you succeed you're left with people seeing their old ideas (and therefore the ideas of the rest of their social circle) as "dumb" and hard to relate to. When you say "here, let me empathize and understand where you're coming from, and then address it by showing how things look to me", and go out of your way to make their former point of view understandable, then you no longer get this failure mode. On top of that, by showing them how to connect with people who hold very different (and often less well thought out) views than you, it gives them a model to follow that can make connecting with others easier. My friend in the above example, for instance, went from sort of a "socially awkward nerd" type to someone who can turn that off and be really effective when she puts her mind to it. If someone is depressed and not even his siblings can get him to talk, he'll still talk to her.

If there's a group of people you want to be able to relate to effectively, you can't just dissociate off into your own little world where you give no thought to their perspectives. But neither can you just melt in and let your own perspective dissolve into the social consensus, because if you don't retain enough separation to at least have your own thoughts, and to think about whether they might be better and how best to merge them with the group's, then you're shirking your leadership responsibilities. If enough people do this, the whole group can become detached from reality and led by whoever wants to command the mob. This doesn't tend to lead to great things.

Does that address what you’re saying?

Comment by jimmy on [deleted post] 2020-06-10T20:08:21.060Z

It's not an attack, and I would recommend not taking it as one. People make that mistake all the time, and there's no shame in that. Heck, maybe I'm even wrong and what I'm perceiving as an error actually isn't one. Learning from mistakes (if it turns out to be one) is how we get stronger.

I try to avoid making that mistake, but if you feel like I'm erring, I would rather you be comfortable pointing out what you see instead of fearing that I will take it as an attack. Conversations (philosophical and otherwise) work much more efficiently this way.

I'm sorry if it hasn't been sufficiently clear that I'm friendly and not attacking you. I tried to make it clear by phrasing things carefully and using a smiley face, but if you can think of anything else I can do to make it clearer, let me know.

Secondly I would also like to hear an actual counterargument to the argument I made

Which one? The "it was only studying IBS" one was only studying IBS, sure. It still shows that you can do placebos without deception in the cases they studied. It's always going to be "in the cases they've studied" and it's always conceivable that if you only knew to find the right use of placebos to test, you'll find one where it doesn't work. However, when placebos work without deception in every case you've tested, the default hypothesis is no longer "well, they require deception in every case except these two weird cases that I happen to have checked". The default hypothesis should now be "maybe they just don't require deception at all, and if they do maybe it's much more rare than I thought".

I'm not sure what point the existence of nocebo makes for you, but the same principles apply there too. I've gotten a guy to punch a cactus right after he told me "don't make me punch the cactus" simply by making him expect that if I told him to do it he would. Simply replace "because drugs" with "because of the way your mind works" and you can do all the same things and more.

I'm not sure how many more times I'll be willing to address things like this though. I'm willing to move on to further detail of how this stuff works, or to address counterarguments that I hadn't considered and are therefore surprisingly strong, but if you still just don't buy into the general idea as worth exploring then I can agree to disagree.

And thirdly I have never deleted a comment, but you appear to have double posted, shall I delete one of them?

Yeah, it didn't submit properly the first time and then didn't seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I'd have deleted one if I could have.

Speaking of deleting things, what happened to your other post?

Comment by jimmy on [deleted post] 2020-06-10T07:49:50.487Z

There's no snark in my comment, and I am entirely sincere. I don't think you're going to get a good understanding of this subject without becoming more skeptical of the conclusions you've already come to and becoming more curious about how things might be different than you think. It simply raises the barrier to communication high enough so as to make reaching agreement not worthwhile. If that's not a perspective you can entertain and reason about, then I don't think there's much point in continuing this conversation.

If you can find another way to convey the same message that would be more acceptable to you, let me know.

Comment by jimmy on [deleted post] 2020-06-08T19:30:29.622Z

1) Isomorphic to my "what if you know you'll do something stupid if you learn that your girlfriend has cheated on you" example. To reiterate, any negative effects of learning are caused by false beliefs. Prioritize which way you're going to be wrong until you become strong enough to just not be predictably wrong, sure. But become stronger so that you can handle the truths you may encounter.

2) This clearly isn't a conflict between epistemic and instrumental rationality. This is a question about arming your enemies vs not doing so, and the answer there is obvious. To reiterate what I said last time, this stuff all falls apart once you realize that these are two entirely separate systems, each with its own beliefs and values, and that you've posited that the subsystem in control is not the subsystem that is correct and shares your values. Epistemic rationality doesn't mean giving your stalker your new address.

3) "Unfortunately studies have shown that in this case the deception is necessary, and the placebo effect won't take hold without it". This is assuming your conclusion. It's like saying "Unfortunately, in my made up hypothetical that doesn't actually exist, studies have shown that some bachelors are married, so now what do you say when you meet a married bachelor!". I say you're making stuff up and that no such thing exists. Show me the studies, and I'll show you where they went wrong.

You can't just throw a blanket over a box and say "now that you can no longer see the gears, imagine that there's a perpetual motion machine in there!" and expect it to have any real world significance. If someone showed me a black box that put out more energy than went into it and persisted longer than known energy storage/conversion mechanisms could do, I would first look under the box for any shenanigans that a magician might try to pull. Next I would measure the electromagnetic energy in the room and check for wireless power transfer. Even if I found none of those, I would first expect that this guy is a better magician than I am anti-magician, and would not begin to doubt the physics. Even if I became assured that it wasn't magician trickery and it really wasn't sneaking energy in somehow, I would then start to suspect that he managed to build a nuclear reactor smaller than I thought possible, or otherwise discovered new physics that makes this possible. I would then proceed to tear the box apart and find out what assumptions I'm missing. At the point where it became likely that it wasn't new physics but rather incorrect old physics, I would continually reference the underlying justifications of the laws of thermodynamics and see if I could start to see how one of the founding assumptions could be failing to hold.

Not until I had done all that would I even start to believe that it is genuinely what it claims to be. The reasons to believe in the laws of thermodynamics are simply so much stronger than the reason to believe people claiming to have perpetual motion machines that if your first response isn't to challenge the hypothetical hard, then you're making a mistake.

"Knowing more true things without knowing more false things leads to worse results by the values of the system that is making the decision even when the system is working properly" is a similarly extraordinary claim that calls for extraordinary evidence. The first thing to look for, besides a complete failure to even meet the description, is for false beliefs being smuggled in. In every case you've given, it's been one or the other of these, and that's not likely to change.

If you want to challenge one of the fundamental laws of rationality, you have to produce a working prototype, and it has to be able to show where the founding assumptions went wrong. You can't simply cast a blanket over the box and declare that it is now "possible" since you "can't see" that it's not. Endeavor to open black boxes and see the gears, not to close your eyes to them and deliberately reason out of ignorance. When you do, you'll start to see the path towards making both your epistemic and your instrumental rationality work better.

4) Throw it away like all spam. Your attention is precious, and you should spend it learning the things that you expect to help you the most, not about seagulls. If you want though, you can use this as an exercise in becoming more resilient and/or about learning about the nature of human psychological frailty.

It's worth noticing though, that you didn't use a real world example and that there might be reasons for this.

5) This is just 2 again.

6) Maybe? As stated, probably not. There are a few different possibilities here though, and I think it makes more sense to address them individually.

a) The torture is physically damaging, like peeling one's skin back or slowly breaking every bone in one's body.

In this case, obviously not. I'm also curious what it feels like to be shot in the leg, but the price of that information is more than I'm willing to spend. If I learn what that feels like, then I don't get to learn what I would have been able to accomplish if I could still walk well. There's no conflict between epistemic and instrumental rationality here.

b) The "torture" is guaranteed to be both safe and non physically damaging, and not keep me prisoner too long when I could be doing other things.

When I learned about tarantula hawks and that their sting was supposedly both debilitatingly painful and also perfectly non-damaging and safe, I went pretty far out of my way to acquire them and provoke them to sting me. Fear of non-damaging things is a failing to be stamped out. When you accept that the scary thing truly is sufficiently non-dangerous, fear just becomes excitement anyway.

If these mysterious white room people think they can bring me a challenge while keeping things sufficiently safe and non-physically-damaging I'd probably call their bluff and push that button to see what they got.

c) This "torture" really is enough to push me sufficiently past my limits of composure that there will be lasting psychological damage.

I think this is actually harder than you think unless you also cross the lines on physical damage or risk, or get to spend a lot of time at it. However, it is conceivable, and in that case we're back to another example of number one. If I'm pretty sure it won't be any worse than this, I'd go for it.


This whole "epistemic vs instrumental rationality" thing really is just a failure to do epistemic rationality right, and when you peak into the black box instead of intentionally keeping it covered you can start to see why.

Comment by jimmy on [deleted post] 2020-06-08T17:53:06.883Z
I'm very glad that you managed to train yourself to do that but this option is not available for everyone.

Do you have any evidence for this statement? That seems like an awfully quick dismissal, given that twice in a row you cited things as if they countered my point when they actually missed the point completely. Both epistemically and instrumentally, it might make sense to update the probability you assign to "maybe I'm missing something here". I'm not asking you to be more credulous or to simply believe anything I'm saying, mind you, but maybe to be a bit more skeptical of your own ideas, at least until that stops happening.

Because you do have that option available to you. In my experience, it's simply not true that attempts at self-deception ever give better results than simply noticing false beliefs and letting them go once you do, nor does anyone ever say "that's a great idea, let's do that!" and then mysteriously fail. The idea that the option is "not available" is one more false belief that gets in the way of focusing on the right thing.

Don't get me wrong, I'm not saying that it's always trivial. Epistemic rationality is not trivial. It's completely possible to try to organize one's mind into coherence and still fail to get the results because you don't realize where you're missing something. Heck, in the last example I gave, my friend did just that. Still, at the end of the day, she got her results, and she is a much happier and more competent person than she was years back when her mind was still caught up in more well-meaning self-deceptions.

I don't see a lot of engaging in the least convenient possible world

Well, if I don't think any valid examples exist, all I can do is knock over the ones you show me. Perhaps you can make your examples a little less convenient to knock over and put me to a better test then. ;)

I'll take a look at your new post.