Meetup: Garden Grove meetup 2012-05-15T02:17:14.042Z · score: 1 (2 votes)
26 March 2011 Southern California Meetup 2011-03-20T18:29:16.231Z · score: 4 (5 votes)
October 2010 Southern California Meetup 2010-10-18T21:28:17.651Z · score: 6 (7 votes)
Localized theories and conditional complexity 2009-10-19T07:29:34.468Z · score: 7 (10 votes)
How to use "philosophical majoritarianism" 2009-05-05T06:49:45.419Z · score: 8 (25 votes)
How to come up with verbal probabilities 2009-04-29T08:35:01.709Z · score: 24 (29 votes)
Metauncertainty 2009-04-10T23:41:52.946Z · score: 19 (23 votes)


Comment by jimmy on mind viruses about body viruses · 2020-03-28T18:54:09.407Z · score: 10 (6 votes) · LW · GW

When you're dealing with a threat that doubles in size every few days, you do not have the luxury of excess caution. Inverted pendulums have exponentially growing error as well, and no matter what you do (or don't do) to react, if your control system doesn't act faster than the instability grows, you lose. Period. If you try to move slowly in the act of balancing, you will fall off the tightrope no matter how sure you later become of what the right action would have been. It is fundamentally necessary to be able to react and then correct for errors later (so yes, pre-frame this in your communication so that you don't over-commit to something you will later need to change).
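To put a number on how unforgiving that doubling is, here's a toy calculation (the three-day doubling time is just an assumption for illustration, not taken from any real epidemic data):

```python
# Toy illustration of why delay is costly against an exponentially
# growing threat. The doubling time is assumed, not measured.
doubling_time_days = 3

def growth_factor(delay_days, doubling_time=doubling_time_days):
    """How much larger the problem becomes while you wait."""
    return 2 ** (delay_days / doubling_time)

for delay in (3, 7, 14, 21):
    print(f"{delay:2d} days of delay -> problem is {growth_factor(delay):6.1f}x larger")
```

The point isn't the specific numbers; it's that the penalty for waiting compounds, which is exactly the property that makes slow control loops lose.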

It's also worth noting that "literally everyone on earth" only starts trying to solve the problem once they know it's a problem, and at the time Pueyo's first essay came out, that was absolutely not the case. At that time, I was still scrambling to figure out how best to leverage my credibility and communication skills to convey the exact same point about "Why you must act now", because people around me had not yet realized how serious an issue this was going to be. Sure, they'd have heard it without me too. But they would have heard it later, with less time to act, and might not have taken it as seriously without my credibility behind it. If enough people like me took your advice, they might not have heard it at all, because it could have been outcompeted by other, less useful memes.

It's just that, as you find alternative takes (perhaps by credentialed experts, perhaps not) that expose flaws in the memes you've been spreading, you spread those too. I would say "correct for your mistakes", except that it's not necessarily even a "mistake" -- just "a clarification of an oversimplification" or "the next control input given the newest estimate of the state".

As we get deeper into this mess and people start mobilizing, "do something in this general direction!" becomes less important. At some point you have to wonder whether the pendulum has swung too far, whether we need to be acting in a different direction, or something else. When everyone in the world is thinking about it, we have a very different problem: instead of simply requiring the ability to take back-of-the-envelope models seriously when they fall outside the "socially accepted reality", you actually need more detailed analyses.

Still, public opinion will need to get on board with whatever is necessary, and in the absence of your input the memes don't just stop and wait for science, and neither does the coronavirus. If you try to say "but I can pick nits! This isn't credentialed and perfect!" and try to replace useful first steps with inaction, then you blow your credibility and with it your ability to help shape things for the better. Let's not do that.

Yes, it is important not to initiate or signal-boost bad information at the expense of good. Yes, it is important to look for people who are (actually) experts. But it's also important to provide a path from the real experts to the layfolk, since that doesn't and cannot happen on its own. The public in general not only can't evaluate the object-level arguments about epidemiology and must defer to authority, they can't even evaluate the object-level arguments about who the real authority is -- that's why you get antivaxxers listening to crackpots. It's appeals to authority (mixed in with justifications) all the way up.

If you can't create the best ideas but you can distinguish the best ideas from those which merely look good to the untrained eye, it is your job to pass the best ideas down to those who are less able to make that distinction. If you can't make that distinction yourself but you can at least distinguish people who can from posers, then it is your job, as the next link in the chain, to pass this information from those more able to discern to those less able to discern than you. This goes all the way down to the masses watching the news, and you'd better hope you can get the news to get their shit together. I still know people who are in denial because mainstream news told them to be and then failed to appropriately correct its earlier mistakes. Let's work to fix that.

Exponential memetic spread does not a pathology make. Yes, it's possible for overactive or mistargeted immune systems to fail to prevent things, or to do more harm than good. Yes, Dunning-Kruger applies and humility is as necessary as ever. But so is the courage to be bold and take action when it is called for, instead of hiding in false humility. This "intellectual curve" is part of our collective immune response to an actual virus which is killing people and threatening to kill exponentially more. Do not flatten the wrong curve. Find a role that allows you to guide it in the right direction, and then guide.

Comment by jimmy on Authorities and Amateurs · 2020-03-26T21:34:25.905Z · score: 12 (6 votes) · LW · GW

Here's my answer:

There is an important distinction between "object-level arguments" and "appeals to authority". Contrary to how it's normally spoken about, appeal to authority is not really fallacious, and at times it is absolutely necessary. If I am unable to parse the object-level arguments myself, I have to defer to experts. The only issue is whether I have the self-awareness and integrity to say "I'm not capable of evaluating this myself, so unfortunately I have to defer to the people I trust to get these things right. Maybe you're right and I'm just not smart enough to see it". However, this must ground out somewhere. If you listen to people who only appeal to authority (whether their own or others') and there are never any attempts to ground things in object-level arguments, then there is nothing this trust is founded on, and your beliefs can float away with no connection to reality.

What I do is take into consideration all object level arguments which I am not personally qualified to evaluate, and then weigh my trust in the various "authorities" based on how capable they seem in actually getting into the object level and making at least as much sense as the people they're arguing against. As it applies here, the amateurs linked to actually got into the object level and made very plausible sounding arguments. I didn't see any major holes in the main premise, even if I could pick less important nits. I never saw any credentialed authority engaging in the object level and making even plausibly correct counterarguments which negated the main point of these amateur models. There were a lot of "don't worry, nothing to see here", but there weren't any that were backed up by concrete models that didn't have visible holes.

The people I'm going to listen to (regardless of how capable I personally am of evaluating the object level arguments) are those who 1) have been willing to stick their neck out and make actual arguments, and 2) haven't had their neck chopped off by people pointing out identifiable mistakes in ways that are either personally verifiable or agreed upon by a more compelling network of "authority".

I think this heuristic worked pretty well in this case.

Comment by jimmy on Advice on reducing other risks during Coronavirus? · 2020-03-25T17:28:25.422Z · score: 2 (1 votes) · LW · GW

I'm not so sure the recommendation for walking over driving holds up. According to the CDC "Per trip, pedestrians are 1.5 times more likely than passenger vehicle occupants to be killed in a car crash."

Comment by jimmy on Should I buy a gun for home defense in response to COVID-19? · 2020-03-23T06:42:53.441Z · score: 8 (6 votes) · LW · GW

Strong disagree. Anyone who knows how to operate their weapon and is willing to use it is a formidable threat to all but the most trained and determined invaders. The level of accuracy needed to hit a man-sized target inside a house with a long gun is really low -- low enough that if you miss, the problem isn't that you aren't yet skilled in the art of aiming; it's that you didn't make sure to aim at all before you pulled the trigger.

The bigger barrier is psychological. If you can’t get yourself to take deliberate aim on another human and pull the trigger knowing what will happen, then a firearm might not be useful. If you can do that though, the mechanics won't be a problem except in the difficult cases.

Comment by jimmy on Coronavirus Open Thread · 2020-03-20T19:23:15.627Z · score: 3 (2 votes) · LW · GW

Right, I got that it was them doing the math correction, not you. Still, they did the math and gave an age breakdown of the passengers, and a crude sanity check gives a number within about 30% of what they report.

Comment by jimmy on Coronavirus Open Thread · 2020-03-20T19:07:46.008Z · score: 2 (1 votes) · LW · GW

I'm not sure what makes you think it doesn't have sharp edges. In order to not have sharp edges it would need to be a bar, not flexible tape.

Comment by jimmy on Coronavirus Open Thread · 2020-03-19T01:41:00.207Z · score: 3 (2 votes) · LW · GW

Yeah, the 1/8th multiplier sounded hard to believe. A 1/2 multiplier based on demographic correction sounds a lot more plausible, and it's nice to have confirmation that someone else actually did the math. Thanks for finding/sharing it!

Comment by jimmy on Coronavirus Open Thread · 2020-03-18T20:26:51.831Z · score: 4 (2 votes) · LW · GW
The one situation where an entire, closed population was tested was the Diamond Princess cruise ship and its quarantine passengers. The case fatality rate there was 1.0%, but this was a largely elderly population, in which the death rate from Covid-19 is much higher.
Projecting the Diamond Princess mortality rate onto the age structure of the U.S. population, the death rate among people infected with Covid-19 would be 0.125%.

John Ioannidis is making an interesting (and reassuring, if true) claim here. Has anyone looked at the demographics and done the comparison themselves?
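For anyone who wants to check the method rather than just the claim, the age standardization Ioannidis describes is simple to sketch. All of the numbers below are made up for illustration; the real calculation would substitute the ship's observed age-specific fatality rates and the actual census age distribution:

```python
# Sketch of projecting an observed fatality rate onto a different age
# structure. Every number here is hypothetical -- the method is the point.
ship_fatality_rate_by_age = {   # hypothetical rates observed on the ship
    "0-49":  0.000,
    "50-69": 0.005,
    "70+":   0.020,
}
us_age_share = {                # hypothetical population shares (sum to 1)
    "0-49":  0.65,
    "50-69": 0.25,
    "70+":   0.10,
}

# Weight each age group's observed rate by its share of the target population.
projected_rate = sum(ship_fatality_rate_by_age[a] * us_age_share[a]
                     for a in us_age_share)
print(f"Projected population fatality rate: {projected_rate:.3%}")
```

The ship's crude rate is dominated by its elderly-skewed passengers; reweighting by a younger population's age shares is what drives the projected rate down.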

Comment by jimmy on Coronavirus Open Thread · 2020-03-18T20:26:05.947Z · score: 2 (1 votes) · LW · GW

Yes, that looks right. The edges of any thin tape are going to be sharp, it's just that copper is strong enough to hold that geometry instead of folding easily before it cuts you.

Comment by jimmy on Could you save lives in your community by buying oxygen concentrators from Alibaba? · 2020-03-17T04:04:07.233Z · score: 4 (2 votes) · LW · GW

Unless you're underwater or in a hyperbaric chamber, oxygen toxicity isn't really a big concern, and a cheap oxygen concentrator like the one described above can't get you close to where problems start. Even if you had a better oxygen concentrator, it doesn't take any fancy training to add oxygen until 92% saturation or whatever.

Comment by jimmy on How effective are tulpas? · 2020-03-10T18:18:28.172Z · score: 9 (5 votes) · LW · GW
Bowing down to authority every time someone tells me not to do something isn't going to accomplish that.

Not if applied across the board like that, no. At the same time, a child who ignores his parents' vague warnings about playing in the street is likely to become much weaker or nonexistent for it, not stronger. You have to be able to dismiss people as posers when they lack the wisdom to justify their advice and be able to act on opaque advice from people who see things you don't. Both exist, and neither blind submission nor blind rebellion make for successful strategies.

An important and often missed aspect of this is that not all good models are easily transferable, and therefore not all good advice will be something you can easily understand for yourself. Sometimes, especially when things are complicated (as the psychology of human minds can be), the only thing that can be effectively communicated within the limitations is an opaque "this is bad, stay away" -- and in those cases you have no choice but to evaluate the credibility of the person making the claims and decide whether this specific "authority" making this specific claim is worth taking seriously even before you can understand the "why" behind it. Whether you want to heed or ignore the warnings here is up to you, but keep in mind that there is a right and wrong answer, and that the cost of being wrong in one direction isn't the same as in the other. A good heuristic which I like to go by, and which you might want to consider, is to refrain from discounting advice until you can pass the intellectual Turing test of the person offering it. That way, when you choose to experiment with things deemed risky, you at least know you're doing it informed of the potential risks.

FWIW, I think the best argument against spending effort on tulpas isn't the risk but the complete lack of reward relative to doing the same things without spending time and effort on "wrapping paper", which can do nothing but impede. You're limited by hardware hours anyway, so if your "tulpa" is going to become an expert in chess, it will be with eye/hand/brain hours that you could have used becoming an expert in chess yourself. If your tulpa is going to have important wisdom to offer by virtue of holding different perspectives, those perspectives will be generated with brain time you could have used generating them for yourself. There's no rule saying people can't specialize in more than one thing or be more than uni-dimensional; it's just a question of where you want to spend your limited hours.

Comment by jimmy on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T18:34:23.596Z · score: 17 (6 votes) · LW · GW

We do, and that's the point. It's not "hey, we're not as bad as them so don't complain to us!". It's that there is already a lot of distrust out there, and giving people something to latch onto with "see, I knew the CDC wasn't being honest with me!" can keep them from spiraling out of control with their distrust, since at least they know where it ends.

Mild, well-sourced criticism is far more encouraging of trust than no criticism under an obvious threat of censorship, because the alternative isn't "they must be perfect"; it's "if they have to hide it, the problems are probably worse than 'mild'".

Comment by jimmy on What is the appropriate way to communicate that we are experiencing a pandemic? · 2020-03-05T18:28:05.729Z · score: 5 (3 votes) · LW · GW
But then I thought the psychological consequences for a not inconsiderable amount of people would be disastrous, as it seems to my girlfriend. [...] I don't want to be a information hazard source.

It's important to note that unpleasant emotions are functional when faced with a new threat that one hasn't prepared for; the whole point of emotions like fear is to reorient ourselves towards the reality we find ourselves in and come up with a more informed (and therefore hopefully more effective) response. It is always unpleasant to realize that things aren't quite as nice as we've been hoping and planning on, but the actual information hazard would be things that "protect" people from the emotion that could have protected their life and well being as well as the life and well being of their loved ones. What you're talking about doing is the opposite of an information hazard.

That said, there are a few things that can be important for doing it right.

One is that you want to draw very clear boundaries between the position you advocate and alarmism. You're pushing for integration of scary information as well, not for blindness to good news and the potential for optimism. You don't want to push people from "white thinking" to "black thinking", you want to encourage people to take in all information and pick the most appropriate shade of gray given the current information available.

Not only is some shade of gray more accurate than pure black, making this distinction clear will help you persuade people. When people are primed and ready to "not give into alarmist/doomer thinking", you don't want them to pattern match you as this opposite form of irrational thought. If you have had/seen any conversations about this where people are saying "it's not the end of the world" in response to statements like "it's not 'just the flu'", this is what is going on. You're seeing them argue against what they don't want to believe rather than what is being argued. I would make sure to include and emphasize everything optimistic you can without sacrificing accuracy, and make sure you're not trying to "push one side" as much as offer more information as someone who can see both the reassuring and the scary.

Secondly, recognize the fact that you are deliberately exposing people to scary ideas which many many people are not emotionally prepared to deal with. The whole reason people dismiss reasonable arguments as "alarmist" is because their emotional response would be somewhat like your girlfriend's, and they don't want to have to face that. To every extent you can, ease this transition. Be comforting and hospitable, even if just in body language or vocal tone in a YouTube video. You want to emphasize (explicitly or implicitly) that feeling fear is not a sign of cowardice but of courage -- after all, they've proven themselves capable of avoiding it if they wanted to. You want to give people an idea of what they can do, and what their cues should be for various decisions. This can help lower the amount of uncertainty that they will have to deal with and make the transition more comfortable, as well as cutting down on the unnecessarily duplicated cognitive effort of "figuring out what the hell to do about it". People are always free to doubt and question and to disagree, of course, but it can be nice having a "default" value to jump to so that you can update on risks without having to be emotionally and mentally ready to compute all your own ideas on first principles.

This is very important work, as there are relatively few people who are willing and able to engage with the scarier possibilities without losing hold of hope, succumbing to alarmist paranoia, and losing all credibility. I definitely encourage you to make the video.

Comment by jimmy on How does electricity work literally? · 2020-02-24T22:44:18.845Z · score: 6 (4 votes) · LW · GW
I don't know whether AC or DC would be a better choice if we were starting from scratch now, but both systems were proposed and tried very early in the history of electrical power generation and I'm pretty sure all the obvious arguments on both sides were aired right from the start.

DC wasn't really a viable option at the start because of the transformer issue you mentioned. The local power lines carry ~100x higher voltage than what you get in your house, and the long distance power lines up to another 100x on top of that. Without that voltage step up, you'd need 100-10,000x as much wire.
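The wire-savings claim follows from a back-of-the-envelope calculation: resistive loss is I²R with I = P/V, so for a fixed power delivered and a fixed acceptable loss fraction, the required conductor cross-section scales as 1/V². A sketch (the specific voltages, power, loss fraction, and line length are assumed for illustration):

```python
def required_cross_section(power_w, voltage_v, length_m,
                           loss_fraction=0.05, resistivity=1.7e-8):
    """Copper cross-section (m^2) that keeps resistive loss at
    loss_fraction of the delivered power."""
    current = power_w / voltage_v
    # Allowed line resistance: loss_fraction * P / I^2  (= f * V^2 / P)
    max_resistance = loss_fraction * power_w / current ** 2
    # R = resistivity * length / area  ->  area = resistivity * length / R
    return resistivity * length_m / max_resistance

# 1 MW over 10 km, at household-scale voltage vs a 100x step-up:
a_low = required_cross_section(1e6, 240, 10_000)
a_high = required_cross_section(1e6, 24_000, 10_000)
print(f"Conductor area ratio: {a_low / a_high:.0f}x")  # 100^2 = 10000x
```

A 100x voltage step-up cuts the needed copper by a factor of 10,000, which is where the "100-10,000x as much wire" range comes from.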

Modern semiconductors change the game considerably, though. In a lot of areas, big iron transformers are being phased out and replaced with switching power supplies, which suggests that DC could be economically efficient now, if not for the installed base and the requirement for a 50 or 60 Hz sine wave.

A DC-based system would have the advantages of not requiring rectification for many end uses, somewhat lower corona losses in transmission, and allowing for variable-speed generators. It would come at the cost of controller-less induction motors and of clocks that use the AC signal to keep time. I'm not sure about the cost of doing the voltage step-up/step-down, because both methods are still in use. I'm not sure which would be the better choice now, but it is an interesting question.

Comment by jimmy on Who lacks the qualia of consciousness? · 2019-10-06T18:03:12.033Z · score: 8 (5 votes) · LW · GW
Over on Facebook (I don't know if it's possible to link to a Facebook post, but h/t Alexander Kruel) and Twitter, the subject of missing qualia has come up. Some people are color-blind. This deficiency can be objectively demonstrated by tasks such as the Ishihara patterns

Lacking the ability to distinguish colors well means your brain does not know which qualia to use, not that it doesn't have all of the qualia available. I'm red/green color blind (according to the tests, and difficulty determining the color of small things), but I have very distinct red and green qualia. Most of the time my experience feels like "I'm unsure if this line is red or green", which is different than "this line is red-green, as there is not actually a difference between the thing people call 'red' and the thing people call 'green'".

However, I have also had the experience of having red text show up as bright green and then switch on me. I was reading part of the Sequences back in the day, and I could tell from context that the word "GREEN" was supposed to be red (Stroop test), but my brain took that as a cue that the text was supposed to be green. When I brought my face closer to the screen to check, the text flipped to red. When I backed up, it returned to green. In between, individual pieces of each letter would start to flip color.

Comment by jimmy on The Power to Demolish Bad Arguments · 2019-09-05T18:34:20.975Z · score: 2 (1 votes) · LW · GW

Okay, I thought that might be the case but I wasn't sure because the way it was worded made it sound like the first interaction was real. ("You can see I was showing off my mastery of basic economics." doesn't have any "[in this hypothetical]" clarification and "This seemed like a good move to me at the time" also seems like something that could happen in real life but an unusual choice for a hypothetical).

To clarify though, it's not quite "doubt that it's sufficiently realistic". Where your simulated conversation differs from my experience is easily explained by differing subcommunication and preexisting relationships, so it's not "it doesn't work this way" but "it doesn't *have to* work this way". The other part of it is that even if the transcript was exactly something that happened, I don't see any satisfying resolution. If it ended in "Huh, I guess I didn't actually have any coherent point after all", it would be much stronger evidence that they didn't actually have a coherent point -- even if the conversation were entirely fictional but plausible.

Comment by jimmy on The Power to Demolish Bad Arguments · 2019-09-04T16:34:43.687Z · score: 0 (5 votes) · LW · GW

1) There is a risk in looking at concrete examples before understanding the relevant abstractions. Your Uber example relies on the fact that you can both look at his concrete example and know you're seeing the same thing. This condition does not always hold, as often the wrong details jump out as salient.

To give a toy example, if I were to use the examples "King cobra, Black mamba" to contrast with "Boa constrictor, Anaconda" you'd probably see "Ah, I get it! Venomous snakes vs non-venomous snakes", but that's not the distinction I'm thinking of so now I have to be more careful with my selection of examples. I could say "King cobra, Reticulated python" vs "Rattlesnake, Anaconda", but now you're just going to say "I don't get it" (or worse yet, you might notice "Ah, Asia vs the Americas!"). At some point you just have to stop the guessing game, say "live young vs laying eggs", and only get back to the concrete examples once they know where to be looking and why the other details aren't relevant.

Anything you have to teach which is sufficiently different from the person's pre-existing worldview is necessarily going to require the abstractions first. Even when you have concrete real-life experiences that this person has gone through themselves, they will simply fail to recognize what is happening to them. Your conclusion "I showed three specific guesses of what Michael’s advice could mean for Drew, but we have no idea what it does mean, if anything." is kinda the point. When you're learning new ways of looking at things, you're not going to immediately be able to cash them out into specific predictions. Noticing this is an important step that must come before evaluating predictions for accuracy, if you're going to evaluate reliably. You do have to be able to get specific eventually, or else the new abstractions won't have any way to provide value, but "more specificity" isn't always the best next step.

2) It seems like the main function you have for "can you give me a concrete example" is to force coherence by highlighting the gaps. Asking for concrete examples is one way of doing this, but it is not required. All you really need for that is a desire to understand how their worldview works, and you can do this in the abstract as well. You can ask "Can you give me a concrete example?", but you could also ask "What do you think of the argument that Uber workers could simply work for McDonald's instead if Uber isn't treating them right?". Their reasoning is in the abstract, and it will have holes in the abstract too.

You could even ask "What do you mean by 'exploits its workers'?", so long as it's clear that your intent is to really grok how their worldview works, and not just trying to pick it apart in order to make them look dumb. In fact, your hypothetical example was a bit jarring to me, because "what do you mean by [..]" is exactly the kind of thing I'd ask and "Come on, you know!" isn't a response I ever get.

3) Am I understanding your post correctly that you're giving a real-world example of you not using the skill you're aiming to teach, and then a purely fictional example of how you imagine the conversation would have gone if you had?

I'd be very hesitant to accept that you've drawn the right conclusion about what is actually going on in people's heads if you cannot show it with actual conversations and at the very least provoke cognitive dissonance, if not agreement and change. Otherwise, you have a "fictitious evidence" problem, and you're in essence trusting your analysis rather than actually testing your analysis.

You say "Once you’ve mastered the power of specificity, you’ll see this kind of thing everywhere: a statement that at first sounds full of substance, but then turns out to actually be empty.", but I don't see any indication anywhere that you've actually ruled out the hypothesis "they actually have something to say, but I've failed to find it".

Comment by jimmy on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2019-08-03T17:51:31.661Z · score: 7 (3 votes) · LW · GW

I wouldn't interpret Kaj as saying "Go ahead and remember false things for instrumental gain. What could possibly go wrong with that!?". Truth is obviously important, and allowing oneself to pretend "this looks instrumentally useful to believe, so I can ignore the fact that it's clearly false" is definitely a recipe for disaster.

What Kaj is saying, I think, is that the possibility of being wrong is not justification for closing one's eyes and not looking. If we attempt to have any beliefs at all, we're going to be wrong now and then, and the best way to deal with this is to keep that in mind, stay calibrated, and generally look at more rather than less.

It's not that "recovering memories" is especially error prone; it's that everything is error prone, and people often fail to appreciate how unreliable memory can be because they don't actually get how it works. If you try to mislead someone into believing that a certain thing happened, they might remember "oh, but I could have been misled", whereas if you do the exact same thing but instead mislead them into thinking "you remember this happening", then they get this false stamp of certainty saying "but I remember it!".

Comment by jimmy on Raw Post: Talking With My Brother · 2019-07-21T08:29:37.514Z · score: 6 (3 votes) · LW · GW
I think the same is true of NVC. If only one person's doing it, it's not going to work very well. It takes two. Some of my best memories are of conversations that took place between myself and somebody else schooled in NVC or something similar. Some of my worst are of applying NVC or similar techniques in a situation where the other person is used to getting their way through domineering or abusive behavior.

Hm. My experience is the opposite. I’ve found the most use in NVC type communication in cases where the other person is getting quite violent, as it can be remarkably disarming to see through the threats and care about the hurt that is driving the person to make them. It can also make it quite difficult for people to continue justifying their domineering and/or abusive behavior to themselves and others, if that’s what they’re doing.

My model of NVC is that’s useful in the way that neutron moderators can be useful. There’s a certain “gain” by which the “violence level” of communications are amplified after being expressed and received, and then having the response expressed and received. If you get multiplied by a number greater than one after going around the loop, things will melt down or explode. If one person is both interpreting and responding uncharitably, the other party is going to have to shoulder more of the burden to “moderate neutrons” and be extra clean in their communications so as to not escalate or allow for escalation. Additional effort to communicate nonviolently is going to make the most difference in the cases where you can actually cross unity. If you’re well below one to start with, then there’s no need. If you can’t get below one even with effort, it’s a bit futile (and therefore frustrating/discouraging).
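The loop-gain framing can be made concrete with a toy model (the gain numbers are made up, and "violence level" is obviously not a real scalar; this just shows the above-unity vs below-unity behavior):

```python
# Toy model of escalation as a feedback loop: each exchange multiplies
# the tension by each party's "gain". Round-trip gain > 1 compounds;
# round-trip gain < 1 decays toward calm.
def tension_after(rounds, my_gain, their_gain, initial=1.0):
    level = initial
    for _ in range(rounds):
        level *= my_gain * their_gain
    return level

# An uncharitable partner (gain 1.5) met with neutral replies (gain 1.0):
print(tension_after(5, 1.0, 1.5))   # grows every round
# The same partner met with deliberately de-escalating replies (gain 0.5):
print(tension_after(5, 0.5, 1.5))   # decays every round
```

Below a round-trip gain of one, any initial tension dies out; above one, it compounds -- and the only factor you control is your own.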

You seem pretty aware of the failure mode of trying to use neutrality/rationality as an emotional defense mechanism, and how reaching for tools like NVC out of these motivations can lead to stifling of the important emotions that need to be expressed (which, interestingly, necessarily leads to misapplication/cargo culting of the tools). Do you think that could be behind your difficulties in getting good results with NVC in the “one sided” cases? Also, to me, your conversation with your brother looks like a perfect example of using NVC with someone who presumably isn’t trained in NVC to transform things from where they were coming off as domineering/abusive to one where you two are clearly on the same side and working together. Do you conceptualize it differently?

Comment by jimmy on Self-consciousness wants to make everything about itself · 2019-07-08T19:38:24.680Z · score: 3 (2 votes) · LW · GW

Free soloing is fun for some and not others in large part for reasons like “skill in climbing”, which cannot be expected to hold the same optimal value for different people. Courage, on the other hand, is pretty universally useful, and can help in ways that are not immediately obvious.

It’s not always obvious how things could be better through the exercise of courage, for two reasons. First, the application of courage almost inevitably results in an increase of fear (how could it not, since you’re choosing not to flinch away from the fear). If you’re not exceedingly careful, it can be easy to conflate “things got scarier” with “things got objectively worse”. In situations like this, courage can often escalate things into explicit threats of violence, which are definitely more scary, and it can be easy to read “he threatened to fight me” as a turn for the worse. It’s not at all obvious until you follow through that these threats are very, very often empty — so often, in fact, that displaying willingness to let things escalate physically can be the safer thing to do (at least in my experience it has been).

Secondly, it’s not always clear what one should do with courage. All the courage in the world wouldn’t get me to free solo climb for the same reason it wouldn’t get me to play Russian roulette; it’s just not worth the risk for me. In situations like this, cousin_it suggests responding with “no, you”, but I actually think that’s a mistake. I’d actually advocate doing exactly what habryka did. Say nothing. Don’t back down, of course, but you don’t have to respond and what do you get out of responding other than encouraging that kind of bad behavior? The jerk doesn’t deserve a response.

Of course, it’d be nice to do it with less fear. It’d be nice if instead of seeing fear he sees someone looking at him as if he’s irrelevant and just waiting for him to leave (which is a pretty big punishment, actually, since it makes the aggressor feel foolish for thinking their aggression would have any effect), but that’s an issue of “fear” not “courage”, and you kinda have to accept and run with whatever fear you have since you can’t really address it on the fly.

I wouldn’t say “you should have more courage” both because I don’t see any obvious failure of courage and because you can’t “should” people into courage or out of fear, but I do think courage is an underappreciated virtue to be cultivated, and that the application of courage in cases like these makes life as a whole much more pleasant and less (invisibly and visibly) controlled by fear. This means both holding fast in the moment despite the presence of fear, as well as taking the time to work through your fears in the down time such that you’re more prepared for the next time.

Comment by jimmy on Everybody Knows · 2019-07-03T18:27:36.810Z · score: 7 (5 votes) · LW · GW

Similarly, this:

So they’ll cross their fingers rather than demand fair dice. So that they’ll stop trying to fight the war.

is often not just false, but backwards.

The reason you say “everyone knows Bob is a liar” isn’t always to protect Bob and blame the victims for being fooled. Sometimes it’s to punish Bob, by taking things from the “trial” phase to the “sentencing” phase. So long as “Bob is a liar!” is a thing that needs to be said/argued, Bob is still on trial. Once “everyone knows” Bob is a liar, you can start actually treating him like a liar and trust that people will coordinate with you instead of against you for trying to punish someone whose guilt hasn’t been established.

Things often do get stuck at the stage where (pretty much) everyone knows, but no one moves on to doing something about it because they feel they need to keep asserting the thing instead of declaring that it can be taken as granted and moving on to dealing with it.

Comment by jimmy on Selection vs Control · 2019-06-15T05:48:51.334Z · score: 2 (1 votes) · LW · GW
I agree with most of what you say here, but I think you're over-emphasizing the idea that search deals with unknowns whereas control deals with knowns.

There’s uncertainty in both approaches, but it is dealt with differently. In controls, you’ll often use Kalman filters to estimate the relevant states. You might not know your exact state because there is noise on all your sensors, and you may have uncertainty in your estimate of the amount of noise, but given your best estimates of the variance, you can calculate the one best estimate of your actual state.

There’s still nothing to search for in the sense of “using our model of our system, try different Kalman filter gains and see what works best”, because the math already answered that for you definitively. If you’re searching in the real world (i.e. actually trying different gains and seeing what works best), that can help, but only because you’re getting more information about what your noise distributions are actually like. You can also just measure that directly and then do the math.
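To make the "the math answers it for you" point concrete, here is a one-dimensional toy sketch of a Kalman update — the measurement values and variances below are invented for illustration, but notice that the gain is computed from the variances at each step rather than searched for:

```python
# Minimal 1D Kalman filter: estimate a constant true state from noisy
# measurements. All numbers are illustrative, not from the discussion above.
def kalman_1d(measurements, meas_var, init_est, init_var):
    est, var = init_est, init_var
    for z in measurements:
        # Kalman gain: how much to trust the new reading vs. the current estimate,
        # determined entirely by the two variances -- nothing to tune by search.
        k = var / (var + meas_var)
        est = est + k * (z - est)   # pull the estimate toward the measurement
        var = (1 - k) * var         # uncertainty shrinks with each update
    return est, var

# Noisy readings of a true state of (say) 5.0, with measurement variance 0.04:
est, var = kalman_1d([5.1, 4.9, 5.2, 4.8, 5.0],
                     meas_var=0.04, init_est=0.0, init_var=100.0)
```

The estimate converges toward the true value while the variance tracks how much confidence remains warranted; "trying different gains" would only make sense if `meas_var` itself were unknown.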

With search over purely simulated outcomes, you’re saying essentially “I have uncertainty over how to do the math”, while in control theory you’re essentially saying “I don’t”.

Perhaps a useful analogy would be that of numerical integration vs symbolic integration. You can brute force a decent enough approximation of any integral just by drawing a bunch of little trapezoids and summing them up, and a smart high schooler can write the program to do it. Symbolic integration is much "harder", but can often give exact solutions and isn't so hard to compute once you know how to do it.
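The brute-force side of that analogy is short enough to write out. This is the generic trapezoid rule (the integrand and step count here are just an example):

```python
# Numerical integration by summing trapezoids -- the brute-force approach.
# Example: integrate f(x) = x**2 over [0, 1]; symbolic integration gives exactly 1/3.
def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoints get half weight
    for i in range(1, n):
        total += f(a + i * h)     # interior points get full weight
    return total * h

approx = trapezoid(lambda x: x * x, 0.0, 1.0)
exact = 1.0 / 3.0   # what symbolic integration hands you directly
```

The numerical answer is only ever an approximation (here, good to several decimal places), while the symbolic answer is exact and costs nothing to re-evaluate.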

Why not both?

You can do both. I’m not trying to argue that doing the math and calculating the optimal answer is always the right thing to do (or even feasible/possible).

In the real world, I often do sorta “search through gains” instead of trying to get my analysis perfect or model my meta-uncertainty. Just yesterday, for example, we had some overshoot on the linear actuator we’re working on. Trying to do the math would have been extremely tedious and I likely would have messed it up anyway, but it took about two minutes to just change the values and try it until it worked well. It’s worth noting that “searching” by actually doing experiments is different than “searching” by running simulations, but the latter can make sense too -- if engineer time doing control theory is expensive, laptop time running simulations is cheap, and the latter can substitute for the former to some degree.

The point I was making was that the optimal solution is still going to be what control theory says, so if it's important to you to have the rightest answer with the fewest mistakes, you move away from searching and towards the control theory textbook -- not the other way around.

Most of your post is describing situations where you can't easily solve a control problem with a direct rule, so you spin up a search based on a model of the situation.

I don't follow this part.

Comment by jimmy on Selection vs Control · 2019-06-03T18:00:08.711Z · score: 17 (7 votes) · LW · GW
I would agree that a search process in which the cost of evaluation goes to infinity becomes purely a control process: you can't perform any filtering of possibilities based on evaluation, so, you have to output one possibility and try to make it a good one (with no guarantees).

This is backwards, actually. “Control” isn’t the crummy option you have to resort to when you can’t afford to search. Searching is what you have to resort to when you can’t do control theory.

When your Jacuzzi is at 60°F and you want it at 102°F, there are a lot of possible heating profiles you could try out. However, you know that no combination of “on off off on on off off on” is going to surprise you by giving a better result than simply leaving the heater on when it’s too cold and off when it’s too hot. Control theory actually can guarantee optimal results, and with some simple assumptions it’s exactly what it seems like it’d be. Guided missiles do get more complicated than this, with all the inertias and significant measurement noise and a moving target and all that, but the principle remains the same: compute the best estimate of where you stand relative to the trajectory you want to be on (where “trajectory” includes things like the angular rates of your control surfaces), and then steer your trajectory towards that. There’s just nothing left to search for when you already know the best thing to do.
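The "heater on when too cold, off when too hot" law is simple enough to sketch as a bang-bang controller. The heat-transfer numbers below are invented for illustration; the point is that the entire control law is one comparison, with nothing left to search over:

```python
# Bang-bang control of the jacuzzi: heater on below the setpoint, off above.
# heat_rate and loss_rate are made-up physical constants for this toy model.
def simulate(temp_f, setpoint_f=102.0, heat_rate=0.5, loss_rate=0.05, steps=500):
    for _ in range(steps):
        heater_on = temp_f < setpoint_f          # the whole control law
        temp_f += heat_rate if heater_on else 0.0
        temp_f -= loss_rate                      # ambient heat loss every step
    return temp_f

final = simulate(60.0)   # starts cold, settles near the 102°F setpoint
```

No sequence of "on off off on" could beat this: any step spent off while below the setpoint only delays reaching it, which is why there's nothing for a search to find.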

The reason we ever need to search is that it’s not always obvious when our actions are bringing us towards or away from our desired trajectory. “Searching” is performing trial and error by simulating forward in time until you realize “nope, this leads to a bad outcome” and backing up to before you “made” the mistake and trying something else. For example, if you’re trying to cook a meal, you might have to get all the way to the finished product before you realize that you started out with too much of one of your ingredients. However, this is a result of not knowing the composition you’re looking for and how your inputs affect it. Once you understand the objective, the process and actuators, and how things project into the future, you know your best guess of where to go at each step. If the water is too cold, you simply turn the heater on.

Searching, then, isn’t just something we do when projecting forward and evaluating outcomes is cheap. It’s what we do when analyzing the problem and building an understanding of how our inputs affect our trajectories (i.e. control theory) is expensive. Or difficult, or impossible.

Or perhaps better put, searching is for when we haven’t yet found what we want and how to get there. Control systems are what we implement once we know.

Comment by jimmy on Yes Requires the Possibility of No · 2019-05-21T19:08:02.097Z · score: 25 (8 votes) · LW · GW

Not every important concept has implications which are immediately obvious, and it's generally worth making space for things which are true even when you can't yet find the implications. It's also worth making the post.

That said, one of the biggest implications I draw from this concept is that of "seeking 'no's". If you want a "yes", then often what you can do is go out of your way to make "no" super easy to say, so that the only reason they won't say "yes" is because "yes" isn't actually true/in their best interests. A trivial example might be that if you want someone to help you unload your moving truck, giving them the out "I know you've got other things you need to do, so if you're busy I can just hire some people to help" will make it easier to commit to a "yes" and not feel resentful for being asked favors.

More subtly, if you're interested in "showing someone that they're wrong", often it's more effective to drop the goal entirely and instead focus on where you might be wrong. If you can ask things with genuine curiosity and intent to learn, people become much more open to sharing their true objections and then noticing when their views may not add up.

"Seeking 'no's" is a concept that applies everywhere though, and most people don't do it nearly enough.

Comment by jimmy on Comment section from 05/19/2019 · 2019-05-20T19:32:46.106Z · score: 30 (7 votes) · LW · GW

You're right that "feelings are information, not numbers to maximize" and that hiding a user's posts is often not a good solution because of this.

I don't think Christian is making this mistake though.

When someone is suffering from an injury they cannot heal, there are two problems, not one. The first is the injury itself — the broken leg, the loss of a relationship, whatever it may be. The second is that incessant alarm saying “THIS IS BAD THIS IS BAD THIS IS BAD” even when there’s nothing you can do.

If you want to help someone in this situation, it’s important to distinguish (and help them distinguish) between the two problems, and to come to agreement about which one you should be trying to solve: are we trying to fix the injury here, or are we just trying to become more comfortable with the fact that we’re injured? Even asking this question can literally transform the sensation of pain, if the resulting reflection concludes “yeah, there’s nothing else to do about this injury” and “yeah, actually the sensation of pain itself isn’t a problem”.

Earlier in this discussion, Vanessa said “I feel X”, and the response she got was taking the problem to be about the “X” part, and arguing that X is not true. This is a great and satisfying response so long as the perceived problem is definitely “X” and not at all “I feel”. The response wasn’t satisfying though, and she responded by saying that she thought “I feel” was enough to be worth saying.

Since it has already been said that “if the problem is X, we can discuss whether X is actually true, and solve it if it is”, Christian’s contribution was to add “and if it’s not that you think X is actually true and just want help with your feelings, here’s a way that can help”. It’s helpful in the case where Vanessa decides “yes, the problem is primarily the feeling itself, which is maladaptive here”, and it’s also helpful in clarifying (to her and to others) that if she isn’t interested in taking the nerve block, her objection must be a factual claim about X itself, which can then be dealt with as we deal with factual claims (without special regards to feelings, which have been decided to be “not the problem”).

It’s not the most warm and welcoming way to deal with feelings (which may or may not reflect information that is accurate, or perceived as accurate upon reflection), but not every space has to be warm and welcoming. There is a risk of conflating “it helps build community to help people manage their feelings” with “catering to feelings takes precedence over recognizing fact”, and that’s a nasty failure mode to fall into. If we want to manage that rule with a hard and fast “no emotional labor will be supplied here, you must manage your feelings in your own time”, that is a valid approach. And if there is a real threat of that conflation taking over, it’s probably the right one. However, there are better (more pleasant, welcoming/community building, and yes, truth-finding) methods that we can play with once we’re comfortable that we’re safe from feelings becoming a negative utility monster problem. It’s just that in order to play with them safely, we must be very clear about the distinction between “I feel X, and this is valid evidence which you need to deal with” and “I feel X, and this is my problem, which I would appreciate assistance with even though you’re obviously not obligated to fix it for me”.

Comment by jimmy on Conversational Cultures: Combat vs Nurture (V2) · 2018-12-14T19:11:37.247Z · score: 4 (2 votes) · LW · GW
[...]but when the whole point of my comment was that jimmy ignored Mary's substantive point I think it's obnoxious to then ignore my substantive point about Mary's substantive point being ignored.

FWIW, “jimmy ignored Mary’s substantive point” is both uncharitable and untrue, and both “making uncharitable and untrue statements as if they were uncontested fact” and “stating that you find things obnoxious in cases where people might disagree about what is appropriate instead of offering an argument as to why it shouldn’t be done” stand out as far more obnoxious to me.

I normally would just ignore it (because again, I think saying “I think that’s obnoxious” is generally obnoxious and unhelpful) but given your comment you’ll probably either find the feedback helpful or else it’ll help you change your mind about whether it's helpful to call out things one finds to be obnoxious :P

Comment by jimmy on Conversational Cultures: Combat vs Nurture (V2) · 2018-12-14T19:10:08.687Z · score: 11 (3 votes) · LW · GW

The exact phrasing isn't important, but conveying the right message is. As Zvi and Ruby note, that “being”/”doing”/etc part is important. “You’re dumb” is not an acceptable alternative because it does not mean the same thing. “Your argument is bad” is also unacceptable because it also means something completely different.

"Your argument is bad" only means “your argument is bad”, and it is possible to go about things in a perfectly reasonable way and still have bad arguments sometimes. It is completely different than a situation where someone is failing to notice problems in their arguments which would be obvious to them if they weren’t engaging in motivated cognition and muddying their own thinking. An inability to think well is quite literally what “dumb” is, and “being dumb” is a literal description of what they’re doing, not a sloppy or motivated attempt to say or pretend to be saying something else.

As far as “then why does it always come out that way”, besides the fact that “you’re being dumb” is far quicker to say than the more neutral “you’re engaging in motivated cognition”, in my experience it doesn’t always or even usually come out that way — and in fact often doesn’t come out at all, which was kinda the point of my original comment.

When it does take that form, there are often good reasons which go beyond “¼ the syllables” and are completely above board, explicit, and agreed upon by both parties. Counter-signalling respect and affection is perhaps the clearest example.

There are examples of people doing it poorly or with hostile and dishonest intent, of course, but the answer to “why do those people do it that way” is a very different question than what was asked.

Comment by jimmy on Expected Pain Parameters · 2018-12-14T19:06:52.491Z · score: -1 (2 votes) · LW · GW

This is not the test for whether a statement has meaning. If I say "this vaccine you're getting does not cause autism", that would be meaningful even if it's not sometimes false when applied to other vaccines. It has meaning whenever "this vaccine causes autism" describes a different world than "this vaccine does not cause autism".

It may not convey any information to you if you already know "there are no vaccines about which that statement is false", but not everyone shares that certainty, and the people who don't might benefit from reassurance.

This definitely depends on you being right about the "this vaccine doesn't cause autism" thing, of course. You have to be able to honestly and justifiably state “this vaccine does not cause autism”, as encouraging people to take vaccines under false or unjustified premises is bad. You have to maintain openness to checking the data with them and changing your own mind if you do not find what you expect to find, because if you’ve closed your mind to the data not only does that make your job of persuasion harder, it makes your job of actually being reliably right harder. I'd even go so far as to say that not only should you be willing to put your money where your mouth is, you should even be able to do it *without flinching*. This means being able to put yourself in their shoes and actually experience "okayness" yourself.

Yes, if you can't do all of these things then you should do something about it before assuring them that it's okay. However, if you have good reason to believe that the statement is always true, that just means "figure out how to do all these things" is the thing you do about it before assuring them that it's okay.

Comment by jimmy on Conversational Cultures: Combat vs Nurture (V2) · 2018-11-25T21:09:55.454Z · score: 7 (8 votes) · LW · GW

The precise phrasing isn't important, and often "growls" do work. The important part is in knowing that you can safely express your criticisms unfiltered and they'll be taken for what they're worth.

Comment by jimmy on Conversational Cultures: Combat vs Nurture (V2) · 2018-11-10T19:19:28.584Z · score: 24 (8 votes) · LW · GW
To offer a single counterexample, my wife describes herself as being sickeningly nurturing when together with one of her closest friends.

I don't think they're mutually exclusive. My response in close relationships tends to be both extra combative and extra nurturing, depending on the context.

The extra combativeness comes from common knowledge of respect, as has already been discussed. The extra nurturing is more interesting, and there are multiple things going on.

Telling people when they're being dumb and having them listen can be important. If those paths haven't been carved yet, it can be important to say "this is dumb" and prove that you can be reliably right when you say things like that. Doing that productively isn't trivial, and the fight to get your words respected at full value can get in the way of nurturing. In my close relationships where I can simply say "you're being dumb" and have them stop and say "oops, what am I missing?" I sometimes do, but I'm also far more likely to be uninterested in saying that because they'll figure it out soon enough and I actually am curious why they're doing something that seems so deeply mistaken to me. Just like how security in nurturing can allow combativeness, security in combativeness can allow nurturing.

Another thing is that when people gain trust in you to not shit on them when they're vulnerable, they start opening up more in places in which nurture is the more appropriate response. In these cases it's not that I'm being nurturing instead of being combative, it's that I'm being nurturing instead of not having the interaction at all. Relative to the extreme care that'd need to be taken with someone less close in those areas, that high level of nurturing is still more combative.

Comment by jimmy on On Doing the Improbable · 2018-10-29T16:40:12.617Z · score: 39 (14 votes) · LW · GW

The times I was able to get people to do things that they felt were too unlikely to commit to were largely about lowering the emotional costs of failure. The context is a bit different, but it seems likely that some of the same factors apply.

Using “writing HPMoR” as an example, there’s more than one thing failure could be taken to mean. One is “I tested a high risk high reward idea, and it didn’t pan out. I learned something useful about what kinds of things I can’t do (right away, at least), and it still strikes me as having been worth attempting, given what I knew at the time. If I keep trying high risk high reward ideas one of them is likely to pay out, because the idea that I’m limited by what social expectations would see as “modest” isn’t even worth taking seriously”. A completely different thing it could mean is “I was arrogant to think I had a chance at this. I learned nothing on the object level because I already knew I couldn’t do it, but on the meta level I learned that I was wrong to set this aside and hope. In hindsight, it was a mistake that never was worth trying in the first place, and if I keep trying high risk high rewards things I’m just going to keep failing because social expectations of what I’m capable of are *right*”. The people with the latter anticipation are going to be less thrilled about flipping that coin with a 50% chance of success because the other 50% hurts a lot more.

The former mindset *sounds* a lot better, and people are going to want to say “yeah, that one sounds right! I believe *that* one!” even when their private thoughts tend towards the latter mindset. If you try to get someone in the latter category to act like they’re in the former category, you’re going to run into motivation problems. You’re going to hear “You’re right, and I want to… I just can’t find the motivation”.

In order to get people to shift from “failure means I should be less confident and try less” to “failure means this particular one didn’t pan out, and it’s still worth trying more”, you have to be able to engage with (and pass the ‘ideological turing test’ of) their impulses to take failure as indicative of a larger problem. There is definitely a skill to this, and it can be tough when you can plainly see that the right answer is to “just try it”. At the same time, it’s a skill that can be learned and it does work for opening things up for change.

Comment by jimmy on Unraveling the Failure's Try · 2018-07-20T21:22:04.233Z · score: 2 (1 votes) · LW · GW

I missed this response because I hadn't found the "someone has replied to your comment" indicator.

The question "is it true" is exactly what informs me when I say "I know this fear to be irrational". I've seen situations in which one person is little more than a burden on another, and is still accepted and even taken care of much like one would do with any given loved one regardless of their practical worth. The failure I'm pointing to is that I can completely understand that line of reason, but my intuitive belief seems to be unaffected by it. The update in information created by this test didn't cascade down into my intuition, which I think is because my intuition is holding a piece (or set) of stronger beliefs that conflict with this anticipation. There is something arguing a "Yes, but..." where the 'but' is still more convincing than the 'yes'.

Is it that the information "didn't cascade down" to your intuition, or is it just that your intuition doesn't find that piece of information as convincing as you think it ought to be?

In general, when you get a "yes, but" (and *especially* when the "but" is explicitly more convincing than the "yes"), focus on the "but". But what? Yes, you understand that you've seen situations where one person sure seems to be little more than a burden and is still accepted, but that part of you still isn't convinced. Why not? What's in the "but"?

If I had to take a guess, you probably don't *want* to be little more than a burden on someone else, even if they still accept you (maybe they *shouldn't*, even). I know that's the case with other people, and if you feel the same way it would make sense that "but they'll accept me anyway" doesn't feel like it changes anything, no?

I'm not sure I follow you on the idea of lines of retreat. It seems like a 'line of retreat' is moving around an obstacle deemed too difficult rather than through it. It would be useful to accept the obstacle as insurmountable without rigorous testing if you need to move forward before you can complete the testing. But my issue is that if this obstacle is too long, then I'm constantly skirting a more optimal path. It's like walking around a forest instead of through it because you don't trust yourself to survive in the forest. What I'm after right now is how to survive in the forest, because I think it will be faster and better in the long term to learn this skill than to become really good at skirting the forest.

I'm not sure I follow you either. Are you saying that you'd rather go forward with convincing yourself of something that you think is true rather than "going around" by making a line of retreat? If so, that's not really what I'm getting at. I'm not saying "go around instead", I'm saying "*even if* you want to go forward, the best way to do that when stuck is to open up the option of going around".

I'll give you an example. I recently had a client who wanted me to hypnotize him to forget something. I pointed out to him that what he wants is to *believe differently*: he actually doesn't know for sure that the thing he's asking to forget actually happened -- after all, it's possible that I hypnotized him to think it was real to prove a point. He was "yeah, but"ing me by saying stuff like "yeah, I mean, I guess that's possible, but I don't think it's very likely" -- and then not taking the idea seriously at all. I picked apart his reasoning and let him know that doing that kind of thing to prove a point is *exactly* the kind of thing I'd do, and that I have indeed done it in the past. Eventually it got down to "yeah, I mean, everything you're saying makes sense, but I just don't believe it".

Seems irrational, no? Like, if you aren't going to open your mind to evidence, then how do you expect to learn when you're wrong? If I had doubled down on the wrongness of this decision, it would have pushed him to agreeing with what I'm saying, yet being unable to actually experience the uncertainty that I was pointing him towards. Instead, what I said was "while that may *seem* silly, that's actually a really good strategy to keep yourself from being manipulated by tricky hypnotists". I was giving him a line of retreat by saying "we don't have to do this", and putting into words his reluctance to let me inspire doubt about such seemingly fundamental things. I didn't do it because I thought he should *take* it, but because I knew giving him the option would keep him from getting hung up and stuck on it *regardless* of which he felt was the better option. Reminding him that he didn't *have to* keep going forward turned out to be a *really quick* way of getting him to accept his passage through the forest. It reminded him that he *wanted* to get his perspective manipulated by me, that this is why he was there, and so he admitted that it really was a serious possibility and took it appropriately seriously, and we were back on track.

I hadn't heard "Confidence All The Way Up" as a name, but I'm familiar with the concept; in some places I have this, and more often than not other people have called it a weakness: that I would too readily dismiss other people's ideas as "not aligned with the evidence" because I was spending more time developing my own theory than thinking about the implications of the statements of others. Part of me would think "So now I'm selfish because I don't care about things that are easily disproven?" and part of me would think "Maybe I didn't understand what they actually meant." The second part recently started winning (probably due to a deterioration of a key relationship and not necessarily based on evidence in the strictest sense) and so I've been purposefully suppressing Confidence All The Way Up and trying to be a better listener. But I think he has a point that this is a useful way to function, and I would do well to apply it here. I don't think I've sunk into hopelessness, so much as I've gotten stuck.

The weakness isn't "being confident", it's in "dismissing the ideas of people who he wants to continue relating to before they agree that he would be right to".

The question is "does the fact that their ideas are not aligned with the evidence as I see it mean that I should dismiss their views?", and I think the answer is a pretty strong "no", in general. You don't have to object that people's views aren't aligned with the evidence just because (in your view) they are not. You don't have to squash your feeling of confidence to listen once you realize that you can listen for reasons other than "I'm likely wrong". You can still listen out of a desire to understand where they're coming from *regardless* of whether they turn out to be righter than you had known. You can refrain from objecting simply by realizing that they don't (yet) want to hear what you think.

Maybe you *didn't* understand what they actually meant. Maybe you did, and they just didn't recognize how much freaking thought you put into making sure you're right, and taking into account what other people think. I've had both happen. Listen because they don't see eye to eye with you, and you want to figure out how to get there.

Comment by jimmy on Expected Pain Parameters · 2018-07-16T02:42:16.941Z · score: 6 (4 votes) · LW · GW
So how could I possibly give someone advice on how much apologizing will hurt? If they're the type of person who takes embarrassment super seriously, it will be totally different than if it's no big deal.

It's not so much "how much will this hurt" as it is "how much should this hurt". In other words, "how much does it have to hurt before I reconsider". In the running case, for example, you can't know before their run whether they'll experience mild muscle soreness or step on a nail. You want them to know that if it feels like they've stepped on a nail, this isn't what you're talking about, and they shouldn't try to run through it.

There is a distinction between "this is how intense the sensations might be" and "this is the thing they signify, and how bad it is". A lot of the subjective experience of "pain" has to do with the meaning attached to it, and the reaction to that meaning.

In jiu jitsu for example, beginners are often not taught heel hooks in part because the sensation of a knee ligament about to rupture doesn't always stand out as a big deal, and so people will sometimes hurt themselves because they don't notice the warning signs. At the same time, you can get people screaming in pain once their foot is turned the wrong way because all of a sudden the meaning has changed and they no longer feel "okay". Other people can have the same thing happen to them and just kinda look at it like "oops, I screwed that up" because they simply aren't overwhelmed by the idea that their ligaments just tore and their limb isn't pointing the right way anymore.

When you're talking to someone who is in pain (or needs to do something which will be painful), there are two things you want to communicate. One is that it's okay, despite whatever bad thing happened, and the other is what the bad thing actually is. When you can do those two things, their entire experience can change dramatically.

The same principles apply to emotional concerns. For example, if someone is going to feel embarrassed by something to a degree which seems appropriate and okay, then all you need to communicate is "Yes, this is going to be embarrassing. It's okay". If they're going to be way more embarrassed than is called for (in your opinion), then you *also* want to communicate that the damage isn't as severe (and doesn't call for such an extreme aversion) as it seems. It's the "this sensation means your knee is about to explode" training in reverse. In this case, you aren't just saying "embarrassment is okay", you're also saying "it's not even that embarrassing". Be prepared for people to not just take your word on this, of course, but that is the point of contention.

One way to deal with it is to actually paint them a picture of what it's like from your perspective so that they can see that it's not that big a deal (if they find your story convincing). Another is to just show awareness that it seems super horrible and worth being embarrassed over, and that you don't expect them to be convinced, but that you actually don't think it's that big a deal, for what it's worth. If your opinion means something to them, this can still have a significant effect.

As I read it, the advice Alicorn is giving here relates to the part where you don't want to miss the fact that someone might perceive an embarrassing event as way worse than you do (accurately or not) and then tell them "you should put up with it so that you can do X" without noticing that you might be asking them to endure a much bigger perceived (and maybe real) cost than you actually think it's worth. For example, you might want to say "I'd probably apologize to them if I were in your shoes. And yeah, it's kinda gonna suck. I wouldn't be smiling about it for sure, and I might have a hard time being in a good mood for the rest of the evening, but it's not like it's worth traumatizing yourself over. If it feels like something you can't handle, then that's fine too. It's not the end of the world if this person doesn't get their apology".

Comment by jimmy on Expected Pain Parameters · 2018-07-16T01:57:36.256Z · score: 10 (4 votes) · LW · GW

I agree with the idea that it's important for people to understand their pain when they aren't going to just flinch from it.

The framing you chose seems odd to me though. Instead of saying "if you're going to suggest people do something painful, you should present them with a model/make sure they understand" or saying "if someone is suggesting you do something painful, make sure you have a model", you say "*they* should present *you* with a model". Are you intending to suggest to your audience that they should feel *entitled* to having a model accompany the initial request, above and beyond the fact that it's important to understand?

Comment by jimmy on Unraveling the Failure's Try · 2018-06-09T22:00:29.504Z · score: 21 (4 votes) · LW · GW
> So my question is, now that I am aware of the node, how do I unravel it? My understanding of counter-conditioning relies on specific, actionable behaviors. "Every time I want to eat ice cream I will think about my fitness goal, and instead work out; with enough time and careful planning, my desire for ice cream will be overpowered by my working out habit and I will (virtually) no longer struggle with my desire to eat ice cream." I've had success updating and adjusting other habits with this form, but I'm struggling to apply it to this problem. I fear it's the nature of the problem itself. "Even my strongest counter-conditioning strategy is too weak to deal with how pathetic I am."

It looks like the issue is that you want to use your "apply the solution" techniques before you know what the solution is.

If you could know that the fear is silly, then you simply apply the ice cream fix. "Every time I feel myself fearing this, I will remember all the reasons it's not true and I will feel better". The reason you haven't been able to apply that technique seems to be that you're not actually convinced that the fear is wrong. You say "I fear this" and you state the fear in quotes rather than outright saying it, but you also haven't said "and I know this is wrong because X". It sounds like that fear is still just sitting there unaddressed either way.

Generally, the first thing I ask myself in situations like these is "is it true?". Are my strongest counter-conditioning strategies too weak to deal with how pathetic I am? Must I become useful in as many aspects as possible? Will people really not want anything to do with me if I don't? Can my presence alone actually be enjoyed, or do I have to constantly raise my ability to help others before they can accept me?

These questions can be kinda hard to answer sometimes, because -- especially when phrased this way -- it can seem like things are "not allowed" to be one way, and when you're not allowed to think "no", then it's really hard to verify when it's "yes". For example, maybe I really don't want to accept (as more than just "a fear") that I'm fundamentally not good enough for others' acceptance unless I keep leveling up. In that case I'm likely to flinch away from looking at the answer to that question, and that makes it hard to really see and accept when others do accept me.

Rather than trying to force yourself to look at the answers anyway or force yourself to believe what you think is right, I'd focus on leaving yourself a line of retreat to help make it more okay if not everyone wants to spend time with you unless you get better at whatever it is. "Okay, so I don't know whether it's true or not, but if it were true that I need to level up before people could accept me, then what would I want to do?" "Get better", probably. Okay, so get better. What else?

Maybe it gets a bit more complicated. "But I don't know how to get better if even my strongest counter-conditioning strategies aren't good enough for my situation?" Okay, so what's the line of retreat there? If they aren't, then what do you do? I dunno, maybe post on LW to see if anyone has any useful input.

Nate Soares has a really good post related to this kind of thing, which is well worth reading. As he says, at some point you do have to bottom out and say "yeah, if I'm that far gone, then I fail and die". Until that point though, there are a hell of a lot of things you can do to prepare for the various possibilities, and once you map them out the mapping can take the place of the anxieties.

And once the anxieties are gone, you'll be back to knowing what you want to do, and just having to remember to do it.

Comment by jimmy on Explicit and Implicit Communication · 2018-03-22T16:51:33.897Z · score: 3 (1 votes) · LW · GW

Only if forced.

Comment by jimmy on On Defense Mechanisms · 2018-03-07T21:13:13.257Z · score: 16 (4 votes) · LW · GW

It feels like the same kind of reason that you need to be gentle with your body after running a marathon. I could try to be more specific about what might be going on that makes it difficult to keep it up, but the point is that it seems to be fundamentally difficult to remain unfatigued, and if you don't slow down when fatigued, you're not going to move very well and are likely to break something.

Are you asking more "why can't you mentally run unlimited marathons in a row without slowing down" or more "what damage do you risk doing when continuing through 'mental fatigue' that makes it something you have to heed?"?

Comment by jimmy on On Defense Mechanisms · 2018-03-05T20:59:56.320Z · score: 8 (3 votes) · LW · GW
> Question for discussion: How would you suggest we use the idea of defense mechanisms in theory or practice?

It's definitely important to keep from messing up big, but I think it's often underestimated how much value there is to be had in noticing and changing defensive responses when you aren't stressed and burned out. When you're burned out, it's often tough to figure out what you want to do instead because it means adding another problem to solve.

When you're more or less "okay" though, defensive responses are so much easier to change because they're likely not there out of necessity but rather just "hadn't noticed yet". If you look closely, they're still all over the place and the value of non-defensive responses adds up.

The strategy I suggest is to always notice whether you're being defensive, ask yourself whether you're "okay" and can afford to not be defensive, drop the defensiveness when you can afford to, and, when you feel like you can't afford to do without defensiveness, be defensive without shame and with an active awareness of what you're losing, what conditions would cause you to change tactics, and of what it is, so that it can be contained. This way the easy changes become easier (because you know you always have the option of backing off), and failure becomes easier to recover from (because you're not digging yourself deeper trying to avoid the inevitable, or failing to prepare for it properly).

Comment by jimmy on On Defense Mechanisms · 2018-03-05T20:49:04.310Z · score: 2 (4 votes) · LW · GW

Sometimes people are just dumb, and repeatedly do things that don't seem to accomplish anything because they don't know how to do anything better (because they don't understand why they're doing it in the first place). In other words, yes, there has to be a reason for them doing it, no, it shouldn't be expected to be a good reason or to stand up to reflection.

In my personal experience, "I'm feeling cranky at innocent parties because of a rough day" feels like a response to having fewer cognitive resources to spend on whatever is being asked of me. It's the kind of thing where if it's not too bad, "hey, I'm really not up to this. I had a rough day and need some space or gentle handling" would feel like an attractive alternative. However, sometimes even coming up with that is difficult, so the temptation is to take the easy option of lashing out, which communicates the same thing ("either give me my space or walk on eggshells, because I don't want to deal with more shit when my plate is already full") in a much more hostile manner. "Is it worth the costs of being hostile?" is the relevant question, but people often run into limits of just being overwhelmed and not being able to actually compute all the answers before picking a choice and running with it.

Does that help answer your question, or am I trying to explain the wrong part?

Comment by jimmy on Circling · 2018-02-22T21:20:28.314Z · score: 3 (1 votes) · LW · GW

Back when I was first getting into hypnosis, we talked about my experiments with hypnosis and all the terrifying possibilities that they implied. Even though I'd expect you'd have taken basically the same stance even without those conversations, I imagine it is still a significant contributing factor towards your take on hypnosis, and so I feel compelled to note that I no longer feel this way about it.

To be clear, I don't think anything we talked about is "wrong", and the fact that the uncertainty mostly resolved on the "less scary" side isn't very reassuring. I still can't think of any circumstance with any hypnotist that I would allow them to "hypnotize" me, in the central meaning of the word, and I do still think people are insufficiently afraid of being hypnotized. That stuff is all more or less the same.

The big difference is that I now recognize more of how "responding hypnotically" is a really important part of both learning and relating to people, and that it's possible to do it without risking falling into any of the obvious traps that enable the scary bad possibilities. "Engage critical faculties, keep in mind evidence, develop models, etc" yes. Do that and "Listen to the voice. Respond and receive. Be open to the update, etc" -- to the extent that you can do that without losing track of the former (and work to increase this extent as much as you can).

I don't even think it's always crazy to trade off some control for quicker learning, so long as this decision itself is made very carefully with full input of critical faculties, you understand the potential traps, and the person guiding you really can be verified to be worthy of the required trust, etc.

However, it's not necessary either. I've gotten better at it myself without sacrificing my need for control, and I have a very "control freaky" friend who is also figuring out how to respond hypnotically without giving up any control, and has gotten some really cool results from it. It's taken her four years to be able to accept half the suggestions a good hypnotic subject can accept in five minutes, but on the upside, since she is deciding for herself which things to accept hypnotically, not only does she not expose herself to unnecessary risk, she's able to more efficiently spot what would be useful to her in a normal conversation without anyone having to lean on it as if it were an actual hypnotic suggestion.

I guess it's kinda like exploring caves that have a lot of goodies. Just make sure you know your way out.

Comment by jimmy on Pain, fear, sex, and higher order preferences · 2018-02-22T19:21:55.771Z · score: 3 (1 votes) · LW · GW

You're doing a little sleight of hand by throwing all "avoiding pain" into one large bucket (and then deciding that you want to keep some of it), while analyzing "avoiding fear" as applied to one specific threat (and then deciding that it's "irrational"). You could just as easily say "no, I don't approve of feeling pain when I need to kick an important game winning goal. It's high order vs low order", or "I absolutely approve of keeping my fears, because they also protect me from real threats".

I don't find the distinction to be useful, except in modeling how other people relate to their own impulses. Even when people tell me that their fear is "irrational" and that they want it thrown out, I treat it more like the way you refer to the pain aversion case, and it works.

For example, my friend was telling me about her "irrational fear of heights" that she wanted gone, so I had her climb up a rock wall over concrete and had her try to hold the frame that there was zero risk and that the fear was entirely irrational, while I kept pointing at all the failure modes and asking her to explain how she knew those wouldn't happen. This forced her to take the fear seriously, and once she did, she was able to integrate it into her decision making process more efficiently, and was therefore less paralyzed by fear when rock climbing, without throwing out any of the valuable information the fear carries. Similarly, there are times when you can look at the pain and decide that it's not necessary and watch it melt away into ticklish sensations or nothingness (and then kick the ball without wincing or anticipating badness).

In both cases, I'd look at it as a signal that there's value unaccounted for in the decision you're wanting to make, and once you properly account for it, all conflict and discomfort vanishes.

Comment by jimmy on Is skilled hunting unethical? · 2018-02-18T22:31:38.221Z · score: 6 (2 votes) · LW · GW

The way we test our heuristics is by seeing if they point to the correct conclusions or not, and the way that we verify whether or not the conclusion is correct is with evidence. A single example is only a single example, of course, but I don't see how the failure mode can be illustrated any more clearly than in the case of vaccines -- and precisely because of the strong evidence we have that our initial impulses are misdirected here. What kind of example are you looking for, if it's supposed to satisfy the criteria of "justifiably and convincingly show that the heuristic is bad" and "no strong evidence that the heuristic is wrong here"?

I'll try to rephrase to see if it makes my point any clearer:

Yes, of all things that children immediately see as bad, most are genuinely bad. Vaccines may be good, but sharing heroin needles under the bridge is bad, stepping on nails is bad, and getting a bull horn through your leg is bad. It's not a bad place to start. However, if you hear a mentally healthy adult (someone who was once a child and has access to and uses this same starting point) talking about letting someone cut him open and take part of his body out, my first thought is that he was probably convinced to make an exception for surgeons and tumors/infected appendix or something. I do not think it calls for anywhere near enough suspicion to drive one to think "I need to remind this person that getting cut open is bad and that even children know this". It's not that strong a heuristic and we should expect it to be overruled frequently.

Bringing it up, even as a "prior", is suggesting that people are under-weighting this heuristic relative to its actual usefulness. This might be a solid point if there were evidence that things really are that simple, and that children are morally superior to adults. However, children are little assholes, and "you're behaving like a child" is not a compliment.

It might be a good thing to point out if your audience literally hadn't made it far enough in their moral development to even notice that it fails the "Disney test". However, I do not think that is the case. I think that it is a mistake, both relative to the LW audience and to the meat eating population at large, to assume that they haven't already made it that far. I think it's something that calls for more curiosity about why people would do these things that fail the Disney test.

Comment by jimmy on Is skilled hunting unethical? · 2018-02-17T23:08:45.756Z · score: 12 (4 votes) · LW · GW

"How easy would it be to make a childrens movie" is not a good heuristic. Think of how easy it would be to make a movie to scare children away from getting their shots compared to a movie that gets them comfortable with doctors stabbing them with needles.

Yes, in general people sticking sharp things in you is bad, and if you don't know anything else you should probably start there. However this is just an uninformed starting point, and it does not call for suspicion when society has come to the consensus that vaccines are great -- they were all kids at some point too, and then they were convinced to overcome their initial aversion to needles. Let's not discard that evidence. You don't get to "back up" to childhood without persuading people why everything they've learned growing up is a lie. In real life things are complicated, and we often must conclude things that viscerally disagree with our first impulses if we are to progress beyond childhood.

Comment by jimmy on Hufflepuff Cynicism on Crocker's Rule · 2018-02-17T20:12:13.724Z · score: 3 (1 votes) · LW · GW

It's also possible to take the gradual approach even if you only have one opportunity to give them feedback, though it is a little outside the bounds of how people normally interact and takes a bit of skill.

Basically, instead of "I need you to have already demonstrated [...] so I can't give you feedback", you can say "I need you to demonstrate [...] before I give you feedback", and then not take away their opportunity to do so on the spot, if they wish.

There is a difference in experience between saying things that you want to believe/portray vs things that are honest reports of actual simulated experience. The former not only sounds different, but you can often see the emotional response yourself, if you look.

Suppose you ask someone "how would you respond if I told you that your mom died". They might look at it from the outside and make all sorts of guesses and statements about how they might respond. These generally don't mean much, because they rely on the person's self-model, which is often not that good even if they aren't swayed by motivations to paint themselves in a good light. However, another possible way to answer the question is to put yourself in that situation, and let yourself see first hand what kinds of thoughts/feelings/emotions come up. This isn't always foolproof either, since they might be imagining the stimulus a bit differently than it is going to turn out in real life, but at least their responses will be genuine.

Often, people will try to give you the first type of answer ("Words can't hurt me! Sticks and stones! Crocker's rules!"), and you're going to need to nudge them towards giving the second type of answer before you can trust what they're saying. The way you do this is by leaning on the potential for it to be a real situation, which they need to actually be prepared for, but without spilling the secret about whether or not it's actually real, so that you can keep it as a (realistically simulated) hypothetical. It's a bit of a balancing act.

While they are too far on the side of "make believe", you can work things towards "real situation" by holding the frame that you can't believe/trust their declarations, or by painting an increasingly detailed and vivid picture and speaking more and more as if it's actually a real thing. If they start to fall too far on the side of "unprepared response to a harsh reality", you can start to remind them that you haven't actually confirmed that it is indeed reality yet, and start to bring them back to the perspective of seeing it as a hypothetical -- at which point it becomes pretty clear whether they were ready to hear that kind of thing in a way that would make you feel comfortable telling them, if it were true. Of course, in order to be able to do this second step, you have to be able to credibly say that the fact that you're inquiring about their response to this hypothetical doesn't necessarily mean that it's true. For example, "It's impossible to hurt my feelings" might be tested with "Really? Even if everyone you've ever known and loved told you that they were putting up with you because they didn't think you could handle seeing how they really saw you?", even if the reality is just that you personally think they're pretty cool, but sometimes a little annoying.

In short, what you want to avoid is taking self-statements without confirmation as truth and then jumping all the way from complete make believe to complete truth, without ever stopping to see how they actually deal with things as they traverse from "totally untrue" to "potentially true" to "truth".

Slow it down, and traverse that ground in a controlled fashion, while always allowing a comfortable way out.

Comment by jimmy on Beware Social Coping Strategies · 2018-02-05T18:30:55.868Z · score: 20 (5 votes) · LW · GW
> I think there's something to that, but it's not that general. For example, some people can be very kind to others but harsh with themselves. Some people can be cruel to others but lenient to themselves.

Even if the behavior itself seems vastly different, that doesn't necessarily mean they aren't just different instances of the same "social program". For example, if you're "kind" to others but harsh with yourself, it might be because you don't know how to hold people accountable without being harsh, and correctly predict that you wouldn't be able to get away with it with other people (but where are you going to go if you don't like yourself?).

Comment by jimmy on introducing: target stress · 2018-01-15T22:00:25.819Z · score: 9 (3 votes) · LW · GW
> i think it's positive sum to exist-in-a-state which supports people using more explorative grammatical styles, because language is pretty constraining for expression for a lot of people and pushing them to optimize their language use helps a lot

I agree with this, and personally I didn't even notice the lack of capital letters until it was pointed out. I also agree that sometimes it makes sense to place hoops in front of people to make sure you have their attention.

I do not, however, think that this is one of those times.

To the extent that people care, it's a battle that is going to have to be fought somewhere before people accept your way of writing, and I think you should reconsider where/how/if you want to fight this battle. I'd do it differently.

Comment by jimmy on Hero Licensing · 2017-12-04T05:35:10.471Z · score: 5 (2 votes) · LW · GW

Good question.

I generally wouldn't ask questions like "is his disagreement explained by status alone or by facts alone?". I generally ask questions more like "if he saw the person saying these things as higher or lower 'status', how much would this change his perception of the facts?" (and others, but this is the part of the picture I think is most important to illuminate here). If a Fields medalist looks at your proof and says "you're wrong", you're going to respond differently than if a random homeless guy said it, because when a Fields medalist says it you're more likely to believe that your proof is flawed (and rightly so!). Presumably there's no one you hold in high enough regard that if they were to say "the earth is flat" you'd conclude "it's more likely that I'm wrong about the earth being round and all of the things that tie into that than it is that this person is wrong, so as weird as it is, the earth is probably flat", however even there status concerns change how you respond.

Coincidentally, just as I started drafting my response to this I got interrupted to go out to dinner and on the way was told about Newman's energy machine and how it produced more energy than it required, how Big Oil was involved in shutting it down, and the like. This certainly counts as "something I think is false" in the same way Bob thinks "the earth is flat" is false, but how, specifically, does that justify asking for evidence? The case against perpetual motion machines is very solid and this is not what a potentially successful challenge would look like (to put it lightly), so it's not like I need to ask for evidence to make sure I shouldn't be working on perpetual motion machines or something. Since I can't pretend I'd be doing it for my personal learning, what could motivate me to ask?

I could ask for evidence because of a sense of "duty", but it was clear to me that he wasn't just gonna say "Huh, I guess my evidence is actually incredibly weak. Thanks!", so it's not like he was actually going to stop being wrong in the time/effort allotted. I could ask for evidence to make it clear to the "audience" that he has no good evidence, but there was no one there that was at risk of believing in perpetual motion machines.

Why should I ask him for evidence, if not for reasons having to do with wanting him to afford more respect to the things I think, less to what he thinks, and to punish him by making him feel stupid if he tries to resist?

Comment by jimmy on Hero Licensing · 2017-11-24T22:36:18.739Z · score: 11 (2 votes) · LW · GW

>Yeah, a better way at gesturing at what Eliezer means by "status-blind" might be "doesn't reflexively assign status or deference to people based on a felt sense of how authoritative/respectable/impressive people are likely to view them as being."

Yes, but I think the difference is in "how people are likely to view them" vs "how I see them", and not in "doesn't reflexively assign status or deference".

>As a "status-sighted" person, I don't think the difference feels internally like a distinct "emotion"; it feels more like people's impressiveness is just baked into the world as an obvious, perceptible fact. It's just goddamn different to meet a senator in full regalia versus meet the head of a local anarchist group.

This is what I meant when I said that status is invisible when agreed upon. In "Competent Elites", Eliezer wrote that he was expecting to find "fools in business suits" and was shocked that these people were "visibly much smarter than average mortals" and felt "more alive", even. This was two years before the "status blind" Eliezer of 2010, but this is a pretty clear depiction of his experience noting their surprisingly high status as an obvious perceptible fact about these people (as measured by him). Heck, he even described Jaynes as having a "magical aura of destiny".

If he were to be unimpressed by some dressed up senator, it wouldn't be because he is incapable of having the same status responses, just that the senator's fancy pants obviously didn't earn it to him.

>Indeed, I think that's another factor that makes it really hard for me to notice when my behavior is authoritativeness-influenced is that it doesn't feel subjectively distinct from when part of my mind is quietly making a rational, calculated decision to play the politics game.

Well sure, that's how you implement these concerns. Similarly to how you can eat because you're hungry and avoid painful things because they "hurt" without consciously thinking through whether or not you need more food or the spicy pepper is going to damage you. When you get down to the bottom of it though, emotions and even "physical pain" will completely change as you start to see things differently (and of course, won't change so long as you're just insisting to yourself that you "should" see it differently).

>I agree with sil ver, too. This is one of the conclusions I reached when I tried to figure out why my output wasn't as amazing as Eliezer's or Scott's, and what I could do to change that.

Good. I hope it changes :)

Comment by jimmy on Hero Licensing · 2017-11-24T17:26:51.515Z · score: 19 (7 votes) · LW · GW

I don't think "status blind", the thing that leads to thoughts like "I don't get how/why people keep bringing status into this instead of just looking at the arguments", is what it is made out to be.

I used to be that kind of person. Back in college, my thesis adviser told me that he liked that I was willing to tell him when he was wrong, and that most of his students wouldn't do that (that was the upside; there were downsides too). It completely blew my mind because I literally couldn't grasp how it could be any other way. When I would point out that he was wrong, it wasn't a sneaky way of saying "I'm smarter and higher status than you", it was about the physics, and status games just weren't the level we were speaking on.

However, that doesn't mean that "status" wasn't a useful concept for describing how I interacted with people in general, or that it wasn't important to my interactions with him, even. It just means that the status requirements were satisfied, so that we could get to the good stuff. He saw himself as someone who was "smart, but not above making mistakes" and saw his students as "not having the same expertise, but not below understanding things and noticing his mistakes", and I saw them similarly. Since we were in agreement there, there was room for me to say "Yo, you're missing something" and for him to listen and say "Oops. What did I miss?". If I had seen him as above making mistakes and myself as below being able to notice them, I wouldn't be able to say "you're missing something" because I wouldn't be able to believe that it's true over the hypothesis "I'm the one that is wrong". I might be able to say "so I know I'm wrong, but can you explain to me how I'm wrong here?", but if the difference is extreme enough it kinda crowds out your ability to even notice your own thoughts -- since what do they matter anyway if you already know they're hopelessly wrong? On the other side, if the professor sees themselves as beyond having fundamental misunderstandings and their students as below being able to have better understandings and the ability to see their mistakes, then so long as you can't change their notions of status there is nothing you can do to communicate "you are fundamentally wrong about how this works" without them hearing it as "I don't realize how far out of my depth I am right now". I've been there too.

Status is invisible when it's agreed upon, and when it's not, it always looks like it's the other person who is the arrogant fool who can't get over their status issues. "Status blindness" isn't about opting out of status, it's about having a different notion of status, not understanding how others determine status, and not having the perspective to notice that this is what's going on. It's certainly possible to be right that your notion of status is better, and for this to outweigh any loss of status in other people's flawed perspectives. At the same time, to the extent that being able to interact with these other people is actually important, it really pays to understand what they're doing when they have status objections and how to reach agreement on status levels, so that you can get to the good stuff and start communicating.

Comment by jimmy on Hero Licensing · 2017-11-24T08:37:34.321Z · score: 32 (12 votes) · LW · GW

In particular, I think the key distinction is between "I demand you justify yourself to me" and "I would appreciate it if you could help satisfy my curiosity". Even if the person is a potential investor, it's best to decline to jump through hoops and to wait for them to shift to genuine curiosity.

If someone asks "how come you think you're good enough to do this?", I generally interpret this as "You seem to be implying that I should see you as high status. If you are going to demand I see you as high status, then I counter-demand that you back it up. If you don't back up your active bids for status, I will conclude that you're a faker and declare you to be low status". The correct response is not to try to control your status in their mind in the first place. To this question, I'd probably go with something like "I might not be. I don't know.", while emphasizing that this is a very real possibility to me. Showing agreement and giving real weight to the idea that you might not deserve the status claim they see implied is something you can't do if you're trying to make a grab for that status, so it tends to defuse those concerns and make it harder to continue framing you as a status grabber. At the same time, it isn't false modesty, and since you aren't pretending to know that you're incapable, you don't lose anything either.

There's a whole lot of this that goes on below conscious awareness, just in how people carry themselves, and simply working on your underlying frames can do a lot to prevent this kind of thing from ever becoming an issue. Still, there will always be cases where someone is status-insecure enough to keep insisting on framing you as a status grabber even after you say "I might not be". It gets pretty hard to keep that up, though, if every time they try you respond with credible evidence that this isn't what you're doing. Sooner or later they pretty much have to give up on that framing and accept that you're willing to let them see you however they see you, and to afford you whatever status they feel you deserve.

If they're bothering to try to reject your status bids and they showed up in the first place, the status they already afford you is usually plenty high to fuel some genuine curiosity about how you can be status-secure while holding open some very strange possibilities. When you see that shift, you finally have an unprejudiced ear for your answer to "what makes you think you might be able to do this?". The answer likely won't make much sense to them anyway if you think very differently than they do, but it will at least create an opening for them to start noticing things and weighing evidence, and they can't really rule it out the way they otherwise would have.