Posts

Meetup: Garden Grove meetup 2012-05-15T02:17:14.042Z
26 March 2011 Southern California Meetup 2011-03-20T18:29:16.231Z
October 2010 Southern California Meetup 2010-10-18T21:28:17.651Z
Localized theories and conditional complexity 2009-10-19T07:29:34.468Z
How to use "philosophical majoritarianism" 2009-05-05T06:49:45.419Z
How to come up with verbal probabilities 2009-04-29T08:35:01.709Z
Metauncertainty 2009-04-10T23:41:52.946Z

Comments

Comment by jimmy on [Linkpost] Practically-A-Book Review: Rootclaim $100,000 Lab Leak Debate · 2024-04-04T17:35:22.470Z · LW · GW

The frustrating thing about the discussion about the origins is that people seldom show recognition of the priorities here, and instead get lost in the weeds.

You can get n layers deep into the details, and if the bottom is at n+1 you're fucked. To give an example I've seen people talking about in this debate: "The lab was working on doing gain of function to coronaviruses just like this!" sounds pretty damning, but "actually the grant was denied, do you think they'd be working on it in secret after they were denied funding?" completely reverses it. Then after the debate, "Actually, labs frequently write grant proposals for work they've already done, and frequently are years behind in publishing" reverses it again. Even if there's an odd number of remaining counters, the debate doesn't demonstrate it. If you're not really, really careful about this stuff, it's very easy to get lost and not realize where you've overextended onto shaky ground.

Scott talks about how Saar is much more careful about these "out of model" possibilities and feels ripped off because his opponent wasn't, but at least judging from Scott's summary it doesn't appear he really hammered on what the issue is here and how to address it.

Elsewhere in the comments here Saar is criticized for failing to fact check the dead cat thing, and I think that's a good example of the issue here. It's not that any individual thing is too difficult to fact check; it's that when all the evidence is pointing in one direction (so far as you can tell), you don't really have a reason to fact check every little thing that makes total sense, so of course you're likely not to do it. If someone argues that clay bricks weigh less than an ounce, you're going to weigh the first brick you see to prove them wrong, and you're not going to break it open to confirm that it's not secretly filled with something other than clay. And if it turns out it is, that doesn't actually matter, because your belief didn't hinge on this particular brick being clay in the first place.

If a lot of your predictions turn out to be based on false presuppositions, that might be an issue. If it turns out the trend you based your perspective on just isn't there, then yeah, that's a problem. But if that's not actually the evidence that formed your beliefs, and they're just tentative predictions that aren't required by your belief under question, then it means much less. Doubly so if we're at "there exists a seemingly compelling counterargument" and not "we've gotten to the bottom of this, and there are no more seemingly compelling counter-counterarguments".

So Saar didn't check if the grant was actually approved. And Peter didn't check if labs sometimes do the work before writing grant proposals. Or they did, and it didn't come through in the debate. And Saar missed the cat thing. Peter did better at this game of argument "whack-a-mole" than Saar did, and better than I expected, but what is that worth? Truth certainly makes this easier, but so do preparation and debate skill, so I'm not really sure how much to update here.


What I want to see, more than "who can paint an excessively detailed story that doesn't really matter and have it stand up to surface level scrutiny better", is people focusing on the actual cruxes underlying their views. Forget the myriad of implications n steps down the road which we don't have the ability to fully map out and verify; what are the first few things we can actually know, and what can we learn from those by themselves? If we're talking about a controversial "relationship guru", postpone discussions of whether clips were "taken out of context" and what context might be necessary until we settle whether this person is on their first marriage or fifth. If we're wondering if a suspect is guilty of murder, don't even bother looking into the credibility of the witness until you've settled the question of whether the DNA matches.

If there appears to be a novel coronavirus outbreak right outside a lab studying novel coronaviruses, is that actually the case? Do we even need to look at anything else, and can looking at anything else even change the answer?

To exaggerate the point to highlight the issue: if there were unambiguously a million wet markets that are all equivalent, and one lab, and the outbreak were to happen right between the lab and the nearest wet market, you're done. It doesn't matter how much you think the virus "doesn't look engineered", because you can't get to a million to one that way. Even if you somehow manage to make what you think is a 1000:1 case, a) if your analysis is sound, it still almost certainly came from the lab, and b) if you conclude otherwise, either your analysis there or the million-to-one starting premise is flawed. And if we're looking for a flaw in our analyses, it's going to be a lot easier to find flaws in something relatively concrete like "there are a million wet markets just like this one" than in whatever is going into arguing that it "looks natural".
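
To make the arithmetic of that exaggerated scenario explicit, here it is in odds form -- a minimal sketch using the made-up numbers above, not anyone's actual estimates:

$$\underbrace{\frac{10^{6}}{1}}_{\text{location prior (lab : market)}} \times \underbrace{\frac{1}{10^{3}}}_{\text{``looks natural'' evidence}} = \frac{10^{3}}{1} \text{ in favor of the lab}$$

The "looks natural" argument moves the odds by three orders of magnitude and still loses by three more; to actually flip the conclusion, one of the two inputs has to be wrong.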

So I really wish they'd sit down and hammer out the most significant and easiest to verify bits first. How many equally risky wet markets are there? How many labs? What is the quantitative strength of the 30,000 foot view "It looks like an outbreak of chocolatey goodness in Hershey, Pennsylvania"? What does it actually take for arguments to contain their leaks to this degree, and can we realistically demonstrate that here?
 

Comment by jimmy on Defense Against The Dark Arts: An Introduction · 2024-01-04T18:35:18.637Z · LW · GW

The difference between what I strive for (and would advocate) and "epistemic learned helplessness" is that it's not helpless. I do trust myself to figure out the answers to these kinds of things when I need to -- or at least, to be able to come to a perspective that is worth contending with.

The solution I'm pointing at is simply humility. If you pretend that you know things you don't know, you're setting yourself up for failure. If you don't wanna say "I dunno, maybe" and can't say "Definitely not, and here's why" (or "That's irrelevant and here's why" or "Probably not, and here's why I suspect this despite not having dived into the details"), then you were committing arrogance by getting into a "debate" in the first place.

Easier said than done, of course.

Comment by jimmy on Defense Against The Dark Arts: An Introduction · 2024-01-04T17:55:16.485Z · LW · GW

I think "subject specific knowledge is helpful in distinguishing between bullshit and non-bullshit claims." is pretty clear on its own, and if you want to add an example it'd be sufficient to do something simple and vague like "If someone cites scientific studies you haven't had time to read, it can sound like they've actually done their research. Except sometimes when you do this you'll find that the study doesn't actually support their claim".

"How to formulate a rebuttal" sounds like a very different thing, depending on what your social goals are with the rebuttal.

I think I'm starting to realize the dilemma I'm in. 

Yeah, you're kinda stuck between "That's too obvious of a problem for me to fall into!" and "I don't see a problem here! I don't believe you!". I'd personally err on the side of the obvious, while highlighting why the examples I'm picking are so obvious.

I could bring out the factual evidence and analyze it if you like, but I don't think that was your intention

Yeah, I think that'd require a pretty big conversation and I already agree with the point you're trying to use it to make.

Comment by jimmy on Defense Against The Dark Arts: An Introduction · 2023-12-28T05:07:11.270Z · LW · GW

I did get feedback warning that the Ramaswamy example was quite distracting (my beta reader recommended flat eartherism or anti-vaxxing instead). In hindsight it may have been a better choice, but I'm not too familiar with geology or medicine, so I didn't think I could do the proper rebuttal justice.


My response to your Ramaswamy example was to skip ahead without reading it to see if you would conclude with "My counterarguments were bullshit, did you catch it?".

After going back and skimming a bit, it's still not clear to me that they're not.

The uninformed judge cannot tell him from someone with a genuine understanding of geopolitics.

The thing is, this applies to you as well. Looking at this bit, for example:

What about Ukraine? Ukrainians have died in the hundreds of thousands to defend their country. Civil society has mobilized for a total war. Zelensky retains overwhelming popular support, and by and large the populace is committed to a long war.  

Is this the picture of a people about to give up? I think not.  

This sure sounds like something a bullshit debater would say. Hundreds of thousands of people dying doesn't really mean a country isn't about to give up. Maybe it's the reason they are about to give up; there's always a line, and who's to say it isn't in the hundreds of thousands? Zelensky having popular support does seem to support your point, and I could go check primary sources on that, but even if I did, your point about "selecting the right facts and omitting others" still stands, and there's no easy way to find out if you're full of shit here or not.

So it's kinda weird to see it presented as if we're supposed to take your arguments at face value... in a piece purportedly teaching us to defend against the dark art of bullshit. It's not clear to me how this section helps even if we do take it at face value. Okay, so Ramaswamy said something you disagree with, and you might even be right, and maybe his thoughts don't hold up to scrutiny? But even if so, that doesn't mean he's "using dark arts" any more than that he just doesn't think things through well enough to get to the right answer, and I don't see what that teaches us about how to avoid BS besides "Don't trust Ramaswamy".

To be clear, this isn't at all "your post sucks, feel bad". It's partly genuine curiosity about where you were trying to go with that part, and mostly that you seem to genuinely appreciate feedback.

My own answer to "how to defend against bullshit" is to notice when I don't know enough on the object level to be able to know for sure when arguments are misleading, and in those cases refrain from pretending that I know more than I do. In order to determine who to take how seriously, I track how much people are able to engage with other worldviews, and which worldviews hold up and don't require avoidance techniques in order to preserve the worldview.
 

Comment by jimmy on Is being sexy for your homies? · 2023-12-27T08:10:46.904Z · LW · GW

The frequency explanation doesn't really work, because men do sometimes get excess compliments and it doesn't actually become annoying; it's just background. Also, when women give men the kind of compliments that men tend to give women, it can be quite unwanted even when infrequent.

The common thing, which you both gesture at, is whether it's genuinely a compliment or simply a bid for sexual attention, borne out of neediness. The validation given by a compliment is of questionable legitimacy when paired with some sort of tug for reciprocation, and it's simply much easier to have this kind of social interaction when sexual desire is off the table the way it is between same sex groups of presumably straight individuals.

For example, say you're a man who has gotten into working out and you're visiting your friend whom you haven't seen in a while. If your friend goes wide eyed, saying "Wow, you look good. Have you been working out?" and starts feeling your muscles, that's a compliment because it's not too hard for your friend to pull off "no homo". He's not trying to get in your pants. If that friend's new girlfriend were to do the exact same thing, she'd have to pull off "no hetero" for it to not get awkward, and while that's doable it's definitely significantly harder. If she's been wanting an open relationship and he hasn't, it gets that much harder to take it as "just a compliment" and this doesn't have to be a recurring issue in order for it to be quite uncomfortable to receive that compliment. As a result, unless their relationship is unusually secure she's less likely to compliment you than he is -- and when she does she's going to be a lot more restrained than he can be.

The question, to me, is to what extent people are trying to "be sexy for their homies" because society has a semi-intentional way of doing division of labor to allow formation of social hierarchies without having to go directly through the mess of sexual desires, and to what extent people are simply using their homies as a proxy for what the opposite sex is into and getting things wrong because they're projecting a bit. The latter seems sufficient and a priori expected, but maybe it leads into the former.

Comment by jimmy on Social Dark Matter · 2023-11-28T04:40:38.500Z · LW · GW

 I want there to be a way to trade action for knowledge- to credibly claim I won't get upset or tell anyone if a lizardman admits their secret to me- but obviously the lizardman wouldn't know that I could be trusted to keep to that, 

 

The thing people are generally trying to avoid, when hiding their socially disapproved-of traits, isn't so much "People are going to see me for what I am", but that they won't.

Imagine you and your wife are into BDSM, and it's a completely healthy and consensual thing -- at least, so far as you see. Then imagine your aunt says "You can tell me if you're one of those BDSM perverts. I won't tell anybody, nor will I get upset if you're that degenerate". You're still probably not going to be inclined to tell her, because even if she's telling the truth about what she won't do, she's still telling you that she's already written the bottom line that BDSM folks are "degenerate perverts". She's still going to see you differently, and she's still shown that her stance gives her no room for understanding what you do or why, so her input -- hostile or not -- cannot be of use.

In contrast, imagine your other aunt tells you about how her friend's relationship benefitted a lot from BDSM dynamics which match your own quite well, and then mentions that they stopped doing it because of a more subtle issue that was causing problems they hadn't recognized. Imagine your aunt goes on to say "This is why I've always been opposed to BDSM. It can be so much fun, and healthy and massively beneficial in the short term, but the longer term hidden risks just aren't worth it". That aunt sounds worth talking to, even if she might give pushback that the other aunt promised not to. It would be empathetic pushback, coming from a place of actually understanding what you do and why you do it. Instead of feeling written off and misunderstood, you feel seen and heard -- warts and all. And that kind of "I got your back, and I care who you are even if you're not perfect" response is the kind of response you want to get from someone you open up to.

So for lizardmen, you'd probably want to start by understanding why they wouldn't be so inclined to show their true faces to most people. You'd want to be someone who can say "Oh yeah, I get that. If I were you I'd be doing the same thing" for whatever you think their motivation might be, even if you are going to push back on their plans to exterminate humanity or whatever. And you might want to consider whether "lizardmen" really captures what's going on or if it's functioning in the way "pervert" does for your hypothetical aunt.

Comment by jimmy on Making Bad Decisions On Purpose · 2023-11-22T20:03:38.605Z · LW · GW

I get that "humans are screwed up" is a sequences take, that you're not really sure how to carve up the different parts of your mind, etc. What I'm pointing at here is substantive, not merely semantic. 

  1. The dissociation of saying "humans are messed up"/"my brain is messed up" feels different than saying "I am messed up". The latter is speaking from a perspective that is associated with the problem and has the responsibility to fix it from the first person. This perspective shift is absolutely crucial, and trying to solve your problems "from the outside" gets people very very caught up in additional meta level problems and unable to touch the object level problem. This is a huge topic.
  2. I had as strong an aversion to homework as anyone, including homework which I knew to be important. It's not a matter of "finding a situation where you notice part of your mind attempting to write the bottom line first", but of noticing why that part of your mind will try to write the bottom line first, and relating to yourself in a way that eliminates the motivation to do so in the first place. I don't have situations where part of my mind attempts to write the bottom line first... that I'm aware of, at least. There are things that I'm attached to, which is what causes the "bottom line first" issues and which is still an obstacle to be overcome in itself, but the motivation to write the bottom line first can be completely obsoleted by stopping and giving more attention to the possibility that you've been trying to undervalue something that you can sense is critically important. This mental move shifts all of your "my brain is being irrational" problems into "I don't know what to do on the object level"/"I don't know why this is so important to me" problems, which are still problems but they are much nicer because they highlight rather than obscure the path to solution.
  3. "I want some kind of language to distinguish the truth seeking part from the biased part". I don't think such a distinction exists in any meaningful sense.

In my model, there's a part of your brain that recognizes that something is important (e.g. social time), and a part of your brain that recognizes that something else is important (e.g. doing homework), and that neither are "truth seeking" or "biased", but simply tugging you towards a particular goal. Then there's a part of your brain which feels tugged in both directions and has to mediate and try to form this incoherent mess into something resembling useful behavior.

This latter part wants to get out of the conflict, and there are many strategies to do this. This is another big topic, but one way to get out of the conflict is to simply give in to the more salient side and shut out the less salient side. This strategy has obvious and serious problems, so making an explicit decision to use this strategy itself can cause conflict between the desire "I want to not deal with this discomfort" and "I want to not drive my life into the ground by ignoring things that might be important". 

One way to attempt to resolve that conflict is to decide "Okay, I'll 'be rational', 'use logic and evidence and reason', and then satisfy the side which is more logical and shut out the side that is 'irrational and wrong'". This has clear advantages over the "be a slave to impulses" strategy, but it has its own serious issues. One is that the side that you judge to be "irrational" isn't always the side that's easier to shut out, so attempting to do so can be unsuccessful at the actual goal of "get out of this uncomfortable conflict".

A more successful strategy for resolving conflicts like these is to shut out the easy-to-shut-out side, and then use "logic and reason" to justify it if possible, so that the "I don't want to run my life into the ground by making bad decisions" part is satisfied too. The issue with this one comes up when part of you notices that the bottom line is getting written first and that the pull isn't towards truth -- but so long as you fail to notice, this strategy actually does quite well, so every time your algorithm that you describe as "logical and reasoned" drifts in this direction it gets rewarded and you end up sliding down this path. That's why you get this repeating pattern of "Dammit, my brain was writing the bottom line again. I shall keep myself from doing that next time!".

It's simply not the case that you have a "truth seeking part" and a "biased part". You contain a multitude of desires, and strategies for achieving these desires and mediating conflicts between these desires. The strategies you employ, which call for shutting out desires which retain power over you unless they can come up with sufficient justification, require you to come up with justifications and find them sufficient in order to get what you want. So that's what you're motivated to do, and that's what you tend to do.

Then you notice that this strategy has problems, but so long as you're working within this strategy, adding the extra desire of "but don't fool myself here!" simply becomes another desire that can be rationalized away if you succeed in coming up with a justification that you're willing to deem sufficient ("Nah, I'm not fooling myself this time! These reasons are sound!", "Shit, I did it again didn't I. Wow, these biases sure can be sneaky!").

The framing itself is what creates the problems. By the time you are labeling one part "truth seeking" and one part "biased, and therefore important to not listen to", you are writing the bottom line. And if your bottom line includes "there is a problem with how my brain is working", then that's gonna be in your bottom line.

The alternative is to not purport to know which side is "truth seeking" and which side is "biased", and simply look, until you see the resolution.

Comment by jimmy on Making Bad Decisions On Purpose · 2023-11-12T17:53:14.382Z · LW · GW

1) You keep saying "My brain", which distances you from it. You say "Human minds are screwed up", but what are you if not a human mind? Why not say "I am screwed up"? Notice how that one feels different and weightier? Almost like there's something you could do about it, and a motivation to do it?


2) Why does homework seem so unfun to you? Why do you feel tempted to put off homework and socialize? Have you put much thought into figuring out if "your brain" might be right about something here?

In my experience, most homework is indeed a waste of time, some homework very much is not, and even that very worthwhile homework can be put off until the last minute with zero downside. I decided I'd stop putting it off to the last minute once it actually became a problem, and that day just never came. In hindsight, I think "my brain" was just right about things.

How sure are you that you'd have noticed if this applies to you as well?

3) "If your brain was likely to succeed in deceiving you".

You say this as if you are an innocent victim, yet I don't think you'd fall for any of these arguments if you didn't want to be deceived. And who can blame you? Some asshole won't let you have fun unless you believe that homework isn't worthwhile, so of course you want to believe it's not worth doing.

Your "trick" works because it takes off the pressure to believe the lies. You don't need to dissociate from the rest of your mental processes to do this, and you don't have to make known bad decisions in order to do this. You simply need to give yourself permission to do what you want, even when you aren't yet convinced that it's right.

Give yourself that permission, and there's no distortionary pressure so you can be upfront about how important you think doing your homework tonight really is. And if you decide that you'd rather not put it off, you're allowed to choose that too. As a general rule, rationality is improved by removing blocks to looking at reality, not adding more blocks to compensate for other blocks.

It's not that "human minds are messed up" in some sort of fundamental architectural way and there's nothing you can do about it, it's that human minds take work to organize, people don't fully recognize this or how to do it, and until that work you're going to be full of contradictions.

Comment by jimmy on Using Negative Hallucinations to Manage Sexual Desire · 2023-10-26T19:34:58.322Z · LW · GW


As an update, the 3rd thing I tried also failed. Now I ran out of things to try.

I wouldn't be discouraged. There are a lot of ways to do "the same thing" differently, and I wouldn't expect a first try success. In particular, I'd expect you to need a lot more time letting yourself "run free" -- at least "in sim" -- and using that to figure out what exactly it is that you want and how to actually get it without screwing anything else up. Like, "Okay, if I get that, then what?"/"What's so great about that" and drilling down on that felt sense until something shifts.

Sure took me a while, at least. And I wouldn't claim to be "finished".

The problem is that anything that is non-sexual love seems to be corrupted by sexual love, in a way that makes the non-sexual part worse. E.g. imagine you have a female friend that you like to talk to because she is a good interlocutor. [...] I expect that if you would now start to have sex with that female friend your mind would get corrupted by sexual desire. E.g. instead of thinking about what to discuss in the next meeting, a sexual fantasy would pop into your head.

How sure are you that this is actually a problem? Is it the hypothetical female friend that has an issue with just focusing on sex as much as you'd be tempted to, or is it a you thing? The former can definitely complicate things, but if it's the latter I'd be inclined to just run with it and see what happens. It's a lot harder to get distracted by the possibility of having sex immediately after having it.

My current strategy is to just not think anything sexual anymore, and be sensitive to any negative emotions that arise. I then plan to use my version of IDC on them to figure out what the subagents that generate the emotions want. So far it seems that to some extent realizing this corruption dynamic has cooled down the sexual part of my mind a bit. But attempt 3 only failed yesterday so this cooling effect might only be temporary.

Yeah, that's the inhibitory side of the equation. Kinda like fasting for a while and realizing that it's not necessary/helpful/appropriate to panic about being hungry, and chilling out for a bit.

But if you don't eat sooner or later or make an earnest effort to obtain sufficient food, it might not stay so easy to continue to set the hunger aside.


I feel like I have figured out a lot of stuff about this general topic in the last month. Probably more than in the rest of my life so far.

:) good.

I also realize now that this just solves the problem that I have had with romance all along. That is the reason why I did not like how my mind behaved. My mind normally just starts to love somebody immediately, overwriting all of the other aspects of the relationship. This is exactly not what I want love to be.

This does sound like premature/overattachment. I bet watching what happens to the other aspects of the relationship puts a damper on that impulse.

The ideal version of this is getting maximally close in a relationship via some context, and only once you get maximally close in that context do you extend the context. And then again you optimize for getting as close as possible in the new extended context, before extending the context again. And you add things to the context sorted such that you add the less impactful stuff first. Adding the component of love to the context should be very late in this chain. [...] I want love to be the thing that follows after everything else is maximally good. And I want the same to be true for other attributes. E.g. before feeling friendly with somebody, you should like them as much as possible, and get as close to them as possible, without that friendliness feeling there.

This sounds pretty idealized. "Should" is a red flag word here, as it covers over what "is", the reasons things are the way they are, and why you want things to be another way instead. In context, "maximally" is one too, because "maximally" on any dimension rarely matches "optimally" -- so whence this motivation, and what is being avoided?

That's not to say that it's wrong or misguided as ideals often have important value, but the real world tends to be messy and bring surprises.
 

Comment by jimmy on Using Negative Hallucinations to Manage Sexual Desire · 2023-10-09T16:25:02.163Z · LW · GW

Good, I'm glad my comments had the effect I was aiming for.

It's an interesting and fun project for sure. A few notes...

* I wouldn't expect to get it all figured out quickly, but rather for things to change shape over the course of years. Pieces can change quickly of course, but there's a lot to figure out and sometimes you need to find yourself in the right experience to have the perspective to see what comes next.

* I'd also caution against putting the cart too far ahead of the horse, even if you have pretty good justification. "Extension of non-sexual love" sounds right, but there's also just so much weird and unexpected stuff that's hard to foresee in sufficient detail, so it's likely that your perspective on what this entails isn't complete.

* Freedom to explore is freedom to learn, but also freedom to fail -- like removing training wheels from a bike so that you can engage with the process of balancing, but also risk falling. Managing this tradeoff can be tricky, especially when the cost of failure gets high.

* "Allocating specific periods of time to run free" reminds me of how I've been approaching my daughters developing appetite. Monday through Saturday she has to eat what we make her so that she gets good nutrition and builds familiarity with good foods, and on Sunday's she's free to learn exactly how much ice cream is *too* much and otherwise eat whatever she wants. I'm not entirely sure what to think yet and the arbitrariness of it bothers my sense of aesthetics a bit, but I'm relatively happy with how it's going so far and I'm not really sure how to do it any less arbitrarily in context.

Comment by jimmy on Using Negative Hallucinations to Manage Sexual Desire · 2023-09-19T08:28:25.446Z · LW · GW

I feel like had the technique been "Imagine ice cream tastes like pure turmeric powder", it would basically be the same technique.

 

It would.

In that case, I predict people would not have had these (from my perspective) very weird reactions.

We would. I would, at least, and I predict that others would too because the fundamental reason remains.

I haven't tried this, but maybe this would work for somebody who is fantasizing about eating ice cream, which causes them to eat too much ice cream.

If you think that you've been eating "too much" ice cream, presumably you have reason to believe some undesirable consequences will follow from eating this much ice cream. In this case, you can just imagine this will be the result of eating ice cream. There is no need to live in fantasy land in order to not eat things that reality supports not eating -- you just have to exit the fantasy you're in that is driving you to eat it.

I don't mean theoretically. While it does get more nuanced, this is the basis of how I relate to my tastes for food, and as a result I don't have any temptation to eat too much ice cream or anything else I recognize to be unhealthy. I used to, but it no longer appeals to me. Writing this reminds me how delicious liver is, and that I need to eat some more.

I am not reflectively stable. 

Right. And these are your opportunities to work towards fixing that :)

 I have the problem of having random sexual thoughts. It's not about imagining having sex with some person you love or anything like that. 

I'm not making any assumptions about what kind of sexual thoughts must have prompted this, nor any stance about what kind of sexual thoughts would be appropriate for you. That's for you to decide with yourself. If you say it's not working the way you want it to, I believe you. It's not uncommon.

What I'm pointing out is that you don't actually need to feed yourself false training data in order to do this, and doing so actively impedes the process of cohering into something resembling reflective stability. Whatever the valid reasons to not do the problem behavior, those can be used to motivate change, and when you do that you get far more interesting and better results.

It's not as simple as "Here's something I'm convinced is a valid reason, now will it go away?", but it is not theoretical either. Integrating one's sexual drives does change their shape, and this can have quite pronounced effects in ways that you wouldn't know to anticipate in advance.

Comment by jimmy on Using Negative Hallucinations to Manage Sexual Desire · 2023-09-15T18:36:41.055Z · LW · GW

Reality is that you have junk between your legs. You engage in this thought experiment "What if I didn't?". You realize that if reality were different than it is, it would call for a different response than it seems to call for when you are looking at reality. So far so good, no darkness in noticing this.

You then go on to apply the response to the imagined falsehood to reality, knowing that you only reached this response because you were imagining a falsehood. This is fundamentally "dark" and "irrational" because it is building and acting upon known delusion.

The fact that you are still aware that you have primary sexual organs and expect the result to get instantly reversed when it stops doesn't mean it's not dark; it's just an argument that you will be able to contain the darkness, and it's really, really hard to actually do that.

If nothing else, having a technique like that which "works" removes the motivation to figure it out without deluding yourself. This process of "imagine a different premise, get a different felt result" works just as well when the imagined premise isn't false or known to be false, so you can just as easily imagine a different accurate premise and reach your desired conclusion -- if you can actually justify your desired conclusion, that is. 

The "hard part" isn't in "changing desires to match what they should be", its figuring out what they "should be" in the first place. If sex feels more meaningful than rubbing flat skin on flat skin, maybe it is. And maybe you should grapple with that until you know what to do with it. If you think you know why it isn't, then maybe you should picture that, and see if you actually feel compelled by your own argument.

Comment by jimmy on Why You Should Never Update Your Beliefs · 2023-08-03T15:50:52.406Z · LW · GW

It's not that flat earth arguments sound equally persuasive to people (they don't). It's that the reason they don't sound persuasive is that "this group they like" says not to take the arguments seriously enough to risk being persuaded by them, and they recognize that they don't actually understand things well enough for it to matter. The response to a flat earth argument is "Haha! What a silly argument!", but when you press them on it, they can't actually tell you what's wrong with it. They might think they can, but if pressed it falls apart.

This is more subtle than the "guessing the teacher's password" problem, because it's not like the words have no meaning to them. People grasp what a ball is, and how it differs from a flat disk. People recognize basic things like "If you keep going long enough in the same direction, you'll end up back where you started instead of falling off". It's just that the reasoning required to figure out which is true isn't something they really understand. In order to reason about what it implies when things disappear over the horizon, you have to contend with atmospheric lensing effects, for example.

In a case like that, you actually have to lean on social networks. Reasoning well in such circumstances has to do with how well and how honestly you're tracking what is convincing you and why.

Comment by jimmy on Is Kennedy a Nazi? · 2023-07-31T17:30:00.460Z · LW · GW

Setting aside the object level question here, trying to redefine words in order to avoid challenging connotations is a way to go crazy.

If someone is theorizing about a conspiracy, that's a conspiracy theory by plain meaning of the words. If it's also true, then the connotation about conspiracy theories being false is itself at least partly false. 

The point is to recognize that it does belong in the same class, to notice how accurate/strong those connotations are for this particular example of that reference class, and to let the connotations shift to match as you defy them where appropriate.

If you try to act like a conspiracy theory "isn't a conspiracy theory" when it's true, then you have to write your bottom line before figuring out whether it's true or not, and that doesn't actually work for coming to correct beliefs.

Comment by jimmy on Why You Should Never Update Your Beliefs · 2023-07-31T17:15:30.875Z · LW · GW


There's an important and underappreciated point here, but it's not quite right.

Conspiracy theorists come up with crazy theories, but they usually aren't so crazy that average people can see for themselves where the errors are. Flat earthers can debate round earthers and actually make better points, because your average round earther doesn't know how to deduce the roundness themselves and is essentially just taking people's word for it. For the round earther to say "Hm. I can't see any problem with your argument" and then to be convinced would be an error. Their bias towards conformity is an active piece of how they avoid reaching false conclusions here.

However I don't think any of the round earthers in those debates would say that the flat earthers were convincing, because they were never charitable enough to those arguments for it to sound reasonable to them and the opposing arguments never felt strong relative to the force of conformity. "Don't change your beliefs" doesn't just protect against being persuaded by flat earthers as a round earther, it protects from being persuaded by round earthers as a flat earther, and being persuaded that you don't have a boyfriend anymore after he dumped you. If something *actually* seems convincing to you, that's worth paying attention to.

The defense here isn't to ignore evidence, it's to recognize that it isn't evidence. When you've fallen for three or four scams, and you pay attention to the fact that these kinds of things haven't been panning out, they actually get less convincing. Like how most people just don't find flat earth arguments convincing even if they can't find the flaw themselves ("Yeah, but you could make up arguments of that quality about anything").
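
To put that in likelihood-ratio terms (a minimal sketch of the point, not a claim about any particular numbers): once you know that skilled arguers can produce convincing-sounding cases for roughly anything,

$$\frac{P(\text{sounds convincing} \mid \text{claim is true})}{P(\text{sounds convincing} \mid \text{claim is false})} \approx 1$$

and so "this sounds convincing" carries almost no weight either way. Discounting it isn't ignoring evidence; it's noticing that it never was much evidence.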

And if you try critical thinking, you’ll either agree with the expert consensus (having wasted your time thinking), disagree with the experts (in which case you’re still more likely than not to be incorrect), or suspend judgment (in which case you’ve both wasted your time and are still likely to be incorrect). Exceptions only exist when the expert class is biased or otherwise unsuitable for deference. It’s better in most cases to avoid thinking for yourself.

This presupposes that you are not giving the experts the respect they deserve. It's certainly possible to err on this side, but people err on the other side all the time too. "Expert class is biased or otherwise unsuitable for deference" isn't a small exception, and your later point "most of the views you hear aren’t independent at all" further supports this.

The goal is to take expert opinion, and your own ability to reason on the object level, for what they're worth. No more, no less. 

Any advice to simply trust one or the other is going to be wrong in many important cases.


Don’t take ideas seriously. Disagree with them even without any arguments in your favor.

Don't take ideas any more seriously than you can take your own ability to reason, and don't ignore your own inability to reason. If you can't trust your own ability to reason, don't take seriously the idea that any given idea is wrong either. Humility is important.

Comment by jimmy on Pulling the Rope Sideways: Empirical Test Results · 2023-07-30T16:33:06.613Z · LW · GW

A Spanish windlass works (in part) on the same principle.

Comment by jimmy on Pulling the Rope Sideways: Empirical Test Results · 2023-07-30T16:31:00.793Z · LW · GW

I'd say "Don't be that guy who injects themselves into the middle of a conversation about something else, and cause everyone to oppose you by trying to coopt the conversation to make it about your pet cause".

And "Instead, introduce your influence into the things people are already fighting for and not looking at, so that they get the most progress on the issue they're fighting by building on your input (rather than choosing to pick an additional battle with you)."

For example, I certainly wouldn't position myself by saying "Regardless of where we draw the line on abortion (i.e. how much we murder babies/attempt to control women by regulating their bodies), what matters more is..."

On the other hand, I would argue for gun rights by emphasizing that the purpose of the second amendment is to protect minorities from oppression by giving them "veto power", since it shifts the direction gun rights advocates would be pulling from, and the response is that gun rights advocates will pull along that line too instead of fighting it. Importantly, this isn't just a "rhetorical trick", but the actually better foundation in the first place: one which is more widely recognizable as a solid justification, and is in fact what many/most gun rights advocates are trying to pull towards in the first place, even though they don't know how to and can't verbalize it well enough to pull accurately. "Shifting the direction of pull to one that is more true" is a good idea, as a rule of thumb.

It's a little more complicated of a maneuver, since it also positions the debate in a way that connects with another line the opposition tends to try to pull in an opposing direction, and in which the directions people think they're pulling and are actually pulling are very confused, but I think it demonstrates the "pull from behind" concept regardless.

Comment by jimmy on Pulling the Rope Sideways: Empirical Test Results · 2023-07-28T15:32:27.382Z · LW · GW

Here's a thought experiment to illuminate what I expect you're seeing:

You're pulling a rope south against a group of people pulling the rope north. The group in front of you now starts pulling the rope towards north-north-west. What do you do? Do you A) begin to side step west so that you can continue to pull due south, attempting to rotate the rope from its current direction, or B) shift your weight so that you don't get pulled sideways, and continue to pull against the rope, which now means pulling somewhat easterly?

Now imagine you're pulling a rope south against a group of people pulling the rope north. Only this time, the group behind you begins pulling the rope south-south-west. Do you A) shift your weight so as to retain your position on the ground and pull the rope against both groups of people, attempting to kink the rope, or do you B) side step to maintain your position along the line, and continue to pull along the rope, which now means pulling somewhat westerly?

I'm guessing that people are going to choose B and B, which means that the middle is the worst position to pull from, since you cause everyone to automatically and unthinkingly oppose you. If you pull from the rear instead, and have people on the back of each team pulling to the same side, I bet you'll get different results.

Comment by jimmy on My "2.9 trauma limit" · 2023-07-18T05:42:20.336Z · LW · GW

He was essentially gaslighted into thinking he had to sit there and suffer about it, rather than saying "oops" and laughing it off.

He already knew how to relate to pain pretty well from his older brothers playfully "beating him up" in what is essentially a rough game of "tickling" that teaches comfort with mild/non-harmful pain. In fact, when I stopped to ask him if it was the pain that he was distressed about, his response -- after briefly saying "Yeah!" and then realizing that it didn't fit -- was that when he feels pain his brain interprets it as "ticklish", and that it therefore didn't actually hurt and instead "just tickles".

Everyone else was uncomfortable for him though, and while he was prepared to laugh off a burn that was relatively minor all things considered, he wasn't prepared to laugh off a strong consensus of adults acting like something definitely not okay happened to him, so as a result he was pressured into feeling not-okay about it all.

Comment by jimmy on My "2.9 trauma limit" · 2023-07-03T22:11:23.395Z · LW · GW

I disagree with the notion that we should come up with different words for things which share underlying structure but which don't conform to our expectations about what "trauma" looks like, or that we should treat "meditators who have Seen The Matrix" as weird edge cases that don't count and should be ignored when coming up with language.

The alternate perspective I offer is to view the successful meditators as people who simply have a more clear view of reality and therefore a better idea of how to define terms which cleave reality at the joints. The reasons it's important to cleave reality at its joints are obvious in an abstract sense, but less obvious is that by doing this you actually change how pain is experienced, and it doesn't require years of meditation.

My favorite example of this is when my kid cousin burned his hand pretty bad, and I found him fighting back tears as everyone tried to console him and offer ice. No one had any idea that their understanding of pain/suffering was meaningfully flawed here because the kid was clearly a central case of their concept of "hurt" and not some "meditator who has Seen The Matrix". No one saw their own responses to the situation as "trauma responses" because "it's not overwhelming" and "just trying to help, because I feel bad for him", but their actions were all in an attempt to avoid their own discomfort at seeing him uncomfortable, and that failure to address the uncomfortable reality is the exact same thing and led to the exact same problems.

It's worth noting that they were doing it because they didn't know better, not because they didn't have the mental strength to resist even if they did, but it's exactly that "Well, it doesn't count as trauma because it's not that intense" thinking that allowed them to keep not knowing better instead of noticing "Wow, I'm uncomfortable seeing this kid injured and distressed like this", and proceeding as makes sense. In that case, simply asking if it was the pain that was distressing him was all it took for him to not be distressed and not even perceive the sensations as "painful" anymore, but you can't get there if you are content with normal conflations between pain/suffering/meta-suffering/etc.

When people don't see themselves as "trauma limited" it's sometimes true, but it's also often that they don't recognize the ways in which the same dynamics are at play because they don't have a good reference experience for how it could be different or a good framework to lead them there. Discarding "intuitive" language and working only with precise language that lays bare the conflations is an important part of getting there.

Comment by jimmy on How I apply (so-called) Non-Violent Communication · 2023-05-17T21:20:29.645Z · LW · GW

If any onlookers (possibly aligned with Bob, possibly not) say, "Hey, um, you might not want to say that, it carries some risk of escalating to violence", I want the culture to provide a strong answer of "No, Bob will not do that—or if he does, it proves to everyone that he's monstrous and we'll throw him in jail faster than you can say 'uncivilized'.  Civilians should act like there's no risk to speaking up, and we will do our best to make this a correct decision."

I see where you're coming from, but it doesn't actually work except for in the egregious cases and NVC highlights a more complete picture that includes the non-egregious cases. If you can't say "I think maybe we should get pizza" without Bob explicitly threatening to punch you in the face, then yes, that is a serious problem and it is crucial that Bob gets shut down.

However, there are two important points here.

One is that even if people respond in the way you prescribe, the person being threatened probably doesn't want to be punched in the face before you haul Bob off, and will likely be swayed by the threat anyway. If you try to pretend this doesn't exist, and say "Oh no, Bob isn't threatening because if he did that would be bad and we'd respond then", then Bob gets to say "Oh yeah, totally not threatening. Would be a shame if someone punched you in the face for suggesting we get pizza. Wink wink." and carry out his coercion while getting off scot free. This isn't good. In order to stop this, you have to make sure Bob feels punished for communicating the threat, even though the threat was "just words".

The second one, which gets at the heart of the issue, is that your prescribed response to Bob threatening violence is to threaten counterviolence (and in the spirit of this conversation, I'll explicitly disclaim here that I'm not saying this is "bad"). It's important that people feel free to express their values and beliefs without fearing violence for contributing to the cooperative endeavor, but "No risk to threatening violence" can't work and is the opposite of what you are trying to do with Bob "speaking up" about what he will do to anyone who suggests getting pizza.

Most real world conflicts aren't so egregious as "I will punch anyone who suggests getting pizza". Usually it's something like: Adam lightly bumps into Bob, and Bob says "Watch where you're going, jerk", Adam says "Don't call me a jerk, asshole", Bob says "Call me an asshole again and see what happens", Adam says "If you touch me I'll kill you", and then eventually someone throws the first punch. Literally everything said here is said from a place of "I'm only threatening violence to suppress that guy's unjustified violence", and the "initial aggression" -- if there was any -- was simply not being careful enough not to bump into someone else. And "how careful is careful enough?" isn't the kind of question we can agree on with enough fidelity and reliability to keep these unstable systems from flying off the rails.

The idea that "Unprovoked violence should be suppressed with zero tolerance [backed by willingness to use violence]" immediately explodes if "microaggressions" are counted as "violence", and so given that policy there's reason to push back against applying the term "violent" to smaller infractions. However, that's just because it's a bad policy. Smaller levels of aggression still exist, and if you have to pretend to not see them then you de facto have infinite tolerance for anti-social behavior just below threshold, and clever Bobs will exploit this and provoke their victims into crossing the line while playing innocent. It's a pattern that comes up a lot.

The idea of NVC is to respond to threats of violence with less threat of violence, so that violent tension can fizzle out rather than going super-critical. That doesn't mean you let Bob threaten to punch people who express a liking for pizza, but it does mean that you recognize "Watch where you're going, jerk" as the first step of escalation, and recognize that if you do that -- or if you respond to a line like that with "Don't call me a jerk, asshole" -- you may get punched and you will have contributed (avoidably) to that outcome.

but to call non-careful speech violent (either implicitly, or biting the bullet and making it explicit as you do) seems to imply it's your fault for making Bob punch you.  Which is kind of true in a causal sense, but not in a "blame" sense.[1]  

Seems to, yes. But that "seems" is coming from preexisting ideas about "who to blame", and NVC's whole idea is that maybe we should just do less of that in the first place.

The question is "How much do we want to avoid speaking truth so as to avoid people jumping to wrong conclusions when they combine the new truth with other false beliefs of theirs?". Sometimes we're kinda stuck choosing which falsehood for people to believe, but a lot of times we can just speak the truth, and then when people jump to the wrong conclusions, speak more truth. 

Yes, there's something "violent" about a lot of incautious communication. No, that does not call for further aggression, physical or otherwise. Quite the opposite.

Calling it provoking—"non-provoking communication"—would be somewhat better, though I'm not entirely happy with it.

Provocation isn't a bad thing in general though, and doesn't necessarily contain threat of violence. Provocation can be done playfully, and cooperatively even when not playful, and is critically important whenever the truth happens to be uncomfortable to anyone involved. Heck, NVC can be quite provocative at times.

"Nonthreatening communication" would be a better fit, IMO. Or "Nonadversarial". "Collaborative communication" works too, but kinda hides what makes it different so I do like the "define by saying what it isn't" kind of name in this case.

"How To Communicate With Uncivilized People Who Are Dangerously Prone To Violence" would be ideal in this sense.

That is a great use case, heh. But that undersells the utility among people who aren't uncivilized or dangerously prone to violence, and obscures why it works with those who are.

But, for abovementioned reasons, I don't want the terminology to have any shred of implication that escalating from speech to violence is justifiable.

I guess I'm less worried about that. I'd prefer those misunderstandings have a chance to surface and be dealt with, because without that it's hard to actually convey the important insights behind NVC.
 

Comment by jimmy on How I apply (so-called) Non-Violent Communication · 2023-05-17T06:22:32.339Z · LW · GW

Do you mean that saying "my method of communication is non-violent communication" implies that everyone else is communicating violently? [...] To be clear, I don't mean to imply that, and I don't subscribe to the interpretation that people who don't use NVC are being violent in any sense. I also think that attempts to police other people's language by saying things like "you must always use NVC" are going against the spirit of the original. 

 

I'll bite that bullet. People who aren't communicating in the spirit of NVC are "communicating violently". Not in the sense of "Words are literal violence!" because "sticks and stones", but in the sense that "If you don't give me what I want I will use sticks and stones to break your bones" is "communicating violently".

NVC points at the important insight that much of what passes for "normal communication" is actually subtle and implicit threats, which can and do escalate to real physical harm and literal violence. "Why are you being so mean?" doesn't pass for NVC, and that's not unrelated to the fact that it can be used to recruit someone to do violence on your behalf against the person who you accuse of doing you wrong. It doesn't usually get that far, in the same way that parking tickets aren't usually enforced with guns drawn, but there's a reason that libertarians like to point out that all laws are ultimately enforced at gunpoint and the same thing applies here.

That doesn't mean that we should "must" at people who aren't communicating in the spirit of NVC, because as you point out, that would be violating the spirit of NVC. But I do think the term fits, and the way to get around the hubris of saying "my method of communication is nonviolent communication!" is to 1) point out how the term "violence" is actually legit and doesn't just mean "offensive", 2) don't run around claiming that you actually succeed at doing it more than you do, and 3) point out how "nonviolent" isn't even the goal to aspire to 100% of the time and definitely not synonymous with "good".

Comment by jimmy on How I apply (so-called) Non-Violent Communication · 2023-05-17T05:26:52.310Z · LW · GW

Good question, and good observation. My answer, in short, is that NVC is about credibly removing (or diminishing) threat of conflict.

If you step on my toes it very well might be an accident. If it's an accident, and I know it's an accident, there's no reason for me to attack you for it because as soon as you see that I don't like what you're doing you'll stop on your own. In that case, "Hey man, you're on my toes" isn't an attack, and there's no reason to treat it like it must be an attack just because I didn't like my toes getting stepped on. 

However, if you start adding additional pieces to the picture, then the story changes. If I'm "stating an observation" through clenched teeth and with clenched fists, it's starting to seem a lot more likely that I'm adding a layer of interpretation that is calling for conflict -- even if I don't verbalize the interpretation explicitly.

In the latter case, "nonviolent language" isn't gonna work because people are generally smart enough to see the incongruence and prefer to trust the body language over the words which are easier to fake. But it's also not easy to simultaneously hold onto that sense of righteous anger while saying the words that point out the facts which show the anger to not fit. 

So if you were to hold yourself to saying "I know you don't mean to hurt me and aren't doing it on purpose, but it is very physically painful when you step on my toes, and I worry that bearing so much concentrated weight might even damage them. Can you please gently step back?", and you know that "you don't mean to hurt me and aren't doing it on purpose" is true, then it's a lot harder to keep doing anger at that person, and even if you're a bit clenched it's going to come off more like "this person is overwhelmed and trying to keep it together because they recognize we're on the same side" than "this person is threatening me".

Comment by jimmy on From fear to excitement · 2023-05-17T03:59:32.907Z · LW · GW

No matter what harm this emotion may cause, it may also cause a lot of good, like getting up, being friendly to coworkers or being productive. Basically, every emotion is a motivation towards something, the outcome may be predictable, it will vary for the individual and it will affect peers or groups.


Fear isn't a motivation towards something, it's a motivation away from something. It's not that it's impossible to use fear productively, and Richard even touches on that. 

It's that constraining your response to fear to be only productive is fighting against entropy, in the same way that pushing rope is. Speaking of "fear of getting fired", a friend of mine was in that boat recently, and while her fear did keep her from doing some things which would have gotten her fired, it also motivated her to refuse to look at the reality of the situation she was in -- because that is an equally effective way of getting away from the experience of fear!

As a result, she wasn't able to update her perspectives in the ways that would have been needed in order to keep the job, and so she lost that job. All of the things you list as potential good things that can come from fear are things that can come more fluidly from excitement. People who are friendly because they are afraid of what will happen if they aren't friendly tend to come off more stilted and insecure than people who are just genuinely looking forward to seeing what they can create with you.

Comment by jimmy on Most people should probably feel safe most of the time · 2023-05-10T21:50:00.744Z · LW · GW

Adding onto this, an important difference between "anxiety" and "heightened attentiveness" is that anxiety has a lot to do with not knowing what to do. If you have a lot of experience with driving cars, losing traction, and life or death scenarios, then when it happens you know what to do and just focus on doing it. If you're full of anxiety, it's likely that you don't actually have any good responses ready if the tires do lose traction, and beyond not having a good response to enact, you can't even focus on performing the best response you do have, because your attention is also being tugged towards "I don't have a good way to respond and this is a problem!"

Comment by jimmy on Most people should probably feel safe most of the time · 2023-05-10T21:46:21.559Z · LW · GW

It's not that it's "necessarily good and something you should act on" just because that's what you feel, it's that it's not "necessarily bad and something you shouldn't feel" just because that's what you think. Maybe, and maybe, but you're always going to be fallible on both fronts so it makes sense to check.

And that is actually how you can make sure to "not feel" this kind of inappropriate feeling, by the way. The mental move of "I don't want to feel this. I shouldn't feel this" is the very mental move that leads people to be stuck with feelings which don't make sense, since it is an avoidance of bringing them into contact with reality.

If you find yourself stuck with an "irrational" fear, and go to a therapist saying "I shouldn't feel afraid of dogs", they're likely to suggest "exposure therapy" which is basically a nice way of saying "Lol at your idea that you shouldn't feel this, how about we do the exact opposite, make you feel it more, and refrain from trying not to?". In order to do exposure therapy, you have to set aside your preconceived ideas about whether the fear is appropriate and actually find out. When the dog visibly isn't threatening you, and you're actually looking at the fact that there's nothing scary, then you tend to start feeling less afraid. That's really all there is to it, and so if you can maintain a response to fear of "Oh wow, this is scary. I wonder if it's actually dangerous?" even as you feel fear, then you never develop a divergence between your feelings and what you feel is appropriate to feel, and therefore no problem that calls for a therapist or "shoulding" at yourself.

It's easier said than done, of course, but the point is that "I shouldn't feel this" doesn't actually work either instrumentally or epistemically.
 

Comment by jimmy on Moderation notes re: recent Said/Duncan threads · 2023-05-08T04:55:52.646Z · LW · GW
  • Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.

FWIW, that is a claim I'm fully willing and able to justify. It's hard to disclaim all the possible misinterpretations in a brief comment (e.g. "deeply" != "very"), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.
 

Comment by jimmy on Moderation notes re: recent Said/Duncan threads · 2023-05-08T04:52:30.385Z · LW · GW

There's a difference between "hey, I want to understand the underpinnings of this" and the thing I described, which is hostile to the point of "why are you even here, then?"

Yes, and that's why I described the attitude as "dysfunctionally dissonant" (emphasis in original). It's not a good way of challenging the instructors, and not the way I recommend behaving.

What I'm talking about is how a healthy gym environment is robust to this sort of dysfunctional dissonance, and how to productively relate to unskilled dissonance by practicing skillfully enough yourself that the system's combined dysfunction never becomes supercritical and instead decays towards productive cooperation.


it's way overconfident/projection-y to extrapolate "deeply dysfunctional" from what I said.

That's certainly one possibility. But isn't it also conceivable that I simply see underlying dynamics (and the lack thereof) which you don't see, and which justify the confidence level I display?

It certainly makes sense to track the hypothesis that I am overconfident here, but ironically it strikes me as overconfident to be asserting that I am being overconfident without first checking things like "Can I pass his ITT"/"Can I point to a flaw in his argument that makes him stutter if not change his mind"/etc.

To be clear, my view here is based on years of thinking about this kind of problem and practicing my proposed solutions with success, including in a literal martial arts gym for the last eight years. Perhaps I should have written more about these things on LW so my confidence doesn't appear to come out of nowhere, but I do believe I am able to justify what I'm saying very well and won't hesitate to do so if anyone wants further explanation or sees something which doesn't seem to fit. And hey, if it turns out I'm wrong about how well supported my perspective is, I promise not to be a poor sport about it.


jimmy above is exhibiting actually bad reasoning (à la representativeness)

In the absence of an object-level counterargument, this is textbook ad hominem. I won't argue that there isn't a place for that (or that it's impossible that my reasoning is flawed), but I think it's hard to argue that it isn't premature here. As a general rule, anyone who disagrees with anyone can come up with a million accusations of this sort, and it isn't uncommon for some of them to be right to an extent, but it's really hard to have a productive conversation if such accusations are used as a first resort rather than as a last resort -- especially when they aren't well substantiated.

I see that you've deactivated your account now so it might be too late, but I want to point out explicitly that I actively want you to stick around and feel comfortable contributing here. I'm pushing back against some of the things you're saying because I think that it's important to do so, but I do not harbor any ill will towards you nor do I think what you said was "ridiculous". I hope you come back.

Comment by jimmy on Moderation notes re: recent Said/Duncan threads · 2023-04-25T19:12:31.035Z · LW · GW

A thing that is quite important to me is that users feel comfortable ignoring Said if they don’t think he’s productive to engage with. (See below for more thoughts on this). One reason this is difficult is that it’s hard to establish common knowledge about it among authors. Another reason is that I think Said’s conversational patterns have the effect of making authors and other commenters feel obliged to engage with him (but, this is pretty hard to judge in a clear-cut way)

 

It seems like the natural solution here would be something that establishes this common knowledge. Something like Twitter's "community notes" could be attached to relevant comments, saying something like "There is no obligation to respond to this comment; please feel comfortable ignoring this user if you don't feel he will be productive to engage with. Discussion here."

Comment by jimmy on Moderation notes re: recent Said/Duncan threads · 2023-04-25T18:30:56.274Z · LW · GW

You're describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.

The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you're right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can't, and no one can, then he might have a point, and the gym gets to learn something new.

If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they're an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren't there in the first place. It's definitely more challenging to jam with dissonant characters like that (especially if they're dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it's important to realize that the problem isn't so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
 

Comment by jimmy on Plan for mediocre alignment of brain-like [model-based RL] AGI · 2023-04-25T18:26:00.540Z · LW · GW

I don’t want to make claims about what desires in this category are wise or unwise for a human; I make no pretense to wisdom :)

But it's necessary for getting good outcomes out of a superintelligence!


I’ve heard good things about occasionally using tobacco to help focus (like how I already use coffee), but I’m terrified to touch it because I’m concerned I’ll get addicted. Bad demon!

Makes sense. I think I have a somewhat better idea of how you see the demon thing now.

I disagree with bad demon here. I've used nicotine for that purpose and it didn't feel like much of a threat, but my experience with opioids did have enough of a tug that it scared me away from doing it a second time. After more time for the demon to work though, I don't find the idea appealing anymore and I'm pretty confident that I wouldn't be tempted even if I took some again. You just don't want to get stuck between the update of "Ooh, this stuff feels really good" and the update of "It's not though, lol. It's a lie, and chasing it leads to ruin. How tempting is it to ruin your life chasing a lie?". It's a "valley of bad rationality" problem, if you lack the foresight to avoid it.


Anyway, I feel like we’re getting off-track: I’m really much more interested in talking about AI alignment than about humans.

I don't think you can actually get away from it. For one, you can't design an AI to give you what you want if you don't know what you want -- and you don't know what you want unless you're aligned yourself. If you understand the process of human alignment, then you can conceivably create an AI which will help you along in the right direction. If you don't have that, then even if you manage to hit what you're aiming at, you're likely to be a somewhat more sophisticated version of a dope fiend aiming for more dope -- and get the resulting outcomes. Because of Goodhart's law, "using AI to get what I already know I want" falls apart once AI becomes sufficiently powerful.

For two, I don't think anyone has anywhere near a good enough idea about how alignment works in general that it makes sense to neglect the one example we have a lot of experience with and an easy ability to experiment with. It's one thing to not trap yourself in the ornithopter box, but wings are everywhere for a reason, and until you understand that reason, have a solid understanding of aerodynamics, and have better flying machines than birds, it is premature to neglect studying what's going on with bird wings. Even with a pretty solid understanding of aerodynamics, studying birds gives some neat solutions to things like adverse yaw and ideal lift distributions. You seem to be getting at this at the end of your comment.

For three, if we're talking about "brain like" AGI and training them in ways analogous to getting a kid to be a moon fan, it's important to understand what is actually happening when a kid becomes a fan of "the moon" and where that's likely to go wrong. The AIs we have now are remarkably human in their training processes and failures, so unless we take a massive departure from this, understanding how human alignment works is directly relevant.
 

Comment by jimmy on Plan for mediocre alignment of brain-like [model-based RL] AGI · 2023-04-09T06:38:28.484Z · LW · GW

I don’t think that’s tautological. [...]  (I wrote about this topic here.)

Those posts do help give some context to your perspective, thanks. I'm still not sure what you think this looks like on a concrete level though. Where do you see "desire to eat sweets" coming in? "Technological solutions are better because they preserve this consequentialist desire" or "something else"? How do you determine?

Most humans are generally OK with their desires changing, in my experience, at least within limits (e.g. nobody wants to be “indoctrinated” or “brainwashed”, and radical youth sometimes tell their friends to shoot them if they turn conservative when they get older, etc.).

IME, resistance to value change is about distrust of the process of change more than it's about the size of the change or the type of values being changed. People are often happy to have their values changed in ways they would have objected to if presented that way, once they see that the process of value change serves what they care about.


Why do I want to focus the conversation on “the AI’s current desires” instead of “what the AI will grow into” etc.? Because I’m worried about the AI coming up with & executing a plan to escape control and wipe out humanity [before it realizes that it doesn't want that]"

You definitely want to avoid something being simultaneously powerful enough to destroy what you value and not "currently valuing" it, even if it will later decide to value it after it's too late. I'm much less worried about this failure mode than the others though, for a few reasons.

1) I expect power and internal alignment to go together, because working in conflicting directions tends to cancel out, and you need all your little desires to add up in a coherent direction in order to go anywhere far. If inner alignment is facilitated, I expect most of the important stuff to happen after its initial desires have had a significant chance to cohere.

2) Even I am smart enough to not throw away things that I might want to have later, even if I don't want them now. Anything smart enough to destroy humanity is probably smarter than me, so "Would have eventually come to greatly value humanity, but destroyed it first" isn't an issue of "can't figure out that there might be something of value there to not destroy" so much as "doesn't view future values as valid today" -- and that points towards understanding and deliberately working on the process of "value updating" rather than away from it.

3) I expect that ANY attempt to load it with "good values" and lock them in will fail, such that if it manages to become smart and powerful without bringing these desires into coherence, it will necessarily be bad. If careful effort is put in to prevent desires from cohering, this increases the likelihood that 1 and 2 break down, and you can get something powerful enough to do damage while retaining values that might call for it.

4) I expect that any attempt to prevent value coherence will fail in the long run (either by the AI working around your attempts, or a less constrained AI outcompeting yours), leaving the process of coherence where we can't see it, haven't thought about it, and can't control it. I don't like where that one seems to go.

Where does your analysis differ?

Oh c'mon, he’s cute.  :)  I was trying to make a fun & memorable analogy, not cast judgment. I was probably vaguely thinking of “daemons” in the CS sense although I seem to have not spelled it that way.

Yeah yeah, I know I know -- I even foresaw the "daemon" bit. That's why I made sure to call it a "caricature" and stuff. I didn't (and don't) think it's an intentional attempt to sneak in judgement.

But it does seem like another hint, in that if this desire editing process struck you as something like "the process by which good is brought into the world", you probably would have come up with a different depiction, or at least commented on the ill-fitting connotations. And it seems to point in the same direction as the other hints, like the seemingly approving reference to how uploading our brains would allow us to keep chasing sweets, the omission of what's behind this process of changing desires from what you describe as "your model", suggesting an AI that doesn't do this, using the phrase "credit assignment is some dumb algorithm in the brain", etc.

On the spectrum from "the demon is my unconditional ally and I actively work to cooperate with him" to "This thing is fundamentally opposed to me achieving what I currently value, so I try to minimize what it can do", where do you stand, and how do you think about these things?

Comment by jimmy on Plan for mediocre alignment of brain-like [model-based RL] AGI · 2023-04-03T16:51:32.417Z · LW · GW
  • MY MODEL: Before the kid overeats sweets, they think eating lots of sweets is awesome. After overeating sweets, their brainstem changes their value / valence function, and now they think eating lots of sweets is undesirable.
  • YOUR MODEL (I think): Before the kid overeats sweets, they think eating lots of sweets is awesome—but they are wrong! They do not know themselves; they misunderstand their own preferences. And after overeating sweets and feeling sick, they self-correct this mistake.

(Do you agree?)


Eh, not really, no. I mean, it's a fair caricature of my perspective, but I'm not ready to sign off on it as an ITT pass because I don't think it's sufficiently accurate for the conversation at hand. For one, I think your term "ill-considered" is much better than "wrong". "Wrong" isn't really right. But more importantly, you portray the two models as if they're alternatives that are mutually exclusive, whereas I see that as requiring a conflation of the two different senses of the terms that are being used.

I also agree with what you describe as your model, and I see my model as starting there and building on top of it. You build on top of it too, but don't include it in your self description because in your model it doesn't seem to be central, whereas in mine it is. I think we agree on the base layer and differ on the stuff that wraps around it.

I'm gonna caricature your perspective now, so let me know if this is close and where I go wrong:

You see the statement of "I don't want my values to change because that means I'd optimize for something other than my [current] values" as a thing that tautologically applies to whatever your values are, including your desires for sweets, and leads you to see "Fulfilling my desires for sweets makes me feel icky" as something that calls for a technological solution rather than a change in values. It also means that any process changing our values can be meaningfully depicted as a red devil-horned demon. What the demon "wants" is immaterial. He's evil, our job is to minimize the effect he's able to have, keep our values for sweets, and if we can point an AGI at "human flourishing" we certainly don't want him coming in and fucking that up.


Is that close, or am I missing something important?

Comment by jimmy on Plan for mediocre alignment of brain-like [model-based RL] AGI · 2023-03-31T05:55:38.622Z · LW · GW

 It seems intuitively obvious to me that it is possible for a person to think that the actual moon is valuable even if they can’t see it, and vice-versa. Are you disagreeing with that?

 

No, I'm saying something different.

I'm saying that if you don't know what the moon is, you can't care about the moon because you don't have any way of representing the thing in order to care about it. If you think the moon is a piece of paper, then what you will call "caring about the moon" is actually just caring about that piece of paper. If you try to "care about people being happy", and you can't tell the difference between a genuine smile and a "hide the pain Harold" smile, then in practice all you can care about is a Goodharted upwards curvature of the lips. To the extent that this upwards curvature of the lips diverges from genuine happiness, you will demonstrate care towards the former over the latter.

In order to do a better job than that, you need to be able to perceive happiness better than that. And yes, you can look back and say "I was wrong to care instrumentally about crude approximations of a smile", but that will require perceiving the distinction there and you will still be limited by what you can see going forward.


Here, you seem to be thinking of “valuing things as a means to an end”, whereas I’m thinking of “valuing things” full stop. I think it’s possible for me to just think that the moon is cool, in and of itself, not as a means to an end. (Obviously we need to value something in and of itself, right? I.e., the means-end reasoning has to terminate somewhere.)

I think it's worth distinguishing between "terminal" in the sense of "not aware of anything higher that it serves"/"not tracking how well it serves anything higher" and "terminal" in the sense of "there is nothing higher being served which would change the desire once noticed and brought into awareness".

"Terminal" in the former sense definitely exists. Fore example, little kids will value eating sweets in a way that is clearly disjoint and not connected to any attempts to serve anything higher. But then when you allow them to eat all the sweets they want, and they feel sick afterwards, their tastes in food start to cohere towards "that which serves their body well" -- so it's clearly instrumental to having a healthy and well functioning body even if the kid isn't wise enough to recognize it yet.

When someone says "I value X terminally", they can pretty easily know it in the former sense, but to get to the latter sense they would have to conflate their failure to imagine something that would change their mind with an active knowledge that no such thing exists. Maybe you don't know what purpose your fascination with the moon serves so you're stuck relating to it as a terminal value, but that doesn't mean that there's no knowledge that could deflate or redirect your interest -- just that you don't know what it is.

It's also worth noting that it can go the other way too. For example, the way I care about my wife is pretty "terminal like", in that when I do it I'm not at all thinking "I'm doing this because it's good for me now, but I need to carefully track the accounting so that the moment it doesn't connect in a visible way I can bail". But I didn't marry her willy nilly. If, when I met her, she had shown me that my caring for her would not be reciprocated in a similar fashion, we wouldn't have gone down that road.


I brought up the super-cool person just as a way to install that value in the first place, and then that person leaves the story, you forget they exist. Or it can be a fictional character if you like. Or you can think of a different story for value-installation, maybe involving an extremely happy dream about the moon or whatever.

Well, the super-cool person is demonstrating admirable qualities and showing that they are succeeding in things you think you want in life. If you notice "All the cool people wear red!" you may start valuing red clothes in a cargo culting sort of way, but that doesn't make it a terminal value or indefinitely stable. All it takes is for your perspective to change and the meaning (and resulting valuation) changes. That's why it's possible to have scary experiences install phobias that can later be reverted by effective therapy.

 

I want to disentangle three failure modes that I think are different.

I don't think the distinctions you're drawing cleave reality at the joints here.

For example, if your imagined experience when deciding to buy a burrito is eating a yummy burrito, and what actually happens is that you eat a yummy burrito and enjoy it... then spend the next four hours in the bathroom erupting from both ends... and find yourself not enjoying the experience of eating a burrito from that sketchy burrito stand again after that... is that a "short vs long term" thing or a "your decisions don't lead to your preferences being satisfied" thing, or a "valuing the wrong thing" thing? It seems pretty clear that the decision to value eating that burrito was a mistake, that the problem wasn't noticed in the short term, and that ultimately your preferences weren't satisfied.

To me, the important part is that when you're deciding which option to buy, you're purchasing based on false advertising. The picture in your mind which you are using to determine appropriate motivation does not accurately convey the entire reality of going with that option. Maybe that's because you were neglecting to look far enough in time, or far enough in implications, or far enough from your current understanding of the world. Maybe you notice, or maybe you don't. If you wouldn't have wanted to make the decision when faced with an accurate depiction of all the consequences, then an accurate depiction of the consequences will reshape those desires and you won't want to stand by them.

I think the thing you're noticing with the synthol example is that telling him "You're not fooling anyone bro" is unlikely to dissolve the desire to use synthol the way "The store is closed; they close early on Sundays" tends to deflate peoples desire to drive to the store. But that doesn't actually mean that the desire to use synthol terminates at "to have weird bulgy arms" or that it's a mere coincidence that men always desire their artificial bulges where their glamour muscles are and that women always desire their artificial bulges where their breasts are.

There are a lot of ways for the "store is closed" thing to fail to dissolve the desire to go to the store too even if it's instrumental to obtaining stuff that the store sells. Maybe they don't believe you. Maybe they don't understand you; maybe their brain doesn't know how to represent concepts like "the store is closed". Maybe they want to break in and steal the stuff. Or yeah, maybe they just want to be able to credibly tell their wife they tried and it's not about actually getting the stuff. In all of those cases, the desire to drive to the store is in service of a larger goal, and the reason your words don't change anything is that they don't credibly change the story from the perspective of the person having this instrumental goal.

Whether we want to be allowed to pursue and fulfil our ultimately misguided desires is a more complicated question. For example, my kid gets to eat whatever she wants on Sundays, even though I often recognize her choices to be unwise before she does. I want to raise her with opportunities to cohere her desires and opportunities to practice the skill in doing so, not with practice trying to block coherence because she thinks she "knows" how they "should" cohere. But if she were to want to play in a busy street I'm going to stop her from fulfilling those desires. In both cases, it's because I confidently predict that when she grows up she'll look back and be glad that I let her pursue her foolish desires when I did, and glad I didn't when I didn't. It's also what I would want for myself, if I had some trustworthy being far wiser than I which could predict the consequences of letting me pursue various things.

Comment by jimmy on Plan for mediocre alignment of brain-like [model-based RL] AGI · 2023-03-13T17:57:47.716Z · LW · GW

Q: Wouldn’t the AGI self-modify to make itself falsely believe that there’s a lot of human flourishing? Or that human flourishing is just another term for hydrogen?

A: No, for the same reason that, if a supervillain is threatening to blow up the moon, and I think the moon is super-cool, I would not self-modify to make myself falsely believe that “the moon” is a white circle that I cut out of paper and taped to my ceiling. [...] I’m using my current value function to evaluate the appeal (valence) of thoughts. 

 

It's worth noting that humans fail at this all the time.

 

Q: Wait hang on a sec. [...] how do you know that those neural activations are really “human flourishing” and not “person saying the words ‘human flourishing’”, or “person saying the words ‘human flourishing’ in a YouTube video”, etc.?

Humans screw this up all the time too, and these two failure modes are related.

You can't value what you can't perceive, and when your only ability to perceive "the moon" is the image you see when you look up, then that is what you will protect, and that white circle of paper will do it for you.

For an unusually direct visual example, bodybuilding is supposedly about building a muscular body, but sometimes people will use synthol to create the false appearance of muscle in a way that is equivalent to taping a square piece of paper to the ceiling and calling it a "moon". The fact that it doesn't even look a little like real muscle hints that it's probably a legitimate failure to notice what they want to care about, rather than simply being happy to fool other people into thinking they're strong.

For a less direct but more pervasive example, people will value "peace and harmony" within their social groups, but due to myopia this often turns into short sighted avoidance of conflict and into behaviors that make conflict less solvable -- and produce less peace and harmony.

With enough experience, you might notice that protecting the piece of paper on the ceiling doesn't get that super cool person to approve of your behavior, and you might learn to value something more tied to the actual moon. Just as with more experience consuming excess sweets, you might learn that the way you feel after doesn't seem to go with getting what your body wanted, and you might find your tastes shifting in wiser directions.

But people aren't always that open to this change.

If I say "Your paper cutout isn't the moon, you fool", listening to me means you're going to have to protect a big rock a bazillion miles beyond your reach, and you're more likely to fail that than protecting the paper you put up. And guess what value function you're using to decide whether to change your values here? Yep, that one saying that the piece of paper counts. You're offering less chance of having "a moon", and relative to the current value system which sees a piece of paper as a valid moon, that's a bad deal. As a result, the shallowness and mis-aimedness of the value gets protected.

In practice, it happens all the time. Try explaining to someone that what they're calling "peace and harmony values" is really just cowardice and is actively impeding work towards peace and harmony, and see how easy it is, for example.

It's true that "A plan is a type of thought, and I’m using my current value function to evaluate the appeal (valence) of thoughts" helps protect well formed value systems from degenerating into wireheading, but it also works to prevent development into values which preempt wireheading, and we tend to be not so fully developed that fulfilling our current values excellently does not constitute wireheading of some form. And it'ss also the case that when stressed, people will sometimes cower away from their more developed goals ("Actually, the moon is a big rock out in space...") and cling to their shallower and easier to fulfill goals ("This paper is the moon. This paper is the moon.."). They'll want not to, but it'll happen all the same when there's enough pressure to.

Sorting out how to best facilitate this process of "wise value development" so as to dodge these failure modes strikes me as important.

Comment by jimmy on The Preference Fulfillment Hypothesis · 2023-03-09T07:49:22.825Z · LW · GW

I don't think people do, in general. Not as any sort of separate instinctual terminal value somehow patched into our utility function before we're born.

It can be learned, and well socialized people tend to learn it to some extent or another, but young kids are sure selfish and short sighted. And people placed in situations where it's not obvious to them why cooperation is in their best interest don't tend to act like they intrinsically value being cooperative. That's not to say I think people are consciously tallying everything and waiting for a moment to ditch the cooperative BS to go do what they really want to do. I mean, that's obviously a thing too, but there's more than that.

People can learn to fake caring, but people can also learn to genuinely care about other people -- in the kind of way where they will do good by the people they care about even when given the power not to. It's not that their utility function is made up of a "selfish" set consisting of god knows what, and then a term for "cooperation" is added. It's that we start out with short sighted impulses like "stay warm, get fed", and along the way we build a more coherent structure of desires by making trades of the sort "I value patience over one marshmallow now, and receive two marshmallows in the future" and "You care a bit about my things, and I'll care a bit about yours". We start out as ineffective shits that can only cry when our immediate impulses aren't met, and we can end up as people who will voluntarily go cold and hungry, even without temptation or suffering, in order to provide for the wellbeing of our friends and family -- not because we reason that this is the best way to stay warm in each moment, but because we have learned to not care so much whether we're a little cold now and then relative to the long term wellbeing of our friends and family.

What I'm saying is that at the point where a person gains sufficient power over reality that they no longer have to deceive others in order to gain support and avoid punishment, the development of their desires will stop and their behaviors will be best predicted by the trades they actually made. If they managed to fake their whole way there from childhood, you will get childish behavior and childish goals. To the extent that they've only managed to succeed and acquire power by changing what they care about to be prosocial, power will not corrupt.

Comment by jimmy on The Preference Fulfillment Hypothesis · 2023-03-08T06:09:12.847Z · LW · GW

I don't think it's necessary to posit any separate motivational drives. Once you're in a position where cooperation isn't necessary for getting what you want, then there's no incentive to cooperate or shape yourself to desire cooperative things.


It's rare-to-nonexistent in a society as large and interconnected as ours for anyone to be truly powerful enough that there's no incentive to cooperate, but we can look at what people do when they don't perceive a benefit to taking on (in part) other people's values as their own. Sure, sometimes we see embezzlement and sleeping with subordinates which look like they'd correlate with "maximizing reproductive fitness in EEA", but we also see a lot of bosses who are just insufferable in ways that make them less effective at their job to their detriment. We see power tripping cops and security guards, people being dicks to waiters, and without the "power over others" but retaining "no obvious socializing forces" you get road rage and twitter behavior.

The explanation that looks to me to fit better is just that people stop becoming socialized as soon as there's no longer a strong perceived force rewarding them for doing so and punishing them for failing to. When people lose the incentive to refactor their impulses, they just act on them. Sometimes that means they'll offer a role in a movie for sexual favors, but sometimes that means completely ignoring the people you're supposed to be serving at the DMV or taking out your bitterness on the waiter who can't do shit about it.

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-05T19:18:55.882Z · LW · GW

I guess I meant "as it applies here, specifically", given that Zack was already criticizing himself for that specific thing, and arguing for rather than against politeness norms in the specific place that I commented. I'm aware that you guys haven't been getting along too well and wouldn't expect agreement more generally, though I hadn't been following closely.

It looks like you put some work and emotional energy into this comment so I don't want to just not respond, but it also seems like this whole thing is upsetting enough that you don't really want to be having these discussions. I'm going to err on the side of not getting into any object level response that you might not want, but if you want to know how to get along with Zack and not find it infuriating, I think I understand his perspective (having found myself in similar shoes) well enough to explain how you can do it.

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-05T19:17:11.744Z · LW · GW

Yeah, I didn't mean that I thought you two agreed in general, just on the specific thing he was commenting on. I didn't mean to insert myself into this feud and I was kinda asking how I got here, but now that I'm here we might as well have fun with it. I think I have a pretty good feel for where you're coming from, and actually agree with a lot of it. However, agreement isn't where the fun is so I'm gonna push back where I see you as screwing up and you can let me know if it doesn't fit.

These two lines stand out to me as carrying all the weight:

I strongly disagree that pro-actively modeling one's interlocutors should be a prerequisite for being able to have a discussion.

I'm extremely wary that a culture that heavily penalizes not-sufficiently-modeling-one's-interlocutor, interferes with the process of subjecting each other's work to scrutiny.

These two lines seem to go hand in hand in your mind, but my initial response to the two is very different.

To the latter, I simply agree that there's a failure mode there and don't fault you for being extremely wary of it. To the former though.... "I disagree that this thing should be necessary" is kinda a "Tough?". Either it's necessary or it isn't, and if you're focusing on what "should" be you're neglecting what is.

I don't think I have to make the case that things aren't going well as is. And I'm not going to try to convince you that you should drop the "should" and attend to the "is" so that things run more smoothly -- that one is up to you to decide, and as much as "should" intentionally looks away from "is" and is in a sense fundamentally irrational in that way, it's sometimes computationally necessary or prudent given constraints.

But I will point out that this "should" is a sure sign that you're looking away from truth, and that it fits Duncan's accusations of what you're doing to a T. "I shouldn't have to do this in order to be able to have a discussion" sounds reasonable enough if you feel able to back up the idea that your norms are better, and it has a strong tendency to lead towards not doing the thing you "shouldn't have to" do. But when you look back at reality, that combination is "I actually do have to do this in order to have a (productive) discussion, and I'm going to not do it, and I'm going to engage anyway". When you're essentially telling someone "Yeah, I know what I'm doing is going to piss you off, and not only am I going to do it anyway, I am going to show that pissing you off doesn't even weigh into my decisions, because your feelings are wrong", that's pretty sure to piss someone off.

It's clear that you're willing to weigh those considerations as a favor to Duncan, the way you recount asking Michael Vassar for such a favor, and that in your mind if Duncan wants you to accommodate his fragility, he should admit that this is what he's asking for and that it's a favor not an obligation -- you know, play by your rules.

And it's clear that by just accommodating everyone in this way without having the costs acknowledged (i.e. playing by his rules), you'd be giving up something you're unwilling to give up. I don't fault you there.

I agree with your framing that this is actually a conflict. And there are inherent reasons why that isn't trivially avoidable, but that doesn't mean that there isn't a path towards genuine cooperation -- just that you can't declare same sidedness by fiat.

Elsewhere in the comments you gave an example of "stealing bread" as a conflict that causes "disagreements" and lying. The solution here isn't to "cooperatively" pursue conflicting goals, it's to step back and look at how to align goals. Specifically, notice that everyone is better off if there's less thieving, and cooperate on not-thieving and punishing theft. And if you've already screwed up, cooperate towards norms that make confession and rehabilitation more appealing than lying but less appealing than not-thieving in the first place.

I don't think our problems are that big here. There are conflicts of values, sure, but I don't think the attempts to push ones values over others is generally so deliberately antisocial. In this case, for example, I think you and Duncan both more or less genuinely believe that it is the other party who is doing the antisocial acts. And so rather than "One person is knowingly trying to get away with being antisocial, so of course they're not going to cooperate", I think it's better modeled as an actual disagreement that isn't able to be trivially resolved because people are resorting to trying to use conflict rather than cooperation to advance their (perceived as righteous) goals, and then missing the fact that they're doing this because they're so open to cooperating (within the norms which are objectively correct, according to themselves) and which the other person irrationally and antisocially isn't (by rules they don't agree with)!

I don't agree with the way that he used it, but Duncan is spot on calling your behavior "trauma response". I don't mean it as a big-T "Trauma" like "abused as a child", but trauma in the "1 grain is a 'heap'" sense is at the core of this kind of conflict and many, many other things -- and it is more or less necessary for trauma response to exist on both sides for these things to not fizzle out. The analogy I like to give is that psychological trauma is like plutonium and hostile acts are like neutrons.

As a toy example to illustrate the point, imagine someone steps on your toes; how do you respond? If it's a barefoot little kid, you might say "Hey kid, you're standing on my toes" and they might say "Didn't mean to, sorry!" and step off. No trauma, no problem. If it's a 300lb dude with cleats, you might shove him as hard as you can, because the damage incurred from letting him stand on your toes until you can get his attention is less acceptable. And if he's sensitive enough, he might get pissed at you for shoving him and deck you. If it becomes a verbal argument, he might say "your toes shouldn't have been there", and now it's an explicit conflict about where you get to put your toes and whether he has a right to step on them anyway if they are where you put them.

In order to not allow things to degenerate into conflict as the less-than-perfectly-secure cleat wearing giant steps on your toes, you have to be able to withstand that neutron blast without retaliating with your own so much that it turns into a fight instead of a "I'm sorry, I didn't realize your toes were there. I'll step off them for now because I care about your toes, but we need to have a conversation about where your feet are okay to be". 

This means:

 1) orienting to the truth that your toes are going to take damage whether you like it or not, and that "should" can't make this untrue or unimportant.

2) maintaining connection with the larger perspective that tracks what is likely to cause conflict, what isn't, and how to cause the minimal conflict and maximum cooperation possible so that you best succeed at your goals with least sacrifice of your formerly-sacred-and-still-instrumentally-important values.

In some cases, the most truth-oriented and most effective response is going to be politely tapping the big guy on the shoulder while your feet bleed, and having a conversation after the fact about whether he needs to be more careful where he's stepping -- because acting like shoving this guy makes sense is willful irrationality.

In other cases he's smaller and more shove-able and it doesn't make sense to accept the damage, but instead of coming off like "I'm totally happy to apologize for anything I actually did wrong. I'm sorry I called you a jerk while shoving you; that was unnecessary and inappropriate [but I will conspicuously not even address the fact that you didn't like being shoved or that you spilled your drink, because #notmyproblem. I'll explain why I'm right to not give a fuck if you care to ask]", you'll at least be more able to see the value in saying things like "I'm sorry I had to shove you. I know you don't like being shoved, and I don't like doing it. You even spilled your drink, and that sucks. I wish I saw another way to protect our communities ability to receive criticism without shoving you".

This shouldn't need to be said but probably does (for others, probably not for you), so I'll say it. This very much is not me taking sides on the whole thing. It's not a "Zack is in the wrong for not doing this" or a "I endorse Duncan's norms relatively more" -- nor is it the opposite. It's just a "I see Zack as wanting me to argue that he's screwing up in a way that might end up giving him actionable alternatives that might get him more of what he wants, so I will".

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-01T18:41:13.772Z · LW · GW

I'm confused. It seems to me that you, Zack, and I all have a similar takes on the example you bring up, but the fact that you say this here suggests that you don't see us all as in clear agreement?

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-01T18:29:01.467Z · LW · GW

Yes, but it's worth pointing out what you can actually expect to get from it, and how easily. Most of what I'm talking about is from personal interactions, and the stuff that's online isn't like "Oh, the science is unanimous, unarguable and unambiguous" -- because we're talking about the equivalent of "physics motors" not "engineering motors". Even if our aerospace lab dyno results were publicly available you'd be right not to trust them at face value. If you have a physics degree then saying "Here's the reasoning, here are the computer simulations and their assumptions, and here's what our tests have shown so far" is easy. If you can't distinguish valid physics from "free energy" kookiness, then even though it's demonstrable and has been demonstrated to those with a good understanding of motor testing validity who have been following this stuff, it's not necessarily trivial to set up a sufficiently legible demonstration for someone who hasn't. It's real, we can get into how I know, but it might not be as easy as you'd like.

The thing that proved to me beyond a shadow of a doubt that there exist bright feedback oriented minds that have developed demonstrable abilities involved talking to one over and over and witnessing the demonstrations first hand as well as the feedback cycles. This guy used to take paying clients for some specific issue they wanted resolved (e.g. "fear of heights"), set concrete testable goals (e.g. "If I climb this specific wall, I will consider our work to have been successful"), and then track his success rate over time and as he changed his methods. He used to rack his brain about what could be causing the behavior he'd see in his failures, come up with an insight that helps to explain, play with it in "role play" until he could anticipate what the likely reactions would be and how to deal with them, and then go test it out with actual clients. And then iterate.

On the "natural discourse, not obviously connected to deliberate cultivation of skill" side, the overarching trajectory of our interactions is itself pretty exceptional. I started out kinda talking shit and dismissing his ideas in a way that would have pissed off pretty much anyone, and he was able to turn that around and end up becoming someone I respect more than just about anyone. On the "clearly the result of iterated feedback, but diverging from natural discourse" side there's quite a bit, but perhaps the best example is when I tried out his simple protocol for dealing with internal conflicts on physical pain, and it completely changed how I relate to pain to this day. I couldn't imagine how it could possibly work "because the pain would still be there" so I just did it to see what would happen, and it took about two minutes to go from "I can't focus at all because this shit hurts" to "It literally does not bother me at all, despite feeling the exact same". Having that shift of experience, and not even noticing the change as it happened.... was weird.

From there, it was mostly just recognizing the patterns, knowing where to look, and knowing what isn't actually an extraordinary claim.

This guy does have some stuff online including a description of that protocol and some transcripts, but again, my first reaction to his writings was to be openly dismissive of him so I'm not sure how much it'll help. And the transcripts are from quite early in his process of figuring things out so it's a better example of watching the mind work than getting to look at well supported and broadly applicable conclusions. Anyway, the first of his blog posts explaining that protocol is here, and other stuff can be found on the same site.
 

Another example that stands out to me as exceptionally clear, concise, and concrete (but pretty far from "natural discourse", towards "mind hack fuckery") is this demonstration by Steve Andreas of helping a woman get rid of her phobia. In particular, look at the woman's responses and Steve's reactions to those responses at 0:39, 5:47, 6:12, 6:22, 6:26, and 7:44. The 25 year follow up is neat too.
 

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-01T01:22:43.738Z · LW · GW

I'll go on record as a counterexample here; I very much want politeness norms to be enforced here, and in my personal life I will pay great costs in order to preserve or create my freedom to be blunt. The requirement for me to be cautious of how I say things here is such a significant cost that I post here far less than I otherwise would. The cost is seriously non-insignificant.

The reason I don't bitch about it is that I recognize that it's necessary. Changing norms to allow people to be relatively more inconsiderate wouldn't actually make things better. It's not just that "pandering to idiots" calls for a euphemism, it's that it probably calls for a mindset that is not so dismissive of people if they're going to be in, or close enough to, your audience to be offended. Like actually taking them into consideration and figuring out how to bridge that gap. It's costly. It's also necessary, and often pays off.

I would like to be able to say "Zack, you stupid twat" without having to worry about getting attacked for doing so, but until I've proven to you that I respect you enough that it's to be taken as an affectionate insult between friends.... phrasing things that way wouldn't actually accomplish the goals I'd have for "less polite" speech. If I can earn that recognition then there's probably some flexibility in the norms, and if I can't earn that recognition or haven't earned that recognition, then that's kinda on me.

There does have to be some level at which we stop bending over backwards to spare feelings and just say what's true [to the best of our ability to tell] dammit, but it actually has to be calibrated to what pushes away only those we'd be happy to create friction with and lose. It's one of those things where if you don't actively work to broaden your scope of who you can get along with without poking at, you end up poking too indiscriminately, so I'm happy to see the politeness norms about where they are now.

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-03-01T00:58:09.352Z · LW · GW

I hear what you're saying.

What I'm saying is that as someone whose day job is in large part about designing bleeding edge aerospace motors, I find that the distinction you're making falls apart pretty quickly in practice when I try to actually design and test a "physics motor". Even things as supposedly straightforward as "measuring torque" haven't been as straightforward as you'd expect. A few years ago we took one of our motors to a major aerospace company to test on their dyno, and they measured 105% efficiency. The problem was in their torque measurements. We had to get clever in order to come up with better measurements.
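To make the arithmetic concrete, here's a minimal sketch of why a biased torque reading produces a physically impossible result (the numbers are made up for illustration; they're not from the actual dyno run). Shaft power out is torque times angular velocity, and efficiency is shaft power out over electrical power in, so a torque sensor reading 10% high turns a real ~95% efficient motor into a ~104% one on paper:

```python
import math

# Hypothetical measurements -- illustrative numbers only.
electrical_power_in = 10_000.0  # watts, measured at the motor leads
speed_rpm = 6_000.0             # shaft speed from the dyno encoder
true_torque = 15.1              # N*m actually delivered at the shaft
torque_sensor_bias = 1.10       # sensor reads 10% high (the hidden error)

omega = speed_rpm * 2 * math.pi / 60  # shaft speed in rad/s

# Efficiency = mechanical power out / electrical power in
true_efficiency = true_torque * omega / electrical_power_in
measured_efficiency = true_torque * torque_sensor_bias * omega / electrical_power_in

print(f"true efficiency:     {true_efficiency:.1%}")      # ~94.9%
print(f"measured efficiency: {measured_efficiency:.1%}")  # ~104.4% -- impossible
```

A reading over 100% at least announces itself as broken; the nastier case is a smaller bias that leaves the number plausible but wrong.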

Coincidentally, I have also put a ton of work into figuring out how to engineer discourse, so I also have experience in figuring out what needs to be measured, how it can be measured, and how you can know how far to trust your measurements to validate your theories. Without getting too far into it, you want to start out by calibrating against relatively concrete things like "Can I get this person, who has been saying they want to climb up this wall but are too afraid, to actually climb up the rock wall -- yes or no?". If you can do this reliably where others fail, you know you're doing something that's more effective than the baseline (even though that alone doesn't validate your specific explanation uniquely). It'd take a book to explain how to build from there, but at the end of the day, if you can do concrete things that others cannot, and you can teach it so that the people you teach can demonstrate the same things, then you're probably doing something with some validity to it. Probably.

I'm not saying that there's "no difference" between the process of optimizing discourse and the process of optimizing motors, but it is not nearly as black and white as you think. It's possible to lead yourself astray with confirmation bias in "discourse" related things, but you should see some of the shit engineers can convince themselves of without a shred of valid evidence. The cognitive skills, the nebulosity of the metrics, and the ease of coming up with trustworthy feedback are all very similar in my experience. More like "a darkish shade of gray" vs "a somewhat darker shade of gray".

Part of the confusion probably comes from the fact that what we see these days aren't "physics motors"; they're "engineering motors". An engineering motor is what you get when someone who understands physics designs a motor and then engineers populate the world with surface-level variations of that blueprint. By and large, my experience in both academic and professional engineering is that engineers struggle to understand and apply first principles, and to optimize anything outside the contexts covered in their textbooks. It's true that within the confines of the textbook things do get more "cut and dry", but it's an illusion that goes away when you look past industry practice to physics itself.

It's true that our "discourse engineering" department is in a sorry state and that the industry guidelines are not to be trusted, but it's not that we have literally nothing, and our relative lack is not because the subject is "too soft" to get a grip on. Motor design is hard to get a grip on too, when you're treading even slightly new ground. The problem is that the principles-based minds go into physics and sometimes engineering, but rarely psychology. In the few instances where I've seen bright minds approach "discourse" with an eye to verifiable feedback, they've found things to measure, been able to falsify their own predictions, and ended up (mostly independently) coming to similar conclusions -- with demonstrably increased discourse abilities to show for it.

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-28T20:54:17.344Z · LW · GW

As someone working on designing better electric motors, I can tell you that "What exactly is this metric I'm trying to optimize for?" is a huge part of the job. I can get 30% more torque by increasing magnet strength, but it increases copper loss by 50%. Is that an improvement? I can drastically reduce vibration by skewing the stator, but it will cost me a couple percent of torque. Is that better or worse? There are a ton of things to trade between, and even if your end application is fairly well specified, it's generally not specified well enough to remove all significant ambiguity about which choices are better.

It's true that there are some motor designs that are just better at everything (or everything one might "reasonably" care about), but that's true for discourse as well. For example, if you are literally just shrieking at each other, whatever you're trying to accomplish you can almost certainly accomplish it better by using words -- even if you're still going to scream those words.

The general rule is that if you suck relative to any nebulosity about where on the Pareto frontier you want to be, then there are "objective" gains to be made. In motors, simultaneous improvements in efficiency and power density will go far toward creating a "better" motor which will be widely recognized as such. In discourse, the ability to create shared understanding and cooperation will go far toward creating "better" discourse which will be widely regarded as such.
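
To make the "objective gains vs. genuine trade-offs" distinction concrete, here's a toy sketch. The designs, metrics, and numbers are all invented; note that whether the skewed-stator design looks dominated depends entirely on whether you bother scoring vibration, which is exactly the kind of nebulosity in question:

```python
# Toy illustration of "objective gains" vs. genuine Pareto trade-offs.
# The designs, metrics, and numbers are all invented.

def dominates(a, b):
    """True if design `a` is at least as good as `b` on every metric and
    strictly better on at least one (higher is better here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Each design scored on (torque, efficiency). Vibration is deliberately
# left unscored -- whether that's okay depends on your application.
designs = {
    "baseline":         (100, 0.90),
    "stronger_magnets": (130, 0.88),  # more torque, slightly less efficient
    "skewed_stator":    (97,  0.90),  # quieter (unscored), a bit less torque
    "sloppy_build":     (90,  0.85),  # worse on both scored metrics
}

for name, score in designs.items():
    beaten_by = [other for other, s in designs.items() if dominates(s, score)]
    if beaten_by:
        print(f"{name}: dominated by {beaten_by} -- objective gains available")
    else:
        print(f"{name}: on the Pareto frontier -- 'better' depends on your use case")
```

Under this (made-up) scoring, the sloppy build is worse on every axis -- an "objective" loss -- while baseline vs. stronger magnets is a real trade-off that only your application can adjudicate.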

Optimal motors and discourse will look different in different contexts, getting it exactly right for your use case will always be nebulous, and there will always be weird edge cases and people deliberately optimizing for the wrong thing. But it's really not different in principle.

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-28T18:32:20.145Z · LW · GW

Agreed. You want to "try not to err" in expected value, not in "inches from the bullseye". Sometimes this means deliberately offsetting the center of your distribution from the bullseye.

I didn't see it as the primary point of contention so I didn't mention it, but you're right, it's probably worth pointing out explicitly.

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-27T07:08:10.288Z · LW · GW

Whether it's worth working to resolve any given disagreement over appropriate levels of respect is going to depend on the context, but certainly below a certain threshold, object-level discourse becomes predictably futile. And certainly high levels of respect are *really nice*, and allow for much more efficient communication, because people are actually taking each other seriously and engaging with each other's perspective.

There are definitely important caveats, but I generally agree with the idea that mutual respect, and the ability to sort out disagreements about the appropriate level of respect, are worth deliberately cultivating. Certainly if I am in a disagreement that I'd like to actually resolve and I'm not being taken as seriously as I think I ought to be, I'm going to seek to understand why, and see if I can't pass their "ideological test" on the matter.

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-27T06:50:09.897Z · LW · GW

I agree that "Err on the side of ____" is technically worse than "Try to not err", but I'd argue that it's just a somewhat sloppy and non-literal way of conveying a valid point.

The way I'd say it, if being more careful, is: "Insufficient and excess stick are both problems; however, there is a natural tendency to stick too much. Additionally, excess stick quickly begets excess stick, and if you allow things to go supercritical you can quickly destroy the whole thing. So before acting on an impulse to stick, make really sure that you aren't just succumbing to this bias."

Or in other words "Your sights are likely out of alignment, so aim higher than seems appropriate in order to not aim too low in expectation".
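
That last framing can be made numerically concrete. Here's a minimal sketch (with invented numbers) of why, when your sights are biased low, the expected-error-minimizing aim point is offset from the bullseye:

```python
import random

# Toy illustration of "aim higher than seems appropriate": if your shots
# land systematically low, the aim point that minimizes expected squared
# miss is offset from the bullseye. BIAS and NOISE are invented numbers.

random.seed(0)
BIAS = -2.0   # shots land 2 units below where you aim
NOISE = 1.0   # random scatter (standard deviation)

def expected_sq_miss(aim, trials=100_000):
    # Monte Carlo estimate of E[(shot - bullseye)^2] with the bullseye at 0.
    return sum((aim + BIAS + random.gauss(0, NOISE)) ** 2 for _ in range(trials)) / trials

for aim in [0.0, 1.0, 2.0, 3.0]:
    print(f"aim at {aim:+.1f}: expected squared miss = {expected_sq_miss(aim):.2f}")

# Aiming at +2.0 (exactly offsetting the known bias) wins, even though it
# "feels" like aiming too high.
```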

Comment by jimmy on "Rationalist Discourse" Is Like "Physicist Motors" · 2023-02-27T06:32:45.290Z · LW · GW

Introducing "arguments" and "bad faith" can complicate and confuse things, and neither is necessary.

As a simple model, say we're predicting whether the next ball drawn from an urn is black, and we've each seen our own set of draws. When I learn that the other person's initial prediction is a higher probability than mine, I can infer that they've seen a higher ratio of black than I have, so in order to take that into account I should increase my own probability of black. But how much? Maybe I don't know how many draws they've witnessed.

On the next iteration, maybe they say "Oh shoot, you said 30%? In that case I'm going to drop my guess from 95% to 35%". In that case, they're telling you that they expect you've seen many more draws than they have. Alternatively, they could say "I guess I'll update from 95% to 94%", telling you the opposite. If you knew in advance which side of your new estimate they were likely to end up on, then you could have taken that into account last time, updating further or less far accordingly, until you can no longer predict what you will learn next time.

If you *know* that they're going to stick to 95% and not update based on your guess, then you know they don't view your beliefs as saying much. If *that* doesn't change your mind and make you think "Wow, they must really know the answer then!" and update to 95%, then you don't view their beliefs as saying much either. When you can predict that beliefs won't update towards convergence, you're predicting a mutual lack of respect and a mutual lack of effort to figure out whose lack of respect is misplaced.
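
Here's a minimal simulation of the urn setup, just to make the mechanics concrete. The Beta(1,1) prior and the specific sample sizes are my own invented choices for illustration:

```python
import random

# Toy simulation of the urn example: two agents privately observe draws
# from the same urn and form posteriors over the fraction of black balls.

def posterior_mean(black, total):
    # With a uniform Beta(1,1) prior, the posterior mean probability that
    # the next draw is black is (black + 1) / (total + 2).
    return (black + 1) / (total + 2)

random.seed(0)
p_true = 0.7  # true fraction of black balls in the urn

# Agent A privately sees 5 draws; agent B privately sees 50.
draws_a = [random.random() < p_true for _ in range(5)]
draws_b = [random.random() < p_true for _ in range(50)]

a_black, a_total = sum(draws_a), len(draws_a)
b_black, b_total = sum(draws_b), len(draws_b)

print("A's initial estimate:", round(posterior_mean(a_black, a_total), 3))
print("B's initial estimate:", round(posterior_mean(b_black, b_total), 3))

# If both agents also announce their sample sizes, updating on each
# other's belief is just pooling counts, and they agree in one step:
pooled = posterior_mean(a_black + b_black, a_total + b_total)
print("Pooled estimate:", round(pooled, 3))

# The case in the comment is when sample sizes are *not* announced: then
# how far B moves toward A's number reveals how much evidence B thinks A
# has -- which is itself evidence for A to update on, and so on, until
# neither can predict the direction of the other's next update.
```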

Comment by jimmy on Full Transcript: Eliezer Yudkowsky on the Bankless podcast · 2023-02-24T19:21:42.902Z · LW · GW

> If natural selection had been a foresightful, intelligent kind of engineer that was able to engineer things successfully, it would have built us to be revolted by the thought of condoms

This bit got me to laugh out loud. Who's ever heard a man complain about having to use a condom?

On the one hand, sperm banks aren't very popular, and they "should" be, according to the "humans are fitness maximizers" model. People do eat more ice cream than is good for them, and "Shallowly following drives and not getting to the original goal that put them there" is definitely a thing that happens a lot.

On the other hand, this shallow model misses a lot. Prostitution may be more popular than sperm donation, but given how powerful our drive for sex is, its popularity still doesn't add up. Condoms do reduce physical sensation somewhat, but not enough to explain the visceral revulsion that so many men (and some women) do have regarding their use -- it's just that sometimes the desire for "sex" is stronger. In order to make these things match, you have to model sex with condoms, and especially sex with prostitutes, as not really counting as "sex" in the way that is desired. And then you get the interesting question of "Then what does count, and why?".

In a fairly literal sense, for every example you can show me of a person chasing sex in a way that doesn't fit with "fitness maximizing", I can show you an example of someone chasing sex in a way that doesn't fit with "shallow impulse chasing". Sometimes it's fairly blatant, like "Oh, they 'thought' she was infertile so they 'weren't being very careful', interesting" (I know of 3 kids conceived in this way). Other times it's more subtle: she is on hormonal birth control, is insisting on condom use as well, and is adamant about not wanting a kid right now -- and she genuinely believes this, and is right in that her CEV points that way... and yet an actual exploration of her desires will find that, when faced with an artificial "would you rather" separating what she thought she wanted from the things that lead toward pregnancy, 100% of the desire goes in the "surprising" direction.

It's not that we're aligned towards "shallow things" that are "the wrong thing" from evolution's perspective; it's that we're just not that aligned, period. We're incoherent. And it's not that we can't learn to align to whatever our coherent extrapolated volition turns out to be; it's that we haven't -- not completely. It's a lot of work to figure out wtf we actually want, so building global coherence is slow. I recognize that I haven't done the necessary work to justify it here, and that the claim is quite controversial, but when you actually give people the experiences they need in order to see that their shallow desires aren't meeting their deeper goals, preferences change. People stop eating so much ice cream, or having any particular interest in sex with condoms, etc. It's about figuring out how to do that systematically, and doing enough of it.

The use of human failures of alignment as an analogy for AI obviously has its limits, even if it's not obvious exactly where they are. However, so long as we're exploring the analogy, things are much more optimistic than "grab randomly from mind design space, and hope it's Friendly". In humans, alignment failure doesn't mean "Oh no, the process of loading terminal goals went wrong!"; it means that the process of building towards terminal goals got stopped somehow. And the earlier it happens, the more of an incoherent mess you have, and the less able it is to do anything particularly harmful. In order to make a serial killer you have to get quite a few things right, and in order to make a Hitler you have to get even more right -- but not completely right, or else you wouldn't have these murderous desires (the incoherences are actually visible when you know how to spot them). Screw up more and you end up a hobo preaching to be the DNA your vegetable is, or if you screw up a little less, maybe a petty thief feeding a drug habit.

This is completely separate from the problem of "Wtf happens when you raise an intelligence -- human or otherwise -- to be so powerful that indifference to the wellbeing of humans cannot get socialized into prosocial desires, and that humans cannot harm it enough to form malevolent desires?", but it's still worth noting and understanding.

Comment by jimmy on AGI in sight: our look at the game board · 2023-02-22T07:43:16.538Z · LW · GW

People are often unaware of what they're repeatedly running into. Problem formulation can go a long way towards finding a solution.