Posts

Meetup : Garden Grove meetup 2012-05-15T02:17:14.042Z · score: 1 (2 votes)
26 March 2011 Southern California Meetup 2011-03-20T18:29:16.231Z · score: 4 (5 votes)
October 2010 Southern California Meetup 2010-10-18T21:28:17.651Z · score: 6 (7 votes)
Localized theories and conditional complexity 2009-10-19T07:29:34.468Z · score: 7 (10 votes)
How to use "philosophical majoritarianism" 2009-05-05T06:49:45.419Z · score: 8 (25 votes)
How to come up with verbal probabilities 2009-04-29T08:35:01.709Z · score: 24 (29 votes)
Metauncertainty 2009-04-10T23:41:52.946Z · score: 19 (23 votes)

Comments

Comment by jimmy on Noise on the Channel · 2020-07-02T18:14:24.673Z · score: 10 (2 votes) · LW · GW

I think it's worth making a distinction between "noise" and "low bandwidth channel". Your first examples of "a literal noisy room" or "people getting distracted by shiny objects passing by" fit the idea of "noise" well. Your last two examples of "inferential distance" and "land mines" don't, IMO.

"Noise" is when the useful information is getting crowded out by random information in the channel, but land mines aren't random. If you tell someone their idea is stupid and then you can't continue telling them why because they're flipping out at you, that's not a random occurrence. Even if such things aren't trivially predictable in more subtle cases, it's still a predictable possibility and you can generally feel out when such things are safe to say or when you must tread a bit more carefully.

The "trying to squeeze my ideas through a straw" metaphor seems much more fitting than "struggling to pick the signal out of the noise floor" metaphor, and I would focus instead on deliberately broadening the straw until you can just chuck whatever's on your mind down that hallway without having to focus any of your attention on the limitations of the channel.

There's a lot to say on this topic, but I think one of the more important bits is that you can often get the same sense of "low noise conversation" if you pivot from focusing on ideas which are too big for the straw to focusing on the straw itself, and how its limitations might be relaxed. This means giving up on trying to communicate the object-level thing for a moment, but it wasn't going to fit anyway so you just focus on what is impeding communication and work to efficiently communicate about *that*. This is essentially "forging relationships" so that you have the ability to communicate usefully in the future. Sometimes this can be time-consuming, but sometimes knowing how to carry oneself with the right aura of respectability and emotional safety does wonders for the "inferential distance" and "conversational landmines" issues right off the bat.

When the problem is inferential distance, the question comes down to the extent to which it makes sense to trust someone to have something worth listening to over several inferences. If our reasoning differs several layers deep, then offering superficial arguments and counterarguments is a waste of time, because we both know that we can both do that without even being right. When we can recognize that our conversation partner might actually be right about even some background assumptions that we disagree on, then all of a sudden the idea of listening to them describe their worldview and looking for ways that it could be true becomes a lot more compelling. Similarly, when you can credibly convey that you've thought things through and are likely to have something worth listening to, they will find themselves much more interested in listening to you intently with an expectation of learning something.

When the problem is "land mines", the question becomes whether the topic is one where there's too much sensitivity to allow for nonviolent communication and whether supercritical escalation to "violent" threats (in the NonViolent Communication sense) will necessarily displace invitations to cooperate. Some of the important questions here are "Am I okay enough to stay open and not lash out when they are violent at me?" and the same thing reflected towards the person you're talking to. When you can realize "No, if they snap at me I'm not going to have an easy time absorbing that" you can know to pivot to something else (perhaps building the strength necessary for dealing with such things), but when you can notice that you can brush it off and respond only to the "invitation to cooperate" bit, then you have a great way of demonstrating for them that these things are actually safe to talk about because you're not trying to hurt them, and it's even safe to lash out unnecessarily before they recognize that it's safe. Similarly, if you can sincerely and without hint of condescension ask the person whether they're okay or whether they'd like you to back off a bit, often that space can be enough for them to decide "Actually, yeah. I can play this way. Now that I think about it, its clear that you're not out to get me".

There's a lot more to be said about how to do these things exactly and how to balance between pushing on the straw to grow and relaxing so that it can rebuild, but the first point is that it can be done intentionally and systematically, and that doing so can save you from the frustration of inefficient communication and replace it with efficient communication on the topic of how to communicate efficiently over a wider channel that is more useful for everything you might want to communicate.

Comment by jimmy on Fight the Power · 2020-06-25T03:33:36.012Z · score: 3 (2 votes) · LW · GW

In general, if you're careful to avoid giving unsolicited opinions you can avoid most of these problems even with rigid ideologues. You wouldn't inform a random stranger that they're ugly just because it's true, and if you find yourself expressing or wishing to express ideas which people don't want to hear from you, it's worth reflecting on why that is and what you are looking to get out of saying it.

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-17T03:32:11.597Z · score: 4 (2 votes) · LW · GW

I think I get the general idea of the thing you and Vaniver are gesturing at, but not what you're trying to say about it in particular. I think I'm less concerned though, because I don't see inter-agent value differences and the resulting conflict as some fundamental, inextricable part of the system.

Perhaps it makes sense to talk about the individual level first. I saw a comment recently where the person making it was sorta mocking the idea of psychological "defense mechanisms", because "*obviously* evolution wouldn't select for those who 'defend' from threats by sticking their heads in the sand!" -- as if the problem of wireheading were as simple as competition between a "gene for wireheading" and a gene against. Evolution is going to select for genes that make people flinch away from injuring themselves with hot stoves. It's also going to select for people who cauterize their wounds when necessary to keep from bleeding out. Designing an organism that does *both* is not trivial. If sensitivity to pain is too low, you get careless burns. If it's too high, you get refusal to cauterize. You need *some* mechanism to distinguish between effective flinches and harmful flinches, and a way to enact mostly the former. "Defense mechanisms" arise not out of mysterious propagation of fitness-reducing genes, but rather out of the lack of a solution to the hard problem of separating the effective flinches from the ineffective -- and sometimes even the easiest solution to these ineffective flinches is hacked together out of more flinches, such as screaming and biting down on a stick when having a wound cauterized, or choosing to take pain killers.

The solution of "simply noticing that the pain from cauterizing a serious bleed isn't a *bad* thing and therefore not flinching from it" isn't trivial. It's *doable*, and to be aspired to, but there's no such thing as "a gene for wise decisions" that is already "hard coded in DNA".

Similarly, society is incoherent and fragmented and flinches and cooperates imperfectly. You get petty criminals and cronyism and censorship of thought and expression, and all sorts of terrible stuff. This isn't proof of some sort of "selection for shittiness" any more than individual incoherence and the resulting dysfunction are. It's not that coherence is impossible or undesirable, just that you're fighting entropy to get there, and succeeding takes work.

The desire to eat marshmallows succeeds more if it can cooperate and willingly lose for five minutes until the second marshmallow comes. The individual succeeds more if they are capable of giving back to others as a means to foster cooperation. Sometimes the system is so dysfunctional that saying "no thanks, I can wait" will get you taken advantage of, and so the individually winning thing is impulsive selfishness. Even then, the guy failing to follow through on promises of second marshmallows likely isn't winning by disincentivizing cooperation with him, and it's likely more of a "his desire to not feel pain is winning, so he bleeds" sort of situation. Sometimes the system really is so dysfunctional that not only is it winning to take the first marshmallow, it's also winning to renege on your promises to give the second. But for every time someone wins by shrinking the total pie and taking a bigger piece, there's an allocation of the more cooperative pie that would give this would-be defector more pie while still having more for everyone else too. And whoever can find these alternatives can get themselves more pie.

I don't see negative-sum conflict between the individual and society as *inevitable*, just difficult to avoid. It's negotiation that is inevitable, and done poorly it brings lossy conflict. When Vaniver talks about society saying "shut up and be a cog", I see a couple of things happening simultaneously to one degree or another. One is a dysfunctional society hurting itself by wasting individual potential that it could be profiting from, and would love to if only it could see how and implement it. The other is a society functioning more or less as intended and using "shut up and be a cog" as a shit test to filter out the leaders who don't have what it takes to say "nah, I think I'll trust myself and win more", and lead effectively. Just like the burning pain, it's there for a reason, and calibrating it so that it gets overridden at only and all the right times is a bit of an empirical balancing act. It's not perfect as is, but neither is it without function. The incentive for everyone to improve this balancing is still there, and selection on the big scale is for coherence.

And as a result, I don't really feel myself being torn between "respect society's stupid beliefs/rules" and "care about other people". I see people as a combination of *wanting* me to pass their shit tests and show them a better replacement for their stupid beliefs/rules, being afraid and unsure of what to do if I succeed, and selfishly trying to shrink the size of the pie so that they can keep what they think will be the bigger piece. As a result, it makes me want to rise to the occasion and help people face new and more accurate beliefs, and also to create common knowledge of defection when it happens and rub their noses in it to make it clear that those who work to make the pie smaller will get less pie. Sometimes it's more rewarding and higher leverage to run off and gain some momentum by creating and then expanding a small bubble where things actually *work*, but there's no reason to go from "I can't yet be effective in the broader community because I can't yet break out of their 'cog' mold for me, so I'm going to focus on the smaller community where I can" to "fuck them all". There's still plenty of value in reengaging when capable, and pretending there isn't is not the good, functional thing we're striving for. It's not like we can *actually* form a bubble and reject the outside world, because the outside world will still bring you pandemics and AI, and from even a selfish perspective there's plenty of incentive to help things go well for everyone.

Comment by jimmy on Simulacra Levels and their Interactions · 2020-06-16T05:49:37.392Z · score: 9 (5 votes) · LW · GW
Whereas, if things are too forsaken, one loses the ability to communicate about the lion at all. There is no combination of sounds one can make that makes people think there is an actual lion across an actual river that will actually eat them if they cross the river.

Hm. This sounds like a challenge.

How about this:

Those "popular kids" who keep talking about fictitious "lions" on the other side of the river are actually losers. They try to pretend that they're simply "the safe and responsible people" and pat themselves on the back over it, but really they're just a bunch of cowards who wouldn't know what to do if there were a lion, and so they can't even look across the river and will just shame you for being "reckless" if you doubt the existence of lions that they "just know" are there. I hate having to say something that could lump me with these deplorable fools, and never before has there actually been a lion on the other side of the river, but this time there is. This time it's real, and I'm not saying we can't cross if need be, but if we're going to cross we need to be armed and prepared.

I can see a couple of potential failure modes. One is if "Those guys are just crying wolf, but I am legit saving you [and therefore am cool in the way they pretend they are]" itself becomes a cool kid thing to say. The other is that if your audience is motivated to see you as "one of them" to the point of being willing to ignore the evidence in front of them, they will do so despite your having credibly signaled that this is not true. Translating to actual issues I can think of, I think it would mostly actually work though.

It becomes harder if you think those guys are actually cool, but that shouldn't really be a problem in practice. Either a) there actually has been a lion every single time it is claimed, in which case it's kinda hard for "there's a lion!" to indicate group membership because it's simply true. Or b) they've actually been wrong, in which case you have something to distance yourself from.

If the truth is contentious and, even though there has always been a lion, they've never believed you, then you have a bigger problem than simply having your assertions mistaken for group membership slogans; you simply aren't trusted to be right. I'd still say there are things that can be done there, but it does become a different issue.

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-11T19:05:14.154Z · score: 2 (1 votes) · LW · GW
I described what happened to the other post here.

Thanks, I hadn't seen the edit.

I'm having the same dilemma right now where my genuine comments are getting voted into the negative and I'm starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people's time with content they think is low quality (yes yes, I know that that doesn't mean it is low quality per se, but it is a close enough heuristic that I'm mostly willing to stick to it). But the downvotes are very clear, so while I'm disappointed that we couldn't talk through this issue, I will no longer be eating up people's time.

The only comments of yours that I see downvoted into the negative are the two prior conversations in this thread. Were there others that are now positive again?

While I generally support the idea that it's better to stop posting than to continue to post things which will predictably be net negative karma, I don't think that's necessary here. There's plenty of room on LW for things other than curated posts sharing novel insights, and I think working through one's own curiosity can be good not just for the individual in question, but for any other lurkers who might have the same curiosities, and for the community, as bringing people up to speed is an important part of helping them learn to interact best with the community.

I think the downvotes are about something else which is a lot more easily fixable. While I'm sure they were genuine, some of your comments strike me as not particularly charitable. In order to hold a productive conversation, people have to be able to build from a common understanding. The more work you put in to understanding where the other person is coming from and how it can be a coherent and reasonable stance to hold, the less effort it takes for them to communicate something that is understood. At some point, if you don't put enough effort in, you start to miss valid points which would have been easy for you to find and would be prohibitively difficult to word in a way that you wouldn't miss.

As an example, you responded to Richard_Kennaway as if he thought you were lying despite the fact that he explicitly stated that he was not imputing any dishonesty. I'm not sure whether you simply missed that part or whether you don't believe him, but either way it is very hard to have a conversation with someone who doesn't engage with points like this at least enough to say why they aren't convinced. I think, with a little more effort put into understanding how your interlocutors might be making reasonable, charitable, and valid points, you will be able to avoid the downvotes in the future. That's not to say that you have to believe that they're being reasonable/charitable/etc, or that you have to act like you do, but it's nice to at least put in some real effort to check and give them a chance to show when they are. Because the tendency for people to err on the side of "insufficiently charitable" is really, really strong, and even when the uncharitable view is the correct one (not that common on LW), the best way to show it is often to be charitable and have it visibly not fit.

It's a very common problem that comes up in conversation, especially when pushing into new territory. I wouldn't sweat it.

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-11T18:11:24.513Z · score: 5 (3 votes) · LW · GW
I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I'm working around enough of it for this to still be useful.]

This is a really cool declaration. It doesn't bleed through in any obvious way, but thanks for letting me know, and I'll try to be cautious of what I say and how I say it. Lemme know if I'm bumping into anything or if there's anything I could be doing differently to better accommodate.

I think you're interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.

I’m not really sure what you mean here, but I can address what you say below. I’m not sure if it’s related?

“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms?

Depends on how you go about it and what type of risk you’re trying to avoid. When I first started playing with this stuff I taught someone how to “turn off” pain, and in her infinite wisdom she used this new ability to make it easier to be stubborn and run on a sprained ankle. There’s no foolproof solution to make this never happen (in my infinite wisdom I’ve done similar things even with the pain), but the way I go about it now is explicitly mindful of the risks and uses that to get more reliable results. With the swelling, for example, part of my indignant reaction was “it doesn’t have to swell up, I just won’t move it”.

When you’ve seen something happen with your own eyes multiple times, I think that’s beyond the level where you should be foolish for thinking that it might be possible. When you see that the thing that is stopping other people from doing it too is ignorance of the possibility rather than an objection that it shouldn’t be done, then “thinking it through and making your reasoned best guess” isn’t going to be right all the time, but according to your own best guess it will be right more often than the alternative.

Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.

It seems that this bit is your main concern?

It can be a real concern. More than once I've had people express concern about how it has become harder to relate to their old friends after spending a lot of time with me. It's not because of stuff like "I can consciously prevent a lot of swelling, and they don't know how to engage with that" but rather because of stuff like "it's hard to be supportive of what I now see as clearly bad behavior that attempts to shirk reality to protect feelings and inevitably ends up hurting everyone involved". In my experience, it's a consequence of being able to see the problems in the group before being able to see what to do about them.

I don’t seem to have that problem anymore, and I think it’s because of the thought that I’ve put into figuring out how to actually change how people organize their minds. Saying “here, let me use math and statistics to show you why you’re definitely completely wrong” can work to smash through dumb ideas, but then even when you succeed you’re left with people seeing their old ideas (and therefore the ideas of the rest of their social circle) as “dumb” and hard to relate to. When you say “here, let me empathize and understand where you’re coming from, and then address it by showing how things look to me”, and go out of your way to make their former point of view understandable, then you no longer get this failure mode. On top of that, by showing them how to connect with people who hold very different (and often less well thought out) views than you, it gives them a model to follow that can make connecting with others easier. My friend in the above example, for instance, went from sort of a “socially awkward nerd” type to a someone who can turn that off and be really effective when she puts her mind to it. If someone is depressed and not even his siblings can get him to talk, he’ll still talk to her.

If there’s a group of people you want to be able to relate to effectively, you can’t just dissociate off into your own little world where you give no thought to their perspectives, but neither can you just melt in and let your own perspective become that social consensus, because if you don’t retain enough separation that you can at least have your own thoughts and think about whether they might be better and how best to merge them with the group, then you’re just shirking your leadership responsibilities, and if enough people do this the whole group can become detached from reality and led by whomever wants to command the mob. This doesn’t tend to lead to great things.

Does that address what you’re saying?

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-10T20:08:21.060Z · score: 4 (2 votes) · LW · GW

It's not an attack, and I would recommend not taking it as one. People make that mistake all the time, and there's no shame in that. Heck, maybe I'm even wrong and what I'm perceiving as an error actually isn't one. Learning from mistakes (if it turns out to be one) is how we get stronger.

I try to avoid making that mistake, but if you feel like I'm erring, I would rather you be comfortable pointing out what you see instead of fearing that I will take it as an attack. Conversations (philosophical and otherwise) work much more efficiently this way.

I'm sorry if it hasn't been sufficiently clear that I'm friendly and not attacking you. I tried to make it clear by phrasing things carefully and using a smiley face, but if you can think of anything else I can do to make it clearer, let me know.

Secondly I would also like to hear an actual counterargument to the argument I made

Which one? The "it was only studying IBS" one was only studying IBS, sure. It still shows that you can do placebos without deception in the cases they studied. It's always going to be "in the cases they've studied" and it's always conceivable that if you only knew to find the right use of placebos to test, you'll find one where it doesn't work. However, when placebos work without deception in every case you've tested, the default hypothesis is no longer "well, they require deception in every case except these two weird cases that I happen to have checked". The default hypothesis should now be "maybe they just don't require deception at all, and if they do maybe it's much more rare than I thought".

I'm not sure what point the existence of nocebo makes for you, but the same principles apply there too. I've gotten a guy to punch a cactus right after he told me "don't make me punch the cactus" simply by making him expect that if I told him to do it he would. Simply replace "because drugs" with "because of the way your mind works" and you can do all the same things and more.

I'm not sure how many more times I'll be willing to address things like this though. I'm willing to move on to further detail of how this stuff works, or to address counterarguments that I hadn't considered and are therefore surprisingly strong, but if you still just don't buy into the general idea as worth exploring then I can agree to disagree.

And thirdly I have never deleted a comment, but you appear to have double posted, shall I delete one of them?

Yeah, it didn't submit properly the first time and then didn't seem to be working the second time, so it ended up posting twice by the time I finally got confirmation that it worked. I'd have deleted one if I could have.

Speaking of deleting things, what happened to your other post?

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-10T07:48:04.782Z · score: 4 (2 votes) · LW · GW

There's no snark in my comment, and I am entirely sincere. I don't think you're going to get a good understanding of this subject without becoming more skeptical of the conclusions you've already come to and becoming more curious about how things might be different than you think. Without that, the barrier to communication is high enough that reaching agreement isn't worthwhile. If that's not a perspective you can entertain and reason about, then I don't think there's much point in continuing this conversation.

If you can find another way to convey the same message that would be more acceptable to you, let me know.

Comment by jimmy on [deleted post] 2020-06-08T19:30:29.622Z

1) Isomorphic to my "what if you know you'll do something stupid if you learn that your girlfriend has cheated on you" example. To reiterate, any negative effects of learning are caused by false beliefs. Prioritize which way you're going to be wrong until you become strong enough to just not be predictably wrong, sure. But become stronger so that you can handle the truths you may encounter.

2) This clearly isn't a conflict between epistemic and instrumental rationality. This is a question about arming your enemies vs not doing so, and the answer there is obvious. To reiterate what I said last time, this stuff all falls apart once you realize that these are two entirely separate systems, each with its own beliefs and values, and that you're positing that the subsystem in control is not the subsystem that is correct and shares your values. Epistemic rationality doesn't mean giving your stalker your new address.

3) "Unfortunately studies have shown that in this case the deception is necessary, and the placebo effect won't take hold without it". This is assuming your conclusion. It's like saying "Unfortunately, in my made up hypothetical that doesn't actually exist, studies have shown that some bachelors are married, so now what do you say when you meet a married bachelor!". I say you're making stuff up and that no such thing exists. Show me the studies, and I'll show you where they went wrong.

You can't just throw a blanket over a box and say "now that you can no longer see the gears, imagine that there's a perpetual motion machine in there!" and expect it to have any real-world significance. If someone showed me a black box that put out more energy than went into it and persisted longer than known energy storage/conversion mechanisms could do, I would first look under the box for any shenanigans that a magician might try to pull. Next I would measure the electromagnetic energy in the room and check for wireless power transfer. Even if I found none of those, I would still expect that this guy is a better magician than I am anti-magician, and would not begin to doubt the physics. Even if I became assured that it wasn't magician trickery and it really wasn't sneaking energy in somehow, I would then start to suspect that he managed to build a nuclear reactor smaller than I thought possible, or otherwise discovered new physics that makes this possible. I would then proceed to tear the box apart and find out what assumptions I'm missing. At the point where it became likely that it wasn't new physics but rather incorrect old physics, I would continually reference the underlying justifications of the laws of thermodynamics and see if I could start to see how one of the founding assumptions could be failing to hold.

Not until I had done all that would I even start to believe that it is genuinely what it claims to be. The reasons to believe in the laws of thermodynamics are simply so much stronger than the reason to believe people claiming to have perpetual motion machines that if your first response isn't to challenge the hypothetical hard, then you're making a mistake.

"Knowing more true things without knowing more false things leads to worse results by the values of the system that is making the decision even when the system is working properly" is a similarly extraordinary claim that calls for extraordinary evidence. The first thing to look for, besides a complete failure to even meet the description, is for false beliefs being smuggled in. In every case you've given, it's been one or the other of these, and that's not likely to change.

If you want to challenge one of the fundamental laws of rationality, you have to produce a working prototype, and it has to be able to show where the founding assumptions went wrong. You can't simply cast a blanket over the box and declare that it is now "possible" since you "can't see" that it's not impossible. Endeavor to open black boxes and see the gears, not close your eyes to them and deliberately reason out of ignorance. Because when you do, you'll start to see the path towards making both your epistemic and your instrumental rationality work better.

4) Throw it away like all spam. Your attention is precious, and you should spend it learning the things that you expect to help you the most, not about seagulls. If you want though, you can use this as an exercise in becoming more resilient and/or in learning about the nature of human psychological frailty.

It's worth noticing, though, that you didn't use a real-world example, and that there might be reasons for this.

5) This is just 2 again.

6) Maybe? As stated, probably not. There are a few different possibilities here though, and I think it makes more sense to address them individually.

a) The torture is physically damaging, like peeling one's skin back or slowly breaking every bone in one's body.

In this case, obviously not. I'm also curious what it feels like to be shot in the leg, but the price of that information is more than I'm willing to spend. If I learn what that feels like, then I don't get to learn what I would have been able to accomplish if I could still walk well. There's no conflict between epistemic and instrumental rationality here.

b) The "torture" is guaranteed to be both safe and non physically damaging, and not keep me prisoner too long when I could be doing other things.

When I learned about tarantula hawks and that their sting was supposedly both debilitatingly painful and also perfectly non-damaging and safe, I went pretty far out of my way to acquire them and provoke them to sting me. Fear of non-damaging things is a failing to be stamped out. When you accept that the scary thing truly is sufficiently non-dangerous, fear just becomes excitement anyway.

If these mysterious white room people think they can bring me a challenge while keeping things sufficiently safe and non-physically-damaging I'd probably call their bluff and push that button to see what they got.

c) This "torture" really is enough to push me sufficiently past my limits of composure that there will be lasting psychological damage.

I think this is actually harder than you think unless you also cross the lines on physical damage or risk, or get to spend a lot of time at it. However, it is conceivable, and so in this case we're back to another instance of number one. If I'm pretty sure it won't be any worse than this, I'd go for it.


This whole "epistemic vs instrumental rationality" thing really is just a failure to do epistemic rationality right, and when you peak into the black box instead of intentionally keeping it covered you can start to see why.

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-08T17:53:06.883Z · score: 8 (4 votes) · LW · GW
I'm very glad that you managed to train yourself to do that but this option is not available for everyone.

Do you have any evidence for this statement? That seems like an awfully quick dismissal given that twice in a row you cited things as if they countered my point when they actually missed the point completely. Both epistemically and instrumentally, it might make sense to update the probability you assign to "maybe I'm missing something here". I'm not asking you to be more credulous or to simply believe anything I'm saying, mind you, but maybe a bit more skeptical and a little less credulous of your own ideas, at least until that stops happening.

Because you do have that option available to you. In my experience, it's simply not true that attempts at self-deception ever give better results than simply noticing false beliefs and then letting them go once you do, or that anyone ever says "that's a great idea, let's do that!" and then mysteriously fails. The idea that it's "not available" is one more false belief that gets in the way of focusing on the right thing.

Don't get me wrong, I'm not saying that it's always trivial. Epistemic rationality is not trivial. It's completely possible to try to organize one's mind into coherence and still fail to get the results because you don't realize where you're missing something. Heck, in the last example I gave, my friend did just that. Still, at the end of the day, she got her results, and she is a much happier and more competent person than she was years back when her mind was still caught up on more well-meaning self-deceptions.

I don't see a lot of engaging in the least convenient possible world

Well, if I don't think any valid examples exist, all I can do is knock over the ones you show me. Perhaps you can make your examples a little less convenient to knock over and put me to a better test then. ;)

I'll take a look at your new post.

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-08T05:13:44.826Z · score: 10 (3 votes) · LW · GW

Placebo doesn't require deception.

Just like with sports, you can get all the same benefits of placebo by simply pointing your attention correctly without predicating it on nonsense beliefs, and it's actually the nonsense beliefs that are getting in the way and causing the problem in the first place. A "placebo" is just an excuse to stop falsely believing that you can't do whatever it is you need to do without a pill.

And I don't say this as some matter of abstract "theory" that sounds good until you try to put it into practice; it's a very real thing that I actually practice somewhat regularly. I'll give you an example.

One day I sprained my ankle pretty badly. I was frustrated with myself for getting injured and didn't want it to swell up, so I indignantly decided "screw this, my ankle isn't going to swell". It was a significant injury and took a month to recover, but it didn't swell. The next several times I got injured I kept this attitude and nothing swelled, including when I dropped a 50 lb chunk of wood on my finger in a way that I was sure would make it swell enough to keep me from bending that finger... until I remembered that it doesn't have to be that way, and made the difficult decision to actually expect it to not swell. It didn't, and I retained complete finger mobility.

I told a friend of mine about this experience, and while she definitely remained skeptical and it seemed "crazy" to her, she had also learned that even "crazy" things coming out of my mouth had a high probability of turning out to be true, and therefore didn't rule it out. The next time she got injured, she felt a little weird "pretending" that she could just influence these things but figured "why not?" and decided that her injury wasn't going to swell either. It didn't. A few injuries went by, and things weren't swelling so much. Finally, she inadvertently told someone "Oh, don't worry, I don't need to ice my broken thumb because I just decided that it won't swell". The person literally could not process what she said because it was so far from what he was expecting, and she felt foolish for saying it. Her injury then swelled up, even though it had already been a while since the break. I called her and talked to her later that night, pointed out what had happened with her mental state, and helped her fix her silly limiting (and false) beliefs, and when she woke up in the morning the swelling had largely subsided again.

The size of the effect was larger than I've ever gotten with ibuprofen, let alone fake ibuprofen. "I have no ability to prevent my body from swelling up" is factually wrong, and being convinced of this falsehood prevents people from even trying. You can lie to yourself and take a sugar pill if you want, but it really is both simpler and more effective to just stop believing false things.

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-07T21:22:34.580Z · score: 13 (5 votes) · LW · GW
What something is worth is not an objective belief but a subjective value.

Would you say "this hot dog is worth eating" is similarly "a subjective value" and not "an objective belief"? Because if it turns out that the hot dog had been sitting out for too long and you end up puking your guts out, I think it's pretty unambiguous to say that "worth eating" was clearly false.

The fact that the precise meaning may not be clear does not make the statement immune from "being wrong". A really good start on this problem is "if you were able to see and emotionally integrate all of the consequences, would you regret this decision or be happy that you made it?".

This is not how human psychology works. Optimism does lead to better results in sports.

You have to be able to distinguish between "optimism" (which is good) and "irrational confidence" (which is bad). What leads to good results in sports is an ability to focus on putting the ball where it needs to go, and pessimism (but not accurate beliefs) impedes that.

If you want a good demonstration of that, watch Conor McGregor's rise to stardom. He gained a lot of interest for his "trash talk", which was remarkably accurate. Instead of saying "I'M GONNA KNOCK HIM OUT FIRST ROUND!" every time, he actually showed enough humility to say "My opponent is a tough guy, and it probably won't be a first round knockout. I'll knock him out in the second". It turned out in that case that he undersold himself, but that did not prevent him from getting the first-round knockout. When you watch his warm-up right before the fights, what his body language screams is that he has no fear, and that's what's important, because fear impedes fluid performance. When he finally lost, his composure in defeat showed that his lack of fear came not from successful delusion but from acceptance of the possibility of losing. This is peak performance, and is what we should all be aspiring to.

In general, "not how human psychology works" is a red flag for making excuses for those with poorly organized minds. "You have to expect to win!" is a refrain for a reason; the people who say this falsehood probably would engage in pessimism if they thought they were likely to lose. However, that does not mean that one cannot aspire to do better. Other people don't fall prey to this failure mode, and those people can put on impressive performances that shock even themselves.

Comment by jimmy on [Poll] 'Truth' vs 'Winning' · 2020-06-06T20:17:05.089Z · score: 9 (7 votes) · LW · GW
These two options do not always coincide. Sometimes you have to choose.

I'll go even further than Zack and flat out reject the idea that this even applies to humans.

The most famous examples are: Learning knowledge that is dangerous for humanity (e.g. how to build an unaligned Superintelligence in your garage), knowledge that is dangerous to you (e.g. Infohazards)

This kind of problem can only happen with an incoherent system ("building and running a superintelligence in one's garage is a bad thing to do"+"I should build and run a superintelligence in my garage!") where you posit that the subsystem in control is not the subsystem that is correct. If you don't posit incoherence of "a system", then this whole thing makes no sense. If garage AIs are bad, don't build them and try to stop others from building them. If garage AIs are good, then build them. Both sides find instrumental and epistemic rationality to be aligned. It's just that my idea of truth doesn't always line up with your idea of best action because you might have a different idea of what the truth is.

It can be more confusing when it happens within one person, but it's the same thing.

If learning that your girlfriend is cheating on you would cause you to think "life isn't worth living" and attempt suicide even though life is still worth living, then the problem isn't that true beliefs ("she cheated on me") are leading to bad outcomes, it's that false beliefs ("life isn't worth living") are leading to bad outcomes, and that your truth finding is so out of whack that you can already predict that true beliefs will lead to false beliefs.

In these cases you have a few options. One is to notice this and say "Huh, if life would still be worth living, why would I feel like it isn't?" and explore that until your thoughts and feelings merge into agreement somewhere. In other words, fix your shit so that true beliefs no longer predictably lead to false beliefs. Another is to put off the hard work of having congruent (and hopefully true) beliefs and feelings, and say "my feelings about life being worth living are wrong, so I will not act on them". Another, if you feel like you can't trust your logical self to retain control over your emotional impulses, is to say "I realize that my belief that my girlfriend isn't cheating on me might not be correct, but my resulting feelings about life would be incorrect in a worse way, and since I am not yet capable of good epistemics, I'm at least going to be strategic about which falsehoods I believe so that my bad epistemics harm me the least".

The worst thing you can do is go full "Epistemics don't matter when my life is on the line" and flat out believe that you're not being cheated on. Because if you do that, then there's nothing protecting you from stumbling upon evidence and being forced into a choice between "unmanaged false beliefs about life's worth" and "detaching from reality yet further".

or trusting false information to increase your chances of achieving your goals (e.g. being unrealistically optimistic about your odds of beating cancer because optimistic people have higher chances of survival).

True beliefs aren't the culprit here either. If you have better odds when you're optimistic, then be optimistic. "The cup isn't completely empty! It's 3% full, and even that may be an underestimate!" is super optimistic, even when "I'm almost certainly going to die" is also true.

This is very similar to the mistaken sports idea that "you have to believe you will win". No you don't. You just have to put the ball through the hoop more than the other guy does, or whatever other criteria your sport has. Yes, you're likely to not even try if you're lying to yourself and saying "it's not even possible to win" because "I shouldn't even try" follows naturally from that. However, if you keep your mind focused on "I can still win this, even if it's unlikely" or even just "Put the ball in the hoop. Put the ball in the hoop", then that's all you need.

In physics, if you think you've found a way to get free energy, that's a good sign that your understanding of the physics is flawed, and the right response is to think "okay, what is it that I don't understand about gravity/fluid dynamics/etc that is leading me to this false conclusion?". Similarly, the idea that epistemic rationality and instrumental rationality are in conflict is a major red flag about the quality of your epistemic rationality, and the solution on both fronts is to figure out what you're doing wrong that leads you to perceive this obvious falsehood.

Comment by jimmy on Reexamining The Dark Arts · 2020-06-06T19:39:22.403Z · score: 3 (2 votes) · LW · GW

That's an interesting hypothesis, and it seems plausible as a partial explanation to me. I don't buy it as a full explanation for a couple of reasons. One is that it is inherently harder to read and follow, rather than being an equally valid aesthetic. It may also function as a signal that you are on team Incoherent Thought, and there may occasionally be reasons to fake a disability, but generally genuine shortcomings don't become attractive things to signal. Even the king of losers is a loser, and generally the impression that I get is that these people did wish they had more mainstream acceptance and would take it in a heartbeat if they could get it at the level they feel they deserve. That doesn't mean that they won't flout it when they can, but the signs are there. They spend a lot more time talking about "the establishment" than the establishment spends talking about them, for example.

The main point holds though. If your target audience sees formal attire as a sign of "conformism and closed mindedness" rather than a sign that you are able to shave and afford pricey clothing, then the honest thing to do is to show that you don't have to conform by not wearing a suit when you meet with them. When you're meeting the people who do want to make sure you can shave and put on fancy clothes, it's honest to show that you can do that too.

Comment by jimmy on Reexamining The Dark Arts · 2020-06-01T19:27:55.278Z · score: 4 (2 votes) · LW · GW

If your website looks like this, people don't need to read your content in order to tell that you're a crazy person who is out of touch with how he comes off and doesn't have basic competencies like "realize that this is terrible, hire a professional". Just scroll through without reading any of it, and with your defense against the dark arts primed and ready, tell me how likely you think it is that the content is some brilliant insight into the nature of time itself. It's a real signal that credibly conveys information about how unlikely this person is to have something to say which is worth listening to. Signalling that you can't make a pretty website when you can is dishonest, and the fact that you would be hindering yourself by doing so makes it no better.

When you know what you're doing, there's nothing "dark" about looking like it.

Comment by jimmy on Updated Hierarchy of Disagreement · 2020-05-30T20:17:20.568Z · score: 2 (1 votes) · LW · GW
a "steel man" is an improvement of someone's position or argument that is harder to defeat than their originally stated position or argument.

This seems compatible with both, to me. "You're likely to underestimate the risks, and you can die even on a short trip" is a stronger argument than "You should always wear your seat belt because it is NEVER safe to be in a car without a seat belt", and cannot be so easily defeated as saying "Parked in the garage. Checkmate".

Reading through the hyperbole to the reasonable point underneath is still an example of addressing "the best form of the other person's argument", and it's not the one they presented.

Comment by jimmy on Is fake news bullshit or lying? · 2020-05-30T20:12:46.731Z · score: 2 (1 votes) · LW · GW

I think the conflicting narratives tend to come from different sides of the conflict, and that people generally want the institutions that they're part of (and which give them status) to remain high status. It just doesn't always work.

What I'm talking about is more like... okay, so Chael Sonnen makes a great example here, both because he's great at it and because it makes for a non-political example. Chael Sonnen is a professional fighter who intentionally plays the role of the "heel". He'll say ridiculous things with a straight face, like telling the greatest fighter in the world that he "absolutely sucks", or telling a story that a couple of Brazilian fighters (the Nogueira brothers) mistook a bus for a horse and tried to feed it a carrot, and then sticking to it.

When people try to "fact check" Chael Sonnen, it doesn't matter because not only does he not care that what he's saying is true, he's not even bound by any expectation of you believing him. The bus/carrot story was his way of explaining that he didn't mean to offend any Brazilians, and the only reason he said that offensive stuff online is that he was unaware that they had computers in Brazil. The whole point of being a heel is to provoke a response, and in order to do that all he has to do is have the tiniest sliver of potential truth there and not break character. The bus/carrot story wouldn't have worked if the fighters from a clearly more technologically advanced country than him, even though it's pretty darn far from "they actually think buses are horses, and it's plausible that Chael didn't know they have computers". If your attempt to call Chael out on his BS is to "fact check" whether he was even there to see a potential bus/horse confusion or to point out that if anything, they're more likely to mistake a bus for a Llama, you're missing the entire point of the BS in the first place. The only way to respond is the way Big Nog actually did, which is to laugh it off as the ridiculous story it is.

The problem is that while you might be able to laugh off a silly story about how you mistook a bus for a horse, people like Chael (if they're any good at what they do) will be able to find things you're sensitive about. You can't so easily "just laugh off" him saying that you absolutely suck even if you're the best in the world, because he was a good enough fighter that he nearly won that first match. Bullshitters like Chael will find the things that are difficult for you to entertain as potentially true and make you go there. If there's any truth there, you'll have to admit to it or end up making yourself look like a fool.

This brings up the other type of non-truthtelling that commonly occurs, which is the counterpart to this. Actually expecting to be believed means opening yourself to the possibility of being wrong and demonstrating that you're not threatened by this. If I say it's raining outside and expect you to actually believe me, I have to be able to say "hey, I'll open the door and show you!", and I have to look like I'll be surprised if you don't believe me once you get outside. If I start saying "How DARE you insinuate that I might be lying about the rain!" and generally take the bait that BSers like Chael leave, I show that it's not that I want you to genuinely believe me so much as I want you to shut your mouth and not challenge my ideas. It's a 2+2=5 situation now, and that's a whole nother thing to expect. In these cases there still isn't the same pressure to conform to the truth that's needed if you expect to be believed, and your real constraint is how much power you have to pressure the other person into silence/conformity.

The biggest threat to truth, as I see it, is that when people get threatened by ideas that they don't want to be true, they try to 2+2=5 at it. Sometimes they'll do the same thing even when the belief they're trying to enforce is actually the correct one, and it causes just as many problems, because you can't trust someone saying "Don't you DARE question" even when they follow it up with "2+2=4", and unless you can do the math yourself you can't know what to believe. To give a recent example, I found a document written by a virologist PhD about why the COVID pandemic is very unlikely to have come from a lab, and it was more thorough and covered more possibilities than I had yet seen anyone else cover, which was really cool. The problem is that when I actually checked his sources, they didn't all say what he said they said. I sent him a message asking whether I was missing something in a particular reference, and his response was basically "Ah, yeah. It's not in that one, it's in another one from China that has been deleted and doesn't exist anymore", and he went on to cite the next part of his document as if there were nothing wrong with making blatantly false implications that the sources one gives support the point one made, and as if the only reason I could even be asking about it is that I hadn't read the following paragraph about something else. When I pointed out that conspiracy-minded people are likely to latch on to any little reason not to trust him, and that in order to be persuasive to his target audience he should probably correct it and note the change, he did not respond and did not correct his document. And he wonders why we have conspiracy theories.

Bullshitters like Chael can sometimes lose (or fail to form) their grip on reality and let their untruths actually start to impact things in a negative way, and that's a problem. However, it's important to realize that the fuel that sustains these people is the over-reaching attempts to enforce "2+2=what I want you to say it does", and if you just do the math and laugh it off when he says with a straight face that 2+2=22, there's no more oppressive bullshit for him to eat and fuel his trolling.

Comment by jimmy on Updated Hierarchy of Disagreement · 2020-05-29T20:14:08.635Z · score: 3 (2 votes) · LW · GW
You don't want your interlocutor to feel like you are either misrepresenting or humiliating him. Improving an argument is still desirable, but don't sour the debate.


There are a couple different things I sometimes see conflated together under the label "steel man".

As an example, imagine you're talking to the mother of a young man who was killed by a drunk driver on the way to the corner store, and whose life could likely have been saved if he had been wearing a seat belt. This mom might be a bit emotional when she says "NEVER get in a car without your seat belt on! It's NEVER safe!", and interpreted completely literally it is clearly bad advice based on a false premise.

One way to respond would be to say "Well, that's pretty clearly wrong, since sitting in a car in your garage isn't dangerous without a seat belt on. If you were to make a non-terrible argument for wearing seat belts all the time, you might say that it's good to get in the habit so that you're more likely to do it when there is real danger", and then respond to the new argument. The mother in this case is likely to feel both misrepresented and condescended to. I wouldn't call this steel manning.

Another thing you could do is to say "Hm. Before I respond, let me make sure I'm understanding you right. You're saying that driving without a seat belt is almost always dangerous (save for obvious cases like "moving the car from the driveway into the garage") and that the temptation to say "Oh, that rarely happens!"/"it won't happen to me!"/"it's only a short trip!" is so dangerously dismissive of real risk that it's almost never worth trusting that impulse when the cost of failure is death and the cost of putting a seat belt on is negligible. Is that right?". In that case, you're more likely to get a resounding "YES!" in response, even though that not only isn't literally what she said, it also contradicts the "NEVER" in her statement. It's not "trying to come up with a better argument, because yours is shit", it's "trying to understand the actual thing you're trying to express, rather than getting hung up on irrelevant distractions when you don't express it perfectly and/or literally". Even if you interpret wrong, you're not going to get bad reactions, because you're checking for understanding rather than putting words in their mouth, and you're responding to the thing they are actually trying to communicate. This is the thing I think was being pointed at in the original description of "steel man", and it is something worth striving for.

Comment by jimmy on Is fake news bullshit or lying? · 2020-05-27T18:03:07.743Z · score: 3 (2 votes) · LW · GW

I think another distinction worth making here is whether the person "bullshitting"/"lying" even expects or intends to be believed. It's possible to "not care whether the things he says describe reality correctly" and still say it because you expect people to take you seriously and believe you, and I'd still call that lying.

It's quite a different thing when that expectation is no longer there.

Comment by jimmy on Reflective Complaints · 2020-05-24T20:20:49.045Z · score: 4 (2 votes) · LW · GW

I used "flat earthers" as an exaggerated example to highlight the dynamics the way a caricature might highlight the shape of a chin, but the dynamics remain and can be important even and especially in relationships which you'd like to be close simply because there's more reason to get things closer to "right".

The reason I brought up "arrogance"/"humility" is that the failure modes you brought up, "not listening" and "having obvious bias without reflecting on it and getting rid of it", are failures of arrogance. A bit more humility makes you more likely to listen and to question whether your reasoning is sound. As you mention, though, there is another dimension to worry about, which is the axis you might label "emotional safety" or "security" (i.e. the thing that drives guarded/defensive behavior when it's not there in sufficient amounts).

When you get defensive behavior (perhaps in the form of "not listening" or whatever), cooperative and productive conversation requires that you back up and get the "emotional safety" requirements fulfilled before continuing on. Your proposed response assumes that the "safety" alarm is caused by an overreach on what I'd call the "respect" dimension. If you simply back down and consider that you might be the one in the wrong this will often satisfy the "safety" requirement because expecting more relative respect can be threatening. It can also be epistemically beneficial for you if and only if it was a genuine overreach.

My point isn't "who cares about emotional safety, let them filter themselves out if they can't handle the truth [as I see it]", but rather that these are two separate dimensions, and while they are coupled they really do need to be regulated independently for best results. Any time you try to control two dimensions with one lever you end up with a 1d curve that you can't regulate at all, and which is therefore free to wander without correction.

While people do tend to mirror your cognitive algorithm so long as it is visible to them, it's not always immediately visible, so you can get into situations where you *have been* very careful to make sure that you're not the one making a mistake, but since that care hasn't been perceived, you can still get "not listening" and the like anyway. In these kinds of situations it's important to back up and make it visible, but that doesn't necessarily mean questioning yourself again. Often this means listening to them explain their view and ends up looking almost the same, but I think the distinctions are important because of the other possibilities they help to highlight.

The shared cognitive algorithm I'd rather end up in is one where I put my objections aside and listen when people have something they feel confident in, and one where when I have something I'm confident in they'll do the same. It makes things run a lot more smoothly and efficiently when mutual confidence is allowed, rather than treated as something that has to be avoided at all costs, and so it's nice to have a shared algorithm that can gracefully handle these kinds of things.

Comment by jimmy on Reflective Complaints · 2020-05-22T03:06:37.049Z · score: 7 (2 votes) · LW · GW
It seems to me that I'm explaining something reasonable, and they're not understanding it because of some obvious bias, which should be apparent to them. 
But, in order for them to notice that, from inside the situation, they'd have to run the check of:
TRIGGER: Notice that the other person isn't convinced by my argument
ACTION: Hmm, check if I might be mistaken in some way. If I were deeply confused about this, how would I know?

The fact that the other person isn’t convinced by your argument is only evidence that you’re mistaken to the extent you’d expect this other person would be convinced by good arguments. For your friends and people who have earned your respect this action is a good response, but in the more general case it might be hard to get yourself to apply it faithfully because really, when the flat earther isn’t convinced are you honestly going to consider whether you’re actually the one that’s wrong?

The more general approach is to refuse to engage in false humility/false respect and make yourself choose between being genuinely provocative and inviting (potentially accurate) accusations of arrogance or else finding some real humility. For the trigger you give, I’d suggest the tentative alternate action of “stick my neck out and offer for it to be chopped off”, and only if that action makes you feel a bit uneasy do you start hedging and wondering “maybe I’m overstepping”.

For example, maybe you’re arguing politics and they scoffed at your assertion that policy X is better than policy Y or whatever, and it strikes you as arrogant for them to just dismiss out of hand ideas which you’ve thought very hard about. You could wonder whether you’re the arrogant one, and that you really should have thought harder before presenting such scoffable ideas and asked for their expertise before forming an opinion — and in some cases that’ll be the right play. In other cases though, you can be pretty sure that you’re not the arrogant one, and so you can say “you think I’m being arrogant by thinking I can trust my thinking here to be at least worth addressing?” and give them the chance to say “Yes”.

You can ask this question because “I’m not sure if I am being arrogant here, and I want to make sure not to overstep”, but you can also ask because it’s so obvious what the answer is that when you give them an opening and invite their real belief they’ll have little option but to realize “You’re right, that’s arrogant of me. Sorry”. It can’t be a statement disguised as a question, and you really do have to listen to their answer and take it in, whatever it is, but you don’t have to pretend to be uncertain of what the answer is or what they will believe it to be on reflection. “Hey, so I’m assuming you’re just acting out of habit and if so that’s fine, but you don’t really think it’s arrogant of me to have an opinion here, do you?” or “Can you honestly tell me that I’m being arrogant here?”. It doesn’t really matter whether you say it because “you want to point out to people when they aren’t behaving consistently with their beliefs”, or because “I want to find out whether they really believe that this behavior is appropriate”, or because “I want to find out whether I’m actually the one in the wrong here”. The important point is conspicuously removing any option you have for weaseling out of noticing when you’re wrong, so that even when you are confident that it’s the other guy in the wrong, should your beliefs make false predictions it will come up and be absolutely unmissable.

Comment by jimmy on Consistent Glomarization should be feasible · 2020-05-09T04:57:51.059Z · score: 6 (3 votes) · LW · GW
With close friends or rationalist groups, you might agree in advance that there's a "or I don't want to tell you about what I did" attached to every statement about your life, or have a short abbreviation equivalent to that.

This already exists, and the degree of “or I’m not telling the truth” is communicated nonverbally.

For example, when my wife was early in her pregnancy we attended the wedding of one of her friends, and a friend noticed that she wasn’t drinking “her” drink and asked “Oh my gosh, are you pregnant!?”. My wife’s response was to smile, say “yep”, and then take a sip of beer. The reason this worked for both 1) causing her friend to conclude that she [probably] wasn’t pregnant and 2) not feeling like her trust was betrayed later is that the response was given “jokingly”, which means “don’t put too much weight on the seriousness of this statement”. A similar response could be “No, don’t you think I’d have told you immediately if I were pregnant?”, again said jokingly, so as to highlight the potential for “no, I suppose you might not want to share if it’s that early”. It still communicates “No, or else I have a good reason for not wanting to tell you”.

If you want to be able to feel betrayed when their answer is misleading, you have to get a sincere sounding answer first, and “refuses to stop joking and be serious” is one way that people communicate their reluctance to give a real answer. Pushing for a serious answer after this is clear is typically seen as bad manners, and so it’s easy to go from joking around to a flat “don’t pry” when needed without seeming like you have anything to hide. Because after all, if they weren’t prying they’d have just accepted the joking/not-entirely-serious answer as good enough.

Comment by jimmy on Meditation skill: Surfing the Urge · 2020-05-08T18:16:03.995Z · score: 4 (2 votes) · LW · GW
Understand that the urge to breath is driven by the body’s desire to rid itself of carbon dioxide (CO2)--not (as some assume) your body's desire to take in oxygen (O2).

Interestingly enough, this isn't entirely true. If you get a pulse oximeter and a bottle of oxygen you can have some fun with it.

Because of the nonlinearity in the oxygen dissociation curve, oxygen saturation tends to hold pretty steady for a while and then really tank quickly, whereas CO2 discomfort builds more uniformly. In my experience, when I get that really "panicked" feeling and start breathing again, the pulse oximeter on my finger shows my saturation tank shortly after (there's a bit of a delay, which is useful here for knowing that it's not the numbers on the display causing the distress).
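
If you want to see the shape of that nonlinearity, the standard Hill approximation captures it reasonably well. Here's a minimal sketch using typical textbook constants for adult hemoglobin (P50 ≈ 26.8 mmHg, Hill coefficient ≈ 2.7), not anything I measured myself:

```python
def spo2(po2_mmhg, p50=26.8, n=2.7):
    """Hill equation approximation to the hemoglobin-oxygen dissociation curve."""
    return po2_mmhg**n / (po2_mmhg**n + p50**n)

for po2 in (100, 80, 60, 40, 30):
    print(f"pO2 {po2:>3} mmHg -> saturation {spo2(po2):.0%}")
# Roughly 97%, 95%, 90%, 75%, 58%: nearly flat up top,
# then it falls off a cliff, i.e. "holds pretty steady and then really tanks".
```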

If it were just CO2 causing the urge to breathe, the CO2 contractions and the urge to breathe should come on in exactly the same way when breathing pure oxygen, and this is not the case. Instead of coming on at ~2-2.5 min and being quite uncomfortable, they didn't start until four minutes in and were very, very mild. I've broken five minutes when I was training more, and it was psychologically quite difficult. Comparatively speaking, 5 minutes on pure O2 was downright trivial, and at 7 minutes it wasn't any harder. The only reason I stopped the experiment there is that I started feeling narcosis from the CO2 and figured I should do some more research about hypercapnia (too much CO2) before pushing further.

Along those same lines, rebreather divers sometimes drown when they pass out due to hypercapnia, and while you'd think it'd be way too uncomfortable to miss, this doesn't seem to (always) be the case. In my own experiments, rebreathing a scrubberless bag of oxygen did get uncomfortable quickly, but in a blind study five out of twenty people failed to notice within 5 minutes that no CO2 was being removed.

At the same time, a scrubbed bag with no oxygen replacement is completely comfortable even as the lights go out, so low O2 alone isn't enough to trigger that panic.

Comment by jimmy on Meditation skill: Surfing the Urge · 2020-05-08T17:57:28.209Z · score: 4 (2 votes) · LW · GW

Certainly not in any obvious way, like people who suffer repeated blows to the head. There's some debate over whether loss of motor control (they call it "samba" because it's kind of like you start dancing involuntarily) can cause damage that makes it more likely to happen again in the future, but I haven't been able to find any evidence that normal training causes any damage at all, and even the samba claim seems to be controversial.

Comment by jimmy on On Platitudes · 2020-04-22T19:36:47.911Z · score: 4 (2 votes) · LW · GW

This is a big topic and I think both slider's "Part of the problem about such tibits of wisdom that they are about big swath of experience/information and kind of need that supporting infrastructure." and Christian's "It seems to me that the skillset towards which you are pointing is a part of hypnosis" are important parts of it. In particular, hypnotists like Milton Erickson have put a lot of time into figuring out how best to convey the felt sense that there is a big swath of experience/information in there that needs to be found, and how to give pointers in the right direction. Hypnotized people can forget their own name without understanding any of the supporting theory about how this is even possible, and religious people can live on commandments even though they do not grasp or have an ability to convey the wisdom upon which they rest. Knowing who to trust and how to believe things that one does not yet understand can be very important life skills, and they don't come naturally for those of us who like to "think for ourselves".

The reason Peterson can be so powerful in how he expresses these "platitudes" is that to him they aren't platitudes. He actually did the work and developed the wisdom necessary for these things to stand on their own and not drift away as a "Yeah, nice thought, heard that before". When you see the effects of people breaking the relevant commandments enough that you start to get a gut level appreciation of what it would be like if you were to allow yourself to make that mistake, it starts to have the same intrinsic revulsion that you get when trying to eat Chinese food after it gave you food poisoning the time before. It's a different thing that way.

If you look at someone who makes a living spouting feel-good platitudes that they do not themselves live by or understand, how do they respond when challenged? How would you respond if you had tried to tell people to "clean their rooms" as if it were a solution for everything up to and including global warming, only to have BS called on you? Here's how Peterson responds. He does not falter and lose confidence. He does not back away into more platitudes to prevent engagement. He actually goes forward and begins to expound on the underpinnings of why "clean your room" is shorthand for a very important principle (in his view, at least, and mine as well) about how social activism is best done. He does it without posturing about how clean his room is and without accusing his accuser of having an unclean room herself. This part is a bit subtle as he makes no apologies for her behavior and his models do suggest unflattering motivations, but he doesn't go so far as to make it about her or about deflecting criticism from himself. He keeps his focus on the importance of cleaning one's room so that one can do good in this world and not be led astray by psychological avoidance and ignorance, and this is exactly what you would expect from someone who is actually onto something real and who means what they say. This engagement is crucial.

Even if "clean your room" isn't terribly informative or novel itself, his two minute explanation is more. Even though that's not enough, he does have books and lectures where he spells it all out in more detail. When even a book or two isn't enough, there's clearly a lifetime of experience and practice under there beyond immediate reach. You can get started with a YouTube video or a book, but back to slider's point, there's a big ass iceberg under there and you have to piece the bulk of it together yourself. The YouTube videos and books are as much an advertisement as they are a pointer. "Here are [short descriptions of] the rules he endeavors to live by, and the results are there to judge for yourself". When people see someone who practices what they preach and whose results they like at least in part, it creates that motivation to learn more of what is underneath and, in the meantime, to accept some of what they can't understand on their own when they can see that the results are there to back it up.

You can't just say "She's happier now in heaven" and expect words that are meaningless to you to convey any meaning. But when "She wouldn't have wanted you to be unhappy" is true and relevant and not just a pretense in an attempt to avoid the real hurt of real loss... because the suffering they're going through isn't just plain grieving but also beating oneself up out of some mistaken idea that it's what a "good" husband would do... then absolutely those words can be powerful. Because they actually mean something, and you would know it.

When the meaning is there, and you know it, and you are willing to engage and stand up to the potential challenges of people who might want to push away from your advice, then even simple and "non-novel" words can be a very novel and compelling thing. Because while they may have heard someone spout that platitude before, they likely have never heard anyone stand behind it and really mean it.

Comment by jimmy on Reflections on Arguing about Politics · 2020-04-14T18:51:57.126Z · score: 2 (1 votes) · LW · GW
>really want to change the other’s mind
Which is very zero-sum, and indicates that to the extent a discussion is productive for one, it's counter-productive for the other. I recommend NOT HAVING those arguments. If you're going in with goals of understanding their position, changing your own mind, or better modeling the universe (and those in it), then you might actually be productive.

Not quite. If my goal is to change your mind and I succeed, you don't lose, and therefore it's not zero-sum. If I succeed it's probably because I'm right, or at least because in your estimation I seem more likely to be right than your old position was. This holds true even if you went into it really wanting to change my mind as well -- it would just mean that you'd have had to change your mind about whether that was a good goal once you started seeing that I might be right.

The real problem is going in not wanting to be convinced. If you do that, and keep your attachment to the belief that you're right, then you're attaching a penalty to a win condition, which makes it hard to get there. So long as you go in willing and happy to be convinced, you can productively go in with the main goal being to change their mind, if you expect that to be more likely than them having something to say which could change yours. In cases where you don't already understand their position, this comes down to the same thing you say, where you work towards goals like "understanding their position" and "changing your own mind", but when you think you already get their side, putting that to the test and seeing if you can change their mind is a perfectly valid goal. You just come at it in a very different way when you're open to their viewpoints than when you're not.

Comment by jimmy on Hanson & Mowshowitz Debate: COVID-19 Variolation · 2020-04-14T18:34:55.690Z · score: 4 (2 votes) · LW · GW

It depends on what you mean by "unpopular". If you mean that someone is going to ignore the lives that would be saved and accuse you of being uncaring, then that's certainly true and you would need to be ready to deal with that.

On the other hand, if you mean that everyone would actually be against this idea then I think you're wrong. I've been floating the idea every time I end up in a discussion about this virus, and while my conversations can't be taken as completely representative, it's worth noting that not once have I had anyone say it's a bad idea. The most negative I've gotten was "that's interesting", and most of the other people I've talked to have said that they would do it right now and that so would a lot of their friends.

In a situation where the risks for healthy young people are low and eventual infection is likely anyway, "If people are going to get sick anyway, let them do it on their terms so that it can be as safe as possible" is not a hard argument to win, and the people who need to be convinced are likely far more sympathetic to such ideas than you think.

We just need to create the common knowledge that such ideas are thinkable and don't have to be a politically losing stance. A lot of "common knowledge" stances have turned out to be wrong and to flip overnight, and you'd be offering people a chance to be ahead of the curve and the first to jump on the winning team that saved the day. It'd have to be done deliberately and carefully, but if you do it right people will take it.

Comment by jimmy on Discussion about COVID-19 non-scientific origins considered harmful · 2020-04-06T18:37:03.456Z · score: 2 (1 votes) · LW · GW
I linked to a Bulletin of Atomic Scientists article about why this debunked idea still keeps coming up and the harms associated with it. Just printing articles and pointing people to them wasn't enough. I don't have more to say about your specific arguments because I think they're covered pretty well by the article I linked.

That article is just a list of a bunch of opinions people have and it is nothing more than a gossip piece. Literally all it does is repeat things like:

Mahmoud Ahmadinejad, the former Iranian president who seems unable to resist a good opportunity to propagate falsehoods (even Al-Qaida once asked him to stop making things up), also got in on the coronavirus conspiracy action. In an open letter to the UN secretary-general, he wrote that it was clear that the virus was “produced in laboratories … by the warfare stock houses of biologic war belonging to world hegemonic powers.”

and

Lentzos worries that the parade of prominent figures promoting the bioweapons conspiracy theory could weaken the global taboo against possessing bioweapons—making biological weapon research appear to be widespread.

It does nothing to even begin explaining why these ideas keep spreading; it just notes that they are spreading and who is spreading them. Likewise, exactly nothing in that article responds to anything I've said.

It's no wonder that linking to trash like this doesn't convince anyone. To even get started you need to be able to link to things like this. Then you need to have people who can understand why that is credible explain it to their social circle, who respect them and wouldn't understand it on their own. And that means you need an army of people who are capable of empathizing with the very real concerns that these "conspiracy theorists" have, instead of falling into the trap of using arrogance to hide from their own difficulties in being persuasive and credible.

Yes, it's hard. Let's get to work.

Comment by jimmy on Discussion about COVID-19 non-scientific origins considered harmful · 2020-04-05T18:27:14.766Z · score: 11 (6 votes) · LW · GW

"Considered harmful" is what Wikipedia refers to as "weasel words". By whom? Why do we care what they think? It's much better to make the case directly than to attempt to weasel in a (false) sense of consensus. Doing the latter damages your credibility, and you're going to need that.

If you're concerned about conspiracy theories "failing to be debunked", what you need are honest and credible experts. The public can't evaluate the claims themselves. Heck, I'm a pretty smart guy and I can't evaluate the evidence by looking at the genome itself, or even by evaluating the object level claims of people who have. But I and many many others are smart enough to notice a big coincidence when we see one, and smart enough to know that many many people think it's a good idea to censor, distort, or otherwise lie about things to paint their preferred viewpoint. Honesty and openness are critical if you want to persuade anyone of anything. If you say "The bio-weapon theory has been debunked as just a conspiracy theory", what am I supposed to take from that? That you are very open to this theory and would speak publicly about any evidence you found in its favor, if only it existed? Or that you want to silence "information hazards" with weasel phrases and like to use terms like "debunked" or maybe even "conspiracy theory" to discredit ideas without even diving into whether or not they might be true? When the latter is a strong possibility, we can't just take things like "X has been debunked" on faith.

I actually don't think it's a bio-weapon and I do believe it has been debunked. But the reason for this is that when I look to people who are able to evaluate the object level claims themselves, the ones who are capable of honestly considering and stating "yep, this sure looks like a bio-weapon" are actually saying "yeah, I considered that hypothesis myself because it's a totally reasonable thing to check, but it turns out that this one looks natural (and here's why, if you want to check my work)". That is the only way you can debunk these things, since everyone can't become virology experts overnight and you can't declare yourself into credibility by fiat.

You're right that now is not the time to be starting wars, and I think there is a very persuasive case to be made for that. Fighting and posturing are last and second to last resorts respectively, and not ones we want to hastily resort to ever, let alone in difficult times where there might be a flinch to do so. It is a bad idea to pick fights and start wars, especially when the ability to cooperate globally is most important, and most especially without thinking these things through very very carefully. However, this all holds true even if it were a bio-weapon, or escaped from a lab due to negligence, or spread worldwide due to attempted cover-ups/etc. Instead of removing your voice from the conversation that will happen anyway and has to happen anyway, use your voice to say what needs to be said. "Yes, it's very reasonable to suspect that it might be a bio-weapon, and that's why we checked. It doesn't look like it is". "Yes, it is very important for people and organizations to be held accountable for their actions. It is also very important to first make sure we know what those actions actually are and to give people the benefit of the doubt, both on what they did and on their willingness to take responsibility voluntarily first. Now is the time to cooperate with one another to fight this pandemic. Later is the time to sort through the mistakes we made and make sure we're all working honestly to avoid repeating them for personal gain or otherwise".

Comment by jimmy on Taking Initial Viral Load Seriously · 2020-04-01T23:59:37.871Z · score: 2 (1 votes) · LW · GW
Looking at a 50% low risk, 50% high risk scenario, we can only save 50% of what we could save if we started in a 50% high risk scenario.

I don’t think this is right.

It’s worth noting that the 14x difference between the risk for the first kid in the house and the second is a (noisy) lower bound on the degree to which the risk depends on dose. For example, this data is consistent with the toy model in which risk vs dose is a step function going from 0% to 100% at a certain point; it would just require that the kid bringing the disease into the house is 14x less likely to have crossed that threshold. With intentional inoculation we don’t know how much lower the risk can be driven, nor do we know how much of the exceptionally bad cases are simply due to exceptionally high initial doses of the virus. It’s entirely possible that even in your “50% low risk 50% high risk” scenario, most or all of the low risk people who die do so because of unusually high viral loads given their risk grouping, and that with careful titration of dosage we can do much better than shifting people from one crude grouping to another.
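
To make the toy model concrete, here's a minimal sketch with invented dose distributions. Nothing here is fit to real data; it just shows that a hard threshold plus a lower typical dose for the index case can reproduce a ~14x risk ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 1.0  # toy step-function model: bad outcome iff dose > THRESHOLD

# Hypothetical lognormal doses: index cases infected by brief contact
# outside the home, secondary cases by prolonged household exposure.
index_doses = rng.lognormal(mean=-1.8, sigma=1.0, size=1_000_000)
household_doses = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)

risk_index = (index_doses > THRESHOLD).mean()
risk_household = (household_doses > THRESHOLD).mean()
print(f"risk ratio: {risk_household / risk_index:.0f}x")  # ~14x with these toy numbers
```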

I’m also quite skeptical that we should find the absence of more evidence to be particularly damning. Where would it come from? Ethics boards aren’t going to be happy about intentionally infecting people with deadly diseases, and it’s hard to get more than a very crude guess at the initial dose of cases caught “in the wild”. Furthermore, if you’re going to intentionally inject people with non-potent virus in order to build antibodies, you’d normally want to go all the way and do an actual vaccine. How many people have been thinking about what to do in case there’s a pandemic where you can’t wait for a real vaccine, and how many of them have been studying variolation? I wouldn’t think many.

To me, taking volunteers to empty cruise ships sounds like an easy and potentially big win. There are plenty of young people who aren’t concerned and aren’t at high risk to begin with, and you can offer them a lower risk (because they’re likely to get it anyway), a free party, and a way to feel like they’re helping instead of hurting other people. In return we get data, a step towards herd immunity, and workers who can safely treat other patients or run nursing homes. Once we need to scale up we can start thinking about how to triage people who want to take this risk. For now, it seems like we just want to rush to get this idea accepted and tried somewhere.

Comment by jimmy on mind viruses about body viruses · 2020-03-28T18:54:09.407Z · score: 16 (9 votes) · LW · GW

When you're dealing with a threat that doubles in size every few days, you do not have the luxury of excess caution. Inverted pendulums have an exponentially growing error as well, and no matter what you do (or don't do) to react, if your control system doesn't do it faster than the instability grows, you lose. Period. If you try to move slowly in the act of balancing, then you will fall off the tight rope no matter how sure you later become of what the right action would have been. It is fundamentally necessary to be able to react and then correct for errors later (so yes, pre-frame this in your communication so that you don't over-commit to something you will need to later change).
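
Here's a minimal simulation of that point, with made-up numbers: an error that grows 20% per step, and a controller that corrects based on what it saw `delay` steps ago. The same correction gain that easily stabilizes a quick reaction is useless once the reaction lag is long compared to the doubling time:

```python
import numpy as np

def simulate(growth, gain, delay, steps=60):
    """Delayed proportional control: x[t+1] = growth * x[t] - gain * x[t - delay]."""
    x = np.ones(steps)  # start with a disturbance of 1.0
    for t in range(delay, steps - 1):
        x[t + 1] = growth * x[t] - gain * x[t - delay]
    return x

fast = simulate(growth=1.2, gain=0.5, delay=1)
slow = simulate(growth=1.2, gain=0.5, delay=10)
print(f"react in 1 step:   final |error| = {abs(fast[-1]):.2g}")  # decays to ~0
print(f"react in 10 steps: final |error| = {abs(slow[-1]):.2g}")  # blows up
```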

It's also worth noting that "literally everyone on earth" only starts trying to solve the problem once they know that it's a problem, and at the time that Pueyo's first essay came out, that was absolutely not the case. At that time, I was still scrambling to figure out how to best leverage my credibility and communication skills to convey the exact same point about "Why you must act now", because people around me had not yet come to realize how serious an issue this was going to be. Sure, they'd have heard it without me too. But they would have heard it later, with less time to act, and might not have taken it as seriously without my credibility behind it. If enough people like me took your advice, they might not have heard it at all, because it could be outcompeted by other less useful memes.

It's just that, as you manage to find alternative takes (perhaps by credentialed experts, perhaps not) that find flaws in the memes you've been spreading, you spread those too. I would say "correct for your mistakes" except that it's not even a "mistake" necessarily, just "a clarification of oversimplification" or "the next control input given the newest estimate of the state".

As we get deeper into this mess and people start mobilizing, "do something in this general direction!" becomes less important. At some point you have to wonder whether the pendulum has swung too far, or if we need to be acting in a different direction, or something else. When everyone in the world is thinking about it we have a very different problem, and instead of simply requiring an ability to take back-of-the-envelope models seriously when they are outside the "socially accepted reality", you actually need more detailed analyses.

Still, public opinion will need to get on board with whatever is necessary, and in the absence of your input the memes don't just stop and wait for science, and neither does the coronavirus. If you try to say "but I can pick nits! This isn't credentialed and perfect!" and try to replace useful first steps with inaction, then you blow your credibility and with it your ability to help shape things for the better. Let's not do that.

Yes, it is important not to initiate or signal boost bad information at the expense of good information. Yes, it is important to look for people who are (actually) experts. But it's also important to provide a path from the real experts to the layfolk, since that doesn't and cannot happen on its own. The public in general not only can't evaluate the object level arguments about epidemiology and must defer to authority, they can't even evaluate object level arguments about who is the real authority -- that's why you get antivaxxers listening to crackpots. It's appeals to authority (mixed in with justifications) all the way up. If you can't create the best ideas but you can distinguish between the best ideas and those which merely look good to the untrained eye, it is your job to pass the best ideas down to those who are less able to make that distinction. If you can't make that distinction yourself but you can at least distinguish between people who can and posers, then it is your job as the next link in the chain to pass this information from those more able to discern to those less able to discern than you. This goes all the way down to the masses watching the news, and you had better hope you can get the news to get their shit together. I still know people who are in denial because mainstream news told them to be and then failed to appropriately correct for their earlier mistakes. Let's work to fix that.

Exponential memetic spread does not pathology make. Yes, it's possible for overactive or mistargeted immune systems to fail to prevent things or to do more harm than good. Yes, Dunning Kruger applies and humility is as necessary as ever. However, so is the courage to be bold and to take action when it is called for instead of hiding in false humility. This "intellectual curve" is a part of our collective immune response to an actual virus which is killing people and threatening to kill exponentially more. Do not flatten the wrong curve. Find a role that allows you to guide it in the right direction, and then guide.

Comment by jimmy on Authorities and Amateurs · 2020-03-26T21:34:25.905Z · score: 17 (9 votes) · LW · GW

Here's my answer:

There is an important distinction between "object level arguments" and "appeals to authority". Contrary to how it's normally spoken about, appeal to authority is not really fallacious, and it is at times absolutely necessary. If I am unable to parse the object level arguments myself, I have to defer to experts. The only issue is whether I have the self awareness and integrity to say "I'm not capable of evaluating this myself, so unfortunately I have to defer to the people I trust to get these things right. Maybe you're right and I'm just not smart enough to see it". However, this must ground out somewhere. If you listen to people who only appeal to authority (whether their own or others') and there are never any attempts to ground things in object level arguments, then there is nothing this trust is founded on, and so your beliefs can float away with no connection to reality.

What I do is take into consideration all object level arguments which I am not personally qualified to evaluate, and then weigh my trust in the various "authorities" based on how capable they seem in actually getting into the object level and making at least as much sense as the people they're arguing against. As it applies here, the amateurs linked to actually got into the object level and made very plausible sounding arguments. I didn't see any major holes in the main premise, even if I could pick less important nits. I never saw any credentialed authority engaging in the object level and making even plausibly correct counterarguments which negated the main point of these amateur models. There were a lot of "don't worry, nothing to see here", but there weren't any that were backed up by concrete models that didn't have visible holes.

The people I'm going to listen to (regardless of how capable I personally am of evaluating the object level arguments) are those who 1) have been willing to stick their neck out and make actual arguments, and 2) haven't had their neck chopped off by people pointing out identifiable mistakes in ways that are either personally verifiable or agreed upon by a more compelling network of "authority".

I think this heuristic worked pretty well in this case.

Comment by jimmy on Advice on reducing other risks during Coronavirus? · 2020-03-25T17:28:25.422Z · score: 2 (1 votes) · LW · GW

I'm not so sure the recommendation for walking over driving holds up. According to the CDC "Per trip, pedestrians are 1.5 times more likely than passenger vehicle occupants to be killed in a car crash."

Comment by jimmy on Should I buy a gun for home defense in response to COVID-19? · 2020-03-23T06:42:53.441Z · score: 8 (6 votes) · LW · GW


Strong disagree. Anyone who knows how to operate their weapon and is willing to use it is a formidable threat to all but the most trained and determined invaders. The level of accuracy needed to hit a man-sized target inside a house with a long gun is really low. Low enough that if you miss, the problem isn’t that you aren’t yet skilled in the art of aiming; it’s that you didn’t make sure to aim at all before you pulled the trigger.

The bigger barrier is psychological. If you can’t get yourself to take deliberate aim on another human and pull the trigger knowing what will happen, then a firearm might not be useful. If you can do that though, the mechanics won't be a problem except in the difficult cases.

Comment by jimmy on March Coronavirus Open Thread · 2020-03-20T19:23:15.627Z · score: 3 (2 votes) · LW · GW

Right, I got that it was them doing the math correction, not you. Still, they did the math and gave an age breakdown of the passengers, and a crude sanity check gives a number within about 30% of what they report.
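
For anyone who wants to run the sanity check themselves, the projection is just direct age-standardization. Here's a minimal sketch with placeholder numbers; the real age bins, fatality rates, and population shares would come from the ship data and census figures, and these are not them:

```python
# Reweight the ship's age-specific fatality rates by the US population's
# age shares. All numbers below are placeholders for illustration only.
ship_cfr = {"0-39": 0.0001, "40-59": 0.001, "60-79": 0.01, "80+": 0.05}
us_share = {"0-39": 0.52, "40-59": 0.25, "60-79": 0.19, "80+": 0.04}

projected = sum(ship_cfr[b] * us_share[b] for b in ship_cfr)
print(f"projected population-wide fatality rate: {projected:.2%}")
```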

Comment by jimmy on March Coronavirus Open Thread · 2020-03-20T19:07:46.008Z · score: 2 (1 votes) · LW · GW

I'm not sure what makes you think it doesn't have sharp edges. In order to not have sharp edges it would need to be a bar, not flexible tape.

Comment by jimmy on March Coronavirus Open Thread · 2020-03-19T01:41:00.207Z · score: 3 (2 votes) · LW · GW

Yeah, the 1/8th multiplier sounded hard to believe. A 1/2 multiplier based on demographic correction sounds a lot more plausible, and it's nice to have confirmation that someone else actually did the math. Thanks for finding/sharing it!

Comment by jimmy on March Coronavirus Open Thread · 2020-03-18T20:26:51.831Z · score: 6 (3 votes) · LW · GW
The one situation where an entire, closed population was tested was the Diamond Princess cruise ship and its quarantine passengers. The case fatality rate there was 1.0%, but this was a largely elderly population, in which the death rate from Covid-19 is much higher.
Projecting the Diamond Princess mortality rate onto the age structure of the U.S. population, the death rate among people infected with Covid-19 would be 0.125%.

John Ioannidis is making an interesting (and reassuring, if true) claim here. Has anyone looked at the demographics and done the comparison themselves?

Comment by jimmy on March Coronavirus Open Thread · 2020-03-18T20:26:05.947Z · score: 2 (1 votes) · LW · GW

Yes, that looks right. The edges of any thin tape are going to be sharp, it's just that copper is strong enough to hold that geometry instead of folding easily before it cuts you.

Comment by jimmy on Could you save lives in your community by buying oxygen concentrators from Alibaba? · 2020-03-17T04:04:07.233Z · score: 5 (3 votes) · LW · GW

Unless you're underwater or in a hyperbaric chamber, oxygen toxicity isn't really a big concern, and a cheap oxygen concentrator like the one described above can't get you close to where problems start. Even if you had a better oxygen concentrator, it doesn't take any fancy training to add oxygen until 92% saturation or whatever.

Comment by jimmy on How effective are tulpas? · 2020-03-10T18:18:28.172Z · score: 9 (5 votes) · LW · GW
Bowing down to authority every time someone tells me not to do something isn't going to accomplish that.

Not if applied across the board like that, no. At the same time, a child who ignores his parents' vague warnings about playing in the street is likely to become much weaker or nonexistent for it, not stronger. You have to be able to dismiss people as posers when they lack the wisdom to justify their advice and be able to act on opaque advice from people who see things you don't. Both exist, and neither blind submission nor blind rebellion make for successful strategies.

An important and often missed aspect of this is that not all good models are easily transferable, and therefore not all good advice will be something you can easily understand for yourself. Sometimes, especially when things are complicated (as the psychology of human minds can be), the only thing that can be effectively communicated within the limitations is an opaque "this is bad, stay away" -- and in those cases you have no choice but to evaluate the credibility of the person making these claims and decide whether or not this specific "authority" making this specific claim is worth taking seriously even before you can understand the "why" behind it. Whether you want to heed or ignore the warnings here is up to you, but keep in mind that there is a right and wrong answer, and that the cost of being wrong in one direction isn't the same as the other. A good heuristic which I like to go by, and which you might want to consider, is to refrain from discounting advice until you can pass the intellectual Turing test of the person who is offering it. That way, you can know that when you choose to experiment with things deemed risky, you're at least doing it informed of the potential risks.

FWIW, I think the best argument against spending effort on tulpas isn't the risk but the complete lack of reward relative to doing the same things without spending time and effort on "wrapping paper" which can do nothing but impede. You're limited by hardware-hours anyway, so if your "tulpa" is going to become an expert in chess, it will be with eye/hand/brain hours that you could have used becoming an expert in chess yourself. If your tulpa is going to have important wisdom to offer by virtue of holding different perspectives, those perspectives will be generated with brain time you could have used generating those perspectives for yourself. There's no rule saying people can't specialize in more than one thing or be more than uni-dimensional; it's just a question of where you want to spend your limited hours.

Comment by jimmy on Credibility of the CDC on SARS-CoV-2 · 2020-03-08T18:34:23.596Z · score: 17 (6 votes) · LW · GW

We do, and that's the point. It's not "hey, we're not as bad as them so don't complain to us!". It's that there is already a lot of distrust out there, and giving people something to latch onto with "see, I knew the CDC wasn't being honest with me!" can keep them from spiraling out of control with their distrust, since at least they know where it ends.

Mild well sourced criticism is way more encouraging of trust than no criticism under obvious threat of censorship because the alternative isn't "they must be perfect" it's "if they have to hide it, the problems are probably worse than 'mild'".

Comment by jimmy on What is the appropriate way to communicate that we are experiencing a pandemic? · 2020-03-05T18:28:05.729Z · score: 5 (3 votes) · LW · GW
But then I thought the psychological consequences for a not inconsiderable amount of people would be disastrous, as it seems to my girlfriend. [...] I don't want to be a information hazard source.


It's important to note that unpleasant emotions are functional when faced with a new threat that one hasn't prepared for; the whole point of emotions like fear is to reorient ourselves towards the reality we find ourselves in and come up with a more informed (and therefore hopefully more effective) response. It is always unpleasant to realize that things aren't quite as nice as we've been hoping and planning on, but the actual information hazard would be things that "protect" people from the emotion that could have protected their life and well being as well as the life and well being of their loved ones. What you're talking about doing is the opposite of an information hazard.

That said, there are a few things that can be important for doing it right.

One is that you want to draw very clear boundaries between the position you advocate and alarmism. You're pushing for integration of scary information as well, not for blindness to good news and the potential for optimism. You don't want to push people from "white thinking" to "black thinking", you want to encourage people to take in all information and pick the most appropriate shade of gray given the current information available.

Not only is some shade of gray more accurate than pure black, making this distinction clear will help you persuade people. When people are primed and ready to "not give into alarmist/doomer thinking", you don't want them to pattern match you as this opposite form of irrational thought. If you have had/seen any conversations about this where people are saying "it's not the end of the world" in response to statements like "it's not 'just the flu'", this is what is going on. You're seeing them argue against what they don't want to believe rather than what is being argued. I would make sure to include and emphasize everything optimistic you can without sacrificing accuracy, and make sure you're not trying to "push one side" as much as offer more information as someone who can see both the reassuring and the scary.


Secondly, recognize the fact that you are deliberately exposing people to scary ideas which many many people are not emotionally prepared to deal with. The whole reason people dismiss reasonable arguments as "alarmist" is because their emotional response would be somewhat like your girlfriend's, and they don't want to have to face that. To every extent you can, ease this transition. Be comforting and hospitable, even if just in body language or vocal tone in a YouTube video. You want to emphasize (explicitly or implicitly) that feeling fear is not a sign of cowardice but of courage -- after all, they've proven themselves capable of avoiding it if they wanted to. You want to give people an idea of what they can do, and what their cues should be for various decisions. This can help lower the amount of uncertainty that they will have to deal with and make the transition more comfortable, as well as cutting down on the unnecessarily duplicated cognitive effort of "figuring out what the hell to do about it". People are always free to doubt and question and to disagree, of course, but it can be nice having a "default" value to jump to so that you can update on risks without having to be emotionally and mentally ready to compute all your own ideas on first principles.

This is very important work, as there are relatively few people who are willing and able to engage with the scarier possibilities without losing touch of hope and succumbing to alarmist paranoia and losing all credibility. I definitely encourage you to make the video.

Comment by jimmy on How does electricity work literally? · 2020-02-24T22:44:18.845Z · score: 6 (4 votes) · LW · GW
I don't know whether AC or DC would be a better choice if we were starting from scratch now, but both systems were proposed and tried very early in the history of electrical power generation and I'm pretty sure all the obvious arguments on both sides were aired right from the start.

DC wasn't really a viable option at the start because of the transformer issue you mentioned. The local power lines carry ~100x higher voltage than what you get in your house, and the long distance power lines carry up to another 100x on top of that. Without that voltage step-up, you'd need 100-10,000x as much wire.
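
A back-of-the-envelope sketch of where that multiplier comes from, with toy numbers: resistive loss is I²R while current scales as 1/V, so the copper needed for a given loss fraction scales as 1/V².

```python
RHO_CU = 1.7e-8   # resistivity of copper, ohm*m
P = 1e6           # power delivered, W
LENGTH = 10e3     # line length, m (ignoring the return conductor)
LOSS_FRAC = 0.05  # tolerable resistive loss, 5% of delivered power

def copper_area(volts):
    """Conductor cross-section needed to keep I^2 * R under LOSS_FRAC * P."""
    current = P / volts
    max_resistance = LOSS_FRAC * P / current**2
    return RHO_CU * LENGTH / max_resistance

for v in (240, 24_000):
    print(f"{v:>6} V: {copper_area(v) * 1e4:,.2f} cm^2 of copper")
# 100x the voltage -> 10,000x less copper for the same loss fraction.
```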

Modern semiconductors change the game considerably, though. In a lot of areas, the big iron transformers are getting phased out and replaced with switching power supplies, which suggests that DC could be economically efficient now, if not for the requirement for a 50 or 60 Hz sine wave and all the existing equipment.

A DC based system would have the advantages of not requiring rectification for many end uses, some minor improvement in corona losses in transmission, and allowing for variable speed generators. It would come at the cost of controller-less induction motors and clocks that use the AC signal to keep time. I'm not sure about the relative cost of doing the voltage step-up/step-down, because both methods are still in use. I'm not sure which would be the better choice now, but it is an interesting question.

Comment by jimmy on Who lacks the qualia of consciousness? · 2019-10-06T18:03:12.033Z · score: 8 (5 votes) · LW · GW
Over on Facebook (I don't know if it's possible to link to a Facebook post, but h/t Alexander Kruel) and Twitter, the subject of missing qualia has come up. Some people are color-blind. This deficiency can be objectively demonstrated by tasks such as the Ishihara patterns

Lacking the ability to distinguish colors well means your brain does not know which qualia to use, not that it doesn't have all of the qualia available. I'm red/green color blind (according to the tests, and difficulty determining the color of small things), but I have very distinct red and green qualia. Most of the time my experience feels like "I'm unsure if this line is red or green", which is different than "this line is red-green, as there is not actually a difference between the thing people call 'red' and the thing people call 'green'".

However, I have also had the experience of having red text show up as bright green and then switch on me. I was reading part of the Sequences back in the day, and I could tell from the context that the word "GREEN" was supposed to be red (Stroop test), but my brain took that as a cue that the text was supposed to be green. When I brought my face closer to the screen to check, the text flipped to red. When I backed up it returned to green. In between, individual pieces of each letter would start to flip color.

Comment by jimmy on The Power to Demolish Bad Arguments · 2019-09-05T18:34:20.975Z · score: 2 (1 votes) · LW · GW

Okay, I thought that might be the case but I wasn't sure because the way it was worded made it sound like the first interaction was real. ("You can see I was showing off my mastery of basic economics." doesn't have any "[in this hypothetical]" clarification and "This seemed like a good move to me at the time" also seems like something that could happen in real life but an unusual choice for a hypothetical).

To clarify though, it's not quite "doubt that it's sufficiently realistic". Where your simulated conversation differs from my experience is easily explained by differing subcommunication and preexisting relationships, so it's not "it doesn't work this way" but "it doesn't *have to* work this way". The other part of it is that even if the transcript was exactly something that happened, I don't see any satisfying resolution. If it ended in "Huh, I guess I didn't actually have any coherent point after all", it would be much stronger evidence that they didn't actually have a coherent point -- even if the conversation were entirely fictional but plausible.

Comment by jimmy on The Power to Demolish Bad Arguments · 2019-09-04T16:34:43.687Z · score: 0 (5 votes) · LW · GW


1) There is a risk in looking at concrete examples before understanding the relevant abstractions. Your Uber example relies on the fact that you can both look at his concrete example and know you're seeing the same thing. This condition does not always hold, as often the wrong details jump out as salient.

To give a toy example, if I were to use the examples "King cobra, Black mamba" to contrast with "Boa constrictor, Anaconda" you'd probably see "Ah, I get it! Venomous snakes vs non-venomous snakes", but that's not the distinction I'm thinking of so now I have to be more careful with my selection of examples. I could say "King cobra, Reticulated python" vs "Rattlesnake, Anaconda", but now you're just going to say "I don't get it" (or worse yet, you might notice "Ah, Asia vs the Americas!"). At some point you just have to stop the guessing game, say "live young vs laying eggs", and only get back to the concrete examples once they know where to be looking and why the other details aren't relevant.

Anything you have to teach which is sufficiently different from the person's pre-existing world view is necessarily going to require the abstractions first. Even when you have concrete real life experiences that this person has gone through themselves, they will simply fail to recognize what is happening to them. Your conclusion "I showed three specific guesses of what Michael’s advice could mean for Drew, but we have no idea what it does mean, if anything." is kinda the point. When you're learning new ways of looking at things, you're not going to immediately be able to cash them out into specific predictions. Noticing this is an important step that must come before evaluating predictions for accuracy if you're going to evaluate reliably. You do have to be able to get specific eventually, or else the new abstractions won't have any way to provide value, but "more specificity" isn't always the best next step.


2) It seems like the main function you have for "can you give me a concrete example" is to force coherence by highlighting the gaps. Asking for concrete examples is one way of doing this, but it is not required. All you really need for that is a desire to understand how their worldview works, and you can do this in the abstract as well. You can ask "Can you give me a concrete example?", but you could also ask "What do you think of the argument that Uber workers could simply work for McDonald's instead if Uber isn't treating them right?". Their reasoning is in the abstract, and it will have holes in the abstract too.

You could even ask "What do you mean by 'exploits its workers'?", so long as it's clear that your intent is to really grok how their worldview works, and not just trying to pick it apart in order to make them look dumb. In fact, your hypothetical example was a bit jarring to me, because "what do you mean by [..]" is exactly the kind of thing I'd ask and "Come on, you know!" isn't a response I ever get.

3) Am I understanding your post correctly that you're giving a real-world example of you not using the skill you're aiming to teach, and then a purely fictional example of how you imagine the conversation would have gone if you had?

I'd be very hesitant to accept that you've drawn the right conclusion about what is actually going on in people's heads if you cannot show it with actual conversations and at the very least provoke cognitive dissonance, if not agreement and change. Otherwise, you have a "fictitious evidence" problem, and you're in essence trusting your analysis rather than actually testing your analysis.

You say "Once you’ve mastered the power of specificity, you’ll see this kind of thing everywhere: a statement that at first sounds full of substance, but then turns out to actually be empty.", but I don't see any indication anywhere that you've actually ruled out the hypothesis "they actually have something to say, but I've failed to find it".

Comment by jimmy on How to Ignore Your Emotions (while also thinking you're awesome at emotions) · 2019-08-03T17:51:31.661Z · score: 7 (3 votes) · LW · GW

I wouldn't interpret Kaj as saying "Go ahead and remember false things for instrumental gain. What could possibly go wrong with that!?". Truth is obviously important, and allowing oneself to pretend "this looks instrumentally useful to believe, so I can ignore the fact that it's clearly false" is definitely a recipe for disaster.

What Kaj is saying, I think, is that the possibility of being wrong is not justification for closing one's eyes and not looking. If we attempt to have any beliefs at all, we're going to be wrong now and then, and the best way to deal with this is to keep this in mind, stay calibrated, and generally look at more rather than less.

It's not that "recovering memories" is especially error prone, it's that everything is error prone, and people often fail to appreciate how unreliable memory can be because they don't actually get how it works. If you try to mislead someone and convince them that a certain thing happened, they might remember "oh, but I could have been misled", whereas if you do the exact same thing but instead mislead them into thinking "you remember this happening", they now get this false stamp of certainty saying "but I remember it!".


Comment by jimmy on Raw Post: Talking With My Brother · 2019-07-21T08:29:37.514Z · score: 6 (3 votes) · LW · GW

> I think the same is true of NVC. If only one person's doing it, it's not going to work very well. It takes two. Some of my best memories are of conversations that took place between myself and somebody else schooled in NVC or something similar. Some of my worst are of applying NVC or similar techniques in a situation where the other person is used to getting their way through domineering or abusive behavior.

Hm. My experience is the opposite. I’ve found the most use in NVC-type communication in cases where the other person is getting quite violent, as it can be remarkably disarming to see through the threats and care about the hurt that is driving the person to make them. It can also make it quite difficult for people to continue justifying their domineering and/or abusive behavior to themselves and others, if that’s what they’re doing.

My model of NVC is that it’s useful in the way that neutron moderators can be useful. There’s a certain “gain” by which the “violence level” of communications is amplified after being expressed and received, and then having the response expressed and received. If you get multiplied by a number greater than one after going around the loop, things will melt down or explode. If one person is both interpreting and responding uncharitably, the other party is going to have to shoulder more of the burden to “moderate neutrons” and be extra clean in their communications so as to not escalate or allow for escalation. Additional effort to communicate nonviolently is going to make the most difference in the cases where you can actually cross unity. If you’re well below one to start with, then there’s no need. If you can’t get below one even with effort, it’s a bit futile (and therefore frustrating/discouraging).
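To make the loop-gain idea concrete, here is a minimal sketch of the model being described; the scalar "heat" variable, the function name, and the sample gain values are illustrative assumptions of mine, not anything from the original comment:

```python
# Toy model of the "gain around the loop" framing: each conversational
# round trip multiplies the current "violence level" (heat) by a gain
# factor. Gain below 1 damps toward calm; gain above 1 escalates.

def simulate_exchange(initial_heat: float, round_trip_gain: float, rounds: int = 10) -> list:
    """Return the heat level after each round trip of the conversation."""
    levels = []
    heat = initial_heat
    for _ in range(rounds):
        heat *= round_trip_gain  # one full express-receive-respond cycle
        levels.append(round(heat, 3))
    return levels

# Gain 0.8: even a heated start cools off (the "subcritical" case).
print(simulate_exchange(initial_heat=5.0, round_trip_gain=0.8))

# Gain 1.3: a mild start escalates round after round (the "melt down" case).
print(simulate_exchange(initial_heat=1.0, round_trip_gain=1.3))
```

The toy numbers only matter for the qualitative regimes: any round-trip gain below one decays toward calm regardless of the starting heat, while any gain above one grows geometrically, which is why effort spent nudging the gain across unity matters more than extra effort in conversations that are already safely damped.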

You seem pretty aware of the failure mode of trying to use neutrality/rationality as an emotional defense mechanism, and how reaching for tools like NVC out of these motivations can lead to stifling of the important emotions that need to be expressed (which, interestingly, necessarily leads to misapplication/cargo culting of the tools). Do you think that could be behind your difficulties in getting good results with NVC in the “one-sided” cases? Also, to me, your conversation with your brother looks like a perfect example of using NVC with someone who presumably isn’t trained in NVC, transforming a conversation that was coming off as domineering/abusive into one where you two are clearly on the same side and working together. Do you conceptualize it differently?