post by [deleted]

Comments sorted by top scores.

comment by Zack_M_Davis · 2020-06-06T18:10:51.331Z · LW(p) · GW(p)

I'll grant that there's a sense in which instrumental and epistemic rationality could be said to not coincide for humans, but I think they conflict much less often than you seem to be implying, and I think overemphasizing the epistemic/instrumental distinction was a pedagogical mistake in the earlier days of the site.

Forget about humans and think about how to build an idealized agent out of mechanical parts. How do you expect your AI to choose actions that achieve its goals, except by modeling the world, and using the model to compute which actions will have what effects?
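
A minimal sketch of that loop in code (the function names and the toy two-action example below are purely illustrative assumptions, not anything load-bearing):

```python
# Toy illustration: an agent that picks actions by consulting a world model.
# The epistemic part (world_model) predicts what each action does; the
# instrumental part (utility) scores the predicted outcome. All names here
# are hypothetical, for illustration only.

def choose_action(actions, world_model, utility):
    """Return the action whose predicted outcome the agent values most."""
    best_action, best_value = None, float("-inf")
    for action in actions:
        predicted_state = world_model(action)  # epistemic step: model the effects
        value = utility(predicted_state)       # instrumental step: score them
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# The agent's choices are only as good as its model of the world.
print(choose_action(
    actions=["build the dangerous thing", "don't build it"],
    world_model=lambda a: {"catastrophe": a == "build the dangerous thing"},
    utility=lambda state: 0 if state["catastrophe"] else 1,
))  # -> "don't build it"
```

A worse world model means worse action choices, which is the sense in which the two kinds of rationality coincide.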

From this perspective, the purported counterexamples to the coincidence of instrumental and epistemic rationality seem like pathological edge cases that depend on weird defects in human psychology. Learning how to build an unaligned superintelligence or an atomic bomb isn't dangerous if you just ... choose not to build the dangerous thing, even if you know how. Maybe there are some cases where believing false things helps achieve your goals (particularly in domains where we were designed by evolution to have false beliefs for the function of deceiving others [LW · GW]), but trusting false information doesn't increase your chances of using information to make decisions that achieve your goals.

Replies from: lsusr, mr-hire, Bob Jacobs
comment by lsusr · 2020-06-06T18:25:22.780Z · LW(p) · GW(p)

In my personal life, I've observed that self-deception is related to one's ability to deceive others. Narcissism is a less contrived conflict between instrumental and epistemic rationality.

The narcissists I know who genuinely self-deceive (as opposed to mere doublethink) tend to be unhappy, unstable, and unproductive. But… they also have a superficial charisma. Evolutionarily speaking, I think this is a Nash equilibrium.

I think self-deception is instrumental in acting unethically for one's own self-interest. In this way, believing false things can help achieve your evolution's goals.

Forget about humans and think about how to build an idealized agent out of mechanical parts. How do you expect your AI to choose actions that achieve its goals, except by modeling the world, and using the model to compute which actions will have what effects?

The AI depends on epistemic rationality to achieve its goals. Instrumental rationality at the expense of epistemic rationality may help the AI achieve yours.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-06T18:32:52.569Z · LW(p) · GW(p)

Comparing Pragmatism to narcissism? What about being willing to reduce your own knowledge (aka your own advantage) for other people's sake (e.g. forgetting knowledge that could lead to genocide)? I would argue this is altruistic rather than narcissistic (and realism would be more narcissistic).

(EDIT: a spelling error, more impersonal and better flow)

Replies from: lsusr
comment by lsusr · 2020-06-06T19:25:40.573Z · LW(p) · GW(p)

It sounds like you are concerned about hypothetical situations that test the limits of philosophical ideas whereas Zack_M_Davis and I are concerned about real-world situations that happen all the time.

Fair enough. Let us dive into the fantastical. Suppose we lived in a world like the one you describe.

Slightly reducing one's own knowledge to prevent massive harm to others is the moral imperative. I don't think anyone here would disagree. But I don't think that's the fundamental problem either. The interesting question is whether you're willing to deceive yourself to achieve moderate instrumental ends.

Suppose there was an invisible monster that ate anyone who knew it existed. If I accidentally discovered this monster then I would want to forget that knowledge in order to protect my life.

But I would not want to replace this knowledge with a false belief. Such a false belief could get me into trouble in other ways. I would also want to preserve the knowledge in some form.

What follows is a passage from Luna Lovegood and the Chamber of Secrets.

Luna daydreamed a lot. She often found herself in rooms with little memory of how she got there. It was rare for her to find herself in a room with literally no memory of how she got there.

"I've just been obliviated, haven't I?" Luna said.

"You rushed in here and pleaded for me to erase your memory," Professor Lapsusa said.

"And?"

"It is a crime for a professor to modify the memory of a student. And for good reason. No. I have never magically tampered with your mind and I will never do so."

Luna felt like she had just run up several flights of stairs. She was breathing quickly. Sweat soaked from her fingertips into the diary she was holding.

"Have I been possessed?" Luna asked.

"No," Lapsusa said.

Lapsusa waited for Luna to work it out.

"This book I'm holding. Is it magicial?" Luna asked.

Lapsusa smiled.

"It is a tool for self-obliviation then," Luna said.

"Diaries store memories," Lapsusa said.

"Where did it come from?" Luna asked.

Lapsusa winked.

"Thank you," Luna said.

"You are welcome," Lapsusa said.

"Have you ever read it?" Luna asked.

Lapsusa looked out the window.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-06T19:44:20.198Z · LW(p) · GW(p)

That's an odd passage and I'm not sure what you're trying to say, but I'll check out Luna Lovegood and the Chamber of Secrets. This knowledge dilemma is not as hypothetical as you might think, though. The placebo effect is a very real thing we encounter every day, and I would generally advise people to stay optimistic during medical operations because it increases their chances of success (I would argue skewing your worldview temporarily is worth it). When governments decide whether they should fund research into e.g. nuclear weapons, I would generally advise against it (even though it gives us a better map of the territory) because it's dangerous. I would much rather spend that money on pragmatic (but unintellectual) projects like housing the homeless.

comment by Matt Goldenberg (mr-hire) · 2020-06-10T20:13:38.294Z · LW(p) · GW(p)

I would argue the edge cases for humans are fairly common.

comment by B Jacobs (Bob Jacobs) · 2020-06-06T18:21:32.442Z · LW(p) · GW(p)

I made no mention of the frequency of such an occurrence. I agree [LW(p) · GW(p)] that it's a rare edge case, but that's what makes it interesting. Also, we are humans, so of course I'm interested in what humans (aka this site's demographic) will do, and not some hypothetical other being.

comment by jimmy · 2020-06-06T20:17:05.089Z · LW(p) · GW(p)
These two options do not always coincide. Sometimes you have to choose.

I'll go even further than Zack and flat out reject the idea that this even applies to humans.

The most famous examples are: Learning knowledge that is dangerous for humanity (e.g how to build an unaligned Superintelligence in your garage), knowledge that is dangerous to you (e.g Infohazards)

This kind of problem can only happen with an incoherent system ("building and running a superintelligence in one's garage is a bad thing to do" + "I should build and run a superintelligence in my garage!") where you posit that the subsystem in control is not the subsystem that is correct. If you don't posit incoherence of "a system", then this whole thing makes no sense. If garage AIs are bad, don't build them and try to stop others from building them. If garage AIs are good, then build them. Both sides find instrumental and epistemic rationality to be aligned. It's just that my idea of truth doesn't always line up with your idea of the best action because you might have a different idea of what the truth is.

It can be more confusing when it happens within one person, but it's the same thing.

If learning that your girlfriend is cheating on you would cause you to think "life isn't worth living" and attempt suicide even though life is still worth living, then the problem isn't that true beliefs ("she cheated on me") are leading to bad outcomes, it's that false beliefs ("life isn't worth living") are leading to bad outcomes, and that your truth finding is so out of whack that you can already predict that true beliefs will lead to false beliefs.

In these cases you have a few options. One is to notice this and say "Huh, if life would still be worth living, why would I feel like it isn't?" and explore that until your thoughts and feelings merge into agreement somewhere. In other words, fix your shit so that true beliefs no longer predictably lead to false beliefs. Another is to put off the hard work of having congruent (and hopefully true) beliefs and feelings, and say "my feelings about life being worth living are wrong, so I will not act on them". Another, if you feel like you can't trust your logical self to retain control over your emotional impulses, is to say "I realize that my belief that my girlfriend isn't cheating on me might not be correct, but my resulting feelings about life would be incorrect in a worse way, and since I am not yet capable of good epistemics, I'm at least going to be strategic about which falsehoods I believe so that my bad epistemics harm me the least".

The worst thing you can do is go full "Epistemics don't matter when my life is on the line" and flat out believe that you're not being cheated on. Because if you do that, then there's nothing protecting you from stumbling upon evidence and being forced between a choice of "unmanaged false beliefs about life's worth" or "detaching from reality yet further".

or trusting false information to increase your chances of achieving your goals (e.g Being unrealistically optimistic about your odds of beating cancer because optimistic people have higher chances of survival).

True beliefs aren't the culprit here either. If you have better odds when you're optimistic, then be optimistic. "The cup isn't completely empty! It's 3% full, and even that may be an underestimate!" is super optimistic, even when "I'm almost certainly going to die" is also true.

This is very similar to the mistaken sports idea that "you have to believe you will win". No you don't. You just have to put the ball through the hoop more than the other guy does, or whatever other criteria your sport has. Yes, you're likely to not even try if you're lying to yourself and saying "it's not even possible to win" because "I shouldn't even try" follows naturally from that. However, if you keep your mind focused on "I can still win this, even if it's unlikely" or even just "Put the ball in the hoop. Put the ball in the hoop", then that's all you need.

In physics, if you think you've found a way to get free energy, that's a good sign that your understanding of the physics is flawed, and the right response is to think "okay, what is it that I don't understand about gravity/fluid dynamics/etc. that is leading me to this false conclusion?". Similarly, the idea that epistemic rationality and instrumental rationality are in conflict is a major red flag about the quality of your epistemic rationality, and the solution on both fronts is to figure out what you're doing wrong that leads you to perceive this obvious falsehood.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-06T21:07:25.313Z · LW(p) · GW(p)
it's that false beliefs ("life isn't worth living") are leading to bad outcomes

What something is worth is not an objective belief but a subjective value.

This is very similar to the mistaken sports idea that "you have to believe you will win". No you don't. You just have to put the ball through the hoop more than the other guy does

This is not how human psychology works. Optimism does lead to better results in sports.

Replies from: jimmy
comment by jimmy · 2020-06-07T21:22:34.580Z · LW(p) · GW(p)
What something is worth is not an objective belief but a subjective value.

Would you say "this hot dog is worth eating" is similarly "a subjective value" and not "an objective belief"? Because if it turns out that the hot dog had been sitting out for too long and you end up puking your guts out, I think it's pretty unambiguous to say that "worth eating" was clearly false.

The fact that the precise meaning may not be clear does not make the statement immune from "being wrong". A really good start on this problem is "if you were able to see and emotionally integrate all of the consequences, would you regret this decision or be happy that you made it?".

This is not how human psychology works. Optimism does lead to better results in sports.

You have to be able to distinguish between "optimism" (which is good) and "irrational confidence" (which is bad). What leads to good results in sports is an ability to focus on putting the ball where it needs to go, and pessimism (but not accurate beliefs) impedes that.

If you want a good demonstration of that, watch Conor McGregor's rise to stardom. He gained a lot of interest for his "trash talk" which was remarkably accurate. Instead of saying "I'M GONNA KNOCK HIM OUT FIRST ROUND!" every time, he actually showed enough humility to say "My opponent is a tough guy, and it probably won't be a first round knockout. I'll knock him out in the second". It turned out in that case that he undersold himself, but that did not prevent him from getting the first round knockout. When you watch his warm up right before the fights, what his body language screams is that he has no fear, and that's what's important because fear impedes fluid performance. When he finally lost, his composure in defeat showed that his lack of fear came not from successful delusion but from acceptance of the possibility of losing. This is peak performance, and is what we should all be aspiring to.

In general, "not how human psychology works" is a red flag for making excuses for those with poorly organized minds. "You have to expect to win!" is a refrain for a reason; the people who say this falsehood probably would engage in pessimism if they thought they were likely to lose. However, that does not mean that one cannot aspire to do better. Other people don't fall prey to this failure mode, and those people can put on impressive performances that shock even themselves.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-07T22:10:51.420Z · LW(p) · GW(p)
Would you say "this hot dog is worth eating" is similarly "a subjective value" and not "an objective belief"?

This is entering the domain of axiology. Nothing wrong with debating axiology per se, but I would rather not get too far off topic, so I'll drop this argument for simplicity's sake.

The placebo effect, on the other hand, is a very real phenomenon in human beings: we think (e.g.) a pill is a pharmaceutical when it isn't (epistemically irrational), but our irrational belief still helps us achieve our goal of (e.g.) not dying (instrumentally rational).

Replies from: jimmy
comment by jimmy · 2020-06-08T05:13:44.826Z · LW(p) · GW(p)

Placebo doesn't require deception.

Just like with sports, you can get all the same benefits of placebo by simply pointing your attention correctly without predicating it on nonsense beliefs, and it's actually the nonsense beliefs that are getting in the way and causing the problem in the first place. A "placebo" is just an excuse to stop falsely believing that you can't do whatever it is you need to do without a pill.

And I don't say this as some matter of abstract "theory" that sounds good until you try to put it into practice; it's a very real thing that I actually practice somewhat regularly. I'll give you an example.

One day I sprained my ankle pretty badly. I was frustrated with myself for getting injured and didn't want it to swell up so I indignantly decided "screw this, my ankle isn't going to swell". It was a significant injury and took a month to recover, but it didn't swell. The next several times I got injured I kept this attitude and nothing swelled, including dropping a 50lb chunk of wood on my finger in a way that I was sure would swell enough to keep me from bending that finger... until I remembered that it doesn't have to be that way, and made the difficult decision to actually expect it to not swell. It didn't, and I retained complete finger mobility.

I told a friend of mine about this experience, and while she definitely remained skeptical and it seemed "crazy" to her, she had also learned that even "crazy" things coming out of my mouth had a high probability of turning out to be true, and therefore didn't rule it out. The next time she got injured, she felt a little weird "pretending" that she could just influence these things, but figured "why not?" and decided that her injury wasn't going to swell either. It didn't. A few injuries go by, and things aren't swelling so much. Finally, she inadvertently tells someone "Oh, don't worry, I don't need to ice my broken thumb because I just decided that it won't swell". The person literally could not process what she said because it was so far from what he was expecting, and she felt foolish for saying it. Her injury then swelled up, even though it had already been a while since the break. I called and talked to her later that night, pointed out what had happened with her mental state, and helped her fix her silly limiting (and false) beliefs, and when she woke up in the morning the swelling had largely subsided again.

The size of the effect was larger than I've ever gotten with ibuprofen, let alone fake ibuprofen. "I have no ability to prevent my body from swelling up" is factually wrong, and being convinced of this falsehood prevents people from even trying. You can lie to yourself and take a sugar pill if you want, but it really is both simpler and more effective to just stop believing false things.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-08T11:46:51.497Z · LW(p) · GW(p)

I'm very glad that you managed to train yourself to do that but this option is not available for everyone. I see a lot of engaging with the details and giving singular instances of something not occurring, but I don't see a lot of engaging in the least convenient possible world [LW · GW]. As I was writing this reply it became longer and longer, so I decided to rewrite it and make it its own post. You can check out some more inconvenient counterexamples I thought up here. [Edit: I saved the post to draft by accident. I didn't want to reupload it, but if we ever get a way to have 'unlisted' posts I will upload it unlisted. Until that time I have changed the link so you can still see my post and the comments it received]

Replies from: jimmy, Richard_Kennaway
comment by jimmy · 2020-06-08T17:53:06.883Z · LW(p) · GW(p)
I'm very glad that you managed to train yourself to do that but this option is not available for everyone.

Do you have any evidence for this statement? That seems like an awfully quick dismissal, given that twice in a row you cited things as if they countered my point when they actually missed the point completely. Both epistemically and instrumentally, it might make sense to update the probability you assign to "maybe I'm missing something here". I'm not asking you to be more credulous or to simply believe anything I'm saying, mind you, but maybe be a bit more skeptical and a little less credulous of your own ideas, at least until that stops happening.

Because you do have that option available to you. In my experience, it's simply not true that attempts at self-deception ever give better results than simply noticing false beliefs and then letting them go once you do, or that anyone ever says "that's a great idea, let's do that!" and then mysteriously fails. The idea that it's "not available" is one more false belief that gets in the way of focusing on the right thing.

Don't get me wrong, I'm not saying that it's always trivial. Epistemic rationality is not trivial. It's completely possible to try to organize one's mind into coherence and still fail to get the results because you don't realize where you're missing something. Heck, in the last example I gave, my friend did just that. Still, at the end of the day, she got her results, and she is a much happier and more competent person than she was years back when her mind was still caught up in more well-meaning self-deceptions.

I don't see a lot of engaging in the least convenient possible world [LW · GW]

Well, if I don't think any valid examples exist, all I can do is knock over the ones you show me. Perhaps you can make your examples a little less convenient to knock over and put me to a better test then. ;)

I'll take a look at your new post.

Replies from: Rana Dexsin, Bob Jacobs
comment by Rana Dexsin · 2020-06-10T12:40:38.534Z · LW(p) · GW(p)

[This is condensed and informalized from a much longer and more explicit comment which I'm not sure would have been worth wading through, which still seemed hazy in important ways, and which seemed like it needed me to open more boxes than I have energy for right now. This one still seems hazy, but hopefully it wears it more on its sleeve. I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I'm working around enough of it for this to still be useful.]

I think you're interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.

I have a completely different tack in mind: how do we know that the sort of mental maneuvers you describe don't become harmful in their aggregate effects when too many people do them, or do them without coordinating enough, or something along those lines?

I would like to point out the following:

The person literally could not process what she said because it was so far from what he was expecting, and she felt foolish for saying it. Her injury then swelled up, even though it had already been a while since the break.

“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms? Some detachment from it seems to be part of emotional maturity, but that's coupled with a lot of other mediating material. Further detachment seems to be part of various spiritual traditions—also coupled with even more mediating material. That's not very promising for any implied “there's no such thing as too much”.

More specifically, I would like to consider the possibility that many of the sort of false aliefs you're talking about act more like restraining bolts imposed by the social cohesion subunit of a human mind, “because” humans are not safe under amplification with regard to social values. (And notably, “hypercompetent individuals are good for society” is by no means a universal more.)

Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.

[… this was going to be more-edited, but I've accidentally hit Submit, and I don't want to do too much frantic editing, so I've just cleaned up a few pieces. I think this is still just-about worth enough to leave up; I'll try to come back to it later if it's deemed worth talking about.]

Replies from: jimmy
comment by jimmy · 2020-06-11T18:11:24.513Z · LW(p) · GW(p)
I should also declare up front that I have a bunch of weird emotional warping around this topic; hopefully I'm working around enough of it for this to still be useful.]

This is a really cool declaration. It doesn't bleed through in any obvious way, but thanks for letting me know, and I'll try to be cautious about what I say and how I say it. Lemme know if I'm bumping into anything or if there's anything I could be doing differently to better accommodate.

I think you're interpreting “this is not how human psychology works” in a noncentral way compared to how Bob Jacobs is likely to have meant it, or maybe asserting your examples of psychology working that way more as normative than as positive claims.

I’m not really sure what you mean here, but I can address what you say below. I’m not sure if it’s related?

“felt foolish” together with the consequences looks like a description of an alief-based and alief-affecting social feedback mechanism. How safe is it for individuals to unilaterally train themselves out of such mechanisms?

Depends on how you go about it and what type of risk you’re trying to avoid. When I first started playing with this stuff I taught someone how to “turn off” pain, and in her infinite wisdom she used this new ability to make it easier to be stubborn and run on a sprained ankle. There’s no foolproof solution to make this never happen (in my infinite wisdom I’ve done similar things even with the pain), but the way I go about it now is explicitly mindful of the risks and uses that to get more reliable results. With the swelling, for example, part of my indignant reaction was “it doesn’t have to swell up, I just won’t move it”.

When you’ve seen something happen with your own eyes multiple times, I think that’s beyond the level where you should be foolish for thinking that it might be possible. When you see that the thing that is stopping other people from doing it too is ignorance of the possibility rather than an objection that it shouldn’t be done, then “thinking it through and making your reasoned best guess” isn’t going to be right all the time, but according to your own best guess it will be right more often than the alternative.

Or: individual coherence and social cohesion seem to be at odds often enough for that to be a way for “not-winning due to being too coherent” to sneak in through crazy backdoors in the environment, absent unbounded handling-of-detachment resources which are not in evidence and at some point may be unimplementable within human bounds.

It seems that this bit is your main concern?

It can be a real concern. More than once I've had people express concern about how it has become harder to relate with their old friends after spending a lot of time with me. It's not because of stuff like "I can consciously prevent a lot of swelling, and they don't know how to engage with that" but rather because of stuff like "it's hard to be supportive of what I now see as clearly bad behavior that attempts to shirk reality to protect feelings and inevitably ends up hurting everyone involved". In my experience, it's a consequence of being able to see the problems in the group before being able to see what to do about them.

I don't seem to have that problem anymore, and I think it's because of the thought that I've put into figuring out how to actually change how people organize their minds. Saying "here, let me use math and statistics to show you why you're definitely completely wrong" can work to smash through dumb ideas, but then even when you succeed you're left with people seeing their old ideas (and therefore the ideas of the rest of their social circle) as "dumb" and hard to relate to. When you say "here, let me empathize and understand where you're coming from, and then address it by showing how things look to me", and go out of your way to make their former point of view understandable, then you no longer get this failure mode. On top of that, by showing them how to connect with people who hold very different (and often less well-thought-out) views than you, it gives them a model to follow that can make connecting with others easier. My friend in the above example, for instance, went from sort of a "socially awkward nerd" type to someone who can turn that off and be really effective when she puts her mind to it. If someone is depressed and not even his siblings can get him to talk, he'll still talk to her.

If there's a group of people you want to be able to relate to effectively, you can't just dissociate off into your own little world where you give no thought to their perspectives, but neither can you just melt in and let your own perspective become that social consensus. If you don't retain enough separation that you can at least have your own thoughts and think about whether they might be better and how best to merge them with the group, then you're just shirking your leadership responsibilities, and if enough people do this the whole group can become detached from reality and led by whoever wants to command the mob. This doesn't tend to lead to great things.

Does that address what you’re saying?

Replies from: Rana Dexsin
comment by Rana Dexsin · 2020-06-16T00:00:56.896Z · LW(p) · GW(p)

I've put a few cycles into trying to come up with a better way to point at the thing/model I'm thinking of. (I say “thing/model” because in the domain of social psychology especially, Strange Loops between a phenomenon and people's models of the phenomenon cause them to not be that cleanly separable. Is there a word for that that I'm missing?) I haven't gotten through much of it, but in the meantime, I've also just noticed that a recent second-level comment by Vaniver [LW(p) · GW(p)] on their own “How alienated should you be?” post has description that seems to come from a similar observation/interpretation of the world to the part of mine I'm trying to point at, and the main post goes into more detail. So that may help. I think there is a streak of variants of this idea in LW already, and it's possible that what I really want to do is go through the archives and find the best-aligned existing posts on the subject to link to…

Replies from: jimmy
comment by jimmy · 2020-06-17T03:32:11.597Z · LW(p) · GW(p)

I think I get the general idea of the thing you and Vaniver are gesturing at, but not what you're trying to say about it in particular. I think I'm less concerned, though, because I don't see inter-agent value differences and the resulting conflict as some fundamental, inextricable part of the system.

Perhaps it makes sense to talk about the individual level first. I saw a comment recently where the person making it was sorta mocking the idea of psychological "defense mechanisms", because "*obviously* evolution wouldn't select for those who 'defend' from threats by sticking their heads in the sand!" -- as if the problem of wireheading were as simple as competition between a "gene for wireheading" and a gene against. Evolution is going to select for genes that make people flinch away from injuring themselves with hot stoves. It's also going to select for people who cauterize their wounds when necessary to keep from bleeding out. Designing an organism that does *both* is not trivial. If sensitivity to pain is too low, you get careless burns. If it's too high, you get refusal to cauterize. You need *some* mechanism to distinguish between effective flinches and harmful flinches, and a way to enact mostly the former. "Defense mechanisms" arise not out of mysterious propagation of fitness reducing genes, but rather the lack of solution to the hard problem of separating the effective flinches from the ineffective -- and sometimes even the easiest solution to these ineffective flinches is hacked together out of more flinches, such as screaming and biting down on a stick when having a wound cauterized, or choosing to take pain killers.

The solution of "simply noticing that the pain from cauterizing a serious bleed isn't a *bad* thing and therefore not flinching from it" isn't trivial. It's *doable*, and to be aspired to, but there's no such thing as "a gene for wise decisions" that is already "hard coded in DNA".

Similarly, society is incoherent and fragmented and flinches and cooperates imperfectly. You get petty criminals and cronyism and censorship of thought and expression, and all sorts of terrible stuff. This isn't proof of some sort of "selection for shittiness" any more than it is to notice individual incoherence and the resulting dysfunction. It's not that coherence is impossible or undesirable, just that you're fighting entropy to get there, and succeeding takes work.

The desire to eat marshmallows succeeds more if it can cooperate and willingly lose for five minutes until the second marshmallow comes. The individual succeeds more if they are capable of giving back to others as a means to foster cooperation. Sometimes the system is so dysfunctional that saying "no thanks, I can wait" will get you taken advantage of, and so the individually winning thing is impulsive selfishness. Even then, the guy failing to follow through on promises of second marshmallows likely isn't winning by disincentivizing cooperation with him, and it's likely more of a "his desire to not feel pain is winning, so he bleeds" sort of situation. Sometimes the system really is so dysfunctional that not only is it winning to take the first marshmallow, it's also winning to renege on your promises to give the second. But for every time someone wins by shrinking the total pie and taking a bigger piece, there's an allocation of the more cooperative pie that would give this would-be-defector more pie while still having more for everyone else too. And whoever can find these alternatives can get themselves more pie.

I don't see negative-sum conflict between the individual and society as *inevitable*, just difficult to avoid. It's negotiation that is inevitable, and done poorly it brings lossy conflict. When Vaniver talks about society saying "shut up and be a cog", I see a couple of things happening simultaneously to one degree or another. One is a dysfunctional society hurting itself by wasting individual potential that it could be profiting from, and would love to if only it could see how and implement it. The other is a society functioning more or less as intended and using "shut up and be a cog" as a shit test to filter out the leaders who don't have what it takes to say "nah, I think I'll trust myself and win more", and lead effectively. Just like the burning pain, it's there for a reason, and how to calibrate it so that it gets overridden at only and all the right times is a bit of an empirical balancing act. It's not perfect as is, but neither is it without function. The incentive for everyone to improve this balancing is still there, and selection on the big scale is for coherence.

And as a result, I don't really feel myself being pulled between "respect society's stupid beliefs/rules" and "care about other people". I see people as a combination of *wanting* me to pass their shit tests and show them a better replacement for their stupid beliefs/rules, being afraid and unsure of what to do if I succeed, and selfishly trying to shrink the size of the pie so that they can keep what they think will be the bigger piece. As a result, it makes me want to rise to the occasion and help people face new and more accurate beliefs, and also to create common knowledge of defection when it happens and rub their noses in it to make it clear that those who work to make the pie smaller will get less pie. Sometimes it's more rewarding and higher leverage to run off and gain some momentum by creating and then expanding a small bubble where things actually *work*, but there's no reason to go from "I can't yet be effective in the broader community because I can't yet break out of their 'cog' mold for me, so I'm going to focus on the smaller community where I can" to "fuck them all". There's still plenty of value in reengaging when capable, and pretending there isn't is not the good, functional thing we're striving for. It's not like we can *actually* form a bubble and reject the outside world, because the outside world will still bring you pandemics and AI, and from even a selfish perspective there's plenty of incentive to help things go well for everyone.

comment by B Jacobs (Bob Jacobs) · 2020-06-08T22:57:46.526Z · LW(p) · GW(p)

You could probably write the same answer without the snark. Your study on placebo only mentions it working on IBS patients, so it's not the grand dismissal of placebo that you claim it is, but even if it were, there are still plenty of similar phenomena. The easiest to adapt would be the nocebo effect: just switch the positives with the negatives in the example and you have your nocebo argument.

Replies from: jimmy
comment by jimmy · 2020-06-10T07:48:04.782Z · LW(p) · GW(p)

There's no snark in my comment, and I am entirely sincere. I don't think you're going to get a good understanding of this subject without becoming more skeptical of the conclusions you've already come to and more curious about how things might be different than you think. Otherwise, the barrier to communication is simply raised high enough that reaching agreement isn't worthwhile. If that's not a perspective you can entertain and reason about, then I don't think there's much point in continuing this conversation.

If you can find another way to convey the same message that would be more acceptable to you, let me know.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-10T17:53:24.378Z · LW(p) · GW(p)

I would favor a conversation where we keep attacks on the person to an absolute minimum and focus instead on the arguments being made (addressing the person is sometimes necessary, but entirely ignoring the argument in favor of attempting to psychoanalyze a stranger on the internet is not a good way to have a philosophical discussion). Secondly, I would also like to hear a counterargument to the argument I made. And thirdly, I have never deleted a comment, but you appear to have double-posted; shall I delete one of them?

Replies from: jimmy
comment by jimmy · 2020-06-10T20:08:21.060Z · LW(p) · GW(p)

It's not an attack, and I would recommend not taking it as one. People make that mistake all the time, and there's no shame in that. Heck, maybe I'm even wrong and what I'm perceiving as an error actually isn't one. Learning from mistakes (if it turns out to be one) is how we get stronger.

I try to avoid making that mistake, but if you feel like I'm erring, I would rather you be comfortable pointing out what you see instead of fearing that I will take it as an attack. Conversations (philosophical and otherwise) work much more efficiently this way.

I'm sorry if it hasn't been sufficiently clear that I'm friendly and not attacking you. I tried to make it clear by phrasing things carefully and using a smiley face, but if you can think of anything else I can do to make it clearer, let me know.

Secondly I would also like to hear an actual counterargument to the argument I made

Which one? The "it was only studying IBS" one was only studying IBS, sure. It still shows that you can do placebos without deception in the cases they studied. It's always going to be "in the cases they've studied" and it's always conceivable that if you only knew to find the right use of placebos to test, you'll find one where it doesn't work. However, when placebos work without deception in every case you've tested, the default hypothesis is no longer "well, they require deception in every case except these two weird cases that I happen to have checked". The default hypothesis should now be "maybe they just don't require deception at all, and if they do maybe it's much more rare than I thought".

I'm not sure what point the existence of nocebo makes for you, but the same principles apply there too. I've gotten a guy to punch a cactus right after he told me "don't make me punch the cactus" simply by making him expect that if I told him to do it he would. Simply replace "because drugs" with "because of the way your mind works" and you can do all the same things and more.

I'm not sure how many more times I'll be willing to address things like this though. I'm willing to move on to further detail of how this stuff works, or to address counterarguments that I hadn't considered and are therefore surprisingly strong, but if you still just don't buy into the general idea as worth exploring then I can agree to disagree.

And thirdly I have never deleted a comment, but you appear to have double posted, shall I delete one of them?

Yeah, it didn't submit properly the first time and then didn't seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I'd have deleted one if I could have.

Speaking of deleting things, what happened to your other post?

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-11T11:33:47.464Z · LW(p) · GW(p)
It's not an attack, and I would recommend not taking it as one.

"Attack" is just the common shorthand for "verbal arguments against X", but while it is the common way of phrasing such a thing, I agree that it is stylistically odd. I didn't assume you had any malice in mind; I was just using it the common way, but I will refrain from doing so (in similar contexts) in the future.

Yeah, it didn't submit properly the first time and then didn't seem to be working the second time so it ended up posting two by the time I finally got confirmation that it worked. I'd have deleted one if I could have.
Speaking of deleting things, what happened to your other post?

Alright, no problem, things like that happen all the time, so I will just delete it. I described what happened to the other post here [LW(p) · GW(p)]. This was one of the difficult cases where I had to balance my desire to have a record of the things (and mistakes) people (including me) said against not wanting to clog the website with low-quality (as the downvotes indicated) content (I think I found a good solution). I'm having the same dilemma right now where my genuine comments are getting voted into the negative and I'm starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people's time with content they think is low quality (yes yes, I know that that doesn't mean it is low quality per se, but it is a close enough heuristic that I'm mostly willing to stick to it). But the downvotes are very clear, so while I'm disappointed that we couldn't talk through this issue, I will no longer be eating up people's time.

Replies from: jimmy
comment by jimmy · 2020-06-11T19:05:14.154Z · LW(p) · GW(p)
I described what happend to the other post here [LW(p) · GW(p)].

Thanks, I hadn't seen the edit.

I'm having the same dilemma right now where my genuine comments are getting voted into the negative and I'm starting to feel really bad for trying to satisfy my own personal curiosity at the expense of eating up people's time with content they think is low quality (yes yes, I know that that doesn't mean it is low quality per se, but it is a close enough heuristic that I'm mostly willing to stick to it). But the downvotes are very clear, so while I'm disappointed that we couldn't talk through this issue, I will no longer be eating up people's time.

The only comments of yours that I see downvoted into the negative are the two prior conversations in this thread. Were there others that are now positive again?

While I generally support the idea that it's better to stop posting than to continue to post things which will predictably be net-negative in karma, I don't think that's necessary here. There's plenty of room on LW for things other than curated posts sharing novel insights, and I think working through one's own curiosity can be good not just for the individual in question, but also for any other lurkers who might have the same curiosities, and for the community, as bringing people up to speed is an important part of helping them learn to interact best with the community.

I think the downvotes are about something else which is a lot more easily fixable. While I'm sure they were genuine, some of your comments strike me as not particularly charitable. In order to hold a productive conversation, people have to be able to build from a common understanding. The more work you put into understanding where the other person is coming from and how it can be a coherent and reasonable stance to hold, the less effort it takes for them to communicate something that is understood. At some point, if you don't put enough effort in, you start to miss valid points which would have been easy for you to find and would be prohibitively difficult to word in a way that you wouldn't miss.

As an example, you responded to Richard_Kennaway as if he thought you were lying despite the fact that he explicitly stated that he was not imputing any dishonesty. I'm not sure whether you simply missed that part or whether you don't believe him, but either way it is very hard to have a conversation with someone who doesn't engage with points like this at least enough to say why they aren't convinced. I think, with a little more effort put into understanding how your interlocutors might be making reasonable, charitable, and valid points, you will be able to avoid the downvotes in the future. That's not to say that you have to believe that they're being reasonable/charitable/etc., or that you have to act like you do, but it's nice to at least put in some real effort to check and give them a chance to show when they are. Because the tendency for people to err on the side of "insufficiently charitable" is really really strong, and even when the uncharitable view is the correct one (not that common on LW), the best way to show it is often to be charitable and have it visibly not fit.

It's a very common problem that comes up in conversation, especially when pushing into new territory. I wouldn't sweat it.

comment by Richard_Kennaway · 2020-06-08T13:58:31.328Z · LW(p) · GW(p)

The examples that people have given are real ones. Yours are fictional. It's easy to make up stories of how the world would look, conditional upon any proposition whatever being true. (V cerqvpg gung ng yrnfg bar ernqre jvyy vafgnagyl erfcbaq gb guvf pynvz ol znxvat hc n fgbel va juvpu vg vf snyfr.) In this light, the "least convenient possible world" for one's interlocutors is the most convenient possible for oneself, the one in which the point at issue is imagined to be true.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-08T14:57:27.998Z · LW(p) · GW(p)

We assume it's true; we don't have any evidence. I could tell stories about my personal experience, but you'd have no way to check them. At least saying upfront that it's a thought experiment keeps the debating ground neutral and allows people's reasoning to do the work instead of their emotions. And no, I would never make up a story to defend my argument; the fact that you would assume your interlocutor is a liar without any evidence to back that up really hampers my desire to debate you.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-06-08T15:28:25.439Z · LW(p) · GW(p)

You made up six stories here [LW · GW]. I was not imputing any dishonesty, only pointing out that they are fiction.

OTOH, you just said of the other stories presented here that "we don't have any evidence". The stories I was referring to are jimmy's story of preventing swelling in an injured joint, and his account of Conor McGregor. These stories purport to be of real things that happened. To say that his account is no evidence of that looks very like what you took me to be doing.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-08T16:10:05.924Z · LW(p) · GW(p)

I was referring to that block of text that you encoded. I decoded it, and there you state the assumption that your interlocutor will lie. And no, I am assuming they are true, which is why I said "we assume it's true". I would also keep anecdotal evidence to a minimum in this type of discussion because I would want my interlocutor to be able to check every step of my reasoning. And anecdotal evidence for a positive occurrence of a phenomenon does not discount the existence of a negative occurrence. I say there exists such a thing as X, and the counterargument is "but this one time there was Y". Do you have any arguments as to why my counterarguments, or something in a similar vein, couldn't happen?

[EDIT] Richard says he meant the encoded text to mean only that the reader thinks up, but doesn't present, the false story. This is a plausible interpretation of the text, and since I can't know which one was meant I will assume it was the more charitable one and retract these comments.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-06-09T07:58:56.430Z · LW(p) · GW(p)

As before, I was not imputing any dishonesty to the hypothetical reader reflexively thinking up a hypothetical counterexample to a generalisation.

comment by lsusr · 2020-06-06T17:37:11.750Z · LW(p) · GW(p)

Epistemic rationality depends on absolute truth. Instrumental rationality depends on well-defined values. I believe in absolute truth. My values are context-dependent; they drift with time. I believe I do not have well-defined values. When forced to choose between epistemic rationality and instrumental rationality, I consistently choose epistemic rationality because it has a solid foundation.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-06T17:46:53.521Z · LW(p) · GW(p)

Thank you for participating. I think I might be in the minority here with my preference for pragmatism. [edit: apparently not] Since you're so early, is there any phrasing you would like me to edit? I tried to keep it neutral, but maybe you see something that could be leading people to one particular choice? Also, would you be in favor of having more polls (say, once a week)?

Replies from: lsusr
comment by lsusr · 2020-06-06T18:05:34.917Z · LW(p) · GW(p)

For philosophical questions like this, I think there should always be a "the question is malformed" option, distinct from "unsure/no preference". Surveyees need a way to express their opinion that the question is wrong.

As for having more polls, it depends what the polls are on. With this question, I do not care very much how other people vote. I am curious about the underlying logic, which isn't something you can see in a poll. On the other hand, I am curious about this site's demographics.

Replies from: Bob Jacobs
comment by B Jacobs (Bob Jacobs) · 2020-06-06T18:14:45.976Z · LW(p) · GW(p)

Well, if you think (for example) that you can do both, there is always the option of "other philosophy", but I will nonetheless include that in the future. And I will make a (cursory) site demographics poll next week.

comment by romeostevensit · 2020-06-06T18:18:55.361Z · LW(p) · GW(p)

Feedback loop. Truth isn't free, so you buy as much of it as you can afford and then try to leverage it to get more money. In the lucky cases this winds up as a self-sustaining feedback loop, though many such loops become degenerate over time due to goodharting.

Replies from: lsusr
comment by lsusr · 2020-06-06T19:47:18.778Z · LW(p) · GW(p)

I had to look up Goodhart's law.

When a measure becomes a target, it ceases to be a good measure.
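
A toy sketch of how that degeneration can look (the "effort"/"gaming" split below is a hypothetical example, not anything from the thread): an optimizer that only sees the proxy pours everything into gaming it.

```python
# Toy illustration of Goodhart's law: once the proxy becomes the target,
# an optimizer selects for gaming the proxy instead of the real goal.
# "effort" and "gaming" are hypothetical variables for illustration only.
import random

def true_quality(effort, gaming):
    return effort                  # what we actually care about

def proxy_metric(effort, gaming):
    return effort + 3 * gaming     # the measure-turned-target also rewards gaming

random.seed(0)
candidates = [(random.random(), random.random()) for _ in range(1000)]

best_by_proxy = max(candidates, key=lambda c: proxy_metric(*c))
best_by_truth = max(candidates, key=lambda c: true_quality(*c))

print("picked by proxy:", best_by_proxy)  # tends to have high 'gaming'
print("picked by truth:", best_by_truth)  # tends to have high 'effort'
```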