Comment by Zvi on [deleted post] 2019-07-17T12:20:13.418Z

Before I read (2), I want to note that a universal idea that one is responsible for all the consequences of one's accurate speech - in an inevitably Asymmetric Justice / CIE fashion - seems like it is effectively a way to ban truth-seeking entirely, and perhaps all speech of any kind. And the fact that there might be other consequences of true speech that one may not like and might want to avoid does not make it unreasonable to point out that the subclass of such consequences in play in these examples seems much less worth worrying about avoiding. But yes, Kant saying you should tell the truth to an axe murderer seems highly questionable, and all that.

And I echo Jessica that it's not reasonable to say that all of this is voluntary within the frame you're offering, if the response to not doing it is to not be welcome, or to be socially punished - regardless of what standards one chooses.

Comment by Zvi on [deleted post] 2019-07-17T12:13:32.275Z

I think that is a far from complete description of my decision theory and selection of virtues here. Those are two important considerations, and this points in the right direction for the rest, but there are lots of others too. Margin too small to contain full description.

At some point I hope to write a virtue ethics sequence, but it's super hard to describe it in written form, and every time I think about it I assume that even if I do get it across, people who speak better philosopher will technically pick anything I say to pieces and all that, and I get an ugh field around the whole operation, and assume it won't really work at getting people to reconsider. Alas.

Comment by zvi on Integrity and accountability are core parts of rationality · 2019-07-17T11:25:02.248Z · score: 2 (1 votes) · LW · GW

Agree strongly with this decomposition of integrity. They're definitely different (although correlated) things.

My biggest disagreement with this model is that the first form (structurally integrated models) seems to me to be something broader? Something like, you have structurally integrated models of how things work and what matters to you, and take the actions suggested by the models to achieve what matters to you based on how things work?

Need to think through this in more detail. One can have what one might call integrity of thought without what one might call integrity of action based on that thought - you have the models, but others/you can't count on you to act on them. And you can have integrity of action without integrity of thought, in the sense that you can be counted on to perform certain actions in certain circumstances; in that case you'll do them whether or not they make any sense, but you can at least be counted on. Or you can have both.

And I agree you have to split integrity of action into keeping promises when you make them slash following one's own code, and keeping to the rules of the system slash following others' codes, especially codes that determine what is blameworthy. To me, that third special case isn't integrity. It's often a good thing, but it's a different thing - it counts as integrity if and only if one is following those rules because of one's own code saying one should follow the outside code. We can debate under what circumstances that is or isn't the right code, and should.

So I think for now I have it as Integrity-1 (Integrity of Thought) and Integrity-2 (Integrity of Action), and a kind of False-Integrity-3 (Integrity of Blamelessness) that is worth having a name for, and worth tracking who has and doesn't have it in what circumstances and to what extent, like the other two, but isn't obviously something it's better to increase than decrease by default. Whereas Integrity-1 is by default to be increased, as is Integrity-2, and if you disagree with that, this implies to me that there's a conflict causing you to want others to be less effective, or that you're otherwise trying to do extraction or be zero-sum.

Comment by Zvi on [deleted post] 2019-07-15T22:04:52.267Z

(5) Splitting for threading.

Wow, this got longer than I expected. Hopefully it is an opportunity to grok a lot better the perspective I'm coming from, which is why I'm trying a bunch of different approaches. I do hope this helps, and helps you appreciate why a lot of the stuff going on lately has been so worrying to some of us.

Anyway, I still have to give a response to Ray's comment, so here goes.

Agree with his (1) that it comes across as politics-in-a-bad-way, but disagree that this is due to the simulacrum level, except insofar as the simulacrum level causes us to demand sickeningly political statements. I think it's because that answer is sickeningly political! It's saying "First, let me pay tribute to those who assume the title of Doer of Good or Participant in Nonprofit, whose status we can never lower and must only raise. Truly they are the worthy ones among us who always hold the best of intentions. Now, my lords, may I petition the King to notice that your Doers of Good seem to be slaughtering people out there in the name of the faith and kingdom, and perhaps ask politely, in light of the following evidence that they're slaughtering all these people, that you consider having them do less of that?"

I mean, that's not fair. But it's also not all that unfair, either.

(2) we strongly agree.

Pacifists who say "we should disband the military" may or may not be making the mistake of not appreciating the military - they may appreciate it but also think it has big downsides or is no longer needed. I don't know to what extent the military should be appreciated, though I currently think the answer is "a lot."

As for appreciation of people's efforts, I appreciate the core fact of effort of any kind, towards anything at all, as something we don't have enough of, and which is generally good. But if that effort is an effort towards things I dislike, especially things that are in bad faith, then it would be weird to say I appreciated that particular effort. There are times I very much don't appreciate it. And I think that some major causes and central actions in our sphere are in fact doing harm, and those engaged in them are engaging in them in bad faith and have largely abandoned the founding principles of the sphere. I won't name them in print, but might in conversation.

So I don't think there's a missing mood, exactly. But even if there were, and I did appreciate that, there is something I appreciate about just about everyone, and things about them I don't, and I don't see why I should reiterate things 'everybody knows' are praiseworthy, as praiseworthy, as a sacred incantation before I am permitted to petition the King with information.

That doesn't mean that I wouldn't reward people who tried to do something real, with good intentions, more often than I would be inclined not to. Original proposal #1 is sickeningly political. Original proposal #2 is also sickeningly political. Original proposal #3 will almost always be better than both of them. That does not preclude it being wise to often do something between #1 and #3 (#1 gives maybe 60% of its space to genuflections, #2 gives maybe 70% of its space to insults, #3 gives 0% to either, and I think my default would be more like 10% to genuflections if I thought intentions were mostly good?).

But much better would be that pointing out that someone was in fact doing harm would not be seen as punishment, if they stop when this is pointed out. In the world in which doing things is appreciated and rewarded, saying "I see you trying to do a thing! I think it's harmful and you should stop." and you saying "oops!" should net you points without me having to say "POINTS!"

Comment by Zvi on [deleted post] 2019-07-15T21:20:28.214Z

(4) Splitting for threading.

Pure answer / summary.

The nature of this "should" is that status evaluations are not why I am sharing the information. Nor are they my responsibility, nor would it be wise to make them my responsibility as the price of sharing information. And given that I am sharing true and relevant information, any updates are likely to be accurate.

The meta-ethical framework I'm using is almost always a combination of Timeless Decision Theory and virtue ethics. Since you asked.

I believe it is virtuous, and good decision theory, to share true and relevant information, to try to create clarity. I believe it is not virtuous or good decision theory to obligate people with additional burdens in order to do this, and to make those doing so worry about being accused of violating such burdens. I do believe it is not virtuous or good decision theory to, while doing so, structure one's information in order to score political points, so don't do that. But it's also not virtuous or good decision theory to carefully always avoid changing the points noted on the scoreboard, regardless of events.

The power of this "should" is that I'm denying the legitimacy of coercing me into doing something in order to maintain someone else's desire for social frame control. If you want to force me to do that in order to tell you true things in a neutral way, the burden is on you to tell me why "should" attaches here, and why doing so would lead to good outcomes, be virtuous and/or be good decision theory.

The reason I want to point out that people are doing something I think is bad? Varies. Usually it is so we can know this and properly react to this information. Perhaps we can convince those people to stop, or deal with the consequences of those actions, or what not. Or the people doing it can know this and perhaps consider whether they should stop. Or we want to update our norms.

But the questions here in that last paragraph seem to imply that I should shape my information sharing primarily based on what I expect the social reaction to my statements will be, rather than sharing my information in order to improve people's maps and create clarity. That's rhetoric, not discourse, no?

Comment by Zvi on [deleted post] 2019-07-15T21:05:41.663Z

(3) (Splitting for threading)

Sharing true information, or doing anything at all, will cause people to update.

Some of those updates will cause some probabilities to become less accurate.

Is it therefore my responsibility to prevent this, before I am permitted to share true information? Before I do anything? Am I responsible in an Asymmetric Justice fashion for every probability estimate change and status evaluation delta in people's heads? Have I become entwined with your status, via the Copenhagen Interpretation, and am I now responsible for it? What does anything even have to do with anything?

Should I have to worry about how my information telling you about Bayesian probability impacts the price of tea in China?

Why should the burden be on me to explain "should" here, anyway? I'm not claiming a duty, I'm claiming a negative, a lack of duty - I'm saying I do not, by sharing information, thereby take on the burden of preventing all negative consequences of that information to individuals, in the form of others making Bayesian updates, to the point of having to prevent those updates.

Whether or not I appreciate their efforts, or wish them higher or lower status! Even if I do wish them higher status, it should not be my priority in the conversation to worry about that.

Thus, if you think that I should be responsible, then I would turn the question around, and ask you what normative/meta-ethical framework you are invoking. Because the burden here seems not to be on me, unless you think that the primary thing we do when we communicate is raise and lower the status of people. In which case, I have better ways of doing that than being here at LW, and so do you!

Comment by Zvi on [deleted post] 2019-07-15T20:59:46.723Z

(2) (Splitting these up to allow threading)

Sharing true information will cause people to update.

If they update in a way that causes your status to become lower, why should we presume that this update is a mistake?

If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, would not a proper Bayesian expect me to do that, and thus use my praise only as evidence of the degree to which I think others should update negatively on the basis of the information I share later?

If it is standard practice to do something to raise your status in order to offset the expected lowering of status that will come with the information, but only some of the time, what is going on there? Am I being forced to make a public declaration of whether I wish you to be raised or lowered in status? Am I being forced to acknowledge that you belong to a protected class of people whose status one is not allowed to lower in public? Am I worried about being labeled as biased against groups you belong to if I am seen as sufficiently negative towards you? (E.g. "I appreciate all the effort you have put in towards various causes, I think that otherwise you're a great person and I'm a big fan of [people of the same reference group] and support all their issues and causes, but I feel you should know that I really wish you hadn't shot me in the face. Twice.")
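To make the Bayesian point concrete, here is a toy model - the numbers and the function name are illustrative assumptions of mine, not anything from the original discussion. If praise reliably precedes bad news but only sometimes precedes neutral news, then the praise itself becomes evidence that bad news is coming:

```python
# A minimal sketch of the "proper Bayesian" point: if praise is standard
# practice before sharing negative information, observing praise is itself
# evidence that negative information follows.

def posterior_bad_news(prior_bad, p_praise_given_bad, p_praise_given_ok):
    """P(bad news coming | speaker opened with praise), by Bayes' rule."""
    joint_bad = prior_bad * p_praise_given_bad
    joint_ok = (1 - prior_bad) * p_praise_given_ok
    return joint_bad / (joint_bad + joint_ok)

# Illustrative numbers: bad news follows 30% of openings; the praise
# convention is observed 95% of the time before bad news, but praise only
# opens 20% of neutral messages.
print(posterior_bad_news(0.30, 0.95, 0.20))
# ~0.67: the praise more than doubles the listener's probability that bad
# news follows, so the "offset" has become a tell rather than an offset.
```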


Comment by Zvi on [deleted post] 2019-07-15T20:51:46.858Z

(1) Glad you asked! Appreciate the effort to create clarity.

Let's start off with the recursive explanation, as it were, and then I'll give the straightforward ones.

I say that because I actually do appreciate the effort, and I actually do want to avoid lowering your status for asking, or making you feel punished for asking. It's a great question to be asking if you don't understand, or are unsure if you understand or not, and you want to know. If you're confused about this, and especially if others are as well, it's important to clear it up.

Thus, I choose to expend effort to line these things up the way I want them lined up, in a way that I believe reflects reality and creates good incentives. Because the information that you asked should raise your status, not lower your status. It should cause people, including you, to do a Bayesian update that you are praiseworthy, not blameworthy. Whereas I worry, in context, that you or others would do the opposite if I answered in a way that implied I thought it was a stupid question, or was exasperated by having to answer, and so on.

On the other hand, if I believed that you damn well knew the answer, even unconsciously, and were asking in order to place upon me the burden of proof via creation of a robust ethical framework justifying not caring primarily about people's social reactions rather than about creation of clarity, lest I cede that I and others have the moral burden of maintaining the status relations others desire as our primary motivation when sharing information? Or if I thought the point was to point out that I was using "should," which many claim is a word that indicates entitlement or sloppy thinking and an attempt to bully, and thus that one should ignore the information content in favor of this error? Or if in general I did not think this question was asked in good faith?

Then I might or might not want to answer the question and give the information, and I might or might not think it worthwhile to point out the mechanisms I was observing behind the question, but I certainly would not want to prevent others from observing your question and its context, and performing a proper Bayesian update on you and what your status and level of blame/praise should be, according to their observations.

(And no, really, I am glad you asked and appreciate the effort, in this case. But: I desire to be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to be glad you asked, and I desire to not be glad you asked if knowing the true mechanisms behind the question combined with its effects would cause me to not be glad you asked. Let me not become attached to beliefs I may not want. And I desire to tell you true things. Etc. Amen.)


Comment by zvi on Everybody Knows · 2019-07-04T20:56:52.778Z · score: 5 (3 votes) · LW · GW

I agree that these are (sometimes) legitimate things to do, and that people often use the 'everybody knows' framing to do them implicitly. But I think that using this framing, rather than saying the thing more explicitly, is useful for those trying to do other things, and counter-productive for those trying to do the exact things you are describing, unless they also want to do other things.

Comment by zvi on Everybody Knows · 2019-07-04T20:48:59.767Z · score: 3 (2 votes) · LW · GW

For #1, the reason we do that is exactly because it is likely that not everyone in the room knows (even though they really should if they are in the room) and the people who don't know are going to be lost if you don't tell them. And certainly not everyone knows there are 20 amino acids (e.g. I didn't know that and will doubtless not remember it tomorrow).

I find your example in #2 to be on point: I am highly confident that far from everyone knows what happens if trash bags are left outside the dumpster. I actually had another mode in the post at one point to describe the form "I thought that everyone knew X, but it turned out I was wrong," because in my experience that's how this actually comes up.


Comment by zvi on Raemon's Shortform · 2019-07-03T12:19:59.920Z · score: 5 (6 votes) · LW · GW

Also important to note that "learn Calculus this week" is a thing a person can do fairly easily without being some sort of math savant.

(Presumably not the full 'know how to do all the particular integrals and be able to ace the final' perhaps, but definitely 'grok what the hell this is about and know how to do most problems that one encounters in the wild, and where to look if you find one that's harder than that.' To ace the final you'll need two weeks.)

Everybody Knows

2019-07-02T12:20:00.646Z · score: 70 (23 votes)
Comment by zvi on Causal Reality vs Social Reality · 2019-06-26T12:19:58.292Z · score: 17 (7 votes) · LW · GW

The cases Scott talks about are individuals clamoring for symbolic action in social reality in the aid of individuals that they want to signal they care about. It's quite Hansonian, because the whole point is that these people are already dead and none of these interventions do anything but take away resources from other patients. They don't ask 'what would cause people I love to die less often' at all, which my model says is because that question doesn't even parse to them.

Comment by zvi on 2013 Survey Results · 2019-06-23T18:44:56.762Z · score: 2 (1 votes) · LW · GW

Noting that this was suggested to me by the algorithm, and presumably shouldn't be eligible for that.

Comment by zvi on Recommendation Features on LessWrong · 2019-06-19T15:48:32.560Z · score: 4 (2 votes) · LW · GW

A 'remind me what recommendations you've given me recently' list being available to be clicked on might be nice?

Magic Arena Bot Drafting

2019-06-18T16:00:00.402Z · score: 18 (6 votes)
Comment by zvi on Recommendation Features on LessWrong · 2019-06-17T11:20:21.762Z · score: 4 (2 votes) · LW · GW

Part of the idea of curation is that some posts are what one might call Evergreen. They make sense outside the context of the discussion at that time, or are part of a full Evergreen discussion that makes sense outside the context of that time. Also, some posts are designed largely as exercises or places to sort things out, versus creating Evergreen things that last.

This especially applies to calls to action that no longer make any sense given the time that has passed.

If we're going to do recommendations as the top thing on the page every time, it seems like it would be worth it to remove the ones that are about topics that no longer apply or make sense. I realize this will involve judgment calls. I don't have a good solution beyond 'someone goes through them and picks which ones not to include.'

Comment by zvi on Recommendation Features on LessWrong · 2019-06-16T15:16:25.520Z · score: 2 (1 votes) · LW · GW

My gut says that it's worth it to explicitly offer both, if someone comes in in the middle?

Comment by zvi on Recommendation Features on LessWrong · 2019-06-16T15:15:16.270Z · score: 7 (3 votes) · LW · GW

I'd go a step stronger. Brainstorm: From the Archives should have a random order for some time period (e.g. something from a day to a week) and show you the three things highest on that list that you haven't read.
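A minimal sketch of what I mean, with hypothetical names and a configurable re-shuffle period - an illustration of the intended behavior, not a proposed implementation:

```python
# Shuffle the archive deterministically per time window (here, per day by
# default), then surface the first few posts the user hasn't read.

import datetime
import random

def from_the_archives(posts, read_ids, period_days=1, slots=3):
    """Return `slots` unread posts, in an order that is random but stable
    for `period_days` at a time, so the list doesn't reshuffle on reload."""
    bucket = datetime.date.today().toordinal() // period_days
    rng = random.Random(bucket)  # same seed => same order all period
    order = list(posts)
    rng.shuffle(order)
    unread = [p for p in order if p["id"] not in read_ids]
    return unread[:slots]

posts = [{"id": i, "title": f"Post {i}"} for i in range(10)]
print(from_the_archives(posts, read_ids={2, 5}))
```

Seeding the shuffle on the time bucket is what makes the ordering stable within the period while still rotating over time.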

Comment by zvi on Recommendation Features on LessWrong · 2019-06-16T15:13:35.830Z · score: 5 (3 votes) · LW · GW

Random recommendations included things I've read since LW2.0 came into fashion, presumably because I wasn't logged in. I'm guessing there's no reasonable fix for this (e.g. IP tracking), but perhaps a button that says "mark as read" would be cool, same as "mark as unread" but in a place that would be easy to mark. Dunno. I do realize one can just click on the thing.

It also doesn't feel like any of the current options actually give me a good "Best of Less Wrong" thing either in general or on particular topics. The selected sequences (and the sequences themselves) are good things to have access to, but it seems like the thing I want to exist, simply doesn't and isn't trivial to make? Alas, I don't have the time to make it right now.

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T15:06:25.887Z · score: 12 (4 votes) · LW · GW

The thing is, I don't think that shorthand (along with similar things like "You're an idiot") ever stays understood outside of very carefully maintained systems of people working closely together in super high trust situations, even if it starts out understood.

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T11:35:53.567Z · score: 16 (4 votes) · LW · GW

True. But I do think we've run enough experiments on 'don't say anyone is a bad person, only point out bad actions and bad logic and false beliefs' to know that people by default read that as claims about who is bad, and we need better tech for what to do about this.

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T11:31:02.522Z · score: 7 (4 votes) · LW · GW

If the EA Hotel is easily confirmed as real - as in, it is offering what it claims to offer, at a reasonable quality level, at the price it claims - then I am confused why it has any trouble being funded. This is yet another good reason for that.

I understand at least one good reason why there aren't more such hotels - actually doing a concrete physical world thing is hard and no one does it.

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T11:18:16.378Z · score: 10 (5 votes) · LW · GW

Note that Wei Dai also notes that he chose exit from academia, as did many others on Less Wrong and in our social circles (combined with surprising non-entry).

If this is the model of what is going on, that quality and useful research is much easier without academia, but academia is how one gains credibility, then destroying the credibility of academia would be the logical useful action.

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T11:14:18.792Z · score: 25 (9 votes) · LW · GW

I think these are (at least some of) the right questions to be asking.

The big question of Moral Mazes, as opposed to conclusions worth making more explicit, is: Are these dynamics the inevitable result of large organizations? If so, to what extent should we avoid creating large organizations? Has this dynamic ever been different in the past in other places and times, and if so why and can we duplicate those causes?

Which I won't answer here, because it's a hard question, but my current best guess on question one is: It's the natural endpoint if you don't create a culture that explicitly opposes it (e.g. any large organization that is not explicitly in opposition to being an immoral maze will increasingly become one, and things generally only get worse over time on this axis rather than better, unless you have a dramatic upheaval, which usually means starting over entirely). Also, the more the other large organizations around you are immoral mazes, the faster and harder such pressures will come, and the more you need to push back to stave them off.

My best guess on question two is: Quite a lot. At least right here, right now, any sufficiently large organization - be it a corporation, a government, a club or party, you name it - is going to end up with these dynamics by default. That means we should do our best to avoid working for or with such organizations, for our own sanity and health, and should count this as a high cost of such organizations existing and being in charge of things. That doesn't mean we can give up on major corporations or national governments without better options, which we don't currently have. But I do think there are cases where an organization with large economies of scale would be net positive absent these dynamics, but is net negative with them, and these dynamics should push us (and do push us!) towards relying less on economies of scale. And that this is worthwhile.

As for whether exit of academia is feasible at scale (in terms of who would do the research without academia), I'm not sure, but it is feasible on the margin for a large percentage of those involved (as opposed to exit from big business, which is at least paying those people literal rent in dollars, at the cost of anticipated experiences). It's also not clear that academia as it currently exists at scale is feasible at that scale. I'm not close enough to it, to be the one who should make such claims.


Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T11:00:58.050Z · score: 10 (3 votes) · LW · GW

Interesting. I am curious how widely endorsed this dynamic is, and what rules it operates by.

On two levels.

Level one is the one where some level of endorsement of something means that I'm making the accusations in it. At some of the levels at which this happens often in the wild, that is clearly reasonable; at other levels at which it happens often in the wild, it is clearly unreasonable.

Level two is that the OP doesn't make the claim that anyone is a bad person. I re-read the OP to check. My reading is this. It claims that they are engaging in bad actions, and that there are bad norms that seem to have emerged, that together are resulting in bad outcomes. And it argues that people are using bad justifications for that. And it importantly claims that these bad outcomes will be bad not only for 'science' or 'the world' but for the people that are taking the actions in question, who the OP believes misunderstand their own incentives, in addition to having false beliefs as to what impact actions will have on others, and sometimes not caring about such impacts.

That is importantly different from claiming that these are bad people.

Is it possible to say 'your actions are bad and maybe you should stop' or even 'your actions are having these results and maybe you should stop' without saying 'you are bad and you should feel bad'?

I actually am asking, because I don't know.

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T10:48:57.852Z · score: 10 (3 votes) · LW · GW

Thank you.

I read your steelman as importantly different from the quoted section.

It uses the weak claim that such action 'could be bad' rather than that it is bad. It also re-introduces the principle of being above average as a condition, which I consider mostly a distinct (but correlated) line of thought.

It changes the standard of behavior from 'any behavior that responds to local incentives is automatically all right' to 'behaviors that are above average and net helpful, but imperfect.'

This is an example of the kind of equivalence/transformation/Motte and Bailey I've observed, and am attempting to highlight - not that you're doing it, you're not, because this is explicitly a steelman, but I've seen it. The claim that it is reasonable to focus on meeting explicit targets rather than exclusively on what is illegibly good for the company, versus the claim that it cannot be blameworthy to focus exclusively on what you are locally personally incentivized to do - which in this case is meeting explicit targets and avoiding things you would be blamed for - no matter the consequence to the company (unless it would actually suffer enough to destroy its ability to pay you).

That is no straw man. In the companies described in Moral Mazes, managers do in fact follow that second principle, and will punish those seen not doing so. In exactly this situation.

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T10:36:23.779Z · score: 9 (2 votes) · LW · GW

Is your prediction that if it was common knowledge that police had permanently stopped pulling any cars over unless the car was at least 10 mph over the average driving speed on that highway in that direction over the past five minutes, in addition to being over the official speed limit, that average driving speeds would remain essentially unchanged?
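To be concrete about the hypothetical rule (this encoding and its numbers are my own illustration, not part of the original question):

```python
# A toy encoding of the proposed enforcement rule: a car is only pulled over
# if it is both over the posted limit AND at least 10 mph over the rolling
# average speed of traffic in that direction over the past five minutes.

def pulled_over(speed, posted_limit, avg_speed_last_5min):
    return speed > posted_limit and speed >= avg_speed_last_5min + 10

# If everyone knows this rule, the average itself is a moving target: as
# drivers speed up, the enforcement threshold rises with them.
print(pulled_over(speed=74, posted_limit=65, avg_speed_last_5min=72))  # False
print(pulled_over(speed=85, posted_limit=65, avg_speed_last_5min=72))  # True
```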

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-16T10:32:57.541Z · score: 12 (5 votes) · LW · GW

It's fair to say that fake data is a Boolean and a Rubicon, where once you do it once, at all, all is lost. Whereas there are varying degrees of misleading statistics versus clarifying statistics, and how one draws conclusions from those statistics, and one can engage in some amount of misleading without dooming the whole enterprise, so long as (as you note) the author is explicit and clear about what the data was and what tests were applied, so anyone reading can figure out what was actually found.

However, I think it's not that hard for it to pass a threshold where it's clearly fraud, although still a less harmful/dangerous fraud than fake data, if you accept that an opinion columnist cherry-picking examples is fraud (e.g. for it to be more fraudulent than that, especially if the opinion columnist isn't assumed to be claiming that the examples are representative). And I like that example more the more I think about it, because that's an example of where I expect to be softly defrauded, in the sense that I assume that the examples and arguments are soldiers, words written and chosen to make a point slash sell papers, rather than an attempt to create common knowledge and seek truth. If scientific papers are in the same reference class as that...

Press Your Luck

2019-06-15T15:30:00.702Z · score: 13 (6 votes)
Comment by zvi on No, it's not The Incentives—it's you · 2019-06-15T13:22:15.556Z · score: 39 (13 votes) · LW · GW
We all know that falsifying data is bad. But if that's the way the incentives point (and that's a very important if!), then it's also bad to call people out for doing it.

No. No. Big No. A thousand times no.

(We all agree with that first sentence, everyone here knows these things are bad, that's just quoted for context. Also note that everyone agrees that those incentives are bad and efficient action to change them would be a good idea.)

I believe the above quote is a hugely important crux. Likely it, or something upstream of it, is the crux. Thank you for being explicit here. I'm happy to know that this is not a straw man, that this is not going to get the Motte and Bailey treatment.

I'm still worried that such treatment will mostly occur...

There is a position, which seems to be increasingly held and openly advocated for, that if someone does something according to their local, personal, short-term amoral incentives, then this is, if not automatically praiseworthy (although I believe I have frequently seen this too, increasingly explicitly, but not here or by anyone in this discussion), at least immune from being blameworthy, no matter the magnitude of that incentive. One cannot 'call them out' on such action, even if such calling out has no tangible consequences.

I'm too boggled, and too confused about how one gets there in good faith, to figure out how to usefully argue against such positions in a way that might convince people who sincerely disagree. So instead, I'm simply going to ask the question: are there any others here who would endorse the quoted statement as written? Are there people who endorse the position in the above paragraph, as written? With or without an explanation as to why. Either, or both. If so, please confirm this.


Comment by zvi on No, it's not The Incentives—it's you · 2019-06-15T12:43:14.072Z · score: 11 (5 votes) · LW · GW
But that doesn't mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn't actually a very good way to get them to change.

I haven't said 'bad person' unless I'm missing something. I've said things like 'doing net harm in your career' or 'making it worse' or 'not doing the right thing.' I'm talking about actions, and when I say 'right thing' I mean shorthand for 'that which moves things in the directions you'd like to see' rather than any particular view on what is right or wrong to move towards, or what moves towards what, leaving those to the individual.

It's a strange but consistent thing that people's brains flip into assuming that anyone who thinks some actions are better than other actions is accusing those who don't take the better actions of being bad people. Or even, as you say, 'exceptionally bad' people.


Comment by zvi on No, it's not The Incentives—it's you · 2019-06-14T22:15:43.034Z · score: 19 (8 votes) · LW · GW
In a world where the norm is explicitly not to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I think this claim is a hugely important error.

One scientist unilaterally deciding to stop faking data isn't going to magically make the whole world come around. But the idea that it doesn't help? That failing to do so, and not only being complicit in others faking data but also faking data, doesn't make it worse?

I don't understand how one can think that.

That's not unique to the example of faking data. That's true of anything (at least partially) observable that you'd like to change.

One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.

But don't pretend it doesn't matter.

Similarly, I find it odd that one uses the idea that 'doing the right thing is not free' as what seems to be a justification for not doing the right thing. Yes, obviously when the right thing is free for you versus the not-right thing you should do the right thing. And of course being good is costly! Optimizing for anything is costly if you're not counting the thing itself as a benefit.

But the whole point of some things being right is that you do them even though it's not free, because It's Not The Incentives, It's You. You're making a choice.

Ideally we'd design a system where one not only cultivated the virtue of doing the right thing, and was rewarded for doing that, but was also rewarded in expectation for doing the right thing as often as possible. Doing the right thing is, in fact, a prime way of moving towards that.

Again, sometimes the cost of doing the otherwise 'right thing' gets too high. Especially if you can't coordinate on it. There are trade-offs. One can't do every good thing or never compromise.

But if there is one takeaway from Moral Mazes that everyone should have, it's a really, really simple one:

Being in a moral maze is not worth it. They couldn't pay you enough, and even if they could, they definitely don't. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.

If academia has become a moral maze, the same applies, except that the money was never good to begin with.

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-14T13:00:58.072Z · score: 16 (6 votes) · LW · GW

I almost wrote a reply to that post when it came up (but didn't because one should not respond too much when Someone Is Wrong On The Internet, even Scott), because this neither seemed like an economic perspective on moral standards, nor did it work under any equilibrium (it causes a moral purity cascade, or it does little, rarely anything in between), nor did it lead to useful actions on the margin in many cases as it ignores cost/benefit questions entirely. Strictly dominated actions become commonplace. It seems more like a system for avoiding being scapegoated and feeling good about one's self, as Benquo suggests.

(And of course, >50% of people eat essentially the maximum amount of quality meat they can afford.)

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-14T12:55:44.508Z · score: 9 (8 votes) · LW · GW

I am very surprised that you still endorse this comment on reflection, but given that you do, it's not unreasonable to ask: Given that most people lie a lot, that you think personally not eating meat is more important than not lying, your track record of actually not eating meat, and your claim that it's reasonable to be a 51st-percentile moral person, why should we then trust your statements to be truthful? Let alone in good faith. I mean, I don't expect you to lie, because I know you, but if you actually believed the above for real, wouldn't my expectation be foolish?

I'm trying to square your above statement and make it make sense for you to have said it and I just... can't?

Comment by zvi on No, it's not The Incentives—it's you · 2019-06-14T12:49:32.574Z · score: 34 (18 votes) · LW · GW

If you're an academic and you're using fake data or misleading statistics, you are doing harm rather than good in your academic career. You are defrauding the public, you are making our academic norms be about fraud, you are destroying public trust both in academia in particular and in knowledge in general, and you are creating justified reasons for this destruction of trust. You are being incredibly destructive to the central norms of how we figure things out about the world - and among the many things we figure out that way are whether or not it is bad to eat meat, and how we should uphold moral standards.

And you're doing it in order to extract resources from the public, and grab your share of the pie.

I would not only rather you eat meat. I would rather you literally go around robbing banks at gunpoint to pay your rent.

If one really, really did think that personally eating meat was worse than committing academic fraud - which boggles my mind, but supposing that - what the hell are you doing in academia in the first place, and why haven't you quit yet? Unless your goal now is to use academic fraud to prevent people from eating meat, which I'd hope is something you wouldn't endorse, and which is not what 99%+ of these people are doing. As the author of the OP points out, if you can make it in academia, you can make more money outside of it, and have plenty of cash left over for salads and for subsidizing other people's salads, if that's what you think life is about.


Comment by zvi on Some Ways Coordination is Hard · 2019-06-13T19:23:43.006Z · score: 5 (3 votes) · LW · GW

Edit: Nope, not wordplay, fixing it now.

Feels like an autocorrect problem, but could also be my brain caching it wrong.

Some Ways Coordination is Hard

2019-06-13T13:00:00.443Z · score: 44 (9 votes)
Comment by zvi on What is a good moment to start writing? · 2019-06-13T12:58:20.525Z · score: 6 (3 votes) · LW · GW

Now.

If you have an idea and are inspired to write, write. If you have an idea that you would understand better if you wrote it out, write. If you want to get better at writing, write. Now. Ideally, every day.

That's how one improves at writing. Also understanding. I learn a ton by trying to write stuff out.

Often I then trash what I'd written. Don't be afraid to do that, either. Writing and then discarding is not bad. It's not a waste of time.

If you write what you know now, then a year from now you know more, great! Write it again. Show people how you got from there to here.

When should you post what you've written versus not, once done writing (or at least ready for the editing pass)? That's trickier, but again, I think that the bigger and more common mistake is not posting. Most of the things you're worried might happen aren't even bad.


Comment by zvi on Quotes from Moral Mazes · 2019-06-06T11:13:12.868Z · score: 5 (3 votes) · LW · GW

It seems likely to me there was a key person at some point in the past who determined the core strategies of things like "maintain a full monopoly with highly inflated prices" and "pay off Hollywood producers to show engagement rings as this super important thing" and such, in ways that most others would not have, and that diamond profitability is through the roof versus the counterfactual. But I'm guessing that most of the credit for that was then reaped by subsequent managers (CEOs, VPs of Marketing, and so on down the line) of DeBeers, who were mostly cashing in on that good strategy.

Comment by zvi on Moral Mazes and Short Termism · 2019-06-05T21:41:40.545Z · score: 5 (3 votes) · LW · GW

Ah, I was worried about giving that impression, but wasn't able to find an elegant way to avoid it. I don't think this is right, although it's not that wrong, either - I think there is a large cluster of people/things that are effectively non-infected, although there are important differences, especially in how they interact with the infected and how vulnerable they are to infection. Once you are infected, there are definitely levels, which can be thought of as similar to the simulacra. One might, for example, be fine with and support (be infected by) level-3 behavior that pretends to have some reflection of reality but not be comfortable, yet, with level-4 behavior that does not so pretend.

Comment by zvi on FB/Discord Style Reacts · 2019-06-05T12:12:43.293Z · score: 5 (3 votes) · LW · GW

It's not that you don't have to track those points in the meeting as part of your decision. You definitely do. It's that if the primary reason you're doing anything in the meeting is so that you can maximize various point totals to seem like a good meeting-attender, then the meeting is no longer serving its original intended purpose and you're stuck in a signaling nightmare (and likely a moral maze). Remember that (almost) everyone hates meetings and wants to avoid them. Being caught in a continuous permanently-available meeting of that type seems like something to avoid.

I do agree that we want there to be jokes when they are high-value and not when they are low-value, like most other things, but I'd like this to be about questions like "will this help this discussion accomplish something worthwhile and illustrate the questions involved?" and "is it funny and therefore Worth It to tell this?"

In terms of the answer I gave earlier, I totally stand by that - losing a little karma is a (small) price, consisting of a negative dopamine hit and a small hit to total karma, and the karma gives the message that the question was annoying, so they can update that they're imposing real costs; sometimes it's worth imposing real costs and taking small status hits to do things anyway. I was more pushing back against the idea that "If you get negative karma on a post/comment you should react as if this is a crisis situation and you are bad and should feel bad."

Comment by zvi on Asymmetric Justice · 2019-06-05T12:04:04.090Z · score: 2 (1 votes) · LW · GW

Huh. Funny no one caught that until now. Edited.

Comment by zvi on FB/Discord Style Reacts · 2019-06-04T15:58:06.273Z · score: 12 (4 votes) · LW · GW

If I'm deciding whether to post a comment, and my worry is an impact on my long term karma versus how many little dopamine hits I'll get from reactions, that feels like exactly the types of questions I want to avoid in my life. If either of these things is driving my decision, rather than what would help build knowledge or be useful to myself and others, then I'd consider it time to pack it in and stop posting entirely.

Comment by zvi on Moral Mazes and Short Termism · 2019-06-04T15:13:25.684Z · score: 15 (5 votes) · LW · GW

As Benquo notes, I think the detailed anecdotes are good evidence, and they match my experiences in business and what I know of other businesses. But of course, no one has successfully 'run a study' of the question, nor would I expect attempts at such a study to get to the heart of the question effectively.

Agreed there are traits X where people with X tend to want those around them to have less X or contrasting trait Y, the most amusing one (in many important contexts, but in far from all contexts) being chromosomes where they're literally X and Y.

Your examples show things I'm not clear about. People do want sympathetic others around, even for 'bad' traits, and often view those sharing those traits as 'their kind of people,' and often as 'winners.'

I'm not sure if dominant people typically want others to be more dominant or more submissive. Certainly they want specific others to be submissive so they can dominate them, but they tend to feel kinship and friendship with, and form alliances with, other dominants, and generally promote the idea of dominance, whether or not they also promote submission, in my experience/observation. I believe they tend to strongly favor people who think that dominants should control submissives, whether that person is dominant or submissive themselves, over those who think everyone should be equal.

Greedy people want to succeed in their greed, so they want those they are directly competing with or asking for things to be generous and less greedy, but this gets murky fast with other relationships. Greedy people tend to extol the virtues of greed, at least to their friends and allies, telling them to be more greedy when dealing with others, and see those exhibiting greed as good - see capitalists supporting greedy competition, or traders such as Gordon Gekko, who says literally "Greed is good," but also many others. This should not be confused with wanting to experience generous acts in direct exchanges. Think of it this way. If you were greedy and your rival was generous, who would you want picking between you, a greedy person or a generous person? What if you were generous and your rival greedy?

I hope that helps share my intuitions a bit more?

Comment by zvi on Moral Mazes and Short Termism · 2019-06-03T12:29:24.193Z · score: 15 (6 votes) · LW · GW

Definitely no honor among them thieves. They can and do betray each other all the time. But common interests, and ease of understanding. I think it's hard for people like us to get inside this other mindset properly.

Dishonest others will tend to reinforce dishonesty-rewarding norms (and other thieves will tend to do things that make stealing a better idea). They will be easier for other dishonest people to understand, and thus this will yield comparative advantage dealing with them. Most importantly, they will 'play ball.' If you deal with an honest person, they will hold you to standards of honesty, and value being honest and you being honest over what is locally expedient. You don't want that. You definitely don't want that for a coworker.

You prefer an honest underling because you intend to act in an honest way yourself. For a dishonest person, an honest underling won't be loyal or trustworthy, and is likely to refuse to play ball with what you want. They'll want explicit orders, they won't understand what you need, they'll have moral objections, they'll tell the truth when asked by others, and so on. You want obedience and loyalty. Yes, it's frustrating that given the opportunity most managers will stab you in the back, but they think of this as "hate the game, not the player." Besides, if they weren't willing to do this, they wouldn't be willing to backstab others, so they wouldn't be a good ally.

Also, if you don't care about object-level accomplishment, then honesty loses a lot of its edges.

In terms of doing business with them, a dishonest person to work with will help you advance your agenda at the expense of your coworkers and corporation (and the public), will be fine with and even help with deceptive practices, and understand your needs better, allowing you to get what you really want while avoiding being explicit. You also get a comparative advantage, since dissimilar actors won't know how to handle the situation.

Comfortable is a term of art, here, of a sort - it means roughly that one is confident this person can exhibit basic competence and understanding, and will do what is expedient, in ways that are unlikely to get you scapegoated. Or that the situation won't lead to same, as you have a plan to avoid this. You're uncomfortable when you worry this person or situation will cause you to become scapegoated.

Comment by zvi on Habryka's Shortform Feed · 2019-06-02T11:36:51.491Z · score: 5 (3 votes) · LW · GW

More than fine. Please do post a version on its own. A lot of strong insights here, and where I disagree there's good stuff to chew on. I'd be tempted to respond with a post.

I do think this has a different view of integrity than I have, but in writing it out, I notice that the word is overloaded and that I don't have as good a grasp of its details as I'd like. I'm hesitant to throw out a rival definition until I have a better grasp here, but I think the thing you're in accordance with is not beliefs so much as principles?

Moral Mazes and Short Termism

2019-06-02T11:30:00.348Z · score: 62 (18 votes)
Comment by zvi on FB/Discord Style Reacts · 2019-06-02T01:28:00.373Z · score: 18 (3 votes) · LW · GW

I have never used them in team Slack or Discord, and also haven't been tempted to do so. I mean, I can just type stuff, that's what I'm there for, and we already had emoticons, so I don't really see the point?

Comment by zvi on FB/Discord Style Reacts · 2019-06-02T01:25:28.693Z · score: 8 (4 votes) · LW · GW

Example-specific note, but perhaps not a coincidence: Person C's comment seems like it's not a great thing to say on LW. Almost as if someone is using social pressure slash arguing from authority that isn't even theirs. If we get into a question of which high-status people are clicking like on which things, that seems very bad.

And of course, hidden information is everywhere all the time, social and otherwise, so these aren't new pathologies.

My default would be to have a norm that this is indeed private information, and if Person A wants B's backup in public they can ask for it.

Comment by zvi on FB/Discord Style Reacts · 2019-06-02T00:47:01.907Z · score: 24 (10 votes) · LW · GW

Reactions will raise the toxicity and blight levels of LessWrong. Non-anonymous ones would raise it more.

I have large uncertainty about the magnitude of these effects.

But I do know that I try very, very hard to never use reacts on social media to (among other reasons) avoid there being information in my failure to react to things or any pressure to see things in order to react to them.

Comment by zvi on FB/Discord Style Reacts · 2019-06-02T00:40:33.936Z · score: 8 (5 votes) · LW · GW

"Agree with central point but logic flawed."

Comment by zvi on FB/Discord Style Reacts · 2019-06-02T00:39:30.461Z · score: 8 (5 votes) · LW · GW

Brainstorm question: Are we sure this type of feedback needs/wants to be public? I see a mode where it would be helpful to know the reason, but where having the reason by default stamped onto posts is even more demotivating than not knowing.

Not sure how this interacts with possibly being non-anonymous.

Comment by zvi on FB/Discord Style Reacts · 2019-06-02T00:36:12.103Z · score: 7 (4 votes) · LW · GW

"Important."

Comment by zvi on Feedback Requested! Draft of a New About/Welcome Page for LessWrong · 2019-06-02T00:29:37.381Z · score: 6 (3 votes) · LW · GW

I will note that even now I am not entirely clear on a number of these questions.

Quotes from Moral Mazes

2019-05-30T11:50:00.489Z · score: 78 (20 votes)

Laws of John Wick

2019-05-24T15:20:00.322Z · score: 21 (9 votes)

More Notes on Simple Rules

2019-05-21T14:50:00.305Z · score: 34 (10 votes)

Simple Rules of Law

2019-05-19T00:10:01.124Z · score: 50 (13 votes)

Tales from the Highway

2019-05-12T19:40:00.862Z · score: 15 (7 votes)

Tales From the American Medical System

2019-05-10T00:40:00.768Z · score: 56 (30 votes)

Dishonest Update Reporting

2019-05-04T14:10:00.742Z · score: 55 (14 votes)

Asymmetric Justice

2019-04-25T16:00:01.106Z · score: 145 (51 votes)

Counterfactuals about Social Media

2019-04-22T12:20:00.476Z · score: 54 (20 votes)

Reflections on Duo Standard

2019-04-18T23:20:01.037Z · score: 8 (1 votes)

Reflections on the Mythic Invitational

2019-04-17T11:50:00.315Z · score: 11 (3 votes)

Deck Guide: Biomancer’s Familiar

2019-03-26T15:20:00.420Z · score: 5 (4 votes)

Privacy

2019-03-15T20:20:00.269Z · score: 79 (26 votes)

Speculations on Duo Standard

2019-03-14T14:30:00.343Z · score: 10 (6 votes)

New York Restaurants I Love: Pizza

2019-03-12T12:10:01.002Z · score: 11 (6 votes)

On The London Mulligan

2019-03-05T21:30:00.662Z · score: 5 (6 votes)

Blackmail

2019-02-19T03:50:04.606Z · score: 67 (28 votes)

New York Restaurants I Love: Breakfast

2019-02-14T13:10:01.072Z · score: 9 (7 votes)

Minimize Use of Standard Internet Food Delivery

2019-02-10T19:50:00.866Z · score: -21 (4 votes)

Book Trilogy Review: Remembrance of Earth’s Past (The Three Body Problem)

2019-01-30T01:10:00.414Z · score: 47 (20 votes)

Game Analysis Index

2019-01-21T15:30:00.371Z · score: 13 (4 votes)

Less Competition, More Meritocracy?

2019-01-20T02:00:00.974Z · score: 81 (24 votes)

Disadvantages of Card Rebalancing

2019-01-06T23:30:08.255Z · score: 33 (7 votes)

Advantages of Card Rebalancing

2019-01-01T13:10:02.224Z · score: 9 (2 votes)

Card Rebalancing and Economic Considerations in Digital Card Games

2018-12-31T17:00:00.547Z · score: 14 (5 votes)

Card Balance and Artifact

2018-12-28T13:10:00.323Z · score: 9 (2 votes)

Card Collection and Ownership

2018-12-27T13:10:00.977Z · score: 19 (5 votes)

Artifact Embraces Card Balance Changes

2018-12-26T13:10:00.384Z · score: 12 (3 votes)

Fifteen Things I Learned From Watching a Game of Secret Hitler

2018-12-17T13:40:01.047Z · score: 13 (8 votes)

Review: Slay the Spire

2018-12-09T20:40:01.616Z · score: 14 (9 votes)

Prediction Markets Are About Being Right

2018-12-08T14:00:00.281Z · score: 81 (26 votes)

Review: Artifact

2018-11-22T15:00:01.335Z · score: 21 (8 votes)

Preschool: Much Less Than You Wanted To Know

2018-11-20T19:30:01.155Z · score: 68 (23 votes)

Deck Guide: Burning Drakes

2018-11-13T19:40:00.409Z · score: 9 (2 votes)

Octopath Traveler: Spoiler-Free Review

2018-11-05T17:50:00.986Z · score: 12 (4 votes)

Linkpost: Arena’s New Opening Hand Rule Has Huge Implications For How We Play the Game

2018-11-01T12:30:00.810Z · score: 13 (4 votes)

The Art of the Overbet

2018-10-19T14:00:00.518Z · score: 58 (25 votes)

The Kelly Criterion

2018-10-15T21:20:03.430Z · score: 60 (28 votes)

Additional arguments for NIMBY

2018-10-11T20:40:05.547Z · score: 35 (11 votes)

Eternal: The Exit Interview

2018-10-10T16:50:02.776Z · score: 12 (3 votes)

Apply for Emergent Ventures

2018-09-13T21:50:00.295Z · score: 45 (17 votes)

On Robin Hanson’s Board Game

2018-09-08T17:10:00.263Z · score: 55 (17 votes)

You Play to Win the Game

2018-08-30T14:10:00.279Z · score: 26 (10 votes)

Unknown Knowns

2018-08-28T13:20:00.982Z · score: 105 (46 votes)

Chris Pikula Belongs in the Magic Hall of Fame

2018-08-22T21:10:00.448Z · score: 28 (17 votes)