Tal Yarkoni: No, it's not The Incentives—it's you

post by Zack_M_Davis · 2019-06-11T07:09:16.405Z · LW · GW · 119 comments

This is a link post for https://www.talyarkoni.org/blog/2018/10/02/no-its-not-the-incentives-its-you/

Neuroscientist Tal Yarkoni denounces the tendency of many of his colleagues to appeal to publish-or-perish incentives as an excuse for sloppy science (October 2018, ~4600 words). Perhaps read it as a complement to our recent [LW · GW] discussion [LW · GW] of Moral Mazes?

119 comments

Comments sorted by top scores.

comment by Laura B (Lara_Foster) · 2019-06-16T15:58:28.939Z · LW(p) · GW(p)

There is a lot of arguing in the comments about what the 'tradeoffs' are for individuals in the scientific community and whether making those tradeoffs is reasonable. I think what's key in the quoted article is that fraudsters are trading so much for so little. They are actively obscuring and destroying scientific progress while contributing to the norm of obscuring and destroying scientific progress. Potentially preventing cures to diseases, time- and life-saving technology, etc. This is REALLY BAD. And for what? A few dollars and an ego trip? An 11% instead of a 7% chance at a few dollars and an ego trip? I do not think it is unreasonable to judge this behavior as reprehensible, regardless of whether it is the 'norm'.

Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value. If 100% of scam artists steal people's money, I don't forgive a scam artist for stealing less money than the average scam artist. They are not 'making things better' by in theory reducing the average amount of money stolen per scam artist. They are still stealing money. DO NOT BECOME A SCAM ARTIST IN THE FIRST PLACE. If academia is all a scam, then that is very sad, but it does not make it ok for people to join in the scam and shrug it off as a norm.

And being fraudulent in science is SO MUCH WORSE than just being an ordinary scam artist who steals money. It's more like being a scam artist who takes money in exchange for poisoning the water and land with radioactive waste. No, it's not ok because other people are doing it.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-06-24T17:55:16.248Z · LW(p) · GW(p)
Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value.

This seems to imply that you think that the world would be better off without academia at all. Do you endorse that?

Perhaps you only mean that if the world would be better off without academia at all, and nearly everyone in it is net negative / destroying value, then no one could justify joining it. I can agree with the implication, but I disagree with the premise.

Replies from: Benito
comment by Ben Pace (Benito) · 2019-11-27T01:52:14.274Z · LW(p) · GW(p)

I mean, there's a spectrum here. At what point should you avoid joining an institution out of principle? Is it only when the institution is net negative? I think that if your primary goal will be to fix the institution, then it can still be right to join even if it's net negative. But if you find out that your institution has broken its promises and principles, even if it's mostly moving in a positive direction, and especially if it's crowding out others doing the work (indeed, it's hard to get respect as a scientist in modern society if you don't have a PhD), then I think that can be the sign to leave in protest and not cooperate with it, even while it's locally net positive and there isn't a clear alternative.

(This reminds me of Zvi unilaterally leaving Facebook, even though Facebook has a coordination advantage. I'm glad Zvi left Facebook, and it helped me leave Facebook, but I couldn't have predicted that very directly at the time, and I don't think Zvi was in a position to either. Naive consequentialism is very difficult, because modelling the effects of your social decisions is incredibly hard, and one of the key advantages of deontological recommendations is that they tell you to do the right thing even when you're not in a position to compute all its effects.)

It seems like a fair hypothesis to me that academia has lied enough about whether it knows the truth, whether it has privileged access to truth, and whether its people are doing good work, that it should not be 'joined' but instead 'quit, and get to work on an alternative'.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-11-27T02:58:32.578Z · LW(p) · GW(p)

Tbc, in the grandparent I was responding to the specific sentence I quoted, which seems to me to be making a bold claim that I think is false. It's of course possible that the correct action is still "leave academia", but for a different reason, like the one you gave.

Re:

it should not be 'joined' but instead 'quit, and get to work on an alternative'.

That depends pretty strongly on what the alternative is. Suppose your goal is for more investigation of speculative ideas that may or may not pan out, so that humanity figures out true and useful things about the world. It's not clear to me that you can do significantly better than current academia, even if you assume that everyone will switch from academia to your new institution.

And of course, people in academia are selected for being good at academic jobs, and may not be good at building institutions. Or they may hate all the politicking that would be required for an alternative. Or they might not particularly care about impact, and just want to do research because it's fun. All of which are reasons you might join academia rather than quit and work on an alternative, and it's "morally fine".

comment by Rohin Shah (rohinmshah) · 2020-12-29T19:26:15.484Z · LW(p) · GW(p)

I do not like this post. I think it gets most of its rhetorical oomph from speaking in a very moralizing tone, with effectively no data, and presenting everything in the worst light possible; I also think many of its claims are flat-out false. Let's go through each point in order.

1. You can excuse anything by appealing to The Incentives

No, seriously—anything. Once you start crying that The System is Broken in order to excuse your actions (or inactions), you can absolve yourself of responsibility for all kinds of behaviors that, on paper, should raise red flags. Consider just a few behaviors that few scientists would condone:

  • Fabricating data or results
  • Regularly threatening to fire trainees in order to scare them into working harder
  • Deliberately sabotaging competitors’ papers or grants by reviewing them negatively

Wow, that would be truly shocking; indeed that would be truly an indictment of academia. What's the evidence?

When Diederik Stapel confessed to fabricating the data used in over 50 publications, he didn’t explain his actions by saying “oh, you know, I’m probably a bit of a psychopath”; instead, he placed much of the blame squarely on The Incentives:

... Did you expect people who were caught doing bad things to say "yup, I'm a terrible person?" Also, a single extreme case does not make the rule. I can only assume you're going to say that all the academics accepted this excuse; that would be an indictment of academia...

Curiously, I’ve never heard any of my peers—including many of the same people who are quick to invoke The Incentives to excuse their own imperfections—seriously endorse The Incentives as an acceptable justification for Stapel’s behavior. In Stapel’s case, the inference we overwhelmingly jump to is that there must be something deeply wrong with Stapel, seeing as the rest of us also face the same perverse incentives on a daily basis, yet we somehow manage to get by without fabricating data.

... I guess not. Well, point 1 is deeply unconvincing. (I also think it is false, just from my own experience in academia.) Let's hope the others do better.

2. It would break the world if everyone did it

I abhor arguments of this form; I don't think I've ever seen one used well. It would also break the world if everyone ate food but didn't farm; are you going to become a farmer now?

The actual content of the section is a bit different from the heading:

To be clear, I’m not saying perverse incentives never induce bad behavior in medicine or other fields. Of course they do. My point is that practitioners in other fields at least appear to have enough sense not to loudly trumpet The Incentives as a reasonable justification for their antisocial behavior—or to pat themselves on the back for being the kind of people who are clever enough to see the fiendish Incentives for exactly what they are.

... So academics are bad because they know that the bad incentives are there, and are willing to talk about them? Do you prefer the world in which they don't know the incentives are there or don't talk about them but still act in accordance with them? What?

we do seem to have collectively developed a rather powerful form of learned helplessness that doesn’t seem to be matched by other communities. Which is a fortunate thing, because if every other community also developed the same attitude, we would be in a world of trouble.

If true, that seems bad. I think it is mostly false; no evidence was given in the post. I feel like the author just sees more of academia than other fields; I don't get the sense that doctors, lawyers, journalists etc. are valiantly fighting off the bad incentives.

Okay, well, onwards to point 3:

3. You are not special

[...]

Well yeah, people actually do suffer. There are many scientists who are willing to do the right things—to preregister their analysis plans, to work hard to falsify rather than confirm their hypotheses, to diligently draw attention to potential confounds that complicate their preferred story, and so on. When you assert your right to opt out of these things because apparently your publications, your promotions, and your students are so much more important than everyone else’s, you’re cheating those people.

This seems to be an argument that most of the academics are truthful, and only a few bad actors are ruining it for everyone else. I certainly agree that if there are a few bad actors violating academic norms, those people are doing something bad / should be punished. But I thought the claim was that nearly all academics are doing bad things? If so, you can't reasonably argue that these people are being "special".

You can't be willfully blind to norms and then claim that people following the norms are bad and think they are "special".

Let's say you want to smoke pot, and you're in an area where there's a law against smoking pot. However, no one enforces it and everyone smokes pot all the time. Perhaps if everyone stopped smoking pot, you could make the case that smoking pot isn't addictive and get the law overturned. It is incorrect to blame everyone else for smoking pot, or to say that they are defecting, even though it is true that the law could be reversed if everyone stopped smoking pot.

If the claim is actually that there are only a few bad actors, then I assume the author must be talking about the especially bad cases like fabricating data. I expect that when caught such cases really are shamed, and you can't get out of it by saying "Oh, The Incentives".

4. You (probably) have no data

It’s telling that appeals to The Incentives are rarely supported by any actual data. It’s simply taken for granted that engaging in the practice in question would be detrimental to one’s career.

This is a clear isolated demand for rigor [LW · GW]. (Ironically, this post itself has no data, making it even clearer than usual.)

Coming by the kind of data you’d need to answer this question is actually not that easy: it’s not enough to reflexively point to, say, the fact that some journals have higher impact factors than others. To identify the utility-maximizing course of action, you’d need to integrate over both benefits and costs, and the costs are not always so obvious.

I can only assume that the author decides when to shower, what to eat, what transit to use, etc. by simulating out all the possible consequences of each possible decision several years into the future, aggregating the consequences according to their personal utility function, and then choosing the one that is best. I must applaud the author for their immense brainpower; unfortunately we mere mortals can't do this ourselves and need to rely on heuristics.

Sarcasm aside: I think it is perfectly possible to make good decisions without having all the available data. We do this all the time in lots of different scenarios. Why expect that people can't do it here as well?

5. It (probably) won’t matter anyway

Outcomes in academia are multiply determined and enormously complex. You can tell yourself that getting more papers out faster will get you a job if it makes you feel better, but that doesn’t make it true. If you’re a graduate student on the job market these days, I have sad news for you: you’re probably not getting a tenure-track job no matter what you do. It doesn’t matter how many p-hacked papers you publish, or how thinly you slice your dissertation into different “studies”; there are not nearly enough jobs to go around for everyone who wants one.

Conditions of extreme competition over scarce resources are exactly where you expect the most conformance to the incentives.

Suppose you’re right, and your sustained pattern of corner-cutting is in fact helping you get ahead. How far ahead do you think it’s helping you get? Is it taking you from a 3% chance of getting a tenure-track position at an R1 university to an 80% chance? Almost certainly not. Maybe it’s increasing that probability from 7% to 11%; that would still be a non-trivial relative increase, but it doesn’t change the fact that, for the average grad student, there is no full-time faculty position waiting at the end of the road.

I'll note that these are once again claims with no data. I think it's false for the people who follow The Incentives who do become professors; I would not be surprised if they went from 20% to 80%, which seems like a substantial change. I agree that on average it's probably not very high, because a lot of people are not going to become professors regardless (i.e. maybe they go from 2% to 3%), but it's hard to tell from the inside which of these situations you're in.

6. You’re (probably) not going to “change things from the inside”

Over the years, I’ve talked to quite a few early-career researchers who have told me that while they can’t really stop engaging in questionable research practices right now without hurting their career, they’re definitely going to do better once they’re in a more established position.

[...]

I can think of at least a half-dozen people off-hand who’ve regaled me with some flavor of “once I’m in a better position” story, and none of them, to my knowledge, have carried through on their stated intentions in a meaningful way.

I totally agree with this point.

7. You’re not thinking long-term

[...] One thing that I think has been largely overlooked in discussions about the current incentive structure of science is what impact the replication crisis will have on the legacies of a huge number of presently famous scientists.

I’ll tell you what impact it will have: many of those legacies will be completely zeroed out. And this isn’t just hypothetical scaremongering. It’s happening right now to many former stars of psychology (and, I imagine, other fields I’m less familiar with).

[...] So if your justification for cutting corners is that you can’t otherwise survive or thrive in the present environment, you should consider the prospect—and I mean, really take some time to think about it—that any success you earn within the next 10 years by playing along with The Incentives could ultimately make your work a professional joke within the 20 years after that.

I am pretty uncertain about this point. Most people do not become famous and have their work scrutinized in detail. I agree that long-term considerations do point in the direction of bucking the incentives; I just don't know how strongly.

8. It achieves nothing and probably makes things worse

(The "It" here is "complaining about incentives", not "following the incentives".)

If your complaints are achieving anything at all, they’re probably actually making things worse by constantly (and incorrectly) reminding everyone around you about just how powerful The Incentives are. Here’s a suggestion: maybe try not talking about The Incentives for a while.

I don't really see the point here. Surely fixing the problem involves first creating common knowledge about the problem? But in any case, this seems to be orthogonal to the main point (that following the incentives is bad), so I'm going to skip it.

9. It’s your job

This last one seems so obvious it should go without saying, but it does need saying, so I’ll say it: a good reason why you should avoid hanging bad behavior on The Incentives is that you’re a scientist, and trying to get closer to the truth, and not just to tenure, is in your fucking job description. Taxpayers don’t fund you because they care about your career; they fund you to learn shit, cure shit, and build shit.

This same complaint can be leveled at most of society. You don't get to choose a job description and then have people faithfully execute that job description; what you get depends on the incentives you give.

Would the world be better if you could just give a job description and have people faithfully do that? Probably. Does the world work that way? No. Just yelling "I want you to do X, therefore you should do X" seems counterproductive and willfully blind about how the world works.

----

I do not know what people get out of this post; the arguments given do not remotely support the conclusions, with a couple of exceptions. If I had to guess some effects that would make people upvote / nominate the post:

  1. Powerfully suggested that people should bear responsibility for bucking the incentives
  2. Written as a rant, and so was fun to read
  3. Anti-academia, which agrees with the reader's beliefs
  4. Pro-principles, which agrees with the reader's beliefs / morals

(I am not claiming that people consciously thought about these reasons.)

I'll ignore 2, 3, and 4; I think it's obvious why those are not reasons we want to promote on LW.

I think the first one could plausibly be a reason that we would want to promote this on LW. Unfortunately, I think it is wrong: I do not think that people should usually feel upon themselves the burden of bucking bad incentives. There are many, many bad incentives in the world; you cannot buck them all simultaneously and make the world a better place. Rather, you need to conform with the bad incentives, even though it makes your blood boil, and choose a select few areas in which you are going to change the world, and focus on those.

(Tbc, this is not a recommendation to p-hack; that is not the norm in academia. The norm in academia was to do other questionable things that in effect constitute p-hacking, but it was never to explicitly p-hack. And that norm might be changing at this point, I don't know. Personally, while there are questionable academic practices that I follow when writing papers, such as including all the pros of a proposal in the introduction but ignoring the cons, I try to make my experiments as un-p-hacked as I can, though I have the advantage of not seeking a job in academia, as well as a good network, so incentives affect me less than they do most academics.)

Replies from: habryka4, fiddler
comment by habryka (habryka4) · 2021-01-13T18:51:25.728Z · LW(p) · GW(p)

I agree with most of this review, and also didn't really like this post when it came out.

I think the first one could plausibly be a reason that we would want to promote this on LW. Unfortunately, I think it is wrong: I do not think that people should usually feel upon themselves the burden of bucking bad incentives. There are many, many bad incentives in the world; you cannot buck them all simultaneously and make the world a better place. Rather, you need to conform with the bad incentives, even though it makes your blood boil, and choose a select few areas in which you are going to change the world, and focus on those.

Just for the record, and since I think this is actually an important point, my perspective is that indeed people cannot take on themselves the burden of bucking all bad incentives, but that there are a few domains of society where following these bad incentives is much worse than in others, and where I currently expect the vast majority of contributors to be net-negative participants because of those incentives (and as such, establishing standards of "deal with it or leave it" is a potentially reasonable choice).

I think truth-seeking institutions are one of those domains, and that in those places, slightly bad incentives seem to have larger negative effects, and also that it is very rarely worth gaining other resources in exchange for making your truth-seeking institutions worse.

For almost any other domain of the world (with the notable exception of institutions that are directly responsible for handling highly dangerous technologies), I am much less worried about incentives and generally wouldn't judge someone very much for conforming to most of them. 

comment by fiddler · 2020-12-29T22:39:38.725Z · LW(p) · GW(p)

Note that this review is not of the content that was nominated; nomination justifications strongly suggest that the comment section, not the linkpost, was nominated.

Replies from: Raemon, rohinmshah
comment by Raemon · 2020-12-29T23:12:22.005Z · LW(p) · GW(p)

I think the comments are in large part about the post, though, and it matters a lot whether the post is wrong or misleading.

I also think that, while this post wouldn't be eligible for the 2019 Review, an important point of the overall review process is still to have a coordinated time where everyone evaluates posts that have permeated the culture. I think this review is quite valuable along those lines.

Replies from: fiddler
comment by fiddler · 2020-12-30T00:57:46.617Z · LW(p) · GW(p)

That’s fair; I wasn’t disparaging the usefulness of the comment, just pointing out that the post itself is not actually what’s being reviewed, which is important, because it means that a low-quality post that sparks high-quality discussion isn’t disqualifying.

comment by Rohin Shah (rohinmshah) · 2020-12-30T00:11:06.730Z · LW(p) · GW(p)

As I read it, two of the nominations are for the post itself, and one is for the comments...

...is what I was going to say until I checked and saw that this comment [LW(p) · GW(p)] is a review, not a nomination. So one is for the post, and one for the comments.

----

I agree with Raemon that even if the nomination is for the comments, evaluating the post is important. I actually started writing a section on the comments, but didn't have that much to say, because they all seem predicated on the post stating something true about the world.

The highest-voted top-level comment [LW(p) · GW(p)], as well as Zvi's position in this comment thread [LW(p) · GW(p)], seem to basically be considering the case where academia as a whole is net negative. I broadly agree with Zvi that it is not acceptable for an academic to go around faking data; if that were the norm in academia I expect I would think that academia was net negative and one could not justify joining it (unless you were going to buck the incentives). But... that isn't the norm in academia. I feel like these comments are only making an important point if you actually believe the original post, which I don't. The other comments seem to have only a little content, or to be on relatively tangential topics.

Replies from: fiddler
comment by fiddler · 2020-12-30T00:59:40.454Z · LW(p) · GW(p)

That’s a fair point; see my comment to Raemon. The way I read it, the mod consensus was that we can’t just curate the post, meaning that comments are essentially the only option. To me, this means an incorrect/low-quality post isn’t disqualifying, which doesn’t decrease the utility of the review; it just changes the frame under which it should be interpreted.

comment by Raemon · 2021-01-11T03:36:26.748Z · LW(p) · GW(p)

The discussion around It's Not the Incentives, It's You, was pretty gnarly. I think at the time there were some concrete, simple mistakes I was making. I also think there were 4-6 major cruxes of disagreement between me and some other LessWrongers. The 2019 Review seemed like a good time to take stock of that.

I've spent around 12 hours talking with a couple of people who thought I was mistaken and/or harmful last time, and then 5-10 hours writing this up. And I don't feel anywhere near done, but I'm reaching the end of the timebox, so here goes.

Core Claims

I think this post and the surrounding commentary (at least on the “pro” side) was making approximately these claims:

A. You are obligated to buck incentives. You might be tempted sometimes to blame The Incentives rather than take personal responsibility for various failures of virtue (epistemic or otherwise). You should take responsibility. 

B. Academia has gotten more dishonest, and academics are (wrongly) blaming “The Incentives” instead of taking responsibility.

C. Epistemics are the most important thing. Epistemic Integrity is the most important virtue. Improving societal epistemics is the top cause area. 

  • Possible stronger claim: Lying or manipulating data on the public record is (sometimes? often?) worse than state-sanctioned killing.

D. Special Academic Responsibility: Scientists/Academics have a special responsibility to never be dishonest about their work, and/or to be proactively extremely honest, due to the nature of their profession, independent of the current standards of their profession. If they can’t, they should leave.

E. The previous points are not just correct, but obviously correct.

This is just a short summary, and the claims are necessarily simplified. I think if you add lots of hedging words like “often”, and “mostly”, some of the claims become less controversial. 

But I disagreed a lot with some people on how much to weight them. It seemed to me they were leaning a lot more towards absolutism, in a domain that seemed to me to be legitimately murky and confusing.

Quick tl;dr on my takes on those claims: 

Re: Bucking Incentives: You should buck incentives at least some of the time, and you should allocate a lot of attention to "notice when incentives are pressuring you, and think about what you really value." 

I'm not sure about "exactly how much" and "when?".

Re: Academia is more dishonest: I’m mostly agnostic on this. Important if true. Rohin argues with this claim, and I think people who want to debate it should respond to him.

Re: Epistemics are Most Important Thing: The previous discussion actively changed my mind on this. Previously I believed “epistemics are maybe in the top 5 causes”. I now think that Epistemics are plausibly the very top cause and most important virtue. At the very least, they are much more important than they seemed to me at the time. (Note that I still maintain that there are degrees of freedom in "how exactly do you accomplish good epistemics", which truthseekers can disagree on [LW · GW])
 
I am still somewhat confused about "lying on public record vs state-sanctioned killing". I now think it is plausible. But it is a pretty intense claim (which I think most of society disagrees with). I am still mulling it over, and am unclear on some details and not even sure what the people I've argued with believe about it.

RE: Special Academic Responsibility: IMO this depends a lot on the current state of academia, and what its onboarding culture is like. 

RE: "This is all obvious": This is importantly false, and this is what I was mostly arguing against last year, and still argue against now.

I have thoughts on individual pieces of this, which I'll write as followup comments as I find time.

Replies from: Raemon, Raemon, rohinmshah
comment by Raemon · 2021-01-11T03:38:24.473Z · LW(p) · GW(p)

Personal Anecdote:

"It wasn't the Incentives. It was me." 

The forceful, moralizing tone of the article was helpful for me to internalize that I need the skill of noticing, and then bucking, incentives.

Just a few days ago, on Dec 31st, I found myself trying to rush an important blogpost out before 2020 ended, so it could show up next year in the 2020 LW Review. I found myself writing to some people, tongue-in-cheekly saying “Hey guys, um, the incentives say I should try to publish this today. Can you give feedback on it, and/or tell me that it’s not ready and I should take more time?”

And… well, sure I can hide behind the tongue-in-cheekness. And, "Can you help review a blogpost?" is a totally reasonable thing to ask my friends to do. 

But, also, as I clicked ‘send’ on the email, I felt a little squirming in my heart. Because I knew damn well the post wasn’t ready. I was just having trouble admitting it to myself because I’d be sad if it were delayed a year from getting into the next set of LW Books. And this was a domain where I literally invented the incentives I was responding to.

It was definitely not the Incentives, It Was Me.

I still totally should have asked my friends for help here. But I knew the answer to my primary question “is it shippable today?”. So I didn’t need to impose any urgency on their help. 

This is sticking out in my mind, not because the local instance was very important, but because the moral muscle of noticing a principle in the moment, and applying it, is pretty important. Someday there will be a higher stakes thing where this matters more, and I was disappointed in myself for not getting the answer right in the low-stakes case. 

I was able to notice at all, just a little too late, in large part due to this post. I hope to do better next time. 

comment by Raemon · 2021-01-11T03:54:47.358Z · LW(p) · GW(p)

Framing disagreements

Cognitive processes vs right answers; Median vs top thinkers

My frame here is “what cognitive strategy is useful for the median person to find the right answers”. 

I think that people I’ve argued against here were focused more directly on “What are the right answers?” or “What should truthseekers with high standards and philosophical sophistication do?”. 

I expect there to be a significant difference between the median academic and the sort of person participating in this conversation.

I think the median academic is running on social cognition, which is very weak. Fixing that should be their top priority. I think fixing that is cognitively very different from “not being academically dishonest.” (Though this may depend somewhat on what sort of academic dishonesty we're talking about, and how prevalent it is.)

I think the people I’ve argued with probably disagree about that, and maybe think that ‘be aligned with the truth’ is a central cognitive strategy that is entangled across the board. This seems false to me for most people, although I can imagine changing my mind.

Arranging coordinated-efforts-that-work (i.e. Stag Hunts) is the most important thing; most other things are distracting and mostly not-the-point

Another central disagreement seemed to have something to do with “there are deontological or virtue-ethics norms you should be following here, about not lying, etc”. 

I think it is important to follow your society’s existing norms. But when it comes to trying to improve society’s status quo, virtues and rules are much less important than the virtue of “figure out how to actually coordinate on changing things, and then do that.” 

Related to the “cognitive process” point, I think people who get focused on following the exact virtues/rules mostly waste a lot of time on unimportant virtues/rules. The exceptions are when those virtues/rules happen to be particularly important, or bootstrap into stag hunts. But this requires moral luck.

My family cares a lot about recycling and buying local. A lot of the arguments I had heard about this post seemed more like the sort of cognitive algorithm that outputs ‘recycle and buy local’ than ‘Be Richard Feynman or Eliezer Yudkowsky’, when implemented on the average person. 

comment by Rohin Shah (rohinmshah) · 2021-01-13T18:37:31.382Z · LW(p) · GW(p)

To the extent that your summary of the "pro" case is accurate, particularly "Epistemics are the most important thing", I find it deeply ironic and sad that all of the commentary, besides one comment from Carl Shulman (and my own), seems to be about what people should do, rather than what is actually true. One would hope that people pushing "epistemics are the most important thing" would want to rely on true facts when pushing their argument.

Replies from: Raemon
comment by Raemon · 2021-01-13T18:44:40.358Z · LW(p) · GW(p)

There are a few more threads I ideally want to write here about what I think was going on. I'm not 100% sure whether I endorse your implied argument, but I think there was something to unravel in the space you're pointing at.

comment by Dagon · 2019-06-11T18:24:32.721Z · LW(p) · GW(p)

I can't upvote this strongly enough. It's the perfect followup to the discussion and analysis of Moloch and imperfect equilibria (and Moral Mazes); it goes straight to the heart of "what is altruism?" If you're not taking actions contrary to incentives, choosing to do something you value that "the system" doesn't, you're not making moral choices, only economic ones.

Replies from: Pattern
comment by Pattern · 2019-06-11T19:40:01.140Z · LW(p) · GW(p)

I wouldn't call them "economic" actions/decisions: how to do things at a concrete level is about what you want. The altruist may raise money for a charity, and the selfish may act in their own (view of) self-interest, say, to accumulate money or whatever they value. The difference isn't that the moral don't act economically; it's that they act economically with regards to something else.

Replies from: abramdemski, Dagon
comment by abramdemski · 2019-06-23T00:57:43.083Z · LW(p) · GW(p)

I see what you mean, but there's a tendency to think of 'homo economicus' as having perfectly selfish, non-altruistic values.

Also, quite aside from standard economics, I tend to think of economic decisions as maximizing profit. Technically, the rational agent model in economics allows arbitrary objectives. But, what kinds of market behavior should you really expect?

When analyzing celebrities, it makes sense to assume rationality with a fame-maximizing utility function, because the people who manage to become and remain celebrities will, one way or another, be acting like fame-maximizers. There's a huge selection effect. So Homo Hollywoodicus can probably be modeled well with a fame-maximizing assumption.

This has nothing to do with the psychology of stardom. People may have all kinds of motives for what they do -- whether they're seeking stardom consciously or just happen to engage in behavior which makes them a star.

Similarly, when modeling politics, it is reasonable to make a Homo Politicus assumption that people seek to gain and maintain power. The politicians whose behavior isn't in line with this assumption will never break into politics, or at best will be short-lived successes. This has nothing to do with the psychology of the politicians.

And again, evolutionary game theory treats reproductive success as utility, despite the many other goals which animals might have.

So, when analyzing market behavior, it makes some sense to treat money as the utility function. Those who aren't going for money will have much less influence on the behavior of the market overall. Profit motives aren't everything, but other motives will be less important than profit motives in market analysis.

comment by Dagon · 2019-06-11T20:59:08.209Z · LW(p) · GW(p)
I wouldn't call them "economic" actions/decisions - how to do things at a concrete level is about what you want.

I think of economic decisions in terms of visible/modeled tradeoffs including time- and uncertainty-discounted cost/benefit choices. Moral decisions are this, plus hard-to-model (illegible) values and preferences. I acknowledge that there's a lot of variance in how those words are used in different contexts, and I'm open to suggestions on what to use instead.

Replies from: Pattern
comment by Pattern · 2019-06-23T06:05:00.317Z · LW(p) · GW(p)

In the case you referenced, "selfish" or "short sighted", depending on what you were going for, seem to fit.

If you're not taking actions contrary to incentives, choosing to do something you value that "the system" doesn't, you're not making moral choices,

I very much agree with this part.

comment by Samuel Hapák (hleumas) · 2019-06-11T19:56:49.860Z · LW(p) · GW(p)

Very nice. Few notes:

1. Wrong incentives are no excuse for bad behaviour; people should rather quit their jobs than engage in it.

2. The world isn't black or white; sometimes there is a gray zone where you contribute enough to be net positive while cutting some corners to get your contribution accepted.

3. People tend to overestimate their contribution and underestimate the impact of their behaviour, so 2. is quite dangerous.

4. In an environment with sufficiently strong wrong incentives, the only result is that only those with weak morals survive. Natural selection.

5. There is a lot of truth in Taleb's position that research should not be a source of your income, but rather a hobby.

Replies from: Dagon, Viliam
comment by Dagon · 2019-06-11T22:09:34.511Z · LW(p) · GW(p)
5. There is a lot of truth in Taleb's position that research should not be a source of your income, but rather a hobby.

Is this specific to research? Given unaligned incentives and Goodheart, I think you could make an argument that _nothing important_ should be a source of income. All long-term values-oriented work should be undertaken as hobbies.

(Note - this is mostly a reductio argument. My actual opinion is that the split between hobby and income is itself part of the incorrect incentive structure, and there's no actual way to opt out. As such, you need to thread the needle of doing good while accepting some and rejecting other incentives.)

Replies from: John_Maxwell_IV, hleumas
comment by John_Maxwell (John_Maxwell_IV) · 2019-06-15T21:49:26.165Z · LW(p) · GW(p)

Is this specific to research? Given unaligned incentives and Goodheart, I think you could make an argument that nothing important should be a source of income. All long-term values-oriented work should be undertaken as hobbies.

This is an interesting argument for funding something like the EA Hotel over traditional EA orgs.

Replies from: Zvi
comment by Zvi · 2019-06-16T11:31:02.522Z · LW(p) · GW(p)

If the EA Hotel is easily confirmed as real, as in it is offering what it claims to offer, at a reasonable quality level, at the price it claims, then I am confused about why it has any trouble being funded. This is yet another good reason for that.

I understand at least one good reason why there aren't more such hotels - actually doing a concrete physical world thing is hard and no one does it.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-06-17T02:57:49.064Z · LW(p) · GW(p)

There's been a great deal of discussion of the EA Hotel on the EA Forum. Here's one relevant thread:

https://forum.effectivealtruism.org/posts/ek299LpWZvWuoNeeg/usd100-prize-to-best-argument-against-donating-to-the-ea [EA · GW]

Here's another:

https://forum.effectivealtruism.org/posts/JdqHvyy2Tjcj3nKoD/ea-hotel-with-free-accommodation-and-board-for-two-years#vLtqSuEh7ZPm7jwYT [EA(p) · GW(p)]

It's possible the hotel's funding troubles have more to do with weirdness aversion [EA(p) · GW(p)] than anything else.

I personally spent 6 months at the hotel, thought it was a great environment [EA · GW], and felt the time I spent there was pretty helpful for my career as an EA. The funding situation is not as dire as it was a little while ago. But I've donated thousands of dollars to the project and I encourage others to donate too.

comment by Samuel Hapák (hleumas) · 2019-06-12T22:27:35.093Z · LW(p) · GW(p)

Some important things can be a source of income, such as farming. Farming is pretty important and there are no huge issues with farmers doing it for profit.

Problems happen when there is a huge disconnect between the value and the reward. This happens in basic research a lot, because researchers don't have any direct customers.


Arguably, in basic research, you principally can't have any customers. Your customers are future researchers who will build on top of your research. They would be able to decide whether your work was valuable or whether it was crap, but you'd be pretty old or dead by that time.

comment by Viliam · 2019-06-13T21:48:50.962Z · LW(p) · GW(p)

As a synthesis of points 1 and 4: it is both the incentives and you. The incentives explain why the game is so bad, but you have to ask yourself why you still keep playing it.

A researcher with more personal integrity would avoid the temptation/pressure to do sloppy science... and perhaps lose the job as a result. The sloppy science itself would remain, only done by someone else.

Replies from: Benquo
comment by Benquo · 2019-06-14T03:55:35.752Z · LW(p) · GW(p)

In that case, might as well go into something better-paying.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-06-15T21:48:46.533Z · LW(p) · GW(p)

Well from a consequentialist perspective, if people with a stronger desire for scientific integrity self-select out of science, that makes science weaker in the long run.

I think a more realistic norm, which will likely create better outcomes, is for you personally to ensure that your work is at least in the top 40% for quality, and castigate anyone whose work is in the bottom 20%. Either of these practices should cause a gradual increase in quality if widely implemented (assuming these thresholds are tracked & updated as they change over time).

Replies from: hleumas
comment by Samuel Hapák (hleumas) · 2019-06-15T22:08:43.954Z · LW(p) · GW(p)

Is it necessarily so? Today, science means you spend a considerable portion of your time doing bullshit instead of actual research. Wouldn't you be in a much better position to do quality research if you're earning a good salary, saving a big portion of it, and doing science as a hobby?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-06-16T03:50:10.696Z · LW(p) · GW(p)

It's possible. That's what I myself am doing--supporting myself with a part-time job while I self-study and do independent FAI research.

However, it's harder to have credibility in the eyes of the public with this path. And for good reason--the public has no easy way to tell apart a crank from a lone genius, since it's hard to judge expertise in a domain unless you yourself are an expert in it. One could argue that academia acts as a reasonable approximation of eigendemocracy and thereby solves this problem.

Anyway, if the scientists with credibility are the ones who don't care about scientific integrity, that seems bad for public epistemology.

Replies from: Zvi
comment by Zvi · 2019-06-16T11:18:16.378Z · LW(p) · GW(p)

Note that Wei Dai also says that he chose to exit academia, as did many others on Less Wrong and in our social circles (combined with surprising non-entry).

If this is the model of what is going on, that quality and useful research is much easier without academia, but academia is how one gains credibility, then destroying the credibility of academia would be the logical useful action.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-06-17T03:54:10.847Z · LW(p) · GW(p)

quality and useful research is much easier without academia

I think you have to do a lot more to demonstrate this.

destroying the credibility of academia would be the logical useful action.

Did you read Scott Alexander's recent posts on cultural evolution?

If the credibility of academia is destroyed, it's not obvious something better will come along to fill that void. Why is it better to destroy than repair? Plus, if something new gets created, it will probably have its own set of flaws. The more pressure is put on your system (in terms of funding and status), the greater the incentive to game things, and the more the cracks will start to show.

I suggest instead of focusing on the destruction of a suboptimal means for ascertaining credibility, you focus on the creation of a superior means for ascertaining credibility. Let's phase academia out after it has been made obsolete, not before.

Replies from: hleumas
comment by Samuel Hapák (hleumas) · 2019-06-17T07:42:30.778Z · LW(p) · GW(p)

Academia in its current form isn’t Lindy. It’s not like we’ve been doing this for thousands of years. The current system of academia is at most 70 years old.

Replies from: habryka4
comment by habryka (habryka4) · 2019-06-17T22:05:19.613Z · LW(p) · GW(p)

The broader institutions around academia have been around since at least the Royal Society, which was founded in 1660. That's usually the age I would put the rough institutions surrounding academia.

Replies from: hleumas
comment by Samuel Hapák (hleumas) · 2019-06-19T17:36:37.749Z · LW(p) · GW(p)

The Royal Society in 1660 and current academia are very different beasts. For example, the current citations/journals game is a pretty new phenomenon. Peer-review wasn’t really a thing 100 years ago. Neither were complex grant applications.

Replies from: habryka4
comment by habryka (habryka4) · 2019-06-19T19:08:10.582Z · LW(p) · GW(p)

I thought peer-review had always been a core part of science in some form or another. I think you might be confusing external peer-review and editorial peer-review. As this Wikipedia article says:

The first record of an editorial pre-publication peer-review is from 1665 by Henry Oldenburg, the founding editor of Philosophical Transactions of the Royal Society at the Royal Society of London.[2][3][4]
The first peer-reviewed publication might have been the Medical Essays and Observations published by the Royal Society of Edinburgh in 1731. The present-day peer-review system evolved from this 18th-century process,[5] began to involve external reviewers in the mid-19th-century,[6] and did not become commonplace until the mid-20th-century.[7]
Peer review became a touchstone of the scientific method, but until the end of the 19th century was often performed directly by an editor-in-chief or editorial committee.[8][9][10] Editors of scientific journals at that time made publication decisions without seeking outside input, i.e. an external panel of reviewers, giving established authors latitude in their journalistic discretion. For example, Albert Einstein's four revolutionary Annus Mirabilis papers in the 1905 issue of Annalen der Physik were peer-reviewed by the journal's editor-in-chief, Max Planck, and its co-editor, Wilhelm Wien, both future Nobel prize winners and together experts on the topics of these papers. On another occasion, Einstein was severely critical of the external review process, saying that he had not authorized the editor in chief to show his manuscript "to specialists before it is printed", and informing him that he would "publish the paper elsewhere".[11]

It's true that external peer-review is recent, which I do think is a significant shift. But I would still think that the broader institution of peer-review is basically as old as science.

Replies from: hleumas
comment by Samuel Hapák (hleumas) · 2019-06-19T19:50:27.664Z · LW(p) · GW(p)

It's a huge difference whether the reviewer is some anonymous person unrelated to the journal or whether it's an editor in chief of the journal itself. I don't think it's appropriate to call the latter peer-review (there are no "peers" involved), but that's not important.

The editor in chief has a strong motivation to have a good quality journal. If he rejects a good article, it's his loss. In contrast, an anonymous peer has a stronger motivation to use the review as an opportunity to promote (get cited for) his own research than to help the journal curate the best science.

Let me try to rephrase the shift I see in science. Over the 20th century, science became bureaucratised; the process of "doing science" was largely formalised and standardised. Researchers obsess over impact factors, p-values, h-indexes, anonymous peer reviews, grants, currents...

There are actual rules in place that formally determine whether you are a "good" scientist. That wasn't the case over most of the history of science.

Also the "full-time" scientist who never did any other job than academy research was much less common in the past. Take Einstein as an example.

Replies from: habryka4
comment by habryka (habryka4) · 2019-06-19T20:05:52.727Z · LW(p) · GW(p)

Oh, I think we both definitely agree that science has changed a lot. I do also think that it still very clearly has maintained a lot of its structure from its very early days, and to bring things back to John's top level point, it is less obvious that that structure would redevelop if we were to give up completely on academia or something like that.

comment by Rohin Shah (rohinmshah) · 2019-06-22T19:04:27.235Z · LW(p) · GW(p)

I disagree with most of the post and most of the comments here. I think most academics are not explicitly committing fraud, but bad science results anyway. I also think that for the vast majority of (non-tenured) academics, if you don't follow the incentives, you don't make it in academia. If you intervened on ~100 entering PhD students and made them committed to always not following the incentives where they are bad, I predict that < 10% of them will become professors -- maybe an expected 2 of them would. So you can't say "why don't the academics just not follow the incentives"; any such person wouldn't have made it into academia. I think the appropriate worlds to consider are: science as it exists now with academics following incentives or ~no academia at all.

It is probably correct that each individual instance of having to deal with bad incentives doesn't make that much of a difference, but there are many such instances. Probably there's an 80-20 thing to do here where you get 80% of the benefit by not following the worst 20% of bad incentives, but it's actually quite hard to identify these, and it requires you to be able to predict the consequences of not following the bad incentives, which is really hard to do. (I don't think I could do it, and I've been in a PhD program for 5 years now.)

To be clear: if you know that someone explicitly and intentionally committed fraud for personal gain with the knowledge that it would result in bad science, that seems fine to punish. But this is rare, and it's easy to mistake well-intentioned mistakes for intentional fraud.


Replies from: CarlShulman, jessica.liu.taylor, Benquo
comment by CarlShulman · 2019-07-25T06:02:25.700Z · LW(p) · GW(p)

Survey and other data indicate that in these fields most people were doing p-hacking/QRPs (running tests selected ex post, optional stopping, reporting and publication bias, etc), but a substantial minority weren't, with individual, subfield, and field variation. Some people produced ~100% bogus work while others were ~0%. So it was possible to have a career without the bad practices Yarkoni criticizes, aggregating across many practices to look at overall reproducibility of research.

And he is now talking about people who have been informed about the severe effects of the QRPs (that they result in largely bogus research at large cost to science compared to reproducible alternatives that many of their colleagues are now using and working to reward) but choose to continue the bad practices. That group is also disproportionately tenured, so it's not a question of not getting a place in academia now, but of giving up on false claims they built their reputation around and reduced grants and speaking fees.

I think the core issue is that even though the QRPs that lead to mostly bogus research in fields such as social psych and neuroimaging often started off without intentional bad conduct, their bad effects have now become public knowledge, and Yarkoni is right to call out those people on continuing them and defending continuing them.


Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-07-25T18:19:42.644Z · LW(p) · GW(p)
Some people produced ~100% bogus work while others were ~0%. So it was possible to have a career without the bad practices Yarkoni criticizes, aggregating across many practices to look at overall reproducibility of research.

I'm curious how many were able to hit 0%? Based on my 10x estimate [LW(p) · GW(p)] below I'd estimate 9%, but that was definitely a number I pulled out of nowhere.

That group is also disproportionately tenured, so it's not a question of not getting a place in academia now, but of giving up on false claims they built their reputation around and reduced grants and speaking fees.

I personally feel the most pressure to publish because the undergrads I work with need a paper to get into grad school. I wonder if it's similar for tenured professors with their grad students.

Also, the article seems to be condemning academics who are not tenured, e.g.

“I would publish in open access journals,” your friendly neighborhood scientist will say. “But those have a lower impact factor, and I’m up for tenure in three years.”

I think the core issue is that even though the QRPs that lead to mostly bogus research in fields such as social psych and neuroimaging often started off without intentional bad conduct, their bad effects have now become public knowledge, and Yarkoni is right to call out those people on continuing them and defending continuing them.

Thought experiment (that I acknowledge is not reality): Suppose that it were actually the case that in order to stay in academia you had to engage in QRPs. Do you still think it is right to call out / punish such people? It seems like this ends up with you either always punishing everyone in academia, with no gain to the research that actually gets published, or abolishing academia outright.


comment by jessicata (jessica.liu.taylor) · 2019-06-23T03:53:14.342Z · LW(p) · GW(p)

Isn't "academics who don't follow bad incentives almost never become professors" blatantly incompatible with "these are well-intentioned mistakes"?

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-06-23T16:23:54.801Z · LW(p) · GW(p)

The former is a statement about outcomes while the latter is a statement about intentions.

My model for how most academics end up following bad incentives is that they pick up the incentivized bad behaviors via imitation. Anyone who doesn't do this ends up doing poorly and won't make it in academia (and in any case such people are rare, imitation is the norm for humans in general). As part of imitation, people come up with explanations for why the behavior is necessary and good for them to do. (And this is also usually the right thing to do; if you are imitating a good behavior, it makes sense to figure out why it is good, so that you can use that underlying explanation to reason about what other behaviors are good.)

I think that I personally am engaging in bad behaviors because I incorrectly expect that they are necessary for some goal (e.g. publishing papers to build academic credibility). I just can't tell which ones really are necessary and which ones aren't.

Replies from: Benito
comment by Ben Pace (Benito) · 2019-06-23T17:29:57.744Z · LW(p) · GW(p)

This seems related to the ideas in this post on unconscious economies [LW · GW].

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-06-23T21:44:29.818Z · LW(p) · GW(p)

Agreed that it's related, and I do think it's part of the explanation.

I will go even further: while in that post the selection happens at the level of properties of individuals who participate in some culture, I'm claiming that the selection happens at the higher level of norms of behavior in the culture, because most people are imitating the rest of the culture.

This requires even fewer misaligned individuals. Under the model where you select on individuals, you would still need a fairly large number of people to have the property of interest -- if only 1% of salesmen had the personality traits leading to them being scammy and the other 99% were usually honest about the product, the scammy salesmen probably wouldn't be able to capture all of the sales jobs. However, if most people imitate, then those 1% of salesmen will slowly push the norms towards being more scammy over generations, and you'd end up in the equilibrium where nearly every salesman is scammy.

Come to think of it, I think I would estimate that ~1% of academics are explicitly thinking about how to further their own career at the cost of science (in ways that are different from imitation).

comment by Benquo · 2019-06-23T02:02:25.571Z · LW(p) · GW(p)
If you intervened on ~100 entering PhD students and made them committed to always not following the incentives where they are bad, I predict that < 10% of them will become professors -- maybe an expected 2 of them would.

And how many if you didn't intervene?

So you can't say "why don't the academics just not follow the incentives"; any such person wouldn't have made it into academia.

How do you reconcile this with the immediately prior sentence?

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2019-06-23T16:08:21.564Z · LW(p) · GW(p)
And how many if you didn't intervene?

Significantly more, maybe 20. To do a proper estimate I'd need to know which field we're considering, what the base rates are, etc. The thing I should have said was that I expect it makes it ~10x less likely that you become a professor; that seems more robust to the choice of field and isn't conditional on base rates that I don't know.

The Internet suggests a base rate of 3-5%, which means without intervention 3-5 of them would become professors; if that's true I would say that with intervention an expected 0.4 of them would become professors.
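
Spelled out as arithmetic (just a back-of-the-envelope restatement of the numbers above, assuming the 3-5% base rate and the ~10x per-person reduction):

```python
# Back-of-the-envelope check of the numbers above (assumes a 3-5% base
# rate and that the intervention makes each student ~10x less likely
# to become a professor).
n_students = 100
for base_rate in (0.03, 0.05):
    expected_without = n_students * base_rate   # no intervention
    expected_with = expected_without / 10       # ~10x less likely per person
    print(f"base rate {base_rate:.0%}: "
          f"{expected_without:.1f} without intervention, "
          f"{expected_with:.1f} with intervention")
# base rate 3%: 3.0 without intervention, 0.3 with intervention
# base rate 5%: 5.0 without intervention, 0.5 with intervention
```

which is consistent with the "expected 0.4" figure above.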

How do you reconcile this with the immediately prior sentence?

I didn't mean that it was literally impossible for a person who doesn't follow the incentives to get into academia, I meant that it was much less likely. I do in fact know people in academia who I think are reasonably good at not following bad incentives.

comment by hamnox · 2020-12-26T03:47:00.365Z · LW(p) · GW(p)

I did not follow the Moral Mazes discussion as it unfolded. I came across this article contextless, so I don't know that it adds much to LessWrong. If that context is relevant, it should get a summary before diving in. From my perspective, its inclusion in the list was a jump sideways.

It's written engagingly. I feel Yarkoni's anger. Frustration bleeds off the page, and he has clearly gotten on a roll. Not performing moral outrage, just *properly, thoroughly livid* that so much has gone wrong in the science world.

We might need that.

What he wrote does not only apply to scientists. It is *written* as though scientists are a unique concern. He makes strong claims about the way things are in academia very casually, as if tacitly assuming anyone with eyes would have seen enough to corroborate them for themselves. (Whether they would agree, I don't know. For followup work I would like to see more concrete estimates and hard data about the extent and impact of these issues.) He writes this for his peers, I think; it almost feels intimate, like I ought to feel embarrassed for eavesdropping.

But as I said, scientists don't seem like a special case. The loss of integrity in science is what he knows, but I suspect it's a much wider trend and he's rather Gelman-amnesia'd about the extent to which it exists in other fields. From a quick perusal of the Moral Mazes tag... it sure seems like it. In Anna Salamon's question post about institutions, the dissolution of personal integrity is up there in the list of possible causes for modern institutional decay. He writes "if every other community also developed the same attitude, we would be in a world of trouble", but I don't think he should be so confident that they *didn't*.

This article is a kick in the pants. I don't tend to like that approach because I have empathy for people who are afraid. It is exactly when they are making decisions from fear that they don't support their decisions with data or reason, as Yarkoni complains about. I continue to believe that you can't really fix fear with punishment and shaming. But I'm coming around to the idea that the wake-up call needs to come before the support.

As off-audience as this post seems... the message is timely and on brand.

We are not slave to incentives. Constraining your goals to be safely within the bounds of ease and social agreeableness is NOT HOW YOU ACTUALLY WIN, and we are playing to win. Shut up and multiply.

comment by Matt Goldenberg (mr-hire) · 2020-12-10T01:20:39.530Z · LW(p) · GW(p)

This post substantially updated my thinking about personal responsibility. While I totally disagree with the one-sided framing of the post, it made me see that the "personal responsibility" vs. "incentives" disagreement wasn't really about beliefs at all, but was in fact about framing.

I think it articulates the "personal responsibility" frame particularly well, and helps one see how choosing "individuals" as the level of abstraction naturally leads to a personal-responsibility framing.

Replies from: Zack_M_Davis, Raemon
comment by Zack_M_Davis · 2020-12-10T01:43:52.734Z · LW(p) · GW(p)

Um, this is a linkpost. Can you nominate a linkpost to something by a non-Less Wrong-affiliated author? Certainly the comment thread is worth pointing to as evidence of "things we learned in 2019", but I don't think the post should be eligible for the voting round?

Replies from: Benito
comment by Ben Pace (Benito) · 2020-12-10T01:55:15.365Z · LW(p) · GW(p)

I’m not sure. 

My current guess is:

  • If this post was very helpful for lots of LWers, that’s valuable info to know (e.g. if it scored highly in the voting round).
  • I like the post quite a bit. Also, looking at the author’s blog, they seem pretty cool e.g. they write fiction about GPT-3 :)
  • If it scores well in the review, I would be open to reaching out to the author and asking if they wanted to be fully crossposted and published in our annual essay collection.
  • I think it could be somewhat confusing if it were included in the book, as though it were an opinion of a LessWronger, even though it wasn’t written by a LessWrong user.

I was considering just nominating the comment section, as I see that Ray has done.

We’re still obviously figuring it all out, and I want to take it on a case-by-case basis. But I tentatively lean ‘yes’.

comment by Raemon · 2020-12-10T01:42:03.351Z · LW(p) · GW(p)

I was going to nominate this post for similar reasons, and then realized it's almost entirely a third-party linkpost. It feels like an important part of my worldview but I wasn't sure how to think about it in the context of the review.

I suppose maybe it's good to have an Official Overton Window Fight about the post, without necessarily having that fight output an essay in the Best Of 2019 Book?

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-10T02:30:24.262Z · LW(p) · GW(p)

Yeah, I think that if part of the review is "what should be LW common knowledge," we can vote on that separately from "and should therefore be included in a book."

Replies from: Benito
comment by Ben Pace (Benito) · 2020-12-10T02:40:02.635Z · LW(p) · GW(p)

Note that a lot more people will buy the book set than will look at the vote page. In some ways the book is the common-knowledge building mechanism.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2020-12-10T02:46:26.080Z · LW(p) · GW(p)

Hmm, I guess I was imagining something like a "New LW Sequences" section featured prominently on the front page.

comment by Matt Goldenberg (mr-hire) · 2021-01-12T13:18:51.083Z · LW(p) · GW(p)

In general, I think this post does a great job of articulating a single, incomplete frame. Others in the review take umbrage at the moralizing tone, but I think the moralizing tone is actually quite useful for giving an inside view of this frame.

I believe this frame is incomplete, but it gives an important perspective that is often ignored in the LessWrong/Gray tribe.

Replies from: rohinmshah
comment by Rohin Shah (rohinmshah) · 2021-01-13T18:27:40.605Z · LW(p) · GW(p)

I primarily take umbrage at the fact that the post makes claims I think are false without providing any evidence for them. I brought up the moralizing tone as an explanation for why it was popular despite making (what I think are) false claims with no evidence.

Replies from: mr-hire
comment by Matt Goldenberg (mr-hire) · 2021-01-14T01:11:16.047Z · LW(p) · GW(p)

I don't think that evidence is needed to articulate the frame I'm talking about; it's much more a way of interpreting the situation.

comment by Raemon · 2020-12-10T01:46:45.779Z · LW(p) · GW(p)

While I don't think this post is actually eligible for the Best of LW 2019 book (since it's written offsite and is only a linkpost here), I think it's reasonable to nominate the comments here for some kind of "what do we collectively feel about this 1.5 years later?" discussion.

comment by Raemon · 2019-06-13T21:58:22.114Z · LW(p) · GW(p)

Definitely think this is an important point in the conversation.

I think my take is something like: "the incentives are the problem" is a useful frame for looking at systems and (often but not always) other people, but it should throw up a red flag when you use it as an excuse for your own behavior.

I'm not sure I endorse this post precisely as written, because "take ownership of your behavior" is a cause that will be Out To Get You [LW · GW] for everything you've got (while leaving you vulnerable to Asymmetric Justice in the meantime). There are lots of things you [generic you] probably do or are complicit in that have a bad effect on other people. If the norm in academia is to use bad statistics, or fake data (I don't know whether it is or not, or how common it is), and I'm an academic, should I be more worried about avoiding that norm, or avoiding eating meat (which personally seems worse to me), or some third thing I do that causes harm?

The things that actually seem workable to me are:

  • Try to be in the top 50% of the population at morality
  • In general, practice the muscle of defying social norms that push you to do wrong things, and pick at least some areas where you commit to being significantly better than the status quo.
  • "don't play the game" may not always be an option, but "try to create situations where you *actually* create opportunities stag hunts [LW · GW] to succeed." i.e. in this case, maybe try to at least make it true that at your department or university or conference, it is unacceptable to fake or exaggerate data. This is a longterm battle that won't work if you halfass it.

Replies from: Zvi, Zvi, Raemon, Benquo, Raemon, pktechgirl, Benquo
comment by Zvi · 2019-06-14T12:49:32.574Z · LW(p) · GW(p)

If you're an academic and you're using fake data or misleading statistics, you are doing harm rather than good in your academic career. You are defrauding the public, you are making our academic norms be about fraud, you are destroying both public trust in academia in particular and knowledge in general, and you are creating justified reasons for this destruction of trust. You are being incredibly destructive to the central norms of how we figure things out about the world - which is how we figure out, among many other things, whether or not it is bad to eat meat, or how we should uphold moral standards.

And you're doing it in order to extract resources from the public, and grab your share of the pie.

I would not only rather you eat meat. I would rather you literally go around robbing banks at gunpoint to pay your rent.

If one really, really did think that personally eating meat was worse than committing academic fraud - which boggles my mind, but supposing that - what the hell are you doing in academia in the first place, and why haven't you quit yet? Unless your goal now is to use academic fraud to prevent people from eating meat, which I'd hope is something you wouldn't endorse, and not what 99%+ of these people are doing. As the author of OP points out, if you can make it in academia, you can make more money outside of it, and have plenty of cash left over for salads and for subsidizing other people's salads, if that's what you think life is about.


Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2019-06-15T21:25:03.212Z · LW(p) · GW(p)

fake data or misleading statistics

You shouldn't put these in the same category. Fake data is a much graver sin than failing to correct for multiple comparisons or running a study with a small sample size. For the latter two, anyone who reads your paper can see what you did (assuming you mention all the comparisons you made) and discount your conclusions accordingly. For a savvy reader or meta-analysis author, a paper which commits these sins can still improve their overall picture of the literature, especially if they employ tools to detect/correct for publication bias. It's not obvious to me that a scientist who employs these practices is doing harm with their academic career, especially given that readers are getting more and more savvy nowadays.

I don't think "fraud" is the right word for these statistical practices. Cherry-picking examples that support your point, the way an opinion columnist does, is probably a more fraudulent practice.

Replies from: Zvi
comment by Zvi · 2019-06-16T10:32:57.541Z · LW(p) · GW(p)

It's fair to say that fake data is a Boolean and a Rubicon, where once you do it once, at all, all is lost. Whereas there are varying degrees of misleading statistics versus clarifying statistics, and how one draws conclusions from those statistics, and one can engage in some amount of misleading without dooming the whole enterprise, so long as (as you note) the author is explicit and clear about what the data was and what tests were applied, so anyone reading can figure out what was actually found.

However, I think it's not that hard for it to pass a threshold where it's clearly fraud, although still a less harmful/dangerous fraud than fake data, if you accept that an opinion columnist cherry-picking examples is fraud (e.g. for it to be more fraudulent than that, especially if the opinion columnist isn't assumed to be claiming that the examples are representative). And I like that example more the more I think about it, because that's an example of where I expect to be softly defrauded, in the sense that I assume the examples and arguments are soldiers chosen to make a point slash sell papers, rather than an attempt to create common knowledge and seek truth. If scientific papers are in the same reference class as that...

comment by Zvi · 2019-06-14T12:55:44.508Z · LW(p) · GW(p)

I am very surprised that you still endorse this comment on reflection, but given that you do, it's not unreasonable to ask: Given that most people lie a lot, and you think personally not eating meat is more important than not lying, your track record of actually not eating meat, and your claim that it's reasonable to be a 51st-percentile moral person, why should we then trust your statements to be truthful? Let alone in good faith. I mean, I don't expect you to lie because I know you, but if you actually believed the above for real, wouldn't my expectation be foolish?

I'm trying to square your above statement and make it make sense for you to have said it and I just... can't?

Replies from: Raemon
comment by Raemon · 2019-06-14T21:37:16.274Z · LW(p) · GW(p)

I think "you are a bad person" is a very powerful and dangerous tool to use on yourself or others. I think there are a lot of ways to deeply fuck yourself up with it.

Similarly, moral obligation is a very powerful and dangerous concept.

I think it is (sort of) reasonably safe to use with "if you are in the bottom 50% of humanity*, you are morally obligated to work on that, and if you aren't at least working on it, you are a bad person."

Aspiring towards being a truly *good* person is a lot of effort. It requires time to think a lot about your principles, it requires slack to dedicate towards both executing them and standing up to various peer pressures, etc. It is enough effort, and I think most people have enough on their plate, that I don't consider it morally obligatory.

I aspire to be a truly good person, and I in fact try to create a fenced-in-bubble, which requires you to be aspiring towards some manner of goodness in order to gain many of the benefits I contribute to the semi-public commons. I think this is a pretty good strategy, to avoid the dangers of moral obligation and "bad person" mindset, while capturing the benefits of high percentile goodness.

*possibly "if you're in the bottom 50% of your reference class", where reference class is somewhat vague.

Replies from: Benquo, Dagon
comment by Benquo · 2019-06-16T06:35:12.946Z · LW(p) · GW(p)

You're the one bringing up the question of whether someone's a bad person.

Replies from: Zvi, Raemon
comment by Zvi · 2019-06-16T11:35:53.567Z · LW(p) · GW(p)

True. But I do think we've run enough experiments on 'don't say anyone is a bad person, only point out bad actions and bad logic and false beliefs' to know that people by default read that as claims about who is bad, and we need better tech for what to do about this.

Replies from: Dagon
comment by Dagon · 2019-06-16T14:55:00.608Z · LW(p) · GW(p)

As long as we understand that "bad person" is shorthand for "past and likely near-future behaviors are interfering with group goals", it's a reasonable judgement to make. And it's certainly useful to call out people you'd like to eject from the group, or to reduce in status, or to force a behavioral change on.

I don't object to calling someone a bad person, I only object to believing that such a thing is real.

Replies from: Zvi
comment by Zvi · 2019-06-16T15:06:25.887Z · LW(p) · GW(p)

The thing is, I don't think that shorthand (along with similar things like "You're an idiot") ever stays understood outside of very carefully maintained systems of people working closely together in super high trust situations, even if it starts out understood.

Replies from: Dagon
comment by Dagon · 2019-06-16T23:22:45.852Z · LW(p) · GW(p)

I'd agree. Outside of closely-knit, high-trust situations, I don't think it's achievable to have that subtlety of conceptual communication. You can remind (some) people, and you can use more precise terminology where the distinction is both important and likely to succeed. In other cases, maintaining your internal standards of epistemic hygiene is valuable, even when playing status games you don't like very much.

comment by Raemon · 2019-06-17T00:10:43.914Z · LW(p) · GW(p)

I think two different things are going on here:

1. The OP read as directly moralizing to me. I do realize it doesn't necessarily spell it out directly, but moralizing language rarely does. I don't know the author of the OP. There are individuals I trust on LW to be able to have this sort of conversation without waging subtle-or-unsubtle wars over who is a bad person, but they are rare. I definitely don't assume that for random people on the internet.

2. My "Be in the top 50% morally" statement was specifically meant to be in the context of the full Scott Alexander post, which is explicitly about (among other things) people being worried about being a good person.

And, yes, I brought the second point up (and I did bring it up in an offhand way without doing much to establish the context, which was sloppy. I do apologize for that).

But after providing the link, it seemed like people were still criticizing that point. And... I'm not sure I have a good handle on how this played out. But my impression is something like: you and maybe a couple others were criticizing the 50% comment as if it were part of a different context, whereas if you read the original post it's pretty clearly applying to the "when should you consider yourself a good/bad or blameworthy/praiseworthy person?" context. So the things that seem (to me) to make sense to criticize are either the entire frame of the post (rather than the specific rule about "be in the top 50%") or the 50% rule in its original context. And it didn't seem like that's what was happening.


comment by Dagon · 2019-06-15T03:56:09.459Z · LW(p) · GW(p)

The gradients between horrific, forbidden, disallowed, discouraged, acceptable, preferable, commendable, heroic seem like something that should be discussed here. I suspect you're mixing a few different kinds of judgement of self, judgement of others, and perceived judgement by others. I don't find them to be the same thing or the same dimensions of judgement, but there's definitely some overlap.

I reject "goodness" as an attribute of a person - it does not fit my intuitions nor reasoned beliefs. There are behaviors and attitudes which are better or worse (sometimes by large amounts), but these are contingent rather than identifying. There _are_ bad people, who consistently show harmful behavior and no sign of changing throughout their lives. There are a LOT of morally mediocre people who have a mix of good and bad behavior, often more context-driven than choice-driven. I don't think I can distinguish among them, so I tend to assume that almost everyone is mediocre. Note that I can decide that someone is unpleasant or harmful TO ME, and avoid them, without having to condemn them as a bad person.

So, I don't aspire to be a truly good person, as I don't think that's a thing. I aspire to do good things and make choices for the commons, which I partake of. I'm not perfect at it, but I reject judgement on any absolute scale, so I don't think there's a line I'm trying to find where I'm "good enough", just fumbling my way around what I'm able/willing to do.


comment by Raemon · 2019-06-14T21:07:27.234Z · LW(p) · GW(p)

Note: I may not be able to weigh in on this more until Sunday.

Clarifying some things all at once, since a few people have brought up related points. I'm probably not going to get to address the "which is worse – lying or eating meat" issue until Sunday (in the meantime, to be clear, I think "don't lie" is indeed one of the single most important norms to coordinate on, and to create from scratch if you don't have such a norm, regardless of whether there are other things that are as or more important).

A key clause in the above comment was:

If the norm in academia is to use bad statistics, or fake data (I don't know whether it is or not, or how common it is)

In a world where the norm in academia is to not use bad statistics, or not to fake data, then absolutely the correct thing is to uphold that norm.

In a world where the norm is explicitly to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I don't actually know the state of academia enough to know which world we're in (I suspect different pockets of academia are different), and the sentence was intended to be useful in either world.

In the world where everyone in academia is literally faking data all the time, yes, absolutely, I think it is good to execute a broader strategy that can actually change things. I don't think it's morally obligatory, for the same reason I think it's not morally obligatory to pour all your resources into third world poverty or x-risk (despite the latter being sort of literally the most important thing). But it is probably up there in the top 5 things that are worth doing and dedicating your life to.

(it might be that you dedicate your life to fixing academia, or you leave academia, depending on how bad things are and what seems tractable)

Now, in the rationalsphere we credibly do have higher-than-average honesty norms in the background, and there's clear meta-level agreement that there's some kind of stag hunt here that we're aiming for. But we haven't gotten agreement on exactly which stag hunt we're running. And most of the people I respect quite a lot (including Zvi, Benquo, Jessica Taylor, Habryka, and Duncan) periodically make moves that look to me like obvious defecting in what I'd implicitly assumed the obvious social norms were (and it seems like I make moves that look like defecting to them).

And the big topic on my mind this past couple years is figuring out how to get onto the right level of meta-alignment, where we can collectively be working on something very hard, but which a) requires strategy to get right, b) has a lot of ways to go wrong, c) I expect everyone to continuously have deep, diverging models about.

Is being good costly?

Catherio notes that if things are free, you should... just actually be a good person. I agree with this, but I think there are a number of reasons why being good isn't free. This is a sort of complex domain that I think requires a few things to be acknowledged at once:

1. Willpower is real

2. Also, willpower apparently isn't real?

3. Social pressure is real

4. Resisting social pressure requires either changing your local environment or moving to a new environment or burning willpower

5. Figuring out what battles are worth fighting, and how to fight them, is cognitively hard, and if nothing else requires allotment of time

My current read on the contradictory "do people get willpower depleted" literature is that... people totally do get willpower depleted. But, also, people who hold a stance that they have infinite willpower have an easier time managing willpower. This is a quite weird intersection of epistemic and instrumental rationality that I'm not 100% sure how to think about.

If someone is in a moral-maze-esque corporation, "being locally, naively good" doesn't seem like a strategy that goes anywhere useful. It'll get subtly and unsubtly punished by people around you. It is not free – not only do you miss out on some benefits, you will probably actively get hurt.

Replies from: Zvi, Zack_M_Davis
comment by Zvi · 2019-06-14T22:15:43.034Z · LW(p) · GW(p)
In a world where the norm is explicitly to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I think this claim is a hugely important error.

One scientist unilaterally deciding to stop faking data isn't going to magically make the whole world come around. But the idea that it doesn't help? That failing to do so, and not only being complicit in others faking data but also faking data, doesn't make it worse?

I don't understand how one can think that.

That's not unique to the example of faking data. That's true of anything (at least partially) observable that you'd like to change.

One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.

But don't pretend it doesn't matter.

Similarly, I find it odd that one uses the idea that 'doing the right thing is not free' as what seems to be a justification for not doing the right thing. Yes, obviously when the right thing is free for you versus the not-right thing you should do the right thing. And of course being good is costly! Optimizing for anything is costly if you're not counting the thing itself as a benefit.

But the whole point of some things being right is that you do them even though it's not free, because It's Not The Incentives, It's You. You're making a choice.

Ideally we'd design a system where one not only cultivated the virtue of doing the right thing and was rewarded for doing that, but was also rewarded in expectation for doing the right thing as often as possible. Doing the right thing is, in fact, a prime way of moving towards that.

Again, sometimes the cost of doing the otherwise 'right thing' gets too high. Especially if you can't coordinate on it. There are trade-offs. One can't do every good thing or never compromise.

But if there is one takeaway from Moral Mazes that everyone should have, it's a really, really simple one:

Being in a moral maze is not worth it. They couldn't pay you enough, and even if they could, they definitely don't. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.

If academia has become a moral maze, the same applies, except that the money was never good to begin with.

Replies from: dxu, Wei_Dai, Raemon
comment by dxu · 2019-06-15T00:35:33.012Z · LW(p) · GW(p)
One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.
But don't pretend it doesn't matter.

This reads as enormously uncharitable to Raemon, and I don't actually know where you're getting it from. As far as I can tell, not a single person in this conversation has made the claim that it "doesn't matter"--and for good reason: such a claim would be ridiculous. That you seem willing to accuse someone else in the conversation of making such a claim (or "pretending" it, which is just as bad) doesn't say good things about the level of conversation.

What has been claimed is that "doing the thing that reinforces good norms" is ineffective, i.e. it doesn't actually reinforce the good norms. The claim is that without a coordinated effort, changes in behavior on an individual level have almost no effect on the behavior of the field as a whole. If this claim is true (and even if it's false, it's not obviously false), then there's no point hoping to see knock-on effects from such a change--and that in turn means all that's left is the cost-benefit calculation: is the amount of good that I would do by publishing a paper with non-fabricated data (even if I did, how would people know to pay attention to my paper and not all the other papers out there that totally did use fabricated data?), worth the time/effort/willpower it would take me to do so?

As you say: it is indeed a trade-off. Now, you might argue (perhaps rightly so!) that one individual's personal time/effort/willpower is nowhere near as important as the effects of their decision whether to fabricate data. That they ought to be willing to expend their own blood, sweat, and tears to Do The Right Thing--at least, if they consider themselves a moral person. And in fact, you made just such an argument in your comment:

Similarly, I find it odd that one uses the idea that 'doing the right thing is not free' as what seems to be a justification for not doing the right thing. Yes, obviously when the right thing is free for you versus the not-right thing you should do the right thing. And of course being good is costly! Optimizing for anything is costly if you're not counting the thing itself as a benefit.
But the whole point of some things being right is that you do them even though it's not free, because It's Not The Incentives, It's You. You're making a choice.

But this ignores the fact that every decision has an opportunity cost: if I spend vast amounts of time and effort designing and conducting a rigorous study, pre-registering my plan, controlling for all possible confounders (and then possibly getting a negative result and needing to go back to the drawing board, all while my colleague Joe Schmoe across the hall fabricates his way into Nature), this will naturally make me more tired than I would be otherwise. Perhaps it will cause me to have less patience than I normally do, become more easily frustrated at events outside of my control, be less willing to tolerate inconveniences in other areas of my life, etc. If, for example, I believed eating meat was morally wrong, I might nonetheless find it more difficult to deliberately deprive myself of meat if I were already spending a great deal of willpower every day on seeing this study through. And if I expect that to be the case, then I have to ask myself which thing I ought to prioritize: not eating meat, or doing the study properly?

This is the (somewhat derisively named) "goodness budget" Benquo mentioned upthread. But another name for it might be Moral Slack. It's the limited amount of room we have to be less than maximally good in our lives, without being socially punished for it. It's the privilege we're granted, to not have to constantly ask ourselves "Should I be doing this? Am I being a bad person for doing this?" It's--look, you wrote half the posts I just linked to. You know the concept. I don't know why you're not applying it here, but it seems pretty obvious to me that it applies just as well here as it does in any other aspect of life.

To be clear: you know that falsifying data is a Very Bad Thing. I know that falsifying data is a Very Bad Thing. Raemon knows that falsifying data is a Very Bad Thing. We all know that falsifying data is bad. But if that's the way the incentives point (and that's a very important if!), then it's also bad to call people out for doing it. If you do that, then you're using moral indignation as a weapon--a way to not only coerce other people into using up their willpower, but to come out of it looking good yourself.

People who manage to resist the incentives--who ignore the various siren calls they constantly hear--are worthy of extremely high praise. They are exceptionally good people--by definition, in fact, because if they weren't exceptional, everyone else would be doing it, too. By all means, praise those people as much as you want. But that doesn't mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn't actually a very good way to get them to change. "It's Not The Incentives, It's You" puts the emphasis in the wrong place, and it degrades communication with people who might have been reachable with a more nuanced take.

Replies from: Zvi, Zvi
comment by Zvi · 2019-06-15T13:22:15.556Z · LW(p) · GW(p)
We all know that falsifying data is bad. But if that's the way the incentives point (and that's a very important if!), then it's also bad to call people out for doing it.

No. No. Big No. A thousand times no.

(We all agree with that first sentence, everyone here knows these things are bad, that's just quoted for context. Also note that everyone agrees that those incentives are bad and efficient action to change them would be a good idea.)

I believe the above quote is a hugely important crux. Likely it, or something upstream of it, is the crux. Thank you for being explicit here. I'm happy to know that this is not a straw-man, that this is not going to get the Motte and Bailey treatment.

I'm still worried that such treatment will mostly occur...

There is a position, which seems to be increasingly held and openly advocated for, that if someone does something according to their local, personal, short-term amoral incentives, this is, if not automatically praiseworthy (although I believe I have frequently seen this too, increasingly explicitly, but not here or by anyone in this discussion), at least immune from being blameworthy, no matter the magnitude of that incentive. One cannot 'call them out' on such action, even if such calling out has no tangible consequences.

I'm too boggled, and too confused about how one gets there in good faith, to figure out how to usefully argue against such positions in a way that might convince people who sincerely disagree. So instead, I'm simply going to ask: are there any others here who would endorse the quoted statement as written? Are there people who endorse the position in the above paragraph, as written? With or without an explanation as to why. Either, or both. If so, please confirm this.


Replies from: catherio, Bucky, clone of saturn, Pattern
comment by catherio · 2019-06-19T23:41:34.186Z · LW(p) · GW(p)

Here's another further-afield steelman, inspired by blameless postmortem culture.

When debriefing / investigating a bad outcome, it's better for participants to expect not to be labeled as "bad people" (implicitly or explicitly) as a result of coming forward with information about choices they made that contributed to the failure.

More social pressure against admitting publicly that one is contributing poorly contributes to systematic hiding/obfuscation of information about why people are making those choices (e.g. incentives). And we need all that information to be out in the clear (or at least available to investigators who are committed & empowered to solve the systemic issues), if we are going to have any chance of making lasting changes.


In general, I'm curious what Zvi and Ben think about the interaction between "I expect people to yell at me if I say I'm doing this" and promoting/enabling "honest accounting".

comment by Bucky · 2019-06-15T20:40:54.975Z · LW(p) · GW(p)

Trying to steelman the quoted section:

If one were to be above average but imperfect (e.g. not falsifying data or p-hacking but still publishing in paid access journals) then being called out for the imperfect bit could be bad. That person’s presence in the field is a net positive but if they don’t consider themselves able to afford the penalty of being perfect then they leave and the field suffers.

I’m not sure I endorse the specific example there but in a personal example:

My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.

I do spend more time on these targets than would be optimal but I think I do this considerably less than is typical. I still overfocus on targets as I’ve been told in appraisals to do so.

If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.

Replies from: Zvi, dxu
comment by Zvi · 2019-06-16T10:48:57.852Z · LW(p) · GW(p)

Thank you.

I read your steelman as importantly different from the quoted section.

It uses the weak claim that such action 'could be bad' rather than that it is bad. It also re-introduces the principle of being above average as a condition, which I consider mostly a distinct (but correlated) line of thought.

It changes the standard of behavior from 'any behavior that responds to local incentives is automatically all right' to 'behaviors that are above average and net helpful, but imperfect.'

This is an example of the kind of equivalence/transformation/Motte and Bailey I've observed, and am attempting to highlight - not that you're doing it (you're not, because this is explicitly a steelman), but the kind that I've seen. The claim that it is reasonable to focus on meeting explicit targets rather than exclusively on what is illegibly good for the company, versus the claim that it cannot be blameworthy to focus exclusively on what you are locally personally incentivized to do, which in this case is meeting explicit targets and things you would be blamed for, no matter the consequence to the company (unless it would actually suffer enough to destroy its ability to pay you).

That is no straw man. In the companies described in Moral Mazes, managers do in fact follow that second principle, and will punish those seen not doing so. In exactly this situation.

comment by dxu · 2019-06-16T03:31:59.322Z · LW(p) · GW(p)

I might try and write up a reply of my own (to Zvi's comment), but right now I'm fairly pressed for time and emotional energy, so until/unless that happens, I'm going to go ahead and endorse this response as closest to the one I would have given.

EDIT: I will note that this bit is (on my view) extremely important:

If one were to be above average but imperfect (emphasis mine)

"Above average", of course, a comparative term. If e.g. 95% of my colleagues in a particular field regularly submit papers with bad data, then even if I do the same, I am no worse from a moral perspective than the supermajority of the people I work with. (I'm not claiming that this is actually the case in academia, to be clear.) And if it's true that I'm only doing what everyone else does, then it makes no sense to call me out, especially if your "call-out" is guilt-based; after all, the kinds of people most likely to respond to guilt trips are likely to be exactly the people who are doing better than average, meaning that the primary targets of your moral attack are precisely the ones who deserve it the least.

(An interesting analogy can be made here regarding speeding--most people drive 10-15 mph over the official speed limit on freeways, at least in the US. Every once in a while, somebody gets pulled over for speeding, while all the other drivers--all of whom are driving at similarly high speeds--get by unscathed. I don't think it's particularly controversial to claim that (a) the driver who got pulled over is usually more annoyed at being singled out than they are repentant, and (b) this kind of "intervention" has pretty much zero impact on driving behavior as a whole.)

Replies from: Zvi, Zack_M_Davis, Bucky
comment by Zvi · 2019-06-16T10:36:23.779Z · LW(p) · GW(p)

Is your prediction that if it was common knowledge that police had permanently stopped pulling any cars over unless the car was at least 10 mph over the average driving speed on that highway in that direction over the past five minutes, in addition to being over the official speed limit, that average driving speeds would remain essentially unchanged?

Replies from: Bucky
comment by Bucky · 2019-06-16T11:30:22.371Z · LW(p) · GW(p)

Take out the “10mph over” and I think this would be both fairer than the existing system and more effective.

(Maybe some modification to the calculation of the average to account for queues etc.)

comment by Zack_M_Davis · 2019-07-13T19:48:01.046Z · LW(p) · GW(p)

As it happens, the case of speeding also came up in the comments on the OP. Yarkoni writes:

[...] I think the point I'm making actually works well for speeding too: when you get pulled over by a police officer for going 10 over the limit, nobody is going to take you seriously if your objection to the ticket is "but I'm incentivized to go 10 over, because I can get home a little faster, and hardly anyone ever gets pulled over at that speed!" The way we all think about speeding tickets is that, sure, there may be reasons we choose to break the law, but it's still our informed decision to do so. We don’t try to shirk the responsibility for speeding by pretending that we’re helpless in the face of the huge incentive to get where we're going just a little bit faster than the law actually allows. I think if we looked at research practice the same way, that would be a considerable improvement.

comment by Bucky · 2019-06-16T10:57:20.507Z · LW(p) · GW(p)

On reflection I’m not sure “above average” is a helpful frame.

I think it would be more helpful to say that someone who is "net negative" should be a valid target for criticism. Someone who is "net positive" but imperfect may sometimes still be a valid target depending on other considerations (such as moving an equilibrium).

comment by clone of saturn · 2019-06-15T17:54:06.274Z · LW(p) · GW(p)

I don't endorse the quoted statement; I think it's just as perverse as you do. But I do think I can explain how people get there in good faith. The idea is that moral norms have no independent existence, that they are arbitrary human constructions, and that it's therefore wrong to shame someone for violating a norm they didn't explicitly agree to follow. If you call me out for falsifying data, you're not recruiting the community to enforce its norms for the good of all. There is no community, there is no all; you're simply carrying out an unprovoked attack against me, which I can legitimately respond to as such.

(Of course, I think this requires an illogical combination of extreme cynicism towards object-level norms with a strong belief in certain meta-norms, but proponents don't see it that way.)

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2019-06-15T18:50:34.446Z · LW(p) · GW(p)

It's an assumption of a pact among fraudsters (a fraud ring). I'll cover for your lies if you cover for mine. It's a kind of peace treaty.

In the context of fraud rings being pervasive, it's valuable to allow truth and reconciliation: let the fraud that has been committed come to light (as well as the processes causing it), while having a precommitment to no punishments for people who have committed fraud. Otherwise, the incentive to continue hiding is a very strong obstacle to the exposition of truth. Additionally, the consequences of all past fraud being punished heavily would be catastrophic, so such large punishments could only make sense when selectively enforced.

Replies from: clone of saturn
comment by clone of saturn · 2019-06-15T19:08:31.034Z · LW(p) · GW(p)

Right... but fraud rings need something to initially nucleate around. (As do honesty rings)

comment by Pattern · 2019-06-15T18:37:07.736Z · LW(p) · GW(p)
are there any others here, that would endorse the quoted statement as written?

I don't endorse it in that context, because data matters. Otherwise, why not? There are plenty of situations where "bad"/"good" seems like a non-issue*/counterproductive.

*If not outright beneficial.

comment by Zvi · 2019-06-15T12:43:14.072Z · LW(p) · GW(p)
But that doesn't mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn't actually a very good way to get them to change.

I haven't said 'bad person' unless I'm missing something. I've said things like 'doing net harm in your career' or 'making it worse' or 'not doing the right thing.' I'm talking about actions, and when I say 'right thing' I mean it as shorthand for 'that which moves things in the directions you'd like to see' rather than any particular view on what is right or wrong to move towards, or what moves towards what, leaving those to the individual.

It's a strange but consistent thing that people's brains flip into assuming that anyone who thinks some actions are better than other actions is accusing those who don't take the better actions of being bad people. Or even, as you say, 'exceptionally bad' people.


Replies from: dxu
comment by dxu · 2019-06-16T03:51:13.526Z · LW(p) · GW(p)
I haven't said 'bad person' unless I'm missing something.

I mean, you haven't called anyone a bad person, but "It's Not The Incentives, It's You" is a pretty damn accusatory thing to say, I'd argue. (Of course, I'm also aware that you weren't the originator of that phrase--the author of the linked article was--but you at least endorse its use enough to repeat it in your own comments, so I think it's worth pointing out.)

Replies from: Zvi
comment by Zvi · 2019-06-16T11:00:58.050Z · LW(p) · GW(p)

Interesting. I am curious how widely endorsed this dynamic is, and what rules it operates by.

On two levels.

Level one is the idea that some level of endorsement of something means that I'm making the accusations in it. At some levels of endorsement, which happen often in the wild, that's clearly reasonable; at some other levels, which also happen often in the wild, it's clearly unreasonable.

Level two is that the OP doesn't make the claim that anyone is a bad person. I re-read the OP to check. My reading is this. It claims that they are engaging in bad actions, and that there are bad norms that seem to have emerged, that together are resulting in bad outcomes. And it argues that people are using bad justifications for that. And it importantly claims that these bad outcomes will be bad not only for 'science' or 'the world' but for the people that are taking the actions in question, who the OP believes misunderstand their own incentives, in addition to having false beliefs as to what impact actions will have on others, and sometimes not caring about such impacts.

That is importantly different from claiming that these are bad people.

Is it possible to say 'your actions are bad and maybe you should stop' or even 'your actions are having these results and maybe you should stop' without saying 'you are bad and you should feel bad'?

I actually am asking, because I don't know.

Replies from: Raemon
comment by Raemon · 2019-06-17T04:24:31.712Z · LW(p) · GW(p)
Is it possible to say 'your actions are bad and maybe you should stop' or even 'your actions are having these results and maybe you should stop' without saying 'you are bad and you should feel bad'?
I actually am asking, because I don't know.

I've touched on this elsethread, but my actual answer is that if you want to do that, you either need to create a dedicated space of trust for it that people have bought into, or you need to continuously invest effort in it. And yes, that sucks. It's hugely inefficient. But I don't actually see alternatives.

It sucks even more because it's probably anti-inductive: as some phrases become commonly understood, they later become carrier waves for subtle barbs and political manipulations. (I'm not confident how common this is. I think a more prototypical example is "southern politeness" with "Oh bless your heart".)

So I don't think there's a permanent answer for public discourse. There's just costly signaling via phrasing things carefully in a way that suggests you're paying attention to your reader's mental state (including their mental map of the current landscape of social moves people commonly pull) and writing things that expressly work to build trust given that mental state.

(Duncan's more recent writing often seems to be making an effort at this. It doesn't work universally, due to the unfortunate fact that not all one's readers will be having the same mental state. A disclaimer that reassures one person may alienate another)

It seems... hypothetically possible for LessWrong to someday establish this sort of trust, but I think it actually requires hours and hours of doublecrux for each pair of people with different worldviews, and then that trust isn't necessarily transitive to the next pair of people with different worldviews. (Worldviews which affect what even seem like reasonable meta-level norms within the paradigm of 'we're all here to truthseek'. See tensions in truthseeking [LW · GW] for some [possibly out of date] thoughts of mine on that.)

I've noted issues with Public Archipelago [LW · GW] given current technologies, but it still seems like the best solution to me.

Replies from: Benquo
comment by Benquo · 2019-06-17T21:23:42.519Z · LW(p) · GW(p)

It seems pretty fucked up to take positive proposals at face value given that context.

comment by Wei Dai (Wei_Dai) · 2019-06-16T02:48:03.678Z · LW(p) · GW(p)

Optimizing for anything is costly if you’re not counting the thing itself as a benefit.

Suppose I do count the thing itself (call it X) as a benefit. Given that I'm also optimizing for other things at the same time, the outcome I end up choosing will generally be a compromise that leaves some X on the table. If everyone is leaving some X on the table, then deciding when to blame or "call out" someone for leaving some X on the table (i.e., not being as honest in their research as they could be) becomes an issue of selective prosecution (absent some bright line in the sand such as just making up data out of thin air). I think this probably underlies some people's intuitions that calling people out for this is bad.

Being in a moral maze is not worth it. They couldn’t pay you enough, and even if they could, they definitely don’t. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.

What if Moral Mazes is the inevitable outcome of trying to coordinate a large group of humans in order to take advantage of some economy of scale? (My guess is that Moral Mazes is just part of the coordination cost that large companies are prepared to pay in order to gain the benefits of economies of scale.) Should we just give up on making use of such economies of scale?

Obviously the ideal outcome would be to invent or spread some better coordination technology that doesn't produce Moral Mazes, but if it wasn't very hard to invent/spread, someone probably would have done it already.

If academia has become a moral maze, the same applies, except that the money was never good to begin with.

As someone who explicitly opted out of academia and became an independent researcher due to similar concerns (not about faking data per se, but about generally bad coordination in academia), I obviously endorse this for anyone for whom it's a feasible option. But I'm not sure it's actually feasible at scale.

Replies from: Zvi
comment by Zvi · 2019-06-16T11:14:18.792Z · LW(p) · GW(p)

I think these are (at least some of) the right questions to be asking.

The big question of Moral Mazes, as opposed to conclusions worth making more explicit, is: Are these dynamics the inevitable result of large organizations? If so, to what extent should we avoid creating large organizations? Has this dynamic ever been different in the past in other places and times, and if so why and can we duplicate those causes?

Which I won't answer here, because it's a hard question, but my current best guess on question one is: It's the natural endpoint if you don't create a culture that explicitly opposes it (e.g. any large organization that is not explicitly in opposition to being an immoral maze will increasingly become one, and things generally only get worse over time on this axis rather than better unless you have a dramatic upheaval which usually means starting over entirely) and also that the more other large organizations around you are immoral mazes, the faster and harder such pressures will be, and the more you need to push back to stave them off.

My best guess on question two is: Quite a lot. At least right here, right now any sufficiently large organization, be it a corporation, a government, a club or party, you name it, is going to end up with these dynamics by default. That means we should do our best to avoid working for or with such organizations for our own sanity and health, and consider it a high cost on the existence of such organizations and letting them be in charge of things. That doesn't mean we can give up on major corporations or national governments without better options that we don't currently have. But I do think there are cases where an organization with large economies of scale would be net positive absent these dynamics, but is net negative with these dynamics, and these dynamics should push us (and do push us!) towards using less economies of scale. And that this is worthwhile.

As for whether exit from academia is feasible at scale (in terms of who would do the research without academia), I'm not sure, but it is feasible on the margin for a large percentage of those involved (as opposed to exit from big business, which is at least paying those people literal rent in dollars, at the cost of anticipated experiences). It's also not clear that academia as it currently exists is feasible at that scale. I'm not close enough to it to be the one who should make such claims.


Replies from: habryka4, Dagon, Pattern
comment by habryka (habryka4) · 2019-06-17T22:10:05.665Z · LW(p) · GW(p)

This comment feels like it correctly summarizes a lot of my thinking on this topic, and I would feel excited about a top-level post version of it.

Replies from: Raemon
comment by Raemon · 2019-06-17T22:57:54.677Z · LW(p) · GW(p)

Same.

comment by Dagon · 2019-06-17T23:14:46.834Z · LW(p) · GW(p)

"Selling out" has been in the well-known concept space for a long long time - it's not a particularly recent phenomenon to have to make choices where the moral/prosocial option is not the materially-rewarded one. It probably _IS_ recent that any group or endeavor can be expected to have large impact over much of humanity.

Do we have any examples of groups that both behave well AND get significant things done?

comment by Pattern · 2019-06-18T04:16:34.476Z · LW(p) · GW(p)

One idea on the subject of government is "eventually it will fail/fall. This has happened a lot throughout history, and it will happen someday to this country. Things may keep getting big/inefficient, but the system keeps chugging along until it dies."

One alternative to this would be to start a group/country/etc. with an explicit end date - or something similar with regard to some aspect of it. (Reviewing all laws on the books to see if they should stick around would be a big deal, as would implementing laws with end dates, or only laws with end dates. Some consider this to have failed in the past, though, as emergency powers demonstrate.)

comment by Raemon · 2019-06-14T23:43:44.506Z · LW(p) · GW(p)

Nod. I don't know that I disagree with any of this per se. I'll respond more on Sunday. Any disagreements I have are, I think, about how to weight things and how to strategize (with slightly different caveats for individuals, for groups with fences, and for amorphous society).

comment by Zack_M_Davis · 2019-06-19T05:03:57.147Z · LW(p) · GW(p)

unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I could imagine this being true in some sort of hyper-Malthusian setting where any deviation from the Nash equilibrium gets you immediately killed and replaced with an otherwise-identical agent who will play the Nash equilibrium.

comment by Benquo · 2019-06-14T06:21:30.245Z · LW(p) · GW(p)

Let me try to be a little clearer here.

If someone defrauds me, and I object, and they explain that the incentive structure society has set up for them pays more on net for fraud than for honest work, then this is at least a relevant reply, and one that is potentially consistent with owning one's decision to participate in corruption rather than fighting it or opting out. (Though I think the article makes a pretty good case that in the specific case of academia, "fighting it or opting out" is better for most reasonable interests.)

If someone defrauds me, and I object, and they explain that they're instead spending their goodness budget on avoiding eating meat, this is not a relevant reply in the same sense. Factory farmed animals aren't a party we're negotiating with or might want to win the trust of, and the public interest in accurate information is different in kind from the public interest in people not causing animals to suffer.

Replies from: Benquo, catherio
comment by Benquo · 2019-06-14T06:38:07.685Z · LW(p) · GW(p)

This is especially important in light of a fairly recent, massive grassroots effort [LW(p) · GW(p)] in academia - originated by academics in multiple disciplines volunteering their spare time - to do the work that led to the replication crisis, because academics in many fields are actually still trying to get the right answer along some dimensions, and are willing to endure material costs (including reputational damage to their own fields) to do so. So that's not actually a proposal to decline to initiate a stag hunt; it's a proposal to unilaterally choose Rabbit in a context where close to a critical quorum might be choosing Stag.
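For readers unfamiliar with the stag-hunt framing, here is a minimal sketch in Python with made-up payoff numbers and an invented quorum-based variant (none of these numbers or names come from the thread): Rabbit is the safe low payoff regardless of what others do, while Stag pays off only if enough people coordinate, so whether unilaterally choosing Stag is wasted effort depends entirely on how close the group is to the critical quorum.

```python
# Illustrative only: a quorum-style stag hunt with arbitrary payoff values.
def payoff(my_choice, num_choosing_stag, quorum, stag_value=4, rabbit_value=2):
    """Payoff for one player, given the total number of players choosing Stag."""
    if my_choice == "stag":
        # Stag only pays off if the quorum of cooperators is reached.
        return stag_value if num_choosing_stag >= quorum else 0
    # Rabbit is the safe option: its payoff ignores what everyone else does.
    return rabbit_value

print(payoff("stag", num_choosing_stag=59, quorum=60))    # 0 -- just below quorum, the effort is wasted
print(payoff("stag", num_choosing_stag=60, quorum=60))    # 4 -- the marginal cooperator tips the outcome
print(payoff("rabbit", num_choosing_stag=60, quorum=60))  # 2 -- safe, but lower than coordinated Stag
```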

Replies from: catherio
comment by catherio · 2019-06-14T07:48:31.968Z · LW(p) · GW(p)

Another distinction I think is important, for the specific example of "scientific fraud vs. cow suffering" as a hypothetical:

Science is a terrible career for almost any goal other than actually contributing to the scientific endeavor.

I have a guess that "science, specifically" as a career-with-harmful-impacts in the hypothetical was not especially important to Ray, but that it was very important to Ben - and that if the example career in Ray's "which harm is highest priority?" thought experiment had been "high-frequency trading" (or something else that some folks believe has harms when ordinarily practiced, but that is lucrative and thus could have benefits worth staying for, and that is not specifically a role of stewardship over our communal epistemics), Ben would have had a different response. I'm curious to what extent that's true.

Replies from: Benquo, Raemon
comment by Benquo · 2019-06-14T07:58:51.482Z · LW(p) · GW(p)

You're right that I'd respond to different cases differently. Doing high frequency trading in a way that causes some harm - if you think you can do something very good with the money - seems basically sympathetic to me, in a sufficiently unjust society such as ours.

Any info good (including finance and trading) is on some level pretending to involve stewardship over our communal epistemics, but the simulacrum level [LW · GW] of something like finance is pretty high in many respects.

comment by Raemon · 2019-06-14T21:23:57.334Z · LW(p) · GW(p)

I think your final paragraph is getting at an important element of the disagreement. To be clear, *I* treat science and high-frequency trading differently too, but yes: to me it registers as "very important", while to Ben it seems closer to "sacred" (which, to be clear, seems like quite a reasonable outlook to me).

Science is a terrible career for almost any goal other than actually contributing to the scientific endeavor.

Small background tidbit that's part of this: I think many scientists have goals that seem more like "do what their parents want" and "be respectable" or something. Which isn't about traditional financial success, but looks like opting into a particular weird sub-status-hierarchy that one might plausibly be well suited to win at.

Another background snippet informing my model:

Recently I was asking an academic friend "hey, do you think your field could benefit from better intellectual infrastructure?" and they said "you mean like LessWrong?" and I said "I mean a meta-level version of it that tries to look at the local set of needs and improve communication in some fashion."

And they said something like "man, sorry to disappoint you, but most of academia is not, like, trying to solve problems together, the way it looks like the rationality or AI alignment communities are. They wouldn't want to post clearer communications earlier in the idea-forming stage because they'd be worried about getting scooped. They're just trying to further their own career."

This is just one datapoint, and again I know very little about academia overall. Ben's comments about how the replication crisis happened via an organic grassroots process seem quite important and quite relevant.

Reiterating from my other post upthread: I am not making any claims about what people in science and/or academia should do. I'm making conditional claims, which depend on the actual state of science and academia.

comment by catherio · 2019-06-14T07:34:31.517Z · LW(p) · GW(p)

One distinction I see getting elided here:

I think one's limited resources (time, money, etc.) are a relevant consideration for one's behavior, but a "goodness budget" is not relevant at all.

For example: in a world where you could pay $50 to the electric company to convert all your electricity to renewables, or pay $50 more to switch from factory-farmed to pasture-raised beef, then if someone asks "hey, your household electrical bill is destroying the environment, why didn't you choose the green option?", a relevant reply is "because I already spent my $50 on cow suffering".

However, if both options cost $0, then "but I already switched to pasture-raised beef" is just irrelevant in its entirety.

comment by Raemon · 2019-07-10T01:17:58.378Z · LW(p) · GW(p)

A few clarifications that seem worth making for posterity:

[Epistemic status: none of this was or is anything I'm especially confident in. But I'm relatively meta-confident that if you think the answers here are obvious, you're typical minding on how obvious your particular world/morality-model is]

  • To be clear, I eat meat, and I devote a lot of effort to improving my honesty, meta-honesty and integrity. I expect the median scientist to be in a fairly different epistemic and agentic position than I am.
  • There were a couple mistakes that I noticed or had pointed out to me after making the above comment. One of them is discussed here [LW(p) · GW(p)].
  • For the point I was actually trying to make, a perhaps less (or differently?) distracting example would be: "If I'm a median academic, it's not obvious whether I should try to become more honest* than the median academic, or try to, I dunno, recycle more or something."
    • *where I think "honesty" is a skill, built out of various sub-skills including social resilience, introspection, etc.
    • This question might have different answers depending on how dishonest or honest you think the median academic is.
    • I do not think recycling is very important. [edit: concretely, it is definitely less important than academic honesty by a large margin]. But the world is full of things that might possibly be important for me to do, and figuring out which of them matter and why is hard. I expect the median academic to have put very little thought into their morality. If they think improving their honesty is more important, or improving their recycling is more important, I think this says very little about how good they are at choosing moral principles, and has much more to do with having happened to accidentally bump into a set of friends/family/etc. that prioritizes honesty or recycling or veganism or whatever.
      • I think the most important thing the median academic should do is become more socially resilient, agenty, and good at moral reasoning.
      • If a median academic decides that becoming more-honest-than-the-median academic is an important priority, I stand by the claim that they should very quickly be moving towards "figure out how to change their social environment to make it easier to coordinate on honesty." I think this dwarfs whatever their own independent efforts towards improving at honesty are.
comment by Elizabeth (pktechgirl) · 2019-06-14T06:37:59.323Z · LW(p) · GW(p)

I can't tell if you're saying eating meat is worse than faking data to you personally, or for a hypothetical academic; could you clarify? And if it is a position you personally hold, can you explain your moral calculus?

comment by Benquo · 2019-06-14T03:54:23.630Z · LW(p) · GW(p)
Try to be in the top 50% of the population at morality

What does this mean?

Replies from: Raemon
comment by Raemon · 2019-06-14T04:12:50.851Z · LW(p) · GW(p)

It was a reference to this post:

https://slatestarcodex.com/2018/11/16/the-economic-perspective-on-moral-standards/

Replies from: Zvi, Benquo
comment by Zvi · 2019-06-14T13:00:58.072Z · LW(p) · GW(p)

I almost wrote a reply to that post when it came up (but didn't, because one should not respond too much when Someone Is Wrong On The Internet, even Scott), because it neither seemed like an economic perspective on moral standards, nor worked under any equilibrium (it causes a moral purity cascade, or it does little, and rarely anything in between), nor led to useful actions on the margin in many cases, since it ignores cost/benefit questions entirely. Strictly dominated actions become commonplace. It seems more like a system for avoiding being scapegoated and for feeling good about one's self, as Benquo suggests.

(And of course, >50% of people eat essentially the maximum amount of quality meat they can afford.)

comment by Benquo · 2019-06-14T05:28:50.176Z · LW(p) · GW(p)

So you mean try to do slightly less of what can get you blamed than average? What policy goal does slightly outperforming at an incoherent standard achieve?

Replies from: Raemon
comment by Raemon · 2019-06-14T05:41:22.003Z · LW(p) · GW(p)

Try coming up with a charitable interpretation of what I said. I feel like the various posts I linked showcase why I think there are failure modes to naively doing the thing you're saying, not to mention the next two bullet points.

Replies from: Benquo
comment by Benquo · 2019-06-14T05:57:03.379Z · LW(p) · GW(p)

I don't actually understand how to be "more charitable" or "less charitable" here - I'm trying to make sense of what you're saying, and don't see any point in making up a different but similar-sounding opinion which I approve of.

If I try to back out what motives lead to tracking the average level of morality (as opposed to trying to do decision theory on specific cases), it ends up being about managing how much you blame yourself for things (i.e. trying to "be" "good"); I actually don't see how thinking about global outcomes would get you there.

If you have a different motivation that led you there, you're in a better position to explain it than I am.

comment by Slider · 2019-06-16T19:33:27.031Z · LW(p) · GW(p)

Writing posts a certain way to get more karma on LessWrong is one area where this stance applies.