No, it's not The Incentives—it's you

post by Zack_M_Davis · 2019-06-11T07:09:16.405Z · score: 89 (28 votes) · LW · GW · 90 comments

This is a link post for

Neuroscientist Tal Yarkoni denounces many of his colleagues' tendency to appeal to publish-or-perish incentives as an excuse for sloppy science (October 2018, ~4600 words). Perhaps read as a complement to our recent [LW · GW] discussion [LW · GW] of Moral Mazes?


Comments sorted by top scores.

comment by Laura B (Lara_Foster) · 2019-06-16T15:58:28.939Z · score: 31 (8 votes) · LW · GW

There is a lot of arguing in the comments about what the 'tradeoffs' are for individuals in the scientific community and whether making those tradeoffs is reasonable. I think what's key in the quoted article is that fraudsters are trading so much for so little. They are actively obscuring and destroying scientific progress while contributing to the norm of obscuring and destroying scientific progress. Potentially preventing cures to diseases, time and life-saving technology, etc. This is REALLY BAD. And for what? A few dollars and an ego trip? An 11% instead of a 7% chance at a few dollars and an ego trip? I do not think it is unreasonable to judge this behavior as reprehensible, regardless of whether it is the 'norm'.

Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value. If 100% of scam artists steal people's money, I don't forgive a scam artist for stealing less money than the average scam artist. They are not 'making things better' by in theory reducing the average amount of money stolen per scam artist. They are still stealing money. DO NOT BECOME A SCAM ARTIST IN THE FIRST PLACE. If academia is all a scam, then that is very sad, but it does not make it ok for people to join in the scam and shrug it off as a norm.

And being fraudulent in science is SO MUCH WORSE than just being an ordinary scam artist who steals money. It's more like being a scam artist who takes money in exchange for poisoning the water and land with radioactive waste. No, it's not ok because other people are doing it.

comment by rohinmshah · 2019-06-24T17:55:16.248Z · score: 2 (1 votes) · LW · GW
Using peers in a field as a proxy for good vs. bad behavior doesn't make sense if the entire field is corrupt and destroying value.

This seems to imply that you think that the world would be better off without academia at all. Do you endorse that?

Perhaps you only mean that if the world would be better off without academia at all, and nearly everyone in it is net negative / destroying value, then no one could justify joining it. I can agree with the implication, but I disagree with the premise.

comment by Dagon · 2019-06-11T18:24:32.721Z · score: 16 (6 votes) · LW · GW

I can't upvote this strongly enough. It's the perfect followup to discussion and analysis of Moloch and imperfect equilibria (and Moral Mazes) - goes straight to the heart of "what is altruism?" If you're not taking actions contrary to incentives, choosing to do something you value that "the system" doesn't, you're not making moral choices, only economic ones.

comment by Pattern · 2019-06-11T19:40:01.140Z · score: 1 (1 votes) · LW · GW

I wouldn't call them "economic" actions/decisions - how to do things at a concrete level is about what you want. The altruist may raise money for a charity, and the selfish may act in their own (view of) self-interest, say, to accumulate money or whatever they value. The difference isn't that the moral don't act economically, it's that they act economically with regards to something else.

comment by abramdemski · 2019-06-23T00:57:43.083Z · score: 2 (1 votes) · LW · GW

I see what you mean, but there's a tendency to think of 'homo economicus' as having perfectly selfish, non-altruistic values.

Also, quite aside from standard economics, I tend to think of economic decisions as maximizing profit. Technically, the rational agent model in economics allows arbitrary objectives. But, what kinds of market behavior should you really expect?

When analyzing celebrities, it makes sense to assume rationality with a fame-maximizing utility function, because the people who manage to become and remain celebrities will, one way or another, be acting like fame-maximizers. There's a huge selection effect. So Homo Hollywoodicus can probably be modeled well with a fame-maximizing assumption.

This has nothing to do with the psychology of stardom. People may have all kinds of motives for what they do -- whether they're seeking stardom consciously or just happen to engage in behavior which makes them a star.

Similarly, when modeling politics, it is reasonable to make a Homo Politicus assumption that people seek to gain and maintain power. The politicians whose behavior isn't in line with this assumption will never break into politics, or at best will be short-lived successes. This has nothing to do with the psychology of the politicians.

And again, evolutionary game theory treats reproductive success as utility, despite the many other goals which animals might have.

So, when analyzing market behavior, it makes some sense to treat money as the utility function. Those who aren't going for money will have much less influence on the behavior of the market overall. Profit motives aren't everything, but other motives will be less important than profit motives in market analysis.

comment by Dagon · 2019-06-11T20:59:08.209Z · score: 2 (1 votes) · LW · GW
I wouldn't call them "economic" actions/decisions - how to do things at a concrete level is about what you want.

I think of economic decisions in terms of visible/modeled tradeoffs including time- and uncertainty-discounted cost/benefit choices. Moral decisions are this, plus hard-to-model (illegible) values and preferences. I acknowledge that there's a lot of variance in how those words are used in different contexts, and I'm open to suggestions on what to use instead.

comment by Pattern · 2019-06-23T06:05:00.317Z · score: 1 (1 votes) · LW · GW

In the case you referenced, "selfish" or "short sighted", depending on what you were going for, seem to fit.

If you're not taking actions contrary to incentives, choosing to do something you value that "the system" doesn't, you're not making moral choices,

I very much agree with this part.

comment by rohinmshah · 2019-06-22T19:04:27.235Z · score: 14 (5 votes) · LW · GW

I disagree with most of the post and most of the comments here. I think most academics are not explicitly committing fraud, but bad science results anyway. I also think that for the vast majority of (non-tenured) academics, if you don't follow the incentives, you don't make it in academia. If you intervened on ~100 entering PhD students and made them committed to always not following the incentives where they are bad, I predict that < 10% of them will become professors -- maybe an expected 2 of them would. So you can't say "why don't the academics just not follow the incentives"; any such person wouldn't have made it into academia. I think the appropriate worlds to consider are: science as it exists now with academics following incentives or ~no academia at all.

It is probably correct that each individual instance of having to deal with bad incentives doesn't make that much of a difference, but there are many such instances. Probably there's an 80-20 thing to do here where you get 80% of the benefit by not following the worst 20% of bad incentives, but it's actually quite hard to identify these, and it requires you to be able to predict the consequences of not following the bad incentives, which is really hard to do. (I don't think I could do it, and I've been in a PhD program for 5 years now.)

To be clear: if you know that someone explicitly and intentionally committed fraud for personal gain with the knowledge that it would result in bad science, that seems fine to punish. But this is rare, and it's easy to mistake well-intentioned mistakes for intentional fraud.

comment by jessicata (jessica.liu.taylor) · 2019-06-23T03:53:14.342Z · score: 8 (4 votes) · LW · GW

Isn't "academics who don't follow bad incentives almost never become professors" blatantly incompatible with "these are well-intentioned mistakes"?

comment by rohinmshah · 2019-06-23T16:23:54.801Z · score: 10 (4 votes) · LW · GW

The former is a statement about outcomes while the latter is a statement about intentions.

My model for how most academics end up following bad incentives is that they pick up the incentivized bad behaviors via imitation. Anyone who doesn't do this ends up doing poorly and won't make it in academia (and in any case such people are rare, imitation is the norm for humans in general). As part of imitation, people come up with explanations for why the behavior is necessary and good for them to do. (And this is also usually the right thing to do; if you are imitating a good behavior, it makes sense to figure out why it is good, so that you can use that underlying explanation to reason about what other behaviors are good.)

I think that I personally am engaging in bad behaviors because I incorrectly expect that they are necessary for some goal (e.g. publishing papers to build academic credibility). I just can't tell which ones really are necessary and which ones aren't.

comment by Benito · 2019-06-23T17:29:57.744Z · score: 6 (3 votes) · LW · GW

This seems related to the ideas in this post on unconscious economies [LW · GW].

comment by rohinmshah · 2019-06-23T21:44:29.818Z · score: 8 (4 votes) · LW · GW

Agreed that it's related, and I do think it's part of the explanation.

I will go even further: while in that post the selection happens at the level of properties of individuals who participate in some culture, I'm claiming that the selection happens at the higher level of norms of behavior in the culture, because most people are imitating the rest of the culture.

This requires even fewer misaligned individuals. Under the model where you select on individuals, you would still need a fairly large number of people to have the property of interest -- if only 1% of salesmen had the personality traits leading to them being scammy and the other 99% were usually honest about the product, the scammy salesmen probably wouldn't be able to capture all of the sales jobs. However, if most people imitate, then those 1% of salesmen will slowly push the norms towards being more scammy over generations, and you'd end up in the equilibrium where nearly every salesman is scammy.

Come to think of it, I think I would estimate that ~1% of academics are explicitly thinking about how to further their own career at the cost of science (in ways that are different from imitation).

comment by Benquo · 2019-06-23T02:02:25.571Z · score: 2 (1 votes) · LW · GW
If you intervened on ~100 entering PhD students and made them committed to always not following the incentives where they are bad, I predict that < 10% of them will become professors -- maybe an expected 2 of them would.

And how many if you didn't intervene?

So you can't say "why don't the academics just not follow the incentives"; any such person wouldn't have made it into academia.

How do you reconcile this with the immediately prior sentence?

comment by rohinmshah · 2019-06-23T16:08:21.564Z · score: 2 (1 votes) · LW · GW
And how many if you didn't intervene?

Significantly more, maybe 20. To do a proper estimate I'd need to know which field we're considering, what the base rates are, etc. The thing I should have said was that I expect it makes it ~10x less likely that you become a professor; that seems more robust to the choice of field and isn't conditional on base rates that I don't know.

The Internet suggests a base rate of 3-5%, which means without intervention 3-5 of them would become professors; if that's true I would say that with intervention an expected 0.4 of them would become professors.

How do you reconcile this with the immediately prior sentence?

I didn't mean that it was literally impossible for a person who doesn't follow the incentives to get into academia, I meant that it was much less likely. I do in fact know people in academia who I think are reasonably good at not following bad incentives.

comment by Samuel Hapák (hleumas) · 2019-06-11T19:56:49.860Z · score: 8 (5 votes) · LW · GW

Very nice. Few notes:

1. Wrong incentives are no excuse for bad behaviour; people should quit their jobs rather than engage in it.

2. The world isn't black and white; sometimes there is a gray zone where you contribute enough to be net-positive while cutting some corners to get your contribution accepted.

3. People tend to overestimate their contribution and underestimate the impact of their behaviour, so 2. is quite dangerous.

4. In an environment with sufficiently strong wrong incentives, the inevitable result is that only those with weak morals survive. Natural selection.

5. There is a lot of truth in Taleb's position that research should not be a source of your income, but rather a hobby.

comment by Dagon · 2019-06-11T22:09:34.511Z · score: 4 (3 votes) · LW · GW
5. There is a lot of truth in Taleb's position that research should not be a source of your income, but rather a hobby.

Is this specific to research? Given unaligned incentives and Goodhart, I think you could make an argument that _nothing important_ should be a source of income. All long-term values-oriented work should be undertaken as hobbies.

(Note - this is mostly a reductio argument. My actual opinion is that the split between hobby and income is itself part of the incorrect incentive structure, and there's no actual way to opt out. As such, you need to thread the needle of doing good while accepting some and rejecting other incentives.)

comment by John_Maxwell_IV · 2019-06-15T21:49:26.165Z · score: 9 (3 votes) · LW · GW

Is this specific to research? Given unaligned incentives and Goodhart, I think you could make an argument that nothing important should be a source of income. All long-term values-oriented work should be undertaken as hobbies.

This is an interesting argument for funding something like the EA Hotel over traditional EA orgs.

comment by Zvi · 2019-06-16T11:31:02.522Z · score: 6 (3 votes) · LW · GW

If the EA Hotel is easily confirmed as real, as in it is offering what it claims it is offering at a reasonable quality level at the price it claims to be offering that thing, then I am confused why it has any trouble being funded. This is yet another good reason for that.

I understand at least one good reason why there aren't more such hotels - actually doing a concrete physical world thing is hard and no one does it.

comment by John_Maxwell_IV · 2019-06-17T02:57:49.064Z · score: 9 (3 votes) · LW · GW

There's been a great deal of discussion of the EA Hotel on the EA Forum. Here's one relevant thread:

Here's another:

It's possible the hotel's funding troubles have more to do with weirdness aversion than anything else.

I personally spent 6 months at the hotel, thought it was a great environment, and felt the time I spent there was pretty helpful for my career as an EA. The funding situation is not as dire as it was a little while ago. But I've donated thousands of dollars to the project and I encourage others to donate too.

comment by Samuel Hapák (hleumas) · 2019-06-12T22:27:35.093Z · score: 7 (4 votes) · LW · GW

Some important things can be a source of income, such as farming. Farming is pretty important and there are no huge issues with farmers doing it for profit.

Problems happen when there is a huge disconnect between the value and the reward. This happens a lot in basic research, because researchers don't have any direct customers.

Arguably, in basic research, you principally can't have any customers. Your customers are future researchers who will build on top of your research. They would be able to decide whether your work was valuable or whether it was crap, but you'd be pretty old or dead by then.

comment by Viliam · 2019-06-13T21:48:50.962Z · score: 3 (2 votes) · LW · GW

As a synthesis of points 1 and 4: it is both the incentives and you. The incentives explain why the game is so bad, but you have to ask yourself why you still keep playing it.

A researcher with more personal integrity would avoid the temptation/pressure to do sloppy science... and perhaps lose the job as a result. The sloppy science itself would remain, only done by someone else.

comment by Benquo · 2019-06-14T03:55:35.752Z · score: 8 (4 votes) · LW · GW

In that case, might as well go into something better-paying.

comment by John_Maxwell_IV · 2019-06-15T21:48:46.533Z · score: 2 (1 votes) · LW · GW

Well from a consequentialist perspective, if people with a stronger desire for scientific integrity self-select out of science, that makes science weaker in the long run.

I think a more realistic norm, which will likely create better outcomes, is for you personally to ensure that your work is at least in the top 40% for quality, and castigate anyone whose work is in the bottom 20%. Either of these practices should cause a gradual increase in quality if widely implemented (assuming these thresholds are tracked & updated as they change over time).

comment by Samuel Hapák (hleumas) · 2019-06-15T22:08:43.954Z · score: 9 (2 votes) · LW · GW

Is it necessarily so? Today, science means you spend a considerable portion of your time doing bullshit instead of actual research. Wouldn't you be in a much better position to do quality research if you were earning a good salary, saving a big portion of it, and doing science as a hobby?

comment by John_Maxwell_IV · 2019-06-16T03:50:10.696Z · score: 3 (2 votes) · LW · GW

It's possible. That's what I myself am doing--supporting myself with a part-time job while I self-study and do independent FAI research.

However, it's harder to have credibility in the eyes of the public with this path. And for good reason--the public has no easy way to tell a crank apart from a lone genius, since it's hard to judge expertise in a domain unless you yourself are an expert in it. One could argue that academia acts as a reasonable approximation of eigendemocracy and thereby solves this problem.

Anyway, if the scientists with credibility are the ones who don't care about scientific integrity, that seems bad for public epistemology.

comment by Zvi · 2019-06-16T11:18:16.378Z · score: 10 (5 votes) · LW · GW

Note that Wei Dai also notes that he chose exit from academia, as did many others on Less Wrong and in our social circles (combined with surprising non-entry).

If this is the model of what is going on, that quality and useful research is much easier without academia, but academia is how one gains credibility, then destroying the credibility of academia would be the logical useful action.

comment by John_Maxwell_IV · 2019-06-17T03:54:10.847Z · score: 4 (6 votes) · LW · GW

quality and useful research is much easier without academia

I think you have to do a lot more to demonstrate this.

destroying the credibility of academia would be the logical useful action.

Did you read Scott Alexander's recent posts on cultural evolution?

If the credibility of academia is destroyed, it's not obvious something better will come along to fill that void. Why is it better to destroy than repair? Plus, if something new gets created, it will probably have its own set of flaws. The more pressure is put on your system (in terms of funding and status), the greater the incentive to game things, and the more the cracks will start to show.

I suggest instead of focusing on the destruction of a suboptimal means for ascertaining credibility, you focus on the creation of a superior means for ascertaining credibility. Let's phase academia out after it has been made obsolete, not before.

comment by Samuel Hapák (hleumas) · 2019-06-17T07:42:30.778Z · score: 1 (1 votes) · LW · GW

Academia in its current form isn't Lindy. It's not like we've been doing this for thousands of years; the current system of academia is at most 70 years old.

comment by habryka (habryka4) · 2019-06-17T22:05:19.613Z · score: 2 (1 votes) · LW · GW

The broader institutions around academia have been around since at least the Royal Society, which was founded in 1660. That's usually the age I would put on the broad institutions surrounding academia.

comment by Samuel Hapák (hleumas) · 2019-06-19T17:36:37.749Z · score: 1 (1 votes) · LW · GW

The Royal Society in 1660 and current academia are very different beasts. For example, the current citations/journals game is a pretty new phenomenon. Peer review wasn't really a thing 100 years ago. Neither were complex grant applications.

comment by habryka (habryka4) · 2019-06-19T19:08:10.582Z · score: 2 (1 votes) · LW · GW

I thought peer-review had always been a core part of science in some form or another. I think you might be confusing external peer-review and editorial peer-review. As this Wikipedia article says:

The first record of an editorial pre-publication peer-review is from 1665 by Henry Oldenburg, the founding editor of Philosophical Transactions of the Royal Society at the Royal Society of London.[2][3][4]
The first peer-reviewed publication might have been the Medical Essays and Observations published by the Royal Society of Edinburgh in 1731. The present-day peer-review system evolved from this 18th-century process,[5] began to involve external reviewers in the mid-19th-century,[6] and did not become commonplace until the mid-20th-century.[7]
Peer review became a touchstone of the scientific method, but until the end of the 19th century was often performed directly by an editor-in-chief or editorial committee.[8][9][10] Editors of scientific journals at that time made publication decisions without seeking outside input, i.e. an external panel of reviewers, giving established authors latitude in their journalistic discretion. For example, Albert Einstein's four revolutionary Annus Mirabilis papers in the 1905 issue of Annalen der Physik were peer-reviewed by the journal's editor-in-chief, Max Planck, and its co-editor, Wilhelm Wien, both future Nobel prize winners and together experts on the topics of these papers. On another occasion, Einstein was severely critical of the external review process, saying that he had not authorized the editor in chief to show his manuscript "to specialists before it is printed", and informing him that he would "publish the paper elsewhere".[11]

It's true that external peer-review is recent, which I do think is a significant shift. But I would still think that the broader institution of peer-review is basically as old as science.

comment by Samuel Hapák (hleumas) · 2019-06-19T19:50:27.664Z · score: 2 (2 votes) · LW · GW

It's a huge difference whether the reviewer is some anonymous person unrelated to the journal or whether it's an editor in chief of the journal itself. I don't think it's appropriate to call the latter peer-review (there are no "peers" involved), but that's not important.

An editor-in-chief has a strong motivation to have a good-quality journal. If he rejects a good article, it's his loss. By contrast, an anonymous peer has a stronger motivation to use the review as an opportunity to promote (get cited for) his own research than to help the journal curate the best science.

Let me try to rephrase the shift I see in science. Over the 20th century, science became bureaucratised; the process of "doing science" was largely formalised and standardised. Researchers obsess about impact factors, p-values, h-indexes, anonymous peer reviews, grants, currents...

There are actual rules in place that formally determine whether you are a "good" scientist. That wasn't the case over most of the history of science.

Also, the "full-time" scientist who never did any job other than academic research was much less common in the past. Take Einstein as an example.

comment by habryka (habryka4) · 2019-06-19T20:05:52.727Z · score: 2 (1 votes) · LW · GW

Oh, I think we both definitely agree that science has changed a lot. I do also think that it still very clearly has maintained a lot of its structure from its very early days, and to bring things back to John's top level point, it is less obvious that that structure would redevelop if we were to give up completely on academia or something like that.

comment by Raemon · 2019-06-13T21:58:22.114Z · score: 3 (10 votes) · LW · GW

Definitely think this is an important point in the conversation.

I think my take is something like "The incentives are the problem" is a useful frame for how to look at systems and (often but not always) other people, but should throw up a red flag when you use it as an excuse for your own behavior.

I'm not sure I endorse this post precisely as written, because "take ownership of your behavior" is a cause that will be Out To Get You [LW · GW] for everything you've got (while leaving you vulnerable to Asymmetric Justice in the meanwhile). There are lots of things you [generic you] probably do or are complicit in, that have a bad effect on other people. If the norm in academia is to use bad statistics, or fake data (I don't know whether it is or not, or how common it is), and I'm an academic, should I be more worried about avoiding that norm, or avoiding eating meat (which personally seems worse to me), or some third thing I do that causes harm?

The things that actually seem workable to me are:

  • Try to be in the top 50% of the population at morality
  • In general practice the muscle of defying social norms that push you to do wrong things, and pick at least some areas where you commit to be significantly better than the status quo.
  • "don't play the game" may not always be an option, but "try to create situations where you *actually* create opportunities for stag hunts [LW · GW] to succeed." i.e. in this case, maybe try to at least make it true that at your department or university or conference, it is unacceptable to fake or exaggerate data. This is a longterm battle that won't work if you halfass it.

comment by Zvi · 2019-06-14T12:49:32.574Z · score: 30 (16 votes) · LW · GW

If you're an academic and you're using fake data or misleading statistics, you are doing harm rather than good in your academic career. You are defrauding the public, you are making our academic norms be about fraud, you are destroying both public trust in academia in particular and knowledge in general, and you are creating justified reasons for this destruction of trust. You are being incredibly destructive to the central norms of how we figure things out about the world - one of many of which is whether or not it is bad to eat meat, or how we should uphold moral standards.

And you're doing it in order to extract resources from the public, and grab your share of the pie.

I would not only rather you eat meat. I would rather you literally go around robbing banks at gunpoint to pay your rent.

If one really, really did think that personally eating meat was worse than committing academic fraud - which boggles my mind, but supposing that - what the hell are you doing in academia in the first place, and why haven't you quit yet? Unless your goal now is to use academic fraud to prevent people from eating meat, which I'd hope is something you wouldn't endorse, and not what 99%+ of these people are doing. As the author of OP points out, if you can make it in academia, you can make more money outside of it, and have plenty of cash left over for salads and for subsidizing other people's salads, if that's what you think life is about.

comment by John_Maxwell_IV · 2019-06-15T21:25:03.212Z · score: 3 (2 votes) · LW · GW

fake data or misleading statistics

You shouldn't put these in the same category. Fake data is a much graver sin than failing to correct for multiple comparisons or running a study with a small sample size. For the latter two, anyone who reads your paper can see what you did (assuming you mention all the comparisons you made) and discount your conclusions accordingly. For a savvy reader or meta-analysis author, a paper which commits these sins can still improve their overall picture of the literature, especially if they employ tools to detect/correct for publication bias. It's not obvious to me that a scientist who employs these practices is doing harm with their academic career, especially given that readers are getting more and more savvy nowadays.

I don't think "fraud" is the right word for these statistical practices. Cherry-picking examples that support your point, the way an opinion columnist does, is probably a more fraudulent practice.

comment by Zvi · 2019-06-16T10:32:57.541Z · score: 11 (4 votes) · LW · GW

It's fair to say that fake data is a Boolean and a Rubicon, where once you do it once, at all, all is lost. Whereas there are varying degrees of misleading statistics versus clarifying statistics, and how one draws conclusions from those statistics, and one can engage in some amount of misleading without dooming the whole enterprise, so long as (as you note) the author is explicit and clear about what the data was and what tests were applied, so anyone reading can figure out what was actually found.

However, I think it's not that hard for it to pass a threshold where it's clearly fraud, although still a less harmful/dangerous fraud than fake data, if you accept that an opinion columnist cherry-picking examples is fraud (e.g. for it to be more fraudulent than that, especially if the opinion columnist isn't assumed to be claiming that the examples are representative). And I like that example more the more I think about it, because that's an example of where I expect to be softly defrauded, in the sense that I assume that the examples and arguments are soldiers chosen to make a point slash sell papers, rather than an attempt to create common knowledge and seek truth. If scientific papers are in the same reference class as that...

comment by Benquo · 2019-06-14T06:21:30.245Z · score: 12 (7 votes) · LW · GW

Let me try to be a little clearer here.

If someone defrauds me, and I object, and they explain that the incentive structure society has set up for them pays more on net for fraud than for honest work, then this is at least a relevant reply, and one that is potentially consistent with owning one's decision to participate in corruption rather than fighting it or opting out. (Though I think the article makes a pretty good case that in the specific case of academia, "fighting it or opting out" is better for most reasonable interests.)

If someone defrauds me, and I object, and they explain that they're instead spending their goodness budget on avoiding eating meat, this is not a relevant reply in the same sense. Factory farmed animals aren't a party we're negotiating with or might want to win the trust of, and the public interest in accurate information is different in kind from the public interest in people not causing animals to suffer.

comment by Benquo · 2019-06-14T06:38:07.685Z · score: 16 (6 votes) · LW · GW

This is especially important in the light of a fairly recent massive grass-roots effort [LW · GW] in academia - originated by academics in multiple disciplines volunteering their spare time - to do the work that led to the replication crisis, because academics in many fields are actually still trying to get the right answer along some dimensions and are willing to endure material costs (including reputational damage to their own fields) to do so. So, that's not actually a proposal to decline to initiate a stag hunt, that's a proposal to unilaterally choose Rabbit in a context where close to a critical quorum might be choosing Stag.

comment by catherio · 2019-06-14T07:48:31.968Z · score: 16 (6 votes) · LW · GW

Another distinction I think is important, for the specific example of "scientific fraud vs. cow suffering" as a hypothetical:

Science is a terrible career for almost any goal other than actually contributing to the scientific endeavor.

I have a guess that "science, specifically" as a career-with-harmful-impacts in the hypothetical was not specifically important to Ray, but that it was very important to Ben. And that if the example career in Ray's "which harm is highest priority?" thought experiment had been "high-frequency-trading" (or something else that some folks believe has harms when ordinarily practiced, but is lucrative and thus could have benefits worth staying for, and is not specifically a role of stewardship over our communal epistemics) that Ben would have a different response. I'm curious to what extent that's true.

comment by Benquo · 2019-06-14T07:58:51.482Z · score: 8 (4 votes) · LW · GW

You're right that I'd respond to different cases differently. Doing high frequency trading in a way that causes some harm - if you think you can do something very good with the money - seems basically sympathetic to me, in a sufficiently unjust society such as ours.

Any info good (including finance and trading) is on some level pretending to involve stewardship over our communal epistemics, but the simulacrum level [LW · GW] of something like finance is pretty high in many respects.

comment by Raemon · 2019-06-14T21:23:57.334Z · score: 7 (3 votes) · LW · GW

I think your final paragraph is getting at an important element of the disagreement. To be clear, *I* treat science and high frequency trading differently, too, but yes I think to me it registers as "very important" and to Ben it seems closer to "sacred" (which, to be clear, seems like a quite reasonable outlook to me)

Science is a terrible career for almost any goal other than actually contributing to the scientific endeavor.

Small background tidbit that's part of this: I think many scientists have goals that seem more like "do what their parents want" and "be respectable" or something. Which isn't about traditional financial success, but looks like opting into a particular weird sub-status-hierarchy that one might plausibly be well suited to win at.

Another background snippet informing my model:

Recently I was asking an academic friend "hey, do you think your field could benefit from better intellectual infrastructure?" and they said "you mean like LessWrong?" and I said "I mean a meta-level version of it that tries to look at the local set of needs and improve communication in some fashion."

And they said something like "man, sorry to disappoint you, but most of academia is not, like, trying to solve problems together, the way it looks like the rationality or AI alignment communities are. They wouldn't want to post clearer communications earlier in the idea-forming stage because they'd be worried about getting scooped. They're just trying to further their own career."

This is just one datapoint, and again I know very little about academia overall. Ben's comments about how the replication crisis happened via an organic grassroots process seem quite important and quite relevant.

Reiterating from my other post upthread: I am not making any claims about what people in science and/or academia should do. I'm making conditional claims, which depend on the actual state of science and academia.

comment by catherio · 2019-06-14T07:34:31.517Z · score: 14 (6 votes) · LW · GW

One distinction I see getting elided here:

I think one's limited resources (time, money, etc) are a relevant question in one's behavior, but a "goodness budget" is not relevant at all.

For example: In a world where you could pay $50 to the electric company to convert all your electricity to renewables, or pay $50 more to switch from factory to pasture-raised beef, then if someone asks "hey, your household electrical bill is destroying the environment, why didn't you choose the green option", a relevant reply is "because I already spent my $50 on cow suffering".

However, if both options cost $0, then "but I already switched to pasture-raised beef" is just irrelevant in its entirety.

comment by Elizabeth (pktechgirl) · 2019-06-14T06:37:59.323Z · score: 10 (6 votes) · LW · GW

I can't tell if you're saying eating meat is worse than faking data to you personally, or for a hypothetical academic, could you clarify? And if it is a position you personally hold, can you explain your moral calculus?

comment by Zvi · 2019-06-14T12:55:44.508Z · score: 8 (7 votes) · LW · GW

I am very surprised that you still endorse this comment on reflection, but given that you do, it's not unreasonable to ask: Given that most people lie a lot, that you think personally not eating meat is more important than not lying, your track record of actually not eating meat, and your claim that it's reasonable to be a 51st-percentile moral person, why should we then trust your statements to be truthful? Let alone in good faith. I mean, I don't expect you to lie because I know you, but if you actually believed the above for real, wouldn't my expectation be foolish?

I'm trying to square your above statement and make it make sense for you to have said it and I just... can't?

comment by Raemon · 2019-06-14T21:37:16.274Z · score: 3 (4 votes) · LW · GW

I think "you are a bad person" is a very powerful and dangerous tool to use on yourself or others. I think there are a lot of ways to deeply fuck yourself up with it.

Similarly, moral obligation is a very powerful and dangerous concept.

I think it is (sort of) reasonably safe to use with "if you are in the bottom 50% of humanity*, you are morally obligated to work on that, and if you aren't at least working on it, you are a bad person."

Aspiring towards being a truly *good* person is a lot of effort. It requires time to think a lot about your principles, it requires slack to dedicate towards both executing them and standing up to various peer pressures, etc. It is enough effort, and I think most people have enough on their plate, that I don't consider it morally obligatory.

I aspire to be a truly good person, and I in fact try to create a fenced-in-bubble, which requires you to be aspiring towards some manner of goodness in order to gain many of the benefits I contribute to the semi-public commons. I think this is a pretty good strategy, to avoid the dangers of moral obligation and "bad person" mindset, while capturing the benefits of high percentile goodness.

*possibly "if you're in the bottom 50% of your reference class", where reference class is somewhat vague.

comment by Benquo · 2019-06-16T06:35:12.946Z · score: 12 (3 votes) · LW · GW

You're the one bringing up the question of whether someone's a bad person.

comment by Zvi · 2019-06-16T11:35:53.567Z · score: 16 (4 votes) · LW · GW

True. But I do think we've run enough experiments on 'don't say anyone is a bad person, only point out bad actions and bad logic and false beliefs' to know that people by default read that as claims about who is bad, and we need better tech for what to do about this.

comment by Dagon · 2019-06-16T14:55:00.608Z · score: 3 (2 votes) · LW · GW

As long as we understand that "bad person" is shorthand for "past and likely near-future behaviors are interfering with group goals", it's a reasonable judgement to make. And it's certainly useful to call out people you'd like to eject from the group, or to reduce in status, or to force a behavioral change on.

I don't object to calling someone a bad person, I only object to believing that such a thing is real.

comment by Zvi · 2019-06-16T15:06:25.887Z · score: 12 (4 votes) · LW · GW

The thing is, I don't think that shorthand (along with similar things like "You're an idiot") ever stays understood outside of very carefully maintained systems of people working closely together in super high trust situations, even if it starts out understood.

comment by Dagon · 2019-06-16T23:22:45.852Z · score: 3 (2 votes) · LW · GW

I'd agree. Outside of closely-knit, high-trust situations, I don't think it's achievable to have that subtlety of conceptual communication. You can remind (some) people, and you can use more precise terminology where the distinction is both important and likely to succeed. In other cases, maintaining your internal standards of epistemic hygiene is valuable, even when playing status games you don't like very much.

comment by Raemon · 2019-06-17T00:10:43.914Z · score: 11 (4 votes) · LW · GW

I think two different things are going on here:

1. The OP read as directly moralizing to me. I do realize it doesn't necessarily spell it out directly, but moralizing language rarely is. I don't know the author of the OP. There are individuals I trust on LW to be able to have this sort of conversation without waging subtle-or-unsubtle wars over who is a bad person, but they are rare. I definitely don't assume that for random people on the internet.

2. My "Be in the top 50% morally" statement was specifically meant to be in the context of the full Scott Alexander post, which is explicitly about (among other things) people being worried about being a good person.

And, yes, I brought the second point up (and I did bring it up in an offhand way without doing much to establish the context, which was sloppy. I do apologize for that).

But after providing the link, it seemed like people were still criticizing that point. And... I'm not sure I have a good handle on how this played out. But my impression is something like you and maybe a couple others were criticizing the 50% comment as if it were part of a different context, whereas if you read the original post it's pretty clearly applying to the "when should you consider yourself a good/bad or blameworthy/praiseworthy person?" context. So the things that seem (to me) to make sense to criticize are either the entire frame of the post (rather than the specific rule about "be in the top 50%"), or the 50% rule in its original context. And it didn't seem like that's what was happening.

comment by Dagon · 2019-06-15T03:56:09.459Z · score: 7 (4 votes) · LW · GW

The gradients between horrific, forbidden, disallowed, discouraged, acceptable, preferable, commendable, heroic seem like something that should be discussed here. I suspect you're mixing a few different kinds of judgement of self, judgement of others, and perceived judgement by others. I don't find them to be the same thing or the same dimensions of judgement, but there's definitely some overlap.

I reject "goodness" as an attribute of a person - it does not fit my intuitions nor reasoned beliefs. There are behaviors and attitudes which are better or worse (sometimes by large amounts), but these are contingent rather than identifying. There _are_ bad people, who consistently show harmful behavior and no sign of changing throughout their lives. There are a LOT of morally mediocre people who have a mix of good and bad behavior, often more context-driven than choice-driven. I don't think I can distinguish among them, so I tend to assume that almost everyone is mediocre. Note that I can decide that someone is unpleasant or harmful TO ME, and avoid them, without having to condemn them as a bad person.

So, I don't aspire to be a truly good person, as I don't think that's a thing. I aspire to do good things and make choices for the commons, which I partake of. I'm not perfect at it, but I reject judgement on any absolute scale, so I don't think there's a line I'm trying to find where I'm "good enough", just fumbling my way around what I'm able/willing to do.

comment by Raemon · 2019-06-14T21:07:27.234Z · score: 6 (6 votes) · LW · GW

Note: I may not be able to weigh in on this more until Sunday.

Clarifying some things all at once since a few people have brought up related points. I'm probably not going to get to address the "which is worse – lying or eating meat" issue until Sunday (in the meanwhile, to be clear, I think "don't lie" is indeed one of the single most important norms to coordinate on, and to create from scratch if you don't have such a norm, regardless of whether there are other things that are as or more important)

A key clause in the above comment was:

If the norm in academia is to use bad statistics, or fake data (I don't know whether it is or not, or how common it is)

In a world where the norm in academia is to not use bad statistics, or not to fake data, then absolutely the correct thing is to uphold that norm.

In a world where the norm is explicitly not to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I don't actually know the state of academia enough to know which world we're in (I suspect different pockets of academia are different), and the sentence was intended to be useful in either world.

In the world where everyone in academia is literally faking data all the time, yes, absolutely, I think it is good to execute a broader strategy that can actually change things. I don't think it's morally obligatory, for the same reason I think it's not morally obligatory to pour all your resources into third world poverty or x-risk (despite the latter being sort of literally the most important thing). But it is probably up there in the top 5 things that are worth doing and dedicating your life to.

(it might be that you dedicate your life to fixing academia, or you leave academia, depending on how bad things are and what seems tractable)

Now, in the rationalsphere we do credibly have higher-than-average honesty norms as background, and there's clear meta-level agreement that there's some kind of stag hunt here that we're aiming for. But we haven't gotten agreement on exactly which stag hunt we're running. And most of the people I respect quite a lot (including Zvi, Benquo, Jessica Taylor, Habryka, and Duncan) periodically make moves that look to me like obvious defecting in what I'd implicitly assumed the obvious social norms were (and it seems like I make moves that look like defecting to them).

And the big topic on my mind this past couple years is figuring out how to get onto the right level of meta-alignment, where we can collectively be working on something very hard, but which a) requires strategy to get right, b) has a lot of ways to go wrong, c) I expect everyone to continuously have deep, diverging models about.

Is being good costly?

Catherio notes that if things are free, you should... just actually be a good person. I agree with this, but I think there a number of reasons why being good isn't free. This is a sort of complex domain that I think requires a few things to be acknowledged at once:

1. Willpower is real

2. Also, willpower apparently isn't real?

3. Social pressure is real

4. Resisting social pressure requires either changing your local environment or moving to a new environment or burning willpower

5. Figuring out what battles are worth fighting, and how to fight them, is cognitively hard, and if nothing else requires allotment of time

My current read on the contradictory "do people get willpower depleted" literature is that... people totally do get willpower depleted. But, also, people who hold a stance that they have infinite willpower have an easier time managing willpower. This is a quite weird intersection of epistemic and instrumental rationality that I'm not 100% sure how to think about.

If someone is in a moral-maze-esque corporation, "being locally, naively good" doesn't seem like a strategy that goes anywhere useful. It'll get subtly and unsubtly punished by people around you. It is not free – not only do you miss out on some benefits, you will probably actively get hurt.

comment by Zvi · 2019-06-14T22:15:43.034Z · score: 19 (8 votes) · LW · GW
In a world where the norm is explicitly not to do those things (i.e. greater than 50% of academics would fake data), then we have very big problems, and unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I think this claim is a hugely important error.

One scientist unilaterally deciding to stop faking data isn't going to magically make the whole world come around. But the idea that it doesn't help? That failing to do so, and not only being complicit in others faking data but also faking data, doesn't make it worse?

I don't understand how one can think that.

That's not unique to the example of faking data. That's true of anything (at least partially) observable that you'd like to change.

One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.

But don't pretend it doesn't matter.

Similarly, I find it odd that one uses the idea that 'doing the right thing is not free' as what seems to be a justification for not doing the right thing. Yes, obviously when the right thing is free for you versus the not-right thing you should do the right thing. And of course being good is costly! Optimizing for anything is costly if you're not counting the thing itself as a benefit.

But the whole point of some things being right is that you do them even though it's not free, because It's Not The Incentives, It's You. You're making a choice.

Ideally we'd design a system where one not only cultivated the virtue of doing the right thing, and was rewarded for doing that, one would also be rewarded in expectation for doing the right thing as often as possible. Doing the right thing is, in fact, a prime way of moving towards that.

Again, sometimes the cost of doing the otherwise 'right thing' gets too high. Especially if you can't coordinate on it. There are trade-offs. One can't do every good thing or never compromise.

But if there is one takeaway from Moral Mazes that everyone should have, it's a really, really simple one:

Being in a moral maze is not worth it. They couldn't pay you enough, and even if they could, they definitely don't. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.

If academia has become a moral maze, the same applies, except that the money was never good to begin with.

comment by dxu · 2019-06-15T00:35:33.012Z · score: 11 (5 votes) · LW · GW
One can argue that coordinated action would be more efficient, and I'd agree. One can argue that in context, it's not worth the trade-off to do the thing that reinforces good norms and makes things better, versus the thing that's better for you and makes things generally worse. Sure. Not everything that would improve norms is worth doing.
But don't pretend it doesn't matter.

This reads as enormously uncharitable to Raemon, and I don't actually know where you're getting it from. As far as I can tell, not a single person in this conversation has made the claim that it "doesn't matter"--and for good reason: such a claim would be ridiculous. That you seem willing to accuse someone else in the conversation of making such a claim (or "pretending" it, which is just as bad) doesn't say good things about the level of conversation.

What has been claimed is that "doing the thing that reinforces good norms" is ineffective, i.e. it doesn't actually reinforce the good norms. The claim is that without a coordinated effort, changes in behavior on an individual level have almost no effect on the behavior of the field as a whole. If this claim is true (and even if it's false, it's not obviously false), then there's no point hoping to see knock-on effects from such a change--and that in turn means all that's left is the cost-benefit calculation: is the amount of good that I would do by publishing a paper with non-fabricated data (even if I did, how would people know to pay attention to my paper and not all the other papers out there that totally did use fabricated data?), worth the time/effort/willpower it would take me to do so?

As you say: it is indeed a trade-off. Now, you might argue (perhaps rightly so!) that one individual's personal time/effort/willpower is nowhere near as important as the effects of their decision whether to fabricate data. That they ought to be willing to expend their own blood, sweat, and tears to Do The Right Thing--at least, if they consider themselves a moral person. And in fact, you made just such an argument in your comment:

Similarly, I find it odd that one uses the idea that 'doing the right thing is not free' as what seems to be a justification for not doing the right thing. Yes, obviously when the right thing is free for you versus the not-right thing you should do the right thing. And of course being good is costly! Optimizing for anything is costly if you're not counting the thing itself as a benefit.
But the whole point of some things being right is that you do them even though it's not free, because It's Not The Incentives, It's You. You're making a choice.

But this ignores the fact that every decision has an opportunity cost: if I spend vast amounts of time and effort designing and conducting a rigorous study, pre-registering my plan, controlling for all possible confounders (and then possibly getting a negative result and needing to go back to the drawing board, all while my colleague Joe Schmoe across the hall fabricates his way into Nature), this will naturally make me more tired than I would be otherwise. Perhaps it will cause me to have less patience than I normally do, become more easily frustrated at events outside of my control, be less willing to tolerate inconveniences in other areas of my life, etc. If, for example, I believed eating meat was morally wrong, I might nonetheless find it more difficult to deliberately deprive myself of meat if I was already spending a great deal of willpower every day on seeing this study through. And if I expect that to be the case, then I have to ask myself which thing I ought to prioritize: not eating meat, or doing the study properly?

This is the (somewhat derisively named) "goodness budget" Benquo mentioned upthread. But another name for it might be Moral Slack. It's the limited amount of room we have to be less than maximally good in our lives, without being socially punished for it. It's the privilege we're granted, to not have to constantly ask ourselves "Should I be doing this? Am I being a bad person for doing this?" It's--look, you wrote half the posts I just linked to. You know the concept. I don't know why you're not applying it here, but it seems pretty obvious to me that it applies just as well here as it does in any other aspect of life.

To be clear: you know that falsifying data is a Very Bad Thing. I know that falsifying data is a Very Bad Thing. Raemon knows that falsifying data is a Very Bad Thing. We all know that falsifying data is bad. But if that's the way the incentives point (and that's a very important if!), then it's also bad to call people out for doing it. If you do that, then you're using moral indignation as a weapon--a way to not only coerce other people into using up their willpower, but to come out of it looking good yourself.

People who manage to resist the incentives--who ignore the various siren calls they constantly hear--are worthy of extremely high praise. They are exceptionally good people--by definition, in fact, because if they weren't exceptional, everyone else would be doing it, too. By all means, praise those people as much as you want. But that doesn't mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn't actually a very good way to get them to change. "It's Not The Incentives, It's You" puts the emphasis in the wrong place, and it degrades communication with people who might have been reachable with a more nuanced take.

comment by Zvi · 2019-06-15T13:22:15.556Z · score: 39 (13 votes) · LW · GW
We all know that falsifying data is bad. But if that's the way the incentives point (and that's a very important if!), then it's also bad to call people out for doing it.

No. No. Big No. A thousand times no.

(We all agree with that first sentence, everyone here knows these things are bad, that's just quoted for context. Also note that everyone agrees that those incentives are bad and efficient action to change them would be a good idea.)

I believe the above quote is a hugely important crux. Likely it, or something upstream of it, is the crux. Thank you for being explicit here. I'm happy to know that this is not a straw-man, that this is not going to get the motte-and-bailey treatment.

I'm still worried that such treatment will mostly occur...

There is a position that seems to be increasingly held and openly advocated for: that if someone does something according to their local, personal, short-term amoral incentives, this is, if not automatically praiseworthy (although I believe I have frequently seen this too, increasingly explicitly, but not here or by anyone in this discussion), at least immunity from being blameworthy, no matter the magnitude of that incentive. One cannot 'call them out' on such action, even if such calling out has no tangible consequences.

I'm too boggled, and too confused about how one gets there in good faith, to figure out how to usefully argue against such positions in a way that might convince people who sincerely disagree. So instead, I'm simply going to ask: are there any others here who would endorse the quoted statement as written? Are there people who endorse the position in the above paragraph, as written? With or without an explanation as to why. Either, or both. If so, please confirm this.

comment by catherio · 2019-06-19T23:41:34.186Z · score: 17 (7 votes) · LW · GW

Here's another further-afield steelman, inspired by blameless postmortem culture.

When debriefing / investigating a bad outcome, it's better for participants to expect not to be labeled as "bad people" (implicitly or explicitly) as a result of coming forward with information about choices they made that contributed to the failure.

More social pressure against admitting publicly that one is contributing poorly contributes to systematic hiding/obfuscation of information about why people are making those choices (e.g. incentives). And we need all that information to be out in the clear (or at least available to investigators who are committed & empowered to solve the systemic issues), if we are going to have any chance of making lasting changes.

In general, I'm curious what Zvi and Ben think about the interaction between "I expect people to yell at me if I say I'm doing this" and promoting/enabling "honest accounting".

comment by Bucky · 2019-06-15T20:40:54.975Z · score: 6 (3 votes) · LW · GW

Trying to steelman the quoted section:

If one were to be above average but imperfect (e.g. not falsifying data or p-hacking, but still publishing in paid-access journals), then being called out for the imperfect bit could be bad. That person's presence in the field is a net positive, but if they don't consider themselves able to afford the penalty of being perfect then they leave, and the field suffers.

I’m not sure I endorse the specific example there but in a personal example:

My incentive at work is to spend more time on meeting my targets (vs other less measurable but important tasks) than is strictly beneficial for the company.

I do spend more time on these targets than would be optimal but I think I do this considerably less than is typical. I still overfocus on targets as I’ve been told in appraisals to do so.

If someone were to call me out on this I think I would be justified in feeling miffed, even if the person calling me out was acting better than me on this axis.

comment by Zvi · 2019-06-16T10:48:57.852Z · score: 10 (3 votes) · LW · GW

Thank you.

I read your steelman as importantly different from the quoted section.

It uses the weak claim that such action 'could be bad' rather than that it is bad. It also re-introduces the principle of being above average as a condition, which I consider mostly a distinct (but correlated) line of thought.

It changes the standard of behavior from 'any behavior that responds to local incentives is automatically all right' to 'behaviors that are above average and net helpful, but imperfect.'

This is an example of the kind of equivalence/transformation/motte-and-bailey I've observed, and am attempting to highlight - not that you're doing it, you're not, because this is explicitly a steelman, but that I've seen. It's the difference between the claim that it is reasonable to focus on meeting explicit targets rather than exclusively on what is illegibly good for the company, versus the claim that it cannot be blameworthy to focus exclusively on what you are locally personally incentivized to do - which in this case is meeting explicit targets and avoiding things you would be blamed for - no matter the consequence to the company (unless it would actually suffer enough to destroy its ability to pay you).

That is no straw man. In the companies described in Moral Mazes, managers do in fact follow that second principle, and will punish those seen not doing so. In exactly this situation.

comment by dxu · 2019-06-16T03:31:59.322Z · score: 2 (1 votes) · LW · GW

I might try and write up a reply of my own (to Zvi's comment), but right now I'm fairly pressed for time and emotional energy, so until/unless that happens, I'm going to go ahead and endorse this response as closest to the one I would have given.

EDIT: I will note that this bit is (on my view) extremely important:

If one were to be above average but imperfect (emphasis mine)

"Above average", of course, a comparative term. If e.g. 95% of my colleagues in a particular field regularly submit papers with bad data, then even if I do the same, I am no worse from a moral perspective than the supermajority of the people I work with. (I'm not claiming that this is actually the case in academia, to be clear.) And if it's true that I'm only doing what everyone else does, then it makes no sense to call me out, especially if your "call-out" is guilt-based; after all, the kinds of people most likely to respond to guilt trips are likely to be exactly the people who are doing better than average, meaning that the primary targets of your moral attack are precisely the ones who deserve it the least.

(An interesting analogy can be made here regarding speeding--most people drive 10-15 miles over the official speed limit on freeways, at least in the US. Every once in a while, somebody gets pulled over for speeding, while all the other drivers--all of whom are driving at similarly high speeds--get by unscathed. I don't think it's particularly controversial to claim that (a) the driver who got pulled over is usually more annoyed at being singled out than they are recalcitrant, and (b) this kind of "intervention" has pretty much zero impact on driving behavior as a whole.)

comment by Zvi · 2019-06-16T10:36:23.779Z · score: 9 (2 votes) · LW · GW

Is your prediction that if it was common knowledge that police had permanently stopped pulling any cars over unless the car was at least 10 mph over the average driving speed on that highway in that direction over the past five minutes, in addition to being over the official speed limit, that average driving speeds would remain essentially unchanged?

comment by Bucky · 2019-06-16T11:30:22.371Z · score: 1 (1 votes) · LW · GW

Take out the “10mph over” and I think this would be both fairer than the existing system and more effective.

(Maybe some modification to the calculation of the average to account for queues etc.)

comment by Bucky · 2019-06-16T10:57:20.507Z · score: 1 (1 votes) · LW · GW

On reflection I’m not sure “above average” is a helpful frame.

I think it would be more helpful to say someone being “net negative” should be a valid target for criticism. Someone who is “net positive” but imperfect may sometimes still be a valid target depending on other considerations (such as moving an equilibrium).

comment by clone of saturn · 2019-06-15T17:54:06.274Z · score: 3 (2 votes) · LW · GW

I don't endorse the quoted statement, I think it's just as perverse as you do. But I do think I can explain how people get there in good faith. The idea is that moral norms have no independent existence, they are arbitrary human constructions, and therefore it's wrong to shame someone for violating a norm they didn't explicitly agree to follow. If you call me out for falsifying data, you're not recruiting the community to enforce its norms for the good of all. There is no community, there is no all, you're simply carrying out an unprovoked attack against me, which I can legitimately respond to as such.

(Of course, I think this requires an illogical combination of extreme cynicism towards object-level norms with a strong belief in certain meta-norms, but proponents don't see it that way.)

comment by jessicata (jessica.liu.taylor) · 2019-06-15T18:50:34.446Z · score: 2 (1 votes) · LW · GW

It's an assumption of a pact among fraudsters (a fraud ring). I'll cover for your lies if you cover for mine. It's a kind of peace treaty.

In the context of fraud rings being pervasive, it's valuable to allow truth and reconciliation: let the fraud that has been committed come to light (as well as the processes causing it), while having a precommitment to no punishments for people who have committed fraud. Otherwise, the incentive to continue hiding is a very strong obstacle to the exposition of truth. Additionally, the consequences of all past fraud being punished heavily would be catastrophic, so such large punishments could only make sense when selectively enforced.

comment by clone of saturn · 2019-06-15T19:08:31.034Z · score: 2 (1 votes) · LW · GW

Right... but fraud rings need something to initially nucleate around. (As do honesty rings)

comment by Pattern · 2019-06-15T18:37:07.736Z · score: 1 (1 votes) · LW · GW
are there any others here, that would endorse the quoted statement as written?

I don't endorse it in that context, because data matters. Otherwise, why not? There are plenty of situations where labeling behavior "bad"/"good" seems like a non-issue*, or even counterproductive.

*If not outright beneficial.

comment by Zvi · 2019-06-15T12:43:14.072Z · score: 11 (5 votes) · LW · GW
But that doesn't mean that everyone who fails to do what they did is an exceptionally bad person, and lambasting them for it isn't actually a very good way to get them to change.

I haven't said 'bad person' unless I'm missing something. I've said things like 'doing net harm in your career' or 'making it worse' or 'not doing the right thing.' I'm talking about actions, and when I say 'right thing' I mean shorthand for 'that which moves things in the directions you'd like to see' rather than any particular view on what is right or wrong to move towards, or what moves towards what, leaving those to the individual.

It's a strange but consistent thing that people's brains flip into assuming that anyone who thinks some actions are better than others is accusing those who don't take the better actions of being bad people. Or even, as you say, 'exceptionally bad' people.

comment by dxu · 2019-06-16T03:51:13.526Z · score: 2 (1 votes) · LW · GW
I haven't said 'bad person' unless I'm missing something.

I mean, you haven't called anyone a bad person, but "It's Not The Incentives, It's You" is a pretty damn accusatory thing to say, I'd argue. (Of course, I'm also aware that you weren't the originator of that phrase--the author of the linked article was--but you at least endorse its use enough to repeat it in your own comments, so I think it's worth pointing out.)

comment by Zvi · 2019-06-16T11:00:58.050Z · score: 10 (3 votes) · LW · GW

Interesting. I am curious how widely endorsed this dynamic is, and what rules it operates by.

On two levels.

Level one is the one where some level of endorsement of something means that I'm making the accusations in it. At some levels of endorsement, which do often occur in the wild, that's clearly reasonable; at other levels, which also often occur in the wild, it's clearly unreasonable.

Level two is that the OP doesn't make the claim that anyone is a bad person. I re-read the OP to check. My reading is this. It claims that they are engaging in bad actions, and that there are bad norms that seem to have emerged, that together are resulting in bad outcomes. And it argues that people are using bad justifications for that. And it importantly claims that these bad outcomes will be bad not only for 'science' or 'the world' but for the people that are taking the actions in question, who the OP believes misunderstand their own incentives, in addition to having false beliefs as to what impact actions will have on others, and sometimes not caring about such impacts.

That is importantly different from claiming that these are bad people.

Is it possible to say 'your actions are bad and maybe you should stop' or even 'your actions are having these results and maybe you should stop' without saying 'you are bad and you should feel bad'?

I actually am asking, because I don't know.

comment by Raemon · 2019-06-17T04:24:31.712Z · score: 11 (6 votes) · LW · GW
Is it possible to say 'your actions are bad and maybe you should stop' or even 'your actions are having these results and maybe you should stop' without saying 'you are bad and you should feel bad'?
I actually am asking, because I don't know.

I've touched on this elsethread, but my actual answer is that if you want to do that, you either need to create a dedicated space of trust for it, that people have bought into. Or you need to continuously invest effort in it. And yes, that sucks. It's hugely inefficient. But I don't actually see alternatives.

It sucks even more because it's probably anti-inductive: as some phrases become commonly understood, they later become carrier waves for subtle barbs and political manipulations. (I'm not confident how common this is. I think a more prototypical example is "southern politeness", as in "Oh, bless your heart".)

So I don't think there's a permanent answer for public discourse. There's just costly signaling via phrasing things carefully in a way that suggests you're paying attention to your reader's mental state (including their mental map of the current landscape of social moves people commonly pull) and writing things that expressly work to build trust given that mental state.

(Duncan's more recent writing often seems to be making an effort at this. It doesn't work universally, due to the unfortunate fact that not all one's readers will be having the same mental state. A disclaimer that reassures one person may alienate another)

It seems... hypothetically possible for LessWrong to someday establish this sort of trust, but I think it actually requires hours and hours of doublecrux for each pair of people with different worldviews, and then that trust isn't necessarily transitive between the next pair of people with different worldviews. (Worldviews which affect what even seem like reasonable meta-level norms within the paradigm of 'we're all here to truthseek'. See tensions in truthseeking [LW · GW] for some [possibly out of date] thoughts of mine on that)

I've noted issues with Public Archipelago [LW · GW] given current technologies, but it still seems like the best solution to me.

comment by Benquo · 2019-06-17T21:23:42.519Z · score: 12 (4 votes) · LW · GW

It seems pretty fucked up to take positive proposals at face value given that context.

comment by Wei_Dai · 2019-06-16T02:48:03.678Z · score: 10 (5 votes) · LW · GW

Optimizing for anything is costly if you’re not counting the thing itself as a benefit.

Suppose I do count the thing itself (call it X) as a benefit. Given that I'm also optimizing for other things at the same time, the outcome I end up choosing will generally be a compromise that leaves some X on the table. If everyone is leaving some X on the table, then deciding when to blame or "call out" someone for leaving some X on the table (i.e., not being as honest in their research as they could be) becomes an issue of selective prosecution (absent some bright line in the sand, such as just making up data out of thin air). I think this probably underlies some people's intuitions that calling people out for this is bad.

Being in a moral maze is not worth it. They couldn’t pay you enough, and even if they could, they definitely don’t. Even if you end up CEO, you still lose. These lives are not worth it. Do not be a middle manager at a major corporation that looks like this. Do not sell your soul.

What if Moral Mazes is the inevitable outcome of trying to coordinate a large group of humans in order to take advantage of some economy of scale? (My guess is that Moral Mazes is just part of the coordination cost that large companies are prepared to pay in order to gain the benefits of economies of scale.) Should we just give up on making use of such economies of scale?

Obviously the ideal outcome would be to invent or spread some better coordination technology that doesn't produce Moral Mazes, but if it wasn't very hard to invent/spread, someone probably would have done it already.

If academia has become a moral maze, the same applies, except that the money was never good to begin with.

As someone who explicitly opted out of academia and became an independent researcher due to similar concerns (not about faking data per se, but about generally bad coordination in academia), I obviously endorse this for anyone for whom it's a feasible option. But I'm not sure it's actually feasible at scale.

comment by Zvi · 2019-06-16T11:14:18.792Z · score: 25 (9 votes) · LW · GW

I think these are (at least some of) the right questions to be asking.

The big question of Moral Mazes, as opposed to conclusions worth making more explicit, is: Are these dynamics the inevitable result of large organizations? If so, to what extent should we avoid creating large organizations? Has this dynamic ever been different in the past in other places and times, and if so why and can we duplicate those causes?

Which I won't answer here, because it's a hard question. But my current best guess on question one is: it's the natural endpoint if you don't create a culture that explicitly opposes it. Any large organization that is not explicitly in opposition to being an immoral maze will increasingly become one, and things generally only get worse over time on this axis rather than better, unless you have a dramatic upheaval, which usually means starting over entirely. Also, the more the other large organizations around you are immoral mazes, the faster and harder such pressures will be, and the more you need to push back to stave them off.

My best guess on question two is: Quite a lot. At least right here, right now any sufficiently large organization, be it a corporation, a government, a club or party, you name it, is going to end up with these dynamics by default. That means we should do our best to avoid working for or with such organizations for our own sanity and health, and consider it a high cost on the existence of such organizations and letting them be in charge of things. That doesn't mean we can give up on major corporations or national governments without better options that we don't currently have. But I do think there are cases where an organization with large economies of scale would be net positive absent these dynamics, but is net negative with these dynamics, and these dynamics should push us (and do push us!) towards using less economies of scale. And that this is worthwhile.

As for whether exit of academia is feasible at scale (in terms of who would do the research without academia), I'm not sure, but it is feasible on the margin for a large percentage of those involved (as opposed to exit from big business, which is at least paying those people literal rent in dollars, at the cost of anticipated experiences). It's also not clear that academia as it currently exists at scale is feasible at that scale. I'm not close enough to it, to be the one who should make such claims.

comment by Dagon · 2019-06-17T23:14:46.834Z · score: 6 (3 votes) · LW · GW

"Selling out" has been in the well-known concept space for a long long time - it's not a particularly recent phenomenon to have to make choices where the moral/prosocial option is not the materially-rewarded one. It probably _IS_ recent that any group or endeavor can be expected to have large impact over much of humanity.

Do we have any examples of groups that both behave well AND get significant things done?

comment by habryka (habryka4) · 2019-06-17T22:10:05.665Z · score: 5 (2 votes) · LW · GW

This comment feels like it correctly summarizes a lot of my thinking on this topic, and I would feel excited about a top-level post version of it.

comment by Raemon · 2019-06-17T22:57:54.677Z · score: 3 (1 votes) · LW · GW


comment by Pattern · 2019-06-18T04:16:34.476Z · score: 1 (1 votes) · LW · GW

One idea on the subject of government is "eventually it will fail/fall. This has happened a lot throughout history, and it will happen someday to this country. Things may keep getting big/inefficient, but the system keeps chugging along until it dies."

One alternative to this, would be to start a group/country/etc. with an explicit end date - something similar with regards to some aspect. (Reviewing all laws on the books to see if they should stick around would be a big deal, as would implementing laws with end dates, or only laws with end dates. Some consider this to have failed in the past though, as emergency powers demonstrate.)

comment by Raemon · 2019-06-14T23:43:44.506Z · score: 5 (2 votes) · LW · GW

Nod. I don't know that I disagree with any of this per se. I'll respond more on Sunday. Any disagreements I have I think are about how to weight things and how to strategize (with slightly different caveats for individuals, for groups with fences, and for amorphous society)

comment by Zack_M_Davis · 2019-06-19T05:03:57.147Z · score: 4 (2 votes) · LW · GW

unilaterally deciding to stop faking data... is nice, but isn't actually going to help unless it is part of a broader, more concerted strategy.

I could imagine this being true in some sort of hyper-Malthusian setting where any deviation from the Nash equilibrium gets you immediately killed and replaced with an otherwise-identical agent who will play the Nash equilibrium.

comment by Benquo · 2019-06-14T03:54:23.630Z · score: 4 (2 votes) · LW · GW
Try to be in the top 50% of the population at morality

What does this mean?

comment by Raemon · 2019-06-14T04:12:50.851Z · score: 1 (2 votes) · LW · GW

It was a reference to this post:

comment by Zvi · 2019-06-14T13:00:58.072Z · score: 13 (5 votes) · LW · GW

I almost wrote a reply to that post when it came up (but didn't because one should not respond too much when Someone Is Wrong On The Internet, even Scott), because this neither seemed like an economic perspective on moral standards, nor did it work under any equilibrium (it causes a moral purity cascade, or it does little, rarely anything in between), nor did it lead to useful actions on the margin in many cases as it ignores cost/benefit questions entirely. Strictly dominated actions become commonplace. It seems more like a system for avoiding being scapegoated and feeling good about one's self, as Benquo suggests.

(And of course, >50% of people eat essentially the maximum amount of quality meat they can afford.)

comment by Benquo · 2019-06-14T05:28:50.176Z · score: 8 (4 votes) · LW · GW

So you mean try to do slightly less of what can get you blamed than average? What policy goal does slightly outperforming at an incoherent standard achieve?

comment by Raemon · 2019-06-14T05:41:22.003Z · score: 2 (6 votes) · LW · GW

Try coming up with a charitable interpretation of what I said. I feel like the various posts I linked showcase why I think there are failure modes to naively doing the thing you're saying, not to mention the next two bullet points.

comment by Benquo · 2019-06-14T05:57:03.379Z · score: 17 (4 votes) · LW · GW

I don't actually understand how to be "more charitable" or "less charitable" here - I'm trying to make sense of what you're saying, and don't see any point in making up a different but similar-sounding opinion which I approve of.

If I try to back out what motives lead to tracking the average level of morality (as opposed to trying to do decision theory on specific cases), it ends up to be about managing how much you blame yourself for things (i.e. trying to "be" "good"); I actually don't see how thinking about global outcomes would get you there.

If you have a different motivation that led you there, you're in a better position to explain it than I am.

comment by Slider · 2019-06-16T19:33:27.031Z · score: 1 (1 votes) · LW · GW

Writing posts a certain way to get more karma on LessWrong is an area of application for this stance.