Thiel on secrets and indefiniteness 2021-04-20T21:59:35.792Z
2012 Robin Hanson comment on “Intelligence Explosion: Evidence and Import” 2021-04-02T16:26:51.725Z
Logan Strohl on exercise norms 2021-03-30T04:28:22.331Z
Julia Galef and Matt Yglesias on bioethics and "ethics expertise" 2021-03-30T03:06:07.323Z
Thirty-three randomly selected bioethics papers 2021-03-22T21:38:08.281Z
Politics is way too meta 2021-03-17T07:04:42.187Z
Deflationism isn't the solution to philosophy's woes 2021-03-10T00:20:07.357Z
What I'd change about different philosophy fields 2021-03-08T18:25:30.165Z
MIRI comments on Cotra's "Case for Aligning Narrowly Superhuman Models" 2021-03-05T23:43:54.186Z
Utilitarian doppelgangers vs. making everything smell like bananas 2021-02-20T23:57:34.724Z
MIRI: 2020 Updates and Strategy 2020-12-23T21:27:39.206Z
Cartesian Frames Definitions 2020-11-08T12:44:34.509Z
"Cartesian Frames" Talk #2 this Sunday at 2pm (PT) 2020-10-28T13:59:20.991Z
Updates and additions to "Embedded Agency" 2020-08-29T04:22:25.556Z
A Petition 2020-06-25T05:44:50.050Z
EA Forum AMA - MIRI's Buck Shlegeris 2019-11-15T23:27:07.238Z
A simple sketch of how realism became unpopular 2019-10-11T22:25:36.357Z
Christiano decision theory excerpt 2019-09-29T02:55:35.542Z
Kohli episode discussion in 80K's Christiano interview 2019-09-29T01:40:33.852Z
Rob B's Shortform Feed 2019-05-10T23:10:14.483Z
Helen Toner on China, CSET, and AI 2019-04-21T04:10:21.457Z
New edition of "Rationality: From AI to Zombies" 2018-12-15T21:33:56.713Z
On MIRI's new research directions 2018-11-22T23:42:06.521Z
Comment on decision theory 2018-09-09T20:13:09.543Z
Ben Hoffman's donor recommendations 2018-06-21T16:02:45.679Z
Critch on career advice for junior AI-x-risk-concerned researchers 2018-05-12T02:13:28.743Z
Two clarifications about "Strategic Background" 2018-04-12T02:11:46.034Z
Karnofsky on forecasting and what science does 2018-03-28T01:55:26.495Z
Quick Nate/Eliezer comments on discontinuity 2018-03-01T22:03:27.094Z
Yudkowsky on AGI ethics 2017-10-19T23:13:59.829Z
MIRI: Decisions are for making bad outcomes inconsistent 2017-04-09T03:42:58.133Z
CHCAI/MIRI research internship in AI safety 2017-02-13T18:34:34.520Z
MIRI AMA plus updates 2016-10-11T23:52:44.410Z
A few misconceptions surrounding Roko's basilisk 2015-10-05T21:23:08.994Z
The Library of Scott Alexandria 2015-09-14T01:38:27.167Z
[Link] Nate Soares is answering questions about MIRI at the EA Forum 2015-06-11T00:27:00.253Z
Rationality: From AI to Zombies 2015-03-13T15:11:20.920Z
Ends: An Introduction 2015-03-11T19:00:44.904Z
Minds: An Introduction 2015-03-11T19:00:32.440Z
Biases: An Introduction 2015-03-11T19:00:31.605Z
Rationality: An Introduction 2015-03-11T19:00:31.162Z
Beginnings: An Introduction 2015-03-11T19:00:25.616Z
The World: An Introduction 2015-03-11T19:00:12.370Z
Announcement: The Sequences eBook will be released in mid-March 2015-03-03T01:58:45.893Z
A forum for researchers to publicly discuss safety issues in advanced AI 2014-12-13T00:33:50.516Z
Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda 2014-11-26T11:02:01.038Z
Groundwork for AGI safety engineering 2014-08-06T21:29:38.767Z
Politics is hard mode 2014-07-21T22:14:33.503Z
The Problem with AIXI 2014-03-18T01:55:38.274Z
Solomonoff Cartesianism 2014-03-02T17:56:23.442Z


Comment by Rob Bensinger (RobbBB) on Rob B's Shortform Feed · 2021-04-22T11:24:24.259Z · LW · GW

It's apparently not true that 90% of startups fail. From Ben Kuhn:

Hot take: the outside view is overrated.

(“Outside view” = e.g. asking “what % of startups succeed?” and assuming that’s ~= your chance of success.)

In theory it seems obviously useful. In practice, it makes people underrate themselves and prematurely give up their ambition. 

One problem is that finding the right comparison group is hard.

For instance, in one commonly-cited statistic that “90% of startups fail,” “startup” meant all newly-started small businesses including eg groceries. Failure rates vary wildly by industry!

But it’s worse than that. Even if you knew the failure rate specifically of venture-funded tech startups, that leaves out a ton of important info. Consumer startups probably fail at a higher rate than B2B startups. Single-founder startups probably fail more than 2-founder ones.

OK, so what you really need to do is look at the failure rate of 2-founder B2B startups, right?

The problem is that nearly all the relevant information about whether your startup is going to succeed isn’t encoded in your membership in some legible subgroup like that.

Instead, it’s encoded in things like: How likely are you to break up with your cofounder? How good will you be at hiring? How determined are you to never ever give up?

Outside view fans anchor far too strongly on the base rate, and don’t update enough on inside views like these.

“If only 10% of startups succeed, how could I claim a 50% chance? That’d imply I’ve observed evidence with a 5:1 odds ratio! How presumptuous!”

Actually, evidence way stronger than 5:1 is everywhere:

(It’s rarer for startups than for names, but still.)


According to an answer on Quora, "the real percentage of venture-backed startups that fail—as defined by companies that provide a 1X return or less to investors—has not risen above 60% since 2001" (2017 source). Other Quora answers I saw claimed numbers as high as 98% (on different definitions, like ten-year "failure" (survival?) rate), but didn't cite their sources.
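For what it's worth, the odds arithmetic in Ben's quote can be checked in a few lines. (This is my own illustrative sketch, not from the original thread.) A 10% base rate is 1:9 odds, so moving to a 50% posterior actually takes a 9:1 likelihood ratio; a 5:1 ratio only gets you to about 36%:

```python
def update_odds(prior_prob, likelihood_ratio):
    """Bayesian update: convert the prior probability to odds,
    multiply by the likelihood ratio, convert back to a probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# 10% base rate = 1:9 prior odds.
print(update_odds(0.10, 9))  # exactly 1:1 odds, i.e. 0.5
print(update_odds(0.10, 5))  # 5:9 odds, i.e. ~0.357
```

Either way, Ben's broader point stands: evidence carrying likelihood ratios far stronger than 5:1 (or 9:1) is commonplace.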

Comment by Rob Bensinger (RobbBB) on [ACX Linkpost] Prospectus on Próspera · 2021-04-16T16:28:11.607Z · LW · GW

Are there any obvious (or, failing that, plausible) large improvements that could have been made (or could be made) to Próspera's legal code?

Comment by Rob Bensinger (RobbBB) on A New Center? [Politics] [Wishful Thinking] · 2021-04-12T16:03:35.073Z · LW · GW

[Wishful Thinking]

Possibly-self-fulfilling-prophecy warning!

Comment by Rob Bensinger (RobbBB) on Monastery and Throne · 2021-04-08T23:25:31.171Z · LW · GW

Re "nudgers", compare Alex Tabarrok in Ezra Klein's recent article:

In all of this, the same issue recurs: What should regulators do when there’s an idea that might work to save a large number of lives and appears to be safe in early testing but there isn’t time to run large studies? “People say things like, ‘You shouldn’t cut corners,’” Tabarrok told me. “But that’s stupid. Of course you should cut corners when you need to get somewhere fast. Ambulances go through red lights!”

One problem is no one, on either side of this debate, really knows what will and won’t destroy public trust. Britain, which has been one of the most flexible in its approach to vaccines, has less vaccine hesitancy than Germany or the United States. But is that because of regulatory decisions, policy decisions, population characteristics, history, political leadership or some other factor? Scientists and politicians are jointly managing public psychology, and they’re just guessing. If a faster, looser F.D.A. would lose public trust, that’s a good reason not to have a faster, looser F.D.A. But that’s a possibility, not a fact.

“My view is this was all psychology which no one really understood, so I just said, ‘Go with the expected value. Do the thing that’ll save the most lives and stick with it,’” Tabarrok said. “That’s a better rule than trying to figure out ‘If I do this, what will someone else do?’”

Comment by Rob Bensinger (RobbBB) on Monastery and Throne · 2021-04-08T23:19:43.807Z · LW · GW

serious harm to children's development

Why do you think the lockdowns caused serious harm to children's development?

Comment by Rob Bensinger (RobbBB) on What Do We Know About The Consciousness, Anyway? · 2021-04-08T22:02:22.266Z · LW · GW

Illusionism thinks the illusion-of-phenomenal-consciousness is 'perception-like' — it's more like seeing an optical illusion, and less like just having a stubborn hunch that won't go away even though there's no apparent perceptual basis for it.

The view you're describing is different from illusionism, and is more like the one articulated by Dennett for most of his career. E.g., Dennett's 1979 “On the absence of phenomenology”:

[...] Since I hold that we have privileged access only to judgments, and since I cannot make sense of any claim to the effect that something to which I do not have privileged access is an element of my immediate conscious experience, I am left defending the view that such judgments exhaust our immediate consciousness, that our individual streams of consciousness consist of nothing but such propositional episodes, or better: that such streams of consciousness, composed exclusively of such propositional episodes, are the reality that inspires the variety of misdescriptions that pass for theories of consciousness, both homegrown and academic.

[...] You may be wondering if you even have judgments. Typically these episodes are the momentary, wordless thinkings or convictions (sometimes misleadingly called conscious or episodic beliefs) that are often supposed to be the executive bridges leading to our public, worded introspective reports from our perusal or enjoyment of the phenomenological manifold our reports are about. My view, put bluntly, is that there is no phenomenological manifold in any such relation to our reports. There are the public reports we issue, and then there are episodes of our propositional awareness, our judgments, and then there is—so far as introspection is concerned—darkness. What lies beyond or on the interior of our judgments of the moment, what grounds or causes or controls them, is wholly a matter for science or speculation—in any event it is not a matter to which we have any privileged access at all.

Or his 1991 Consciousness Explained:

[...] You seem to think there’s a difference between thinking (judging, deciding, being of the heartfelt opinion that) something seems pink to you and something really seeming pink to you. But there is no difference.

There is no such phenomenon as really seeming – over and above the phenomenon of judging in one way or another that something is the case.

Indeed, Dennett describes non-physicalism as being based on a "hunch", as though it were just a nagging hard-to-pin-down belief and not something that feels palpably present in all experience. This seems very weird to me.

These days I believe Dennett endorses illusionism instead, though I'm not sure what changed his mind if so? And I have to wonder whether he has some aphantasia-like condition that made a view as weird as delusionism appealing.

Comment by Rob Bensinger (RobbBB) on I'm from a parallel Earth with much higher coordination: AMA · 2021-04-06T22:03:40.465Z · LW · GW

'You're allowed to do what you want, provided you don't learn anything or produce wealth in the process', vs. 'You're allowed to do what you want, provided you learn something or produce wealth in the process'.

Comment by Rob Bensinger (RobbBB) on A few misconceptions surrounding Roko's basilisk · 2021-04-06T01:04:06.249Z · LW · GW


Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-04-05T16:21:15.042Z · LW · GW

I think part of the explanation is 'most consequentialists in professional philosophy dislike utilitarianism' and 'there are lots of deontologists too (in general, somewhat more deontologists than consequentialists)'.

Comment by Rob Bensinger (RobbBB) on Rationalism before the Sequences · 2021-04-03T05:42:39.454Z · LW · GW

I ~agree with this comment. If we do ever want a noun, I've proposed error-reductionism. Or maybe we want something more Anglophone... lessening-of-mistake-ism, or something...

Comment by Rob Bensinger (RobbBB) on 2012 Robin Hanson comment on “Intelligence Explosion: Evidence and Import” · 2021-04-02T16:48:10.998Z · LW · GW

General note: I'm confident Luke and Anna wouldn't endorse Robin's characterization of their position here. (Nor do I think Robin's trying to summarize their view in a way they'd endorse. Rather, he's deliberately reframing their view in other terms to try to encourage original seeing.)

A quick response on Luke and Anna's behalf (though I'm sure their own response would look very different):


You can call a nuclear weapon design or a computer virus our 'child' or 'descendant', but this wouldn't imply that we should have similar policies for nukes or viruses as the ones we have for our actual descendants. There needs to be a specific argument for why we should expect AGI systems to be like descendants on the dimensions that matter.

(Or, if we have the choice of building AGI systems that are more descendant-like versus less descendant-like, there needs to be some argument for why we ought to choose to build highly descendant-like AGI systems. E.g., if we have the option of building sentient AGI vs. nonsentient AGI, is there a reason we should need to choose "sentient"?)

It's true we shouldn't mistreat sentient AGI systems any more than we should mistreat humans; but we're in the position of having to decide what kind of AGI systems to build, with finite resources.

If approach U would produce human-unfriendly AGI and approach F would produce human-friendly AGI, you can object that choosing F over U is "brainwashing" AGI or cruelly preventing U's existence; but if you instead chose U over F, you could equally object that you're brainwashing the AGI to be human-unfriendly, or cruelly preventing F's existence. It's true that we should expect U to be a lot easier than F, but I deny that this or putting on a blindfold creates a morally relevant asymmetry. In both cases, you're just making a series of choices that determine what AGI systems look like.

Comment by Rob Bensinger (RobbBB) on 2012 Robin Hanson comment on “Intelligence Explosion: Evidence and Import” · 2021-04-02T16:31:39.133Z · LW · GW

They even seem wary of descendants who are cell-by-cell emulations of prior human brains, “brain-inspired AIs running on human-derived ‘spaghetti code’, or ‘opaque’ AI designs … produced by evolutionary algorithms.” Why? Because such descendants “may not have a clear ‘slot’ in which to specify desirable goals.”

I think Robin is misunderstanding Anna and Luke here; they're talking about vaguely human-brain-inspired AI, not about literal human brains run on computer hardware. In general, I think Robin's critique here makes sense as a response to someone saying 'we should be terrified of how strong and fast-changing ems will be, and potentially be crazily heavy-handed about controlling ems'. I don't think AGI systems are relevantly analogous, because AGI systems have a value loading problem and ems just don't.

Comment by Rob Bensinger (RobbBB) on Rationalism before the Sequences · 2021-03-31T18:16:36.663Z · LW · GW
Comment by Rob Bensinger (RobbBB) on Rob B's Shortform Feed · 2021-03-31T17:44:44.607Z · LW · GW

This is being cute, but I do think parsing 'effective altruist' this way makes a bit more sense than tacking on the word 'aspiring' and saying 'aspiring EA'. (Unless you actually are a non-EA who's aspiring to become one.)

I'm not an 'aspiring effective altruist'. It's not that I'm hoping to effectively optimize altruistic goals someday. It's that I'm already trying to do that, but I'm uncertain about whether I'm succeeding. It's an ongoing bet, not an aspiration to do something in the future.


'Aspiring rationalist' is better, but it feels at least a little bit artificial or faux-modest to me -- I'm not aspiring to be a rationalist, I'm aspiring to be rational. I feel like rationalism is weight-training, and rationality is the goal.

If people are unhealthy, we might use 'health-ism' to refer to a community or a practice for improving health.

If everyone is already healthy, it seems fine to say they're healthy but weird to say 'they're healthists'. Why is it an ism? Isn't it just a fact about their physiology?

Comment by Rob Bensinger (RobbBB) on Rob B's Shortform Feed · 2021-03-31T17:42:53.592Z · LW · GW

Yeah, I'm an EA: an Estimated-as-Effective-in-Expectation (in Excess of Endeavors with Equivalent Ends I've Evaluated) Agent with an Audaciously Altruistic Agenda.

Comment by Rob Bensinger (RobbBB) on Logan Strohl on exercise norms · 2021-03-30T20:02:08.851Z · LW · GW

I'd be interested to hear a steelman of 'social shame', which I think is something you're gesturing at — that, regardless of what Logan's talking about, a utopian society would make important use of 'social shame' to make things go well. (Perhaps that's not your point, but it's at least a thing I'm curious about!)

I'd need more evidence/arguments in order to be persuaded of any of these claims:

  • Historically, there's been no such thing as private 'shame'.
  • Private 'shame' is synonymous with 'guilt'. (I think these are close together in concept space, but I strongly guess you're missing important shades of meaning if you're rounding off Logan's topic to 'guilt'. See also In Defense of Shame.)

In general, I think you should ask more initial questions and make fewer assumptions about what's in Logan's head. E.g., you're assuming a very moralistic perspective on Logan's part (that they're really talking about guilt, which is really about "when we do something morally bad, or fail to do something morally good"), but Logan is a very aesthetics-oriented, very anti-morality sort of thinker.

Comment by Rob Bensinger (RobbBB) on Logan Strohl on exercise norms · 2021-03-30T18:32:02.132Z · LW · GW

Logan prefers the pronoun "they" over "he", FYI.

Comment by Rob Bensinger (RobbBB) on Logan Strohl on exercise norms · 2021-03-30T16:36:33.860Z · LW · GW

This flies directly in the face of the historical record. Until modernity began, shame was explicitly and strategically a public tool almost everywhere.

I don't think you understood what Logan was saying. Maybe you took "break it" to mean "cause it to have no effect", where Logan meant something like "cause it to have bad effects that don't capture the valuable things about shame".

Comment by Rob Bensinger (RobbBB) on Logan Strohl on exercise norms · 2021-03-30T16:33:54.980Z · LW · GW

Non-meta is good!

Comment by Rob Bensinger (RobbBB) on Coherence arguments imply a force for goal-directed behavior · 2021-03-30T13:10:59.655Z · LW · GW

Maybe changing the title would prime people less to have the wrong interpretation? E.g., to 'Coherence arguments require that the system care about something'.

Even just 'Coherence arguments do not entail goal-directed behavior' might help, since colloquial "imply" tends to be probabilistic, but you mean math/logic "imply" instead. Or 'Coherence theorems do not entail goal-directed behavior on their own'.

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-30T12:59:03.185Z · LW · GW

I guess I was thinking of 9/10 as a relatively low bar in the grand scheme of things ("pretty good"), and placing it so far from journalism (etc.) to express my low regard for the latter more so than my high regard for econ. But it sounds like it may belong lower on the scale regardless.

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-30T03:11:08.770Z · LW · GW

At the same time, I do not get any impression of relevant expertise either such that I feel good about this group being in a privileged position regarding any kind of ethics decision.

I've cross-posted a relevant excerpt from Julia Galef's Feb. 2021 interview of Matt Yglesias here: Julia Galef and Matt Yglesias on bioethics and "ethics expertise".

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-30T02:44:51.588Z · LW · GW

value drift makes older papers seem much much worse than they would've seemed at the time.

I don't think 'value drift' is the explanation for this. 

Comment by Rob Bensinger (RobbBB) on Misconceptions about continuous takeoff · 2021-03-29T22:12:57.237Z · LW · GW

Some skepticism from Eliezer here: 

Comment by Rob Bensinger (RobbBB) on Covid 3/25: Own Goals · 2021-03-25T22:31:51.097Z · LW · GW

My worry here is that some people who were even more paranoid than that got put in the no category, and given who follows Rob I’m guessing that could be a sizable group, but it’s cool to see. This is an overall 13.1% rate of getting Covid, and 17.8% even in the “no” group, which is substantially lower than the nationwide average of about 30% (some people answering weren’t American, but still that’s a big gap), which supports the idea that this group is a lot more cautious than usual. The “yes” group had a 9% rate, about half of the “no” group. 

Between people thinking they followed such rules when they didn’t, the lizardman constant slash misclicks/misunderstandings, false beliefs about having had Covid (which aren’t that rare) and the most cautious people of all being in the “no” group, that 2:1 ratio is almost certainly too low.

The poll was retweeted by Aella, who has 65x my follower count, so the respondents are much less rationalist and academic than you might expect. Also, only 4 of the 72 people who answered "yes, yes" followed the instructions ("leave a comment"), and 2 of those 4 admitted to being misclicks. Between this and the other issues, I think the poll results are probably almost useless.

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-24T22:18:47.442Z · LW · GW

Sounds like we need a social media site where all top-level posts must just be links to papers, and you can only reply to top-level posts, not to other replies. :)

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-24T21:56:30.202Z · LW · GW

Most commenters seemed surprised by the quality of the 33 bioethics papers I sampled. This made me wonder if our views might just be out-of-date — describing the bioethics of a few decades ago, rather than the field as it existed in 2014-2020. Luke Muehlhauser's review of the history of bioethics says:

  • Another source of annoyance [for doctors] may have been that bioethicists of the time tended to be more theological and deontological (i.e. less utilitarian), and more cautious about developing and deploying new medical capabilities, compared to doctors.[10]
  • The early laws and court decisions related to bioethics continue to have an outsized effect,[11] though bioethicists today are probably more diverse than they were in the earliest years of bioethics, and (e.g.) many of them are explicitly utilitarian.


The Rockefeller Foundation provided substantial initial funding for the Hastings Center, the first major institute focused on bioethics. As the Hastings Center grew during the 1970s, it continued to be substantially funded by philanthropists.[14] Among other early activities, the Hastings Center hired some staff researchers, organized workshops, created a visiting scholars program, and created The Hastings Center Report, which soon became the leading journal in the field.[15]

The Hastings Center Report is still listed as one of the highest-impact-factor medical ethics journals, so I decided to sample ten random papers they released in 2000 to see if these better fit my and other LWers' stereotypes about the field. A few caveats:

  • I haven't looked much at Hastings' recent issues. If their 2000 and 2020 papers are both bad, the case for 'bioethics has improved' is somewhat weaker, and we might need to think about specific subgroups of bioethicists.
  • This time, I tried to make the papers easier to digest by skimming them and picking out the claims that struck me as most central and/or actionable, trying to avoid excerpting fluff. This is totally different from what I did in the OP, so if the papers below seem very different from the ones in the OP, you might want to double-check the papers to see whether my editorial choices are doing most of the work.

The random papers this time:

1. Richard H. Nicholson. "'If It Ain't Broke, Don't Fix It.'" The Hastings Center Report 30(1). Old World News.

[...] The most important conclusion of the workshop was that, given its worldwide acceptance, it would be a serious mistake to rewrite the Declaration of Helsinki. 

[...] It was generally felt that the question of when placebos or “best proven treatment” may be used was determined by the physician’s fundamental duty to do his best for each individual patient, from which derives the ethical concept of equipoise. Clinical trials are only ethical when there is genuine uncertainty as to which arm of the trial will prove better. Several workshop participants— including Robert Temple, director of drug evaluation at the FDA—argued that the low risk to subjects justifies the use of placebo arms in clinical trials when effective treatments are available and equipoise is therefore impossible. But that puts the interests of science and society before the interests of the research subject, which is prohibited by the Declaration, and the physician fails in his duty to do his best for the patient.

2. Robert Zussman. "The Contributions of Sociology to Medical Ethics." The Hastings Center Report 30(1). Original Articles.

[...] A good deal of medical ethics is based on consequentialist claims that social scientists are well equipped to assess. If an ethical claim is based on the assertion that a practice or arrangement is ethically questionable because it results in a particular outcome, then that claim is empirically testable. Philosophical medical ethicists rarely mount those tests themselves. Social scientists, whether using ethnographic methods, reviews of records, or survey instruments, can test those claims.

3. James Lindemann Nelson. "Moral Teachings from Unexpected Quarters: Lessons for Bioethics from the Social Sciences and Managed Care." The Hastings Center Report 30(1). Original Articles.

[...] The social sciences might make a contribution to bioethics by helping the field’s practitioners understand better what’s behind its deeply installed respect for individual autonomy and whether it has assumed more the character of an ideology than a moral philosophy.

[...] Bioethical interventions into health care practice have tended to rely on rational persuasion based on arguments about values—that is to say, on the kind of educative models familiar in university settings, addressed in the main to individuals. Bioethical pedagogy thus chimes with the individualist approach that characterizes so much of mainstream ethics. An approach to both analysis and action that looks less at individuals, and more at the characteristics of institutions and how they shape human response, surely seems worth trying in SUPPORT’s wake.

[... A]t the economic level, managed care relies on the notion of the person as a consumer, as a savvy bargainer in the marketplace. But at the level of service provision, managed care—particularly when it takes the form of an HMO—suggests that people can be responsive to other, less individualist concerns, that they are willing to subordinate some of their own interests to ensure the viability and flourishing of the whole.

[...] It is fairly patent that an appreciation of the character of social structures, of the “cultures” that operate within them, and of their relationship to broader aspects of society are bioethically pertinent matters for which the tools of social scientific inquiry are key. However, the values and sensibilities that are prevalent in the social sciences may make an even bigger contribution. For example, seeing potential moral interest and perhaps even insight in at least some forms of managed care might well be easier for bioethicists if their assessment of fee-for-service financing was tempered with the kind of suspicion about professions that have been an important part of sociology. One’s willingness to see collectivist approaches as in principle at least on a par with individualist assessments of medical ethics might also be easier to achieve if one better appreciated the moral significance of social and group relations as social scientists not infrequently do.

4. Joanne Silberner. "A Gene Therapy Death." The Hastings Center Report 30(2). Capital Report.

It was what the New York Times headlined a “biotech death.” Eighteen-year-old Jesse Gelsinger died four days after receiving gene therapy for a rare metabolic disorder. It was also the first death associated with gene therapy. As details of the experiment and the death unfolded, with often contradictory information, the field of gene experimentation appeared neither orderly nor well regulated.

[...] On the national level the episode has sparked a closer look at a field that is increasingly lightly regulated even though many gene therapy protocols have corporate sponsors. Some insiders are questioning former NIH director Harold Varmus’s decision to change the RAC’s status from regulatory to advisory. Several politicians notified Varmus of their concerns. In January the FDA halted all gene therapy trials at Penn. And Congress is considering whether it should get involved in oversight of the young field.

5. Joanne Silberner. "Health Care and the Presidential Election." The Hastings Center Report 30(4). Capital Report.

[...] Polls have ranked health care fairly high as an issue that’s important to voters, and the number of those uninsured is up to 44 million. As of this writing, however, health care has yet to be a galvanizing issue. Looking back at the last few elections, I’m afraid this year is going to be another case of much noise and little action.

On a personal note, this is my last Capital Report. A column is a chance for a journalist to stretch and vent a little, and I’ve enjoyed it. But I’ve run out of fresh ways to complain about Washington’s inability to come up with a workable health plan, or a good organ transplant policy, or effective safety standards for gene therapy.

6. "Chaplain's Role in End of Life Care." The Hastings Center Report 30(5). Letters.

[...] Chaplains are committed to ministering to the spiritual needs of the patient as defined by the patient.

[...] But most patients have had little experience with hospital chaplains. We must not base care for the dying on the assumption that everyone—Hindus, Jews, Moslems, Buddhists, and atheists (including some who have struggled for years to free themselves from a Christianity they considered pernicious)— should recognize that they need to call on the Christian chaplain if they want help in dealing with spiritual issues at the end of life.

7. "The Million Dollar Question." The Hastings Center Report 30(5). Case Study.

M.C. is a seven-year-old girl diagnosed with relapsed acute lymphoblastic leukemia (ALL). She lives in a Third World country in South America. A few months after her diagnosis M.C., with her parents and younger brother, presented to the emergency room of a large Catholic teaching hospital in the United States[....]

Currently the hospital has incurred costs of $800,000 for M.C.’s care[....]

The hospital administration has requested an “ethics consult” to decide what to do. [... It] wonders whether the hospital must continue to provide uncompensated care for M.C. If it performs the [bone marrow transplant], then given its current financial situation, layoffs of up to fifteen employees will probably be necessary to offset the additional financial losses.

[...] Commentary by Lauren S. Cobbs: [...] I believe it was appropriate and indeed obligatory to admit M.C., evaluate her, and begin some form of chemotherapy. Actively pursuing therapeutic options beyond this point, however, and specifically pursuing BMT, led to an ethical gray zone. Given her aspergillus sinusitis, M.C.’s prognosis for survival with cure following BMT is reduced so as to be comparable to that for repeat intensive chemotherapy following first relapse. [... B]ecause other equivalent therapies are available and no reasonable option has been made completely unavailable, the clinicians and the institution are under no direct moral obligation to pursue BMT.

[...] Commentary by Peter A. Clark: [...] It was not only medically but ethically and legally obligatory to admit, evaluate, and diagnose M.C. when she came to the emergency room. However, after diagnosing her condition and stabilizing her, M.C. should have been referred to her home country for further treatment.

[... The hospital] does not have a moral obligation to provide a bone marrow transplant if this would generate large financial losses and jeopardize the safety and quality of care available to other patients and to the community as a whole. There is an ethical obligation to continue medical treatment once it has been initiated and determined to be beneficial, but the extenuating circumstances in this case limit that obligation.

[...] No one can be obliged to do what is impossible to do. It is certainly unjust that all people do not have equal access to health care resources; however, this is the reality of our present situation. As a matter of justice, we have an obligation to distribute the medical resources available in a manner that will bring about a reasonable balance of benefits and burdens. No hospital can be obligated to act in a manner that would threaten its ability to sustain its mission of providing health care for the good of society.

[...] Commentary by Margherita Brusa: [...] In the similar case five years ago, the hospital was apparently able to provide treatment. M.C.’s parents could not have known that the hospital’s economic situation had changed. They believed that the first treatment had set a precedent and that they had reason to expect similar treatment. That line of reasoning corresponded to a conception of justice that holds that like cases should be treated alike. Moreover, if they felt that failure to treat would be a form of discrimination, they would have felt justified in threatening to bring the case to the attention of the press.

In fact, a decision to withhold treatment would carry graver consequences than a decision to treat: without treatment, the child will die; the institution will risk bringing scandal to its Catholic identity; and the bad publicity associated with a decision to withhold treatment would harm the hospital’s and its workers’ ability to thrive in a competitive health care marketplace.

Weighing the potential harms and benefits for both the patient and the hospital leads to the following conclusion. The hospital has a prima facie obligation to provide the best care possible to all who present themselves for care. That prima facie obligation can be overridden if the hospital does not have sufficient resources to provide an optimal level of care to all its patients, particularly those who do not have a specific claim on the hospital’s resources allocated for care for the indigent. However, considering the prima facie obligation to treat, coupled with the facts that the funds may be raised over time to cover the expenses and that the negative publicity of denying care would also harm the institution, the appropriate decision in this case is to complete the treatment.

8. Kathi E. Hanna. "Research Ethics: Reports, Scandals, Calls for Change." The Hastings Center Report 30(6). Capital Report.

Protecting individuals who participate in research, although a requirement for federal research grantees for over twenty years, is suddenly a hot topic in some Washington circles. The renewed attention is the fallout from a recent series of events—some tragic and some trivial. For those who thought that the days of the Tuskegee syphilis study and Willowbrook were long gone, more recent events, although not of the same magnitude, have reminded policymakers that not all is well in the world of biomedical and behavioral research.

[...] Thus the most recent round of public scrutiny followed the death of eighteen-year-old Jesse Gelsinger during a gene transfer study at the University of Pennsylvania. Before that was the revelation that researchers with the Department of Veterans Affairs in West Los Angeles were performing risky research without obtaining subjects’ consent, and in the background were shutdowns of federally funded research at seven major research universities. Although no one had died at these research sites, the suspensions, meted out by the federal Office for Protection from Research Risks (OPRR), were a red flag that a pattern of disregard for the regulations existed that had to be corrected.

9. Farhat Moazam. "Families, Patients, and Physicians in Medical Decisionmaking: A Pakistani Perspective." The Hastings Center Report 30(6). Original Articles.

In Pakistan, as in many non‐Western cultures, decisions about a patient's health care are often made by the family or the doctor. For doctors educated in the West, the Pakistani approach requires striking a balance between preserving indigenous values and carving out room for patients to participate in their medical decisions.

10. Dena Davis. "Groups, Communities, and Contested Identities in Genetic Research." The Hastings Center Report 30(6). Original Articles.

[... E]ven if we were to agree with Charles Weijer that the three principles of research ethics (beneficence, distributive justice, respect for persons) need to be updated to include a fourth principle of respect for communities,[40] it would be quite difficult to know what that means. Does “respecting a community” mean deference to its “legitimate political authority” (p. 510) even if only the males in the community vote?

[... T]hus despite the attractions of the call for community consent for genetic research, I conclude that it is a notion too deeply flawed to be given effect.

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-24T19:11:18.159Z · LW · GW

I would expect bioethics to be healthier than theoretical ethics, for reasons related to "it more often requires coming to a decision on a current moral question": decisions feel more consequential and you get more feedback on the impact of ideas.

I'm not sure I'd expect bioethics to be healthier than the average Anglophone-philosophy subfield. I predict that adding morality to the mix makes humans get the wrong answer a lot more than they otherwise would, for a variety of reasons: emotions run higher; there's more temptation to grandstand and merely-signal; it's harder to point to good consensus models of what 'successful normative reasoning' looks like (unlike 'successful descriptive reasoning'); etc.

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-23T23:00:07.191Z · LW · GW

There are two sources of variance that make it a bit hard to update on others' impressions. Imagine a Scholarly Goodness scale like this:

10 - physics

9 - economics; other pretty-good fields

8 - mediocre well-functioning scientific fields

7 - normal fields that only have medium-sized replication crises

6 - medicine; other fields with catastrophically large replication crises

5 - social psychology

4 - journalism; criminology

3 - normal, relatively calm Twitter arguments

2 - philosophy of religion; loud angry Twitter arguments

1 - Scientology propaganda

First, some people may have higher expectations than others. Maybe (making up numbers) Kaj and you both now think bioethics is a 5, but previously Kaj expected it to be a 4 while you expected it to be a 5, so he comes away with a happy surprise while you come away exasperated with the social-psych-tier shoddiness.

Second, some people may have higher standards than others. If your standard is 'better than Twitter' (say, 'above 2.5'), then counting up to 5 might feel downright refreshing. If your standard is 'meeting the bare-minimum level of econoliteracy and quantitativeness to not yield norms and institutions that cause millions of unnecessary COVID-19 deaths or the senseless death and disability of hundreds of infants', then you're applying a different test to the papers.

I think there are also different orientations to take here, like 'bioethics has a job to do; are they getting the job done?' versus 'bioethicists are human beings with thoughts and ideas; how interesting do I find the thoughts and ideas?'. I think both frames should be in the mix: even if you come away from this still thinking bioethics is a diseased discipline, seeing the sausage get made makes it clearer how smart, well-intentioned people could end up in a situation like that.

It's less "othering"; I could imagine friends of mine in philosophy and social science starting a field that ends up causing similar problems in society.

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-23T19:00:47.494Z · LW · GW

Thanks, fixed.

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-23T18:29:50.573Z · LW · GW

Having article-by-article summaries seems useful here. And since none of us have read most of these articles, this might be a good use case for a Google Doc, so different people can improve on summaries if they spot an error.

Here's a Google Doc that starts with aphyer's summaries and allows for article-specific follow-ups: If you add something to the Google Doc, I encourage mentioning it in this comment section so others can be notified.

I'll also add a link to the Google Doc in the OP.

Some people might also prefer the Google Doc format for critiquing particular articles, and not just summarizing? Whether in the Doc or on LW, I want to encourage people to critique specific articles or arguments if something catches their eye, so we don't stay at too high a level of generality to notice subtle errors or biases, disagreements with other LWers, etc.


My own responses to aphyer's summaries:

1. Argues in favor of more bioenhancement and against refusing to do bioenhancement for reasons of 'egalitarianism', though in a somewhat wishy-washy way 'we endorse a cautious proposal'.

This and the other bioenhancement paper were written by Julian Savulescu, who studied under Peter Singer and co-edited a book with Nick Bostrom.

3. Argues in favor of 'nudging' patients to obtain consent for treatments.

Also argues that informed consent is based on “social conventions” rather than on theories of autonomy.

4. Argues that if you favor assisted suicide when doctors do it, you should also favor it when for-profit entities do it.

Treats it as obvious that commercially assisted suicide is a terrible idea; therefore physician-assisted suicide is bad too.

6. Tells the story of a patient who demanded his doctors do something really stupid and refused to budge.  Unclear what moral, if any, they want to draw.

I revisited the paper itself to find the main morals it draws:

(1) Even though paternalism is theoretically a bad idea when a patient "has been judged competent to make his or her own decisions", this example shows that there can be practical, case-by-case reasons to be unusually pushy in trying to change patients’ minds.

(2) Normally, doctors and patients are basically strangers. The "ethics consultation process" provides an opportunity to build a higher-trust relationship and/or build a mutual understanding allowing for this kind of useful paternalism.

7. A weirdly meta paper that evaluates methods for evaluating ethics.  I have no idea what this means.

From a part of the paper that wasn’t excerpted: “This project was undertaken in response to a growing concern both within and outside the field of clinical ethics that CE consultants who interact with patients, families, and health care professionals need to demonstrate competency to respond effectively to ethics consultation requests (Fox 2014, Dubler et al. 2009). Responding ‘effectively’ raises the question of what constitutes ‘quality’ in ethics consultation. While there has been historic value in the field placed on preserving a diversity of approaches to ethics consultation, diversity should not preclude evaluation of quality. Clinical ethics must continue to embrace diversity of approaches while sharpening our commitment to measurement of quality.”

The proposal is to assess quality by evaluating a portfolio consisting of the candidate’s educational background, "a written summary of the candidate's philosophy" of clinical ethics consultation, letters of recommendation, and the "centerpiece of the portfolio": "six in-depth, detailed case discussions in which the candidate led or co-led the ethics consultation". Since there are a lot of open questions and disagreements in bioethics, "presentation of arguments regarding controversies was given credit as long as they were argued clearly and coherently".

9. I have no idea what this paper is about.  It tells the story of a kid called James, but I don't know what it wants to draw from it.

A hospital has insufficient staffing over the weekends, forcing them to switch a patient to a worse therapy. The paper argues that this is ethically unacceptable, and the hospital should have foreseen this possibility and sent the patient to a better-staffed hospital at the outset.

On an unrelated side-note, the paper asks whether the “worse therapy” is actually worse, noting that there’s disagreement/ambiguity in the medical literature.

10. Argues against 'nudging' people to register as organ donors.

Also argues for “mandated active choice,” which “presents people with the choice of registering as an organ donor or not, and requires them to make a decision, for example, by making the renewal of their driver’s license conditional on them stating their donation preference.”

12. Argues that bioethicists should pay attention to neuroscience?

And that they should do less theoretical work, and engage more with real-world specifics and stakeholders’ views.

E.g. (from the paper itself): “since deep brain stimulation (DBS) trials for treating motor disorders such as Parkinson’s disease and other neuropsychiatric disorders began almost 25 years ago, there have been hundreds of theoretical papers about numerous possible ethical challenges (e.g., dehumanization, loss of autonomy, changes to personal identity, authenticity of affective states, how to obtain meaningful informed consent, therapeutic misconception, human enhancement). However, in 25 years of work on DBS there is a surprisingly small amount of empirical literature about the perspectives and experiences of stakeholders (e.g., patient-participants, caregivers, clinical trial or treatment decliners, clinicians, researchers) regarding these neuroethics issues and whether and how these issues are manifested.”

16. Talks about 'right-to-try' trials where terminally ill patients try untested drugs.  Some waffling, I'm unclear if they approve or not.

The paper is opposed to right-to-try, and warns that “the use of investigational drugs may gradually turn into fantasy therapy”.

20. Proposes a different standard for how to evaluate parent's decisions re. medical care for their children.  Unclear how it differs.

The “best interest standard” says that a parental decision should be respected if "a reasonable argument [can] be offered that the decision is best for the child, all things considered", and if the decision does not expose the child to obvious risk of harm.

The paper objects that (a) a lot of terrible practices can be 'reasonably argued for' if you start with trash assumptions (e.g., religion); and (b) some harms are worth the benefits.

The paper instead recommends the “reasonable subject standard”, which says "parents should decide for their child as the child would if she were a moral agent trying to act prudently within the constraints of morality".

28. Talks about how to handle decision-making for unrepresented patients.  Unclear what they think you should do.

Argues that "the multi-stakeholder process is ethically superior to solo decision making": "solo decisions about what is in a patient's best interest run a serious risk of being arbitrary or biased, even if both the physician and the surrogate make their respective decisions after careful consideration".

30. Argues that the HEC-C program (again!) should be more diverse - what they seem to mean by this is not the standard concept of 'diversity' but that it should cover a more diverse set of medical situations.

Specific objections include:

The paper authors haven’t taken the HEC-C examination, but they’re worried it might have similar flaws to a more standard examination, USMLE, where “questions do not depict complex ethical situations to critically think through and resolve, the content is often based on federal laws and regulations rather than ethical rules, students are prompted to either choose the ‘next best step’ or the most plausible answer that tests students’ reading comprehension more than their ethical knowledge, and correct answers often contradict the reality of patient care in clinical settings.”

Worries that curriculum standardization “restricts diversity of shared ideas and experiences”, that HEC-C focuses too much on inpatient settings, and that the $650 fee for taking the test is too high.

They want less reliance on multiple-choice questions. They support “introducing short answer questions on the HEC-C examination, and we strongly recommend a standardized patient or mock consultation component by which communication skills, including empathic responses, are assessed, as well as, how well participants are able to ‘read the environment’ and the overall context of the presented ethical issue.”

31. Argues that informed consent is 'nonsense' because people sometimes believe multiple different things.

Rather, the author thinks it’s nonsense because people have different social roles and relationships, and because goodness is about what serves the group, not about what the individual wants or consents to.

More from the paper: “Everything I do is affected by and affects others—for good or for ill. And if this conclusion is philosophically or politically uncomfortable, well, tough: we are all members of society; we’ve all put our signature to some form of social contract. [... B]ecause of the kinds of animals humans are, the good of society will also be the good of the individual. [... P]art of the job of ethics and law is to help a person’s subjective conception of the good coincide with the objective good. That is best achieved by facilitating recognition of the fact that the individual patient’s good is the wider good. This accords well with what we know empirically about the roots of human happiness. Altruism and relationality make people happy. Selfishness makes them miserable. (Foster and Herring 2015). That’s what one would expect if the relational model of human beings is correct.”

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-23T15:33:11.159Z · LW · GW

What are the biggest things that ought to appear in bioethics papers, but don't appear here? (Or don't appear as much as they should.)

Comment by Rob Bensinger (RobbBB) on Thirty-three randomly selected bioethics papers · 2021-03-23T01:43:06.032Z · LW · GW

Interested to hear people's high-level takeaways (even if they're only provisional), things that did and didn't surprise you, etc.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-19T03:03:13.487Z · LW · GW

So in the US, the benefits of a policy proposal are, unfortunately, almost irrelevant. We have essentially infinite lists of suggestions for better policies; adding more does basically nothing. The entirety of the problem is the construction of the law-making machine

I guess I don't currently believe this is true. My model is sort of:

  • Everything is a chaotic jumble and there's lots of variation from politician to politician and from government agency to government agency. Within that jumble, new good ideas (and normal smart well-intentioned actions, etc.) sometimes pop up, and periodically make a big difference.
  • To the extent this looks homogeneous, it's usually because of mimicry; what gets mimicked is highly contingent; and some of the things that shift the mimicry focus are good new ideas.
  • So there's a lot of low-hanging fruit unplucked, and yet new good ideas are still quite useful... somehow. I don't really have a theory for why that is. UBI-ish ideas seem to be catching on even though plenty of milder, less-weird improvements have languished in obscurity for decades. To my eye, academics and bloggers continuing to chatter about this seems to have made a difference.

One possible explanation for things like UBI: perhaps policymakers tend to look for modest improvements because their local incentives and bad models make them feel this is better; weird bloggers tend to gravitate toward more ambitious improvements because they're fun to think about; but in fact ambitious improvements are better across the board because they generate more enthusiasm and they more effectively shift the Overton window / break bad equilibria of silence.

and so to me, your suggestion in that sphere is that we deny the only thing that matters: the sausage-making machine.

I think some people should talk about electability and popular appeal, but hidden in specialized blogs rather than on the front page of newspapers or in most online policy discussions.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-18T18:25:34.255Z · LW · GW

Yep. Part of why the fixation on this topic seems bad to me is that it's not very informative. We're looking at tea leaves and throwing away the tea.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-18T18:23:27.336Z · LW · GW

Yep. Doesn't seem to pose the kinds of risks I talked about.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-18T18:18:14.383Z · LW · GW

An example of something that's "meta" in a different sense, and that I actually want to see more of in reporting, is "reflection on the process we're using to generate this story". E.g., I think it's good when reporters talk about their probability estimates and their calibration and discrimination track records, are transparent about how they reached a conclusion, publicly discuss and iteratively improve their policies, etc. I think it's good when people don't pretend to be Ra-like caricatures of objectivity, and instead are allowed to be human beings striving for objectivity.

It's similar to the advice I'd give a mathematician: focus on the technical problem you're thinking about rather than spending a bunch of cycles modeling group dynamics and what's popular or prestigious; but do take time occasionally to reflect on your reasoning process and see if there are ways to improve your math output.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-18T18:10:33.320Z · LW · GW

And ∞-gaffe for the kind of miasma that arises when there are too many levels to keep track of anymore, or when miasma sticks around out of habit, or when you're just imitating someone else who's treating the statement as a gaffe — statements that the brain rounds off to 'controversial' even though it can't cash that out in terms of any imagined person who would perceive it as a 0-gaffe.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T22:36:06.950Z · LW · GW

Sounds right! If there's anything I should read in order to understand and agree with your view, send it my way (including things that get written in the future).

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T21:58:36.958Z · LW · GW

Paul Graham's The Refragmentation argues that mainstream media 50 years ago in the US was a rare and fragile historical anomaly.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T21:56:38.500Z · LW · GW

I think it's good to be really cynical about the media as it exists today. I'm not sure it's good to be cynical about the-media-two-years-from-now — that has something of the property of a self-fulfilling prophecy.

I have my own personal sense of how likely it is that the media will suddenly turn over a new leaf tomorrow, but since it might turn out to be easier than I think, I won't start the conversation by stating that. Instead, I'll mention some of the specific forces I think create the status quo:

  • Self-deception and plausible deniability. Reporters don't want to think of themselves as doing a bad thing. If there were common knowledge within many newsrooms that this level of meta is bad, all or most of those newsrooms would behave a lot better. (Not perfectly, but a lot better.) Even more so if their readers and colleagues felt the same.
  • Lack of an ideology that recognizes these things as bad and clearly tells reporters what to do instead.
  • Bad ideologies filling the vacuum: ideologies that say "do the normal pragmatic thing", and ones that say "do the virtuous principled thing, but that principle is about advancing a specific political agenda that doesn't care much about epistemic principle".
  • Economic incentives. But these are partly shaped by the above incentives: many people choose to work in journalism because they want to purchase a sense that they're doing something noble and good. Many people choose to consume the news in order to purchase a sense that they're doing something responsible and virtuous.
Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T21:34:55.510Z · LW · GW


  • 0-gaffe for "something that personally offends people present because they think it violates epistemic norms or is directly harmful",
  • 1-gaffe for "something that seems offensive to someone present because they think a hypothetical absent person might consider it a 0-gaffe if they were here",
  • 2-gaffe for "something that seems offensive to someone present because they think a hypothetical absent person might consider it a 1-gaffe if they were here",
  • etc.?
Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T18:04:43.933Z · LW · GW

Level-1 non-well-founded gaffes, level-2 non-well-founded gaffes, etc.?

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T18:00:55.841Z · LW · GW

(Note that I don't think I've provided a satisfactory battle plan here, and I think a good battle plan could well involve finding ways to better align journalists' economic interests with what's virtuous, rather than just trying to market virtue to them.)

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T17:59:29.646Z · LW · GW

There's a certain perception of what "respectable journalism" looks like, and this perception is what causes the New York Times and CNN to not immediately rush down the slope to tabloid journalism in pursuit of short-term clicks.

I think this "respectable journalism" image affects newspapers' behavior because the public has this concept in mind, and many people will consume news less if it seems too far from respectable journalism. Separately, this image also affects newspapers' behavior because the journalists care about "respectable journalism" to some degree. Monetary incentives are a thing, but "how much do my peers respect me?" is also a thing, and the micro-hedonics of "how much do I respect myself?" are a thing as well.

So what I really want to do is change journalists' and newspaper-buyers' conception of what "respectable journalism" is, to have higher standards that are more robust to weird failure modes. That wince of pain people feel when they stray from what feels virtuous (whether for reputational or internal reasons) is exactly the thing society uses to not have everything fall to Moloch as soon as it possibly could.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T17:50:41.982Z · LW · GW

Yes, this is a very hypocritical post. Surely the problem with that New York Times front page isn't some fancy meta-level principle it was violating, it's that the object-level arguments against the email thing being this important are so strong!

To my eye, the object-level errors are a warning sign that there's a deeper thing going wrong.

But hopefully if I succeeded in convincing the world this is a problem, I'd be able to stop talking about "deeper things" and go back to simple object-level arguments. Like how the purpose of good philosophy is often just to help people unlearn bad philosophy.

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T17:37:51.642Z · LW · GW

(Thanks to Chana Messinger for discussing some of these topics with me. Any remaining errors in the post are society's fault for raising me wrong, not Chana's.)

Comment by Rob Bensinger (RobbBB) on Politics is way too meta · 2021-03-17T07:29:49.514Z · LW · GW

Note on the definitions: People use the word "meta" to refer to plenty of other things. If you're in a meeting to discuss Clinton's electability and someone raises a point of process, you might want to call that "meta" and distinguish it from "object-level" discussion of electability. When I define "meta", I'm just clarifying terminology in the post itself, not insisting that other posts use "meta" to refer to the exact same things.

Comment by Rob Bensinger (RobbBB) on Deflationism isn't the solution to philosophy's woes · 2021-03-11T16:51:55.034Z · LW · GW

Ben Levinstein:

I guess I have a fair amount to say, but the very quick summary of my thoughts on SI remain the same:

1. Solomonoff Induction is really just subjective bayesianism + Cromwell's rule + prob 1 that the universe is computable. I could be wrong about the exact details here, but I think this could even be exactly correct. Like for any subjective Bayesian prior that respects Cromwell's rule and is sure the universe is computable there exists some UTM that will match it. (Maybe there's some technical tweak I'm missing, but basically, that's right.) So if that's so, then SI doesn't really add anything to the problem of induction aside from saying that the universe is computable.

2. EY makes a lot out of saying you can call shenanigans with ridiculous-looking UTMs. But I mean, you can do the same with ridiculous looking priors under subjective bayes. Like, ok, if you just start with a prior of .999999 that Canada will invade the US, I can say you're engaging in shenanigans. Maybe it makes it a bit more obvious if you use UTMs, but I'm not seeing a ton of mileage shenanigans-wise.

3. What I like about SI is that it basically is just another way to think about subjective bayesianism. Like you get a cool reframing and conceptual tool, and it is definitely worth knowing about. But I don't at all buy the hype about solving induction and even codifying Ockham's Razor.

4. Man, as usual I'm jealous of some of EY's phrase-turning ability: that line about being a young intelligence with just two bits to rub together is great.

Comment by Rob Bensinger (RobbBB) on What I'd change about different philosophy fields · 2021-03-11T15:16:09.852Z · LW · GW

I don't agree with that. For one thing, that's already a branch of psychology. For another, it's purely descriptive, and so gives up on improving ethics.

I agree with this. I don't really mind if moral philosophy ends up merging into moral psychology, but I do think there are potentially valuable things philosophers can add here, that might not naturally occur to descriptive psychology as important: we can try to tease apart meta-values of varying strengths, and ask what we might value if we knew more, had more ability to self-modify, were wiser and more disciplined, etc.

Ethics is partly a scientific problem of figuring out what we currently value; but it's also an engineering problem of figuring out and implementing what we should value, which will plausibly end up cashing out as something like 'the equilibrium of our values under sufficient reflection and (reflectively endorsed!) self-modification'.