Posts

Big Advance in Infinite Ethics 2017-11-28T15:10:47.396Z

Comments

Comment by bwest on Why everything might have taken so long · 2018-01-04T21:53:51.248Z · LW · GW

> 1. People’s brains were actually biologically less functional fifty thousand years ago.

link?

Comment by bwest on Big Advance in Infinite Ethics · 2017-12-29T19:46:03.515Z · LW · GW

Thanks for the response. EDIT: Adam pointed out to me that LDU does not in fact suffer from dictatorship of the present, contrary to what I originally stated below and what you argued above. What you are saying is true for a fixed discount factor, but in this case we take the limit as the discount factor approaches 1.

The property you describe is known as "dictatorship of the present", and you can read more about it here. In order to get rid of this "dictatorship" you end up having to reject things like stationarity, which are plausibly just as counterintuitive.
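To spell out the contrast in the EDIT above (this is a sketch of the idea, not necessarily the exact formulation in the paper): with a fixed discount factor $\delta < 1$, discounted utilitarianism ranks streams by

$$\sum_{t=0}^{\infty} \delta^t x_t,$$

so the far future gets vanishing weight, which is where the "dictatorship of the present" worry comes from. LDU instead compares streams only in the limit, roughly

$$x \succsim y \iff \liminf_{\delta \to 1^-} \sum_{t=0}^{\infty} \delta^t (x_t - y_t) \ge 0,$$

so no single fixed discount factor is ever privileged.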

> I'm surprised that this is presented as a big advance in infinite ethics as people have certainly thought about this in economics, machine learning and ethics before.

Could you elaborate? The reason that I thought this was important was:

> Previous algorithms like the overtaking criterion had fairly "obvious" incomparable streams, with no real justification for why those streams would not be encountered by a decision-maker. LDU is not complete, but we at least have some reason to think that it may be all we "practically" need.

Are there other algorithms which you think are all we will "practically" need?
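To make the incomparability point concrete, here is the kind of example I have in mind (a standard illustration, not something specific to the paper). Take

$$x = (1, 0, 2, 0, 2, 0, \dots), \qquad y = (0, 2, 0, 2, 0, 2, \dots).$$

The difference in partial sums, $\sum_{t=0}^{T} (x_t - y_t)$, alternates between $1$ and $-1$ forever, so neither stream ever permanently pulls ahead and the overtaking criterion stays silent.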

Comment by bwest on Big Advance in Infinite Ethics · 2017-12-29T19:37:57.410Z · LW · GW

FYI, I was still confused about this so I posted on Math.SE. Someone responded that the above proof is incorrect, but they gave their own proof that there is no computable ordering over infinite utility streams which respects Pareto.

Comment by bwest on Big Advance in Infinite Ethics · 2017-11-29T23:43:35.593Z · LW · GW

Thanks! But is that correct? I notice that your argument seems to work for finite sequences as well (or even single rational numbers), but clearly we can order the rational numbers.

Comment by bwest on Big Advance in Infinite Ethics · 2017-11-29T00:20:57.996Z · LW · GW

Thanks! Someone (maybe it was you?) pointed me to Chen and Rubio's stuff before, and it sounds interesting.

I don't fully understand the informal write up you have above, but I'm looking forward to seeing the final thing!

Comment by bwest on Big Advance in Infinite Ethics · 2017-11-28T16:40:22.461Z · LW · GW

Thanks! Your idea is interesting – I put a comment on that post.

Something you are probably aware of is that accepting "anonymity" (allowing the sequence to be reordered arbitrarily) requires us to reject seemingly intuitive principles like Pareto (if you can make someone better off and no one worse off, then you should do so).
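The standard example, if I am remembering it correctly: take

$$x = (0, 1, 0, 1, 0, 1, \dots), \qquad y = (1, 1, 0, 1, 0, 1, \dots),$$

where $y$ is just $x$ with the first person's utility raised from $0$ to $1$. Pareto says $y \succ x$, but $y$ is also a rearrangement of $x$ (both contain infinitely many $0$s and infinitely many $1$s), so anonymity under arbitrary permutations says $x \sim y$. You cannot have both.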

Personally, I would rather keep Pareto than anonymity, but I think it's cool to explore what anonymous orderings can do.

Comment by bwest on Big Advance in Infinite Ethics · 2017-11-28T15:39:18.656Z · LW · GW

The canonical problem in infinite ethics is to create a preference relation over infinite utility streams which is in some sense "reasonable". That's what this approach does.

For more background you can see this background article or my overview of the field.

Comment by bwest on 11/07/2017 Development Update: LaTeX! · 2017-11-28T15:11:43.511Z · LW · GW

Thanks Oliver and Ben!

I believe it is working now; I posted it here: https://www.lesserwrong.com/posts/tW37uofzXd7ngW8Np/big-advance-in-infinite-ethics

Comment by bwest on 11/07/2017 Development Update: LaTeX! · 2017-11-28T00:20:14.308Z · LW · GW

Does LaTeX work in posts? I wrote up a draft post and I'm pretty sure that I used the correct syntax, but I don't see the formulas being formatted.

Maybe it does not work unless the post is fully submitted instead of just a draft? I didn't want to submit the post just to test that out though.

Comment by bwest on Moloch's Toolbox (1/2) · 2017-11-06T14:42:39.677Z · LW · GW

That's a great question, and yes, there is generally some amount of risk adjustment performed. To take the example Eliezer used earlier in this post: you can find much more information than you probably wanted about the risk adjustment for central line-associated bloodstream infections (CLABSI) here.

Comment by bwest on Moloch's Toolbox (1/2) · 2017-11-06T01:19:59.524Z · LW · GW
> No hospital would benefit from being the first to publish statistics, so none of them do.

Hospital statistics have been published for several years now on Hospital Compare. Similar programs exist for outpatient and nursing home quality metrics. This is largely due to efforts by the Obama administration.

(Of course, this might just shift the "inadequate equilibrium" question to: if these statistics are published, how come so few people use them?)

Comment by bwest on Against EA PR · 2017-09-24T17:22:25.180Z · LW · GW

Approximately 10% of respondents to the EA survey said that animal welfare was the most important cause: http://effective-altruism.com/ea/1e5/ea_survey_2017_series_cause_area_preferences/

Comment by bwest on Against EA PR · 2017-09-24T17:18:56.109Z · LW · GW

Thanks for the interesting post!

It seems to me that there are two types of simplification:

  1. Simplification for pedagogical purposes (e.g. "imagine this as a point mass moving on a frictionless plane…")

  2. Simplification for no good reason (e.g. "balls rolling on a surface will eventually stop because of their natural motion")

I agree that overhead ratios are a simplification of the second form: they are not much simpler than metrics like QALY/dollar, yet they are much less informative.
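(With made-up numbers purely for illustration: a charity that spends 20% on overhead but produces 1 QALY per $100 is a better giving opportunity than one that spends 5% on overhead but produces 1 QALY per $1,000, and the overhead ratio points you in exactly the wrong direction.)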

I disagree, though, that QALY/dollar is pointless simplification. It is definitely the case that we need to consider flow-through effects, how our decisions affect the unborn, etc., but for both practical and pedagogical reasons we might say something like "let us suppose that we only care about human beings living right now." This seems very analogous to a physicist talking about point masses or an economist talking about perfect competition.

I'm curious: do you disagree that these simplifications are useful, or do you just think we should do a better job of calling out that they are simplifications?