Meta-Preference Utilitarianism 2020-02-04T20:24:36.814Z · score: 8 (6 votes)
Is the Reversal Test overrated? 2020-01-25T19:01:36.316Z · score: 9 (3 votes)
Another argument against cryonics 2019-12-30T15:36:48.716Z · score: -22 (7 votes)


Comment by bob-jacobs on Meta-Preference Utilitarianism · 2020-02-16T18:24:14.778Z · score: 1 (1 votes) · LW · GW

This is talking about the underlying preferences, not the surface level preferences. It's an abstract moral system where we try to optimize people's utility function, not a concrete political one where we ask people what they want.

Comment by bob-jacobs on Meta-Preference Utilitarianism · 2020-02-05T19:33:39.398Z · score: 2 (2 votes) · LW · GW

You're right that the system of 'do what you want' is an all-encompassing system. But it also leaves a lot of things underspecified (basically everything), which was (in my opinion) the more important insight.

Comment by bob-jacobs on Meta-Preference Utilitarianism · 2020-02-05T18:33:39.064Z · score: 1 (1 votes) · LW · GW

I mentioned the utilitarian voting method, also known as score voting. This is the most accurate way to gauge people's preferences (especially if the amount of nuance is unbounded, e.g. 0.827938222...) provided you don't have to deal with strategic voting (which would be the case if we were simply reading people's utility functions).
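A minimal sketch of the tallying step, with hypothetical option names and scores (the source doesn't specify an implementation): every voter scores every option on [0, 1], and the option with the highest summed score wins.

```python
# Score voting sketch (hypothetical ballots): each voter scores every
# option on [0, 1]; the option with the highest total score wins.
def score_vote(ballots):
    """ballots: list of dicts mapping option -> score in [0, 1]."""
    totals = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0.0) + score
    # The winner is the option with the highest summed score.
    return max(totals, key=totals.get), totals

winner, totals = score_vote([
    {"average": 1.0, "total": 0.25, "median": 0.0},
    {"average": 0.9, "total": 0.9,  "median": 0.1},
])
# winner == "average" (1.9 vs 1.15 vs 0.1)
```

Because scores can be any real number in the interval, the "unbounded nuance" above is captured directly by the ballot values.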

Comment by bob-jacobs on Meta-Preference Utilitarianism · 2020-02-05T17:15:37.381Z · score: 2 (2 votes) · LW · GW

Thank you very much for this comment, it explained my thoughts better than I could have ever written.

Yes, I think moral realism is false and didn't realize that was not a mainstream position in the EA community. I had trouble accepting it myself for the longest time and I was incredibly frustrated that all evidence seemed to point away from moral realism. Eventually I realized that freedom could only exist in the arbitrary and that a clockwork moral code would mean a clockwork life.

I'm only a first-year student so I'll be very interested in seeing what a professional (like yourself) could extrapolate from this idea. The rough draft you showed me is already very promising and I hope you get around to eventually making a post about it.

Comment by bob-jacobs on Meta-Preference Utilitarianism · 2020-02-05T14:05:12.022Z · score: 2 (2 votes) · LW · GW

And yes you go as many levels of meta as needed to solve the problem. I only call it 'meta-preference utilitarianism' because 'gauging-a-potentially-infinite-amount-of-meta-preferences utilitarianism' isn't quite as catchy.

Comment by bob-jacobs on Meta-Preference Utilitarianism · 2020-02-05T13:56:19.912Z · score: 1 (1 votes) · LW · GW

I did answer that question (albeit indirectly) but let me make it explicit.

Because of score voting, the issue between total and average aggregation is indeed dissolved (even with a fixed population).

As for the second problem: score voting will also resolve it the vast majority of the time, but let's look at a (very) rare case where it would actually produce a tie:

Alice and Bob want: Total (0.25), Average (1), Median (0)

Cindy and Dan want: Total (0.25), Average (0), Median (1)

And Elizabeth wants: Total (1), Average (0), Median (0)

So the final score is: Total (2), Average (2), Median (2)

(Note that for convenience I assume the ambivalence factor has already been calculated in.)

In this case only one person is completely in favor of total, with the others lukewarm on it, but there is a very strong split on the average-median question. (Yes, this is a very bizarre scenario.)

Numerically these all have the same score, so the next question becomes: which do we pursue? This could be solved with a score vote too. How strong is your preference for:

(1) Picking one strategy at random
(2) Pursuing each strategy 33% of the time
(3) Picking the method that the fewest people gave a zero
(4) Only pursuing, proportionally, the methods that more than one person gave a 1
...etc, etc...

But what if, due to some unbelievable cosmic coincidence, that next vote also ends in a tie?

Well, you go up one more level until either the ambivalence takes over (I doubt I would care after 5 levels of meta) or there is a tie-breaker. Although it is technically possible to have ties across an infinite number of meta-levels, in reality this will never happen.
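A toy sketch of that escalation, using the tie from the example above as level 0 and a made-up level-1 ballot (all names and numbers here are hypothetical, not from the source):

```python
# Escalate through meta-levels of score votes until a tie breaks,
# or give up after a fixed number of levels ("ambivalence takes over").
def tally(ballots):
    totals = {}
    for ballot in ballots:
        for option, score in ballot.items():
            totals[option] = totals.get(option, 0.0) + score
    return totals

def resolve(levels, max_levels=5):
    """levels: list of ballot-lists, one per meta-level.
    Returns the first uniquely-winning option, or None if none is found."""
    for ballots in levels[:max_levels]:
        totals = tally(ballots)
        best = max(totals.values())
        winners = [o for o, s in totals.items() if s == best]
        if len(winners) == 1:
            return winners[0]  # tie broken at this level
    return None  # ambivalence takes over

# Level 0 reproduces the 2-2-2 tie from the example above;
# level 1 (a hypothetical meta-vote) breaks it.
levels = [
    [{"total": 0.25, "average": 1, "median": 0},
     {"total": 0.25, "average": 1, "median": 0},
     {"total": 0.25, "average": 0, "median": 1},
     {"total": 0.25, "average": 0, "median": 1},
     {"total": 1,    "average": 0, "median": 0}],
    [{"random": 0.2, "mixed": 0.5, "fewest-zeros": 0.9}],
]
# resolve(levels) == "fewest-zeros"
```

The `max_levels` cutoff stands in for the point where the ambivalence takes over.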

Comment by bob-jacobs on Meta-Preference Utilitarianism · 2020-02-05T12:45:25.292Z · score: 4 (3 votes) · LW · GW

We are talking about a hypothetical vote here, where we could glean people's underlying preferences. Not what people think they want (people get that wrong all the time) but their actual utility function. This leaves us with three options:

1) You do not actually care about how we aggregate utility; this would result in an ambivalence score of 0.

2) You do have an underlying preference that you just don't know consciously; this means your underlying preference gets counted.

3) You do care about how we aggregate utility, but aren't inherently in favor of either average or total. So when we gauge your ambivalence we see that you do care (1, or something high), but you really like both average (e.g. 0.9) and total (e.g. 0.9), with other methods like median and mode getting something low (e.g. 0.1).

In all cases the system works to accommodate your underlying preferences.
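A minimal sketch of how the ambivalence factor could fold into the tally, with the numbers taken from cases (1) and (3) above (the pairing of voters with an explicit weight is my own illustrative assumption):

```python
# Weight each voter's scores by their ambivalence factor in [0, 1]:
# someone with ambivalence 0 (case 1) contributes nothing to the tally.
def weighted_tally(voters):
    """voters: list of (ambivalence, scores) pairs."""
    totals = {}
    for ambivalence, scores in voters:
        for option, score in scores.items():
            totals[option] = totals.get(option, 0.0) + ambivalence * score
    return totals

totals = weighted_tally([
    (0.0, {"average": 1.0, "total": 0.0}),  # case (1): doesn't care at all
    (1.0, {"average": 0.9, "total": 0.9}),  # case (3): cares, likes both
])
# totals == {"average": 0.9, "total": 0.9}
```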

Comment by bob-jacobs on Meta-Preference Utilitarianism · 2020-02-05T11:14:03.264Z · score: 1 (1 votes) · LW · GW

Imagine a universe full of Robin Hanson lookalikes (all total utilitarians) that desperately want to kickstart the age of em (the repugnant conclusion). The dictator of this universe is a median utilitarian that uses black magic and nano-bots to euthanize all depressed people and sabotage any progress towards the age of em. Do you think that in this case the dictator should ideally change his behavior as to maximize the meta-preferences of his citizens?

Comment by bob-jacobs on Is the Reversal Test overrated? · 2020-01-26T16:26:57.687Z · score: 1 (1 votes) · LW · GW

I think the marginal version is indeed a good way of dissecting arguments (and I thought I did use that version).

The counterfactual version is a bit more icky. I'm not saying it can never be used, but in this example I feel like if "I" had always had a brain that ran smoothly despite being 50 degrees hotter, that wouldn't really be "me".

Maybe it's just a failure of imagination on my part, but in most cases I feel like I'm supposed to speak for a creature that I can't really speak for.

Comment by bob-jacobs on Another argument against cryonics · 2020-01-25T18:39:49.376Z · score: 1 (1 votes) · LW · GW

Very sad. I'm not saying people have the strength to do these things, I'm just saying they are (from a utilitarian perspective) irrational.

Comment by bob-jacobs on Another argument against cryonics · 2019-12-30T17:06:25.357Z · score: 1 (1 votes) · LW · GW

I understood that they cut off tissue for research, unless you know of one where they don't. I also couldn't find a source for how long they preserve brains. But if there is one that keeps your brain intact (as intact as an oxygen-deprived, transported brain can be) and preserves it for a long time, then that does sound like a reasonable option for people living within donating distance of it.

Comment by Bob Jacobs on [deleted post] 2019-12-25T22:28:40.271Z