Open thread, Mar. 2 - Mar. 8, 2015

post by MrMind · 2015-03-02T08:19:47.940Z · LW · GW · Legacy · 156 comments


If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

156 comments

Comments sorted by top scores.

comment by G0W51 · 2015-03-03T01:23:15.691Z · LW(p) · GW(p)

Perhaps it would be beneficial to use a unary numeral system when discussing topics prone to biases like scope insensitivity, probability neglect, and placing too much weight on outcomes that are likely to occur. A unary numeral system gives a more visual representation of the numbers, which might give readers better intuition for them and so leave them less biased. Here’s an example: “One study found that people are willing to pay $80 to save || × 1,000 (2,000) birds, but only $88 to save |||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| × 1,000 (200,000) birds.”

Edit: Made it a bit easier to read.
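
A minimal sketch of how such tally strings could be generated, assuming one "|" per 1,000 units as in the example above (the function name and the scale parameter are illustrative, not from the comment):

```python
def unary_bars(n, scale=1000):
    """Render n as unary tally marks, one '|' per `scale` units."""
    bars = "|" * round(n / scale)
    return f"{bars} x {scale:,} ({n:,})"

# The bird amounts from the study quoted above: 2,000 vs. 200,000 birds.
print(unary_bars(2_000))    # || x 1,000 (2,000)
print(unary_bars(200_000))  # a run of 200 bars, x 1,000 (200,000)
```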

Replies from: None, ilzolende, emr
comment by [deleted] · 2015-03-03T11:04:57.281Z · LW(p) · GW(p)

A unary number system is a really fancy name for an ASCII graph :)

Reminds me of the "irony meter" some of my friends use instead of smilies, as smilies are binary, while this can express that something is almost but not quite serious: [...........|.....]

comment by ilzolende · 2015-03-03T06:43:39.528Z · LW(p) · GW(p)

The popular method right now seems to be using areas of shapes or heights of bars on graphs when this sort of visual representation is necessary.

However, I like the way you showed it here, mostly because I have wanted to enter repeating sequences of characters like that into a comment on this site to see what it would look like. ;). I hope people represent numbers with long lines of repeating characters on this website more often. I vote for alternating '0' & 'O'.

Replies from: G0W51
comment by G0W51 · 2015-03-03T22:04:52.631Z · LW(p) · GW(p)

Though bar graphs are pretty, they often seem to take up too much space and take a bit too long to make in some cases. I suppose both bar graphs and unary numeral systems are useful, and which one to use depends on how much space you're willing to use up.

Edit: Also, why alternating 0s and Os? To make counting them easier?

Replies from: ilzolende
comment by ilzolende · 2015-03-04T02:00:09.081Z · LW(p) · GW(p)

I asked for them because (a) I want to highlight long lines of characters in the LW comment interface and watch the Mac anti-aliasing overlap with itself, which looks cool, and (b) I don't want to just post a series of comments that have no valuable content but are just playing with the reply nesting system and posting repeating lines of characters and whatnot, because I don't want to get downvoted into oblivion.

Alternating 0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O0O is visually appealing to me, and I want to see visually appealing things, so I asked to see more visually appealing things on the website. The request was made purely for selfish reasons.

Replies from: G0W51
comment by G0W51 · 2015-03-04T02:33:38.779Z · LW(p) · GW(p)

I can see that. Still, 0s and Os take up more space than | and take a bit longer to type due to needing to alternate them.

comment by emr · 2015-03-09T03:56:37.298Z · LW(p) · GW(p)

If you would like to be horrified, represent the number of deaths from WWII in unary in a text document and scroll through it (by copy pasting larger and larger chunks, or by some other method).

There are about 4000 "1" characters in a page in MS Word, so at 20 million battle deaths, you'll get about 5000 pages.
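
A quick back-of-the-envelope check of that page count, assuming the same rough figure of 4,000 characters per page:

```python
# One tally character per death, at roughly 4,000 characters per page.
battle_deaths = 20_000_000
chars_per_page = 4_000
print(battle_deaths / chars_per_page)  # 5000.0 pages
```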

Replies from: G0W51
comment by G0W51 · 2015-03-09T23:05:24.885Z · LW(p) · GW(p)

If you really want to be horrified, make a document with one "I" for every sentient being whose existence would be prevented by an existential catastrophe. Oh wait, that's too many to store in memory...

comment by jaime2000 · 2015-03-02T15:06:29.680Z · LW(p) · GW(p)

365tomorrows recently published a hard science-fiction story of mine called "Procrastination", which was inspired by the ideas of Robin Hanson. I believe LessWrong will find it enjoyable.

Replies from: shminux, Kaj_Sotala
comment by shminux · 2015-03-03T00:38:38.122Z · LW(p) · GW(p)

Nice work. The story is quite uplifting, actually. It would be nice to retain some memory of one's other instances, of course. But still beats having just one physical life.

comment by Kaj_Sotala · 2015-03-02T19:19:05.794Z · LW(p) · GW(p)

I thought that the ideas seemed awfully familiar, when the story popped up on 365!

comment by emr · 2015-03-02T20:43:45.892Z · LW(p) · GW(p)

Woody Allen on time discounting and path-dependent preferences:

In my next life I want to live my life backwards. You start out dead and get that out of the way. Then you wake up in an old people's home feeling better every day. You get kicked out for being too healthy, go collect your pension, and then when you start work, you get a gold watch and a party on your first day. You work for 40 years until you're young enough to enjoy your retirement. You party, drink alcohol, and are generally promiscuous, then you are ready for high school. You then go to primary school, you become a kid, you play. You have no responsibilities, you become a baby until you are born. And then you spend your last 9 months floating in luxurious spa-like conditions with central heating and room service on tap, larger quarters every day and then Voila! You finish off as an orgasm!

The rationality gloss is that a naive model of discounting future events implies a preference for ordering experiences by decreasing utility. But often this ordering is quite unappealing!

A related example (attributed to Gregory Bateson):

If the hangover preceded the binge, drunkenness would be considered a virtue and not a vice.

Replies from: Jiro, Toggle, Salemicus
comment by Jiro · 2015-03-02T22:09:01.066Z · LW(p) · GW(p)

Tsk, tsk. You don't collect your pension or gold watches, or drink alcohol, etc. You pay someone else your pension, give away a gold watch, and un-drink the alcohol.

Replies from: Houshalter
comment by Houshalter · 2015-03-04T00:55:19.648Z · LW(p) · GW(p)

He didn't say that time flowed backwards, just that the order of major life events was reversed. And you'd start out collecting your pension once out of the nursing home, and give it up when you start working.

comment by Toggle · 2015-03-02T22:13:41.160Z · LW(p) · GW(p)

A similar one by Vonnegut:

It was a movie about American bombers in the Second World War and the gallant men who flew them. Seen backwards by Billy, the story went like this: American planes, full of holes and wounded men and corpses took off backwards from an airfield in England. Over France a few German fighter planes flew at them backwards, sucked bullets and shell fragments from some of the planes and crewmen. They did the same for wrecked American bombers on the ground, and those planes flew up backwards to join the formation. The formation flew backwards over a German city that was in flames. The bombers opened their bomb bay doors, exerted a miraculous magnetism which shrunk the fires, gathered them into cylindrical steel containers, and lifted the containers into the bellies of the planes. The containers were stored neatly in racks. The Germans below had miraculous devices of their own, which were long steel tubes. They used them to suck more fragments from the crewmen and planes. But there were still a few wounded Americans, though, and some of the bombers were in bad repair. Over France, though, German fighters came up again, made everything and everybody good as new. When the bombers got back to their base, the steel cylinders were taken from the racks and shipped back to the United States of America, where factories were operating night and day, dismantling the cylinders, separating the dangerous contents into minerals. Touchingly, it was mainly women who did this work. The minerals were then shipped to specialists in remote areas. It was their business to put them into the ground, to hide them cleverly so they would never hurt anybody ever again. The American fliers turned in their uniforms, became high school kids. And Hitler turned into a baby, Billy Pilgrim supposed. That wasn't in the movie. Billy was extrapolating. Everybody turned into a baby, and all humanity, without exception, conspired biologically to produce two perfect people named Adam and Eve, he supposed.

comment by Salemicus · 2015-03-03T10:30:36.213Z · LW(p) · GW(p)

As Jiro and Toggle point out, this isn't time reversal, this is Benjamin Button disease. I think the original short story, much more than the film, portrays this correctly as a tragi-comedy. For example, he's a Brigadier-General, but he gets laughed out of the army because he looks like a 16-year-old.

I wonder about people who think that life would be better lived backwards, or that effect should precede cause. Isn't this the universe telling you "Change your ways" in neon capital letters?

Replies from: RowanE
comment by RowanE · 2015-03-03T14:59:30.392Z · LW(p) · GW(p)

Well, the central thing would seem to be changing aging, which isn't induced by any human actions (although you might say people who live healthier get to age more slowly) - if there's any message from the universe in aging, that message is simply "fuck you for being here".

comment by khafra · 2015-03-03T16:18:41.728Z · LW(p) · GW(p)

...supporters say the opposition leader was assassinated to silence him...

I see headlines like this fairly regularly.

Does anybody know of a list of notable opposition leaders, created when all members of the list were alive? Seems like it could be educational to compare the death rate of the list (a) across countries, and (b) against their respective non-notable demographics.

comment by btrettel · 2015-03-04T02:36:09.775Z · LW(p) · GW(p)

I want to make some new friends outside of my current social circle. I'm looking to meet smart and open minded people. What are some activities I could do or groups I could participate in where I'm likely to meet such people?

I'm particularly interested in personal experiences meeting people, rather than speculation, e.g. "I imagine ballroom dancing would be great" is not as good as "I met my partner and best friend ballroom dancing."

Also of interest would be groups where this is bad, e.g., if ballroom dancing was no good then "I never made any friends ballroom dancing, despite what I initially thought" would be a useful comment.

(I have a small list of candidate groups already, but I want to see what other people suggest to verify my thinking.)

Replies from: James_Miller, Vaniver, philh, MathiasZaman, polymathwannabe, ChristianKl, Manfred
comment by James_Miller · 2015-03-04T04:18:38.634Z · LW(p) · GW(p)

See http://www.meetup.com/

Replies from: btrettel
comment by btrettel · 2015-03-04T21:23:40.898Z · LW(p) · GW(p)

Perhaps I was unclear. It's not that I can't find groups, it's that I want to know which groups have environments more conducive to meeting people of interest to me.

For example, I went to a meditation event once and enjoyed it for its stated purpose, but basically everyone left before I could talk to anyone aside from the instructor. Clearly, this meditation event is not what I am looking for.

comment by Vaniver · 2015-03-04T15:07:05.830Z · LW(p) · GW(p)

Speaking of dancing, there was an extra follower at the introductory class at the Fed when Harsh and I went--so if you had come along, it would have been even!

Also on the project list now is to make a DDR-style game where you are responding just to the aural stimulus, rather than a visual one, with actual songs and actual dances, to have a single-player way of picking up the right thing to do with your feet and when to do it. (Does this already exist?)

comment by philh · 2015-03-04T15:01:45.613Z · LW(p) · GW(p)

I really enjoy dancing, but I've been doing it for years and haven't really met anyone through it. YMMV, and I've heard many people's M does V.

I met most of my friends through reddit meetups.

comment by MathiasZaman · 2015-03-04T11:51:50.979Z · LW(p) · GW(p)

If you aren't playing already, Magic: The Gathering can be a great hobby for meeting new people. The community trends towards smart (and open-minded, but less clearly so). Most stores have events each Friday. There is some barrier to entry, but I found the game easy enough to grasp.

comment by polymathwannabe · 2015-03-04T03:44:47.212Z · LW(p) · GW(p)

Join Facebook groups that follow your hobbies or favorite books/films/anime/anything. Wait for scheduled meetup events. Rinse and repeat.

comment by ChristianKl · 2015-03-07T12:12:46.648Z · LW(p) · GW(p)

Without knowing your interests and the kind of people you want to meet, it's hard to give targeted advice.

It also depends a lot on local customs. One meditation group may have a culture where the people who attend the event bond together; others don't.

As far as dancing goes, the default interaction is physical. If you want to make friends you also have to talk. If talking is hard for you, then dancing won't produce strong friendships.

If your goal is to build a social circle it's also vital to attend events together with other people. Constantly going alone to events doesn't fit that purpose.

Replies from: btrettel
comment by btrettel · 2015-03-08T21:26:31.067Z · LW(p) · GW(p)

Good points. I was intentionally keeping things general (and thus vague) for a few reasons. To be more specific, I'm looking for people who are similar to myself. The main restriction here is that I'm looking to meet reasonably smart people, which I think is a prerequisite to knowing me better. (I could be much more specific if you'd like to help me out more, but I'd prefer to take that to a private message.)

I'm curious about your thoughts on attending events with other people. Why would this help?

Replies from: ChristianKl
comment by ChristianKl · 2015-03-08T22:40:49.324Z · LW(p) · GW(p)

I'm curious about your thoughts on attending events with other people. Why would this help?

I once read somewhere that being a friend means seeing a person in at least three different contexts. Going to a meditation event together with people from my local LessWrong meetup increases the feeling of friendship between me and the other LWers.

If I dance Salsa with a girl whom I first met at a birthday party unrelated to Salsa, it feels like there's a stronger friendship bond than if I just see her from time to time at Salsa events.

It's important to interact with people in different contexts if you want to build a friendship with them.

(I could be much more specific if you'd like to help me out more, but I'd prefer to take that to a private message.)

I can't promise that I have useful advice before knowing the specifics, but I'm happy to take my shot.

Replies from: btrettel
comment by btrettel · 2015-03-10T00:53:45.027Z · LW(p) · GW(p)

Thanks, this all makes sense. I'll have to take you up on the offer later, as my priorities are shifting now.

comment by Manfred · 2015-03-07T05:19:22.332Z · LW(p) · GW(p)

I've met some friends swing dancing, so consider it somewhat recommended.

I don't know where you are, but you could try starting a local LW meetup group; that sometimes works for me.

I don't know what your housing situation is, but if it's currently not contributing to your social life, consider moving into a group house either of people you'd like to get closer with, or with some selection process that makes them likely to be compatible with you.

Replies from: btrettel
comment by btrettel · 2015-03-08T21:09:17.488Z · LW(p) · GW(p)

Thanks for the comment.

We already have a local LW meetup, and many of my local friends I've met through there. It's a small but highly appreciated group.

The group house idea is excellent. I have read of a number of houses in the area targeting people with certain lifestyles (vegan, recovering alcoholics, etc.) but I never looked that closely into them. Nor have I considered looking for a group house that might not have explicit goals but be composed of people I'd find interesting. I'll take a closer look.

comment by tog · 2015-03-04T16:37:44.942Z · LW(p) · GW(p)

What gets more viewership, an unpromoted post in main or a discussion post? Also, are there any LessWrong traffic stats available?

Replies from: TsviBT
comment by TsviBT · 2015-03-06T16:16:11.356Z · LW(p) · GW(p)

http://www.alexa.com/siteinfo/lesswrong.com

(The recent uptick is due to hpmor, I suppose?)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2015-03-06T20:22:32.695Z · LW(p) · GW(p)

Probably yes, see: http://www.alexa.com/siteinfo/hpmor.com

Replies from: TsviBT
comment by TsviBT · 2015-03-06T20:49:33.806Z · LW(p) · GW(p)

Lol yeah ok. I was unsure because alexa says 9% of search traffic to LW is from "demetrius soupolos" and "traute soupolos" so maybe there was some big news story I didn't know about.

comment by D_Malik · 2015-03-03T00:14:31.864Z · LW(p) · GW(p)

Assuming no faster-than-light travel and no exotic matter, a civilization which survives the Great Filter will always be contained in its future light cone, which is a sphere expanding outward with constant speed c. So the total volume available to the civilization at time t will be V(t) ~ t^3. As it gets larger, the total resources available to it will scale in the same way, R(t) ~ V(t) ~ t^3.

Suppose the civilization has intrinsic growth rate r, so that the civilization's population grows as P(t) ~ r^t.

Since resources grow polynomially and population grows exponentially in t, as t goes to infinity the resources per person R(t) / P(t) ~ t^3 / r^t goes to zero. And since there is presumably a lower limit on the resources required to support one member of the civilization, r must approach 1 as t goes to infinity. This could mean, for instance, that each person has only one child before dying, or that everybody is immortal and never has children.
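
A quick numerical illustration of this scaling, assuming an arbitrary fixed growth rate r = 1.01 per unit time (the specific numbers are placeholders; only the trend matters):

```python
# Resources scale as t^3, population as r^t; their ratio goes to zero.
r = 1.01  # any fixed growth rate > 1 gives the same qualitative result

for t in [10, 100, 1_000, 10_000]:
    resources_per_person = t**3 / r**t
    print(f"t = {t:>6}: R(t)/P(t) ~ {resources_per_person:.3e}")
```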

Of course these conclusions are pretty well-known round here, but I thought the scaling argument was neat and I haven't seen it before.

This seems likely to be a problem even if we get FAI, since certainly some people's CEVs include having children, and even with a starting population of a single person with such values, we'll run up against resource limits, if the premises hold. (I suppose those future people lucky enough to live under FAI will just have to dry their tears with million-utilon bills.)

Another thought: Perhaps we could use relativity to get around this. If we expand outward at the speed of light, subjective time for people close to the edge will be greatly reduced, so each person could grow up, travel to the frontier at near lightspeed, and have their 2 children or whatever. To the rest of us I think this'll look like increasingly many people crammed at the edge of the sphere, through length contraction. (I haven't studied relativity, so this might be wrong.)

Replies from: jaime2000, Slider
comment by jaime2000 · 2015-03-03T00:29:53.472Z · LW(p) · GW(p)

I like Eliezer's solution better. Rather than wait until exponential population growth eats all the resources, we just impose population control at the start and let every married couple have a max of two children. That way, population grows at most linearly (assuming immortality).

Replies from: Toggle
comment by Toggle · 2015-03-06T07:42:10.798Z · LW(p) · GW(p)

Broadly speaking, I'm suspicious of social solutions to problems that will persist for geological periods of time. If we're playing the civilization game for the long haul, then the threat of overpopulation could simply wait out any particular legal regime or government.

That argument goes hinky in the event of FAI singleton, of course.

comment by Slider · 2015-03-04T14:57:42.724Z · LW(p) · GW(p)

Wouldn't this conclusion require that individuals and their resources are not resources to other individuals? A society whose members don't share their lives in some way is not a community. If you share "the internet" among the number of people who use it, the resultant amount of information might be quite low, but that doesn't mean that people don't have access to information. In the same way, if one chunk of energy is used in one way, it might fulfill the wishes of more than just one person. What this means is you either need to cooperate or compete. The only one that can have the universe to himself without contention is the lone survivor. But if you are in a populated universe, politics becomes necessary and its sphere widens.

comment by [deleted] · 2015-03-05T18:38:15.593Z · LW(p) · GW(p)

A smuggler's view of learning :)

Knowledge as acquired in school-time (attending + holidays; just about until you graduate almost all your time is governed by school) is like an irregular shoreline with islets of trivia learned through curiosity and rotting marshland lost because reasons and never regained. (We congratulate ourselves for not risking malaria, seeing as we are experienced pirates and all.)

And we forget the layout and move inland, because that's where stuff happens. Jobs, relationships, kids, even dead ends are more grownup than the crumbling edge of - nothing definable, 'cause you need a thing to have more than one side to define it, and there's just water, right?

And now I look back and am vaguely surprised that I have never been interested in the science of space exploration or large-scale food production or anything more advanced than early XX century (roughly). We had to cram it into our last year, physics, biology, economic geography, everything. There was more to the problem than just no time to bother understanding, there was no time to hear about it!

And knowledge propagates. Forget natural history museums as educational tools. Forget rewriting textbooks every year, if all the topics stay the same. We need a separate course of 'XX century advances' to teach people what's out there in the mist.

comment by [deleted] · 2015-03-03T11:02:42.405Z · LW(p) · GW(p)

I have not yet read the sequences in full, so let me ask: is there maybe an answer to what is bothering me about ethics? Namely, why is basically all ethics in the last 300 years or so universalistic? I.e. prescribing to treat everybody without exception according to the same principles? I don't understand it because I think altruism is based on reciprocity. If my cousin is starving and a complete stranger halfway across the world is starving even more, and I have money for food, most ethics would say I should help the stranger. But from my angle, I am obviously getting less reciprocity, less personal utility out of that than out of helping my cousin. I am not even considering the chance of a direct payback, simply the utility of having people I like and associate with not suffer is a utility to me, obviously. Basically you see altruism as an investment: you get a lot back from investing into people close to you, and then with distance the return on investment to you gets less and less, although it is never completely zero, because making humankind as such better off is always better for you. This explains things like the kind of economic nationalism where, if free trade makes Chinese workers better off by 100 units and American or European workers worse off by 50, a lot of people still don't want it. This is actually rational: 100 units to people far away make you better off by 1 unit, while 50 units lost to basically your neighbors make you worse off by 5.

And this is why I don't understand why most ethics are universalistic?

Of course one could argue this is not ethics when you talk about what is the best investment for yourself. After all with that sort of logic you would get the most return if you never give anything to anyone else, so why even help your cousin?

Anyway, was this sort of reciprocal and thus non-universalistic ethics ever discussed here?

Replies from: Salemicus, gedymin, Richard_Kennaway, TheAncientGeek, polymathwannabe
comment by Salemicus · 2015-03-03T13:29:48.624Z · LW(p) · GW(p)

why is basically all ethics in the last 300 years or so universalistic?

Because so much of it comes out of a Christian tradition with a deep presumption of Universalism built into it. But you are not the first person to ask this tradition "What is the value of your values?".

Your "reciprocal ethics" might be framed as long-term self-interest, or as a form of virtue ethics. It immediately makes me think of Adam Smith's The Theory of Moral Sentiments.

There's a nice discussion on related themes here, or try googling the site for "virtue ethics".

Replies from: None
comment by [deleted] · 2015-03-03T14:52:51.183Z · LW(p) · GW(p)

Hm, I would call it "graded ingroup loyalty", to quote an Arab saying: "me and my brother against my cousin, me and my cousin against the world". Instead of a binary ingroup and outgroup, other people are gradually more or less your ingroup: spouse more than cousin, cousin more than buddy, buddy more than compatriot, compatriot more than someone really far away.

Replies from: Salemicus
comment by Salemicus · 2015-03-03T15:36:01.422Z · LW(p) · GW(p)

But note that reciprocity is almost the opposite of loyalty. That kind of tribalism is dysfunctional in the modern world, because:

  • You can't necessarily rely on reciprocity in those tribal relationships any more
  • You can achieve reciprocity in non-tribal relationships

Rather than a static loyalty, it is more interesting to ask how people move into and out of your ingroup. What elicits our feelings of sympathy for some more than others? What kind of institutions encourage us to sympathise with other people and stand in their shoes? What triggers our moral imagination?

I'd tell a story of co-operative trade forcing us to stand in the shoes of other people, to figure out what they want as customers, thus not only allowing co-operation between people with divergent moral viewpoints, but itself giving rise to an ethic of conscientiousness, trustworthiness, and self-discipline. The "bourgeois virtues" out-competing the "warrior ethic."

comment by gedymin · 2015-03-04T15:11:04.440Z · LW(p) · GW(p)

I think universalism is an obvious Schelling point. Not just moral philosophers find it appealing, ordinary people do it too (at least when thinking about it in an abstract sense). Consider Rawls' "veil of ignorance".

comment by Richard_Kennaway · 2015-03-03T13:01:09.094Z · LW(p) · GW(p)

And this is why I don't understand why most ethics are universalistic?

I think one reason is that as soon as one tries to build ethics from scratch, one is unable to find any justification that sounds like "ethics" for favouring those close to oneself over those more distant. Lacking such a magic pattern of words, they conclude that universalism must be axiomatically true.

In Peter Singer's view, to fail to save the life of a remote child is exactly as culpable as to starve your own children. His argument consists of presenting the image of a remote child and a near one and challenging the reader to justify treating them unequally. It's not a subject I particularly keep up on; has anyone made a substantial argument against Singerian ethics?

Anyway, was this sort of reciprocal and thus non-universalistic ethics ever discussed here?

It is often observed here that favouring those close to oneself over those more distant is universally practised. It has not been much argued for though. Here are a couple of arguments.

  1. It is universally practiced and universally approved of, to favour family and friends. It is, for the most part, also approved of to help more distant people in need; but there are very few who demand that people should place them on an equal footing. Therefore, if there is such a thing as Human!ethics or CEV, it must include that.

  2. As we have learned from economics, society in general works better when people look after their own business first and limit their inclination to meddle in other people's. This applies in the moral area as well as the economic.

Replies from: None, None
comment by [deleted] · 2015-03-03T15:13:05.355Z · LW(p) · GW(p)

one tries to build ethics from scratch,

Wait, I didn't even notice it. That is interesting! So if something to qualify as a philosophy or theory you need to try to build from scratch? I know people who would consider it hubris. Who would say that it is more like, you can amend and customize and improve on things that were handed to you by tradition, but you can never succeed at building from scratch.

Replies from: seer, Richard_Kennaway
comment by seer · 2015-03-04T04:50:09.831Z · LW(p) · GW(p)

So if something to qualify as a philosophy or theory you need to try to build from scratch?

Not necessarily, but that is certainly the currently fashionable approach. Also if you want to convince someone from a different culture, with a different set of assumptions, etc., this is the easiest way to go about doing it.

Replies from: None
comment by [deleted] · 2015-03-04T08:32:14.636Z · LW(p) · GW(p)

I am not very optimistic about that happening. I think I should write an article about Michael Oakeshott. Basically Oakie was arguing that the cup you are pouring into is never empty. Whatever you tell people, they will frame it in their previous experiences. So the from-scratch philosophy, the very words, do not mean the same thing to people with different backgrounds. E.g. Hegel's "Geist" does not exactly mean what "spirit" means in English.

comment by Richard_Kennaway · 2015-03-03T15:38:46.500Z · LW(p) · GW(p)

So if something to qualify as a philosophy or theory you need to try to build from scratch?

That's what philosophers do. Hence such things as Rawls' "veil of ignorance", whereby he founds ethics on the question "how would you wish society to be organised, if you did not know which role you would have in it?"

Who would say that it is more like, you can amend and customize and improve on things that were handed to you by tradition, but you can never succeed at building from scratch.

And there are also intellectuals (they tend to be theologians, historians, literary figures, and the like, rather than professional philosophers) who say exactly that. That has the problem of which tradition to follow, especially when the history of all ages is available to us. Shall we reintroduce slavery? Support FGM? Execute atheists? Or shall the moral injunction be "my own tradition, right or wrong", "Jedem das Seine" ("to each his own")?

Replies from: Salemicus, None
comment by Salemicus · 2015-03-03T15:59:14.628Z · LW(p) · GW(p)

That's what philosophers do

No, that's what some philosophers do. You can't just expel the likes of Michael Oakeshott or Nietzsche from philosophy. Even Rawls claimed at times to be making a political, rather than ethical, argument. The notion that ethics have to be "built from scratch" would be highly controversial in most philosophy departments I'm aware of.

comment by [deleted] · 2015-03-03T15:47:22.098Z · LW(p) · GW(p)

Of all these approaches, only the last is really worthy of consideration IMHO: different houses, different customs.

One thing is clear, namely that things that are largely extinct for any given "we" (say, culture, country, and so on) do not constitute a tradition. The kind of reactionary bullshit like reinventing things from centuries ago and somehow calling it traditionalism merely because they are old should not really be taken seriously. A tradition is something that is alive right now, so for the Western civ, it is largely things like liberal democracy, atheism and light religiosity, anti-racism and less-lethal racism.

The idea here is that the only thing truly realistic is to change what you already have; inherited things have only a certain elasticity, so you can have modified forms of liberal democracy, more or less militant atheism, a bit more serious or even lighter religiosity, a more or less stringent anti-racism and a more or less less-lethal racism. But you cannot really wander far from that sort of set.

This - the reality of only being able to modify things that already exist, and not to create anew, and modify them only to a certain extent - is what I would call a sensible traditionalism, not some kind of reactionary dream about bringing back kings.

comment by [deleted] · 2015-03-03T13:52:15.145Z · LW(p) · GW(p)

is unable to find any justification that sounds like "ethics"

I think that is the issue. "Sounds like ethics" when you go back to Kant, comes from Christian universalism. Aristotle etc. were less universal.

has anyone made a substantial argument against Singerian ethics?

Is Singer even serious? He made the argument that if I find eating humans wrong, I should find eating animals also wrong because they are not very different. I mean, how isn't it OBVIOUS that would not be an argument against eating animals but an argument for eating humans? Because unethical behavior is the default and ethical is the special case. Take away speciality and it is back to the jungle. To me it is so obvious I hardly even think it needs much discussion... ethics is that thing you do in the special rare cases when you don't do what you want to do, but what you feel you ought to. Non-special ethics is not ethics, unless you are a saint.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2015-03-03T15:55:23.635Z · LW(p) · GW(p)

Is Singer even serious?

I see no reason to doubt that he means exactly what he says.

I mean, how isn't it OBVIOUS that would not be an argument against eating animals but an argument for eating humans?

Modus ponens, or modus tollens? White and gold, or blue and black?

Because unethical behavior is the default and ethical is the special case. Take away speciality and it is back to the jungle.

On the whole, we observe that people naturally care for their children, including those who still live in jungles. There is an obvious evolutionary argument that this is not because this has been drummed into them by ethical preaching without which their natural inclination would be to eat them.

To me it is so obvious I hardly even think it needs much discussion...

To be a little Chestertonian, the obvious needs discussion precisely because it is obvious. Also a theme of Socrates. Some things are justifiably obvious: one can clearly see the reasons for a thing being true. For others, "obvious" just means "I'm not even aware I believe this." As Eliezer put it:

The way a belief feels from inside, is that you seem to be looking straight at reality. When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about belief.

Replies from: Jiro
comment by Jiro · 2015-03-03T17:11:59.567Z · LW(p) · GW(p)

Most people who are against eating human children would also be against eating human children grown in such a way as to not have brains. Yet clearly, few of the ethical arguments apply to eating human children without brains. So the default isn't "ethical behavior", it's "some arbitrary set of rules that may happen to include ethical behavior at times".

comment by TheAncientGeek · 2015-03-03T13:49:29.874Z · LW(p) · GW(p)
  1. The Nazis and Ayn Rand's egoism were in the last 300 years, so no.

  2. That said, it is now harder to ignore people in far off lands, and easier to help them.

  3. Utilitarianism is popular on LW because, AFAICT, it's mathy.

  4. You haven't explained why your reciprocal ethics should count as ethics at all.

Replies from: None
comment by [deleted] · 2015-03-03T14:58:54.332Z · LW(p) · GW(p)

Of 4: Is there a definition of what counts as ethics? I suppose being universal is part of the definition, and then reciprocal ethics is defined out. Fine. But the problem is, if Alice or Bob comes and says "Since I am only interested in this sort of thing, by definition I am unethical", this is also not accurate, because it does not really predict what they are. They are not necessarily Randian egoists; they may be super good people who are very reliable friends and volunteer at local soup kitchens and invest in activism to make their city better and so on, they just change the subject if someone talks about the starvation in Haiti. That is not what "unethical" predicts.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-03-03T15:24:06.702Z · LW(p) · GW(p)

I'm talking about reciprocal ethics.

Most people would say that volunteering at a soup kitchen is good, but many would change their mind if they heard that some advantage was being expected in return. And if it isn't, in what way is it reciprocal?

Replies from: None
comment by [deleted] · 2015-03-03T15:30:38.695Z · LW(p) · GW(p)

Either I really need to write more clearly or you need to read with more attention. Above: "I am not even considering the chance of a direct payback, simply the utility of having people I like and associate with not suffer is a utility to me, obviously." Making your city better by making sure all of its members are fed is something that makes you better off. It is not a payback or special advantage, but still a return. It makes the place on the whole more functional and safer, and gives it a better vibe. Of course it is not an investment with positive returns; this is why it is still ethics: there is always some sacrifice made. It is always a negative return, just not a zero return like "true" altruism. Rather it is like this:

If you have a million utils and invest it into Earth, you get 1 back by making Earth better for you. If you invest it into your country, you get 10 back by making your country better for you. Invest it into your city, you get 1000 back, by making your city better for you. Invest it into your cousin, 10K by making your relatives better for you, your bro, 100K by making your family better for you and so on.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-03-03T19:33:51.268Z · LW(p) · GW(p)

But why would that be objectively the right way to behave? It seems as if you are saying people distant from you are objectively worth less. I think you would need to sell this theory as a compromise between what is right and what is motivating.

Replies from: None
comment by [deleted] · 2015-03-03T19:39:14.897Z · LW(p) · GW(p)

Sorry, I cannot parse that. My behavior with others does not reflect their objective worth (what is that?) but my goals. Part of my goals may be being virtuous or good, which is called ethics. Or it can be raising the utility of certain people or even all people, but that is also a goal. My behavior with diamonds does not reflect the objective worth of diamonds (do they have any?) but my goals with respect to diamonds. Motivating: yes, that is close to the idea of goals. That is a good approach.

How about this: if you want to work from the angle of objective worth, well, you too are not objectively worth less than others. So basically you want your altruism to be a kind of reciprocal contract: "I have and you do not, so I give to you, but if it is ever so in the future that you have and I do not, you should give to me too, because I am not worth less than you."

If that sounds okay, then the next stage could be working from the idea that this is not a clearly formulated, signed contract, but more of a tacit agreement of mutual cooperation if and when the need arises, and then you see that you have more of such a tacit agreement with people closer to you.

comment by polymathwannabe · 2015-03-03T13:14:16.606Z · LW(p) · GW(p)

I think altruism is based on reciprocity

Maybe that's what it feels like for you. My altruistic side feeds on my Buddhist ethics: I am just like any other human, so their suffering is not incomprehensible to me, because I have suffered too. I can identify with their aversion to suffering because that's exactly the same aversion to suffering that I feel. It has nothing to do with exchange or expected gain.

Replies from: None
comment by [deleted] · 2015-03-03T13:25:15.835Z · LW(p) · GW(p)

That is interesting that you mention that, because I spent years going to Buddhist meditation centers (of the Lama Ole type) and at some level still identify with it. However, I never understood it as a sense of ethical duties or maxims I must exert my will to follow, but rather as a set of practices that will put me in a state of bliss and natural compassion where I won't need to exert will in this regard; goodness will just naturally flow from me. In this sense I am not even sure Buddhist ethics even exists, if we define ethics as something you must force yourself to follow even if you really don't feel like doing so. And I have always seen compassion in the B. sense as a form of gain to yourself - reducing the ego by focusing on other people's problems, so that our own problems look smaller because we see our own self as something less important. (I don't practice it much anymore, because I realized that if a "religion" is based on reincarnation there is no pressing need to work on it right now - it is not like I can ever be too late for that bus - so you should only work on it if you really feel like doing so. And frankly, these years I feel like being way more "evil" than Ole :) )

comment by emr · 2015-03-06T04:42:04.546Z · LW(p) · GW(p)

Note: This post raises a concern about the treatment of depression.

If we treat depression with something like medication, should we be worried about people getting stuck in bad local optima, because they no longer feel bad enough that the pain of changing environments seems small by comparison? For example, consider someone in a bad relationship, or an unsuitable job, or with a flawed philosophic outlook, or whatever. The risk is that you alleviate some of the pain signal stemming from the lover/job/ideology, and so the patient never feels enough pressure to fix the lover/job/ideology.

Also, I'm pretty confident that the medical profession has thought about this in detail, but I've been spinning my wheels trying to find the right search terms. Does anyone know where to look, or have other recommendations?

Replies from: gjm, ChristianKl, polymathwannabe
comment by gjm · 2015-03-06T10:06:14.516Z · LW(p) · GW(p)

I am neither a medical professional, nor have I ever been treated for depression, but my impression is that being depressed is itself a more serious risk factor for getting stuck in bad local optima like that; as well as making sufferers feel bad, it also tends to reduce how much their feelings vary. I haven't heard that giving depressed people antidepressants reduces the range of their affective states.

Replies from: kalium
comment by kalium · 2015-03-08T07:52:51.047Z · LW(p) · GW(p)

It depends on the type of local optimum. I am reasonably sure that becoming too depressed to do enough work to stay in was the only way I could have gotten out of graduate school, given my moral system at the time. (I hated being there but believed I had an obligation to try to contribute to human knowledge.)

Also flat affect isn't at all a universal effect of antidepressant usage, but it does happen for some people.

Replies from: gjm
comment by gjm · 2015-03-08T14:59:10.736Z · LW(p) · GW(p)

Isn't flat affect also a rather common effect of depression?

Replies from: kalium
comment by kalium · 2015-03-08T23:30:59.308Z · LW(p) · GW(p)

It happens but again it's not at all universal. Scott Alexander seems to think emotional blunting is a legitimate effect of SSRIs, not just a correlation–causation confusion. He also notes that

There is a subgroup of depressed patients whose depression takes the form of not being able to feel anything at all, and I worry this effect would exacerbate their problem, but I have never heard this from anyone and SSRIs do not seem less effective in that subgroup, so these might be two different things that only sound alike.

comment by ChristianKl · 2015-03-07T11:43:28.237Z · LW(p) · GW(p)

You assume that someone who's depressed is more motivated to change than a person who isn't depressed. Depression usually comes with reduced motivation to do things.

A lot of depression medication even comes with warnings that it might increase suicide rates because the person feels more drive to take action.

comment by polymathwannabe · 2015-03-06T13:45:47.360Z · LW(p) · GW(p)

Yvain has written this and many other comprehensive posts on that topic (in the same blog).

comment by [deleted] · 2015-03-04T11:31:55.167Z · LW(p) · GW(p)

It seems people make friends two ways:

1) chatting with people and finding each other interesting

2) going through difficult shit together and thus bonding, building camaraderie (see: battlefield or sports team friendships)

If your social life lags and 1) is not working, try 2)

My two best friends come from a) surviving together a "deathmarch" project that was downright heroic (worst week was over 100 hours logged), and b) going to a university preparation course, both getting picked on by the teacher, who did not like us, and then both failing the entry exam in a spectacular way.

Questions:

a) correct?

b) how do you intentionally put yourself into difficult shit with other people so that you can bond and build camaraderie?

Replies from: gedymin, CAE_Jones, ChristianKl
comment by gedymin · 2015-03-04T15:04:36.568Z · LW(p) · GW(p)

Mountaineering or similar extreme activities is one option.

comment by CAE_Jones · 2015-03-04T13:32:10.855Z · LW(p) · GW(p)

I am now imagining someone engineering a great disaster or battle solely so they can make friends, who will, naturally, turn on them once they discover what happened.

I'm given to believe that going through lots of fun things together can be friendship-building, if not quite the same as going through lots of difficult things together.

Replies from: RowanE, None
comment by RowanE · 2015-03-04T14:27:39.726Z · LW(p) · GW(p)

Things can be both fun and difficult, and that category seems to be the obvious kind to look for when you want to intentionally put yourself through it. The problem then is that with most such things, people attempt difficult-but-fun projects or adventures with people they're already friends with to at least some degree, so you'll have to look for such an opportunity or create it yourself.

comment by [deleted] · 2015-03-04T13:39:10.315Z · LW(p) · GW(p)

Well, it is not that bad, thankfully. Just imagine a friendly soccer match between two villages' teams. Putting in your damnedest to win it is already a significantly more difficult thing than everyday life and creates bonding between team members.

Since life started to get too easy for some people - and for some people, that was really long ago - they started to generate artificial difficulties to make things more exciting: sports, games like poker, gambling, and so on.

Then what am I even asking? I am mainly just confused by choice and return on investment. Suppose I have not much interest nor time to invest in learning hobbies, yet would be willing to pay this tax for bonding, and would be looking for a team activity that feels difficult and uncertain enough to generate bonding. The kind of thing people later brag about. What would be the most effective one, I wonder.

comment by ChristianKl · 2015-03-07T11:51:45.941Z · LW(p) · GW(p)

a) correct?

Those two factors do matter, but they don't go to the meat of the issue.

Given that you speak German I would recommend you to read Wahre Männerfreundschaft (disclosure: the author is a personal friend).

b) how do you intentionally put yourself into difficult shit with other people so that you can bond and build camaraderie?

Various initiation rituals of fraternities use that mechanism.

Replies from: None
comment by [deleted] · 2015-03-09T08:13:39.998Z · LW(p) · GW(p)

Interesting! Practically an artofmanliness.com in German? I didn't know this existed. I actually like it - I thought our European culture was too "civilized" for this. Also useful as language practice for me - I am a textbook "kitchen speaker", perfectly fluent but with crappy grammar. Thanks a lot for this idea. I was asking around on Reddit about interesting German-language blogs years ago, and generally I got boring recommendations, so if you have a few more, please shoot. I think the German-language blogosphere and journalism suffer from a generic boredom problem, esp. in Austria; I have no idea who reads diepresse.com or derstandard.at without falling asleep. I think the English-language journosphere is better at presenting similar topics in more engaging ways, e.g. The Atlantic.

Various initiation rituals of fraternities use that mechanism.

There is a time and place for those, such as universities, either the American "Greek letter culture" or the old German "putting scars on each other's faces with foils" kind. I don't think similar organizations compatible with family fathers approaching 40 exist. However, I hope that once I get good enough at boxing to be allowed to spar full force, I will make some marvelous friendships through giving each other bruises, same logic as the face-scar fencing stuff.

comment by [deleted] · 2015-03-03T00:40:17.511Z · LW(p) · GW(p)

If there is a way of copying and pasting or importing the text of a Google Doc into an article while retaining LessWrong's default formatting, I would be very happy to know it....

Replies from: richard_reitz
comment by richard_reitz · 2015-03-03T04:32:48.878Z · LW(p) · GW(p)

Turns out you're not the only one who wants to know this. Seems your best bet is to use C-S-v (Ctrl+Shift+V) to paste raw text and then format it in the article editor.

Replies from: None
comment by [deleted] · 2015-03-04T01:42:33.123Z · LW(p) · GW(p)

Worked, thank you....

comment by [deleted] · 2015-03-06T10:33:50.297Z · LW(p) · GW(p)

Where do I start reading about this AI superintelligence stuff from the very basics? I would especially be interested in this: 1) why do we consider our current paradigms of software and hardware close enough to human intelligence to base a superintelligence on them, and 2) why don't we think that by the time we get there the paradigms will be different? I.e., an AI rewriting its own source code: why do we think AI is software? Why do we think a software-hardware separation will make sense? Why do we think software will have a source code as we know it? Why would alphabetical letters invented thousands of years ago to express human sounds be ideal tools to define an intelligence? Even if yes, why would it be organized, as today, into words, functions and suchlike?

Replies from: g_pepper
comment by g_pepper · 2015-03-06T15:39:01.198Z · LW(p) · GW(p)

One obvious source if you haven’t already read it is Nick Bostrom’s Superintelligence. Bostrom addresses many of the issues that you list, e.g. an AI rewriting its own software, why an AI is likely to be software (and Bostrom discusses one or two non-software scenarios as well), etc. This book is quite informative and well worth reading, IMO.

Some of your questions are more fundamental than what is covered in Superintelligence. Specifically, to understand why “alphabetical letters invented thousands of years ago to express human sounds” are adequate for any computing task, including AI, you should explore the field of theoretical computer science, specifically automata and language theory. A classic book in that field is Hopcroft and Ullman’s Introduction to Automata Theory, Languages and Computation (caution: don’t be fooled by the “cute” cover illustration; this book is tough to get through and assumes that the reader has a strong mathematics background). Also, you should consider reading books on the philosophy of mind – but I have not read enough in this area to make specific recommendations.

To explore the question of “why do we think software will have a source code as we know it?” you will need to understand the role of software, machine language, the relationship between source code and machine language, and the role of compilers and interpreters. All of this is covered in a typical computer science curriculum. If you have a software background but have not studied these topics formally, a classic book on compilers and machine translation is Aho, Sethi and Ullman’s Compilers, Principles, Techniques and Tools (the dragon book). The dragon book is quite good but has been around for quite a while; a CS professor or recent graduate may be able to recommend something newer.

An additional step would be to explore the current state-of-the-art of AI techniques – e.g. neural nets, Bayesian inference, etc. There are quite a few people on LW who can probably give you good recommendations in this area.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2015-03-08T17:50:12.683Z · LW(p) · GW(p)

But current neural nets don't have source code as we know it: the intelligence is coded very implicitly into the weights after training, and the source code explicitly specifies a net that, by itself, doesn't do anything.

Replies from: g_pepper
comment by g_pepper · 2015-03-09T14:59:07.518Z · LW(p) · GW(p)

It is true that much of the intelligence in a neural network is stored implicitly in the weights. The same (or similar) can be said about many other machine-learning techniques. However, I don't think that anything I said above indicated otherwise.

comment by G0W51 · 2015-03-04T02:44:06.780Z · LW(p) · GW(p)

Regulation to prevent the creation of space junk seems beneficial, as space junk could create Kessler syndrome, which would make it much harder to colonize space and thus increase existential risk, since without space colonization a catastrophe on Earth could kill off all intelligent Earth-originating life.

I know this isn't completely on-topic, but I don't know of any forum on x-risk, so I don't know of any better place to put it. On a related note, is there any demand for an x-risk forum? Someone (such as myself) should make one if there is enough demand for it.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-03-04T17:47:52.178Z · LW(p) · GW(p)

There is a general problem that a commons transitions from abundant to tragic as demand grows. At what point do you introduce some kind of centralized regulation (eg, property rights)? How do you do that?

But space is nowhere near that point. Not worrying about Kessler syndrome is the right answer. And if it were going to be a problem in the foreseeable future, there are very few users of space, so they could easily negotiate a solution. If you expect that in the future every city of a million people will be sovereign with its own space program, then there is more of a tragic commons, but in that scenario space is the least of your problems.

Replies from: G0W51
comment by G0W51 · 2015-03-05T16:30:23.831Z · LW(p) · GW(p)

I'm not so sure that space around Earth is nowhere near that point. There is a concern that a collision with the single large satellite Envisat could trigger Kessler Syndrome, and "two catalogued objects pass within about 200m of it every year."

comment by Xerographica · 2015-03-06T06:44:16.086Z · LW(p) · GW(p)

Does Netflix have a shortage of fictional content that stimulates your mind?

Yes/No

My answer is yes.

comment by Houshalter · 2015-03-02T11:00:47.267Z · LW(p) · GW(p)

In Pascal's Mugging, the problem seems to be using expected values, which can be highly distorted by even a single outlier.

The post led to a huge number of proposed solutions. Most of them seem pretty bad, and they don't even address the problem itself, just the specific thought experiment. Others, like bounding the utility function, are OK, but not really elegant. We don't really want to disregard high-utility futures; we just don't want them to highly distort our decision process. But if we make decisions based on expected utility, they inevitably do.

So why is it taken as a given that we decide based on expected utility? Why not "median expected utility"? That is, look at the space of all possible outcomes and select the point where exactly 50% of them are better and exactly 50% are worse, then choose actions so that this median future is the best.

I'm not certain that this would generate consistent behavior, although you could possibly fix that by making it self referencing. That is, predetermine your future actions now so they lead to the future you desire. Or modify your decision making algorithm to the same effect.

I'm more concerned that there are also weird edge cases where this doesn't line up with our decision-making algorithm. It solves the outlier problem by giving outliers absolutely zero weight. If you had the choice to buy a one-dollar lottery ticket with a 20% chance of winning millions, you would pass it up. (Although, if you expect to encounter many such opportunities in the future, you would predetermine yourself to take them, but only up to a certain point. And this intuitively seems to me the sort of reasoning humans use when they choose to obey expected utility calculations.) The same goes for avoiding large risks.

But not all is lost: there wasn't any a priori reason to believe expected utility was the ideal human decision algorithm either. There are an infinite number of possible algorithms for converting a distribution to a single value. Granted, most of them aren't elegant like these, but who says humans are?

We should expect this from evolution. Not just because it's messy, but because any creature that actually followed expected utility calculations in extreme cases would almost certainly die. The best strategy would be to follow it in everyday circumstances but break from it in the extremes.

The point is just that the utility function isn't the only thing we need to worry about. I think declining to pay the mugger, or to worship the Christian God, is a perfectly valid option, even if you really have an unbounded utility function and non-balancing priors. And most likely we will be fine if we do that.

Replies from: D_Malik, philh, Vaniver, Lumifer, shminux, Slider, Pfft
comment by D_Malik · 2015-03-02T17:07:02.817Z · LW(p) · GW(p)

VNM utility is basically defined as "that function whose expectation we maximize". There exists such a function as long as you obey some very unobjectionable axioms. So instead of saying "I do not want to maximize the expectation of my utility function U", you should say "U is not my utility function".

Replies from: seer, Houshalter
comment by seer · 2015-03-07T05:33:35.048Z · LW(p) · GW(p)

The problem with this argument is that it boils down to: if we accept intuitive axioms X, we get counter-intuitive result Y. But why is ~Y any less worthy of being an axiom than X?

comment by Houshalter · 2015-03-02T19:21:19.468Z · LW(p) · GW(p)

You miss my point. I am objecting to those axioms. I don't want to change my utility function. If God is real, perhaps he really could offer infinite reward or infinite punishment. You might really think murdering 3^^^^3 people is just that bad.

However these events have such low probability that I can safely choose to ignore them, and that's a perfectly valid choice. Maximizing expected utility means you will almost certainly do worse in the real world than an agent that doesn't.

Replies from: None
comment by [deleted] · 2015-03-02T19:30:01.146Z · LW(p) · GW(p)

Which axiom do you reject?

Replies from: MrMind, Houshalter
comment by MrMind · 2015-03-03T08:15:46.667Z · LW(p) · GW(p)

Continuity, I would say.

Replies from: hairyfigment, IlyaShpitser
comment by hairyfigment · 2015-03-30T06:38:36.658Z · LW(p) · GW(p)

That makes no sense in context, since continuity is equivalent to saying (roughly) 'If you prefer staying on this side of the street to dying, but prefer something on the other side of the street to staying here, there exists some probability of death which is small enough to make you prefer crossing the street.'

This sounds almost exactly like what Houshalter is arguing in the great-grandparent ("these events have such low probability that I can safely choose to ignore them,") so it can't be the axiom s/he objects to.

I could see objecting to Completeness, since in fact our preferences may be ill-defined for some choices. I don't know if rejecting this axiom could produce the desired result in Pascal's Mugging, though, and I'd half expect it to cause all sorts of trouble elsewhere.

comment by IlyaShpitser · 2015-03-03T12:09:51.333Z · LW(p) · GW(p)

That sounds right, actually.

comment by Houshalter · 2015-03-02T20:18:42.302Z · LW(p) · GW(p)

That for any bet with an arbitrarily small probability p, there is a value of u high enough that I would take it.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-03-03T05:55:51.284Z · LW(p) · GW(p)

That's not one of the axioms. In fact, none of the axioms mention u at all.

Replies from: Houshalter
comment by Houshalter · 2015-03-03T07:54:57.653Z · LW(p) · GW(p)

True, but they must imply it in order to imply the expected utility algorithm.

comment by philh · 2015-03-02T11:15:43.287Z · LW(p) · GW(p)

That is, look at the space of all possible outcomes and select the point where exactly 50% of them are better and exactly 50% are worse, then choose actions so that this median future is as good as possible.

This seems vulnerable to the following bet: I roll a d6. If I roll 3+, I give you a dollar. Otherwise I shoot you.

Replies from: Houshalter
comment by Houshalter · 2015-03-02T11:32:51.743Z · LW(p) · GW(p)

I mention that vulnerability further down. Obviously it doesn't fit human decision making either, but I think it's qualitatively closer.

An example of an algorithm that's closer to the desired behavior: sample n counterfactuals from your probability distribution, take the average of these n outcomes, and then take the median of this entire procedure, i.e. the value such that 50% of the time the average of the n outcomes is higher and 50% of the time it's lower.

As n approaches infinity it becomes equivalent to expected utility, and as it approaches 1 it becomes median expected utility. A reasonable value is probably a few hundred, so that you select outcomes where you come out ahead the vast majority of the time, but still take low-probability risks or ignore low-probability rewards.
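
A rough sketch of what I mean, with made-up lottery numbers:

```python
import random
import statistics

def sampled_value(outcome_sampler, n, trials=2001):
    """Draw n outcomes, average them, and take the median of that whole
    procedure across many trials. n=1 is the plain median rule; as n grows
    this approaches ordinary expected utility."""
    return statistics.median(
        sum(outcome_sampler() for _ in range(n)) / n
        for _ in range(trials)
    )

# A lottery-like gamble (made up): pay $1 for a 0.1% chance at $5,000.
def lottery():
    return 4999 if random.random() < 0.001 else -1

print(sampled_value(lottery, n=1))     # -1: a single draw almost never wins, so pass it up
print(sampled_value(lottery, n=1000))  # ~+4: wins reliably show up in a 1000-draw average, so take it
```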

comment by Vaniver · 2015-03-02T22:23:55.375Z · LW(p) · GW(p)

Why not "median expected utility"?

This might sound silly, but it's deeper than it looks: the reason why we use the expected value of utility (i.e. means) to determine the best of a set of gambles is because utility is defined as the thing that you maximize the expected value of.

The thing that's nice about VNM utility is that it's mathematically consistent. That means we can't come up with a scenario where VNM utility generates silly outputs with sensible inputs. Of course we can give VNM silly inputs and get silly outputs back--scenarios like Pascal's Mugging are the equivalent of "suppose something really weird happens; wouldn't that be weird?" to which the answer is "well, yes."

The really nice thing about VNM is that it's the only rule that's mathematically consistent with itself and a handful of nice axioms. You might give up one of those axioms, but for any of those axioms we can show an example where a rule that doesn't follow that axiom will take sensible inputs and give silly outputs. So I don't think there's much to be gained by trying to replace a mean decision rule with a median decision rule or some other decision rule--but there is a lot to be gained by sharpening our probability distributions and more clearly figuring out our mapping from world-histories to utilities.

Replies from: IlyaShpitser, Houshalter
comment by IlyaShpitser · 2015-03-03T12:01:29.962Z · LW(p) · GW(p)

"suppose something really weird happens; wouldn't that be weird?"

To me, consequentialism is either something trivial or something I reject, but that said this is a fully general (and kind of weak) counter-argument. I can apply it to Newcomb from the CDT point of view. I can apply it to the Smoking Lesion from the EDT point of view! I can apply it to astronomical data from the point of view of Ptolemy's theory of celestial motion! We have to deal with things!

Replies from: Vaniver
comment by Vaniver · 2015-03-03T14:31:57.810Z · LW(p) · GW(p)

To me, consequentialism is either something trivial or something I reject

I hesitate to call consequentialism trivial, because I wouldn't use it to describe a broad class of 'intelligent' agents, but I also wouldn't reject it, because it does describe the design of those agents.

this is a fully general (and kind of weak) counter-argument.

I don't see it as a counter-argument. In general, I think that a method is appropriate if hard things are hard and easy things are easy--and, similarly, normal things are normal and weird things are weird. If the output is weird in the same way that the input is weird, the system is behaving appropriately; if it adds or subtracts weirdness, then we're in trouble!

For example, suppose you supplied a problem with a relevant logical contradiction to your decision algorithm, and it spat out a single numerical answer. Is that a sign of robustness, or lack of robustness?

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-03-03T15:33:12.632Z · LW(p) · GW(p)

I hesitate to call consequentialism trivial

I just meant I accept the consequentialist idea in decision theory that we should maximize, e.g. pick the best out of alternatives. But said in this way, it's a trivial point. I reject more general varieties of consequentialism (for reasons that are not important right now, but basically I think a lot of weird conclusions of consequentialism are due to modeling problems, e.g. the set up that makes consequentialism work doesn't apply well).

normal things are normal and weird things are weird

I don't know what you are saying here. Can you taboo "weird?" Newcomb is weird for CDT because it explicitly violates an assumption CDT is using. The answer here is to go meta and think about a family of decision theories of which CDT is one, indexed by their assumption sets.

Replies from: Vaniver
comment by Vaniver · 2015-03-03T16:21:49.874Z · LW(p) · GW(p)

I just meant I accept the consequentialist idea in decision theory that we should maximize, e.g. pick the best out of alternatives. But said in this way, it's a trivial point.

I understood and agree with that statement of consequentialism in decision theory--what I disagree with is that it's trivial that maximization is the right approach to take! For many situations, a reflexive agent that does not actively simulate the future or consider alternatives performs better than a contemplative agent that does simulate the future and consider alternatives, because the best alternative is "obvious" and the acts of simulation and consideration consume time and resources that do not pay for themselves.

That's obviously what's going on with thermostats, but I would argue is what goes on all the way up to the consequentialism-deontology divide in ethics.

Can you taboo "weird?"

I would probably replace it with Pearl's phrase here, of "surprising or unbelievable."

To use the specific example of Newcomb's problem, if people find a perfect predictor "surprising or unbelievable," then they probably also think that the right thing to do around a perfect predictor is "surprising or unbelievable," because using logic on an unbelievable premise can lead to an unbelievable conclusion! Consider a Mundane Newcomb's problem which is missing perfect prediction but has the same evidential and counterfactual features: Omega offers you the choice of one or two boxes, you choose which boxes to take, and then it fills them based on your choice--if you chose only the red box, it puts a million dollars in the red box and a thousand dollars in the blue box; if you chose the blue box or no boxes, it puts only a thousand dollars in the blue box. Anyone who understands the scenario and prefers more money to less money will choose just the red box, and there's nothing surprising or unbelievable about it.

What is surprising is the claim that there's an entity who can replicate the counterfactual structure of the Mundane Newcomb scenario without also replicating the temporal structure of that scenario. But that's a claim about physics, not decision theory!

Replies from: IlyaShpitser
comment by IlyaShpitser · 2015-03-07T14:26:17.775Z · LW(p) · GW(p)

because the best alternative is "obvious" and the acts of simulation and consideration consume time and resources that do not pay for themselves.

Absolutely. This is the "bounded rationality" setting lots of people think about. For instance, Big Data is fashionable these days, and lots of people think about how we may do usual statistics business under severe computational constraints due to huge dataset sizes, e.g. stuff like this:

http://www.cs.berkeley.edu/~jordan/papers/blb_icml2012.pdf


But in bounded rationality settings we still want to pick the best out of our alternatives; we just have a constraint that we can't take more than a certain amount of resources to return an answer. The (trivial) idea of doing your best is still there. That is the part I accept. But that part is boring; figuring out the right thing to maximize is what is very subtle (and may involve non-consequentialist ideas--for example, a decision theory that handles blackmail may involve virtue-ethical ideas, because the returned answer depends on "the sort of agent" someone is).

comment by Houshalter · 2015-03-03T03:21:33.438Z · LW(p) · GW(p)

I don't agree. Utility is a separate concept from expected value maximization. Utility is a way of ordering and comparing different outcomes based on how desirable they are. You can say that one outcome is more desirable than another, or even quantify how many times more desirable it is. This is a useful and general concept.

Expected utility does have some nice properties, like being completely consistent. However, I argued above that this isn't a necessary property. It adds complexity, sure, but if you self-modify your decision-making algorithm or predetermine your actions, you can force your future self to be consistent with your present self's desires.

Expected utility is perfectly rational as the number of "bets" you take goes to infinity. Rewards will cancel out the losses in the limit, and so any agent would choose to follow EU regardless of their decision making algorithm. But as the number of bets becomes finite, it's less obvious that this is the most desirable strategy.

That means we can't come up with a scenario where VNM utility generates silly outputs with sensible inputs. Of course we can give VNM silly inputs and get silly outputs back--scenarios like Pascal's Mugging are the equivalent of "suppose something really weird happens; wouldn't that be weird?" to which the answer is "well, yes."

Pascal's Mugging isn't "weird"; it's perfectly typical. There are probably an infinite number of Pascal's-Mugging-type situations: hypotheses with exceedingly low probability but high utility.

If we built an AI today based on pure expected utility, it would most likely fail spectacularly. These low-probability hypotheses would come to totally dominate its decisions. Perhaps it would start to worship various gods, practice rituals, and obey superstitions. Or something far more absurd that we haven't even thought of.

And if you really believe in EU, you can't say that this behavior is wrong or undesirable. This is what you should be doing, if you could, and you are losing a huge amount of EU by not doing it. You should want, more than anything in existence, the ability to calculate these hypotheses exactly so you can collect that EU.

I don't want that, though. I want a decision rule such that I am very likely to end up in a good outcome, not one where I will most likely end up in a very suboptimal outcome, with an infinitesimal probability of winning the infinite-utility lottery.

Replies from: Epictetus, Vaniver, Kindly
comment by Epictetus · 2015-03-04T05:09:24.743Z · LW(p) · GW(p)

Expected utility is convenient and makes for a nice mathematical theory.

It also makes a lot of assumptions. One assumes that the expectation does, in fact, exist. It need not. For example, in a game where two players repeatedly toss a fair coin, the number of heads will eventually equal the number of tails again with probability 1, yet the expected waiting time for this to happen is infinite. Then there's the classic St. Petersburg paradox.
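
A quick simulation of the St. Petersburg game (one common payoff convention; the details don't matter) shows the sample mean refusing to settle down the way a finite expectation would:

```python
import random

def st_petersburg():
    """Toss a fair coin until the first heads; the pot doubles on each tails."""
    pot = 1
    while random.random() < 0.5:
        pot *= 2
    return pot

for n in (10**3, 10**4, 10**5, 10**6):
    samples = [st_petersburg() for _ in range(n)]
    print(n, sum(samples) / n)   # the sample mean tends to grow with n (roughly like log n) instead of converging
```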

There are examples of "fair" bets (i.e. expected gain is 0) that are nevertheless unfavorable (in the sense that you're almost certain to sustain a net loss over time).

Expected utility is a model of reality that does a good job in many circumstances but has some key drawbacks where naive application will lead to unrealistic decisions. The map is not the territory, after all.

comment by Vaniver · 2015-03-03T14:06:24.988Z · LW(p) · GW(p)

Utility is a separate concept from expected value maximization. Utility is a way of ordering and comparing different outcomes based on how desirable they are.

To Bentham, sure; today, we call something that generic "ranking" or something similar, because VNM-utility is the only game in town when it comes to assigning real-valued desirabilities to consequences.

But as the number of bets becomes finite, it's less obvious that this is the most desirable strategy.

Disagreed. The proof of the VNM axioms goes through for a single bet; I recommend you look that up, and then try to create a counterexample.

Note that it's easy to come up with a wrong utility mapping. One could, say, map dollars linearly to utility and then say "but I don't prefer a half chance of $100 and half chance of nothing to a certain $50!", but that's solved by changing the utility mapping from linear to sublinear (say, log or sqrt or so on). In order to exhibit a counterexample it has to look like the Allais paradox, where someone confirms two preferences and then does not agree with the consequence of those preferences considered together.

There are probably an infinite number of Pascal's-Mugging-type situations: hypotheses with exceedingly low probability but high utility.

It probably isn't the case that there are an infinite number of situations where the utility times the probability is higher than the cost, and if there are, that's probably a faulty utility function or faulty probability estimator rather than a faulty EU calculation. Consider this bit from You and Your Research by Hamming:

Let me warn you, `important problem' must be phrased carefully. The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important. When I say that most scientists don't work on important problems, I mean it in that sense.

An AI might correctly calculate that time travel is the most positive technology it could possibly develop--but also quickly calculate that it has no idea where to even start, and so the probability of success from thinking about it more is low enough that it should go for a more credible option. That's what human thinkers do and it doesn't seem like a mistake in the way that the Allais paradox seems like a mistake.

Replies from: Houshalter
comment by Houshalter · 2015-03-04T02:35:59.877Z · LW(p) · GW(p)

But as the number of bets becomes finite, it's less obvious that this is the most desirable strategy.

Disagreed. The proof of the VNM axioms goes through for a single bet; I recommend you look that up, and then try to create a counterexample.

Pascal's wager is the counterexample, and it's older than VNM. EY's Pascal's mugging was just an attempt to formalize it a bit more and prevent silly excuses like "well, what if we don't allow infinities or assume the probabilities exactly cancel out."

Counterexample in that it violates what humans want, not that it produces inconsistent behavior or anything. It's perfectly valid for an agent to follow EU, as it is for it to follow my method. What we are arguing about is entirely subjective.

If you really believe in EU a priori, then no argument should be able to convince you it is wrong. You would find nothing wrong with Pascal situations, and totally agree with the result of EU. You wouldn't have to make clever arguments about the utility function or probability estimates to get out of it.

It probably isn't the case that there are an infinite number of situations where the utility times the probability is higher than the cost, and if there are, that's probably a faulty utility function or faulty probability estimator rather than a faulty EU calculation.

This is pretty thoroughly argued in the original Pascal's Mugging post. Hypotheses of vast utility can grow much faster than their improbability. The hypothesis "you will be rewarded/tortured 3^^^^3 units" contributes infinitesimally little to an EU calculation compared to the hypothesis "you will be rewarded/tortured 3^^^^^^^3 units", which takes only a few more bits to express, and it can grow even further.

Replies from: Vaniver
comment by Vaniver · 2015-03-04T03:52:14.564Z · LW(p) · GW(p)

Pascal's wager is the counterexample, and it's older than VNM.

Counterexample in what sense? If you do in fact receive infinite utility from going to heaven, and being Christian raises the chance of you going to heaven by any positive amount over your baseline chance, then it is the right move to be Christian instead of baseline.

The reason people reject Pascal's Wager or Mugging is, as I understand it, they don't see the statement "you receive infinite utility from X" or "you receive a huge amount of disutility from Y" as actual evidence about their future utility.

In general, I think that any problem which includes the word "infinite" is guilty until proven innocent, and it is much better to express it as a limit. (This clears up a huge amount of confusion.) And the general principle--that as the prize for winning a lottery gets better, the probability of winning the lottery necessary to justify buying a fixed-price ticket goes down--seems like a reasonable principle to me.

It's perfectly valid for an agent to follow EU, as it is for it to follow my method. What we are arguing about is entirely subjective.

I think money pumps argue against subjectivity. Basically, if you use an inconsistent decision theory, someone else can make money off your inconsistency or you don't actually use that inconsistent decision theory.

If you really believe in EU a priori, then no argument should be able to convince you it is wrong. You would find nothing wrong with Pascal situations, and totally agree with the result of EU. You wouldn't have to make clever arguments about the utility function or probability estimates to get out of it.

I will say right now: I believe that if you have a complete set of outcomes with known utilities and the probabilities of achieving those outcomes conditioned on taking actions from a set of possible actions, the best action in that set is the one with the highest probability-weighted utility sum. That is, EU maximization works if you feed it the right inputs.

Do I think it's trivial to get the right inputs for EU maximization? No! I'm not even sure it's possible except in approximation. Any problem that starts with utilities in the problem description has hidden the hard work under the rug, and perhaps that means they've hidden a ridiculous premise.

Hypotheses of vast utility can grow much faster than their improbability.

Assuming a particular method of assigning prior probabilities to statements, yes. But is that the right method of assigning prior probabilities to statements?

(That is, yes, I've read Eliezer's post, and he's asking how to generate probabilities of consequences given actions. That's a physics question, not a decision theory question.)

Replies from: Houshalter
comment by Houshalter · 2015-03-05T03:55:49.890Z · LW(p) · GW(p)

If you do in fact receive infinite utility from going to heaven, and being Christian raises the chance of you going to heaven by any positive amount over your baseline chance, then it is the right move to be Christian instead of baseline.

If "right" is defined as "maximizing expected utility", then yes. It's just a tautology, "maximizing expected utility maximizes expected utility".

My point is that if you actually asked the average person, even if you explained all this to them, they would still not agree that it was the right decision.

There is no law written into the universe that says you have to maximize expected utility. I don't think that's what humans really want. If we choose to follow it, in many situations it will lead to undesirable outcomes. And it's quite possible that those situations are actually common.

It may mean life becomes more complicated than making simple EU calculations, but you can still be perfectly consistent (see further down.)

In general, I think that any problem which includes the word "infinite" is guilty until proven innocent, and it is much better to express it as a limit. (This clears up a huge amount of confusion.)

You could express it as a limit trivially (e.g. a hypothesis that in heaven you will collect 3^^^3 utilons per second for an unending amount of time.)

And the general principle--that as the prize for winning a lottery gets better, the probability of winning the lottery necessary to justify buying a fixed-price ticket goes down--seems like a reasonable principle to me.

Sounds reasonable, but it breaks down in extreme cases, where you end up spending almost all of your probability mass in exchange for a single good future with arbitrarily low probability.

Here's a thought experiment. Omega offers you tickets for 2 extra lifetimes of life, in exchange for a 1% chance of dying when you buy the ticket. You are forced to just keep buying tickets until you finally die.

Maybe you object that you discount extra years of life by some function, so just modify the thought experiment so the reward increases factorially per ticket bought, or something like that.
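
Just to make the arithmetic explicit, taking the 1% figure at face value: each ticket can look like a good trade in isolation, yet the chance of surviving the whole sequence collapses geometrically.

```python
# Survival probability after buying n tickets at a 1% death risk each.
for n in (1, 10, 100, 500):
    print(n, 0.99 ** n)
# 1 0.99, 10 ~0.90, 100 ~0.37, 500 ~0.007
```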

Fortunately we don't have to deal with these situations much, because we happen to live in a universe where there aren't powerful agents offering us very high utility lotteries. But these situations occur all the time if you deal with hypotheses instead of lotteries. The only reason we don't notice it is because we ignore or refuse to assign probability estimates to very unlikely hypotheses. An AI might not, and so it's very important to consider this issue.

I think money pumps argue against subjectivity. Basically, if you use an inconsistent decision theory, someone else can make money off your inconsistency or you don't actually use that inconsistent decision theory.

My method isn't vulnerable to money pumps, and neither are an infinite number of arbitrary algorithms of the same class. See my comment here for details.

You don't even need the stuff I wrote about predetermining actions; that just minimizes regret. Even a naive implementation of expected median utility should not be money-pumpable.

Assuming a particular method of assigning prior probabilities to statements, yes. But is that the right method of assigning prior probabilities to statements?

The method by which you assign probabilities should be unrelated to the method by which you assign utilities to outcomes. That is, you can't just say you don't like the outcome EU gives you and so assign it a lower probability; that's a horrible violation of Bayesian principles.

I don't know what the correct method of assigning probabilities is, but even if you discount complex hypotheses factorially or something, you still get the same problem.

I certainly think these scenarios have reasonable prior probability. God could exist, we could be in the matrix, etc. I give them such low probability that I don't typically think about them, but for this issue that is irrelevant.

Replies from: Vaniver, Normal_Anomaly
comment by Vaniver · 2015-03-05T17:58:13.570Z · LW(p) · GW(p)

It's just a tautology, "maximizing expected utility maximizes expected utility".

Yes. That's the thing that sounds silly but is actually deep.

Here's a thought experiment. Omega offers you tickets for 2 extra lifetimes of life, in exchange for a 1% chance of dying when you buy the ticket. You are forced to just keep buying tickets until you finally die.

Maybe you object that you discount extra years of life by some function, so just modify the thought experiment so the reward increases factorially per ticket bought, or something like that.

That is the objection, but I think I should explain it in a more fundamental way.

What is the utility of a consequence? For simplicity, we often express it as a real number, with the caveat that all utilities involved in a problem have their relationships preserved by an affine transformation. But that number is grounded by a gamble. Specifically, consider three consequences, A, B, and C, with u(A)<u(B)<u(C). If I am indifferent between B for certain and A with probability p and C otherwise, I encode that with the mathematical relationship:

 u(B) = p*u(A) + (1-p)*u(C)

As I express more and more preferences, each number is grounded by more and more constraints.

The place where counterexamples to EU calculations go off the rails is when people intervene at the intermediate step. Suppose p is 50%, and I've assigned 0 to A, 1 to B, and 2 to C. If a new consequence, D, is introduced with a utility of 4, that immediately implies:

  1. I am indifferent between (50% A, 50% D) and (100% C).
  2. I am indifferent between (75% A, 25% D) and (100% B).
  3. I am indifferent between (67% B, 33% D) and (100% C).

If one of those three statements is not true, I can use that and D having a utility of 4 to prove a contradiction. But while the existence of D and my willingness to accept those specific gambles implies that D's utility is 4, the existence of the number 4 does not imply that there exists a consequence where I'm indifferent to those gambles!

And so very quickly Omega might have to offer me a lifetime longer than the lifetime of the universe, and because I don't believe that's possible I say "no thanks, I don't think you can deliver, and in the odd case where you can deliver, I'm not sure that I want what you can deliver." (This is the resolution of the St. Petersburg Paradox where you enforce that the house cannot pay you more than the total wealth of the Earth, in which case the expected value of the bet comes out to a reasonable, low number of dollars, roughly where people estimate the value of the bet.)
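
(As a quick arithmetic check, the three indifference statements a few paragraphs up do follow from the assigned utilities:)

```python
from math import isclose

u = {"A": 0, "B": 1, "C": 2, "D": 4}

assert isclose(0.50 * u["A"] + 0.50 * u["D"], u["C"])   # (50% A, 50% D) ~ (100% C)
assert isclose(0.75 * u["A"] + 0.25 * u["D"], u["B"])   # (75% A, 25% D) ~ (100% B)
assert isclose(2/3 * u["B"] + 1/3 * u["D"], u["C"])     # (67% B, 33% D) ~ (100% C)
```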

But these situations occur all the time if you deal with hypotheses instead of lotteries. The only reason we don't notice it is because we ignore or refuse to assign probability estimates to very unlikely hypotheses. An AI might not, and so it's very important to consider this issue.

To me, this maps on to basic research. There's some low probability that a particular molecule cures a particular variety of cancer, but it would be great if it did--so let's check. The important conceptual addition from this analogy is that both hypotheses are entangled (this molecule curing cancer implies things about other molecules) and values are entangled (the second drug that can cure a particular variety of cancer is less valuable than the first drug that can).

And so an AI might have somewhat different basic research priorities than we do--indeed, humans vary widely on their preferences for following various uncertain paths--but it seems likely to me that it could behave reasonably when coming up with a portfolio of actions to take, even if it looks like it might behave oddly with only one option.

The method by which you assign probabilities should be unrelated to the method you assign utilities to outcomes. That is, you can't just say you don't like the outcome EU gives you and so assign it a lower probability, that's a horrible violation of Bayesian principles.

Playing against nature, sure--but playing against an intelligent agent? Running a minimax calculation to figure out that my opponent is not likely to let me checkmate him in one move is hardly a horrible violation of Bayesian principles!

Replies from: Houshalter
comment by Houshalter · 2015-03-07T10:49:40.581Z · LW(p) · GW(p)

And so very quickly Omega might have to offer me a lifetime longer than the lifetime of the universe, and because I don't believe that's possible I say "no thanks, I don't think you can deliver, and in the odd case where you can deliver, I'm not sure that I want what you can deliver."

the house cannot pay you more than the total wealth of the Earth, in which case the expected value of the bet comes out to a reasonable, low number of dollars, roughly where people estimate the value of the bet.

This is a cop out. Obviously that specific situation can't occur in reality, that's not the point. If your decision algorithm fails in some extreme cases, at least confess that it's not universal.

And so very quickly Omega might have to offer me a lifetime longer than the lifetime of the universe, and because I don't believe that's possible I say "no thanks, I don't think you can deliver

Same thing. Omega's ability and honesty are premises.

The point of the thought experiment is just to show that EU is required to trade away huge amounts of outcome-space for really good but improbable outcomes. This is a good strategy if you plan on making an infinite number of bets, but horrible if you don't expect to live forever.

I don't get your drug research analogy. There is no Pascal-equivalent situation in drug research. At best you find a molecule that cures all diseases, but that's hardly infinite utility.

Instead it would be more like, "there is a tiny, tiny probability that a virus could emerge which causes humans not to die, but to suffer for eternity in the worst pain possible. Therefore, by EU calculation, I should spend all of my resources searching for a possible vaccine for this specific disease, and nothing else."

Replies from: Vaniver
comment by Vaniver · 2015-03-07T15:12:35.930Z · LW(p) · GW(p)

This is a cop out. Obviously that specific situation can't occur in reality, that's not the point. If your decision algorithm fails in some extreme cases, at least confess that it's not universal.

What does it mean for a decision algorithm to fail? I'll give an answer later, but here I'll point out that I do endorse that multiplication of reals is universal--that is, I don't think multiplication breaks down when the numbers get extreme enough.

Same thing. Omega's ability and honesty are premises.

And an unbelievable premise leads to an unbelievable conclusion. Don't say that logic has broken down because someone gave the syllogism "All men are immortal, Socrates is a man, therefore Socrates is still alive."

The point of the thought experiment is just to show that EU is required to trade away huge amounts of outcome-space for really good but improbable outcomes. This is a good strategy if you plan on making an infinite number of bets, but horrible if you don't expect to live forever.

How does [logic work]? Eliezer puts it better than I can:

The power of logic is that it relates models and statements. ... And here is the power of logic: For each syntactic step we do on our statements, we preserve the match to any model. In any model where our old collection of statements was true, the new statement will also be true. We don't have to check all possible conforming models to see if the new statement is true in all of them. We can trust certain syntactic steps in general - not to produce truth, but to preserve truth.

EU is not "required" to trade away huge amounts of outcome-space for really good but improbable outcomes. EU applies preference models to novel situations, not to produce preferences but to preserve them. If you give EU a preference model that matches your preferences, it will preserve the match and give you actions that best satisfy your preferences under the uncertainty model of the universe you gave it.

And if it's not true that you would trade away huge amounts of outcome-space for really good but improbable outcomes, this is a fact about your preference model that EU preserves! Remember, EU preference models map lists of outcomes to classes of lists of real numbers, but the inverse mapping is not guaranteed to have support over the entire reals.

I think a decision algorithm fails if it makes you predictably worse off than an alternative algorithm, and the chief ways to do so are 1) to do the math wrong and be inconsistent and 2) to make it more costly to express your preferences or world-model.

I don't get your drug research analogy.

We have lots of hypotheses about low-probability, high-payout options, and if humans make mistakes, it is probably by overestimating the probability of low-probability events and overestimating how much we'll enjoy the high payouts, both of which make us more likely to pursue those paths than a rational version of ourselves would.

So it seems to me that an algorithm that can correctly manage the budget of a pharmaceutical corporation--balancing R&D and marketing and production and so on--requires solving this philosophical problem. But we have the mathematical tools to correctly manage the budget of a pharmaceutical corporation given a world-model, which says to me we should turn our attention to getting more precise and powerful world-models.

Replies from: Houshalter
comment by Houshalter · 2015-03-16T11:50:15.258Z · LW(p) · GW(p)

What does it mean for a decision algorithm to fail?

When it makes decisions that are undesirable. There is no point deciding to run a decision algorithm which is perfectly consistent but results in outcomes you don't want.

In the case of the Omega's-life-tickets scenario, one could argue it fails in an objective sense, since it will keep buying tickets until it dies. But that wasn't even the point I was trying to make.

And an unbelievable premise leads to an unbelievable conclusion.

I don't know if there is a name for this fallacy but there should be. It's when someone objects to the premises of a hypothetical situation intended just to demonstrate a point--e.g. people who refuse to answer the trolley dilemma and instead say "but that will probably never happen!" It's very frustrating.

EU is not "required" to trade away huge amounts of outcome-space for really good but improbable outcomes. EU applies preference models to novel situations, not to produce preferences but to preserve them. If you give EU a preference model that matches your preferences, it will preserve the match and give you actions that best satisfy your preferences under the uncertainty model of the universe you gave it.

This is very subtle circular reasoning. If you assume your goal is to maximize the expected value of some utility function, then maximizing expected utility can do that if you specify the right utility function.

What I've been saying from the very beginning is that there isn't any reason to believe there is any utility function that will produce desirable outcomes if fed to an expected utility maximizer.

I think a decision algorithm fails if it makes you predictably worse off than an alternative algorithm

Even if you are an EU maximizer, EU will make you "predictably" worse off, in the sense that in the majority of cases you will be worse off. A true EU maximizer doesn't care, so long as the utility of the very low-probability outcomes is high enough.

Replies from: Vaniver
comment by Vaniver · 2015-03-16T14:15:41.374Z · LW(p) · GW(p)

I don't know if there is a name for this fallacy but there should be.

One name is fighting the hypothetical, and it's worth taking a look at the least convenient possible world and the true rejection as well.

There are good and bad reasons to fight the hypothetical. When it comes to these particular problems, though, the objections I've given are my true objections. The reason I'd only pay a tiny amount of money for the gamble in the St. Petersburg Paradox is that there is only so much financial value that the house can give up. One of the reasons I'm sure this is my true objection is because the richer the house, the more I would pay for such a gamble. (Because there are no infinitely rich houses, there is no one I would pay an infinite amount to for such a gamble.)

This is very subtle circular reasoning. If you assume your goal is to maximize the expected value of some utility function, then maximizing expected utility can do that if you specify the right utility function.

I'm not sure why you think it's subtle--I started off this conversation with:

This might sound silly, but it's deeper than it looks: the reason why we use the expected value of utility (i.e. means) to determine the best of a set of gambles is because utility is defined as the thing that you maximize the expected value of.

But I don't think it's quite right to call it "circular," for roughly the same reasons I don't think it's right to call logic "circular."

What I've been saying from the very beginning is that there isn't any reason to believe there is any utility function that will produce desirable outcomes if fed to an expected utility maximizer.

To make sure we're talking about the same thing, I think an expected utility maximizer (EUM) is something that takes a function u(O) that maps outcomes to utilities, a function p(A->O) that maps actions to probabilities of outcomes, and a set of possible actions, and then finds the action out of all possible A that has the maximum weighted sum of u(O)p(A->O) over all possible O.
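
Written out as code, that definition is just the following sketch, where u and p are whatever you choose to feed it:

```python
def best_action(actions, outcomes, u, p):
    """Return the action with the highest probability-weighted utility sum,
    where u(o) is the utility of outcome o and p(a, o) is the probability
    of outcome o given action a."""
    return max(actions, key=lambda a: sum(p(a, o) * u(o) for o in outcomes))
```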

So far, you have not been arguing that every possible EUM leads to pathological outcomes; you have been exhibiting particular combinations of u(O) and p(A->O) that lead to pathological outcomes, and I have been responding with "have you tried not using those u(O)s and p(A->O)s?".


It doesn't seem to me that this conversation is producing value for either of us, which suggests that we should either restart the conversation, take it to PMs, or drop it.

comment by Normal_Anomaly · 2015-03-05T15:39:45.440Z · LW(p) · GW(p)

Here's a thought experiment. Omega offers you tickets for 2 extra lifetimes of life, in exchange for a 1% chance of dying when you buy the ticket. You are forced to just keep buying tickets until you finally die.

This suggests buying tickets takes finite time per ticket, and that the offer is perpetually open. It seems like you could get a solid win out of this by living your life, buying one ticket every time you start running out of life. You keep as much of your probability mass alive as possible for as long as possible, and your probability of being alive at any given time after the end of the first "lifetime" is greater than it would've been if you hadn't bought tickets. Yeah, Omega has to follow you around while you go about your business, but that's no more obnoxious than saying you have to stand next to Omega wasting decades on mashing the ticket-buying button.

Replies from: Houshalter
comment by Houshalter · 2015-03-05T20:36:32.212Z · LW(p) · GW(p)

Ok change it so the ticket booth closes if you leave.

comment by Kindly · 2015-03-03T04:40:51.736Z · LW(p) · GW(p)

Expected utility is perfectly rational as the number of "bets" you take goes to infinity.

That's not the way in which maximizing expected utility is perfectly rational.

The way it's perfectly rational is this. Suppose you have any decision making algorithm; if you like, it can have an internal variable called "utility" that lets it order and compare different outcomes based on how desirable they are. Then either:

  • the algorithm has some ugly behavior with respect to a finite collection of bets (for instance, there are three bets A, B, and C such that it prefers A to B, B to C, and C to A), or

  • the algorithm is equivalent to one which maximizes the expected value of some utility function: maybe the one that your internal variable was measuring, maybe not.

Replies from: Houshalter
comment by Houshalter · 2015-03-03T08:35:22.568Z · LW(p) · GW(p)

The first condition is not true, since it gives a consistent value to any probability distribution of utilities. The second condition is not true either, since the median function is not merely a transform of the mean function.

I'm not sure what the "ugly" behavior you describe is, and I bet it rests on some assumption that's too strong. I already mentioned how inconsistent behavior can be fixed by allowing it to predetermine its actions.

Replies from: Kindly
comment by Kindly · 2015-03-03T13:40:57.294Z · LW(p) · GW(p)

You can find the Von Neumann--Morgenstern axioms for yourself. It's hard to say whether or not they're too strong.

The problem with "allowing [the median algorithm] to predetermine its actions" is that I then no longer know what the algorithm outputs in any given case. Maybe we can resolve this by considering a case where the median algorithm fails, and you can explain what your modification does to fix it. Here's an example.

Suppose I roll a single die.

  • Bet A loses you $5 on a roll of 1 or 2, but wins you $1 on a roll of 3, 4, 5, or 6.

  • Bet B loses you $5 on a roll of 5 or 6, but wins you $1 on a roll of 1, 2, 3, or 4.

Bet A has median utility of U($1), as does bet B. However, combined they have a median utility of U(-$4).

So the straightforward median algorithm pays money to buy Bet A, pays money to buy Bet B, but will then pay money to be rid of their combination.
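
Spelled out over the six equally likely die rolls, the calculation is:

```python
import statistics

rolls = [1, 2, 3, 4, 5, 6]
bet_a = {1: -5, 2: -5, 3: 1, 4: 1, 5: 1, 6: 1}
bet_b = {1: 1, 2: 1, 3: 1, 4: 1, 5: -5, 6: -5}

print(statistics.median(bet_a[r] for r in rolls))             # 1
print(statistics.median(bet_b[r] for r in rolls))             # 1
print(statistics.median(bet_a[r] + bet_b[r] for r in rolls))  # -4
```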

Replies from: Houshalter
comment by Houshalter · 2015-03-03T23:49:42.989Z · LW(p) · GW(p)

I think I've found the core of our disagreement. I want an algorithm that considers all possible paths through time. It decides on a set of actions, not just for the current time step, but for all possible future time steps. It chooses such that the final probability distribution of possible outcomes, at some point in the future, is optimal according to some metric. I originally thought of median, but it can work with any arbitrary metric.

This is a generalization of expected utility. The VNM axioms require an algorithm to make decisions independently and Just In Time. This method, in contrast, lets it consider all possible outcomes together. It may be less elegant than EU, but I think it's closer to what humans actually want.

Anyway, your example is wrong, even without predetermined actions. The algorithm would buy bet A, but then not buy bet B. This is because it doesn't consider bets in isolation like EU does, but considers its entire probability distribution of possible outcomes. Buying bet B would decrease its expected median utility, so it wouldn't take it.

Replies from: Douglas_Knight, IlyaShpitser, Vaniver, Kindly
comment by Douglas_Knight · 2015-03-05T18:53:12.699Z · LW(p) · GW(p)

The VNM axioms require an algorithm to make decisions independently and Just In Time.

No, they don't.

Replies from: Houshalter
comment by Houshalter · 2015-03-07T10:03:43.091Z · LW(p) · GW(p)

If the bet has a fixed utility, then EU gives it a fixed estimate right away, whereas my method considers it along with all other bets that it's made or expects to make, and its estimate can change over time. I should have said that it's not independent or fixed, but that is what I meant.

Replies from: Kindly
comment by Kindly · 2015-03-07T15:11:39.849Z · LW(p) · GW(p)

In the VNM scheme, where expected utility is derived as a consequence of the axioms, a bet's utility can change over time because the utilities of its outcomes are not fixed. Nothing at all stops you from changing the utility you attach to a 50:50 gamble of getting a kitten versus $5 if your utility for a kitten (or for $5) changes: for example, if you get another kitten or win the lottery.

Generalizing to allow the value of the bet to change when the value of the options did not change seems strange to me.

comment by IlyaShpitser · 2015-03-05T19:07:30.403Z · LW(p) · GW(p)

This is a generalization of expected utility.

I am lost--is this just EU in a longitudinal setting? You can average over lots of stuff. Maximizing EU is boring; it's specifying the right distribution that's tricky.

Replies from: Houshalter
comment by Houshalter · 2015-03-07T09:51:45.299Z · LW(p) · GW(p)

It's not EU, since it can implement arbitrary algorithms to specify the desired probability distribution of outcomes. Averaging utility is only one possibility; another I mentioned was median utility.

So you would take the median utility of all the possible outcomes. And then select the action (or series of actions in this case) that leads to the highest median utility.

No method of specifying utilities would let EU do the same thing, but you can trivially implement EU in it, so it's strictly more general than EU.

comment by Vaniver · 2015-03-05T14:56:27.664Z · LW(p) · GW(p)

I think I've found the core of our disagreement. I want an algorithm that considers all possible paths through time. It decides on a set of actions, not just for the current time step, but for all possible future time steps.

So, I think you might be interested in UDT. (I'm not sure what the current best reference for that is.) I think that this requires actual omniscience, and so is not a good place to look for decision algorithms.

(Though I should add that typically utilities are defined over world-histories, and so any decision algorithm typically identifies classes of 'equivalent' actions, i.e. acknowledges that this is a thing that needs to be accepted somehow.)

Replies from: Douglas_Knight
comment by Douglas_Knight · 2015-03-05T18:56:04.338Z · LW(p) · GW(p)

UDT is overkill. The idea that all future choices can be collapsed into a single choice appears in the work of von Neumann and Morgenstern, but is probably much older.

comment by Kindly · 2015-03-04T03:02:19.700Z · LW(p) · GW(p)

Oh, I see. I didn't take that problem into account, because it doesn't matter for expected utility, which is additive. But you're right that considering the entire probability distribution is the right thing to do, and under that assumption we're forced to be transitive.

The actual VNM axiom violated by median utility is independence: If you prefer X to Y, then a gamble of X vs Z is preferable to the equivalent gamble of Y vs Z. Consider the following two comparisons:

  • Taking bet A, as above, versus the status quo.

  • A 2/3 chance of taking bet A and a 1/3 chance of losing $5, versus a 2/3 chance of the status quo and a 1/3 chance of losing $5.

In the first case, bet A has median utility U($1) and the status quo has U($0), so you pick bet A. In the second case, a gamble with a possibility of bet A has median utility U(-$5) and a gamble with a possibility of the status quo still has U($0), so you pick the second gamble.

Of course, independence is probably the shakiest of the VNM axioms, and it wouldn't surprise me if you're unconvinced by it.
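
Worked through numerically (the weighted-median helper below is just shorthand for "the 50th-percentile outcome"):

```python
def weighted_median(outcomes):
    """outcomes: list of (value, probability) pairs; returns the 50th-percentile value."""
    total = 0.0
    for value, prob in sorted(outcomes):
        total += prob
        if total >= 0.5:
            return value

bet_a = [(-5, 1/3), (1, 2/3)]   # lose $5 on a roll of 1-2, win $1 on 3-6
status_quo = [(0, 1.0)]
lose_5 = [(-5, 1.0)]

def mix(p, g1, g2):
    """A p chance of gamble g1 and a (1-p) chance of gamble g2."""
    return [(v, p * w) for v, w in g1] + [(v, (1 - p) * w) for v, w in g2]

print(weighted_median(bet_a), weighted_median(status_quo))   # 1 vs 0: prefer bet A
print(weighted_median(mix(2/3, bet_a, lose_5)))              # -5
print(weighted_median(mix(2/3, status_quo, lose_5)))         # 0: the preference reverses
```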

comment by Lumifer · 2015-03-02T17:57:03.898Z · LW(p) · GW(p)

the problem seems to be using expected values, which is highly distorted by even a single outlier ... Why not "median expected utility"?

This is a common problem which is handled by robust statistics. Means, while efficient, are notably not robust. The median is a robust alternative from the class of L-estimators (L is for Linear), but a popular alternative for location estimates nowadays is something from the class of M-estimators (M is for Maximum Likelihood).
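
To see the difference on a toy example (data and tuning constant are arbitrary; real work would use a library implementation rather than this hand-rolled sketch):

```python
import numpy as np

def huber_location(x, delta=1.5, tol=1e-8, max_iter=100):
    """Crude Huber-type M-estimate of location via iteratively reweighted means:
    points within delta robust-standard-deviations keep full weight, outliers
    get down-weighted. A sketch, not a production implementation."""
    mu = np.median(x)
    scale = 1.4826 * np.median(np.abs(x - mu))  # MAD-based scale estimate
    if scale == 0:
        scale = 1.0
    for _ in range(max_iter):
        r = np.abs(x - mu) / scale
        w = np.minimum(1.0, delta / np.maximum(r, 1e-12))
        new_mu = np.sum(w * x) / np.sum(w)
        if abs(new_mu - mu) < tol:
            return new_mu
        mu = new_mu
    return mu

data = np.array([1.0, 1.2, 0.9, 1.1, 1.0, 50.0])             # one wild outlier
print(np.mean(data), np.median(data), huber_location(data))  # ~9.2 vs 1.05 vs ~1.08
```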

Replies from: Houshalter
comment by Houshalter · 2015-03-02T19:15:44.431Z · LW(p) · GW(p)

Maximum Likelihood doesn't really lead to desirable behavior when the number of possibilities is very large. E.g. I roll a die where a 2 or a 3 gives you a dollar, and unrelated but horrible things happen on any other number.

Replies from: Lumifer
comment by Lumifer · 2015-03-02T19:29:26.103Z · LW(p) · GW(p)

Huh?

Replies from: Houshalter
comment by Houshalter · 2015-03-02T19:35:47.668Z · LW(p) · GW(p)

Maximum likelihood means taking the outcome with the highest probability relative to everything else, correct? This isn't really desirable since the outcome with the highest probability might still have very low absolute probability.

Replies from: Lumifer
comment by Lumifer · 2015-03-02T19:41:58.385Z · LW(p) · GW(p)

Maximum likelihood means taking the outcome with the highest probability relative to everything else, correct?

No, not at all, what you are talking about is called the mode of the distribution.

Why don't you look at the links in my post?

Replies from: Houshalter
comment by Houshalter · 2015-03-02T20:00:17.765Z · LW(p) · GW(p)

a maximum-likelihood estimate is often defined to be a zero of the derivative of the likelihood function with respect to the parameter

And the equation.

I don't see how it's different than the mode. Even the graphs show it as being the same: 1 2.

Replies from: Lumifer
comment by Lumifer · 2015-03-02T20:10:25.359Z · LW(p) · GW(p)

I don't see how it's different than the mode

Think about a bimodal distribution, for example. But in any case, we were talking about M-estimates, weren't we?

comment by shminux · 2015-03-02T16:04:38.919Z · LW(p) · GW(p)

Among other issues with aiming for the middle of the road, I suspect that a Pascal's mugger who knows that you go for median (or, more generally, x-percentile by count) expected utility will be able to manufacture an offer where the median-utility calculation makes you give in, just like the maximum-utility one does.

Replies from: gjm
comment by gjm · 2015-03-02T17:38:50.598Z · LW(p) · GW(p)

I bet a median-utility maximizer can be exploited. But I don't believe one can be exploited by a Pascal's mugging. What makes a Pascal's mugging a Pascal's mugging is that it involves a very low probability of a very large change in utility.

Replies from: shminux
comment by shminux · 2015-03-02T19:02:39.076Z · LW(p) · GW(p)

Do you believe that an agent using the 99.999th percentile (by utility-ordered outcome count) can be Pascal-mugged? How about the 90th? Where is the cut-off?

Replies from: gjm
comment by gjm · 2015-03-02T20:07:30.697Z · LW(p) · GW(p)

I'm not sure this is a useful question. I mean, if you choose the (1-p) quantile (I'm assuming this means something like "truncate the distribution at the p and 1-p quantiles and then take the mean of what's left", which seems like the least-crazy way to do it) then any given Pascal's Mugging becomes possible once p gets small enough. But what I have in mind when I hear "Pascal's Mugging" is something so outrageously improbable that the usual way of dealing with it is to say "eh, not going to happen" and move on (accompanied by a delta-U so outrageously large as to allegedly outweigh that), and I take Houshalter to be suggesting truncating at a not-outrageously-small p, and the two don't really seem to overlap.
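
Under that reading of the rule--trim at the p and (1-p) quantiles, then average what's left--the effect on a mugging-shaped distribution is easy to see (the distribution below is entirely made up):

```python
import numpy as np

def truncated_mean(samples, p):
    """Drop everything below the p-quantile and above the (1-p)-quantile, then average."""
    lo, hi = np.quantile(samples, [p, 1 - p])
    kept = samples[(samples >= lo) & (samples <= hi)]
    return kept.mean()

rng = np.random.default_rng(0)
# Almost always a small loss, with a 1-in-10,000 chance of an enormous payoff.
samples = np.where(rng.random(1_000_000) < 1e-4, 1e12, -1.0)

print(samples.mean())                  # ~1e8: dominated by the rare huge payoffs
print(truncated_mean(samples, 0.001))  # -1.0: the huge payoffs fall outside the kept range
```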

comment by Slider · 2015-03-02T14:58:29.616Z · LW(p) · GW(p)

There is reason to believe that "expected amount of reproductions" is more aligned with natural selection than most other candidates. However, organisms can't directly decide to prosper. They have to do it via specific means. That is why a surrogate is expected. You can't say that utility maximization would be a bad surrogate, as it is almost defined to be the best surrogate. Now, that doesn't mean that what your cognitive ritual calls utility needs to correspond to actual utility, but it doesn't destroy the concept.

Replies from: Houshalter
comment by Houshalter · 2015-03-02T19:30:55.416Z · LW(p) · GW(p)

In an infinite world, expected reproductions would be a good thing to maximize. An organism that had 3^^^^3 babies would vastly increase the spread of its genes, and so it would be worth taking very, very low-probability bets. But in a finite world all such bets will lose, leaving behind only organisms which don't take such bets, in the vast majority of worlds.

Replies from: Lumifer, Slider
comment by Lumifer · 2015-03-02T19:41:05.507Z · LW(p) · GW(p)

An organism that had 3^^^^3 babies would vastly increase the spread of its genes

Not quite: such an organism is likely to devastate its ecosystem in one generation and die out soon after that.

Replies from: Slider
comment by Slider · 2015-03-04T15:05:02.660Z · LW(p) · GW(p)

A reason why any amount of sustainable growth is preferable to a large one-shot.

comment by Slider · 2015-03-04T15:25:16.339Z · LW(p) · GW(p)

Your argument seems to use the expected amount of copies to argue in favour of forgetting about the expected amount of copies. In a way this is illustrative: an organism that only cares about sex and not about defence is more naive than one that sometimes forgoes sex to meet defence needs. But in a way the defence option provides for more copies. In this way sex isn't choosing to make more copies; it is only one strategy path toward it that might fail.

Arguing about finiteness is like knowing the maximum size of bets the universe can offer. But how can one be sure about the size of that limit? There is, though, an argument that a species that has lived a finite time will have only a finite amount of evidence and thus a limit on the certainty it can achieve. There are some propositions that might exceed this limit. However, using any probability analysis to decide how to tune your behaviour toward these propositions would be arbitrary. That is, there is no way to calculate unexpected utility, and expected utility doesn't take a stance on what grounds you expect that utility to take place.

comment by Pfft · 2015-03-02T21:15:12.129Z · LW(p) · GW(p)

It seems one problem with using median is that the result depends on how coarsely you model the possible outcomes. E.g. suppose I am considering a bus trip: the bus may be on time, arrive early, or arrive late; and it may be late because it drove over a cliff killing all the passengers, or because it caught fire horribly maiming the passengers, or because it was stuck for hours in a snowstorm, or because it was briefly caught in traffic.

With expected utility it doesn't matter how you group them: the expected value of the trip is the weighted sum of the expected values of being late/on time/early. But the median of [late, on time, early] is different from the median of [cliff, fire, snowstorm, traffic, on time, early].
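
For example, with some made-up utilities and the outcomes in each list weighted equally:

```python
import statistics

# Made-up utilities for each outcome
u = {"cliff": -1000, "fire": -800, "snowstorm": -50, "traffic": -5,
     "late": -5, "on time": 0, "early": 1}

coarse = ["late", "on time", "early"]
fine = ["cliff", "fire", "snowstorm", "traffic", "on time", "early"]

print(statistics.median(u[o] for o in coarse))  # 0 ("on time")
print(statistics.median(u[o] for o in fine))    # -27.5: same trip, different median
```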

comment by [deleted] · 2015-03-08T19:53:16.543Z · LW(p) · GW(p)

It seems that watching talkative sports fans watch sports might be a big opportunity to observe that bias that makes people evaluate bad and good properties in a lump, the Affect Heuristic. And that sports like biathlon are more handy than, say, football, since they give rapid binary updates (for the shooting), and almost-binary (?) ones for the running. And you can control for variables like 'country', etc. What do you think?

comment by Thomas · 2015-03-02T10:21:50.890Z · LW(p) · GW(p)

I have devised (automatically--I just let it grow) an algorithm which lists all the leap years in the Gregorian calendar using the cosine function, scrapping the ugly constants of 100 and 400.

Here

Replies from: ctintera
comment by ctintera · 2015-03-02T17:16:29.866Z · LW(p) · GW(p)

I'm having difficulty envisioning what problem this solves. Leap years are already defined by a very simple function, and subbing in a cosine for a discrete periodicity adds complexity, does it not?
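
For comparison, the usual rule is just:

```python
def is_leap_gregorian(year):
    """The standard Gregorian rule, with the 100 and 400 constants the post wants to avoid."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

assert is_leap_gregorian(2000) and is_leap_gregorian(2016)
assert not is_leap_gregorian(1900) and not is_leap_gregorian(2015)
```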

Replies from: gjm, TylerJay
comment by gjm · 2015-03-02T17:40:08.664Z · LW(p) · GW(p)

I think (although Thomas leaves it frustratingly unclear) the point is that this algorithm was discovered by some kind of automatic process -- genetic programming or something. (If Thomas is seriously suggesting that his algorithm is an improvement on the usual one containing the "ugly constants" then I agree that that's misguided.)

comment by TylerJay · 2015-03-02T17:42:48.402Z · LW(p) · GW(p)

Last line of the article explains the motivation:

I wouldn’t mention it at all, but the inventor is not a human being and it’s a very good example of a “pure mechanical invention”.

Replies from: ctintera
comment by ctintera · 2015-03-02T19:15:12.223Z · LW(p) · GW(p)

Having an algorithm fit a model to some very simple data is not noteworthy either. It's possible that the means by which the "pure mechanical invention" was obtained are interesting, but they are not elaborated on in the slightest.