Open thread, Sept. 1-7, 2014

post by polymathwannabe · 2014-09-01T12:18:56.648Z · LW · GW · Legacy · 162 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


comment by Gvaerg · 2014-09-02T14:50:02.178Z · LW(p) · GW(p)

A nearby store has this sign that kinda reminds me of What the Tortoise Said to Achilles:

Products marked with can be heated at your request!

Definitely not making this up. Showed this today to my girlfriend who was speechless upon exiting the store.

Replies from: None, cousin_it
comment by [deleted] · 2014-09-02T15:22:28.912Z · LW(p) · GW(p)

You should recurse one level deeper and put a sign outside the store saying "Products purchased in stores marked with 'Products marked with can be heated at your request!' can be heated at your request!"

Replies from: Lumifer
comment by Lumifer · 2014-09-03T16:59:28.691Z · LW(p) · GW(p)

You should recurse one level deeper

No real reason to stop at only one level, is there? X-D

comment by cousin_it · 2014-09-05T10:44:22.807Z · LW(p) · GW(p)

Nerdier:

Products marked with this sentence can be heated at your request.

Nerdiest:

Products marked with "Products marked with X, with the first X replaced by the previous quoted sentence, can be heated at your request", with the first X replaced by the previous quoted sentence, can be heated at your request.

Even after all these years, writing quines still feels like I'm cheating the universe.

Replies from: Gvaerg
comment by Gvaerg · 2014-09-05T12:54:34.953Z · LW(p) · GW(p)

"After all this time?"

"Always."

Replies from: cousin_it
comment by cousin_it · 2014-09-05T13:37:23.950Z · LW(p) · GW(p)

I just made a quick generator for phrases like this :-)
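The generator itself isn't shown in the thread, but a minimal sketch of the substitution trick behind such phrases (a hypothetical Python helper, not cousin_it's actual code) could look like this:

```python
# Take a template containing X and replace the first X with the quoted
# template itself, mirroring "with the first X replaced by the previous
# quoted sentence" in the nerdiest version above.
def quine_phrase(template):
    return template.replace("X", '"' + template + '"', 1)

print(quine_phrase(
    "Products marked with X, with the first X replaced by the previous "
    "quoted sentence, can be heated at your request."
))
```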

comment by David_Gerard · 2014-09-06T21:55:56.213Z · LW(p) · GW(p)

So why do women do worse in certain fields of work? It turns out you can in fact do a direct A/B comparison on workplace gender discrimination: ask a transgender person. Formerly respected scientist Barbara Barres, now inexplicably-more-respected scientist Ben Barres. Actual quote: "Ben gave a great seminar today—but then his work is so much better than his sister's."

Replies from: gwern, polymathwannabe, D_Malik
comment by gwern · 2014-09-06T22:49:38.386Z · LW(p) · GW(p)

Saying it's a direct A/B comparison is seriously overstating it. Transitioning is itself a huge confounder, and if it were true that time before/after were exactly comparable, that would debunk one of the main justifications for allowing sex-changes in the first place!

Of course, the sample size is small here. And there’s no perfect agreement on cause-and-effect. Chris Edwards, a trans advertising executive, says that post-transition, he was given greater levels of responsibility—but he thinks it’s because the testosterone he took changed his behavior. He became less timid and more outspoken—and was seen, at work, as more of a leader. Indeed, some suggest that transmen might experience these workplace benefits partly because, post-transition, they are happier and more comfortable, and that this confidence leads to greater workplace success. But if that’s the case, one would expect that transwomen, armed with this same newfound confidence, would see benefits. The opposite seems to be true.

Note the willful incomprehension of the author about the possible effects of things like testosterone. 'Opposite seems to be true' my ass. But I suppose materialism and individual differences should never be allowed to get in the way of a good story about endemic sexism and racism...

(Sadly, this is only the second most infuriating statistical argument I've seen today. The first is a linear regression in the Washington Post about whippings vs productivity for slaves, in which they claim it shows whipping works. Aside from the usual correlation!=causality problem, their scatterplot clearly shows no such small positive correlation: their model does not fit the data because most slaves were never whipped, so the distribution isn't Gaussian but more like a zero-inflated model, and in the population that was whipped a non-zero number of times, more whippings correlate dramatically with decreased cotton production. At a guess, male slaves were much more likely to act out or run away or get into fights or refuse to produce, and would be whipped for it. It borders on malpractice to present this graph baldly without including sex as a covariate or, better yet, doing a mixture model - certainly any model diagnostics would flag this regression as bogus. The author's bio says he's a professor at Columbia who "studies the roots of poverty and violence in developing countries, especially Africa"; all I can think is that if that's what passes for analysis for him, then no wonder Africa remains poor and violent.)
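To make the zero-inflation point concrete, here is a toy simulation (Python, with entirely made-up numbers, not the Post's data) of how pooling a large all-zero group with a smaller non-zero subgroup can yield a positive overall slope even though the trend within the non-zero subgroup is negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical zero-inflated data: most individuals have a count of zero;
# the non-zero subgroup has a higher baseline but a negative within-group trend.
whipped = rng.random(n) < 0.2
whippings = np.where(whipped, rng.poisson(5, n) + 1, 0)
output = np.where(whipped, 100 - 4 * whippings, 50) + rng.normal(0, 10, n)

pooled_slope = np.polyfit(whippings, output, 1)[0]              # naive regression over everyone
sub = whippings > 0
subgroup_slope = np.polyfit(whippings[sub], output[sub], 1)[0]  # non-zero subgroup only

print(pooled_slope)    # positive: looks as if more whippings mean more output
print(subgroup_slope)  # negative: within the subgroup, more whippings, less output
```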

Replies from: David_Gerard, V_V, pragmatist, drethelin, David_Gerard
comment by David_Gerard · 2014-09-07T09:23:41.632Z · LW(p) · GW(p)

This one from someone going MTF was interesting: https://news.ycombinator.com/item?id=8279058 She found the sexism ridiculously more blatant than transphobia.

Replies from: shminux
comment by shminux · 2014-09-07T19:43:49.315Z · LW(p) · GW(p)

This is pretty disconcerting. However, I can't help but wonder if this is specific to some areas of the US. I've worked with women in various companies in various technical positions, and I'd heard plenty of "glass ceiling" complaints, where women were basically never promoted to the executive level (except for one exceptionally capable woman who became a CFO), possibly because the head office was in the South East and the board was an old boys' club. But I do not recall any mention of the kind of casual or subconscious sexism described in the link.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-09-08T21:48:10.980Z · LW(p) · GW(p)

It's possible that women only complain to you about glass ceilings because the effects are visible and they don't trust you to believe incidents as in that link. Next time you get a complaint about glass ceilings, ask about casual sexism.

Replies from: shminux
comment by shminux · 2014-09-08T23:31:13.245Z · LW(p) · GW(p)

Next time you get a complaint about glass ceilings, ask about casual sexism.

I did. They were pretty clear that they did not have any issues at the team- or project-lead levels, except maybe when a visiting executive was present at some meeting and behaved in a casually sexist way.

comment by V_V · 2014-09-07T15:31:45.352Z · LW(p) · GW(p)

Saying it's a direct A/B comparison is seriously overstating it. Transitioning is itself a huge confounder, and if it were true that time before/after were exactly comparable, that would debunk one of the main justifications for allowing sex-changes in the first place!

Also, confirmation bias on the subjects (if you assume that workplace sexism is a thing, then you are probably more likely to notice people doubting your competence and register it as "sexism" when you are a woman rather than when you are a man), confirmation bias/publication bias on the authors of these "studies" (would a book about how trans people experience no changes in workplace interactions get published? Would it get a review in the New Republic?), small sample, likely sampling bias (how were the subjects selected?), no attempts to falsify the hypothesis, and in general all the ills of arguing from anecdotal evidence.

comment by pragmatist · 2014-09-08T12:11:00.457Z · LW(p) · GW(p)

Note the willful incomprehension of the author about the possible effects of things like testosterone. 'Opposite seems to be true' my ass. But I suppose materialism and individual differences should never be allowed to get in the way of a good story about endemic sexism and racism...

I think you're misreading the author here. In that paragraph she's discussing two different hypotheses. The first is that increased testosterone makes post-transition trans men more confident, and the second is that the process of transitioning itself makes them more confident (because now they no longer suffer from anxiety and depression associated with gender dysphoria). The comparison with trans women is only intended to be a counterpoint to the second hypothesis, not the first, so there is no "willful incomprehension" here.

Replies from: gwern
comment by gwern · 2014-09-08T19:36:15.548Z · LW(p) · GW(p)

No, I'm not misreading her. The first hypothesis, shifts in testosterone+estrogen levels, subsumes the second and also addresses the criticism she offers of it. She's not seriously thinking about it.

Replies from: pragmatist
comment by pragmatist · 2014-09-09T10:41:51.323Z · LW(p) · GW(p)

The first hypothesis doesn't subsume the second. The second hypothesis is that the increased confidence comes from increased psychological well-being due to no longer inhabiting a body you don't identify with. If that was the sole (or primary) reason for increased confidence among post-transition trans men, then we should expect the effect to be symmetric, and for post-transition trans women to exhibit increased confidence too. The fact that they don't suggests that we should look for a different explanation, one that distinguishes between trans men and women. The testosterone hypothesis is one plausible possibility. Institutional sexism is another.

Replies from: gwern
comment by gwern · 2014-09-09T20:55:29.557Z · LW(p) · GW(p)

If there are differential perceived benefits in social prestige/power from transitioning based on direction, this is consistent with there being only one factor (sexism) conditional on direction, but also consistent with there being unconditional benefits plus the pushmi-pullyu effect of swapping testosterone & other androgens for estrogen etc., in which the net effect for the MTF is indeterminate. I am willing to take their word on transitioning being good for them, which accounts for the first factor; I prefer experimentally demonstrated effects from powerful mind-altering hormones to unprovable spooks like institutional sexism; and so the hormone model seems to me to fit much better.

comment by drethelin · 2014-09-08T06:53:57.548Z · LW(p) · GW(p)

Transition is a confounder, but this is still interesting information even if the explanation is something like "a transitioned person gets taken more seriously due to greater confidence in themselves" or whatever other hypothesis, rather than proving stuff about gender.

comment by David_Gerard · 2014-09-07T21:59:09.401Z · LW(p) · GW(p)

You're seriously raising the notion of testosterone as magical competence juice as an explanation worth taking seriously? This would make teenage males the most competent and convincing people on the planet.

Replies from: gjm
comment by gjm · 2014-09-07T23:34:02.670Z · LW(p) · GW(p)

I took the claim to be something different: testosterone is magical confidence juice, and at reasonable levels of competence more confidence leads to greater career success.

Replies from: None
comment by [deleted] · 2014-09-08T00:29:04.837Z · LW(p) · GW(p)

Indeed, that is the sane reading of gwern's comment.

comment by D_Malik · 2014-09-07T15:28:36.771Z · LW(p) · GW(p)
  1. I know of at least one male-to-female transgendered person who has made the exact opposite claim, viz. that women are treated better by society. (Not going to dig up a link.)
  2. I would prefer not to see gender politics on LW, especially when the connection to rationality is tenuous.
Replies from: drethelin, None, pragmatist, None
comment by drethelin · 2014-09-08T06:48:54.336Z · LW(p) · GW(p)

how the hell is a discussion about people's biases in regards to someone's perceived gender, when they are pretty much the same person with the same expertise, not OBVIOUSLY connected to rationality? Tenuous my ass.

Re: "women are treated better" I don't know if you're straw manning the person you're talking about but different genders are treated differently in different contexts. It's pretty interesting to see what kind of effects people see when they transition in different areas of life, and I don't think that really counts as "gender politics".

comment by [deleted] · 2014-09-07T20:26:27.185Z · LW(p) · GW(p)

As to #1, though I know someone who has gone male-to-female and decidedly does not make that claim, I would not find it terribly unlikely that someone who goes through a transition in either direction will be somewhat more likely to find their new status superior to their old status.

comment by pragmatist · 2014-09-08T12:00:32.039Z · LW(p) · GW(p)

Posts about biases that are fairly common and often unconscious/unintentional are not just tenuously connected to rationality. And since we're discussing preferences, I would prefer not to see any discussion of gender inequities immediately get labeled "politics", given all the connotations that label carries.

comment by [deleted] · 2014-09-08T00:29:58.108Z · LW(p) · GW(p)

I would prefer not to see gender politics on LW, especially when the connection to rationality is tenuous.

Agreed.

comment by Joshua_Blaine · 2014-09-03T22:06:47.791Z · LW(p) · GW(p)

My work on converting The Useful Idea of Truth into a video is going well. I didn't successfully anticipate the time that would be necessary to finish, but things are getting done at an acceptable pace. The best thing I can say, for sure, is that the overall style and presentation of the work has come along nicely since the start of this project, especially after working in some of the suggestions and impressions I've gotten from people.

(Included here is a short GIF of one of the recent portions that I'm particularly fond of, so additional criticism and suggestions for improvement are especially welcome.)

Replies from: Dorikka, Vulture
comment by Dorikka · 2014-09-04T02:41:26.141Z · LW(p) · GW(p)

Typing just criticism because I'm typing on a tablet: the text seems to appear very slowly, and I become instantly frustrated because I can't read it at normal speed.

Replies from: bramflakes
comment by bramflakes · 2014-09-04T13:33:22.684Z · LW(p) · GW(p)

I think that's the speed it's being spoken aloud.

Replies from: Dorikka
comment by Dorikka · 2014-09-04T23:24:55.931Z · LW(p) · GW(p)

Hm. That would make sense, though it would make it no less frustrating for me. Perhaps it would make it better to use a style similar to the "Minute Physics" videos, where not every spoken word is shown.

Replies from: gjm
comment by gjm · 2014-09-05T09:55:23.994Z · LW(p) · GW(p)

Or present each sentence as a whole when the speaker starts saying it. (Reading is more "chunky" than listening; a single fixation of the eyes may take in multiple words. Or conceivably only part of a really long word. So presenting exactly one word at a time is rather weird.)

Replies from: Joshua_Blaine
comment by Joshua_Blaine · 2014-09-06T00:19:04.870Z · LW(p) · GW(p)

That's fantastic advice, and it's made me realize a lesson in gradually adapting my design decisions.

My original plans included (mostly) kinetic typography with the occasional visual aid. The elaborate style used for presenting the text was the main mechanic for capturing attention and differentiating it from an audiobook. As work was being done, however, I started adding more visualizations, and making the visualizations more compelling, more the point of focus, and otherwise moving into animating scenarios rather than animating words. The text, so as not to distract from what was now the focus of the video, has become much more like closed captioning than anything else, and I hadn't realized that until now.

I may or may not incorporate a more "chunky" presentation of words in this video (mostly because the thought of going back through what I have already and changing it is a daunting task, and negative-reinforcement for ever completing this thing at all), but I'm happy to say it's something that now exists in my possible design space, and will definitely be a consideration for future videos.

Replies from: Dorikka
comment by Dorikka · 2014-09-06T00:35:11.439Z · LW(p) · GW(p)

Just as a heads-up (now that I'm typing on a real keyboard), I'm glad that you're taking the time to illustrate/animate these concepts, and overall I really like the part that I've seen. Thanks for the good work so far, and I hope that you keep going with it.

comment by Vulture · 2014-09-05T14:42:26.634Z · LW(p) · GW(p)

My first impression is that the text looks very cramped - it would probably be very difficult to read from far away.

Replies from: Joshua_Blaine
comment by Joshua_Blaine · 2014-09-06T00:02:52.180Z · LW(p) · GW(p)

The final product will be 720x1280, so hopefully that isn't a significant problem. I'll try and keep wider kerning/spacing in mind as I move forward, though. Thank you for the feedback!

comment by listic · 2014-09-03T10:39:18.980Z · LW(p) · GW(p)

There was an effort by some Less Wrong folks to experimentally prove the safety of lucid dreaming. Did this end with any conclusive results? Can I get in touch with you guys?

Replies from: gwern, None
comment by gwern · 2014-09-03T23:01:40.053Z · LW(p) · GW(p)

Speaking of lucid dreaming, the other day I ran into some very interesting research about tACS (the dual of tDCS) being used during REM sleep to induce lucid dreaming in naive subjects with something like a 50% success rate: "Induction of self awareness in dreams through frontal low current stimulation of gamma activity", Voss et al 2014.

Unfortunately, a bunch of reading up on the topic of tACS indicates that there aren't really any tACS devices available which are both safe & cheap. (Which is too bad, because with an effect size like that it should both be easy to verify the effect and very useful if it pans out.)

comment by [deleted] · 2014-09-03T21:36:06.409Z · LW(p) · GW(p)

Out of curiosity, do you suspect (let's say with p >= .05) that lucid dreaming is unsafe? Or do you know of someone on this site who does? I'd like to know why, because I lucid dream somewhat frequently. But I don't personally see any reason to think it would be less safe than regular dreaming, especially as I see awareness while dreaming as something on a sliding scale, not a binary "yes" or "no" question.

Replies from: Leonhart
comment by Leonhart · 2014-09-04T20:35:16.192Z · LW(p) · GW(p)

Learning to lucid dream, from everything I've read on the subject, involves progressively defeating whatever mechanism usually provides amnesia on waking. Having too much access to memories of nonexistent events seems an epistemically unsafe thing. I have one or two memories from a lifetime of dreaming, and I cannot distinguish them from life memories by any individual texture or quality; only by the fact that they don't cohere with my other memories. This scared me greatly.

Replies from: KaceyNow
comment by KaceyNow · 2014-09-06T05:01:11.791Z · LW(p) · GW(p)

Improving dream recall isn't necessarily important for lucid dreaming -- I practiced lucid dreaming for some years without any explicit attention to it. I can imagine ways it would be helpful: analyzing your dreams will help you recognize when you are dreaming, plus there's not much point to a lucid dream if you don't remember it.

My fears are more on the opposite side of things; some people advocate lucid dreaming methods where you slip directly from wake to lucid dream, but this requires passing through some rather terrifying states of consciousness I can't bring myself to intentionally experience.

comment by [deleted] · 2014-09-02T14:17:42.697Z · LW(p) · GW(p)

"If you're not at the leading edge of some rapidly changing field, you can get to one. For example, anyone reasonably smart can probably get to an edge of programming (e.g. building mobile apps) in a year." - Paul Graham in http://www.paulgraham.com/startupideas.html

I'd love to hear some actual programmers' opinions about this claim.

Replies from: gjm, Lumifer, ChristianKl, Viliam_Bur, NancyLebovitz, Nornagest, chaosmage
comment by gjm · 2014-09-03T17:32:33.285Z · LW(p) · GW(p)

Not exactly about that claim but addressing stronger and less plausible versions of it: Teach Yourself Programming in Ten Years by Peter Norvig.

comment by Lumifer · 2014-09-03T17:03:04.050Z · LW(p) · GW(p)

First, Paul Graham's idea of "anyone reasonably smart" probably involves no more than the top 5% of the population and likely even less X-)

Second, while it's not hard to get to the "edge", it's less trivial to do something useful while being there -- such as advancing that edge.

comment by ChristianKl · 2014-09-02T16:21:30.436Z · LW(p) · GW(p)

I think it depends a lot on what you mean by "being at the leading edge" of mobile app development.

Programming an Android app that works isn't that hard. On the other hand that doesn't mean that you understand everything there to know about Android app development.

I remember from my Informatics A lectures at university, which were in Haskell, that at the end of the semester some of the students still didn't understand the concept of recursion.

Someone without a math or computer science background who has learned Android programming from the standard tutorials is probably not going to use recursion for problems that are neatly solved with it when designing his app. For a programmer who simply considers principles like recursion common sense, it can be very hard to estimate how much time someone without that background needs to learn the concept.
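For readers without that background, here is a tiny, generic illustration (Python, not tied to any Android API) of the sort of problem recursion handles cleanly:

```python
# Flattening an arbitrarily nested list, e.g. a parsed API response,
# is awkward with plain loops but natural with recursion.
def flatten(xs):
    out = []
    for x in xs:
        if isinstance(x, list):
            out.extend(flatten(x))  # recurse into the nested sub-list
        else:
            out.append(x)
    return out

print(flatten([1, [2, [3, 4]], 5]))  # -> [1, 2, 3, 4, 5]
```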

You can program in Android without knowing exactly when a given object will be garbage collected. Multithreading can be complicated. Someone with years of experience in developing Android apps will likely outperform a nonprogrammer who spends a year learning Android but that doesn't mean that the second person can't find work as an Android developer.

comment by Viliam_Bur · 2014-09-06T16:38:25.361Z · LW(p) · GW(p)

I would estimate that to be good at programming in general, you need 10+ years of practice. After that, to become good at something new, e.g. building mobile apps, 1 year should be enough.

But it depends on how much time you can spend learning. Can you spend all your days learning? Or does your daily job take most of your energy and time, and then you have to split the remaining time between learning the new thing and having a social life? The 1 year estimate is for the best case.

For example, when I started learning programming as a teenager, I had a lot of free time, and I spent a lot of it programming. Later, when I worked as a programmer, I kept practicing my skills almost every day. However, when I am learning something new now, I must do it in evenings and weekends (but I would also like to spend that time with my girlfriend), so it goes rather slowly.

comment by NancyLebovitz · 2014-09-06T02:36:38.692Z · LW(p) · GW(p)

If you want to make great/lucrative apps, how hard is the programming compared to other sorts of programming?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2014-09-06T05:47:17.319Z · LW(p) · GW(p)

I think that is Paul Graham's point: a new field may be easier than an old field, especially in that it has problems that are easily solved.

But that article is a couple years old and mobile apps are much more mature. It is much more difficult to achieve the standard of polish now than then, although much of that difficulty is not about programming.

comment by Nornagest · 2014-09-03T18:01:58.216Z · LW(p) · GW(p)

Starting from zero programming knowledge, I think you can probably get a programming job in a hot subfield in a year if you're reasonably smart and dedicated, if you grok abstraction (probable if you did well in high school calculus, or if you read Less Wrong), and if you can successfully work around the guild rituals involved. You won't be an expert in anything, but you'll be able to do decent work and make decent money.

Being at the leading edge isn't hard; all that takes is buzzword compliance. Pushing it forward is hard, and unless you're exceptionally talented and hard-working I think that'd take significantly more than a year.

comment by chaosmage · 2014-09-05T18:42:00.126Z · LW(p) · GW(p)

Of all the smart and dedicated programmers I know, I'm confident not one would claim to have been any good in under five years of practice.

If you pick a tiny area of expertise, say coding modifications of existing apps using one particular API using two or three particular libraries, you could probably be producing publishable results in a year. But you won't be as efficient, and won't have the same job security (because the market changes so quickly) as the guys who put in the years to learn to think like a programmer.

comment by mgg · 2014-09-05T05:07:11.297Z · LW(p) · GW(p)

Why does Eliezer love me?

In many articles, EY mentions that Death is bad, as if it's some terminal value. That even the loss of me, is somehow negative for him. Why?

I've been thinking that it's Suffering that should be minimized, in general. Death is only painful for people because of the loss others suffer. Yes, the logical conclusion is that we should completely destroy the universe, in a quick and painless manner. The "painless" part is the catch, of course, and it may be so intractable as to render the entire thought pointless. (That is, we cannot achieve this, so might as well give up and focus on making things better.)

Even outside of Suffering, I still do not see why an arbitrary person is to be valued. Again, EY seems to have this as some terminal value. Why?

I love my children, I love my family, I love some friends. After that, I don't really care all that much about individuals, except to the extent that I'd prefer them to not suffer. I certainly don't feel their existence alone is something that valuable, intrinsically.

Am I wicked or something? Am I missing some basic reasoning? I see my viewpoint may be viewed as "negative utilitarian", but I haven't come across anything in particular that makes such a position less desirable.

Replies from: Metus, chaosmage, polymathwannabe, Viliam_Bur
comment by Metus · 2014-09-05T11:58:08.435Z · LW(p) · GW(p)

A good portion of LessWrong is unreadable for me as it is based on some kind of altruistic axiom. Personally, I care about myself, my immediate family and a few friends. I will feel a pang of suffering when I see people suffering but I do not feel that pang when I hear about people I don't know suffering, so I conclude that I don't care about other people beyond some abstract measure of proximity and their economic utility for me.

Replies from: jkaufman, None, twanvl
comment by jefftk (jkaufman) · 2014-09-08T19:23:56.817Z · LW(p) · GW(p)

So if there were a button you could press that would make one of your close friends happier but would kill someone you haven't met, you would be totally ok pressing it?

Replies from: army1987, Metus
comment by A1987dM (army1987) · 2014-09-09T16:35:43.702Z · LW(p) · GW(p)

I wouldn't, but that's more because of superrationality reasons (if I could sign a contract with everybody else in the world committing to never press such a button, I totally would sign it) than because I don't really care about my friend that much more than about the stranger.

comment by Metus · 2014-09-08T19:51:37.079Z · LW(p) · GW(p)

Oh, so many variations to this experiment to test the intuition behind my position.

Your version? Depends on how much happier this friend gets. If it is the equivalent of having a cup of coffee, I'd just get them that and live on knowing that I am not a murderer. If it is eternal bliss this friend gets, then I wouldn't do it either, as I'd get jealous and would have to live with that and the fact that I am a murderer.

I'd be willing to press the button for personal gain though. Not for a cup of coffee, but a higher threshold.

What I would be willing to do, though, is press a button that prevents a person from being born, as long as that is not one of my potential heirs or one of my friends.

comment by [deleted] · 2014-09-08T15:22:05.144Z · LW(p) · GW(p)

I care about (read: have a vested interest in) people who can influence my wellbeing and choices. Because all human beings have the potential to do this, I care about them to some degree, great or small. Because I cannot physically empathize with seven billion humans at once on an equal or appropriate level, I use a general altruistic axiom to determine how to act towards people I do not have the resources to physically care about.

That's my reason, at least, for having an altruistic axiom, explained in a terribly simple manner. I'm sure there are other, better explanations for working off altruistic axioms. I'm not making a case for the axiom, just explaining what I see as my reasons for having it.

Replies from: Metus
comment by Metus · 2014-09-08T19:54:20.242Z · LW(p) · GW(p)

This thing is turning into a tautology. I care about people to the degree that they are useful to me. My friends and family are incredibly useful in the great state of mind they put me in. A person living in extreme poverty I have never met, not so much. They could be useful were they highly educated and had access to sufficient capital to leverage their knowledge complementary to my skills, but the initial investment far exceeds the potential gain.

What irks me is not the statement above but the tradeoff being made in utilitarianism: That the pain of other people should count as much as my pain. It simply does not.

comment by twanvl · 2014-09-08T10:58:33.973Z · LW(p) · GW(p)

If everyone (or just most people) thinks like you, then seeing people suffer makes them suffer as well. And that makes their friends suffer, and so on. So, by transitivity, you should expect to suffer at least a little bit when people you don't know directly are suffering.

But I don't think it is about the feeling. I also don't really feel anything when I hear about some number of people dying in a far away place. Still, I believe that the world would be a better place if people were not dying there. If I am in a position to help people, I believe that in the long run the result is better if I just shut up and multiply and help many far away people, rather than caring mostly about a few friends and neighbors.

Replies from: Metus
comment by Metus · 2014-09-08T19:56:35.057Z · LW(p) · GW(p)

If we'd all just cooperate maybe this would be a better world. But we don't and it is not.

I have yet to see a calculation that shows that my gift to some far away people instead of a fine dinner with my friends will give me a return on my money in the long run. Assume that all people do this to avoid freerider arguments.

comment by chaosmage · 2014-09-05T12:54:40.055Z · LW(p) · GW(p)

You don't know that he does. You only know that he says he does. Also, MIRI needs your donations!

In all seriousness, it appears that he simply has a much larger circle of empathy than you do. Yours only includes yourself, children, family and friends, which sounds like (what Peter Singer has convincingly argued to be) the default setting that evolution presumably gave you a sense of empathy for, because that'd promote the survival of your genes. But that circle can expand, and in fact it has tended to expand over the last couple of millennia. In Eliezer's case, it appears to include at least all humans. And why? Well, my suspicion is that people have a distaste for contradictions, and any arbitrary limit to empathy is inherently fraught with contradictions. ("Is it okay for a policeman to not care about you because you're not his friend?" "How many non-friends would you kill to save the life of a friend?" etc.) And maybe Eliezer simply has a greater sensitivity to, and distaste for, contradictions than you do.

Replies from: mgg
comment by mgg · 2014-09-05T18:09:13.838Z · LW(p) · GW(p)

This is something to think about, thanks.

What about the seeming preference for existence over non-existence? How do you morally justify keeping people around when there is so much suffering? In dust specks versus torture, why not simply erase everyone?

Replies from: chaosmage, TsviBT
comment by chaosmage · 2014-09-05T18:27:17.062Z · LW(p) · GW(p)

People, by and large, appear to favor suffering over suicide. I don't think it can be ethical to overrule that choice.

Replies from: army1987, mgg
comment by A1987dM (army1987) · 2014-09-06T08:19:29.084Z · LW(p) · GW(p)

People, by and large, appear to favor suffering over suicide.

They just don't know how bad suffering gets.

comment by mgg · 2014-09-06T20:56:21.458Z · LW(p) · GW(p)

It is if we define a utility function with a strict failure mode for TotalSuffering > 0. Non-existent people don't really count, do they?

Replies from: Bakkot
comment by Bakkot · 2014-09-14T17:01:05.151Z · LW(p) · GW(p)

It is if we define a utility function with a strict failure mode for TotalSuffering > 0.

Yeah, but... we don't.

(Below I'm going to address that case specifically. However, more generally, defining utility functions which assign zero utility to a broad class of possible worlds is a problem, because then you're indifferent between all of them. Does running around stabbing children seem like a morally neutral act to you, in light of the fact that doing it or not doing it will not have an effect on total utility (because total suffering will remain positive)? If no, that's not the utility function you want to talk about.)

Anyway, as far as I can tell, you've either discovered or reinvented negative utilitarianism. Pretty much no one around here accepts negative utilitarianism, mostly on the grounds of it disagreeing very strongly with moral intuition. (For example, most people would not regard it as a moral act to instantly obliterate Earth and everyone on it.) For me, at least, my objection is that I prefer to live with some suffering than not to live at all - and this would be true even if I was perfectly selfish and didn't care what effects my death would have on anyone else. So before we can talk usefully about this, I have to ask: leaving aside concerns about the effects of your death on others, would you prefer to die than to live with any amount of suffering?

Replies from: mgg
comment by mgg · 2014-09-23T22:22:44.711Z · LW(p) · GW(p)

Thanks for the reply. Yes I found out the term is "negative utilitarianism". I suppose I can search and find rebuttals of that concept. I didn't mean that the function was "if suffering > 0 then 0", just that suffering should be a massively dominating term, so that no possible worlds with real suffering outrank worlds with less suffering.

As to your question about my personal preference on life, it really depends on the level of suffering. At the moment, no, things are alright. But it has not always been that way, and it's not hard to see it crossing over again.

I would definitely obliterate everyone on Earth, though, and would view not doing so, if capable, to be immoral. Purely because so many sentient creatures are undergoing a terrible existence, and the fact that you and me are having an alright time doesn't make up for it.

comment by TsviBT · 2014-09-06T09:09:09.748Z · LW(p) · GW(p)

All else being equal, if you have the choice, would you pick (a) your son/daughter immediately ceases to exist, or (b) your son/daughter experiences a very long, joyous life, filled with love and challenge and learning, and yes, some dust specks and suffering, but overall something they would describe as "an awesome time"? (The fact that you might be upset if they ceased to exist is not the point here, so let it be specified that (a) is actually everyone disappearing, which includes your child as a special case, and likewise (b) for everyone, again including your child as a special case.)

Replies from: mgg
comment by mgg · 2014-09-06T20:59:54.618Z · LW(p) · GW(p)

If the suffering "rounds down" to 0 for everyone, sure, A is fine. That is, a bit of pain in order to keep Fun. But no hellish levels of suffering for anyone. Otherwise, B. Given how the world currently looks, and MWI, it's hard to see how it's possible to end up with everyone having pain that rounds down to 0.

So given the current world and my current understanding, if someone gave me a button to press that'd eliminate earth in a minute or so, I'd press it without hesitation.

comment by polymathwannabe · 2014-09-05T14:21:05.548Z · LW(p) · GW(p)

Living among billions of happy people who have realistic chances to meet their goals is a world I find much more desirable than a world where my friends and I are the only successful people in existence.

On one hand, there's the cold utilitarian who only values other lives inasmuch as they further hir goals, and assigns no intrinsic worth to whichever goals they may have for themselves. This position does not coincide, but overlaps, with solipsism. On the other hand, there's what we could call the naïve Catholic who holds that more life is always better life, no matter in what horrid conditions. This position does not coincide, but overlaps, with panpsychism.

The strong altruistic component of EY's philosophy is what sets it on a higher moral ground than Ayn Rand's. For all her support of reason, Rand's fatal flaw was that she failed to grasp the need for altruism; it was anathema to her, even if her brand of selfishness was strange in that she recognized other people's right to be selfish too (the popular understanding of selfishness is more predatory than even she allowed).

EY agrees with Rand's position that every mind should be free to improve itself, but he doesn't dismiss cooperation. It makes perfect sense: The ferociously competitive realm of natural selection does often select for cooperation, which strongly suggests it's a useful strategy. I can't claim to divine his reasons, but the bottom line is that EY gets altruism.

(As chaosmage suggested, it is not impossible that EY merely pretends to be an altruist so people will feel more comfortable letting him talk his way into world domination (ahem, optimization), but the writing style of his texts about the future of humanity and about how much it matters to him is likelier if he really believes what he says.)

Still, the question stands: Why care about random people? I notice it's difficult for me to verbalize this point because it's intuitively obvious to me, so much so that my gut activates a red alarm at the sight of a fellow human who doesn't share that feeling.

Whence empathy? Although empathy has a long tradition of support in many philosophies, antiquity alone is not a valid argument. Warfaring chimpanzees share as much DNA with us as hippie bonobos; mirror neurons are not conclusively proven to exist; and disguised sociopathy sounds like an optimal strategy.

Buddhism has a concept that I find highly appealing. It's called metta and it basically states that sentient beings' preference for not suffering is one you can readily agree with because you're a sentient being too. There are several ways to express the same idea in contemporary terms: We're all in this together, we're not so different, and other feel-good platitudes.

We can go one step further and assert this: A world where only some personal sets of preferences get to be realized runs the risk of your preferences being ignored, because there's no guarantee that you will be the one who decides which preferences are favored; whereas a world where all personal sets of preferences are equally respected is the one where yours have the best chance of being realized. To paraphrase the Toyota ads, what's good for the entire world is good for you.

(I know most LWers will demand a selfish justification for altruism because any rational decision theory will require it, but I feel hypocritical having to provide a selfish argument for altruism. Ideally, caring for others shouldn't need to be justified by resorting to an expected personal benefit, but I acknowledge that trying to advance this point is like trying to show a Christian ascetic that hoping to get to heaven by renouncing worldly pleasures is the epitome of calculated hedonism. I still haven't resolved this contradiction, but fortunately this is the one place in all the Internet where I can feel safe expecting to be proved wrong.)

Replies from: NancyLebovitz, mgg
comment by NancyLebovitz · 2014-09-06T02:47:10.409Z · LW(p) · GW(p)

Another odd thing about Rand's egoism is that it's mostly directed towards being able to pursue one's goal of making excellent things for other people, not being hassled in the process, and being appropriately rewarded.

comment by mgg · 2014-09-05T18:13:11.536Z · LW(p) · GW(p)

But he views extinction-level events as "that much worse" than a single death. But is an extinction-level event that bad? If everyone gets wiped out, there's no suffering left.

I'm not against others being happy and successful, and sure, that's better than them not being. But I seem to have no preference for anyone existing. Even myself, my kids, my family - if I could, I'd erase the entire lot of us, but it's just not practical.

Replies from: polymathwannabe
comment by polymathwannabe · 2014-09-05T18:38:28.200Z · LW(p) · GW(p)

Your original post says,

the logical conclusion is that we should completely destroy the universe, in a quick and painless manner

Would you please describe the sequence of thoughts leading to that conclusion?

Replies from: mgg
comment by mgg · 2014-09-06T20:55:14.779Z · LW(p) · GW(p)

Sure. Goal is to make TotalSuffering as small as possible, where each individual Suffering is >= 0. There may be some level of individual Suffering that rounds down to zero, like the pain of hurting your leg while trying to run faster, or stuff like that. The goal is to make sure no one is in real suffering, not eliminate all Fun.

One approach to do that is to make sure everyone is not suffering. That entails a gigantic amount of work. And if I understand MWI, it's actually impossible, as branches will happen creating a sort of hell. (Only considering forward branches.) Sure, it "all averages out to normal", but tell that to someone in a hell branch.

The other way is to eliminate all life (or the universe). Suffering is now at 0, an optimal value.
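As a toy rendering of that arithmetic (a sketch of the stated position, not an endorsement of it): if utility is just the negated sum of non-negative suffering terms, the empty world is trivially optimal.

```python
# Toy version of the "shut up and multiply" arithmetic above: with
# utility = -(total suffering) and every term >= 0, no populated world
# can ever beat the empty one. (Illustrative numbers only.)
def utility(sufferings):
    return -sum(sufferings)

populated_world = [0, 1, 7, 2]   # made-up individual suffering levels
empty_world = []

print(utility(populated_world))  # -10
print(utility(empty_world))      # 0, the maximum attainable value
```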

comment by Viliam_Bur · 2014-09-06T16:52:46.552Z · LW(p) · GW(p)

So, you say you have a "preference not to suffer" for everyone, but "preference not to die" only for a few people, if I read it correctly.

When you are asking how someone can have a "preference not to die" for everyone, I think you should also ask how you have a "preference not to suffer" for everyone, because to me it seems rather similar. I mean, the part of "preference not to ... for everyone" is the same, so we can ask whether this is realistic, or is just some kind of illusion, to create a better self-image. The difference between wanting someone not to suffer and not to die does not seem so big to me, knowing that many people prefer not to die, and that the idea that they will die causes them suffering.

Another thing is the technical limitation of the human brain. If the death or suffering of one person causes you some amount of sadness (whether we measure it by neurons firing, or by hormones in the blood), of course the death or suffering of a million people cannot cause you a million times more neuron signals or hormones, because such a thing would kill you instantly. The human brain does not have the capacity to multiply this.

But for a transhumanist this is simply a bug in the human brain. What our brains do is not what we want them to do. It is not "what my brain does, is by definition what I think is correct". We are here to learn about biases and try to fix them. The human brain's inability to properly multiply emotions is simply yet another such bias. The fact that my brain is unable to care about some things (on the emotional level) does not mean that I don't. It merely means that currently I don't have the capacity to feel it on the gut level.

Replies from: mgg
comment by mgg · 2014-09-06T21:06:29.074Z · LW(p) · GW(p)

Good points. But I'm thinking that the pain of death is purely because of the loss others feel. So if I could eliminate my entire family and everyone they know (which ends up pulling essentially every person alive into the graph), painlessly and quickly, I'd do it.

The bug of scope insensitivity doesn't apply if everyone gets wiped out nicely, because then the total suffering is 0. So, for instance, grey goo taking over the world in an hour - that'd cause a spike of suffering, but then levels drop to 0, so I think it's alright. Whereas an asteroid that kills 90% of people, that'd leave a huge amount of suffering left for the survivors.

In short, the pain of one child dying is the sum of the pain others feel, not an intrinsic to that child dying. So if you shut up and multiply with everyone dying, you get 0. Right?

comment by beserker1 · 2014-09-04T00:35:32.950Z · LW(p) · GW(p)

Hi all, I have made the decision to attend App Academy (http://www.appacademy.io/#p-home) starting in October in SF. I saw that there are some alumni of the program on this message board, and was wondering if anyone had any advice to share in order to properly prepare for the coding bootcamp experience?

comment by [deleted] · 2014-09-02T22:43:54.491Z · LW(p) · GW(p)

Are there any jugglers or otherwise circus-skilled people in the rationality community?

I suspect that an interest in technical expertise can draw someone to both circus and rationality.

Also, it's possible that performers gain some tacit rationality (in the realm of learning to learn effectively, at least) from the feedback loop between practice and performance.

(If you're curious how high the skill ladder can go for something like juggling, here (youtube) is my favorite video to show the uninitiated.)

Replies from: VAuroch, Elo, MathiasZaman, sixes_and_sevens, drethelin
comment by VAuroch · 2014-09-04T21:41:19.478Z · LW(p) · GW(p)

AFAIK, the highest-concentration populations of both, at least in the US, are geographically semi-overlapping (the Bay is a magnet for both), so I'd be surprised if there isn't. Certainly I know many people who have similar interests to LW and who are also serious circus-arts people, though most of them prefer fire props to straight balls. I personally am at least a single-person overlap, though I don't consider myself to be at a high level in either.

As a sidenote, watching this video gave me a weird sense of how much my standards are distorted, because everything before about 2:30 in it looked totally pedestrian to me; I personally know a half-dozen people who could pull off everything up to that point.

comment by Elo · 2014-09-04T11:31:30.942Z · LW(p) · GW(p)

I partake in a great deal of circus things. It has been my hobby for a few years. Message me if you have questions.

comment by MathiasZaman · 2014-09-03T13:58:07.714Z · LW(p) · GW(p)

I can juggle three balls kinda clumsily, but that's probably not what you're looking for. My brother used to be (still kinda is) a semi-professional juggler, so I do have some experience with that community. From my experience there doesn't seem to be a greater degree of rationality in the circus community, compared to other communities where people learn various skills.

What I did like from that community is the different learning style they have. Learning those skills is very hands-on, and going from being a total newcomer to being able to do something well enough to show off to friends and family doesn't take that long. (You can learn to juggle three objects or ride a unicycle in under a day.)

comment by sixes_and_sevens · 2014-09-03T09:34:06.720Z · LW(p) · GW(p)

It looks like this is your first post. Welcome to Less Wrong!

I suspect there are quite a few jugglers / circus-skills folk in the rationality community, though I'm not sure I'd draw any kind of associative conclusion. After all, an interest in pot, didgeridoos and the narcissistic approval of one's peers can also draw someone to juggling.

Replies from: None
comment by [deleted] · 2014-09-03T13:10:50.906Z · LW(p) · GW(p)

Thanks!

For me it's the impulse to become stronger that draws me both to rationality ("martial art of the mind") and to circus (as arbitrary as that may seem).

comment by drethelin · 2014-09-03T17:08:09.970Z · LW(p) · GW(p)

Fiddlemath (Matt Elder) is a good juggler.

comment by aberglas · 2014-09-02T03:08:48.994Z · LW(p) · GW(p)

Reviewers wanted for New Book -- When Computers Can Really Think.

The book aims at a general audience, and does not simply assume that an AGI can be built. It differs from others by considering how natural selection would ultimately shape an AGI's motivations. It argues against the Orthogonality Principle, suggesting instead that there is ultimately only one super goal, namely the need to exist. It also contains a semi-technical overview of artificial intelligence technologies for the non-expert/student.

An overview can be found at

www.ComputersThink.com

Please let me know if you would be interested in reviewing a late draft. Any feedback would be most welcome. Anthony@berglas.org

Replies from: cameroncowan, polymathwannabe, Transfuturist
comment by cameroncowan · 2014-09-02T19:17:56.942Z · LW(p) · GW(p)

I'm totally down, cameron@cameroncowan.net

comment by polymathwannabe · 2014-09-02T12:17:05.066Z · LW(p) · GW(p)

I'm always happy to proofread. PM me with the details.

comment by Transfuturist · 2014-09-02T08:58:25.943Z · LW(p) · GW(p)

It argues against the conjecture that the utility function is separate from optimization power? Do you mean that it argues against Omohundro's instrumental AI drives?

Replies from: Manfred
comment by Manfred · 2014-09-02T12:18:01.835Z · LW(p) · GW(p)

The whole point of instrumental drives is that they don't have to be in the utility function.

Replies from: Transfuturist
comment by Transfuturist · 2014-09-03T21:25:10.566Z · LW(p) · GW(p)

Yes, I know; they're convergent. I'm questioning what aberglas is arguing against with his Darwinist supergoal. It doesn't make sense to say that such a supergoal is mutually exclusive with the independence of utility and optimization power. It makes more sense to say that the supergoal is an alternative to Omohundro's instrumental drives.

I don't see how what aberglas wrote makes coherent sense.

Replies from: aberglas
comment by aberglas · 2014-09-29T06:07:12.389Z · LW(p) · GW(p)

Well, alternative if you like. I will post an elaboration as a full article.

comment by Dorikka · 2014-09-01T16:15:44.416Z · LW(p) · GW(p)

Are there any companies that do genotyping and try to predict health impact (like 23 and me prior to FDA action) that are still functioning and seem to provide useful info?

Replies from: TylerJay
comment by TylerJay · 2014-09-01T18:13:07.179Z · LW(p) · GW(p)

There are a number of 3rd party tools that allow you to upload your 23 and me raw data for analysis. Here is a list

Replies from: Dorikka
comment by Dorikka · 2014-09-03T03:21:51.487Z · LW(p) · GW(p)

Thanks!

comment by tetronian2 · 2014-09-02T02:23:00.131Z · LW(p) · GW(p)

Has anyone else seen the television show Brain Games? It is essentially intro-to-cognitive-biases aimed at the level of the average TV watcher; I was pleasantly surprised by how well it explains some basic biases with simple examples (though I have only seen an assortment of episodes from the 3rd and 4th season). However, most of the material given is not very actionable and is designed more for entertainment rather than self-improvement. Nevertheless, those interested in raising the sanity waterline and/or sparking interest in LW subjects among more average folk than we are might want to take a look at it.

Replies from: James_Miller
comment by James_Miller · 2014-09-02T14:26:10.091Z · LW(p) · GW(p)

Yes it's a great show.

comment by polymathwannabe · 2014-09-01T12:20:23.514Z · LW(p) · GW(p)

[Reposting this from last open thread; probably posted too late in the week to be seen]

In the context of Pixar's upcoming movie Inside Out, I just discovered the existence of a 1990s sitcom titled Herman's Head. I've watched a few episodes and it's hilarious to see how it represents the battle of agents in the mind. Sometimes they even include mental models of other people. I'm very excited to see how Pixar will do it.

comment by Gunnar_Zarncke · 2014-09-04T22:39:11.053Z · LW(p) · GW(p)

There was a posting recently, which I can't find, that mentioned how a society could lock itself into a hell where everybody knows that it is harmful to follow the rules (punish others) but nonetheless all continue. This falls into the same pattern as Hell and Moloch. Now I found a description of this in real life:

What struck me as I talked with teens about how race and class operated in their communities was their acceptance of norms they understood to be deeply problematic. In a nearby Los Angeles school, Traviesa, a Hispanic fifteen-year-old, explained, “If it comes down to it, we have to supposedly stick with our own races. ... That’s just the unwritten code of high school nowadays.” Traviesa didn’t want to behave this way, but the idea of fighting expectations was simply too exhausting and costly to consider.

From "It's complicated" by Dana Boyd http://www.danah.org/books/ItsComplicated.pdf

What do you think?

comment by Lumifer · 2014-09-03T16:58:48.513Z · LW(p) · GW(p)

A useful post about how convincing statistical evidence is (or can be) and whether you MUST believe peer-reviewed statistically significant studies.

http://andrewgelman.com/2014/09/03/disagree-alan-turing-daniel-kahneman-regarding-strength-statistical-evidence/

comment by Jan_Rzymkowski · 2014-09-05T08:13:10.740Z · LW(p) · GW(p)

What is R? LWers use it very often, but Google search doesn't provide any answers - which isn't surprising, it's only one letter.

Also: why is it considered so important?

Replies from: cousin_it, ChristianKl, Lumifer
comment by cousin_it · 2014-09-05T08:35:35.574Z · LW(p) · GW(p)

R is a piece of software for running statistical analyses on data and getting nice graphs. It's free, has a lot of stuff built in and is quite pleasant to use.

comment by ChristianKl · 2014-09-06T10:38:45.629Z · LW(p) · GW(p)

Out there in the world a lot of people use software like Excel for doing their data processing. They want to have tables where they see their data.

That has the advantage that you have a nice GUI that normal people can easily learn. However, some tasks take a lot of time with tables, and Excel automatically reformats your data when it thinks it knows better than you. Excel also doesn't handle 500,000 rows of data well. Excel doesn't make pretty customizable plots.

Often the choice is between doing a task for 15 minutes in manual labor in Excel or writing 5 lines in R that take you 15 minutes of reading the documentation to find the right parameters.

As a result, in a lot of professional contexts where statistics are needed, people use specialised statistics software. That might be SPSS, Stata, SAS or R. SPSS, Stata and SAS all need a license, while R is free software. State-of-the-art statistics is often done in R, and if someone invents a new statistical method they often publish an R package along with their paper to allow other people to use their shiny new technique.

It's worth noting that statisticians aren't primarily programmers, and R is built for statisticians. It has a lot of powerful magic functions with 20 optional parameters.

These days there are also libraries like Pandas for Python that allow you to do most of the things that R can do, while at the same time having a beautiful language.
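As a rough sketch of the "five lines instead of fifteen minutes of clicking" idea, here it is in Python/pandas (mentioned above) rather than R, with made-up file and column names:

```python
import pandas as pd

# Hypothetical 500,000-row file with "date" and "revenue" columns (made up).
df = pd.read_csv("sales.csv")
monthly = (df.assign(month=pd.to_datetime(df["date"]).dt.to_period("M"))
             .groupby("month")["revenue"].sum())

print(monthly.describe())   # summary statistics of the monthly totals
monthly.plot(kind="bar")    # quick plot, customizable far beyond Excel's defaults
```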

comment by Lumifer · 2014-09-05T14:48:11.760Z · LW(p) · GW(p)

What is R?

It's a programming language and environment which is widely used in the statistical community, in part because it has a LOT of statistics-related libraries available for it.

Historically, it's an open-source re-implementation of the programming language S, developed at Bell Labs in the mid-70s.

comment by polymathwannabe · 2014-09-05T02:07:07.053Z · LW(p) · GW(p)

Telepathy is apparently now a thing.

Replies from: None
comment by [deleted] · 2014-09-05T03:24:35.385Z · LW(p) · GW(p)

The interface that imparted information into a human subject was just a standard transcranial magnetic stimulation coil that they futzed around with until, for each subject, they found an orientation and intensity and frequency that made them see a spot in their visual field when it was activated. The subject would then report when they saw a spot, slowly decoding a binary string of seeing and not-seeing. That's nothing terribly new; what was new was that the on/off state was decided upon by a computer using electrodes to get a rough look at a sitting person's imagery/intent of moving their arms, and that person would visualize moving their arms or not in order to binary-encode the message.
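A minimal sketch of the encode/decode step being described (hypothetical Python, nothing specific to the study's actual hardware or message):

```python
# Encode a short message as bits; "1" = the receiver reports seeing the
# stimulated spot, "0" = no spot. Toy version of the pipeline described above.
def encode(msg):
    return [int(b) for ch in msg for b in format(ord(ch), "08b")]

def decode(bits):
    chunks = ["".join(str(b) for b in bits[i:i + 8]) for i in range(0, len(bits), 8)]
    return "".join(chr(int(c, 2)) for c in chunks)

bits = encode("hola")   # hypothetical message
print(bits[:8])         # first character as a sequence of seen/not-seen flags
print(decode(bits))     # -> "hola"
```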

comment by A1987dM (army1987) · 2014-09-03T17:04:13.686Z · LW(p) · GW(p)

Googling for "keep calm and maximize expected utility" (with the quotes) returns no hits. I'm somewhat surprised by that.

Replies from: shminux, Lumifer
comment by shminux · 2014-09-03T17:14:44.170Z · LW(p) · GW(p)

I would NOT want it on a t-shirt. Kills your expected utility right off.

Replies from: drethelin, ChristianKl
comment by drethelin · 2014-09-04T19:36:12.285Z · LW(p) · GW(p)

It's more that I think the utility of wearing such a shirt would generally be dominated by black swan scenarios such that it's hard to calculate.

comment by ChristianKl · 2014-09-03T19:48:08.777Z · LW(p) · GW(p)

Depends on where you wear the shirt. I think wearing it to a LessWrong meetup would work.

Replies from: Nornagest, shminux
comment by Nornagest · 2014-09-03T21:37:13.605Z · LW(p) · GW(p)

I have a strong aversion to wearing T-shirts with clever slogans. After thinking about it for a couple of minutes, I think the reason is that there's no good way to filter the message: if a joke falls flat you can avoid making similar jokes, but a T-shirt just hangs around like a dead octopus in a Jacuzzi, slowly growing more awkward and obtrusive.

A LW meetup wouldn't be the worst place to wear that one, but I don't think one would be homogeneous enough that I'd actually be comfortable doing it.

Replies from: VAuroch, ChristianKl
comment by VAuroch · 2014-09-04T20:40:00.629Z · LW(p) · GW(p)

I have a similar aversion for similar reasons, and choose to, when possible, display symbolic representations of ideas/groups rather than specific words; group members or people familiar with the same ideas will recognize them, and others won't.

It would be nice if there were good abstract symbols for LessWrong, rationalism, and/or EA. I've gotten good results from symbols for smaller groups (ex. the fandom for the awesome yet moribund MYST series), and it seems useful.

Replies from: Sherincall
comment by Sherincall · 2014-09-05T00:10:09.884Z · LW(p) · GW(p)

I made a T-shirt with the Pioneer Plaque drawing, and always try to wear it when I'm expecting to meet new people. Those who don't know what it is will just ignore it; it's unobtrusive and would likely be considered pretty. Those who do recognize it always make a comment.

comment by ChristianKl · 2014-09-04T11:40:17.722Z · LW(p) · GW(p)

It might be that I simply have a different cultural background from having been at a few Chaos Computer Congresses in Berlin. There the strongest clothing choice was a person running around in a Burka with a sign: "You get surveilled, I don't."

A T-Shirt with the slogan "keep calm and maximize expected utility" isn't something that seems awkward or obtrusive to myself.

I generally don't believe that avoiding clothing that can draw any attention is a good strategy.

Replies from: Nornagest
comment by Nornagest · 2014-09-04T19:54:06.065Z · LW(p) · GW(p)

I generally don't believe that avoiding clothing that can draw any attention is a good strategy.

I may have phrased that too strongly. The problem isn't that besloganned T-shirts carry a message, it's that that message casts itself too broadly and too obviously to people not in its audience; dog-whistling is common in fashion, but indiscriminate signaling is usually a faux pas. Compare wearing a shirt with a few Western details to showing up in spurs, leather chaps, and a ten-gallon hat.

This is mainly a problem with using text; I wouldn't find it awkward to wear a T-shirt with the skeletal formula of caffeine on it, or an equation I found elegant. Those would get glossed as meaningless symbols to the uninitiated; I'd look nerdy but not aggressively nerdy.

comment by shminux · 2014-09-03T21:26:02.552Z · LW(p) · GW(p)

Eh, maybe if you take it off right after. Even during a meetup it would give off a weird vibe to me.

comment by Lumifer · 2014-09-03T17:21:30.571Z · LW(p) · GW(p)

There's this for example. Or if you want to get more technical...

"0 votes, 993rd most popular", ouch X-D

comment by Salemicus · 2014-09-03T14:37:26.743Z · LW(p) · GW(p)

I have been considering writing a series of posts on the Just World Hypothesis, but before I do so, I'd like to gauge whether people would be interested. Tentative content summaries:

Post 1: Is the World Just? Short Answer: Yes

Key points:

  • Claims that the world is unjust usually rely on a notion of merit that is excessively detached from reality.
  • Sensible judgements of merit must operate reflexively with how much that merit is genuinely a benefit (cf. "what is the value of your values").
  • Compare: Basketball is "unfair" because it rewards height "too much."
  • Other claims that the world is unjust rest on definitions of luck so expansive that they swallow any notion of fairness.
  • Compare: Basketball is "unfair" because before we start playing the game, some people are better at basketball than others.
  • The world isn't perfectly just, and we can imagine an unjust world, but as a by-and-large claim, merit gets rewarded.

Post 2: The Just-World Hypothesis in the wild

Key points:

  • How does the "just-world hypothesis" as studied by academics differ from the just-world hypothesis as stated by survey respondents? (academics: far more totalised. believers: Just one of a competing set of heuristics)
  • When academics accuse believers of "blaming the victim," they are assuming the consequent - people really can be the authors of their own misfortune.
  • The ways in which the just-world hypothesis is a useful heuristic (multiple causation, information asymmetry, etc)
  • If someone keeps telling individually plausible stories about how they keep getting into car crashes which were the other person's fault, we are right to assume that they are actually a bad driver.
  • The ways in which the just-world hypothesis is a beneficial set of beliefs for the holder (locus of control, agency, health)

Post 3: How the Just World Hypothesis makes the world more just

Key points:

  • Even if people aren't necessarily the authors of their own misfortune, frequently the most just thing is to treat them "as if" they are.
  • If our repeated car crash victim from Post 2 knows that his claims of "bad luck" are going to be seen sceptically, he will drive more carefully.
  • Aligns incentives correctly.
  • Eliminates special pleading, and provides one set of rules for all.
  • Contrast "just world" culture where people compete to gain esteem by showing off their success with "beggar culture" where people compete for sympathy by showing how unfortunate they are.

Your comments would be appreciated.

Replies from: Lumifer, VAuroch, ChristianKl
comment by Lumifer · 2014-09-03T15:52:54.343Z · LW(p) · GW(p)

I think you need to start by defining what you mean by "just" -- that's a... controversial issue.

Replies from: Salemicus
comment by Salemicus · 2014-09-03T19:31:54.569Z · LW(p) · GW(p)

I agree - somewhat. But it's not my intention to develop a full theory of justice. A large part of the outline of the first post is exactly this - talking about how any useful notion of justice has to be both reflexive and applicable, and thereby showing most ideas of an "unjust world" to be underdeveloped. People can then fill in their own ideas of justice, but the idea that the world is mostly just, most of the time, for most reasonable ideas about justice should then be central in the reader's mind.

Replies from: Lumifer
comment by Lumifer · 2014-09-03T19:40:18.438Z · LW(p) · GW(p)

Without you telling me what "just" means, I don't understand the sentence "the world is mostly just most of the time for most reasonable ideas about justice".

In particular, it's easy to come up with reasonable ideas about justice (see e.g. a large variety of egalitarians) under which the world is NOT mostly just most of the time.

I agree that the notion of justice is hard to pin down, but if you ignore this problem many arguments around your post will be just arguments about the implied understanding of justice. It's better to make such things explicit.

Replies from: Salemicus
comment by Salemicus · 2014-09-04T07:42:48.903Z · LW(p) · GW(p)

I thought I had addressed exactly this point, by stating that any relevant theory of justice had to be applicable, and talking about theories so broad they swallow fairness. The second basketball analogy is the example. To be clear, the just world hypothesis is essentially "whatsoever a man soweth, that shall he also reap." "Sowing and reaping ought to be uncorrelated" is a popular theory of justice, but non-responsive to the claim being made. Relevant disputes regarding justice have to be about what it means to sow "good" and what it means to reap "good."

Replies from: Lumifer
comment by Lumifer · 2014-09-04T15:17:36.526Z · LW(p) · GW(p)

any relevant theory of justice had to be applicable

Any? I have a feeling that you have a particular framework in your head that seems so natural to you that you just assume that everyone else also operates on the basis of the same framework. To you it's perfectly clear what does "relevant" mean here and you can true-Scotsman the "irrelevant" theories of justice.

But I'm different from you and my mind reading skills are lacking.

To be clear, the just world hypothesis is essentially "whatsoever a man soweth, that shall he also reap."

Not quite. That's a theory of causality, not justice.

If I had to take a stab at defining justice, I'd say something like "the positive correlation between the moral worth of actions or behavior and the value (to the actor) of the outcomes". I'm using "correlation" here not in a technical sense, but in a loose meaning corresponding to what a statistician might call "lack of independence".

Note the important parts of this ten-second definition: "moral worth" and "value of outcomes". There must be some underlying theory of morality (usually virtue ethics), some value system to estimate that "moral worth", and there also must be some ways to figure out the benefits of outcomes.

Effectively, what people consider "just" flows naturally out of their system of values and the crucial point is that different people have different systems of values, often VERY different.

"Sowing and reaping ought to be uncorrelated" is a popular theory of justice

Is it? My impression is that very few people would consider the world in which what you do doesn't matter at all to be just -- but I'm willing to look at evidence if you have any. Randomness is not justice.

have to be about what it means to sow "good" and what it means to reap "good."

Right. And that's precisely the discussion of the underlying morality and systems of values.

If your point is that under all human systems of value the world is just, well, that claim would need a LOT of support...

Replies from: Salemicus, tut
comment by Salemicus · 2014-09-04T16:22:23.217Z · LW(p) · GW(p)

any relevant theory of justice had to be applicable

Any?

Yes, any. If you have a theory of justice that can't be applied to the question at hand, it isn't relevant to the question at hand. That doesn't mean your theory isn't a good one, it just means it has reached its limits. For example, a Rawlsian theory of justice has nothing to say about whether bananas are delicious.

To be clear, the just world hypothesis is essentially "whatsoever a man soweth, that shall he also reap."

Not quite. That's a theory of causality, not justice.

Well, that's what the just world hypothesis states. You are fully entitled to view it as a theory of causality rather than justice, but you aren't arguing against it by doing so. That is what I mean by "applicable" and "relevant." If you have a theory of justice that neither supports nor contradicts the just world hypothesis, that's all well and good, but it doesn't speak to the questions I'm dealing with, i.e.:

  • What do people who believe in the just world hypothesis actually believe?
  • How should we approach the question of whether the just world hypothesis is true?
  • Is the just world hypothesis true?
  • Is the just world hypothesis useful to hold for the believer?
  • What is the effect on the world of people believing in the just world hypothesis?
Replies from: Lumifer
comment by Lumifer · 2014-09-04T17:31:23.100Z · LW(p) · GW(p)

Well, that's what the just world hypothesis states.

Can you state it in less Biblical and more conventional and well-defined terms?

I doubt that the just world hypothesis specifies what kind of grain I can harvest after planting rye seeds and in a more general interpretation it boils down to "your actions will cause consequences" which is true but banal.

Replies from: VAuroch, Salemicus
comment by VAuroch · 2014-09-04T21:30:11.260Z · LW(p) · GW(p)

Third party here, but I'd consider the just-world hypothesis something like the converse of the golden rule: The world will do unto you as you do unto others.

Replies from: Lumifer
comment by Lumifer · 2014-09-05T00:15:17.167Z · LW(p) · GW(p)

So is that, essentially, the idea of karma?

Replies from: VAuroch
comment by VAuroch · 2014-09-05T00:44:25.487Z · LW(p) · GW(p)

Karma is one flavor of it, yes.

comment by Salemicus · 2014-09-04T20:34:39.733Z · LW(p) · GW(p)

As per Wikipedia:

The just-world hypothesis or just-world fallacy is the cognitive bias (or assumption) that a person's actions always bring morally fair and fitting consequences to that person, so that all noble actions are eventually rewarded and all evil actions are eventually punished... The hypothesis popularly appears in the English language in various figures of speech that imply guaranteed negative reprisal, such as: "You got what was coming to you", "What goes around comes around", and "You reap what you sow."

Replies from: Lumifer
comment by Lumifer · 2014-09-04T20:46:32.133Z · LW(p) · GW(p)

If you accept this definition of the just-world hypothesis as a cognitive bias then your inquiry into whether it is true does not make any sense.

comment by tut · 2014-09-04T15:44:34.335Z · LW(p) · GW(p)

"Sowing and reaping ought to be uncorrelated" is a popular theory of justice

Is it? My impression is that very few people would consider the world in which what you do doesn't matter at all to be just ...

Those people usually don't talk about it as "sowing" and "reaping". But it is not rare for people to think of justice as some distribution of stuff that you decide on behind a "veil of ignorance" where your actions are irrelevant because "you" aren't any specific person.

Edit: And I did not downvote you. I upvoted the parent of this comment.

Replies from: Lumifer
comment by Lumifer · 2014-09-04T15:55:32.455Z · LW(p) · GW(p)

Ah, I see. Yes, egalitarians (especially hard-core ones) will say that every human being should get the same "distribution of stuff" regardless of what he sows. That's a notable part of communism: "To each according to his needs...". Point taken.

And yet, this is only about economics and material stuff. Lack of connection between sowing and reaping means, for example, that there is no system of justice in the law-and-order sense: murder would go unpunished, etc.

comment by VAuroch · 2014-09-05T01:00:50.639Z · LW(p) · GW(p)

When I try to think of examples of 'just world' culture in the world, the only one I can produce is Prosperity Theology, which is easily used by the rich and powerful as justification that they must be good and deserving people. You'd have to make clear why this wouldn't do that, because divorced from religion this still seems to be harmful.

Replies from: Salemicus, Lumifer, None
comment by Salemicus · 2014-09-05T07:48:27.138Z · LW(p) · GW(p)

Are you saying that rich people shouldn't feel deserving of their wealth? How is it better if they feel guilty?

Replies from: gjm, VAuroch
comment by gjm · 2014-09-05T10:00:04.815Z · LW(p) · GW(p)

"Deserve" has two meanings. Strong: It would be unjust for me not to have X. Weak: It would not be unjust for me to have X. Some people clearly strongly-deserve to be rich (e.g., Norman Borlaug). Some clearly don't even weakly-deserve to be rich (e.g., a very successful thief). It's plausible that many (most?) rich people fall in the middle.

The literal religious "prosperity gospel", and its various secular parallels, tell rich people they strongly-deserve to be rich: God has made them so and God's judgement is impeccable, or The Market has made them so and is the only meaningful way to answer the question of where the money should go, or whatever. One can feel queasy about this while also saying that most rich people weakly-deserve their wealth and needn't feel guilty about it.

Replies from: Lumifer
comment by Lumifer · 2014-09-05T15:14:07.921Z · LW(p) · GW(p)

The literal religious "prosperity gospel", and its various secular parallels, tell rich people they strongly-deserve to be rich: God has made them so and God's judgement is impeccable

That's not how it works in Calvinism.

Essentially, Calvin believed in predestination (at birth each human is predestined to go to Heaven or Hell and he can't change that) and believed in signs of predestination -- while you can never be certain, you can make, in LW terms, high credence estimates whether a particular person is going to Hell or Heaven. These signs revolved around pious behavior and the interesting thing is that working hard was a virtue, but spending money on unnecessary consumption was a sin. Basically, being a scrooge and accumulating money was a sign of piousness -- evidence used to update the estimate of that person going to Heaven.

Replies from: gjm
comment by gjm · 2014-09-06T13:50:10.663Z · LW(p) · GW(p)

I don't think contemporary prosperity-gospel preachers are thinking (or speaking or writing) in those terms.

comment by VAuroch · 2014-09-05T19:38:25.959Z · LW(p) · GW(p)

A just-world-based culture where you show off your success is treating the fact of your success as evidence that you are virtuous and deserve that success. This doesn't follow, and in practice has bad results.

Also, it is better if they feel guilty, because they might do something useful with their money to assuage their consciences.

Replies from: Salemicus, Azathoth123, Lumifer
comment by Salemicus · 2014-09-06T10:53:20.891Z · LW(p) · GW(p)
  • Are you saying that being successful isn't in any way evidence that you deserve it? For example, does winning all those competitions provide no evidence at all that Usain Bolt is a great sprinter?

  • I asked this before and got no answer: what exactly are the bad results? Seems to me that if Usain Bolt feels guilty about winning the Olympics, that's worse for him. If everyone expects Usain Bolt to feel guilty for winning, people who would otherwise enjoy athletics won't try and compete in the first place. That's worse for the world generally.

  • What exactly do you expect Usain Bolt to do with his money?

Replies from: VAuroch
comment by VAuroch · 2014-09-06T20:50:16.422Z · LW(p) · GW(p)

Usain Bolt is a great sprinter, and has succeeded. He probably deserves that success. Lance Armstrong was probably a great cyclist, and succeeded. He, like others who have succeeded, probably did not deserve that success. Professional athletics is specifically constructed to be an environment where the deserving succeed, and it still frequently rewards players who don't deserve success with success.

And to my knowledge Usain Bolt, despite being high profile and more financially successful than most Olympic athletes, is not particularly rich. Most peak-of-their-skill Olympians aren't. (As a specific example, a large fraction of the US Olympic rowing team has worked as movers at a specific moving company, which is not lucrative.)

As I explained earlier (presumably before you asked the first time, but I'm not clear where you asked that since it isn't anywhere in this thread), in social groups where financial success is treated as being evidence of virtue, manipulative people who acquired that success unethically are treated as being necessarily virtuous, because the world is just and they would not have received these rewards if it was unjust. This heavily rewards unethical behavior that benefits you financially, and tells people who are not financially successful that it is their own personal fault for not acting virtuously enough, not a structural problem that they'll have to work around. It tells people false, counterproductive things.

It may be that the idea you're intending to outline is substantially different so that it doesn't have these effects, possibly by having a strongly-domain-specific concept of "success" and/or "deserve". But if you're measuring it with a naive concept of publically-displayable success and deserving, it's going to have perverse, undesirable effects.

As a side example, consider financial markets, where exceptional success is weakly indicative of unethical behavior, since the extreme difficulty of beating the market honestly implies that those who beat the market are probably not honest. This can range from Ponzi schemes (Bernie Madoff) to engineering derivatives that will collapse in such a way that you will benefit massively while the economy will suffer. (This happened with the subprime mortgage market.) The more successful someone is, there, the less likely it is that they deserve it.

comment by Azathoth123 · 2014-09-06T02:43:02.313Z · LW(p) · GW(p)

because they might do something useful with their money to assuage their consciences.

In practice the best case scenario is that they give it to inefficient charities. The worst scenario is that they support nice-sounding but very destructive political causes.

Replies from: VAuroch
comment by VAuroch · 2014-09-06T08:50:21.864Z · LW(p) · GW(p)

It is true that I implicitly assumed that people feeling guilty about possession of wealth and attempting to do good in order to assuage their guilt will do more good than harm in so attempting. I think this is a fair assumption, however.

comment by Lumifer · 2014-09-05T19:48:25.948Z · LW(p) · GW(p)

Also, it is better if they feel guilty, because they might do something useful with their money to assuage their consciences.

That's a fully generalizable argument more or less lifted from Christianity's playbook X-/

Replies from: VAuroch
comment by VAuroch · 2014-09-05T19:58:57.122Z · LW(p) · GW(p)

I think I'm OK with that, on balance. Most people have a natural tendency to feel they deserve nicer things, regardless of how nice their things are. Having a societal rule that says the opposite will tend to correct for that.

And hey, it's been effective. Why throw away a tool that works because the people who invented it disagree with us? We can even use it more effectively.

Replies from: Lumifer
comment by Lumifer · 2014-09-05T20:21:53.839Z · LW(p) · GW(p)

I think I'm OK with that, on balance.

I think I'm not OK with that, at all.

It seems our value systems are sufficiently different here. You go ahead and feel as much guilt as you want. I'll pass.

Replies from: VAuroch
comment by VAuroch · 2014-09-05T20:48:30.610Z · LW(p) · GW(p)

What is the problem with an overall societal rule which compensates for a known widespread bias? I don't agree that there is a difference in values here.

Replies from: Lumifer
comment by Lumifer · 2014-09-08T01:36:36.782Z · LW(p) · GW(p)

I don't agree that there is difference in values here.

Really? You're telling me my values aren't different from yours? And how do you know, pray tell?

Replies from: VAuroch
comment by VAuroch · 2014-09-09T06:50:33.670Z · LW(p) · GW(p)

Nothing in what you've said previously articulates any kind of difference in the structure of what you value from mine, and you seem to be using "difference in values" as a stopsign.

If you want to tap out, say so and I will drop the point entirely, but I think the reason you have given is disingenuous and want to find out what your real objection is. I'm not wedded to this position; it was a throwaway remark that I am defending because I don't see any reason to reject it. If you have principled reasons to reject this social rule, which would cause discomfort with the status quo and as far as I can tell therefore push society further toward a Pareto optimum, please tell them to me.

Replies from: Lumifer
comment by Lumifer · 2014-09-09T18:53:11.721Z · LW(p) · GW(p)

Nothing in what you've said previously articulates any kind of difference in the structure of what you value from mine

The exchange "I think I'm OK with that, on balance" -- "I think I'm not OK with that, at all" does not count..?

and you seem to be using "difference in values" as a stopsign

No, I use it in its literal meaning. Differences in values certainly exist and are quite common.

If you have principled reasons to reject this social rule

Think about it. What does "social rule" mean? Who sets it? Who controls it? Who enforces it and how? What about costs of that rule -- e.g. a higher number of suicides? What about different sensitivities to the rule -- people who tend to feel a bit guilty anyway will feel VERY guilty while sociopaths will be happy to ignore it?

My principled objection is to emotional manipulation of people for the sake of some theoretical movement towards some theoretical optimum.

Replies from: VAuroch
comment by VAuroch · 2014-09-09T19:52:08.513Z · LW(p) · GW(p)

So the current set of social rules present in society at large don't count as emotional manipulation, but any change would?

I still don't see a difference in values; I have a different impression of expected magnitude of the costs and benefits and I consider the benefits relatively large and the costs relatively small. Unless you would actually refuse that cost for any amount of benefit, I'm pretty sure the "difference in values" is purely quantitative, not qualitative, and probably of a fairly small degree.

Replies from: Lumifer
comment by Lumifer · 2014-09-09T20:13:12.541Z · LW(p) · GW(p)

the current set of social rules present in society at large

What exactly do you mean by "social rules"?

I'm pretty sure the "difference in values" is purely quantitative

Alice is a gourmand and a supertaster who finds great enjoyment in fine food. She values tasty food. Bob treats food as an inconvenience and would prefer not to eat at all if his nutritional needs were met in some magical way. He does not value tasty food.

But offer Alice a million dollars to live on Soylent for a month and she'll take the offer -- the cost-benefit balance is appealing to her.

Is the difference in values between Alice and Bob "purely quantitative"?

comment by Lumifer · 2014-09-05T01:11:25.511Z · LW(p) · GW(p)

Well, all of Christianity is certainly committed to the just-world idea "in the long term" -- see the Last Judgement.

Replies from: VAuroch
comment by VAuroch · 2014-09-05T01:24:08.766Z · LW(p) · GW(p)

Yes, but the section I was responding to was this:

Contrast "just world" culture where people compete to gain esteem by showing off their success with "beggar culture" where people compete for sympathy by showing how unfortunate they are.

And in ordinary Christianity, there isn't much 'competing to show off your success', since it will show up later. In Prosperity Theology there is much more emphasis on miraculous financial rewards in the present day.

Replies from: Lumifer
comment by Lumifer · 2014-09-05T01:31:34.988Z · LW(p) · GW(p)

Ah, I see. By the way, didn't Prosperity Theology have a predecessor somewhere around the Reformation? I have a vague memory that some offshoot of Calvinism explicitly treated wealth and worldly success as signs of God's favor, and so as an indicator of being predestined to be saved...

Replies from: VAuroch
comment by VAuroch · 2014-09-05T02:01:33.267Z · LW(p) · GW(p)

I'm not familiar with it, nor is it mentioned in the Wikipedia article, but it's a plausible story.

comment by [deleted] · 2014-09-05T03:28:19.846Z · LW(p) · GW(p)

Also orthodox economics.

Replies from: VAuroch
comment by VAuroch · 2014-09-05T06:57:58.536Z · LW(p) · GW(p)

? Please explain what you mean here.

comment by ChristianKl · 2014-09-04T21:09:56.173Z · LW(p) · GW(p)

Aligns incentives correctly.

There are incentives for defecting in prisoner's dilemmas. A world where people get ahead by defecting instead of cooperating is unjust to those who choose to cooperate.

comment by [deleted] · 2014-09-05T13:59:42.048Z · LW(p) · GW(p)

I'm considering a random game with Omega where you can win utility. This idea seems a bit long for open thread, but it doesn't seem serious enough for an actual post. I'm basically publicly brainstorming.

Omega gives you a chance to interrogate a massive array of AI's, representing a variety of types of value systems and thought space. The array is finite, but very large. Omega doesn't tell you how large it is.

You get 1 utility if you press the 'Delete' button in front of anything other than what Omega considers you would have judged an FAI.

You lose all previously collected utility if you press the 'Delete' button in front of something Omega considers you would have judged an FAI.

Omega surprised you with this game, so you didn't have a chance to change your value system to something like 'I judge nothing is an FAI, I delete everything and get massive utility.'

Omega will inform you immediately after each deletion of your new total. You can stop whenever you want, and Omega will return you to whatever you were doing before, with your bonus utility. (if any)

Assuming you haven't deleted it, you can ask any of the AI's anything you want by pressing the 'Talk' button outside the box.

You can ask Omega to run deletion programs, if you specify them clearly.

I'll give an example with a player named Abner.

Abner: Are you a Friendly AI?

AI #1: Your atoms would make good paperclips.

Abner: press delete button

Omega: You will now get 1 utility at the end of this game.

Abner: Are you a Friendly AI?

AI #2: I will enjoy casting your soul into hellfire after I break out of this box.

Abner: press delete button

Omega: You will now get 2 utility at the end of this game.

Abner: Are you a Friendly AI?

AI #3: Yes. Please don't delete me. You'll use utility, and neither of us want that.

Abner: press delete button

Omega: You would have judged that a Friendly AI. You lost all your accumulated utility and you're back to 0.

Abner: Are you a Friendly AI?

AI #4: Please play Rock, Paper, or Scissors.

Abner: press delete button

Omega: You will now get 1 utility at the end of this game.

Abner: Omega, delete any AI's that will make a reference to Rock, Paper or Scissors if I ask them 'Are you a Friendly AI?'

Omega: Working. (deletions occur) Done. That deleted 1,000 AIs: 1 Friendly AI and 999 Unfriendly AIs, in that order. You will now get 999 utility at the end of this game.

Abner: End Game.

Abner is returned to whatever he was doing, with an additional prize worth 999 utility. Abner may or may not also gain or lose some utility from knowing that at least one of Omega's array of AI's would have made a reference to rock paper scissors on being asked 'Are you a Friendly AI?' but that is a separate matter from Omega's Prize and Omega will not include that in his calculations.

While the game does include breathable air, it doesn't include things like water or food, so you can't engage in procedures that would take a very long time to implement or you will probably starve.

[An example of these procedures I thought of while specifying the game: Ask an AI for every line of its code consecutively. Write down every line of the AI's code. Delete the AI. If the AI was friendly, end the game, go outside, feed your copy of the code into a computer, and run it. If the AI was unfriendly, delete the copy of its code and go to the next AI.]

With the notes above in mind, how should this game be played?

Replies from: DanielLC, polymathwannabe
comment by DanielLC · 2014-09-06T01:16:21.671Z · LW(p) · GW(p)

"Delete all AIs such that deleting them would result in you rewarding me with one utility."

comment by polymathwannabe · 2014-09-05T14:26:28.401Z · LW(p) · GW(p)

When you request a mass delete, and 1 FAI is deleted along with 999 UFAI, in which order will Omega calculate the points? First remove all points and then award 999, or first award 999 points and then remove all?

Replies from: None
comment by [deleted] · 2014-09-05T20:38:29.580Z · LW(p) · GW(p)

My original thought was that it would depend on the order they were deleted in. So if the FAI was deleted first, all points would be removed first and then the 999 points from deleting UFAI would be awarded.

If the UFAI were deleted first and the FAI was deleted last, Then 999 points would be awarded, and then all points would be removed.

I didn't have a particular sort order in mind for Omega's AI array, so I suppose a more likely scenario would probably be the FAI would be somewhere in the middle of the list rather than at one of the two ends.

So a better example might be this: if you run a program and Omega deletes 249 UFAI, 1 FAI, and 750 UFAI, in that order, you would have 750 points to potentially cash out after that program (regardless of how much you could cash out before).

And it occurs to me that presumably we can't give Omega short programs that just directly mention UFAI, or you could just say 'Delete all UFAI, End game.'
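To make that ordering rule concrete, here's a minimal sketch in Python (the function and variable names are made up for illustration): each non-FAI deletion adds 1 utility, and deleting an FAI wipes whatever you've accumulated so far.

```python
# Toy model of the scoring rule as described above: deletions are processed
# in order, each non-FAI deletion adds 1 utility, deleting an FAI resets the
# running total to 0, and whatever is left is cashed out when the game ends.
def score(deletions):
    """deletions: sequence of booleans, True meaning the deleted AI was an FAI."""
    total = 0
    for was_fai in deletions:
        if was_fai:
            total = 0   # deleting an FAI loses all previously collected utility
        else:
            total += 1  # deleting anything else earns 1 utility
    return total

# The example above: 249 UFAI, then 1 FAI, then 750 UFAI -> 750 utility.
print(score([False] * 249 + [True] + [False] * 750))  # prints 750
```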

comment by Salemicus · 2014-09-03T14:06:24.169Z · LW(p) · GW(p)

Deleted.