"Stupid" questions thread

post by gothgirl420666 · 2013-07-13T02:42:56.635Z · LW · GW · Legacy · 854 comments


r/Fitness does a weekly "Moronic Monday", a judgment-free thread where people can ask questions that they would ordinarily feel embarrassed for not knowing the answer to. I thought this seemed like a useful thing to have here - after all, the concepts discussed on LessWrong are probably at least a little harder to grasp than those of weightlifting. Plus, I have a few stupid questions of my own, so it doesn't seem unreasonable that other people might as well. 

854 comments

Comments sorted by top scores.

comment by RomeoStevens · 2013-07-13T04:43:19.484Z · LW(p) · GW(p)

It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself - to the extent that this easily trumps the vast majority of other failings (epistemic-rationality-wise) discussed on LW. So why aren't we discussing how to do better at this regularly? A couple of explanations immediately leap to mind:

  1. Not a core competency of the sort of people LW attracts.

  2. Rewards not as immediate as the sort of epiphany porn that some of LW generates.

  3. Ugh fields. Especially in regard to things that are considered manipulative when reasoned about explicitly, even though we all do them all the time anyway.

Replies from: Qiaochu_Yuan, John_Maxwell_IV, ChristianKl, None, whateverfor, drethelin, wwa
comment by Qiaochu_Yuan · 2013-07-13T04:54:43.542Z · LW(p) · GW(p)

LW's foundational posts are all very strongly biased towards epistemic rationality, and I think that strong bias still affects our attempts to talk about instrumental rationality. There are probably all sorts of instrumentally rational things we could be doing that we don't talk about enough.

Replies from: ikrase
comment by ikrase · 2013-07-13T08:05:34.653Z · LW(p) · GW(p)

It would also be useful to know how to get other people around you to up their meta-ness or Machiavellianism.

comment by John_Maxwell (John_Maxwell_IV) · 2013-07-13T05:29:25.944Z · LW(p) · GW(p)

Do you have any experience doing this successfully? I'd assume that powerful people already have lots of folks trying to make friends with them.

Replies from: sdr, RomeoStevens
comment by sdr · 2013-07-14T02:58:39.182Z · LW(p) · GW(p)

Specifically for business, I do.

The general angle is asking intelligent, forward-pointing questions, specifically because deep processing of thoughts (as described in Thinking, Fast and Slow) is rare, even within the business community; so demonstrating understanding and curiosity (both of which are strengths of people on LW) is an almost instant win.

Two of the better guides on how to approach this intelligently are:

The other aspect of this is Speaking the Lingo. The problem with LW is:

1, people developing gravity wells around specific topics, and having a very hard time talking about stuff others are interested in without bringing up pet topics of their own; and

2, the inference distance between the kind of stuff that puts people into powerful positions and the kind of stuff LW develops a gravity well around is, indeed, vast.

The operational hack here is: 1, listening; 2, building up the scaffolds on which these people hang their power; 3, recognizing whether you have an understanding of how those pieces fit together.

General algorithm for the networking dance:

1, Ask an intelligent question; listen intently.

2, Notice your brain popping up a question or handle that you have an urge to voice. Develop a classification algo to notice whether the question was generated by your pet gravity well, or by novel understanding.

3, If the former, SHUT UP. If you really have the urge, mimic back what they've just said to internalize / develop your understanding (and move the conversation along).

Side-effects might include: developing an UGH-field towards browsing lesswrong, incorporating, and getting paid truckloads. YMMV.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-16T05:38:38.089Z · LW(p) · GW(p)

If you have an "UGH-field towards", do you mean attracted to, or repulsed by browsing LW, making money, etc?

Replies from: gwern
comment by gwern · 2014-04-15T17:16:21.012Z · LW(p) · GW(p)

The 'towards' scopes over browsing LW, not the rest of the itemized list: '1. developing an ugh-field (towards browsing LW); 2. incorporating (and building a business with your new spare time); 3. getting paid (truckloads).'

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2014-04-22T21:57:37.022Z · LW(p) · GW(p)

Unambiguous mistake or ambiguous parallel construction? I agree w/ your parse, on grounds of the indisputable goodness of truckloads of money.

Replies from: gwern
comment by gwern · 2014-04-23T17:19:14.945Z · LW(p) · GW(p)

I didn't misunderstand it when I read it initially, so I think the latter.

comment by RomeoStevens · 2013-07-13T09:49:41.878Z · LW(p) · GW(p)

Sure, but rationalists should win.

Replies from: Yosarian2
comment by Yosarian2 · 2013-07-17T02:10:29.209Z · LW(p) · GW(p)

I'm not sure that being a rationalist gives you a significant advantage in interpersonal relationships. A lot of our brain seems to be specifically designed for social interactions; trying to use the rational part of your brain to do social interactions is like using a CPU chip to do graphics instead of a GPU: you can do it, but it'll be slower and less efficient and effective than using the hardware that's designed for that.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-18T15:28:17.946Z · LW(p) · GW(p)

Perhaps the slow thinking could be used later at home to review the social interactions of the day. It will not directly help to fix the problems caused by quick wrong reactions, but it could help discover some strategic problems.

For example: You are failing to have a good relationship with this specific person, but maybe it's just the person randomly disliking you, or the person remembering your past errors which you have already fixed. Try spending some time with other people and notice whether their reactions are different.

A more obvious, but less frequent example: This person seems to like you and invites you to their cult. Be careful!

Replies from: Yosarian2
comment by Yosarian2 · 2013-07-24T14:32:23.668Z · LW(p) · GW(p)

Yeah, that's very true; I'm not claiming that rational thought is useless for social interaction - it is good to sometimes stop and think about your social interactions on your own when you have some downtime.

That being said, there are downsides as well. If you're using rational thought instead of the social parts of your brain to decide how to react to social situations, you will tend to react differently. Not that you're wrong, or irrational, but you just won't respond to social cues in the way people expect, and that itself might give you a disadvantage.

Thinking about this, it is actually reminding me of the behavior of a friend of mine who has a form of high-functioning autism; she's very smart, but she reacts quite differently in social situations than most people would expect. Perhaps that is basically what she is doing.

comment by ChristianKl · 2013-07-13T13:01:35.894Z · LW(p) · GW(p)

It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself.

Power isn't one-dimensional. What matters isn't so much making relationships with people who are more powerful than you in all domains, but making relationships with people who are powerful in some domain where you could ask them for help.

comment by [deleted] · 2013-07-13T18:00:08.521Z · LW(p) · GW(p)

Because it's hard. That's what kept me from doing it.

I am very close to explicitly starting a project to do just that, and I didn't even get to this point until one of my powerful friends explicitly advised me to take a particular strategy for building relationships with more powerful people.

I find myself unable to be motivated to do it without calling it "Networking the Hard Way", to remind myself that yes, it's hard, and that's why it will work.

Replies from: Dorikka
comment by Dorikka · 2013-07-13T23:14:54.360Z · LW(p) · GW(p)

I would be interested in hearing about this strategy if you feel like sharing.

Replies from: None, None
comment by [deleted] · 2013-07-14T05:05:21.064Z · LW(p) · GW(p)

Soon. Would rather actually do it first, before reporting on my glorious future success.

Replies from: Dorikka
comment by Dorikka · 2013-07-16T03:18:13.616Z · LW(p) · GW(p)

Mmhmm, good catch. Thanks.

comment by [deleted] · 2013-07-22T01:14:41.200Z · LW(p) · GW(p)

Not done much on it yet, but here's the plan.

Replies from: Dorikka
comment by Dorikka · 2013-07-22T01:56:45.889Z · LW(p) · GW(p)

Thanks for sharing. Tell me if you want me to bug you about whether you're following your plan at scheduled points in the future.

Replies from: None
comment by [deleted] · 2013-07-22T02:35:15.121Z · LW(p) · GW(p)

Thanks for the offer. It feels great when people make such offers now, because I no longer need that kind of help, which is such a relief. I use Beeminder now, which basically solves the "stay motivated to do quantifiable goal at some rate" problem.

comment by whateverfor · 2013-07-13T20:24:01.555Z · LW(p) · GW(p)

Realistically, Less Wrong is most concerned about epistemic rationality: the idea that having an accurate map of the territory is very important to actually reaching your instrumental goals. If you imagine for a second a world where epistemic rationality isn't that important, you don't really need a site like Less Wrong. There are nods to "instrumental rationality", but those are in the context of epistemic rationality getting you most of the way and being the base you work off of; otherwise there's no reason to be on Less Wrong instead of a specific site dealing with the sub-area.

Also, lots of "building relationships with powerful people" is zero sum at best, since it resembles influence peddling more than gains from informal trade.

comment by drethelin · 2013-07-13T20:27:33.680Z · LW(p) · GW(p)

Insofar as MIRI folk seem to be friends with Jaan Tallinn and Thiel etc., they appear to be trying to do this, though they don't seem to be teaching it as a great idea. But organizationally, if you're trying to optimize the world in a more rational way, spreading rationality might be a better approach than trying to befriend less rational powerful people. Obviously this is less effective on a more personal basis.

comment by wwa · 2013-07-13T17:52:06.525Z · LW(p) · GW(p)

It seems to me that, unless one is already a powerful person, the best thing one can do to gain optimization power is building relationships with people more powerful than oneself.

Depends on how powerful you want to become. Those relationships will be a burden the moment you "surpass the masters", so to speak. You may want to avoid building too many.

comment by Qiaochu_Yuan · 2013-07-13T05:15:48.159Z · LW(p) · GW(p)

I like this idea! I feel like the current questions are insufficiently "stupid," so here's one: how do you talk to strangers?

Replies from: TimS, John_Maxwell_IV, None, ChristianKl, Richard_Kennaway, ChrisHallquist, Alexei, pragmatist, shminux
comment by TimS · 2013-07-13T05:26:17.430Z · LW(p) · GW(p)

The downsides of talking to strangers are really, really low. Your feelings of anxiety are just lies from your brain.

I've found it helps to write a script ahead of time for particular situations, with some thought about different possible ways the conversation could go.

Honestly, not sure I understand the question.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-13T05:37:48.816Z · LW(p) · GW(p)

Yeah, it was deliberately vague so I'd get answers to a wide variety of possible interpretations. To be more specific, I have trouble figuring out what my opening line should be in situations where I'm not sure what the social script for introducing myself is, e.g. to women at a bar (I'm a straight male). My impression is that "hi, can I buy you a drink?" is cliché but I don't know what reasonable substitutes are.

Replies from: malcolmocean, gothgirl420666, army1987, TimS
comment by MalcolmOcean (malcolmocean) · 2013-07-13T18:14:04.177Z · LW(p) · GW(p)

"hi, can I buy you a drink?" is also bad for other reasons, because this often opens a kind of transactional model of things where there's kind of an idea that you're buying her time, either for conversation or for other more intimate activities later. Now, this isn't explicitly the case, but it can get really awkward, so I'd seriously caution against opening with it.

I feel like I read something interesting about this on Mark Manson's blog but it's horribly organized so I can't find it now.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-14T23:08:26.955Z · LW(p) · GW(p)

"hi, can I buy you a drink?" is also bad for other reasons, because this often opens a kind of transactional model of things where there's kind of an idea that you're buying her time, either for conversation or for other more intimate activities later. Now, this isn't explicitly the case, but it can get really awkward, so I'd seriously caution against opening with it.

That sort of thing varies a lot depending on what kind of culture you're in.

comment by gothgirl420666 · 2013-07-13T06:43:29.124Z · LW(p) · GW(p)

I've been reading PUA-esque stuff lately and something they stress is that "the opener doesn't matter", "you can open with anything". This is in contrast to the older, cheesier, tactic-based PUAs who used to focus obsessively on finding the right line to open with. This advice is meant for approaching women in bars, but I imagine it holds true for most occasions on which you would want to talk to a stranger.

In general if you're in a social situation where strangers are approaching each other, then people are generally receptive to people approaching them and will be grateful that you are putting in the work of initiating contact and not them. People also understand that it's sometimes awkward to initiate with strangers, and will usually try to help you smooth things over if you initially make a rough landing. If you come in awkwardly, then you can gauge their reaction, calibrate to find a more appropriate tone, continue without drawing attention to the initial awkwardness, and things will be fine.

Personally, I think the best way to open a conversation with a stranger would just be to go up to them and say "Hey, I'm __" and offer a handshake. It's straightforward and shows confidence.

If you're in a situation where it's not necessarily common to approach strangers, you'll probably have to come up with some "excuse" for talking to them, like "that's a cool shirt" or "do you know where the library is?". Then you have to transition that into a conversation somehow. I'm not really sure how to do that part.

EDIT: If an approach goes badly, don't take it personally. They might be having a bad day. They might be socially awkward themselves. And if someone is an asshole to you just for going up and saying hi, they are the weirdo, not you. On the other hand, if ten approaches in a row go badly, then you should take it personally.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T10:23:55.137Z · LW(p) · GW(p)

If you're in a situation where it's not necessarily common to approach strangers, you'll probably have to come up with some "excuse" for talking to them, like "that's a cool shirt" or "do you know where the library is?". Then you have to transition that into a conversation somehow. I'm not really sure how to do that part.

Here's a recent example (with a lady sitting beside me in the aeroplane; translated):

  • Her: Hi, I'm [her name].
  • Me: Hi, I'm [my name].
  • Her: Can you speak French?
  • Me: Not much. Can you speak English?
  • Her: No. Can you speak Portuguese?
  • Me: A little.
  • Her: Spanish? Italian?
  • Me: Yes, I'm Italian. But why the hell can you speak all of those languages but not English, anyway?
  • Her: [answers my question]

from which it was trivially easy to start a conversation.

Replies from: pragmatist, gothgirl420666
comment by pragmatist · 2013-07-13T13:19:40.504Z · LW(p) · GW(p)

Don't leave us hanging! Why the hell could she speak all those languages but not English?

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T13:49:29.408Z · LW(p) · GW(p)

She had been born in Brazil to Italian parents, had gone to school in Italy, and was working in the French-speaking part of Switzerland.

Replies from: TobyBartels
comment by TobyBartels · 2014-08-10T00:42:29.998Z · LW(p) · GW(p)

That gives two explanations for Italian and zero explanations for Spanish, so I'm wondering if one of the Italian explanations was supposed to be Spanish.

Replies from: army1987
comment by A1987dM (army1987) · 2014-08-10T14:12:21.342Z · LW(p) · GW(p)

I can't remember whether she told me why she could speak Spanish. Anyway, romance languages are similar enough that if you're proficient in Portuguese, Italian and French you can probably learn decent Spanish in a matter of weeks.

Replies from: TobyBartels
comment by TobyBartels · 2014-08-11T05:04:31.681Z · LW(p) · GW(p)

True; in fact, I would expect her to be able to read it pretty well without any study whatsoever, once she had those three surrounding languages down. Actual study then becomes very cost-effective, and might even be done on one's own in one's spare time, once one has decided to do it.

comment by gothgirl420666 · 2013-07-13T14:45:40.625Z · LW(p) · GW(p)

Well, you did start with introducing yourself, as opposed to a situational "excuse".

Edit: Or actually, she introduced herself.

comment by A1987dM (army1987) · 2013-07-13T09:58:58.744Z · LW(p) · GW(p)

My impression is that "hi, can I buy you a drink?" is cliché but I don't know what reasonable substitutes are.

"Hi, what's your name?" or "Hi, I'm Qiaochu" (depending on the cultural context, e.g. ISTM the former is more common in English and the latter is more common in Italian). Ain't that what nearly any language course whatsoever teaches you to say on Lesson 1? ;-)¹

Or, if you're in a venue where that's appropriate, "wanna dance?" (not necessarily verbally).

(My favourite is to do something awesome in their general direction and wait for them to introduce themselves/each other to me, but it's not as reliable.)


  1. I think I became much more confident in introducing myself to strangers in English or Italian after being taught explicitly how to do that in Irish (though there are huge confounders).
Replies from: gjm
comment by gjm · 2013-07-13T13:50:11.713Z · LW(p) · GW(p)

I conjecture that "Hi, I'm Qiaochu" is a very uncommon greeting in Italian :-).

comment by TimS · 2013-07-13T05:40:57.922Z · LW(p) · GW(p)

I think you need to taboo "introducing yourself." The rules are different based on where you want the conversation to end up.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-13T05:54:00.768Z · LW(p) · GW(p)

I think to a first-order approximation it doesn't matter where I want the conversation to end up because the person I'm talking to will have an obvious hypothesis about that. But let's say I'm looking for women to date for the sake of concreteness.

Replies from: TimS
comment by TimS · 2013-07-13T06:14:25.245Z · LW(p) · GW(p)

Sorry, I have no experience with that, so I lack useful advice. Given your uncertainty about how to proceed, I suggest the possibility that this set of circumstances is not the easiest way for you to achieve the goal you identified.

Replies from: wedrifid
comment by wedrifid · 2013-07-13T06:23:49.717Z · LW(p) · GW(p)

Given your uncertainty about how to proceed, I suggest the possibility that this set of circumstances is not the easiest way for you to achieve the goal you identified.

I am wary of this reasoning. It would make sense if one were uncertain how to pick up women in bars specifically but quite familiar with how to pick up women in a different environment. However, the uncertainty will most likely be more generalised than that, and developing the skill in that set of circumstances is likely to give a large return on investment.

This uncertainty is of the type that calls for comfort zone expansion.

Replies from: yanavancat
comment by yanavancat · 2013-07-13T19:31:39.174Z · LW(p) · GW(p)

Environment matters a lot. Bars (for example) are loud, dark, sometimes crowded, and filled with inebriated people.

I CANNOT STRESS THIS POINT ENOUGH:

Thinking in terms of "picking up women" is the first problem. One should take the approach that they are "MEETING women". The conceptual framing is important here because it will influence intentionality and outcome.

A "meeting" mindset implies equal footing and good intentions, which should be the foundation for any kind of positive human interaction. Many women are turned off by the sense that they are speaking to a man who wants to "pick them up", perhaps sensing that you are nervous about adding them to your dating resume. It's hard to relate to that.

Isn't the goal to engage romantically with a peer, and maybe learn something about relationships?

With that little rant out of the way, I think it's important to think of where you are best able to have a relaxed and genuine conversation - even with a friend.

If you see a woman at the bar who is especially attractive and worthy of YOUR attention, perhaps admit to her candidly that the location is not your milieu and ask inquisitively whether she normally has good conversations at bars. If she says yes and stops at that, chances are she's not interested in talking more with you or simply is not a good conversationalist.

Replies from: wedrifid
comment by wedrifid · 2013-07-13T20:05:52.371Z · LW(p) · GW(p)

A "meeting" mindset implies equal footing and good intentions, which should be the foundation for any kind of positive human interaction.

Beware of 'should'. Subscribing to this ideal of equality rules out all sorts of positive human interactions that are not equal yet still beneficial. In fact, limiting oneself to human interactions on an equal footing would be outright socially crippling.

comment by John_Maxwell (John_Maxwell_IV) · 2013-07-13T05:36:10.483Z · LW(p) · GW(p)

A good way to start is to say something about your situation (time, place, etc.). After that, I guess you could ask their names or something. I consider myself decent at talking to strangers, but I think it's less about what you say and more about the emotions you train yourself to have. If you see strangers as friends waiting to be made on an emotional level, you can just talk to them the way you'd talk to a friend. Standing somewhere with lots of foot traffic holding a "free hugs" sign under the influence of something disinhibiting might be helpful for building this attitude. If you currently are uncomfortable talking to strangers then whenever you do it, afterwards comfort yourself internally the same way you might comfort an animal (after all, you are an animal) and say stuff like "see? that wasn't so bad. you did great." etc. and try to build comfort through repeated small exposure (more).

comment by [deleted] · 2013-07-13T17:43:35.914Z · LW(p) · GW(p)

I was climbing a tree yesterday and realized that I hadn't even thought that the people watching were going to judge me, and that I would have thought of it previously, and that it would have made it harder to just climb the tree. Then I thought that if I could use the same trick on social interaction, it would become much easier. Then I wondered how you might learn to use that trick.

In other words, I don't know, but the question I don't know the answer to is a little bit closer to success.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-16T05:22:42.733Z · LW(p) · GW(p)

This works well for things that aren't communicating with a person. But your interlocutor will certainly be evaluating you and your words. You'd have to be pretty absorbed or incompetent to lose awareness of that :)

If you mean to be bravely ignorant about how bystanders view your attempt, okay.

comment by ChristianKl · 2013-07-13T12:54:57.170Z · LW(p) · GW(p)

I like this idea! I feel like the current questions are insufficiently "stupid," so here's one: how do you talk to strangers?

I think the question is badly formed. It's better to ask: "How do I become a person who easily talks to strangers?" When you are in your head thinking "How do I talk to that person over there?", you are already in a place that isn't conducive to a good interaction.

Yesterday, in the course of traveling around town, three strangers talked to me - in each case the stranger said the first word.

The first was a woman in her mid-30s with a bicycle who was looking for the elevator at the public train station. The second was an older woman who told me that the Vibram FiveFingers shoes I'm wearing look good. The third was a girl who was biking next to me when her smartphone fell down. I picked it up and handed it back to her. She said thank you.

I'm not even counting beggars on public transportation.

Later that evening I went Salsa dancing. There, two women I didn't know who were new to Salsa asked me to dance.

Why was I giving off a vibe that lets other people approach me? I spent five days at a personal development workshop given by Danis Bois. The workshop wasn't about doing anything to strangers but, among other things, it teaches a kind of massage, and I was a lot more relaxed than I was in the past.

If you get rid of your anxiety, interactions with strangers start to flow naturally.


What can you do apart from visiting personal development seminars that put you into a good emotional state?

Wear something that makes it easy for strangers to start a conversation with you. One of the benefits of Vibram FiveFingers is that people are frequently curious about them.


Do good exercises.

1) One exercise is to say 'hi' or 'good morning' to every stranger you pass. I don't do it currently but it's a good exercise to teach yourself that interaction with strangers is natural.

2) Learn some form of meditation to get into a relaxed state of mind.

3) If you want to approach a person at a bar you might feel anxiety. Locate that anxiety in your body. At the beginning it makes sense to put your hand where you locate it.

Ask yourself: "Where does that feeling want to move in my body?" Tell it to "soften and flow". Let it flow where it wants to flow in your body. Usually it wants to flow out of your body at a specific location.

Do the same with the feeling of rejection, should a stranger reject you.


Exercise three is something that I only learned recently and I'm not sure if I'm able to explain it well over the internet. In case anybody reading it finds it useful I would be interested in feedback.

Replies from: army1987, army1987
comment by A1987dM (army1987) · 2013-07-14T23:15:53.097Z · LW(p) · GW(p)

3) If you want to approach a person at a bar you might feel anxiety.

I recently found a nice mind hack for that: “What would my drunken self do?”

comment by A1987dM (army1987) · 2013-07-14T23:13:19.251Z · LW(p) · GW(p)

The second was an older woman who told me that the Vibram FiveFingers shoes I'm wearing look good.

o.O

Sure she wasn't being sarcastic? ;-)

Replies from: ChristianKl
comment by ChristianKl · 2013-07-14T23:34:09.255Z · LW(p) · GW(p)

Sure she wasn't being sarcastic? ;-)

In this case, yes, because of the body language and the vibe in which the words were said.

If a person wants to pay you a compliment, wearing an item that extraordinary makes it easy for them to start a conversation.

I also frequently get asked where I bought my Vibrams.

comment by Richard_Kennaway · 2013-07-16T11:08:37.900Z · LW(p) · GW(p)

I'd like to ask an even stupider one: why do people want to talk to strangers?

I've had a few such conversations on trains and the like, and I'm not especially averse to it, but I think afterwards, what was the point of that?

Well, that passed the time.

*It would have passed anyway.*

Yes, but not as quickly.

At least the train eventually arrives.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-16T21:02:24.988Z · LW(p) · GW(p)

To meet new people?

comment by ChrisHallquist · 2013-07-16T09:32:43.419Z · LW(p) · GW(p)

While it's probably good not to have a crippling fear of talking to strangers, it's not something I really do unless I have a specific reason to. Like, a week ago I met a girl on a bus with a t-shirt that said something about tuk-tuks, which told me she'd probably been to Southeast Asia and was therefore probably interesting to talk to, so I commented on the shirt and that got the conversation rolling. Or if I'm at a meetup for people with shared interests, that fact is enough for me to want to talk to them and assume they want to talk to me.

comment by Alexei · 2013-07-18T15:14:15.946Z · LW(p) · GW(p)

One of the simplest lessons that I got a lot of mileage out of is: don't be afraid to state the obvious. A lot of conversations and human interactions are more about the positive vibe that's being created than about the content. (Somewhat less so in these circles.)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-18T17:49:41.842Z · LW(p) · GW(p)

This is good advice for continuing an existing interaction with people, which I already feel like I'm pretty good at, but what I'm not so good at is starting such interactions with people I don't already know.

comment by pragmatist · 2013-07-13T18:24:00.162Z · LW(p) · GW(p)

I travel long-distance by plane alone quite a lot, and I like talking to people. If the person I'm sitting with looks reasonably friendly, I often start with something like "Hey, I figure we should introduce ourselves to each other right now, because I find it almost excruciatingly awkward to sit right next to somebody for hours without any communication except for quick glances. Why the hell do people do that? Anyway, I'm so-and-so. What's your name?"

I've gotten weird looks and perfunctory responses a couple of times, but more often than not people are glad for the icebreaker. There are actually a couple of people I met on planes with whom I'm still in regular touch. On the downside, I have sometimes got inextricably involved in conversation with people who are boring and/or unpleasant. I don't mind that too much, but if you are particularly bothered by that sort of thing, maybe restrict your stranger-talking to contexts where you have a reasonable idea about the kind of person you're going to be talking with. Anyway, my advice is geared towards a very specific sort of situation, but it is a pretty common situation for a lot of people.

Replies from: gwillen, RolfAndreassen, Qiaochu_Yuan, army1987, Vaniver
comment by gwillen · 2013-07-13T22:12:54.744Z · LW(p) · GW(p)

Data point counter to the other two replies you've gotten: I -- and, I perceive, most people, both introverted and extraverted -- am neither overjoyed nor horrified to have someone attempt to start a conversation with me on an airplane. I would say that as long as you can successfully read negative feedback, and disengage from the conversation, it is absolutely reasonable to attempt to start a conversation with a stranger next to you on an airplane.

Now, I can't tell if the objection is to 1) the mere act of attempting to talk to someone on an airplane at all, which I can't really understand, or 2) to the particular manner of your attempt, which does seem a bit talkative / familiar, and could perhaps be toned down.

comment by RolfAndreassen · 2013-07-13T19:20:28.524Z · LW(p) · GW(p)

Data point: I would find this annoying to the point of producing seething, ulcerating rage. Please back the fuck off and leave others alone.

Replies from: aelephant, pragmatist, wedrifid, Kawoomba
comment by aelephant · 2013-07-14T01:43:38.487Z · LW(p) · GW(p)

Someone introducing themselves to you produces "seething, ulcerating rage"? Have you ever considered counseling or therapy?

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-14T23:22:25.333Z · LW(p) · GW(p)

In comment threads to feminist blog posts in reaction to a particular xkcd comic, I've seen good reasons why certain people might be very pissed off when other people try to talk to them somewhere they cannot get away from, though they mostly apply to women being talked to by men.

Replies from: Caspian, aelephant
comment by Caspian · 2013-07-15T15:41:03.651Z · LW(p) · GW(p)

I would always find people in aeroplanes less threatening than in trains. I wouldn't imagine the person in the next seat mugging me, for example, whereas I would imagine it on a train.

What do other people think of strangers on a plane versus on a train?

Replies from: NancyLebovitz, satt
comment by NancyLebovitz · 2013-07-15T18:22:28.263Z · LW(p) · GW(p)

I don't see a difference.

comment by satt · 2013-07-22T13:22:41.752Z · LW(p) · GW(p)

I would always find people in aeroplanes less threatening than in trains.

Hadn't noticed that before but now you mention it, I think I have a weaker version of the same intuition.

Replies from: Caspian
comment by Caspian · 2013-07-24T23:53:38.562Z · LW(p) · GW(p)

I expect part of it's based on status of course, but part of it could be that it would be much harder for a mugger to escape on a plane. No crowd of people standing up to blend into, and no easy exits.

Also, on some trains you have seats facing each other, so people get used to deliberately avoiding each other's gaze (edit: I don't think I'm saying that quite right - they're looking away), which I think makes it feel both awkward and unsafe.

Replies from: satt
comment by satt · 2013-07-25T02:12:17.989Z · LW(p) · GW(p)

For comparison, here's what I come up with when I introspect about my intuition:

  1. The planes I'm on usually have higher people density than the trains I ride.

  2. People seem more likely to step in if a fight breaks out on a plane than on a train. (Although I wonder why I believe that, since I've never witnessed a fight on a plane. Maybe I'm influenced by point 1. I guess fliers are also quite proactive nowadays about piling on people who get violent on planes.)

  3. Passengers on planes are screened for weapons before they board, and when they're on-board there's less room for them to take a swing at me than on a train.

  4. Someone who confronts me on a plane is less likely/able to follow me home, or to somewhere isolated, than someone who confronts me on a train.

comment by aelephant · 2013-07-14T23:41:14.908Z · LW(p) · GW(p)

I could understand if it was persistent unwanted communication, but the dude is just trying to break the ice for Odin's sake. Just ignore him or tell him you'd rather not chit chat. How difficult is that?

Replies from: NancyLebovitz, army1987, CronoDAS
comment by NancyLebovitz · 2013-07-15T04:42:34.210Z · LW(p) · GW(p)

Surprisingly difficult if you've been trained to be "nice".

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-15T09:08:50.265Z · LW(p) · GW(p)

Surprisingly difficult if you've been trained to be "nice".

Sucks to be that person. Solution! Don't be that person!

Replies from: wedrifid
comment by wedrifid · 2013-07-15T09:35:57.791Z · LW(p) · GW(p)

Sucks to be that person. Solution! Don't be that person!

Or, more precisely, if you are that person then do the personality development needed to remove the undesirable aspects of that social conditioning.

(You cannot control others' behaviour in the past. Unless they are extraordinarily good predictors, in which case by all means wreak acausal havoc upon them to prevent their to-be-counterfactual toxic training.)

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-15T09:48:59.157Z · LW(p) · GW(p)

Yes, that is precisely the meaning I intended.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-15T18:21:52.335Z · LW(p) · GW(p)

I'm amazed.

I've been furious at the way you apparently discounted the work it takes to get over niceness conditioning, and the only reason I haven't been on your case about it is that I was distracted by wanting to be nasty-- but I lack the practice at flaming people.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-15T20:13:48.812Z · LW(p) · GW(p)

Both nice and nasty are errors, although I can imagine nasty being useful as a learning exercise on the way to curing oneself of niceness.

I didn't mean to belittle the effort (although that is a fair reading of what I wrote). Just to say that that is the task to be done, and the thing to do is to do it, whatever effort it takes. aelephant's comment above, that's what I would call dismissive.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-15T21:29:35.633Z · LW(p) · GW(p)

Thanks for saying it was a fair reading.

I'm not sure why I found your comment that much more annoying than aelephant's. I have a hot button about being given direct orders about my internal states, so that might be it.

It's possible that practicing niceness could be a learning exercise on the way to curing nastiness, too, but we're both guessing.

comment by A1987dM (army1987) · 2013-07-15T10:20:01.714Z · LW(p) · GW(p)

the dude is just trying to break the ice for Odin's sake

With three and a half lines' worth (on my screen) of blathering before you have even said “Hi” to him.

Replies from: satt
comment by satt · 2013-07-22T13:56:11.179Z · LW(p) · GW(p)

On the bright side, that particular kind of blathering signals someone who's probably self-aware and open to a similarly rambling, self-referential reply. So I'd feel OK parrying pragmatist's opener with something that's also explicit & meta-conversational, e.g.: "Ah, we're doing the having-a-conversation-about-having-a-conversation thing, and now I feel like I have to match your openness about your awkwardness, so I'd better do that: I find it awkward to try to manufacture conversation with somebody in a cramped, uncomfortable, noisy environment for hours. Fortunately, I mostly just want to sleep on this flight, and I brought a book in case I can't, so you don't have to worry about me nervously stealing quick glances at you."

comment by CronoDAS · 2013-07-15T08:23:13.680Z · LW(p) · GW(p)

I've heard stories of men who react very, very badly when women try this.

Replies from: Desrtopa
comment by Desrtopa · 2013-07-16T12:09:45.464Z · LW(p) · GW(p)

I can attest from personal experience that it's not only women to whom people will sometimes react very negatively. This is one of the factors which has conditioned me into being less comfortable attempting to politely disengage than continuing a conversation I don't want.

comment by pragmatist · 2013-07-13T20:00:04.796Z · LW(p) · GW(p)

Yikes. Duly noted. That is a useful data point, and it's the sort of the thing I need to keep in mind. I'm an extrovert temperamentally, and I grew up in a culture that encourages extroversion. This has mostly been an apparent advantage in social situations, because the people from whom you get an overt response are usually people who either share or appreciate that personality trait. But I've begun to realize there is a silent minority (perhaps a majority?) of people who find behavior like mine excessively familiar, annoying, perhaps even anxiety-inducing. And for various reasons, these people are discouraged from openly expressing their preferences in this regard in person, so I only hear about their objections in impersonal contexts like this.

I usually try to gauge whether people are receptive to spontaneous socializing before engaging in it, but I should keep in mind that I'm not a perfect judge of this kind of thing, and I probably still end up engaging unwilling participants. There is something selfish and entitled about recruiting a stranger into an activity I enjoy without having much of a sense of whether they enjoy it at all (especially if there are social pressures preventing them from saying that they don't enjoy it), and I should probably err on the side of not doing it.

Replies from: Kaj_Sotala, savageorange, drethelin, aelephant
comment by Kaj_Sotala · 2013-07-14T08:05:51.303Z · LW(p) · GW(p)

I would guess that the part that caused such a strong reaction was this:

because I find it almost excruciatingly awkward to sit right next to somebody for hours without any communication except for quick glances. Why the hell do people do that?

You're not just introducing yourself: you are putting pressure on the other person to be social, both with the notion that you would find sitting in silence "excruciatingly" uncomfortable, and with the implication that a lack of communication is unusual and unacceptable.

Usually if somebody would introduce themselves and try to start a conversation, one could try to disengage, either with a polite "sorry, don't feel like talking" or with (more or less) subtle hints like giving short one-word responses, but that already feels somewhat impolite and is hard for many people. Your opening makes it even harder to try to avoid the conversation.

Replies from: pragmatist
comment by pragmatist · 2013-07-14T08:15:18.263Z · LW(p) · GW(p)

Hmm... good point. What I typed isn't exactly what I usually say, but I do tend to project my personal opinion that sitting quietly side by side is awkward and alien (to me) behavior. I can see how conveying that impression makes it difficult to disengage. And while I do find the silence pretty damn awkward, other people have no obligation to cater to my hang-ups, and it's kind of unfair to (unconsciously) manipulate them into that position. So on consideration, I'm retracting my initial post and reconsidering how I approach these conversations.

Replies from: army1987, Richard_Kennaway, satt
comment by A1987dM (army1987) · 2013-07-14T23:25:21.575Z · LW(p) · GW(p)

My suggestion: say “Hi” while looking at them; only introduce yourself to them if they say “Hi” back while looking back at you, and with an enthusiastic-sounding tone of voice.

(Myself, I go by Postel's Law here: I don't initiate conversations with strangers on a plane, but don't freak out when they initiate conversations with me either.)

Replies from: Caspian
comment by Caspian · 2013-07-15T15:55:11.789Z · LW(p) · GW(p)

I think sitting really close beside someone I would be less likely to want to face them - it would feel too intimate.

comment by Richard_Kennaway · 2013-07-15T09:47:50.186Z · LW(p) · GW(p)

What I typed isn't exactly what I usually say

So, you wrote an imaginary, exaggerated version of how you would offer conversation, to which RolfAndreassen responds with an imaginary, exaggerated version of his response, and SaidAchmiz adds "Some such people apparently think", and others chip in with "I've heard stories of" and "the dude is just trying to break the ice", and...and.

Where has reality got to in all this?

FWIW, I would find your approach obnoxiously presumptuous and would avoid any further conversation. Look at this:

I find it almost excruciatingly awkward to sit right next to somebody for hours without any communication except for quick glances.

In other words, "You will be hurting me if you do not talk to me. If you do not talk to me you are an evil, hurtful person." Sorry, I don't care.

Why the hell do people do that?

This is resentment at other people not magically conforming to your wishes. I don't expect you to magically conform to mine. I'll just stifle that conversation at birth if it ever happens. I put the telephone down on cold callers too.

Replies from: pragmatist
comment by pragmatist · 2013-07-15T11:50:33.635Z · LW(p) · GW(p)

So, you wrote an imaginary, exaggerated version of how you would offer conversation

I didn't say it was exaggerated (nor did I think it when I wrote the grandparent), although now that you mention it, perhaps the adverb "excruciatingly" is an exaggerated version of what I usually express.

In other words, "You will be hurting me if you do not talk to me. If you do not talk to me you are an evil, hurtful person." Sorry, I don't care.

I don't think "in other words" means what you think it does. Also, this paraphrase is pretty rich coming from someone who was just complaining about exaggeration in other comments.

Apart from that, yeah, I see your point.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-07-15T11:56:34.219Z · LW(p) · GW(p)

I didn't say it was exaggerated.

Well then, in what other way does it differ from what you usually say?

Replies from: pragmatist
comment by pragmatist · 2013-07-15T11:58:18.148Z · LW(p) · GW(p)

Sorry, I edited to qualify before I read your response. The major difference is probably that it is delivered more as part of a conversation than a monolog. I don't just rattle off that script as soon as I encounter the person without waiting for a response.

comment by satt · 2013-07-22T16:33:05.656Z · LW(p) · GW(p)

I usually take an army1987-type approach in this situation, but here's another possible compromise.

Recently I was flying and wanted to ask someone next to me about the novel they were reading. I waited until half an hour before landing to talk to them, to set a limit on the conversation's length — no implicit request they chat with me for the whole flight. When I did talk to them, I (briefly!) acknowledged the interruption, and kept it specific: "Pardon the intrusion, but what do you think of the novel? I've read some of John Lanchester's nonfiction but I haven't read Capital yet and I've been thinking about picking it up."

Asking a specific question lowers the conversational stakes since someone can just answer the question and then resume what they were doing without violating politeness norms. (That time my question led to a full-blown conversation anyway, but the important thing was giving the other person a chance to gracefully avoid that.)

Things are of course different when you want to improvise small talk instead of asking about a specific thing, but you can still use external circumstances to implicitly limit the conversation's potential length, and ask about something arbitrary as a conversation starter. (This is no doubt a reason English people making small talk stereotypically talk about the weather. English weather's variable enough that there's always a little to say about it, it's a bland topic that won't offend, everyone in England has experience of it, and there are well-known cached responses to weather-related comments, so bringing up the weather doesn't demand much mental effort from other people. And since it's a low-commitment topic it's easy to round off the conversation smoothly, or to make brief, just-polite-enough noncommittal responses to signal an unwillingness to chat.)

comment by savageorange · 2013-07-14T03:10:50.034Z · LW(p) · GW(p)

As far as I'm concerned, although people like RolfAndreassen exist, they should in no way be included in the model of the 'average person'. Seething rage at a mere unsolicited introduction is totally un-ordinary and arguably self-destructive behaviour, and I have no compunction about saying that RA definitely needs to recalibrate his own response, not you.

My impression of your introductory thing is that it's overly involved, maybe slightly overbearing. You don't need to justify yourself, just introduce yourself. A general rule that I've found reliable for social situations is "Don't explain things if explanations haven't been requested (unless you happen to really enjoy explaining this thing)"; it stops me from coming across as (or feeling) desperate and lets people take responsibility for their own potential discomfort.

Don't err on the side of not doing it. People are already encouraged to be way too self-involved, isolated, and "individualistic". Doing things together is good, especially if they challenge you both (whether that's by temporary discomfort, new concepts, or whatever). If they don't want to be involved let them take responsibility for communicating that, because it is their responsibility.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-14T07:19:29.241Z · LW(p) · GW(p)

You are clearly an extrovert, and that's fine, but please refrain from speaking as if introverts are inherently inferior and incorrect. It's incredibly annoying and insulting.

Also, you say

People are already encouraged to be way too self-involved, isolated, and "individualistic".

And then you say

Doing things together is good, especially if they challenge you both (whether that's by temporary discomfort, new concepts, or whatever). If they don't want to be involved let them take responsibility for communicating that, because it is their responsibility.

Do you not see the irony of forcing yourself on other people, despite their wishes, and justifying this by saying that they're too self-involved?

Like RolfAndreassen said: please back the fuck off and leave others alone.

Replies from: Kawoomba, savageorange, Caspian, savageorange
comment by Kawoomba · 2013-07-14T07:57:32.117Z · LW(p) · GW(p)

Do you not see the irony of forcing yourself on other people, despite their wishes, and justifying this by saying that they're too self-involved?

You are sitting so close to someone that parts of your bodies probably touch, you smell them, you feel them, you hear them. The one doing the forcing with all that is the evil aircraft company, and though it's customary to regard such forced close encounters as "non-spaces" by pretending that no, you're not crammed in with a stranger for hours and hours, the reality is that you are.

The question is how you react to that, and offering to acknowledge the presence of the other and to find out their wishes regarding the flight is the common sense thing to do. Like pinging a server, if you will. If you don't ask, you won't find out.

Well, if there are non-verbal hints (looking away etc), by all means, stay quiet. However, you probably clearly notice that a protocol which forbids offering to start a conversation would result in countless acquaintances and friends never meeting, even if both may have preferred conversation.

In the end, even to an introvert, simply stating "Oh hello, I'm so and so, unfortunately I have a lot on my mind, I'm sure you understand" isn't outside the bounds of the reasonable. Do you disagree?

Replies from: SaidAchmiz, TobyBartels, Desrtopa
comment by Said Achmiz (SaidAchmiz) · 2013-07-14T16:22:25.064Z · LW(p) · GW(p)

The question is how you react to that, and offering to acknowledge the presence of the other and to find out their wishes regarding the flight is the common sense thing to do.

Only for an extrovert.

In the end, even to an introvert, simply stating "Oh hello, I'm so and so, unfortunately I have a lot on my mind, I'm sure you understand" isn't outside the bounds of the reasonable. Do you disagree?

Yes.

Replies from: drethelin
comment by drethelin · 2013-07-14T17:22:58.819Z · LW(p) · GW(p)

As someone who has been "trapped" in dozens of conversations with someone seemingly nice but uninteresting, I can say it's surprisingly hard to straight up tell someone you don't want to talk to them.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-14T17:29:29.861Z · LW(p) · GW(p)

Exactly. I would be far more OK with a social norm that condoned introducing oneself to (and starting conversations with) people on planes if there were also a social norm that condoned saying "I don't want to talk to you. Kindly go away and leave me alone." Current social norms regard this as rude. (I take it our esteemed extrovert colleagues see the problem here.)

Replies from: Error
comment by Error · 2013-07-15T12:23:32.277Z · LW(p) · GW(p)

Datapoint: I don't care for Achmiz's hostility but I do agree with his point here. There is no polite way to explicitly tell someone you don't want to communicate. This is a bug that should be fixed. It harms both parties; the silent one can't indicate that without paying a social cost, and the talkative one can't really be sure they're not annoying their counterpart.

(it is possible the talkative one doesn't actually care if they're annoying their counterpart. If so, fuck them.)

comment by TobyBartels · 2014-08-10T01:25:40.010Z · LW(p) · GW(p)

FWIW, I am an introvert, and I agree with you. I have no desire to start conversations with strangers on the plane, but I understand that extroverts do. I refuse them politely along the lines that you suggest here, and nobody has ever thought me rude because of it. (Or if they did, they were polite enough not to say so.)

comment by Desrtopa · 2013-07-16T11:43:03.167Z · LW(p) · GW(p)

In the end, even to an introvert, simply stating "Oh hello, I'm so and so, unfortunately I have a lot on my mind, I'm sure you understand" isn't outside the bounds of the reasonable. Do you disagree?

Personally, as quite an extreme introvert, I would probably not make any excuses to get out of the conversation, but I would wish they had never spoken up in the first place.

We live in a culture of extroversion, where transparent excuses to avoid talking to another person overwhelmingly tend to be viewed as rude.

While I sympathize with extroverts who would be discomforted by a long train or plane ride in close proximity with no conversation, starting one in a place where the other person does not have the option to physically disengage, even without applying intentional pressure against their doing so, does carry a risk of discomforting the other person.

comment by savageorange · 2013-07-14T09:33:08.131Z · LW(p) · GW(p)

Only from a very isolated point of view is introducing yourself to someone nearby an invasion. The rest of the world regards it as an ordinary action. Saying that you've got a different temperament does NOT exempt you from being an ordinary human being who can handle other people doing socially normal things that you have not yet explicitly okayed.

As a stranger, If I walk up to you and randomly try to hug you, THAT'S an invasion. If I try to talk to you, that's just Tuesday (so to speak).

Please note that I'm not in any way suggesting anyone should force their company on another. I'm just saying, if you have ANY major reaction to something as ordinary as someone trying to introduce themselves to you, it is YOU that has the problem and you should be looking at yourself to see why you are having this extreme reaction to a non-extreme circumstance. On the other side of the equation, if you have introduced yourself and received a prompt and clear rejection, if you react majorly to that in any way (including forcing your continued company on them), you also have a problem of a similar nature.

If anyone is on either side of that equation, they have a problem with their emotional calibration (and as an antecedent to that, their habits of thinking). Your emotions need to respond mildly to ordinary occurrences and more strongly to extraordinary occurrences; that's one way to tell how well connected to the reality of things you are.

Also, it may not be obvious to you, but extroverts are almost as isolated as introverts in our modern culture. Merely talking with people doesn't mean you're getting outside of yourself. You have to actually OPEN your mind, surrender your preconceptions, and engage in an honest exchange. This is hard for almost everyone, everyone has trust issues. Extroverts are just better at appearing to 'get along well' socially, but perhaps even worse at actually connecting with people on any real level (as far as I can tell, we just get lucky sometimes through sheer volume of exposure and somewhat greater willingness to relax control).

The sense in which I'm promoting getting involved is not a 'do stuff! with people! cause it feels good!' sense -- that's just the how. I'm trying to point out that when you really get involved, you stop thinking you're so fucking right, stop being so short-sightedly involved in your immediate problems, and start looking at things in a more neutral, realistic way; And that's priceless, something that EVERYONE needs.

(and I also mean it in the sense that Kawoomba mentions, that "you don't really know that well just from a tiny initial taste, whether this person is someone worth having in your life." If they aren't allowed to even try to know you better, then you are undoubtedly missing out on some amazing people who would contribute a lot to your life)

Replies from: NancyLebovitz, SaidAchmiz, Desrtopa, somervta
comment by NancyLebovitz · 2013-07-14T13:17:50.404Z · LW(p) · GW(p)

The sense in which I'm promoting getting involved is not a 'do stuff! with people! cause it feels good!' sense -- that's just the how. I'm trying to point out that when you really get involved, you stop thinking you're so fucking right, stop being so short-sightedly involved in your immediate problems, and start looking at things in a more neutral, realistic way; And that's priceless, something that EVERYONE needs.

I really recommend not framing that sort of thing as a series of orders mixed with insults.

Replies from: savageorange
comment by savageorange · 2013-07-14T13:34:22.142Z · LW(p) · GW(p)

You mean it's not taken for granted that you, I, and everyone else have this excessive belief that our conclusions are correct, have difficulty accurately thinking about long-term things and the big picture, and in general have tunnel vision? Those premises seem to be solidly supported by neuroscience, as well as well covered in LessWrong articles.

FWIW I wrote that from the point of view of seeing my own behaviour and being frustrated with it, not aiming to insult anyone else. I was hoping to invoke the sense of 'yes, I see myself do this, and it's frustrating and ridiculous and I want to change it' rather than insulting anyone. I'm not sure how to change it without losing that sense.

And uh.. I don't see the order thing at all (at least in the section you quoted). Unless you think that the claim that we need to see things more realistically is anything but stating the obvious.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-14T14:17:14.721Z · LW(p) · GW(p)

My apologies on that last bit-- I just saw "stop thinking you're so fucking right, stop being so short-sightedly involved in your immediate problems, and start looking at things in a more neutral, realistic way" and reacted without checking, instead of seeing that you'd actually written "when you really get involved, you stop thinking you're so fucking right, stop being so short-sightedly involved in your immediate problems, and start looking at things in a more neutral, realistic way".

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-31T07:17:13.358Z · LW(p) · GW(p)

At the moment, I have 8 upvotes (83% positive) on the grandparent, and 4 upvotes (83% positive) on my retraction. This is weird. My retraction isn't based on anything complicated, it was a simple misreading.

It's not that I think I have too many upvotes on the retraction, though 4 might be a little high-- I think of 2 as more typical. I'm just wondering why I didn't lose more of the upvotes on the grandparent.

I'm hoping it's just a matter of fewer people reading a thread as it gets older.

comment by Said Achmiz (SaidAchmiz) · 2013-07-14T17:21:46.121Z · LW(p) · GW(p)

You continue to speak as if extroversion is the norm and introversion is an aberration, and as if more extroversion is good. Not everyone agrees; don't assume that you're saying obvious things here. For example:

If they aren't allowed to even try to know you better, then you are undoubtedly missing out on some amazing people who would contribute a lot to your life

Well, I guess that's my own problem then, isn't it? Do you suppose I might resent the idea that you (the extrovert) can just decide that I (the introvert) have this problem ("missing out on some amazing people") and undertake to fix it for me by inserting yourself into interactions with me? Do you think maybe I'm the one who should be deciding whether this is a problem at all, and whether/how I should fix it?

I'm trying to point out that when you really get involved, you stop thinking you're so fucking right, stop being so short-sightedly involved in your immediate problems, and start looking at things in a more neutral, realistic way; And that's priceless, something that EVERYONE needs.

Great. Maybe everyone does need it. But kindly do not take it upon yourself to force this wonderful thing on people by interacting with them against their will. (This, by the way, is a general principle, applicable far more widely than introductions.)

Only in a very isolated point of view is introducing yourself to someone nearby an invasion. The rest of the world regards it as an ordinary action. Saying that you've got a different temperament does NOT excuse you from being an ordinary human being who can handle other people doing socially normal things that you have not yet explicitly okayed.

I note that the social norms are written (so to speak) by the extroverts. So maybe reconsider reasoning from "this is a social norm" to "this is okay" or even to "a supermajority of humans, even most introverts, believe this to be okay".

In general, savageorange, you seem to be well-intentioned; but I read in your posts a whole lot of typical mind fallacy and also a heavy dose of being oblivious to the experience of introverts. I don't mean to be hostile, but given that most introverts prefer not to protest (being introverted and all), I think it's best that I speak up.

Replies from: daenerys, savageorange
comment by daenerys · 2013-07-14T18:27:04.187Z · LW(p) · GW(p)

You are claiming to speak for all introverts, which turns this into an "introvert v extrovert" discussion. In other words, you are saying that half the population is forcing themselves onto the introverted half of the population. In reality, introverts are often the MOST happy that someone else initiated a conversation that they would be too shy to start themselves.

In reality, the situation is more like "NTs v non-NTs", and you are speaking for the non-NT part of the population. The same way you say half the population shouldn't force their preferences on the other half, I'm sure you can agree that 5% of the population shouldn't force their preferences (of non-interaction) onto the other 95%. Especially when the cost of nobody ever initiating conversations is significantly higher than the cost of being momentarily bothered by another person.

Actionable advice (for stopping an unwanted interaction): Answer in monosyllables or "hmm.." sounds. DON'T look at the person and smile. Maintain a neutral expression. Pull out your phone or a book, and direct your attention towards it, instead of the person.

Ways to end the conversation in a polite way: Say "Well, it's very nice to meet you." then turn your attention to your book/phone, OR add "but I'm at a really good part in this book, and I want to see what happens next....I really need to get this done... I'm really tired and was hoping to rest on the flight...etc." It's alright if the reason is vague. It is generally understood that providing a weak excuse is just a polite way of saying "no", and everyone plays along.

Replies from: Desrtopa, savageorange, SaidAchmiz, None
comment by Desrtopa · 2013-07-16T12:01:55.329Z · LW(p) · GW(p)

In reality, introverts are often the MOST happy that someone else initiated a conversation that they would be too shy to start themselves.

Not all introverted people are shy, and vice versa. Personally, I do not have a degree of shyness that holds me back from the level of social contact I want.

Ways to end the conversation in a polite way: Say "Well, it's very nice to meet you." then turn your attention to your book/phone, OR add "but I'm at a really good part in this book, and I want to see what happens next....I really need to get this done... I'm really tired and was hoping to rest on the flight...etc." It's alright if the reason is vague. It is generally understood that providing a weak excuse is just a polite way of saying "no", and everyone plays along.

... But I feel uncomfortable lying to disengage with another person. As a general policy I prefer to tell the truth, lest I lapse in holding up a deception, and this is definitely not a case where everyone recognizes the falsehood as a white lie to disengage politely, one which should not be taken as offensive if uncovered.

comment by savageorange · 2013-07-15T02:13:35.554Z · LW(p) · GW(p)

Data ("data"?) point: I test reliably as NF (ENFP, specifically) and SaidAchmiz's objections seem quite similar to my father, who is clearly (by both of our estimations, and tests) NT (INTJ). I can think of another relevant person, who tests as INFP and seems to be at pains to encourage interaction, and yet another who is also ENFP and similarly tries hard to encourage interaction. So I was rather surprised to see you painting SaidAchmiz's objections as non-NT.

My current model suggests that what I am promoting is F values (possibly NF, but I don't know any SF's well enough to compare) with an extraverted slant

(but not as much of an extraverted slant as SaidAchmiz seemed to think, I agree that even if at the time being drawn out of ourselves is an unpleasant experience, everyone, extraverted or introverted, gains something of real worth if they really attain that level of self-detachment regularly.)

Replies from: NancyLebovitz, SaidAchmiz
comment by NancyLebovitz · 2013-07-15T04:36:04.777Z · LW(p) · GW(p)

I think it was NT as in NeuroTypical (not on the autism spectrum), not NT as in intuitive-thinking.

Replies from: savageorange
comment by savageorange · 2013-07-15T07:53:09.904Z · LW(p) · GW(p)

Haha, that makes sense.

... Only on LessWrong :)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-15T18:18:15.205Z · LW(p) · GW(p)

I think science fiction fans (or at least the ones I know) could also have managed the correction.

comment by Said Achmiz (SaidAchmiz) · 2013-07-15T16:36:50.033Z · LW(p) · GW(p)

NancyLebovitz's correction is accurate, but here is another "data" point, because why not:

I test as INTP (strongly INT, with a closer to even split between P and J, though reliably favoring P).

comment by Said Achmiz (SaidAchmiz) · 2013-07-14T19:24:31.234Z · LW(p) · GW(p)

In reality, the situation is more like "NTs v non-NTs", and you are speaking for the non-NT part of the population.

Perhaps. Would you agree that there is much heavier overlap between "NT" and "extrovert", and "non-NT" and "introvert", than vice versa?

The same way you say half the population shouldn't force their preferences on the other half, I'm sure you can agree that 5% of the population shouldn't force their preferences (of non-interaction) onto the other 95%.

"half the population shouldn't force their preferences on the other half" is an inaccurate generalization of what I said; my claims were far more specific. As such, no, I can't agree the 95% / 5% thing. The point is that it depends on the preference in question. You shouldn't force your desire to interact with me on me; conversely, it seems perfectly ok for me to "force" my desire not to interact with you, on you. The situation is not symmetric. It is analogous to "why are you forcing your preference not to get punched in the face on me?!"

Actionable advice [...]

First, I'd like to say thank you for bothering to include concrete advice. This is a practice I endorse. (In this case, the specific advice provided was known to me, but the thought is a good one.)

That said, it is my experience that the kind of people who force interactions on strangers very often ignore such relatively subtle hints (or consider them rude if they notice them at all).

Replies from: savageorange
comment by savageorange · 2013-07-15T02:28:28.625Z · LW(p) · GW(p)

The point is that it depends on the preference in question. You shouldn't force your desire to interact with me on me; conversely, it seems perfectly ok for me to "force" my desire not to interact with you, on you.

The problem here is that this is a difference between saying 'you can do this' and saying 'you can't do this / I have a right to be left alone'.

You CAN arrange to be left alone. I CAN notice some genuine, reliable cue that you want to be left alone, and leave you alone. I CAN attempt to interact with you. You CAN reject that attempt (either rudely or with some tact). As soon as you get into saying what you CAN'T do or what I CAN'T do, that shows that you've stopped trying to genuinely support your own position and switched to attacking the opposing position. As far as I can see, that is inherently a losing game, just like the way hatred and revenge are losing games.

(and no, it is not, in any way, comparable to preferring not to be punched in the face. More comparable to preferring not to exercise, or perhaps preferring not to vote.)

Replies from: army1987, SaidAchmiz
comment by A1987dM (army1987) · 2013-07-28T20:07:17.234Z · LW(p) · GW(p)

perhaps preferring not to vote

Note that certain polities have compulsory voting and others don't.

comment by Said Achmiz (SaidAchmiz) · 2013-07-15T02:52:21.466Z · LW(p) · GW(p)

I... don't really understand what you're saying here, I'm afraid. I'm having trouble reading your comment (the parts about "can" and "can't" and such) as a response to what I said rather than a non sequitur. Would you mind rephrasing, or...?

that shows that you've stopped trying to genuinely support your own position and switched to attacking the opposing position.

Huh? I was making an "ought" statement. Supporting one's own position and attacking the opposing position are the same thing when only one position could be the right one.

and no, it is not, in any way, comparable to preferring not to be punched in the face. More comparable to preferring not to exercise, or perhaps preferring not to vote.

Those analogies don't make any sense. Consider: in the "punch in face" case, we have:

Alice: Wants to punch Bob in the face.
Bob: Doesn't want to be punched in the face.

If we support Alice, then Alice has her preferences satisfied and Bob does not; Alice's preferences (to punch Bob) are forced upon Bob, causing Bob to experience preference non-satisfaction. If we support Bob, then vice versa; Bob's preferences (to not be punched by Alice) are forced upon Alice, causing Alice to experience preference non-satisfaction. (Generally, we support Bob in such a case.)

The "exercise" or "vote" case bears no resemblance to this. In both cases, we simply have:

Alice: Doesn't want to vote.

If we support Alice, then Alice has her preferences satisfied. There is no Bob here. There is also no dilemma of any kind. Obviously we should support Alice, because there is no reason not to. (Unless we hate Alice and want her to experience preference non-satisfaction, for some reason.)

The "interact with strangers" case is isomorphic to the "punch in face" case, like so:

Alice: Wants to interact with Bob (i.e. wants to introduce herself to Bob who is her seat-neighbor on a plane).
Bob: Doesn't want to be interacted with.

If we support Alice, then Alice has her preferences satisfied and Bob does not; Alice's preferences (to interact with Bob) are forced upon Bob, causing Bob to experience preference non-satisfaction. If we support Bob, then vice versa; Bob's preferences (to not be interacted with) are forced upon Alice, causing Alice to experience preference non-satisfaction.

Supporting Alice in the "interact with strangers" case is a little like saying, in the "punch in face" case: "Yeah, well, if Bob doesn't want to be punched, then he ought to just block when I throw a right hook at his face. I'll get the hint, I promise!"

Replies from: wedrifid, savageorange
comment by wedrifid · 2013-07-15T03:16:25.245Z · LW(p) · GW(p)

Alice: Doesn't want to vote.

If we support Alice, then Alice has her preferences satisfied. There is no Bob here. There is also no dilemma of any kind. Obviously we should support Alice, because there is no reason not to. (Unless we hate Alice and want her to experience preference non-satisfaction, for some reason.)

False. Even if, all things considered, you prefer that Alice not be compelled to vote, there are reasons to do so. Voting is a commons problem. Compulsory voting (or, "compulsory attendance of the voting booth, at which point you can submit a valid vote or not as you please") can be considered analogous to taxation, and happens to be paid in time (approximately non-fungibly). If a country happens to get adequate voting outcomes purely from volunteers then that may be a desirable policy all things considered. However, compelling people to vote does not imply sadism. Merely a different solution to said commons problem.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T03:26:58.459Z · LW(p) · GW(p)

Yes, I considered this objection, thank you for bringing it up. Upon consideration, it seems to me that "compulsory attendance of the voting booth", while probably not literally inspired by actual sadism, is perverse to the point of being indistinguishable from sadism.

If a country gets "inadequate voting outcomes" (what does that mean, exactly?) from volunteer-only voting, compelling people to vote seems to be exactly the wrong solution for many reasons. (Voting is a "commons problem" to the extent that it is a problem — but it's not clear to me that "few eligible voters are actually voting" is, in fact, a problem.)

However, the more relevant-to-the-conversation response is that "society's" interests in this case are far too diffuse and theoretical to serve as any kind of relevant analogue to the case of "one very specific person (i.e. Bob) doesn't want unpleasant experiences inflicted upon him". That's what makes it a poor analogy.

Replies from: wedrifid, army1987
comment by wedrifid · 2013-07-15T04:28:57.129Z · LW(p) · GW(p)

Upon consideration, it seems to me that "compulsory attendance of the voting booth", while probably not literally inspired by actual sadism, is perverse to the point of being indistinguishable from sadism.

Avoid inflationary use of terms. "Sadistic" does not mean "a policy that I disapprove of". Being unable to distinguish the two is a failure of your own comprehension, nothing more.

If a country gets "inadequate voting outcomes" (what does that mean, exactly?)

That means that the writer refrained from prescribing preferences to outcomes or making any claims about the merit of any particular election, and left it to the reader's judgement. Some examples of things that could be inadequate: too few people voting; the selection bias of aggregating only the preferences of people who have nothing better to do at the time than vote, rather than the preferences of everyone, resulting in inferior candidates; or the psychological impact of the practice being somehow sub-par.

However, the more relevant-to-the-conversation response is that "society's" interests in this case are far too diffuse and theoretical to serve as any kind of relevant analogue to the case of "one very specific person (i.e. Bob) doesn't want unpleasant experiences inflicted upon him". That's what makes it a poor analogy.

You are proposing a general cultural rule for how people must behave (don't introduce yourself to strangers on planes) for the benefit of Bob. This amounts to a large cost in lost opportunity and freedom, paid by the people you consider "too diffuse and theoretical" to deserve consideration, to suit the convenience of Bob, who is important enough for you to make up a name for. All the other people who have Bob's particular psychological disorder presumably warrant your consideration despite being diffuse and theoretical.

(And by 'psychological disorder' I refer to whatever condition results in Bob taking damage equivalent to the physical and psychological damage most people take from being punched in the face.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T05:04:27.452Z · LW(p) · GW(p)

Avoid inflationary use of terms. "Sadistic" does not mean "a policy that I disapprove of". Being unable to distinguish the two is a failure of your own comprehension, nothing more.

I assure you, that was not an inflationary use on my part. I meant precisely what I said.

You are proposing a general cultural rule for how people must behave (don't introduce yourself to strangers on planes) for the benefit of Bob. This amounts to a large cost in lost opportunity and freedom that is paid by the people you consider "too diffuse and theoretical" to deserve consideration to suit the convenience of Bob who is important enough for you to make up a name for him. All the other people who have Bob's particular psychological disorder presumably warrant your consideration despite being diffuse and theoretical.

You misread me, I think... the cost in lost opportunity and freedom in the "interact with strangers" case, just as in the "punch in face" case, is paid by a very concrete person: Alice. She is certainly neither diffuse nor theoretical. I specifically commented on her preferences, and the satisfaction or non-satisfaction thereof.

What is too diffuse and theoretical is "society's" interests in the "vote" case. That is why the "vote" case makes a poor analogy for the "interact with strangers" case.

Replies from: wedrifid
comment by wedrifid · 2013-07-15T05:45:26.762Z · LW(p) · GW(p)

I assure you, that was not an inflationary use on my part. I meant precisely what I said.

I'll repeat with emphasis that being unable to distinguish between a policy decision that you disapprove of and sadism is a significant failure in comprehension. It is enough to make whatever opinions you may express about what social norms should be lose any hope of credibility.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T13:11:29.497Z · LW(p) · GW(p)

There is, however, also a difference between lack of comprehension and disagreement, which you seem to not be recognizing. There are plenty of policies that I disapprove of without considering them to be sadistic. Also: "perverse to the point of being indistinguishable from sadism" does not mean "I actually think this policy was motivated by sadism" (a distinction to which I alluded in the post where I made this comment). In general, I think you are reading me quite uncharitably here.

comment by A1987dM (army1987) · 2013-07-28T20:12:35.789Z · LW(p) · GW(p)

Non-compulsory voting has the disadvantage that certain people will refrain from voting just because of the inconvenience of going to the voting booth while others won't, which may bias the result of the election if the extent to which voting would be inconvenient correlates with political positions for whatever reason.

comment by savageorange · 2013-07-15T03:01:29.000Z · LW(p) · GW(p)

tl;dr: "CAN" is about a person's ability or capability. This helps them to take responsibility "CAN'T" is about what you(or society) can prevent them from doing. This helps them evade responsibility.

BTW, there is a Bob. Bob is society in the voting case and .. well, if you think about it, also society in the exercise case (but 'the part of you that values wellbeing over comfort' would also qualify there).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T03:13:57.267Z · LW(p) · GW(p)

tl;dr: "CAN" is about a person's ability or capability. This helps them to take responsibility "CAN'T" is about what you(or society) can prevent them from doing. This helps them evade responsibility.

I really don't understand what you're saying here. :(

BTW, there is a Bob. Bob is society in the voting case and .. well, if you think about it, also society in the exercise case (but 'the part of you that values wellbeing over comfort' would also qualify there).

"Society" can't have rights, nor can "society" have preferences, or the satisfactions or non-satisfactions thereof. There is no good but the good of individuals; there is no harm but the harm to individuals.

The idea that "society" has rights, or that "society" can be benefited or harmed, independently from the good or harm to any individuals, is one of the most destructive ideas in human history.

As for 'the part of you that values wellbeing over comfort' ... rights do not accrue to internal aspects of self. "Rights" are about interpersonal morality. (But actually I would prefer we not go off on this particular tangent here, if that's ok; it's rather off-topic.)

Replies from: wedrifid
comment by wedrifid · 2013-07-15T03:22:29.902Z · LW(p) · GW(p)

The idea that "society" has rights, or that "society" can be benefited or harmed, independently from the good or harm to any individuals, is one of the most destructive ideas in human history.

Sure, savageorange could have found a telephone book and tried listing everyone individually. But saying 'society' seems more efficient. It refers to the case where many unnamed but clearly existing individuals, who need not or cannot be named, would be harmed.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T03:29:08.100Z · LW(p) · GW(p)

Yes, that's the implied assumption, but it's usually a way to mask the fact that were we to try and find any actual, specific individuals who are concretely benefited or harmed by whatever-it-is, we would have quite the hard time doing so.

comment by [deleted] · 2013-07-14T18:30:52.202Z · LW(p) · GW(p)

I'm sure you can agree that 5% of the population shouldn't force their preferences (of non-interaction) onto the other 95%.

What of minority rights? I think you've come to a pretty repugnant conclusion on accident.

Replies from: CronoDAS
comment by CronoDAS · 2013-07-15T08:24:33.631Z · LW(p) · GW(p)

on accident

My brain always flags this as an error (instead of the correct "by accident") and gets annoyed. Am I being too sensitive? Googling tells me that "on accident" is a minority usage that probably doesn't actually count as an error...

Replies from: KnaveOfAllTrades
comment by KnaveOfAllTrades · 2013-07-28T15:39:15.521Z · LW(p) · GW(p)

Perhaps paper-machine said 'on accident' by purpose...

comment by savageorange · 2013-07-15T00:06:32.169Z · LW(p) · GW(p)

To put things briefly, it looks like you've reversed most of the things I said.

I'm talking about "you" (as in, any given individual that finds themselves in a situation where they think they are too self-involved). I can't fix anything for you and I don't want to. I'm just saying, this seems to be one of the things that needs to be done. By me. By anyone who thinks they are too self-involved (and by anyone who doesn't think that but still IS too self-involved). Certainly if they are aware of a sense of excessive self-involvement and they want to change that, the only way to do so seems to be, well.. doing something that moves their locus of attention away from themselves :)

It's what I'll do because I want to be less self-involved, and if anyone else wants to be less self-involved, I believe that this is an effective course of action and hope that they try it. And yes, I believe that people being less self-involved (among many other necessary improvements) is essential to a better society. That's all.

Do you think maybe I'm the one who should be deciding whether this is a problem at all, and whether/how I should fix it?

Totally. That's what the entire thing is about! It is your own problem if you have it, and this is a way that you can address it! And others have it too (I will absolutely maintain that excessive self-absorption is a problem every human being faces), so seeing you taking action to remedy it in yourself can also encourage them to change their actions.

Social norms are definitely written mostly by extraverts. The only way that's going to ever change is if somehow extraverts decide collectively to be less involved in socializing.. and introverts decide to be -more- involved in socializing. (I'm stating this as a logical necessity, because AFAICS the reason that social norms are written by extraverts is essentially self-selection.).

I recognize this and that's why I'm promoting taking responsibility for saying 'no, I don't want to talk right now' as well as promoting getting involved -- because as far as I can see, there is no alternative that preserves the possibility of people being able to develop relationships beyond merely what is expected in their environment. I'm not saying it's easy to say no, I'm saying it is your responsibility to do so at times, just as it's your responsibility to solve the problem of self-involvement if you have it. You seem to agree with this principle, seeing as you identify as an introvert and are speaking up :)

I've read and discussed temperaments in general, and introverts/extraverts in specific, a lot. I can recommend Dorothy Rowe's books on the subject (eg. 'The successful self'), as they seem to be the only ones that manage to strike precisely at the heart of things.

I am quite familiar with the fact that introverts have difficulty saying no, or to put it another way, being impolite. Also with the fact that they spend a lot of time inside their own head. If you want to see that I can appreciate their good points, I can say that they typically are better at methodical thinking and in general anything that's highly structured, they tend to have a stronger sense of self, and are better at deciding on and following principles. They tend to have fewer relationships but be more invested in the ones they do have. A majority of artists and writers are introverted. Naturally I don't have experience with what it is exactly like to be an introvert, but I do understand that for introverts, essentially the thing that scares them the most is losing control over themselves, so they spend a lot of time honing that control (largely by carefully maintaining and building on their internal meaning-structures). I recognize that being interrupted in this process can be quite jarring. I do maintain that if a person then experiences seething rage or other extreme emotions after being interrupted, that's a problem in their thinking they need to fix.

Fair?

Replies from: NancyLebovitz, SaidAchmiz, SaidAchmiz
comment by NancyLebovitz · 2013-07-15T04:38:21.255Z · LW(p) · GW(p)

Social norms are definitely written mostly by extraverts.

I believe this is more true of America than a number of other cultures.

Replies from: SaidAchmiz, savageorange
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T16:41:30.932Z · LW(p) · GW(p)

This seems correct. American culture is definitely, in many ways, more extraverted than Russian culture (the only other culture I have significant experience with), despite (somewhat paradoxically) the greater emphasis on collectivity in Russian culture, and the somewhat lesser attention paid to many classes of social faux pas than in American culture. "Familiarity" is a greater social sin in Russian culture than it is in American culture.

As a corollary to this, people raised in the Russian culture generally view American social interaction as "fake".

comment by savageorange · 2013-07-15T08:18:33.385Z · LW(p) · GW(p)

I remember discussing today how 'constant improvement' -- a classic introvert value -- is an everyday concept in Japan. So, yes. I do think that there's a general self-selection effect regardless of culture, where introverts don't get as much of a say in social norms precisely because they are usually less involved in socializing, but that's just speculative currently.

comment by Said Achmiz (SaidAchmiz) · 2013-07-15T00:33:52.921Z · LW(p) · GW(p)

Also, it occurs to me that there is indeed irony in what you're saying: you think forcing your interaction on others... makes you less self-involved?

Or am I misunderstanding you yet again? If so, then I kindly request that you actually spell out, in detail, just what it is you're advocating, and why.

Replies from: savageorange
comment by savageorange · 2013-07-15T02:47:57.935Z · LW(p) · GW(p)

"forcing" is your framing. To be completely blunt, I reject it. The point is that when two people manage to really genuinely communicate, something is created which transcends either of them, and this draws them both out of their own preconceived frames.

Human social interaction, more specifically talking, is ordinary. Force enters the picture after someone has clearly said "No, I don't want to do this / I'm not interested / etc" and not before.

Otherwise, you're trying to make the person approaching you responsible for your internal state -- A frame I similarly have no compunction about utterly rejecting. You're responsible for your state, they are responsible for theirs. You don't communicate perfectly, so if you're trying to (implicitly, not explicitly) communicate 'not interested' and they are receiving a different message, well, chances are your communication failed. Which is primarily your responsibility.

Overall my impression is that you have this axe to grind about being 'forced' but really no-one except you is talking about force here.

Replies from: NancyLebovitz, SaidAchmiz
comment by NancyLebovitz · 2013-07-15T04:40:54.520Z · LW(p) · GW(p)

Otherwise, you're trying to make the person approaching you responsible for your internal state -- A frame I similarly have no compunction about utterly rejecting. You're responsible for your state, they are responsible for theirs.

People affect each other. I'm dubious about the moral frames which say that people ought to be able to do something (not be affected in some inconvenient way) when it's so clear that few if any people can do that.

Replies from: savageorange
comment by savageorange · 2013-07-15T08:02:06.329Z · LW(p) · GW(p)

I can see what you mean, but I'm afraid that the furthest I can go in agreement is to say that few if any people do do that (or have any idea how)*. We're certainly poverty-stricken WRT tools for taking responsibility for our own thoughts and emotions. I would argue though that that does not change what responsibilities we do have.

* BTW in a strict sense I don't think it's actually that important how you feel in response to an event, as long as you respond appropriately; just that it's useful to treat "experiencing disproportionate emotions" as a flag that one of your habits of thinking is disconnected from reality.

comment by Said Achmiz (SaidAchmiz) · 2013-07-15T03:07:20.554Z · LW(p) · GW(p)

Human social interaction, more specifically talking, is ordinary. Force enters the picture after someone has clearly said "No, I don't want to do this / I'm not interested / etc" and not before.

This would only be true if there did not exist social norms which discourage such responses. But such norms do exist, so what you say is not true. In fact, you introducing yourself to me on a plane in the manner described near the top of this thread is inherently forceful, even if you do not recognize it as such.

Otherwise, you're trying to make the person approaching you responsible for your internal state -- A frame I similarly have no compunction about utterly rejecting. You're responsible for your state, they are responsible for theirs.

People are "responsible for" my mental state in the same sense they are "responsible for" my physical state: if someone punches me and then, when I protest, says "Yeah, well, I'm not responsible for your state!", that's rather disingenuous, don't you think?

You don't communicate perfectly, so if you're trying to (implicitly, not explicitly) communicate 'not interested' and they are receiving a different message, well, chances are your communication failed. Which is primarily your responsibility.

That's certainly a very convenient position to take if what you want is to be able to force interaction on others and not incur social disapproval. "What's that? He didn't want me to accost him and start chatting him up? Well I guess he should have communicated that better, now shouldn't he?"

Look, it's true that we often communicate badly; illusion of transparency and all that. But to take this as general license for plowing ahead and leaving behind any attempt to consider your fellow human beings' preferences until such time as they expend significant emotional energy to make them clear to you — that is simply inconsiderate, to say the least. (And this is coming from someone on the autism spectrum, who, I assure you, understands very well the difficulty of divining the mental states of other humans!)

Overall my impression is that you have this axe to grind about being 'forced' but really no-one except you is talking about force here.

Not talking about force does not magically cause there to not be any force.

Finally, I once again note...

when two people manage to really genuinely communicate, something is created which transcends either of them, and this draws them both out of their own preconceived frames.

... that you talk about social interaction as if it's this wonderful and amazing thing that, obviously, everyone should want, because it's obviously so wonderful.

Not everyone feels that way.

Replies from: wedrifid
comment by wedrifid · 2013-07-15T08:18:53.288Z · LW(p) · GW(p)

People are "responsible for" my mental state in the same sense they are "responsible for" my physical state: if someone punches me and then, when I protest, says "Yeah, well, I'm not responsible for your state!", that's rather disingenuous, don't you think?

What it is is an absurd equivocation. Punching someone in the face is not the same as introducing yourself to them.

Replies from: drethelin
comment by drethelin · 2013-07-15T16:38:27.899Z · LW(p) · GW(p)

Of course it's not the same. But the framing of "Is it ok to interact with a person in a way I find enjoyable if they might not?" is the part that's important. I am currently seeing a person who is masochistic. When she was a child, she literally had NO IDEA that punching people was not ok because they did not enjoy it the way she would. Said is overemphasizing, but the point that a social interaction can be negative and stressful for someone EVEN if you think it's always an awesome thing is an important one to recognize. I think on net most introductions are probably +value, but the original over-the-top example is a perfect pointer to what NOT to do if you want to introduce yourself but also care about not ruining an introvert's day.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T16:43:12.029Z · LW(p) · GW(p)

I endorse this formulation. Well explained.

comment by Said Achmiz (SaidAchmiz) · 2013-07-15T00:22:53.621Z · LW(p) · GW(p)

I'm just saying, this seems to be one of the things that needs to be done. By me. By anyone who thinks they are too self-involved (and by anyone who doesn't think that but still IS too self-involved). ... It's what I'll do because I want to be less self-involved, and if anyone else wants to be less self-involved, I believe that this is an effective course of action and hope that they try it.

(By "this", I take it you are referring to "talking to other people" and "introducing yourself to people on planes" and so forth.)

So you think you need to be less self-involved. And doing so requires that you force your interaction on others.

That makes your hapless seat-neighbor on the plane your victim, a victim of your self-improvement strategy.

That's what the entire thing is about! It is your own problem if you have it, and this is a way that you can address it!

The point is that I don't think it's a problem and don't see any need to address it. Me missing out on the amazing contribution you might make to my life is not a problem for me. (I speak here in the general case; no personal judgment intended.)

Social norms are definitely written mostly by extraverts. The only way that's going to ever change is if somehow extraverts decide collectively to be less involved in socializing.. and introverts decide to be -more- involved in socializing.

Since that is, by definition, rather unlikely, extraverts have a moral obligation to consider the wishes of introverts to a much greater degree than they currently do, especially as far as making and enforcing social norms goes.

as far as I can see, there is no alternative that preserves the possibility of people being able to develop relationships beyond merely what is expected in their environment.

Why on earth are you talking as if this possibility is so obviously and uncontroversially a good thing?

I do understand that for introverts, essentially the thing that scares them the most is losing control over themselves, so they spend a lot of time honing that control

Uh... what.

comment by Desrtopa · 2013-07-16T11:55:37.526Z · LW(p) · GW(p)

Only in a very isolated point of view is introducing yourself to someone nearby an invasion. The rest of the world regards it as an ordinary action.

"Rest of the world" meaning where? This is actually quite an abnormal action in some parts of the world, depending on how strongly the culture encourages extroversion.

Please note that I'm not in any way suggesting anyone should force their company on another. I'm just saying, if you have ANY major reaction to something as ordinary as someone trying to introduce themselves to you, it is YOU that has the problem and you should be looking at yourself to see why you are having this extreme reaction to a non-extreme circumstance.

Do you think that it's similarly problematic if a person is highly discomforted by people reasoning using tribal politics and refusing to consider issues on their individual merits? It's totally ordinary behavior, after all.

A person can be poorly psychologically calibrated for their environment without being defective and in need of change.

Also, it may not be obvious to you, but extroverts are almost as isolated as introverts in our modern culture. Merely talking with people doesn't mean you're getting outside of yourself. You have to actually OPEN your mind, surrender your preconceptions, and engage in an honest exchange.

Most introverts actually have an easier time having deep, honest exchanges than extroverts do. They're also less likely to agree that their lives would be improved by doing it more frequently with strangers. I'd recommend checking out this book, since it seems like you have a somewhat misaligned sense of what it implies for a person to be introverted.

Replies from: savageorange, army1987
comment by savageorange · 2013-07-17T06:19:52.101Z · LW(p) · GW(p)

I agree that approaching strangers is more frowned upon, say, in Japan. Perhaps 'rest of the western world' would have been a better choice of words.

You have the totally wrong sense of what I meant by ordinary. Try fitting what you said into the definition "both normal and healthy"; it doesn't.

A person can be poorly psychologically calibrated for their environment without being defective and in need of change.

Defectiveness is really a subject that I'd prefer to keep out of any equation which is talking about people. Anyway, as far as I can see 'in need of change' holds true as long as said reaction impacts on your ability to live an effective and satisfying life. Personally my impression is that any major repression of emotions leads to personal problems (of the 'excessively cold / unable or unwilling to relate' kind).

They're also less likely to agree that their lives would be improved by doing it more frequently with strangers.

People are disinclined to agree with a number of propositions that seem to hold true, particularly regarding social interactions and universal human faults. Mere disagreement doesn't really constitute any evidence against those propositions.

I do understand, though, that introverts' general preference for planned, controlled actions and everything fitting together as far as possible would lead to disliking interaction with strangers. But as far as I can see, extroverts and introverts both need to try and balance themselves by taking on the virtues demonstrated by their opposites. I don't regard introversion and extraversion as value neutral; rather, I regard them both as lopsided.

Quiet Power

Sure, I'll read that IFF you read The Successful Self. It's certainly true that I find introverts frustrating -- sometimes it seems like the world is divided into those who get it but can't relate to me (Introverts) and those who can relate to me but don't get it (Extraverts)*.

* (for most given values of 'it')

Replies from: Desrtopa
comment by Desrtopa · 2013-07-17T13:37:08.628Z · LW(p) · GW(p)

Sure, I'll read that IFF you read The Successful Self.

I'll see if it's in my library network.

Replies from: savageorange
comment by savageorange · 2013-07-29T09:47:11.924Z · LW(p) · GW(p)

Having read Quiet Power, I certainly appreciate the recommendation, as it is a fascinating book. It has helped somewhat elaborate my model of introversion/extraversion. I especially liked the chapter comparing Western and Eastern social norms and their consequences.

What it hasn't done is told me anything surprising about introverts -- all the new information fits quite well into my existing model, which I derived mainly from Dorothy Rowe's books and conversation with a particular introvert.

So, either I have failed to realize the significance of something I read, or my model is not actually misaligned in the way you thought. Could you be specific about what problem you saw?

(on reflection, I think my whole stance on this subject is orthogonal to the idea of temperament. My perception is that most of the thread starting at my original comment can be boiled down to RolfAndreassen and SaidAchmiz asserting "Don't try to expand your social horizons in this particular way, because it invokes strong negative reactions in me", and my responding "No, DO try. You may need it and there are others that need it, and trying is better than not trying in general. Individual emotional reactions, whether yours or others', shouldn't get a look in as rationales for doing or not doing things.".

No doubt I've idealized the clarity of my message there, but the point is this isn't about marginalizing introverts, it's about not committing the error of treating feelings as any kind of strong evidence, and about the general strength of choosing to try as a policy. Introverts try to arrange things so they can take time to reflect, extroverts try to meet people and do exciting things. Those are both fine and ordinary. If these intents happen to conflict, that's for the individuals involved to resolve, not social norms.

Even though that might satisfy introverts' dislike of conflict somewhat, AFAICS there is no way to implement 'don't disturb my feelings' into social norms without being oppressive -- political correctness being an excellent example of this. Feelings may seem significant or even overwhelming, but I'll stand by the statement that they don't have much worth in decisions.)

Eh, I rambled. Hopefully that clarified something in someone's mind, at least ;)

Replies from: Desrtopa
comment by Desrtopa · 2013-07-29T13:18:52.871Z · LW(p) · GW(p)

The main point I had in mind is that social receptivity is something of an exhaustible resource for introverts, something of which the book contains a number of illustrative examples. When an introvert spends time in active socialization, they're using up the mental resources to do so with other people in the future, at least without taking a toll on their psychological, and in extreme cases physical, health.

If you suggested that given the value of socialization, people should spend more time stopping strangers in the street to hold conversations with them, and I objected that for both participants this is draining the resource of time, and that it will often not be a high value use of that resource, I suspect that you'd accept this as a reasonable objection. For introverts, social interactions such as these contain a similar resource tradeoff.

On another note, if feelings don't have much worth in decisions, what does? What else would you want any kind of success for?

Replies from: savageorange
comment by savageorange · 2013-07-30T00:47:21.065Z · LW(p) · GW(p)

If you suggested that given the value of socialization, people should spend more time stopping strangers in the street to hold conversations with them

To be clear, I didn't intend to suggest this at all. I was responding to the situation where you want to approach but then you think vaguely that their feelings may be disturbed by this. I'm not suggesting introverts stop strangers in the streets to talk to them, just that if people (introverted or extraverted) have already formed the intent to approach a person then they shouldn't allow it to be derailed by vague concerns fueled by anecdotal 'data'. I'm just trying to say "Trying to connect is ordinary, don't accept the proposition that it's not."

On another note, if feelings don't have much worth in decisions, what does? What else would you want any kind of success for?

It's fine to enjoy good feelings -- and they are often the result of living well -- but unless you are extraordinarily grounded/anchored to reality, you can't trust them as any kind of benchmark for your current or future situation. By the time your goals arrive, you've changed (and your feelings may well have too).

A possible exception is the feeling of discomfort, as long as you take a challenging interpretation : "I need to go there", instead of the usual "I mustn't go there!" interpretation. Comfort zone expansion, you probably get the idea.

In general I guess what I'm trying to point at is, any given immediate feeling is usually untrustworthy and essentially useless to pursue. Reproducible emotional trends (for example, feeling better about life when you go for a walk or run, which is well documented) and other types of mental trends (flow?, habits of thinking you have or want to have) are a much more sound basis for decisions and planning. You still have to deal with your feelings on a moment-to-moment level, but it's smart to treat them like children that you have to parent rather than reliable peers.

Replies from: Desrtopa
comment by Desrtopa · 2013-07-30T01:09:00.146Z · LW(p) · GW(p)

To be clear, I didn't intend to suggest this at all. I was responding to the situation where you want to approach but then you think vaguely that their feelings may be disturbed by this. I'm not suggesting introverts stop strangers in the streets to talk to them, just that if people (introverted or extraverted) have already formed the intent to approach a person then they shouldn't allow it to be derailed by vague concerns fueled by anecdotal 'data'. I'm just trying to say "Trying to connect is ordinary, don't accept the proposition that it's not."

This doesn't address the point I was making at all. It's not a matter of the action being ordinary or not, but of it costing psychological resources and not being a good return on investment for them.

In general I guess what I'm trying to point at is, any given immediate feeling is usually untrustworthy and essentially useless to pursue. Reproducible emotional trends (for example, feeling better about life when you go for a walk or run, which is well documented) and other types of mental trends (flow?, habits of thinking you have or want to have) are a much more sound basis for decisions and planning. You still have to deal with your feelings on a moment-to-moment level, but it's smart to treat them like children that you have to parent rather than reliable peers.

This goes back to one of the points for which I made that book recommendation. Introverts can force themselves to behave in an extroverted manner in the long run, but doing so comes with an associated psychological cost. For an introvert, forcing oneself to behave in a more extroverted way as a matter of policy, rather than in select instances, is liable to produce significantly negative long term emotional trends.

Replies from: savageorange
comment by savageorange · 2013-07-31T02:46:02.981Z · LW(p) · GW(p)

For an introvert, forcing oneself to behave in a more extroverted way as a matter of policy, rather than in select instances, is liable to produce significantly negative long term emotional trends.

I'm aware of that. Since it's not what I'm suggesting, and as far as I can see, not what anyone else is suggesting, why is that at all relevant?

If they were routinely forming the intent to approach even though it drained them, THAT would reflect a policy of forcing themselves to behave in an extroverted way. Merely making yourself carry through on an already-formed intent rather than waving it away with a sheaf of vague excuses? That's just good mental hygiene.

OT: It seems like a good idea for extroverts to have a planned curriculum of introverted skills to develop, and vice versa for introverts. Personally I'm keenly aware that my lack in some introverted areas like reflection and planning means I'm missing out on some dimensions of life. AFAICS we need to have the -whole- skillset, not just half of it, to really live life well, and for the bits we are not naturally talented in, they take thought and planned action to achieve, hence my focus on intent.

Replies from: Desrtopa
comment by Desrtopa · 2013-07-31T03:26:57.186Z · LW(p) · GW(p)

If they were routinely forming the intent to approach even though it drained them, THAT would reflect a policy of forcing themselves to behave in an extroverted way. Merely making yourself carry through on an already-formed intent rather than waving it away with a sheaf of vague excuses? That's just good mental hygiene.

Making yourself carry through on an already formed intent to engage in socialization in scenarios of a certain kind is a systematic increase in socialization. It's not the formation of the intent to socialize that's draining, it's the actual socialization. It sounds to me like you're trying to have things both ways, whereby introverts get to engage in extra socialization at no cost, which is just not how it works.

Replies from: savageorange
comment by savageorange · 2013-07-31T03:55:02.755Z · LW(p) · GW(p)

On the contrary, I accounted for the costs. That was the point of the final paragraph -- that they have costs. If they're important actions to take, it makes sense that they have costs. If they're important, it makes sense that you accept those costs as necessary. [1]

If they're not, of course, then no such acceptance, nor any action, is required. But as long as you agree (really agree, not just agree because it's not that far off the truth, or to be nice), you will make the sacrifice. The only alternative is that they're not actually that important to you right now, and you just believe that they are.

[1] For example, as an extrovert, reflection (particularly self-reflection) drains me, but that doesn't mean it's any less important for people universally to regularly, systematically reflect, just because it has that cost to me and many others. In some real sense the drainingness is much magnified by my lack of skills in the area. I don't get to say it's too hard just because it is hard. I can only win if I do it in spite of, or even BECAUSE it's hard.

Replies from: Desrtopa
comment by Desrtopa · 2013-07-31T12:42:25.997Z · LW(p) · GW(p)

For example, as an extrovert, reflection (particularly self-reflection) drains me, but that doesn't mean it's any less important for people universally to regularly, systematically reflect, just because it has that cost to me and many others.

Speaking as an introvert, socialization drains me, but I socialize. Obviously, the costs of not doing so at all would be far greater to me than the cost of engaging in some socialization.

Suppose I told you right now, "You should triple the amount of time you spend in self reflection, because self reflection is highly valuable." We both recognize that self reflection is highly valuable, but that doesn't mean that I'm giving you good advice, because I'd be offering it without regard for the fact that I have no information on your cognitive limits relative to the amount of time you spend at it already.

Whatever amount of self reflection you're currently at, I could ask you "if you really agree self reflection is important, why don't you do it more?" Obviously there are suboptimal levels for a person to engage in, but that doesn't mean I'm in any position to assume that you're still at a point where adding more is worth the costs.

Replies from: savageorange
comment by savageorange · 2013-08-01T00:03:27.755Z · LW(p) · GW(p)

Yes, I had forgotten that introverts have a stronger focus on habits/routines, and so they could form intent without necessarily thinking it good in the particular instance. As someone who mostly struggles to cultivate habits, I was thinking as if intent necessarily indicates that you've already decided that applying it in this instance is good. So I guess I was surprised by the comparison between absolute and relative value.

Anyway I take your point about diminishing returns. I'm aware I tend to be far too sanguine to properly consider the effect of diminishing returns, and just pick whatever seems to help me charge ahead; or to put it another way, if I don't have an imperative it seems like I have nothing.

At least I'm aware that these effects will diminish through clear thinking.

Thanks for your patience.

comment by A1987dM (army1987) · 2013-07-28T19:55:13.365Z · LW(p) · GW(p)

A person can be poorly psychologically calibrated for their environment without being defective and in need of change.

Yes, they do need to change... their environment. :-)

(Generally, this can be much more easily and effectively achieved by starting to hang around different people than by trying to modify the ones they're already hanging around with.)

comment by somervta · 2013-11-20T06:20:57.600Z · LW(p) · GW(p)

[Edited to be less outraged]

Your emotions need to respond mildly to ordinary occurrences and more strongly to extraordinary occurrences; that's one way to tell how well connected to the reality of things you are.

Why? Why should I respond mildly to ordinary occurrences? If I think an action (say, murder) is reprehensible, I will (or should) respond strongly to it no matter how common it is. If something is physically painful to me, I will respond strongly to someone who attempts to do it to me, no matter how ordinary it is. I don't see why this shouldn't also be true of emotional pain or discomfort.

Replies from: savageorange
comment by savageorange · 2013-11-20T21:48:17.725Z · LW(p) · GW(p)

I'm not sure what twist of thinking would allow you to classify murder as ordinary; there's a rather marked difference between common and ordinary. Similarly, assault is not ordinary. One person socially approaching another is ordinary. Emotional discomfort is ordinary. (I'm not sure about emotional pain. But if you get into emotional pain just from being approached, yeah, you've got a problem.)

Though as a point of descriptive curiosity, the level of our emotional responses does actually seem to normalize against what we perceive as common. We need to take measures to counteract that in cases where what is common is not ordinary.

Replies from: somervta
comment by somervta · 2013-11-21T05:58:58.127Z · LW(p) · GW(p)

I'm not sure what twist of thinking would allow you to classify murder as ordinary;

I was speaking of a world in which it was more so.

There's a rather marked difference between common and ordinary.

Um, OK? What is it? I'd respond to the rest of your comment, but I think it's going to hinge on this. If you're not using 'ordinary' as a synonym for 'common', then how are you using it?

Replies from: savageorange
comment by savageorange · 2013-11-21T08:16:28.334Z · LW(p) · GW(p)

"CEV" would be the succinct explanation, but I don't expect anybody to necessarily understand that,so..

If you could assemble a group of 7 non-extremist people randomly selected from the world population, and they would probably manage to agree that action X, even if not optimal, is a reasonable response to the situation, then X is an ordinary action to take.

(Whether it's a good action to take is a separate question. Ordinariness is just about not containing any fatal flaws which would be obvious from the outside.)

Replies from: somervta
comment by somervta · 2013-11-21T14:43:24.895Z · LW(p) · GW(p)

This depends entirely on the composition of the world's population. If most people believe that torturing small animals and children for fun is reasonable, then I would definitely be reacting strongly to an 'ordinary' occurrence.

Replies from: savageorange
comment by savageorange · 2013-11-22T01:25:32.064Z · LW(p) · GW(p)

True, except for the quotes.

comment by Caspian · 2013-07-15T15:05:28.226Z · LW(p) · GW(p)

Like RolfAndreassen said: please back the fuck off and leave others alone.

Please stop discouraging people from introducing themselves to me in circumstances where it would be welcome.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T16:15:09.094Z · LW(p) · GW(p)

Well, it seems we have a conflict of interests. Do you agree?

If you do, do you think that it is fair to resolve it unilaterally in one direction? If you do not, what should be the compromise?

To concretize: some people (introverts? non-NTs? a sub-population defined some other way?) would prefer people-in-general to adopt a policy of not introducing oneself to strangers (at least in ways and circumstances such as described by pragmatist), because they prefer that people not introduce themselves to them personally.

Other people (extraverts? NTs? something else?) would prefer people-in-general to adopt a policy of introducing oneself to strangers, because they prefer that people introduce themselves to them personally.

Does this seem like a fair characterization of the situation?

If so, then certain solutions present themselves, some better than others.

We could agree that everyone should adopt one of the above policies. In such a case, those people who prefer the other policy would be harmed. (Make no mistake: harmed. It does no good to say that either side should "just deal with it". I recognize this to be true for those people who have preferences opposite to my own, as well as for myself.)

The alternative, by construction, would be some sort of compromise (a mixed policy? one with more nuance, or one sensitive to case-specific information? But it's not obvious to me what such a policy would look like), or a solution that obviated the conflict in the first place.

Your thoughts?

Replies from: Caspian, army1987
comment by Caspian · 2013-07-17T15:41:22.591Z · LW(p) · GW(p)

Well, it seems we have a conflict of interests. Do you agree?

Yes. We also have interests in common, but yes.

If you do, do you think that it is fair to resolve it unilaterally in one direction?

Better to resolve it after considering inputs from all parties. Beyond that it depends on specifics of the resolution.

If you do not, what should be the compromise?

To concretize: some people (introverts? non-NTs? a sub-population defined some other way?) would prefer people-in-general to adopt a policy of not introducing oneself to strangers (at least in ways and circumstances such as described by pragmatist), because they prefer that people not introduce themselves to them personally.

Several of the objections to the introduction suggest guidelines I would agree with: keep the introduction brief until the other person has had a chance to respond. Do not signal unwillingness to drop the conversation. Signaling the opposite may be advisable.

Other people (extraverts? NTs? something else?) would prefer people-in-general to adopt a policy of introducing oneself to strangers, because they prefer that people introduce themselves to them personally.

Yeah. Not that I always want to talk to someone, but sometimes I do.

Does this seem like a fair characterization of the situation?

Yes.

If so, then certain solutions present themselves, some better than others. We could agree that everyone should adopt one of the above policies. In such a case, those people who prefer the other policy would be harmed. (Make no mistake: harmed. It does no good to say that either side should "just deal with it". I recognize this to be true for those people who have preferences opposite to my own, as well as for myself.)

I think people sometimes conflate "it is okay for me to do this" with "this does no harm", "this does no harm that I am morally responsible for", and "this only does harm that someone else is morally responsible for, e.g. the victim".

The alternative, by construction, would be some sort of compromise (a mixed policy? one with more nuance, or one sensitive to case-specific information? But it's not obvious to me what such a policy would look like), or a solution that obviated the conflict in the first place. Your thoughts?

Working out such a policy could be a useful exercise. Some relevant information would be: when are introductions more or less bad, for those who prefer to avoid them.

comment by A1987dM (army1987) · 2013-07-16T11:22:12.139Z · LW(p) · GW(p)

But it's not obvious to me what such a policy would look like

Like this (I mean the first paragraph, not the second).

comment by savageorange · 2013-07-14T09:45:18.428Z · LW(p) · GW(p)

Oh and btw you totally reversed this:

they're too self-involved?

It's mainly about YOU being too self-involved. You can't control their self-involvement, really (although if they are willing, you can help with it indirectly), only try to moderate your own through appropriate action.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-14T17:22:29.013Z · LW(p) · GW(p)

Ah, yes, I see. My mistake, sorry. I retract that part of my comment.

comment by drethelin · 2013-07-14T03:16:58.677Z · LW(p) · GW(p)

I would like to put in a vote for the middle of the road on this one. I think Rolf is seriously overreacting, but I would probably be annoyed at a stranger starting up a conversation in that fashion.

comment by aelephant · 2013-07-14T01:46:11.541Z · LW(p) · GW(p)

It is pretty easy to indicate that you don't want to engage -- just don't engage. If someone asks you a question you don't want to answer, just don't answer. I would rather live in a world where people tried to be social & friendly to one another than one in which people censored themselves in an effort not to offend people.

Replies from: David_Gerard, drethelin
comment by David_Gerard · 2013-07-15T07:38:29.526Z · LW(p) · GW(p)

In general, if you suggest a course of action to others that includes the word "just", you may be doing it wrong.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T16:30:28.571Z · LW(p) · GW(p)

Very much this. Here's an excellent essay on the subject of "lullaby words", of which "just" is one. (The author suggests mentally replacing "just" with "have a lot of trouble to" in such formulations.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-15T18:15:49.296Z · LW(p) · GW(p)

Excellent essay. I strongly recommend it.

In addition to "just", it goes after "soon", "very soon", "should", "all", "only", "anything", and "all I gotta do".

comment by drethelin · 2013-07-15T16:30:45.942Z · LW(p) · GW(p)

it's not a question of a "world" where this happens, it's a question of a subset of the world where you're forced by circumstance to be very close to a person for very many hours. That's kind of like saying "I don't want to live in a world where you can't stretch your arms out without being considered rude." Yes that world would suck, but we're talking about a frigging airplane.

Replies from: aelephant
comment by aelephant · 2013-07-15T23:49:22.555Z · LW(p) · GW(p)

I get that, but my point stands that while you're forced to be very close to them, you're not forced to talk to them. You could even make up an excuse for not wanting to talk. For example, "Sorry, I don't want to chat right now, I'm going to try to take a nap" or "Sorry, I don't want to chat right now, I'm going to put my headphones on & listen to some music". This isn't Clockwork Orange where he's forcing your eyelids open.

Replies from: Desrtopa, drethelin
comment by Desrtopa · 2013-07-16T12:06:38.389Z · LW(p) · GW(p)

Making up excuses in such a situation is widely seen as rude. If you tell them something like "I'm going to try to take a nap," and do not proceed to take a nap, or at least fake one, you're liable to give offense.

In a situation like this, I would very likely be taking the time to think, and telling someone that you want to think instead of talk to them is widely viewed as rude because it privileges your thoughts over their company.

comment by drethelin · 2013-07-16T04:03:51.262Z · LW(p) · GW(p)

Yes your examples of phrases to say make it WAY easier to talk to strangers, and totally get rid of any awkwardness or anxiety someone might feel. I'm glad you solved this problem! You clearly have a wonderful understanding of folks with brains different from yours, and a lot of empathy for how they feel.

Replies from: aelephant
comment by aelephant · 2013-07-16T07:12:56.306Z · LW(p) · GW(p)

Your acerbic & condescending posts make it clear you also have a wonderful understanding of folks with brains different than yours & a lot of empathy for how they feel.

Replies from: gjm
comment by gjm · 2013-07-16T07:20:37.378Z · LW(p) · GW(p)

I would encourage other LWers to do as I have done and help downvote the parent and grandparent of this comment into oblivion, which is what both deserve for turning a discussion about how to be considerate of others into a personal flamewar.

comment by wedrifid · 2013-07-14T23:39:34.388Z · LW(p) · GW(p)

Data point: I would find this annoying to the point of producing seething, ulcerating rage. Please back the fuck off and leave others alone.

Someone prone to seething, ulcerating rage in response to an introduction tends to be unsafe to be around. Similarly, this expression of not-reasonably-provoked vitriol is unpleasant to be around. Please either fix your emotional issue or, at the very least, simply refrain from verbally abusing others on this particular part of the internet.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-07-15T00:53:29.723Z · LW(p) · GW(p)

I may have exaggerated a little for effect. Still, I do find it annoying when people, completely out of the blue, intrude on my private thoughts, entirely without provocation or any reason other than, apparently, enjoying the sound of their own voices.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-15T03:48:17.794Z · LW(p) · GW(p)

It's worse than that. Some such people apparently think that they are doing a good thing for you by intruding in this manner; that you will appreciate it.

The road to the introvert's hell is paved with the extravert's good intentions.

(Note for the literal-minded: "hell" is also an exaggeration for effect.)

comment by Kawoomba · 2013-07-14T07:43:03.302Z · LW(p) · GW(p)

While stereotypes about a set don't apply to all members of that set (of course), more often than not they are more applicable to that set than to the general population.

As such, it's interesting that your last name is a common one in Norway.

As it goes, what's the difference between a Norwegian introvert and a Norwegian extrovert?

When a Norwegian introvert talks to you, he stares at his shoes.

When a Norwegian extrovert talks to you, he stares at your shoes.

There are e.g. probably very few Americans who would feel "seething, ulcerating rage" at merely the offer of conversation during a flight (as long as they can reject it, and that rejection is accepted).

comment by Qiaochu_Yuan · 2013-07-13T21:20:51.879Z · LW(p) · GW(p)

On the one hand, I want to start such conversations. On the other hand, I have a problem with approaching people in situations where they cannot get away. If I make the person sitting next to me on a plane uncomfortable, they can't just leave the area.

Replies from: jsalvatier
comment by jsalvatier · 2013-07-13T23:49:01.842Z · LW(p) · GW(p)

One solution is to look for signs of disinterest or discomfort, and to disengage from the conversation if you see them. If you have trouble picking up on those things, you can always try being more passive and seeing whether they pick up the slack or try to engage you further.

comment by A1987dM (army1987) · 2013-07-14T23:19:40.344Z · LW(p) · GW(p)

I find it almost excruciatingly awkward to sit right next to somebody for hours without any communication except for quick glances. Why the hell do people do that?

Insofar as possible, I try to not even glance at people beside me on a plane unless I'm talking to them.

comment by Vaniver · 2013-07-15T04:31:12.553Z · LW(p) · GW(p)

I am an introvert and I enjoy talking to people on the plane. If they're boring, I put my headphones on and read my book. (If the other person has a book, notice how they interact with it. Wait about thirty seconds; if they're still holding their place with their finger / clearly not disengaging from it, put the conversational ball in their court, and see if they keep talking to you or go back to reading.)

I also recommend introducing yourself as soon as you sit down. They're unlikely to be deep in thought, and it can be awkward to do without a clear opening if you didn't do it immediately. I wasted about half an hour of a flight with a cute guy because I didn't do it on sitting down, and then didn't want to interrupt Skymall. (Thankfully, him finishing reading it was an opening, and then we talked for the remainder of the flight.)

comment by Shmi (shminux) · 2013-07-13T05:19:56.006Z · LW(p) · GW(p)

You ask them to help you find a lost puppy.

Replies from: None
comment by [deleted] · 2013-07-13T06:08:22.917Z · LW(p) · GW(p)

If I have lost a puppy,
I desire to believe that I have lost a puppy.
If I have not lost a puppy,
I desire to believe that I have not lost a puppy.
Let me not become attached to puppies I may not want.

comment by RomeoStevens · 2013-07-13T10:11:51.926Z · LW(p) · GW(p)

I'm in favor of making this a monthly (or more frequent) thread as a way of subtracting some bloat from open threads, in the same way the media threads do.

I also think that we should encourage lots of posts to these threads. After all, if you don't at least occasionally have a stupid question to ask, you're probably poorly calibrated on how many questions you should be asking.

Replies from: ciphergoth, Tenoke
comment by Paul Crowley (ciphergoth) · 2013-07-13T10:42:42.517Z · LW(p) · GW(p)

If no question you ask is ever considered stupid, you're not checking enough of your assumptions.

comment by Tenoke · 2013-07-13T12:48:26.894Z · LW(p) · GW(p)

After all, if you don't at least occasionally have a stupid question to ask, you're probably poorly calibrated on how many questions you should be asking.

Or, you know, you might be using Google for the questions that would be considered stupid. (In fact, for me the definition of a stupid question is one that could be answered by googling for a few minutes.)

Replies from: fubarobfusco
comment by fubarobfusco · 2013-07-13T18:21:08.073Z · LW(p) · GW(p)

Here's a possible norm:

If you'd like to ask an elementary-level question, first look up just one word — any word associated with the topic, using your favorite search engine, encyclopedia, or other reference. Then ask your question with some reference to the results you got.

comment by drethelin · 2013-07-13T04:04:29.530Z · LW(p) · GW(p)

Why does anyone care about anthropics? It seems like a mess of tautologies and thought experiments that pays no rent in anticipated experiences.

Replies from: pragmatist, drnickbone, bogdanb, Qiaochu_Yuan, satt, ikrase, Plasmon, benelliott, cousin_it, shminux, Will_Newsome, DanielLC
comment by pragmatist · 2013-07-13T09:02:31.395Z · LW(p) · GW(p)

An important thing to realize is that people working on anthropics are trying to come up with a precise inferential methodology. They're not trying to draw conclusions about the state of the world, they're trying to draw conclusions about how one should draw conclusions about the state of the world. Think of it as akin to Bayesianism. If someone read an introduction to Bayesian epistemology, and said "This is just a mess of tautologies (Bayes' theorem) and thought experiments (Dutch book arguments) that pays no rent in anticipated experience. Why should I care?", how would you respond? Presumably you'd tell them that they should care because understanding the Bayesian methodology helps people make sounder inferences about the world, even if it doesn't predict specific experiences. Understanding anthropics does the same thing (except perhaps not as ubiquitously).

So the point of understanding anthropics is not so much to directly predict experiences but to appreciate how exactly one should update on certain pieces of evidence. It's like understanding any other selection effect -- in order to properly interpret the significance of pieces of evidence you collect, you need to have a proper understanding of the tools you use to collect them. To use Eddington's much-cited example, if your net can't catch fish smaller than six inches, then the fact that you haven't caught any such fish doesn't tell you anything about the state of the lake you're fishing. Understanding the limitations of your data-gathering mechanism prevents you from making bad updates. And if the particular limitation you're considering is the fact that observations can only be made in regimes accessible to observers, then you're engaged in anthropic reasoning.
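
A throwaway simulation of Eddington's net (a sketch of mine, with made-up fish sizes), just to make the selection effect concrete:

```python
import random

# A net that can't catch fish under six inches yields the same-looking sample
# whether or not the lake actually contains small fish.
def catch(true_sizes, net_limit=6.0):
    return [s for s in true_sizes if s >= net_limit]

random.seed(0)
lake_with_small_fish = [random.uniform(1, 12) for _ in range(10_000)]
lake_without_small_fish = [random.uniform(6, 12) for _ in range(10_000)]

for lake in (lake_with_small_fish, lake_without_small_fish):
    sample = catch(lake)
    print(round(min(sample), 2), round(max(sample), 2))
# Both samples start at ~6 inches; the absence of small fish in the catch
# tells you nothing about whether the lake contains them.
```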

Paul Dirac came up with a pretty revisionary cosmological theory based on several apparent "large number coincidences" -- important large (and some small) numbers in physics that all seem to be approximate integer powers of the Hubble age of the universe. He argued that it is implausible that we just happen to find ourselves at a time when these simple relationships hold, so they must be law-like. Based on this he concluded that certain physical constants aren't really constant; they change as the universe ages. R. H. Dicke showed (or purported to show) that at least some of these coincidences can be explained when one realizes that observers can only exist during a certain temporal window in the universe's existence, and that the timing of this window is related to a number of other physical constants (since it depends on facts about the formation and destruction of stars, etc.). If it's true that observers can only exist in an environment where these large number relationships hold, then it's a mistake to update our beliefs about natural laws based on these relationships. So that's an example of how understanding the anthropic selection effect might save us (and not just us, but also superhumans like Dirac) from bad updates.

So much for anthropics in general, but what about the esoteric particulars -- SSA, SIA and all that. Well, here's the basic thought: Dirac's initial (non-anthropic) move to his new cosmological theory was motivated by the belief that it is extraordinarily unlikely that the large number coincidences are purely due to chance, that we just happen to be around at a time when they hold. This kind of argument has a venerable history in physics (and other sciences, I'm sure) -- if your theory classifies your observed evidence as highly atypical, that's a significant strike against the theory. Anthropic reasoning like Dicke's adds a wrinkle -- our theory is allowed to classify evidence as atypical, as long as it is not atypical for observers. In other words, even if the theory says phenomenon X occurs very rarely in our universe, an observation of phenomenon X doesn't count against it, as long as the theory also says (based on good reason, not ad hoc stipulation) that observers can only exist in those few parts of the universe where phenomenon X occurs. Atypicality is allowed as long as it is correlated with the presence of observers.

But only that much atypicality is allowed. If your theory posits significant atypicality that goes beyond what selection effects can explain, then you're in trouble. This is the insight that SSA, SIA, etc. seek to precisify. They are basically attempts to update the Diracian "no atypicality" strategy to allow for the kind of atypicality that anthropic reasoning explains, but no more atypicality than that. Perhaps they are misguided attempts for various reasons, but the search for a mathematical codification of the "no atypicality" move is important, I think, because the move gets used imprecisely all the time anyway (without explicit invocation, most of the time) and it gets used without regard for important observation selection effects.

Replies from: Error
comment by Error · 2013-07-15T15:22:50.046Z · LW(p) · GW(p)

In other words, even if the theory says phenomenon X occurs very rarely in our universe, an observation of phenomenon X doesn't count against it [...] Atypicality is allowed as long as it is correlated with the presence of observers.

I read this as: Rather than judging our theory based on p(X), judge it based on p(X | observers exist). Am I interpreting you right?

Replies from: pragmatist
comment by pragmatist · 2013-07-16T12:20:01.037Z · LW(p) · GW(p)

It's a bit more complicated than that, I think. We're usually dealing with a situation where p(X occurs somewhere | T) -- where T is the theory -- is high. However, the probability of X occurring in a particular human-scale space-time region (or wave-function branch or global time-slice or universe or...) given T is very low. This is what I mean by X being rare. An example might be life-supporting planets or (in a multiversal context) fundamental constants apparently fine-tuned for life.

So the naïve view might be that an observation of X disconfirms the theory, based on the Copernican assumption that there is nothing very special about our place in the universe, whereas the theory seems to suggest that our place is special -- it's one of those rare places where we can see X.

But this disconfirmation only works if you assume that the space-time regions (or branches or universes or...) inhabited by observers are uncorrelated with those in which X occurs. If our theory tells us that those regions are highly correlated -- if p(X occurs in region Y | T & observers exist in region Y) >> p(X occurs in region Y | T) -- then our observation of X doesn't run afoul of the Copernican assumption, or at least a reasonably modified version of the Copernican assumption which allows for specialness only in so far as that specialness is required for the existence of observers.
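
A toy numerical version of that comparison (my own sketch, assuming for simplicity that observers can only arise in regions where X holds):

```python
# Two hypothetical theories differ only in how common X is per region;
# both assume observers arise only in X-regions (a simplifying assumption).
theories = {"X rare": 1e-6, "X common": 0.5}
prior = {name: 0.5 for name in theories}

def naive_likelihood(q_x):
    # P(a randomly chosen region shows X | theory)
    return q_x

def anthropic_likelihood(q_x):
    # P(our region shows X | theory, we exist here as observers) = 1,
    # since observers only arise where X holds.
    return 1.0

for label, likelihood in (("naive", naive_likelihood), ("anthropic", anthropic_likelihood)):
    unnorm = {name: prior[name] * likelihood(q) for name, q in theories.items()}
    total = sum(unnorm.values())
    print(label, {name: round(p / total, 6) for name, p in unnorm.items()})
# naive: the posterior piles onto "X common"; anthropic: the priors are untouched,
# because the rarity of X is exactly the rarity that the observers screen off.
```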

comment by drnickbone · 2013-07-13T09:36:11.287Z · LW(p) · GW(p)

If you taboo "anthropics" and replace by "observation selection effects" then there are all sorts of practical consequences. See the start of Nick Bostrom's book for some examples.

The other big reason for caring is the "Doomsday argument" and the fact that all attempts to refute it have so far failed. Almost everyone who's heard of the argument thinks there's something trivially wrong with it, but all the obvious objections can be dealt with; e.g. look later in Bostrom's book. Further, alternative approaches to anthropics (such as the "self indication assumption"), or attempts to completely bypass anthropics (such as "full non-indexical conditioning"), have been developed to avoid the Doomsday conclusion. But very surprisingly, they end up reproducing it. See Katja Grace's thesis.
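
For concreteness, a minimal sketch of the vanilla SSA-style Doomsday calculation (the birth rank and both hypotheses below are illustrative assumptions of mine, not figures from Bostrom or Grace):

```python
# If you are the r-th human ever born and N humans will ever exist, SSA-style
# sampling gives P(birth rank r | N) = 1/N for r <= N, else 0.
r = 6e10  # rough birth rank of a present-day human (an assumed round number)
hypotheses = {"doom soon (2e11 humans ever)": 2e11,
              "doom late (2e14 humans ever)": 2e14}
prior = {name: 0.5 for name in hypotheses}

unnorm = {name: prior[name] * (1.0 / N if r <= N else 0.0)
          for name, N in hypotheses.items()}
total = sum(unnorm.values())
print({name: round(p / total, 4) for name, p in unnorm.items()})
# The 1/N likelihood shifts almost all posterior mass onto "doom soon",
# which is the (much-disputed) force of the argument.
```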

Replies from: timtyler, Manfred, drethelin
comment by timtyler · 2013-07-14T11:18:50.081Z · LW(p) · GW(p)

The other big reason for caring is the "Doomsday argument" and the fact that all attempts to refute it have so far failed.

Jaan Tallinn's attempt: Why Now? A Quest in Metaphysics. The "Doomsday argument" is far from certain.

Given the (observed) information that you are a 21st century human, the argument predicts that there will be a limited number of those. Well, that hardly seems news - our descendants will evolve into something different soon enough. That's not much of a "Doomsday".

Replies from: drnickbone
comment by drnickbone · 2013-07-14T13:20:02.257Z · LW(p) · GW(p)

I described some problems with Tallinn's attempt here - under that analysis, we ought to find ourselves a fraction of a second pre-singularity, rather than years or decades pre-singularity.

Also, any analysis which predicts we are in a simulation runs into its own version of doomsday: unless there are strictly infinite computational resources, our own simulation is very likely to come to an end before we get to run simulations ourselves. (Think of simulations and sims-within-sims as like a branching tree; in a finite tree, almost all civilizations will be in one of the leaves, since they greatly outnumber the interior nodes.)

Replies from: timtyler
comment by timtyler · 2013-07-14T23:50:10.384Z · LW(p) · GW(p)

I described some problems with Tallinn's attempt here - under that analysis, we ought to find ourselves a fraction of a second pre-singularity, rather than years or decades pre-singularity.

We seem pretty damn close to me! A decade or so is not very long.

(Think of simulations and sims-within-sims as like a branching tree; in a finite tree, almost all civilizations will be in one of the leaves, since they greatly outnumber the interior nodes.)

In a binary tree (for example), the internal nodes and the leaves are roughly equal in number.

Replies from: drnickbone
comment by drnickbone · 2013-07-16T07:27:51.221Z · LW(p) · GW(p)

Remember that in Tallinn's analysis, post-singularity civilizations run a colossal number of pre-singularity simulations, with the number growing exponentially up to the singularity (basically they want to explore lots of alternate histories, and these grow exponentially). I suppose Tallinn's model could be adjusted so that they only explore "branch-points" in their simulations every decade or so, but that is quite arbitrary and implausible. If the simulations branch every year, we should expect to be in the last year; if they branch every second, we should be in the last second.

On your second point, if each post-singularity civilization runs an average of m simulations, then the chance of being in an internal node (a civilization which eventually runs sims) rather than a leaf (a simulation which never gets to run its own sims in turn) is about 1/m. The binary tree corresponds to m=2, but why would a civilization run only 2 sims, when it is capable of running vastly more? In both Tallinn's and Bostrom's analysis, m is very much bigger than 2.
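
A quick numerical check of that 1/m figure (a toy sketch of mine; the depth and branching factors below are arbitrary):

```python
# In a complete m-ary tree of depth d, count interior nodes (civilizations
# that go on to run sims) against leaves (sims that never run their own).
def interior_fraction(m, d):
    interior = sum(m**k for k in range(d))  # levels 0 .. d-1
    leaves = m**d
    return interior / (interior + leaves)

for m in (2, 10, 1000):
    print(m, round(interior_fraction(m, d=5), 6), 1 / m)
# m=2 gives roughly a half (the binary-tree point above); for large m the
# fraction of interior nodes falls to about 1/m.
```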

Replies from: timtyler, Eugine_Nier
comment by timtyler · 2013-07-21T13:42:47.918Z · LW(p) · GW(p)

I suppose Tallinn's model could be adjusted so that they only explore "branch-points" in their simulations every decade or so, but that is quite arbitrary and implausible. If the simulations branch every year, we should expect to be in the last year; if they branch every second, we should be in the last second.

More likely, there is a range of historical "tipping points" that they might want to explore, perhaps including the invention of language and the origin of humans.

On your second point, if each post-singularity civilization runs an average of m simulations, then the chance of being in an internal node (a civilization which eventually runs sims) rather than a leaf (a simulation which never gets to run its own sims in turn) is about 1/m. The binary tree corresponds to m=2, but why would a civilization run only 2 sims, when it is capable of running vastly more? In both Tallinn's and Bostrom's analysis, m is very much bigger than 2.

Surely the chance of being in a simulated world depends somewhat on its size. The chance of a sim running simulations also depends on its size. A large world might have a high chance of running simulations, while a small world might have a low chance. Averaging over worlds of such very different sizes seems pretty useless, but any average number of simulations run per world would probably be low, since so many sims would be leaf nodes and so would run no simulations themselves. Leaves might be more numerous, but they will also be smaller, and less likely to contain many observers.

comment by Eugine_Nier · 2013-07-17T03:25:35.842Z · LW(p) · GW(p)

Remember that in Tallinn's analysis, post-singularity civilizations run a colossal number of pre-singularity simulations, with the number growing exponentially up to the singularity (basically they want to explore lots of alternate histories, and these grow exponentially).

What substrate are they running these simulations on?

Replies from: drnickbone
comment by drnickbone · 2013-07-17T07:37:53.451Z · LW(p) · GW(p)

I had another look at Tallinn's presentation, and it seems he is rather vague on this... rather difficult to know what computing designs super-intelligences would come up with! However, presumably they would use quantum computers to maximize the number of simulations they could create, which is how they could get branch-points every simulated second (or even more rapidly). Bostrom's original simulation argument provides some lower bounds - and references - on what could be done using just classical computation.

comment by Manfred · 2013-07-13T21:21:41.431Z · LW(p) · GW(p)

all attempts to refute it have so far failed.

Well. The claims that it's relevant to our current information state have been refuted pretty well.

Replies from: drnickbone
comment by drnickbone · 2013-07-13T23:25:43.462Z · LW(p) · GW(p)

Citation needed (please link to a refutation).

Replies from: Manfred
comment by Manfred · 2013-07-14T01:25:23.710Z · LW(p) · GW(p)

I'm not aware of any really good treatments. I can link to myself claiming that I'm right, though. :D

I think there may be a selection effect - once the doomsday argument seems not very exciting, you're less likely to talk about it.

comment by drethelin · 2013-07-13T20:29:27.751Z · LW(p) · GW(p)

The doomsday argument is itself anthropic thinking of the most useless sort.

Replies from: drnickbone
comment by drnickbone · 2013-07-13T23:23:09.525Z · LW(p) · GW(p)

Citation needed (please link to a refutation).

Replies from: drethelin
comment by drethelin · 2013-07-14T03:11:38.796Z · LW(p) · GW(p)

I don't need a refutation. The doomsday argument doesn't affect anything I can or will do. I simply don't care about it. It's like a claim that I will probably be eaten at any point in the next 100 years by a random giant tiger.

comment by bogdanb · 2013-07-13T20:22:34.258Z · LW(p) · GW(p)

Take Bayes’ theorem: P(H|O) = P(O|H) × P(H) / P(O). If H is a hypothesis and O is an observation, P(O|H) means “what is the probability of making that observation if the hypothesis is true?”

If a hypothesis has as a consequence "nobody can observe O" (say, because no humans can exist), then that P(O|H) is 0 (actually, it's about the probability that you didn't get the consequence right). Which means that, once you've made the observation, you will probably decide that the hypothesis is unlikely. However, if you don't notice that consequence, you might decide that P(O|H) is large, and incorrectly assign high likelihood to the hypothesis.

For a completely ridiculous example, imagine that there's a deadly cat-flu epidemic; it gives 90% of cats that catch it a runny nose. Your cat's nose becomes runny. You might be justified in thinking that your cat likely got cat-flu. However, if you know that in all cases the cat's owner dies of the flu before the cat has any symptoms, the conclusion would be the opposite. (Since, if it were the flu, you wouldn't see the cat's runny nose, because you'd be dead.) The same evidence, opposite effect.

Anthropics is kind of the same thing, except you’re mostly guessing about the flu.
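
To put toy numbers on the cat-flu example (the prior and the no-flu runny-nose rate below are made up for illustration):

```python
def p_flu_given_runny_nose(p_obs_given_flu, p_obs_given_no_flu, prior_flu=0.2):
    num = p_obs_given_flu * prior_flu
    return num / (num + p_obs_given_no_flu * (1 - prior_flu))

# Ignoring the selection effect: P(you see a runny nose | flu) = 0.9,
# and the observation looks like decent evidence for flu.
print(p_flu_given_runny_nose(0.9, 0.3))   # ~0.43, up from the 0.2 prior

# Accounting for it: if the flu always kills the owner before the cat shows
# symptoms, P(YOU see the runny nose | flu) ~= 0, and the same observation
# now argues against flu.
print(p_flu_given_runny_nose(0.0, 0.3))   # 0.0
```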

comment by Qiaochu_Yuan · 2013-07-13T04:51:41.385Z · LW(p) · GW(p)

The obvious application (to me) is figuring out how to make decisions once mind uploading is possible. This point is made, for example, in Scott Aaronson's The Ghost in the Quantum Turing Machine. What do you anticipate experiencing if someone uploads your mind while you're still conscious?

Anthropics also seems to me to be relevant to the question of how to do Bayesian updates using reference classes, a subject I'm still very confused about and which seems pretty fundamental. Sometimes we treat ourselves as randomly sampled from the population of all humans similar to us (e.g. when diagnosing the probability that we have a disease given that we have some symptoms) and sometimes we don't (e.g. when rejecting the Doomsday argument, if that's an argument we reject). Which cases are which?

Replies from: ESRogs
comment by ESRogs · 2013-07-13T06:39:34.249Z · LW(p) · GW(p)

figuring out how to make decisions once mind uploading is possible

Or even: deciding how much to care about experiencing pain during an operation if I'll just forget about it afterwards. This has the flavor of an anthropics question to me.

comment by satt · 2013-07-21T10:33:02.337Z · LW(p) · GW(p)

Possible example of an anthropic idea paying rent in anticipated experiences: anthropic shadowing of intermittent observer-killing catastrophes of variable size.

comment by ikrase · 2013-07-13T08:04:16.297Z · LW(p) · GW(p)

I'd add that the Doomsday argument in particular seems like it should be demolished by even the slightest evidence as to how long we have left.

comment by Plasmon · 2013-07-13T05:39:39.569Z · LW(p) · GW(p)

There's a story about anthropic reasoning being used to predict properties of the processes which produce carbon in stars, before these processes were known. (apparently there's some debate about whether or not this actually happened)

comment by benelliott · 2013-07-14T02:30:35.884Z · LW(p) · GW(p)

It seems like a mess of tautologies and thought experiments

My own view is that this is precisely correct, and exactly why anthropics is interesting: we really should have a good, clear approach to it, and the fact that we don't suggests there is still work to be done.

comment by cousin_it · 2013-07-13T08:42:13.129Z · LW(p) · GW(p)

Not sure about anthropics, but we need decision theories that work correctly with copies, because we want to build AIs, and AIs can make copies of themselves.

comment by Shmi (shminux) · 2013-07-13T04:31:54.137Z · LW(p) · GW(p)

This question has been bugging me for the last couple of years here. Clearly Eliezer believes in the power of anthropics, otherwise he would not bother with MWI as much, or with some of his other ideas, like the recent writeup about leverage. Some of the reasonably smart people out there discuss SSA and SIA. And the Doomsday argument. And don't get me started on Boltzmann brains...

My current guess is that in fields where experimental testing is not readily available, people settle for what they can get. Maybe anthropics helps one pick a promising research direction, I suppose. I'm just trying (unsuccessfully) to steelman the idea.

comment by Will_Newsome · 2013-07-13T12:32:31.819Z · LW(p) · GW(p)

I care about anthropics because from a few intuitive principles that I find interesting for partially unrelated reasons (mostly having to do with wanting to understand the nature of justification so as to build an AGI that can do the right thing) I conclude that I should expect monads (programs, processes; think algorithmic information theory) with the most decision-theoretic significance (an objective property because of assumed theistic panpsychism; think Neoplatonism or Berkeleyan idealism) to also have the most let's-call-it-conscious-experience. So I expect to find myself as the most important decision process in the multiverse. Then at various moments the process that is "me" looks around and asks, "do my experiences in fact confirm that I am plausibly the most important agent-thingy in the multiverse?", and if the answer is no, then I know something is wrong with at least one of my intuitive principles, and if the answer is yes, well then I'm probably psychotically narcissistic and that's its own set of problems.

comment by DanielLC · 2013-07-13T04:59:10.425Z · LW(p) · GW(p)

It tells you when to expect the end of the world.

comment by James_Miller · 2013-07-13T13:23:40.392Z · LW(p) · GW(p)

Do you build willpower in the long-run by resisting temptation? Is willpower, in the short-term at least, a limited and depletable resource?

Replies from: Kaj_Sotala, bramflakes, taelor, gothgirl420666
comment by Kaj_Sotala · 2013-07-13T14:58:32.288Z · LW(p) · GW(p)

Is willpower, in the short-term at least, a limited and depletable resource?

I felt that Robert Kurzban presented a pretty good argument against the "willpower as a resource" model in Why Everyone (Else) Is a Hypocrite:

[After criticizing studies trying to show that willpower is a resource that depends on glucose]

What about the more general notion that “willpower” is a “resource” that gets consumed or expended when one exerts self-control? First and foremost, let’s keep in mind that the idea is inconsistent with the most basic facts about how the mind works. The mind is an information-processing device. It’s not a hydraulic machine that runs out of water pressure or something like that. Of course it is a physical object, and of course it needs energy to operate. But mechanics is the wrong way to understand, or explain, its action, because changes in complex behavior are due to changes in information processing. The “willpower as resource” view abandons these intellectual gains of the cognitive revolution, and has no place in modern psychology. That leaves the question, of course, about what is going on in these studies.

Let’s back up for a moment and think about what the function of self-control might be. Taking the SATs, keeping your attention focused, and not eating cookies all feel more or less unpleasant, but it’s not like spraining your ankle or running a marathon, where the unpleasant sensations are easy to understand from a functional point of view. The feelings of discomfort are probably the output of modules designed to compute costs. When your ankle is sprained, putting weight on it is costly because you can damage it further. When you have been running for a long time, the chance of a major injury goes up. These sensations, then, are probably evolution’s way of getting you to keep your weight off the joint and stop doing all that running, respectively.

There’s nothing obviously analogous for not eating cookies or doing word problems. Why does it feel like something, anything at all, to (not) do these things? As we’ve seen, lots of other stuff happens in your head, all the time, and it doesn’t feel like anything. Further, given that it seems as if exerting self-control is a good thing, that is, that it generally leads to outcomes that might be expected to yield fitness benefits, you might expect that exerting self-control would feel good and easy. Why does it seem hard, and feel even harder over time? What is the sensation of “effort” designed to get you to do?

One reason it seems hard might derive from that fact that “exerting self-control” entails incurring immediate costs in various forms, and “effort” is the representation of these costs. Consider not eating a cookie. There are probably modules in your mind that are designed to compute the benefits of eating nice calorie packages. They’re wired up to the senses, designed to calculate just how good (in the evolutionary sense) eating the calorie package is. From the point of view of these modules, not eating the cookie is a cost, in particular, the lost calories in the cookie. So, the sensation of the effort of not eating it—”temptation”—is probably evolution’s way of getting you to eat the cookie, just as the sensation of pain is evolution’s way of getting you to stay off your sprained ankle. In both cases, the experience is the output of a module designed to compute costs.

The same argument applies to other opportunities, and they take various forms. In some experiments, subjects are told to ignore words flashing on a computer screen, something that feels quite effortful. Why? Well, not reading words on a screen carries a loss of information: What did those words say? A similar argument applies regarding Ariely’s work on decision making during sexual arousal, which we looked at earlier in this chapter. The reason that subjects respond to those survey questions when they are aroused is probably because the mechanisms designed to take advantage of mating opportunities are computing benefits in the environment, though they are being fooled by the fact that the images they are getting are pictures rather than actual people.

Is it also a cost to solve word problems? Sure, but the cost isn’t caloric. Solving word problems requires the use of certain fancy modules, and when one is doing one of these tasks, these modules are kept busy. This means that doing these tasks carries real (opportunity) costs: all the things that these modules could be doing but are not because they are engaged. It’s not unlike what happens when you start up some big piece of software on your computer: Other things suffer, necessarily. Starting up software carries these costs. Working on word problems, similarly, prevents you from using important modular systems from doing other tasks.

So, instead of a resource view, my view is that the issue is more of an effort monitor—an "effortometer" in the mind. My guess is that the reason it feels like something to pay close attention to something, solve hard problems, or avoid eating cookies is that doing these things is costly from the perspective of certain modules. The feeling of "mental effort," on this view, is like a counter, adding up all these opportunity costs to determine if it's worth continuing to do whatever one is doing. As these costs get higher—either because one is doing the task for a while, or for some other reason—the effortometer counts higher, giving rise to the sensation of effort, and also giving the impatient modules more and more of an edge.

If I’m working on word problems—but not getting anywhere—using my modules in this way isn’t doing much good, so maybe I should stop. Interestingly, as illustrated by the results of the studies described above, the effect seems to extend from one task to another, even if the tasks are quite different.

This idea suggests that a mechanism is needed that performs these computations, weighing the costs and benefits of doing tasks that make use of certain modules. Some modules are counting up these costs, and when the effortometer increases, there is less suppression of the short-term modules—it’s time to move on. So, it’s not “willpower” that’s exhausted—it’s that the ratio of costs to reward is too high to justify continuing. As Baumeister himself indicated, “it is adaptive to give up early on unsolvable problems. Persistence is, after all, only adaptive and productive when it leads to eventual success.”

The effortometer view suggests a way to “reset” or at least reduce the count. Suppose we give subjects a reward, such as a small gift, or even light praise; this ought to “reset” the counter, just as when a foraging animal’s time is rewarded by finding food morsels. Diane Tice and colleagues conducted some work in which some subjects were told not to think of a white bear,* and others were not. The idea was that not thinking of a white bear takes some “willpower,” and when you’ve just used your willpower, you have less of it left to use in the next task, which was drinking an unpleasant beverage. They found that if you have to suppress thinking of a white bear, you can’t drink as much of the awful Kool-Aid. So, that looks good for a “resource” model. Your willpower sponge has been squeezed out.

Some subjects were, however, given a small gift after suppressing thinking of a white bear. These subjects were able to drink just as much of the nasty stuff as those who were at liberty to think of as many white bears as they wanted. That is, their “willpower” seems to have been restored, making them able to endure the foul-tasting beverage.

These findings are very hard to accommodate with a “resource” model. If my self-control sponge is squeezed dry by not thinking of a white bear, a gift shouldn’t help me exert willpower—I’m all out of it. (And certainly the gift didn’t increase the amount of glucose in my body.) In contrast, this finding fits very well with the effortometer model. If the effortometer is monitoring reward, then a gift resets it, and ought to improve subsequent self-control tasks.

Elsewhere in the book (I forget where) he also notes that the easiest explanation for why people go low on willpower when hungry is simply that a situation where your body urgently needs food is a situation where your brain considers everything that's not directly related to acquiring food to have a very high opportunity cost. That seems like a more elegant and realistic explanation than the common folk-psychological one, which seems to suggest something like willpower being a resource that you lose when you're hungry or tired. It's more a question of the evolutionary tradeoffs being different when you're hungry or tired, which leads to different cognitive costs.

Replies from: Caspian, NancyLebovitz, DanielLC
comment by Caspian · 2013-07-15T14:48:55.624Z · LW(p) · GW(p)

I now plan to split up long boring tasks into short tasks, with a little celebration of completion as the reward after each one. I actually decided to try this after reading Don't Shoot the Dog, which I think I saw recommended on Less Wrong. It got me a somewhat more productive weekend. If it does stop helping, I suspect it will be because the reward stops being fun.

comment by NancyLebovitz · 2013-07-14T13:26:38.304Z · LW(p) · GW(p)

I would assume that thinking does take calories, and so does having an impulse and then overriding it.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-14T18:58:49.205Z · LW(p) · GW(p)

Kurzban on that:

Clarke and Sokoloff remarked way back in the nineties that a “fashionable” view “equates concentrated mental effort with mental work,” but that “there appears to be no increased energy utilization by the brain during such processes.”52 A more recent review concluded that it is “unlikely that the blood glucose changes observed during and after a difficult cognitive task are due to increased brain glucose uptake.”53

Now, I'm not an expert on the brain's consumption of glucose, but you don't actually have to be an expert physiologist to notice something is amiss. Subjects in this literature who do a few minutes of a “self-control task” are referred to as “depleted.” What, precisely, is missing? Consider that in the radish/cookie experiment, subjects’ brains in both conditions have very similar sets of modules that are active. Basically, everything brains normally do is still going on—the senses, memory, monitoring autonomic activity, and so on. In the radish condition, some modules are, presumably, inhibiting others from causing the subject to indulge in the cookies. I don't really know if it is possible to estimate what fraction of modules differ between these two conditions. My guess is that this number would be small. Could these extra modules be draining the brain of glucose?

Consider that the entire brain uses about .25 calories per minute.54 If we suppose that the “self-control” task increases overall brain metabolism by 10%—a very large estimate55—then the brains of subjects who do one of these tasks for five minutes, who are categorized as “depleted,” have consumed an extra 0.125 calories. Does it seem right that you need 100 calories from lemonade to compensate for a tenth of a Tic Tac?56 Even worse for the glucose model, performance on “self-control” tasks should be much lower after exercise, which consumes orders of magnitude more glucose. However, research in this area shows exactly the reverse.57

Footnotes:

52 Clarke & Sokoloff 1998, p. 673.

53 Messier 2004, p. 39.

54 Clarke & Sokoloff 1998, p. 660.

55 I have in mind here evidence from imaging (PET, fMRI), in which percentage changes are small, and of course restricted to particular regions. See, e.g., Madsen et al. 1995.

56 Note also that in this work, researchers use Splenda for the control. While sucralose, which gives rise to the sensation of sweetness, is itself not metabolized, Splenda packets contain carbohydrates in the medium in which sucralose is delivered, and so have about 3 calories. The “zero calorie control” in these studies has an order of magnitude more calories than this (very large over-) estimate of how many calories are consumed. Note also that performance on physically taxing tasks (riding a stationary cycle) can be improved by simply swishing a sugar solution around in one's mouth (Chambers, Bridge, & Jones 2009). It could be that concentrated sugar in the mouth activates reward systems, which would explain why lemonade has this effect.

57 See, for example, Tomporowski 2003.

Cited references:

Chambers E. S., Bridge, M. W., & Jones, D. A. (2009). Carbohydrate sensing in the human mouth: effects on exercise performance and brain activity. Journal of Physiology, 587, 1779–1794.

Clarke, D. D., & Sokoloff, L. (1998). Circulation and energy metabolism of the brain. In G. Siegel, B. Agranoff, R. Albers, S. Fisher, & M. Uhler (eds.), Basic neurochemistry: Molecular, cellular, and medical aspects (6th ed.) (pp. 637–669). Philadelphia, PA: Lippincott-Raven.

Madsen, P. L., Hasselbalch, S. G., Hagemann, L. P., Olsen, K. S., Bulow, J., Holm, S., Wildschiødtz, G., Paulson, O. B., & Lassen, N. A. (1995). Persistent resetting of the cerebral oxygen/glucose uptake ratio by brain activation: Evidence obtained with the Kety-Schmidt technique. Journal of Cerebral Blood Flow and Metabolism, 15, 485–491.

Messier, C. (2004). Glucose improvement of memory: A review. European Journal of Pharmacology, 490, 33–57.

Tomporowski, P. D. (2003). Effects of acute bouts of exercise on cognition. Acta Psychologica, 112, 297–324.

comment by DanielLC · 2013-07-13T20:41:44.050Z · LW(p) · GW(p)

But what's the explanation for people going low on willpower after exerting willpower?

Replies from: CAE_Jones
comment by CAE_Jones · 2013-07-14T00:33:36.621Z · LW(p) · GW(p)

My reading of the passage Kaj_Sotala quoted is that the brain is decreasingly likely to encourage exerting will toward a thing the longer it goes without reward. In a somewhat meta way, that could be seen as will power as a depletable resource, but the reward need not adjust glucose levels directly.

Replies from: DanielLC
comment by DanielLC · 2013-07-14T04:57:17.887Z · LW(p) · GW(p)

I never suspected it had anything to do with glucose. I'd guess that it's something where people with more willpower didn't do as well in the ancestral environment, since they did more work than strictly necessary, so we evolved to have it as a depletable resource.

comment by bramflakes · 2013-07-13T13:44:29.811Z · LW(p) · GW(p)

I don't know about the first question, but for the second: yes.

Replies from: Kaj_Sotala, army1987
comment by Kaj_Sotala · 2013-07-13T15:03:08.907Z · LW(p) · GW(p)

Apparently the answer to the second question depends on what you believe the answer to the second question to be.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-14T11:12:26.404Z · LW(p) · GW(p)

Interesting. So the willpower seems to be in the mind. Who would have guessed that? :D

How can we exploit this information to get more willpower? The first idea is to give yourself rewards for using willpower successfully. Imagine that you keep a notebook with you, and every time you resist a temptation, you give yourself a "victory point". For ten victory points, you buy and eat a chocolate (or whatever would be your favorite reward). Perhaps for succumbing to a temptation, you might lose a point or two.

Perhaps this could rewire the brain, so it goes from "I keep resisting and resisting, but there is no reward, so I guess I better give up" to "I keep resisting and I already won for myself a second chocolate; let's do some more resisting".

But how to deal with long-term temptation? Like, I give myself a point in the morning for not going to reddit, but now it's two hours later, I still have to resist the temptation, but I will not get another point for that, so my brain expects no more rewards. Should I perhaps get a new point every hour or two?

Also, it could have the perverse effect of making you notice more possible temptations. Because, you know, you only reward yourself a point for temptations you notice and resist.
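
A throwaway sketch of the bookkeeping (the threshold, penalty, and reward below are placeholders, nothing tested):

```python
class VictoryPoints:
    """Toy tracker for the notebook scheme described above."""

    def __init__(self, reward_threshold=10, penalty=1):
        self.points = 0
        self.reward_threshold = reward_threshold
        self.penalty = penalty

    def resisted(self):
        self.points += 1
        if self.points >= self.reward_threshold:
            self.points -= self.reward_threshold
            print("Reward earned: have the chocolate (or whatever you chose).")

    def succumbed(self):
        self.points = max(0, self.points - self.penalty)

tracker = VictoryPoints()
for _ in range(10):
    tracker.resisted()   # on the tenth resistance, the reward fires
```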

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-08-18T13:38:45.853Z · LW(p) · GW(p)

I think that's cheating. Willpower is the ability to do unpleasant activities in exchange for positive future consequences. The chocolate / victory point is shifting the reward into the present, eliminating the need for willpower by providing immediate gratification.

(No one said cheating was a bad thing, of course)

comment by A1987dM (army1987) · 2013-07-13T14:13:01.756Z · LW(p) · GW(p)

I once heard of a study finding that the answer is “yes” also for the first question. (Will post a reference if I find it.)

And the answer to the second question might be “yes” only for young people.

comment by taelor · 2013-07-14T17:31:19.700Z · LW(p) · GW(p)

In About Behaviorism (which I unfortunately don't currently own a copy of, so I can't give direct quotes or citations), B. F. Skinner makes the case that the "Willpower" phenomenon actually reduces to operant conditioning and schedules of reinforcement. Skinner claims that people who have had their behavior consistently reinforced in the past will become less sensitive to a lack of reinforcement in the present, and may persist in behavior even when positive reinforcement isn't forthcoming in the short term, whereas people whose past behavior has consistently failed to be reinforced (or even been actively punished) will abandon a course of action much more quickly when it fails to immediately pay off. Both groups will eventually give up at an unreinforced behavior, though the former group will typically persist much longer at it than the latter. This gives rise to the "willpower as resource" model, as well as the notion that some people have more willpower than others. Really, people with "more willpower" have just been conditioned to wait longer for their behaviors to be reinforced.

comment by gothgirl420666 · 2013-07-13T14:44:08.064Z · LW(p) · GW(p)

The standard metaphor is "willpower is like a muscle". This implies that by regularly exercising it, you can strengthen it, but also that if you use it too much in the short term, it can get tired quickly. So yes and yes.

comment by FiftyTwo · 2013-07-13T05:42:09.552Z · LW(p) · GW(p)

Why is everyone so interested in decision theory? Especially the increasingly convoluted variants with strange acronyms that seem to be popping up.

Replies from: Qiaochu_Yuan, gothgirl420666
comment by Qiaochu_Yuan · 2013-07-13T06:01:46.725Z · LW(p) · GW(p)

As far as I can tell, LW was created explicitly with the goal of producing rationalists, one desirable side effect of which was the creation of friendly AI researchers. Decision theory plays a prominent role in Eliezer's conception of friendly AI, since a decision theory is how the AI is supposed to figure out the right thing to do. The obvious guesses don't work in the presence of things like other agents that can read the AI's source code, so we need to find some non-obvious guesses because that's something that could actually happen.

Replies from: Adele_L
comment by Adele_L · 2013-07-13T17:24:01.255Z · LW(p) · GW(p)

Hey, I think your tone here comes across as condescending, which goes against the spirit of a 'stupid questions' thread, by causing people to believe they will lose status by posting in here.

Replies from: Qiaochu_Yuan, wwa
comment by Qiaochu_Yuan · 2013-07-13T17:29:22.623Z · LW(p) · GW(p)

Fair point. My apologies. Getting rid of the first sentence.

Replies from: Adele_L
comment by Adele_L · 2013-07-13T21:52:16.630Z · LW(p) · GW(p)

Thanks!

comment by wwa · 2013-07-13T17:55:41.263Z · LW(p) · GW(p)

data point: I didn't parse it as condescending at all.

Replies from: Adele_L
comment by Adele_L · 2013-07-13T21:52:30.938Z · LW(p) · GW(p)

Did you read it before it was rephrased?

Replies from: wwa
comment by wwa · 2013-07-14T09:56:04.182Z · LW(p) · GW(p)

Ah, I see there was a race condition. I'll retract my comment.

comment by gothgirl420666 · 2013-07-13T06:48:14.781Z · LW(p) · GW(p)

This was what I gathered from reading the beginning of the TDT paper: "There's this one decision theory that works in every single circumstance except for this one crazy sci-fi scenario that might not even be physically possible, and then there's this other decision theory that works in said sci-fi scenario but not really anywhere else. We need to find a decision theory that combines these two in order to always work, including in this one particular sci-fi scenario."

I guess it might be useful for AI research, but I don't see why I would need to learn it.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-13T10:03:12.337Z · LW(p) · GW(p)

The sci-fi bit is only there to make it easier to think about. The real world scenarios it corresponds to require the reader to have quite a bit more background material under their belt to reason carefully about.

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-07-13T15:00:14.489Z · LW(p) · GW(p)

What are the real world scenarios it corresponds to? The only one I know of is the hitchhiker one, which is still pretty fantastic. I'm interested in learning about this.

Replies from: saturn, TimS, Manfred, bogdanb
comment by saturn · 2013-07-13T19:44:16.296Z · LW(p) · GW(p)

Any kind of tragedy of the commons type scenario would qualify.

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-07-13T20:44:39.610Z · LW(p) · GW(p)

It's not obvious to me how tragedy of the commons/prisoner's dilemma is isomorphic to Newcomb's problem, but I definitely believe you that it could be. If TDT does in fact present a coherent solution to these types of problems, then I can easily see how it would be useful. I might try to read the pdf again sometime. Thanks.

Replies from: benelliott
comment by benelliott · 2013-07-14T01:52:44.783Z · LW(p) · GW(p)

They aren't isomorphic problems; however, it is the case that CDT two-boxes and defects while TDT one-boxes and cooperates (against some opponents).

comment by TimS · 2013-07-13T18:41:37.991Z · LW(p) · GW(p)

In general, there are situations where act utilitarianism says a choice is permissible, but rule utilitarianism says the choice is not permissible.

The example I learned involved cutting across the grass as a shortcut instead of walking on a path. No one person can damage the grass, but if everyone walks across the grass, it dies, reducing everyone's utility by more than is gained from the shortcut.

For a real world example, I suspect that one's intuition about the acceptability of copyright piracy depends on one's intuitions about committing to pay for content and the amount of content that would exist.

In other words, it seems intuitive that the truly rational would voluntarily co-operate to avoid tragedies of the commons. But voluntary commitment to a course of action is hard to formally justify.

Replies from: aelephant
comment by aelephant · 2013-07-14T01:39:11.006Z · LW(p) · GW(p)

If everyone walks across the grass instead of on the path, that is strong evidence that the path is in the wrong place.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-14T07:29:01.293Z · LW(p) · GW(p)

It does not follow from this that it would be good for everyone to cut across the grass.

comment by Manfred · 2013-07-13T20:27:09.065Z · LW(p) · GW(p)

http://lesswrong.com/lw/4sh/how_i_lost_100_pounds_using_tdt/

Replies from: Rukifellth
comment by Rukifellth · 2013-07-13T20:43:41.818Z · LW(p) · GW(p)

I imagine at least half of those upvotes were generated by the title alone.

comment by bogdanb · 2013-07-13T18:37:17.878Z · LW(p) · GW(p)

It is done for AI research. The “real world scenarios” usually involve several powerful AIs, so depending on what you mean by “sci-fi” they might not apply. (Even if you don’t consider AIs sci-fi, the usual problem statements make lots of simplifying assumptions that are not necessarily realistic, like perfect guessing and things like that, but that’s just like ignoring friction in physics problems, nobody expects for the exact same thing to happen in practice.)

comment by Frood · 2013-07-13T16:45:57.767Z · LW(p) · GW(p)

When I'm in the presence of people who know more than me and I want to learn more, I never know how to ask questions that will inspire useful, specific answers. They just don't occur to me. How do you ask the right questions?

Replies from: TimS, buybuydandavis, ChristianKl, Vaniver, wwa, fubarobfusco, Error, therufs, NancyLebovitz, mwengler
comment by TimS · 2013-07-13T19:11:52.156Z · LW(p) · GW(p)

Lawyer's perspective:

People want to ask me about legal issues all the time. The best way to get a useful answer is to describe your current situation, the cause of your current situation, and what you want to change. Thus:

I have severe injuries, caused by that other person hitting me with their car. I want that person's driver's license taken away.

Then I can say something like: Your desired remedy is not available for REASONS, but instead, you could get REMEDY. Here are the facts and analysis that would affect whether REMEDY is available.

In short, try to define the problem. fubarobfusco has some good advice about how to refine your articulation of a problem. That said, if you have reason to believe a person knows something useful, you probably already know enough to articulate your question.

The point of my formulation is to avoid assumptions that distort the analysis. Suppose someone in the situation I described above said "I was maliciously and negligently injured by that person's driving. I want them in prison." At that point, my response needs to detangle a lot of confusions before I can say anything useful.

Replies from: buybuydandavis, Frood
comment by buybuydandavis · 2013-07-14T10:49:37.042Z · LW(p) · GW(p)

In short, try to define the problem

I see you beat me to it. Yes, define your problem and goals.

The really bad thing about asking questions is that people will answer them. You ask some expert "How do I do X with Y?". He'll tell you. He'll likely wonder what the hell you're up to in doing such a strange thing with Y, but he'll answer. If he knew what your problem and goals were instead, he'd ask the right questions of himself on how to solve the problem, instead of the wrong question that you gave him.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-15T01:52:43.215Z · LW(p) · GW(p)

You ask some expert "How do I do X with Y?". He'll tell you. He'll likely wonder what the hell you're up to in doing such a strange thing with Y, but he'll answer.

Also in the event you get an unusually helpful expert, he might point this out. Consider this your lucky day and feel free to ask follow up questions. Don't be discouraged by the pointing out being phrased along the lines of "What kind of idiot would want to do X with Y?"

comment by Frood · 2013-07-13T20:28:58.644Z · LW(p) · GW(p)

describe your current situation, the cause of your current situation, and what you want to change.

That's helpful. Do you think it works as a general strategy? For example, academic discussions:

I just read article M on X because it seems like a better understanding of X will help with PURSUIT. How would you recommend that I proceed?

Or should the question/what I want to change be more specific?

Replies from: TimS
comment by TimS · 2013-07-15T00:54:18.944Z · LW(p) · GW(p)

My advice is geared towards factual questions, so I'm not sure how helpful it would be for more pure intellectual questions. The most important point I was trying to make was that you should be careful not to pre-bake too much analysis into your question.

Thus, asking "what should I do now to get a high paying job to donate lots of money to charity?" is different from "what should I do now to make the most positive impact on the world?"

Many folks around here will give very similar answers to both of those questions (I probably wouldn't, but that's not important to this conversation). But the first question rules out answers like "go get a CompSci PhD and help invent FAI" or "go to medical school and join Doctors without Borders."

In short, people will answer the question you ask, or the one they think you mean to ask. That's not necessarily the same as giving you the information they have that you would find most helpful.

comment by buybuydandavis · 2013-07-14T10:43:47.100Z · LW(p) · GW(p)

Don't ask questions. Describe your problem and goal, and ask them to tell you what would be helpful. If they know more than you, let them figure out the questions you should ask, and then tell you the answers.

comment by ChristianKl · 2013-07-15T09:20:57.446Z · LW(p) · GW(p)

I don't think an answer has to be specific to be useful. Often just understanding how an expert in a certain area thinks about the world can be useful even if you have no specificity.

When it comes to questions: 1) What was the greatest discovery in your field in the last 5 years? 2) Is there an insight in your field that is obvious to everyone in your field but that most people in society just don't get?

comment by Vaniver · 2013-07-15T04:15:34.405Z · LW(p) · GW(p)

I never know how to ask questions that will inspire useful, specific answers. They just don't occur to me. How do you ask the right questions?

My favorite question comes from The Golden Compass:

If you were me, what question would you ask of the Consul of the Witches?

I haven't employed it against people yet, though, and so a better way to approach the issue in the same spirit is to describe your situation (as suggested by many others).

comment by wwa · 2013-07-13T18:46:14.233Z · LW(p) · GW(p)

I find "How do I proceed to find out more about X" to give best results. Note: it's important to phrase it so that they understand you are asking for an efficient algorithm to find out about X, not for them to tell you about X!

It works even if you're completely green and talking to a prodigy in the field (which I find to be particularly hard). Otherwise you'll get "RTFM"/"JFGI" at best or they will avoid you entirely at worst.

comment by fubarobfusco · 2013-07-13T17:46:35.701Z · LW(p) · GW(p)

One approach: Think of two terms or ideas that are similar but want distinguishing. "How is a foo different from a bar?" For instance, if you're looking to learn about data structures in Python, you might ask, "How is a dictionary different from a list?"

You can learn if your thought that they are similar is accurate, too: "How is a list different from a for loop?" might get some insightful discussion ... if you're lucky.
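For instance, the kind of answer that first question fishes for might, in Python, boil down to something like this minimal sketch (the example values are made up):

```python
# A list is an ordered sequence indexed by position;
# a dict maps keys to values.
colors = ["red", "green", "blue"]       # list: access by position
print(colors[1])                        # -> green

ages = {"alice": 30, "bob": 25}         # dict: access by key
print(ages["bob"])                      # -> 25

# A for loop, by contrast, is not a data structure at all; it is control flow
# that can walk over either of the above.
for name in ages:
    print(name, ages[name])
```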

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-14T07:26:23.583Z · LW(p) · GW(p)

Of course, if you know sufficiently little about the subject matter, you might instead end up asking a question like

"How is a browser different from a hard drive?"

which, instead, discourages the expert from speaking with you (and makes them think that you're an idiot).

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-14T08:11:06.989Z · LW(p) · GW(p)

I think that would get me to talk with them out of sheer curiosity. ("Just what kind of a mental model could this person have in order to ask such a question?")

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-14T16:26:54.316Z · LW(p) · GW(p)

Sadly, reacting in such a way generally amounts to grossly overestimating the questioner's intelligence and informedness. Most people don't have mental models. The contents of their minds are just a jumble; a question like the one I quoted is roughly equivalent to

"I have absolutely no idea what's going on. Here's something that sounds like a question, but understand that I probably won't even remotely comprehend any answer you give me. If you want me to understand anything about this, at all, you'll have to go way back to the beginning and take it real slow."

(Source: years of working in computer retail and tech support.)

Replies from: Kaj_Sotala, ChristianKl
comment by Kaj_Sotala · 2013-07-14T18:48:26.272Z · LW(p) · GW(p)

Even "it's a mysterious black box that might work right if I keep smashing the buttons at random" is a model, just a poor and confused one. Literally not having a model about something would require knowing literally nothing about it, and today everyone knows at least a little about computers, even if that knowledge all came from movies.

This might sound like I'm just being pedantic, but it's also that I find "most people are stupid and have literally no mental models of computers" to be a harmful idea in many ways - it equates a "model" with a clear explicit model while entirely ignoring vague implicit models (that most of human thought probably consists of), it implies that anyone who doesn't have a store of specialized knowledge is stupid, and it ignores the value of experts familiarizing themselves with various folk models (e.g. folk models of security) that people hold about the domain.

Replies from: ChristianKl, SaidAchmiz
comment by ChristianKl · 2013-07-15T09:28:37.459Z · LW(p) · GW(p)

Literally not having a model about something would require knowing literally nothing about it, and today everyone knows at least a little about computers, even if that knowledge all came from movies.

Even someone who has no knowledge about computers will use a mental model if he has to interact with a computer. It's likely that he will borrow a mental model from another field. He might try to treat the computer like a pet.

If people don't have any mental model in which to fit information they will ignore the information.

Replies from: Error, NancyLebovitz, Kaj_Sotala
comment by Error · 2013-07-15T12:50:10.388Z · LW(p) · GW(p)

It's likely that he will borrow a mental model from another field. He might try to treat the computer like a pet.

I think...this might actually be a possible mechanism behind really dumb computer users. I'll have to keep it in mind when dealing with them in future.

Comparing to Achmiz above:

Most people don't have mental models.

Both of these feel intuitively right to me, and lead me to suspect the following: A sufficiently bad model is indistinguishable from no model at all. It reminds me of the post on chaotic inversions.

Replies from: ChristianKl, SaidAchmiz
comment by ChristianKl · 2013-07-15T21:19:31.230Z · LW(p) · GW(p)

Both of these feel intuitively right to me, and lead me to suspect the following: A sufficiently bad model is indistinguishable from no model at all.

Mental models are the basis of human thinking. Take the original cargo cultists. They had a really bad model of why cargo was dropped on their island. They nonetheless used that model, and it led them to do really dumb things.

A while ago I was reading a book about mental models. It investigates how people deal with the question: "You throw a steel ball against the floor and it bounces back. Where does the energy that moves the ball into the air come from?"

The "correct answer" is that the ball contracts when it hits the floor and then expands and that energy then brings the ball back into the air. In the book they called it the phenomenological primitives of springiness.

A lot of students had the idea that somehow the ball transfers energy into the ground and then the ground pushes the ball back. The idea that a steel ball contracts is really hard for them to accept because in their mental model of the world steel balls don't contract.

If you simply tell such a person the correct solution they won't remember it. Teaching a new phenomenological primitive is really hard and takes a lot of repetition.

As a programmer, the phenomenological primitive of recursion is obvious to me. I had the experience of trying to teach it to a struggling student and got to discover how hard it is to teach from scratch. People always want to fit new information into their old models of the world.
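(For readers who haven't met the primitive, a minimal Python illustration; the factorial example is just the standard textbook one, not anything specific to that student:)

```python
def factorial(n):
    # Base case: the self-reference has to stop somewhere.
    if n <= 1:
        return 1
    # Recursive case: the function calls itself on a smaller problem.
    return n * factorial(n - 1)

print(factorial(5))  # -> 120
```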

People black out information that doesn't fit into their models of the world. This can lead to some interesting social engineering results.

A lot of magic tricks are based on faulty mental models by the audience.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-16T09:02:08.843Z · LW(p) · GW(p)

Which book was that? Would you recommend it in general?

comment by Said Achmiz (SaidAchmiz) · 2013-07-15T14:31:40.022Z · LW(p) · GW(p)

This reminds me of the debate in philosophy of mind between the "simulation theory" and the "theory theory" of folk psychology. The former (which I believe is more accepted currently — professional philosophers of mind correct me if I'm wrong) holds that people do not have mental models of other people, not even unconscious ones, and that we make folk-psychological predictions by "simulating" other people "in hardware", as it were.

It seems possible that people model animals similarly, by simulation. The computer-as-pet hypothesis suggests the same for computers. If this is the case, then it could be true that (some) humans literally have no mental models, conscious or unconscious, of computers.

If this were true, then what Kaj_Sotala said —

Literally not having a model about something would require knowing literally nothing about it

would be false.

Of course we could still think of a person as having an implicit mental model of a computer, even if they model it by simulation... but that is stretching the meaning, I think, and this is not the kind of model I referred to when I said most people have no mental models.

Replies from: ChristianKl, Kaj_Sotala
comment by ChristianKl · 2013-07-16T14:57:52.939Z · LW(p) · GW(p)

Simulations are models. They allow us to make predictions about how something behaves.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-16T15:36:55.675Z · LW(p) · GW(p)

The "simulation" in this case is a black box. When you use your own mental hardware to simulate another person (assuming the simulation theory is correct), you do so unconsciously. You have no idea how the simulation works; you only have access to its output. You have no ability to consciously fiddle with the simulation's settings or its structure.

A black box that takes input and produces predictive output while being totally impenetrable is not a "model" in any useful sense of the word.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-16T19:15:50.479Z · LW(p) · GW(p)

The concept of mental models is very popular in usability design.

It's quite useful to distinguish a website's features from the features of the model of the website that the user has in his head.

If you want to predict what the user does, then it makes sense to speak of his model of the world, whether or not you can change that model. You have to work with the model that's there. Whether or not the user is conscious of the features of his model doesn't matter much.

comment by Kaj_Sotala · 2013-07-16T09:05:22.037Z · LW(p) · GW(p)

and that we make folk-psychological predictions by "simulating" other people "in hardware", as it were.

How does this theory treat the observation that we are better with dealing with the kinds of people that we have experience of? (E.g. I get better along with people of certain personality types because I've learned how they think.) Doesn't that unavoidably imply the existence of some kinds of models?

comment by NancyLebovitz · 2013-07-16T14:41:29.512Z · LW(p) · GW(p)

If people don't have any mental model in which to fit information they will ignore the information.

I'm pretty sure this is correct.

comment by Kaj_Sotala · 2013-07-16T09:00:55.469Z · LW(p) · GW(p)

Thanks, that's a good point.

comment by Said Achmiz (SaidAchmiz) · 2013-07-14T19:30:25.519Z · LW(p) · GW(p)

Fair enough. Pedantry accepted. :) I especially agree with the importance of recognizing vague implicit "folk models".

However:

it implies that anyone who doesn't have a store of specialized knowledge is stupid

Most such people are. (Actually, most people are, period.)

Believe you me, most people who ask questions like the one I quote are stupid.

comment by ChristianKl · 2013-07-15T09:17:19.739Z · LW(p) · GW(p)

Sadly, reacting in such a way generally amounts to grossly overestimating the questioner's intelligence and informedness. Most people don't have mental models. The contents of their minds are just a jumble; a question like the one I quoted is roughly equivalent to

Most people do have mental models in the sense the word gets defined in the decision theory literature.

comment by Error · 2013-07-15T15:03:46.578Z · LW(p) · GW(p)

For the narrow subset of technical questions, How to Ask Questions the Smart Way is useful.

But if you don't have a problem to begin with -- if your aim is "learn more in field X," it gets more complicated. Given that you don't know what questions are worth asking, the best question might be "where would I go to learn more about X" or "what learning material would you recommend on the subject of X?" Then in the process of following and learning from their pointer, generate questions to ask at a later date.

There may be an inherent contradiction between wanting nonspecific knowledge and getting useful, specific answers.

comment by therufs · 2013-07-14T04:00:45.172Z · LW(p) · GW(p)

Start by asking the wrong ones. For me, it took a while to notice when I had even a stupid question to ask (possibly some combination of mild social anxiety and generally wanting to come across as smart & well-informed had stifled this impulse), so this might take a little bit of practice.

Sometimes your interlocutor will answer your suboptimal questions, and that will give you time to think of what you really want to know, and possibly a few extra hints for figuring it out. But at least as often your interlocutor will take your interest as a cue that they can just go ahead and tell you nonrelated things about the subject at hand.

comment by NancyLebovitz · 2013-07-13T17:42:01.396Z · LW(p) · GW(p)

What do you want to learn more about? If there isn't an obvious answer, give yourself some time to see if an answer surfaces.

The good news is that this is the thread for vague questions which might not pan out.

comment by mwengler · 2013-07-14T14:49:27.448Z · LW(p) · GW(p)

Ask the smartest questions you can think of at the time and keep updating, but don't waste time on that. After you have done a bit of this, ask them what you are missing, what questions you should be asking them.

comment by pan · 2013-07-13T18:25:34.013Z · LW(p) · GW(p)

To what degree does everyone here literally calculate numerical outcomes and make decisions based on those outcomes for everyday decisions using Bayesian probability? Sometimes I can't tell if, when people say they are 'updating priors', they are literally doing a calculation and literally have a new number stored somewhere in their head that they keep track of constantly.

If anyone does this, could you elaborate more on how you do it? Do you have a book/spreadsheet full of different beliefs with different probabilities? Can you just keep track of it all in your mind? Or is calculating probabilities like this only something people do for bigger life problems?

Can you give me a tip for how to start? Is there a set of core beliefs everyone should come up with priors for to start? I was going to apologize if this was a stupid question, but I suppose it should by definition be one if it is in this thread.

Replies from: Manfred, Qiaochu_Yuan, mwengler, Sarokrae, Alexei, timtyler, Nornagest
comment by Manfred · 2013-07-13T19:01:56.482Z · LW(p) · GW(p)

Nope, not for everyday decisions. For me "remember to update" is more of a mantra to remember to change your mind at all - especially based on several pieces of weak evidence, which normal procedure would be to individually disregard and thus never change your mind.

comment by Qiaochu_Yuan · 2013-07-14T00:18:57.799Z · LW(p) · GW(p)

I never do this. See this essay by gwern for an example of someone doing this.

comment by mwengler · 2013-07-14T14:45:54.837Z · LW(p) · GW(p)

I suspect very little, but this does remind me of Warren Buffett speaking on Discounted Cash Flow calculations.

For quick background, an investment is a purchase of a future cash flow. Cash in the future is worth less to you than cash right now, and it is worth less and less as you go further into the future. Most treatments pretend that the proper way to discount the value of cash in the future is to have a discount rate (like 5% or 10% per year) and apply it as an exponential function to future cash.

Warren Buffett, a plausible candidate for the most effective investor ever (or at least so far), speaks highly of DCF (discounted cash flow) as the way to choose between investments. However, he also says he never actually does one other than roughly in his head. Given his excellent abilities at calculating in his head, I think it would translate to something like he never does a DCF calculation that would take up more than about 20 lines in an excel spreadsheet.
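For concreteness, here is roughly what such a twenty-line spreadsheet boils down to, as a minimal Python sketch; the function name, the cash flows, and the 10% discount rate are made up for illustration:

```python
def discounted_cash_flow(cash_flows, rate):
    """Present value of a series of future cash flows.

    cash_flows[t] is the cash received t+1 years from now;
    rate is the annual discount rate (e.g. 0.10 for 10%).
    """
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

# Hypothetical example: $100 per year for 5 years, discounted at 10%/year.
print(round(discounted_cash_flow([100] * 5, 0.10), 2))  # -> 379.08
```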

There is a broad range of policies that I have that are based on math: not gambling in Las Vegas because its expectation value is negative (although mostly I trust the casinos to have set the odds so payouts are negative; I don't check their math). Not driving too far for small discounts (the expense of getting the discount should not exceed the value of the discount). Not ignoring a few-thousand-dollar difference in a multi-hundred-thousand-dollar transaction because "it is a fraction of a percent."

I do often, in considering hiring a personal service, compare paying for it to how long it would take me to do the job vs. how long I would need to work at my current job to pay the other person. I am pretty well paid, so this generally leads to me hiring a lot of things done. A similar calculation leads me to systematically ignore costs below about $100 for a lot of things, which still "feels" wrong, but which I have not yet been able to show is wrong with a calculation.

I am actually discouraging my wife and children from pushing my children towards elite colleges and universities on the basis that they are over-priced for what they deliver. I am very unconfident in this one, as rich people that I respect continue to just bleed money into their children's educations. So I am afraid to go independent of them, even as I can't figure out a calculation that shows what they are doing makes economic sense.

I do look at, or calculate, the price per ounce in making buying decisions; I guess that is an example of a common Bayesian calculation.

Replies from: Izeinwinter, D_Malik, Lumifer, westward
comment by Izeinwinter · 2013-07-14T16:01:24.027Z · LW(p) · GW(p)

It depends on what your kids want to do. Elite colleges are not selling education, except to the extent that they have to maintain standards to keep their position. They are selling networking cachet. Which is of very high value to people who want to be one of the masters of the universe, and take their chances with the inbound guillotine. If your kids want to be doctors, engineers, or archaeologists... no, not worth the price tag. In fact, the true optimum move is likely to ship them to Sweden with a note telling them to find a nice girl, naturalize via marriage, and take the free ride through Stockholm University. ;)

Replies from: mwengler
comment by mwengler · 2013-07-15T16:10:48.023Z · LW(p) · GW(p)

Elite colleges are not selling education, except to the extent that they have to maintain standards to keep their position. They are selling networking cachet.

I attended Swarthmore College and got a totally bitching education. I would recommend Swarthmore to anybody with better than about 740s on her SATs. For a regular smart kid, Swarthmore educationally is probably a lot of work, and may be valuable, but it is not the value proposition I got or that I understand. For me, the incredible quality of the student body and the highly motivated and intelligent professors produced an education I do not think you could get, no matter how good the profs are, if the students were more regular.

My grad-school mate at Caltech, another place with incredible educational results that are not available at lesser institutions, attended Harvard undergrad. His education appeared to be similarly outstanding to the one I got at Swarthmore.

So elite universities may be selling networking cachet, and that may be what some of their customers intend to be buying. But for really smart kids, an elite school is a serious educational opportunity that non-elite schools cannot match.

I certainly do agree that getting a free ride through a good university is a great deal!

comment by D_Malik · 2013-07-15T22:35:29.475Z · LW(p) · GW(p)

I am actually discouraging my wife and children from pushing my children towards elite colleges and universities on the basis that they are over-priced for what they deliver. I am very unconfident in this one as rich people that I respect continue to just bleed money into their children's educations. SO I am afraid to go independent of them even as I can't figure out a calculation that shows what they are doing makes economic sense.

In terms of return on investment, elite colleges seem to be worthwhile. Read this; more coverage on OB and the places linked from there. It's a bit controversial but my impression was that you'd probably be better off going someplace prestigious. Credit to John_Maxwell_IV for giving me those links, which was most of the reason I'm going to a prestigious college instead of an average one. I'm extremely doubtful about the educational value of education for smart people, but the prestige still seems to make it worth it.

These obviously become a really good deal if you can get financial aid, and many prestigious places do need-blind admissions, i.e. they will give you free money if you can convince them you're smart. Also look at the ROIs of places you're considering. Value of information is enormous here.

comment by Lumifer · 2013-07-15T16:29:44.133Z · LW(p) · GW(p)

I am actually discouraging my wife and children from pushing my children towards elite colleges and universities on the basis that they are over-priced for what they deliver.

It's quite hard to do a full estimate of the benefits you get from going to an elite college. There are a lot of intangibles and a lot of uncertainty -- consider e.g. networking potential or the acquisition of good work habits (smart students at mediocre places rapidly become lazy).

Even if you restrict yourself to the analysis of properly discounted future earning potential (and that's a very limited approach), the uncertainties are huge and your error bars will be very very wide.

I generally go by the "get into the best school you can and figure out money later" guideline :-)

comment by westward · 2013-07-14T22:09:12.681Z · LW(p) · GW(p)

I agree with much of what you're saying. I make similar back of the envelope calculations.

One small point of clarity is that "money is worth less in the future" is not a general rule but a function of inflation which is affected strongly by national monetary policy. While it likely won't change in the USA in the near future, it COULD, so I think it's important to recognize that and be able to change behavior if necessary.

Lots of people attend an elite college because of signalling, not because it's an investment. Keep questioning the value of such an education!

Replies from: mwengler, Lumifer
comment by mwengler · 2013-07-15T16:04:53.851Z · LW(p) · GW(p)

One small point of clarity is that "money is worth less in the future" is not a general rule but a function of inflation which is affected strongly by national monetary policy. While it likely won't change in the USA in the near future, it COULD, so I think it's important to recognize that and be able to change behavior if necessary.

I'm sorry I didn't explain that well enough. What I meant is that money you are going to get in the future is not worth as much as money you are going to get now. Even if we work with inflationless dollars, this is true. It happens because the sooner you have a dollar, the more options you have as to what to do with it. So if I know I am going to get a 2013 dollar in 2023, that is worth something to me, because there are things I will want to do in the future. But would I pay a dollar now to get a 2013 dollar in 2023? Definitely not, I would just keep my dollar. Would I pay 80 cents? 50 cents? I would certainly pay 25 cents, and might pay 50 cents.

If I paid 50 cents, I would be estimating that the things I might do with 50 cents between 2013 and 2023 are about equal in value to me, right now, to the value I currently place on the things I might do with $1 in 2023 or later. The implicit discount over those 10 years is then 50%, since I am willing to pay 50 cents now for a 2013 $1 in 2023. The discount rate, assuming exponential change in time as all interest rate calculations do, is about 7% per year. Note this is a discount in real terms, as it is a 2013 $1 of value I will receive in 2023. In principle, if inflation had accumulated 400% by 2023 (prices multiplied by five), I would actually be receiving $5 in 2023 dollars, for roughly a 26%/year nominal return on my initial investment, even though I have only a 7%/year real return against roughly 17.5%/year inflation.
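The implied rates in that example can be checked in a few lines (a minimal sketch of the arithmetic above, using the hypothetical 50 cents, $1, $5, and ten years from the example):

```python
years = 10
real = (1.00 / 0.50) ** (1 / years) - 1        # pay $0.50 now for $1 (2013 dollars) in 2023
nominal = (5.00 / 0.50) ** (1 / years) - 1     # the same $1 is $5 in inflated 2023 dollars
inflation = 5.00 ** (1 / years) - 1            # prices multiplied by five over the decade

print(f"real {real:.1%}, nominal {nominal:.1%}, inflation {inflation:.1%}")
# -> real 7.2%, nominal 25.9%, inflation 17.5%
```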

comment by Lumifer · 2013-07-15T16:24:04.560Z · LW(p) · GW(p)

"money is worth less in the future" is not a general rule but a function of inflation

That is only partially true. The time value of money is a function not only of inflation, but of other things as well, notably the value of time (e.g. human lives are finite) and opportunity costs.

In fact, one of the approaches to figuring out the proper discounting rate for future cash flows is to estimate your opportunity costs and use that.

comment by Sarokrae · 2013-07-14T09:03:08.077Z · LW(p) · GW(p)

I'd be alarmed if anyone claimed to accurately numerically update their priors. Non-parametric Bayesian statistics is HARD and not the kind of thing I can do in my head.

comment by Alexei · 2013-07-18T15:24:21.265Z · LW(p) · GW(p)

I had the same worry/question when I first found LW. After meeting with all the "important" people (Anna, Luke, Eliezer...) in person, I can confidently say: no, nobody is carrying around a sheet of paper and doing actual Bayesian updating. However, most people in these circles notice when they are surprised/confused, act on that feeling, and if they were wrong, then they update their beliefs, followed soon by their actions. This could happen from one big surprise or many small ones. So there is a very intuitive sort of Bayesian updating going on.

comment by timtyler · 2013-07-14T11:33:06.190Z · LW(p) · GW(p)

Your brain does most of this at lightning speed, unconsciously. Often, trying to make the process into a deliberative, conscious one slows it down so much that the results are of little practical use.

comment by Nornagest · 2013-07-13T21:57:28.792Z · LW(p) · GW(p)

I only literally do an expected outcome calculation when I care more about having numbers than I do about their validity, or when I have unusually good data and need rigor. Most of the time the uncertainties in your problem formulation will dominate any advantage you might get from doing actual Bayesian updates.

The advantage of the Bayesian mindset is that it gives you a rough idea of how evidence should affect your subjective probability estimate for a scenario, and how pieces of evidence of different strengths interact with each other. You do need to work through a reasonable number of examples to get a feel for how that works, but once you have that intuition you rarely need to do the math.
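If you do want to see the arithmetic once, a minimal sketch of an odds-form update (with made-up numbers, and function names invented for illustration) shows how several individually weak pieces of evidence combine into a substantial shift:

```python
def update_odds(prior_odds, likelihood_ratios):
    """Odds-form Bayes: multiply prior odds by each piece of evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

# Hypothetical numbers: a 1:4 prior (20%) and three independent, individually
# weak pieces of evidence, each only twice as likely under the hypothesis.
posterior_odds = update_odds(0.25, [2, 2, 2])
print(round(odds_to_prob(posterior_odds), 2))  # -> 0.67
```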

comment by CoffeeStain · 2013-07-13T23:17:35.745Z · LW(p) · GW(p)

How do I get people to like me? It seems to me that this is a worthwhile goal; being likable increases the fun that both I and others have.

My issue is that likability usually means, "not being horribly self-centered." But I usually find I want people to like me more for self-centered reasons. It feels like a conundrum that just shouldn't be there if I weren't bitter about my isolation in the first place. But that's the issue.

Replies from: gothgirl420666, mwengler, CronoDAS, Sarokrae, drethelin
comment by gothgirl420666 · 2013-07-14T03:54:29.272Z · LW(p) · GW(p)

This was a big realization for me personally:

If you are trying to get someone to like you, you should strive to maintain a friendly, positive interaction with that person in which he or she feels comfortable and happy on a moment-by-moment basis. You should not try to directly alter that person's opinion of you, in the sense that if you are operating on a principle of "I will show this person that I am smart, and he will like me", "I will show this person I am cool, and she will like me," or even "I will show this person that I am nice, and he will like me", you are pursuing a strategy that can be ineffective and possibly lead people to see you as self-centered. This might be what people say when they mean "be yourself" or "don't worry about what other people think of you".

Also, Succeed Socially is a good resource.

Replies from: army1987, someonewrongonthenet, CoffeeStain, Creutzer
comment by A1987dM (army1987) · 2013-07-14T23:04:42.340Z · LW(p) · GW(p)

Also, getting certain people to like you is way, way, way, way harder than getting certain other people to like you. And in many situations you get to choose whom to interact with.

Do what your comparative advantage is.

comment by someonewrongonthenet · 2013-08-18T13:59:27.428Z · LW(p) · GW(p)

Another tool to achieve likeability is to consistently project positive emotions and create the perception that you are happy and enjoying the interaction. The quickest way to make someone like you is to create the perception that you like them because they make you happy - this is of course much easier if you genuinely do enjoy social interactions.

he or she feels comfortable and happy on a moment-by-moment basis

It is very good advice to care about other people.

I'd like to add that I think it is common for the insecure to do this strategy in the wrong way. "Showing off" is a failure mode, but "people pleaser" can be a failure mode as well - it's important that making others happy doesn't come off as a transaction in exchange for acceptance.

"Look how awesome I am and accept me" vs "Please accept me, I'll make you happy" vs "I accept you, you make me happy".

comment by CoffeeStain · 2013-07-14T04:56:16.815Z · LW(p) · GW(p)

Thank you, so very much.

I often forget that there are different ways to optimize, and the method that feels like it offers the most control is often the worst. And the one I usually take, unfortunately.

comment by Creutzer · 2013-07-18T05:22:42.412Z · LW(p) · GW(p)

This sounds immensely plausible. But it immediately prompts the more specific question: how on earth do you make people feel comfortable and happy on a moment-by-moment basis around you?

Especially if you're an introvert who lives in his own head rather a lot. Maybe the right question (for some) is: how do you get people to like you if, in a way, you are self-centered? It pretty much seems to mean that you're screwed.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-20T11:49:33.046Z · LW(p) · GW(p)

This looks to me like a bunch of reasonable questions.

Replies from: Creutzer
comment by Creutzer · 2013-07-20T12:13:36.252Z · LW(p) · GW(p)

I had written the comment before reading on and then retracted it because the how-question is discussed below.

comment by mwengler · 2013-07-14T14:24:58.105Z · LW(p) · GW(p)

In actuality, a lot of people can like you a lot even if you are not selfless. It is not so much that you need to ignore what makes you happy as that you need to pay attention and energy to what makes other people happy. A trivial if sordid example: you don't get someone wanting to have sex with you by telling them how attractive you are; you will do better by telling them, and making it obvious, that you find them attractive. The fact that you will take pleasure in their increased attention to you is not held against you just because it means you are not selfless; not at all. Your need or desire for them is the attractor to them.

So don't abnegate, ignore, or deny your own needs. But run an internal model where other people's needs are primary, to suggest actions you can take that will serve them and glue them to you.

Horribly self-centered isn't a statement that you elevate your own needs too high. It is that you are too ignorant of and unreactive to other people's needs.

comment by CronoDAS · 2013-07-14T21:24:54.353Z · LW(p) · GW(p)

The standard reference for this is "How to Win Friends and Influence People" by Dale Carnegie. I have not read it myself.

Replies from: Vaniver
comment by Vaniver · 2013-07-15T04:06:39.369Z · LW(p) · GW(p)

The standard reference for this is "How to Win Friends and Influence People" by Dale Carnegie. I have not read it myself.

Much of it boils down to gothgirl420666's advice, except with more technical help on how. (I think the book is well worth reading, but it basically outlines "these are places where you can expend effort to make other people happier.")

Replies from: ChristianKl
comment by ChristianKl · 2013-07-15T08:59:53.570Z · LW(p) · GW(p)

One of the tips from Carnegie that gothgirl420666 doesn't mention is using people's names.

Learn them and use them a lot in conversation. Greet people with their name.

Say things like: "I agree with you, John." or "There I disagree with you, John."

Replies from: Vaniver, fubarobfusco
comment by Vaniver · 2013-07-15T18:06:28.583Z · LW(p) · GW(p)

This is a piece of advice that most people disagree with, and so I am reluctant to endorse it. Knowing people's names is important, and it's useful to use them when appropriate, but inserting them into conversations where they do not belong is a known influence technique that will make other people cautious.

(While we're on the subject of recommendations I disagree with, Carnegie recommends recording people's birthdays, and sending them a note or a call. This used to be a lot more impressive before systems to automatically do that existed, and in an age of Facebook I don't think it's worth putting effort into. Those are the only two from the book that I remember thinking were unwise.)

Replies from: RomeoStevens, ChristianKl
comment by RomeoStevens · 2013-07-15T21:36:46.264Z · LW(p) · GW(p)

Be judicious, and name drop with one level of indirection. "That's sort of what like John was saying earlier I believe yada yada."

comment by ChristianKl · 2013-07-16T08:48:51.944Z · LW(p) · GW(p)

Knowing people's names is important, and it's useful to use them when appropriate, but inserting them into conversations where they do not belong is a known influence technique that will make other people cautious.

It probably depends on the context. If you are in a context like a sales conversation, people might get cautious. In other contexts, you might like a person trying to be nice to you.

But you are right that there is the issue of artificiality. It can feel strange if things don't flow naturally. I think that's more a matter of how you do it rather than how much or when.

At the beginning, just starting to greet people with their name can be a step forward. I think in most cultures that's an appropriate thing to do, even if not everyone does it.

I would also add that I'm from Germany, so my cultural background is a bit different than the American one.

comment by fubarobfusco · 2013-07-15T17:27:58.913Z · LW(p) · GW(p)

This is how to sound like a smarmy salesperson who's read Dale Carnegie.

comment by Sarokrae · 2013-07-14T09:01:42.048Z · LW(p) · GW(p)

I second what gothgirl said; but in case you were looking for more concrete advice:

  1. Exchange compliments. Accept compliments graciously but modestly (e.g. "Thanks, that's kind of you").
  2. Increase your sense of humour (watching comedy, reading jokes) until it's at population average levels, if it's not there.
  3. Practise considering other people's point of view.
  4. Do those three things consciously for long enough that you start doing them automatically.

At least, that's what worked for me when I was younger. Especially 1 actually, I think it helped with 3.

comment by drethelin · 2013-07-14T03:05:05.639Z · LW(p) · GW(p)

You can be self-centered and not act that way. If you even pretend to care about most people's lives they will care more about yours.

If you want to do this without being crazy bored and feeling terrible, I recommend figuring out conversation topics of other people's lives that you actually enjoy listening people talk about, and also working on being friends with people who do interesting things. In a college town, asking someone their major is quite often going to be enjoyable for them and if you're interested and have some knowledge of a wide variety of fields you can easily find out interesting things.

comment by drethelin · 2013-07-14T17:58:24.622Z · LW(p) · GW(p)

Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? doing something like just telling everyone you meet "hey you're cute want to make out?" seems like it would go badly.

Replies from: wedrifid, CronoDAS, MrMind, ChristianKl
comment by wedrifid · 2013-07-14T22:29:34.013Z · LW(p) · GW(p)

Is there any non-creepy way to indicate to people that you're available and interested in physical intimacy? doing something like just telling everyone you meet "hey you're cute want to make out?" seems like it would go badly.

Slightly increase eye contact. Orient towards. Mirror posture. Use touch during interaction (in whatever ways are locally considered non-creepy).

comment by CronoDAS · 2013-07-14T22:24:52.360Z · LW(p) · GW(p)

Tell a few friends, and let them do the asking for you?

Replies from: drethelin
comment by drethelin · 2013-07-14T23:21:01.285Z · LW(p) · GW(p)

The volume of people to whom I tend to be attracted would make this pretty infeasible.

Replies from: CronoDAS
comment by CronoDAS · 2013-07-15T00:31:02.020Z · LW(p) · GW(p)

Well, outside of contexts where people are expected to be hitting on each other (dance clubs, parties, speed dating events, OKCupid, etc.) it's hard to advertise yourself to strangers without it being socially inappropriate. On the other hand, within an already defined social circle that's been operating a while, people do tend to find out who is single and who isn't.

I guess you could try a T-shirt?

Replies from: drethelin
comment by drethelin · 2013-07-15T01:05:28.155Z · LW(p) · GW(p)

It's not a question of being single, I'm actually in a relationship. However, the relationship is open and I would love it if I could interact physically with more people, just as a casual thing that happens. When I said telling everyone I met "you're cute want to make out", "everyone" was a lot closer to accurate than it would be when the average person says it in that context.

Replies from: CronoDAS
comment by CronoDAS · 2013-07-15T01:34:30.139Z · LW(p) · GW(p)

Ah. So you need a more complicated T-shirt!

Incidentally, if you're interested in making out with men who are attracted to your gender, "you're cute want to make out" may indeed be reasonably effective. Although, given that you're asking this question on this forum, I think I can assume you're a heterosexual male, in which case that advice isn't very helpful.

comment by MrMind · 2013-07-15T09:41:47.230Z · LW(p) · GW(p)

The non-creepy socially accepted way is through body language. Strong eye contact, personal space invasion, prolonged pauses between sentences, purposeful touching of slightly risky areas (for women: the lower back, forearms, etc.), all done with a clearly visible smirk.
In some contexts, however, the explicitly verbal might be effective, especially if toned down ("Hey, you're interesting, I want to know you better") or up ("Hey, you're really sexy, do you want to go to bed with me?"), but it is highly dependent on the woman.
I'm not entirely sure what the parameter is here, but I suspect plausible deniability is involved.

comment by ChristianKl · 2013-07-15T06:51:17.400Z · LW(p) · GW(p)

I don't think that trying to skip the whole mating dance between men and women is a good strategy. Most women don't make calculated mental decisions about making out with men but instead follow their emotions. Those emotions need the human mating dance.

If you actually want to make out, flirtation is usually the way to go.

One way that's pretty safe is to purposefully misunderstand what the other person is saying and frame it as them hitting on you. Yesterday, I chatted with a woman via Facebook and she wanted to end the chat by saying that she now has to take a shower.

I replied with: "you want me to picture yourself under the shower..."

A sentence like that doesn't automatically tell the woman that I'm interested in her but should encourage her to update in that direction.

Replies from: David_Gerard, army1987
comment by David_Gerard · 2013-07-15T07:26:03.192Z · LW(p) · GW(p)

Boy did that set off my creep detector.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-15T08:24:49.381Z · LW(p) · GW(p)

Of course it always depends on your preexisting relationship and other factors. You always have to calibrate to the situation at hand.

A lot of people automatically form images in their minds to process a thought when you tell them something. I know the girl in question from an NLP/hypnosis context, so she should be aware on some level that language works that way.

In general, girls are also more likely to be aware that language has many layers of meaning besides communicating facts.

Replies from: MileyCyrus, David_Gerard, army1987
comment by MileyCyrus · 2013-07-15T11:00:25.137Z · LW(p) · GW(p)

In general, girls are also more likely to be aware that language has many layers of meaning besides communicating facts.

Please say "women" unless you are talking about female humans that have not reached adulthood.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-15T12:20:37.848Z · LW(p) · GW(p)

Please say "women" unless you are talking about female humans that have not reached adulthood.

That's only one meaning of the word. If you look at Webster's, I think the meaning to which I'm referring here is: "c : a young unmarried woman".

That's the reference class that I talk about when I speak about flirtation. I don't interact with a 60 year old woman the same way as I do with a young unmarried woman.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-16T11:26:36.790Z · LW(p) · GW(p)

Do women forget whether language has many layers of meaning besides communicating facts once they get married or grow old?

Unmarried women are more likely than whom to be aware of that? Than everyone else? Than unmarried men? Than married women? Than David_Gerard?

comment by David_Gerard · 2013-07-15T17:56:58.056Z · LW(p) · GW(p)

Yeah, sorry, I should have garnished that more. "Without knowing more context ..."

Replies from: ChristianKl
comment by ChristianKl · 2013-07-16T12:35:58.523Z · LW(p) · GW(p)

I think that's a good lesson for all kinds of flirting: there is no one-size-fits-all way to signal it; you always have to react to the specific context at hand.

comment by A1987dM (army1987) · 2013-07-16T12:00:15.191Z · LW(p) · GW(p)

A lot of people automatically form images in their mind if you tell them something to process the thought.

...oh.

recalls times he has told single female friends he was going to take a shower, or vice versa; lots of times

considers searching Facebook chat log for words for ‘shower’

Fuck that. A photo of me half naked was my profile picture for a long time, and there are videos of me performing strip teases on there, so what people picture when I tell them I'm going to wash myself shouldn't be among my top concerns.

(Anyway, how do I recognize that kind of people? Feynman figured that out about (IIRC) Bethe because the latter could count in his head while talking but not while reading, but that kind of situations don't come up that often.)

Replies from: ChristianKl
comment by ChristianKl · 2013-07-16T13:42:37.731Z · LW(p) · GW(p)

...oh.

recalls times he has told single female friends he was going to take a shower of vice versa; lots of times

Communication has many levels. If I tell you not to think of a pink elephant, on one hand I do tell you that you should try not to think of a pink elephant. On the other hand I give you a suggestion to think of a pink elephant and most people follow that suggestion and do think of a pink elephant.

Different people do that to different extents. There are people who easily form very detailed pictures in their mind and other people who don't follow such suggestions as easily.

One of the things you learn in hypnosis is to put pictures into people's heads through principles like that, where the suggestion doesn't get critically analysed.

There are etiquette rules that suggest that it's impolite in certain situation to say "I'm going to the toilet", because of those reasons.

Communication that is text-based usually doesn't give suggestions that are as strong as in-person suggestions. After all, the person already perceives the text visually.

As a rule of thumb, people look upwards when they process internal images, but that doesn't always happen, and not every time someone looks upwards is he processing an internal image. That's what gets taught in NLP courses. There are some scientific studies that suggest that isn't the case. Those studies have some problems because they don't have good controls for whether a person is really thinking in pictures. In any case, I don't think recognising such things is something you can easily learn by reading a discussion like this or a book. It rather takes a lot of in-person training.

But I don't think you get very far in seducing women by trying to use such tricks to get women to form naked images of you. There are PUA people who try that under the label "speed seduction", generally with very little result.

Trying to use language that way usually gets people inside their heads. Emotions are more important than images.

You might want to read http://en.wikipedia.org/wiki/Four-sides_model .

If a woman says something to you in a casual context, you can think about whether there's a plausible story in which she is saying what she says to signal attraction to you.

Replies from: army1987, army1987, army1987
comment by A1987dM (army1987) · 2013-07-20T22:57:47.041Z · LW(p) · GW(p)

Now that I know how it feels to listen to someone talking about a Feynman diagram while driving on a motorway, I get your points. :-)

comment by A1987dM (army1987) · 2013-07-20T22:56:19.307Z · LW(p) · GW(p)

There are etiquette rules that suggest that it's impolite in certain situations to say "I'm going to the toilet", because of those reasons.

I don't think that's the reason, because if it was it would apply regardless of which words you use, whatever their literal meaning, so long as it's reasonably unambiguous in the context (why would “the ladies' room” or “talk to a man about a horse” be any less problematic, when the listener knows what you mean?), and it wouldn't depend on which side of the pond you're on (ISTM that “toilet” is less often replaced by euphemisms in BrE than in AmE).

Replies from: ChristianKl
comment by ChristianKl · 2013-07-21T09:31:53.980Z · LW(p) · GW(p)

I don't think that's the reason, because if it was it would apply regardless of which words you use, whatever their literal meaning, so long as it's reasonably unambiguous in the context (why would “the ladies' room” or “talk to a man about a horse” be any less problematic, when the listener knows what you mean?)

When a woman goes to the ladies' room she might also go to fix up her makeup or hairstyle. Secondly, words matter. Words trigger thoughts. If you speak in deep metaphors you will produce fewer images than if you describe something in detail.

(ISTM that “toilet” is less often replaced by euphemisms in BrE than in AmE).

Americans are more uptight about intimacy, so that fits nicely. There's a stronger ban on curse words on US television than in Great Britain. I would also expect more people in Bible Belt states to use such euphemisms than in California.

Replies from: bogus, army1987
comment by bogus · 2013-07-21T12:23:18.768Z · LW(p) · GW(p)

Fun fact: Brits and Americans actually use the word 'toilet' in very different ways. An American goes to the restroom and sits on the toilet; a Brit goes to the toilet and sits on the loo. When a Brit hears the word 'toilet', he's thinking about the room, not the implement.

comment by A1987dM (army1987) · 2013-07-21T10:01:42.127Z · LW(p) · GW(p)

When a woman goes to the ladies' room she might also go to fix up her makeup or hairstyle.

She can do the same things in the toilet too, can't she?

If you speak in deep metaphors you will produce fewer images than if you describe something in detail.

But once a metaphor becomes common enough, it stops being a metaphor: if I'm saying that I'm checking my time, is that a chess metaphor? For that matter, “toilet” didn't etymologically mean what it means now either -- it originally referred to a piece of cloth. So, yes, words trigger thoughts, but they don't do that based on their etymology; they do it based on what situations the listener associates them with.

(Why are you specifying Great Britain, anyway? How different are things in NI than in the rest of the UK? I only spent a few days there, hardly any of which watching TV.)

Replies from: ChristianKl
comment by ChristianKl · 2013-07-21T11:00:26.721Z · LW(p) · GW(p)

She can do the same things in the toilet too, can't she?

Yes, but that image isn't as directly conjured up by the word toilet.

I'm also not saying that the term "ladies' room" will never conjure up the same image, just that it is less likely to do so.

Furthermore, if you are in a culture where some people use euphemisms while others do not, you signal something by your choice to either use or not use the euphemisms.

Of course what you signal is different when you are conscious that the other person consciously notices that you make that choice than when it happens on a more unconscious level.

(Why are you specifying Great Britain, anyway? How different are things in NI than in the rest of the UK? I only spent a few days there, hardly any of which watching TV.)

I didn't intend any special meaning there.

comment by A1987dM (army1987) · 2013-07-18T12:24:45.442Z · LW(p) · GW(p)

But I don't think you get very far in seducing women by trying to use such tricks to make them form naked images of you.

That's not something I'd want to do anyway. (That's why my reaction in the first couple seconds after I read your comment was being worried that I might have done that by accident. Then I decided that if someone was that susceptible there would likely be much bigger issues anyway.)

comment by A1987dM (army1987) · 2013-07-16T12:12:18.350Z · LW(p) · GW(p)

If you actually want to make out, flirtation is usually the way to go.

I guess it depends on what your long-term goals are.

Hooking up within seconds of noticing each other is not that uncommon in certain venues, and I haven't noticed any downsides to that.¹ (My inner Umesh says this just means I don't do that often enough, and I guess he does have a point, though I don't know whether it's relevant.) Granted, that's unlikely to result in a relationship, but that's not what drethelin is seeking anyway.


  1. Unless you count the fact that you are standing, which if the other person is over a foot shorter than you and your lower body strength and sense of balance are much worse than usual due to tipsiness, tiredness, severe sleep deprivation and not having exercised in a week, can be troublesome if you don't pay attention to where your damn centre of gravity is.

comment by ikrase · 2013-07-13T08:22:51.212Z · LW(p) · GW(p)

  • What's with the ems? People who are into ems seem to make a lot of assumptions about what ems are like and seem completely unattached to present-day culture or even structure of life, seem willing to spam duplicates of people around, etc. I know that Hanson thinks that 1. ems will not be robbed of their humanity and 2. that lots of things we currently consider horrible will come to pass and be accepted, but it's rather strange just how as soon as people say 'em' (as opposed to any other form of uploading) everything gets weird. Does anthropics come into it?

  • Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project. (and yes, I'm aware of the need for Friendliness in Obedient AI.)

Replies from: cousin_it, drethelin, Eliezer_Yudkowsky, ChristianKl, hairyfigment, DanielLC, ChristianKl, Tenoke
comment by cousin_it · 2013-07-13T08:48:17.867Z · LW(p) · GW(p)

For what it's worth, Eliezer's answer to your second question is here:

There is no safe wish smaller than an entire human morality. (...) With a safe genie, wishing is superfluous. Just run the genie.

Replies from: timtyler, ikrase
comment by timtyler · 2013-07-14T11:46:38.375Z · LW(p) · GW(p)

There is no safe wish smaller than an entire human morality.

Is that true? Why can't the wish point at what it wants (e.g. the wishes of particular human X) - rather than spelling it out in detail?

Replies from: drethelin, ESRogs
comment by drethelin · 2013-07-14T16:50:27.125Z · LW(p) · GW(p)

The first problem is the wish would have to be extremely good at pointing.

This sounds silly but what I mean is that humans are COMPLICATED. "Pointing" at a human and telling an AI to deduce things about it will come up with HUGE swathes of data which you have to have already prepared it to ignore or pay attention to. To give a classic simple example, smiles are a sign of happiness but we do not want to tile the universe in smiley faces or create an artificial virus that constricts your face into a rictus and is highly contagious.

Second: assuming that works, it works primarily for one person, which is giving that person a lot more power than I think most people want to give any one person. But if we could guarantee an AI would fulfill the values of A person rather than of multiple people, and someone else was developing AI that wasn't guaranteed to fulfill any values, I'd probably take it.

comment by ESRogs · 2013-07-16T01:05:09.301Z · LW(p) · GW(p)

To spell out some of the complications -- does the genie only respond to verbal commands? What if the human is temporarily angry at someone and an internal part of their brain wishes them harm? The genie needs to know not to act on this. So it must have some kind of requirement for reflective equilibrium.

Suppose the human is duped into pursuing some unwise course of action? The genie needs to reject their new wishes. But the human should still be able to have their morality evolve over time.

So you still need a complete CV Extrapolator. But maybe that's what you had in mind by pointing at the wishes of a particular human?

comment by ikrase · 2013-07-14T03:02:46.144Z · LW(p) · GW(p)

I think that Obedient AI involves fewer fragility-of-values type problems.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-14T04:22:32.126Z · LW(p) · GW(p)

I don't see why a genie can't kill you just as hard by missing one dimension of what it meant to satisfy your wish.

Replies from: ikrase
comment by ikrase · 2013-07-14T10:23:10.177Z · LW(p) · GW(p)

I'm not talking about naive Obedient AI here. I'm talking about a much less meta FAI that does not do analysis of metaethics or CEV, or handle incredibly vague, subtle wishes. (Atlantis in HPMOR may be an example of a very weak, rather irrational, poorly safeguarded Obedient AI with a very, very strange command set.)

comment by drethelin · 2013-07-13T13:09:54.422Z · LW(p) · GW(p)

Basically it's a matter of natural selection. Given a starting population of ems, if some are unwilling to be copied, the ones that are willing to be copied will dominate the population in short order. If ems are useful for work, i.e. valuable, then the more valuable ones will be copied more often. At that point, ems that are willing to be copied and do slave labor effectively without complaint will become the most copied, and the population of ems will end up being composed largely of copies of the person/people who are 1) ok with being copied, 2) ok with being modified to work more effectively.
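
(A minimal sketch of that selection dynamic, purely to illustrate the argument; the starting counts and the per-round copy rate are made-up numbers, not predictions:)

```python
# Toy model: ems willing to be copied get duplicated each round,
# ems unwilling to be copied do not. All numbers are invented.

def copy_willing_share(willing=10.0, unwilling=990.0, copy_rate=0.5, rounds=20):
    """Fraction of the em population descended from copy-willing originals."""
    for _ in range(rounds):
        willing *= 1 + copy_rate  # willing ems are duplicated each round
    return willing / (willing + unwilling)

if __name__ == "__main__":
    for rounds in (0, 5, 10, 20):
        share = copy_willing_share(rounds=rounds)
        print(f"after {rounds:2d} rounds: {share:.1%} of ems are copy-willing")
```

Even starting at 1% of the population, the copy-willing ems dominate within a couple dozen copying opportunities; the same selection pressure then applies within that group to whoever is also willing to be modified to work more effectively.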

Replies from: Tenoke
comment by Tenoke · 2013-07-13T13:12:56.098Z · LW(p) · GW(p)

What question are you answering?

Replies from: torekp
comment by torekp · 2013-07-14T15:34:03.224Z · LW(p) · GW(p)

Question 1, especially sentences 2-3. How to tell: the answer is highly appropriate and correct if taken as a response to that.

Replies from: Tenoke
comment by Tenoke · 2013-07-14T16:57:56.675Z · LW(p) · GW(p)

People who are into ems seem to make a lot of assumptions about what ems are like and seem completely unattached to present-day culture or even structure of life, seem willing to spam duplicates of people around, etc. I know that Hanson thinks that 1. ems will not be robbed of their humanity and 2. that lots of things we currently consider horrible will come to pass and be accepted, but it's rather strange just how as soon as people say 'em' (as opposed to any other form of uploading) everything gets weird.

with an answer

Basically it's a matter of natural selection. Given a starting population of ems, if some are unwilling to be copied, the ones that are willing to be copied will dominate the population in short order. If ems are useful for work, i.e. valuable, then the more valuable ones will be copied more often. At that point, ems that are willing to be copied and do slave labor effectively without complaint will become the most copied, and the population of ems will end up being composed largely of copies of the person/people who are 1) ok with being copied, 2) ok with being modified to work more effectively.

Doesn't make too much sense. Not to mention that those are not questions.

The questions in the post are

  1. What's with the ems?
  2. Does anthropics come into it?
  3. Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI?

At least to me the answer doesn't seem to fit any of those but I guess the community highly disagrees with me given the upvotes and downvotes.

Replies from: torekp
comment by torekp · 2013-07-16T01:39:36.298Z · LW(p) · GW(p)

Question 1 doesn't end with the question mark; the next two sentences explain the intention of asking "what's with the ems?" Which would otherwise be a hopelessly vague question, but becomes clear enough with the help. Charitable interpretation trumps exact punctuation as an interpretive guide.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-14T04:15:54.574Z · LW(p) · GW(p)

Well, no offense, but I'm not sure you are aware of the need for Friendliness in Obedient AI, or rather, just how much F you need in a genie.

If you were to actually figure out how to build a genie you would have figured it out by trying to build a CEV-class AI, intending to tackle all those challenges, tackling all those challenges, having pretty good solutions to all of those challenges, not trusting those solutions quite enough, and temporarily retreating to a mere genie which had ALL of the safety measures one would intuitively imagine necessary for a CEV-class independently-acting unchecked AI, to the best grade you could currently implement them. Anyone who thought they could skip the hard parts of CEV-class FAI by just building a genie instead, would die like a squirrel under a lawnmower. For reasons they didn't even understand because they hadn't become engaged with that part of the problem.

I'm not certain that this must happen in reality. The problem might have much kinder qualities than I anticipate in the sense of mistakes naturally showing up early enough and blatantly enough for corner-cutters to spot them. But it's how things are looking as a default after becoming engaged with the problems of CEV-class AI. The same problems show up in proposed 'genies' too, it's just that the genie-proposers don't realize it.

Replies from: ikrase
comment by ikrase · 2013-07-14T10:30:51.646Z · LW(p) · GW(p)

I'm... not sure what you mean by this. And I wouldn't be against putting a whole CEV-ish human morality in an AI, either. My point is that there seems to be a big space between your Outcome Pump fail example and highly paternalistic AIs of the sort that caused Failed Utopia 4-2.

It reminds me a little bit of how modern computers are only occasionally used for computation.

Replies from: Eliezer_Yudkowsky, NancyLebovitz
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-14T23:18:49.223Z · LW(p) · GW(p)

Anything smarter-than-human should be regarded as containing unimaginably huge forces held in check only by the balanced internal structure of those forces, since there is nothing which could resist them if unleashed. The degree of 'obedience' makes very little difference to this fact, which must be dealt with before you can go on to anything else.

comment by NancyLebovitz · 2013-07-14T13:24:53.461Z · LW(p) · GW(p)

As I understand it, an AI is expected to make huge, inventive efforts to fulfill its orders as it understands them.

You know how sometimes people cause havoc while meaning well? Imagine something immensely more powerful and probably less clueful making the same mistake.

comment by ChristianKl · 2013-07-13T13:37:48.745Z · LW(p) · GW(p)

I know that Hanson thinks that 1. ems will not be robbed of their humanity

I don't know whether Hanson has a concrete concept of 'humanity'.

comment by hairyfigment · 2013-07-15T02:57:39.440Z · LW(p) · GW(p)

Besides Eliezer's rather strong-looking argument, ethically creating Obedient AI would require solving the following scary problems:

  • A "nonperson predicate" that can ensure the AI doesn't create simulations which themselves count as people. If we fail to solve this one, then I could be a simulation the AI made in order to test how people like me react to torture.

  • A way to ensure the AI itself does not count as a person, so that we don't feel sad if it eventually switches itself off. See here for a fuller explanation of why this matters.

Now, I think Wei Dai suggested we start by building a "philosophical" AI that could solve such problems for us. I don't think philosophy is a natural class. (A 'correct way to do philosophy' sounds like a fully general correct way to think and act.) But if we get the AI's goals right, then maybe it could start out restricted by flawed and overcautious answers to these questions, but find us some better answers. Maybe.

Replies from: ikrase
comment by ikrase · 2013-07-15T20:00:16.904Z · LW(p) · GW(p)

I am aware of the need for those things (that's part of what I mean by the need for Friendliness in OAI), but as far as I can tell, paternalistic FAI requires you to solve those problems, plus simply not being 'very powerful but insane', plus a basic understanding of what matters to humans, plus incredibly meta human-values questions. An OAI can leave off the last one of those problems.

Replies from: hairyfigment
comment by hairyfigment · 2013-07-15T20:38:56.776Z · LW(p) · GW(p)

  1. I meant that by going meta we might not have to solve them fully.

  2. All the problems you list sound nearly identical to me. In particular, "what matters to humans" sounds more vague but just as meta. If it includes enough details to actually reassure me, you could just tell the AI, "Do that." Presumably what matters to us would include 'the ability to affect our environment, eg by giving orders.' What do you mean by "very powerful but insane"? I want to parse that as 'intelligent in the sense of having accurate models that allow it to shape the future, but not programmed to do what matters to humans.'

Replies from: ikrase
comment by ikrase · 2013-07-16T07:51:48.075Z · LW(p) · GW(p)

"very powerful but insane" : AI's response to orders seem to make less than no sense, yet AI is still able to do damage. "What matters to humans": Things like the Outcome Pump example, where any child would know that not dying is supposed to be part of "out of the building", but not including the problems that we are bad at solving, such as fun theory and the like.

comment by DanielLC · 2013-07-13T20:46:42.196Z · LW(p) · GW(p)

(as opposed to any other form of uploading)

I didn't know "em" was a specific form of uploading. What form is it, and what other forms are there?

comment by ChristianKl · 2013-07-13T12:09:09.934Z · LW(p) · GW(p)

Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project. (and yes, I'm aware of the need for Friendliness in Obedient AI.)

Because the AI is better at estimating the consequences of following an order than the person giving the order.

There's also the issue that the AI is likely to act in a way that changes the orders the person gives, if its own utility criteria are about fulfilling orders.

Replies from: bogdanb
comment by bogdanb · 2013-07-13T18:11:48.298Z · LW(p) · GW(p)

Also, even assuming a “right” way of making obedient FAI is found (for example, one that warns you if you’re asking for something that might bite you in the ass later), there remains the problem of who is allowed to give orders to the AI. Power corrupts, etc.

comment by Tenoke · 2013-07-13T08:38:44.799Z · LW(p) · GW(p)

What's with the ems?

We can make more solid predictions about ems than we can about strong AI, since there are fewer black swans regarding ems to mess up our calculations.

Does anthropics come into it?

No.

Why the huge focus on fully paternalistic Friendly AI rather than Obedient AI? It seems like a much lower-risk project.

the need for Friendliness in Obedient AI.

comment by Alejandro1 · 2013-07-13T16:35:28.801Z · LW(p) · GW(p)

It seems to me that there are basically two approaches to preventing an UFAI intelligence explosion: a) making sure that the first intelligence explosion is an FAI instead; b) making sure that an intelligence explosion never occurs. The first one involves solving (with no margin for error) the philosophical/ethical/logical/mathematical problem of defining FAI, and in addition the sociological/political problem of doing it "in time", convincing everyone else, and ensuring that the first intelligence explosion occurs according to this resolution. The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.

Naively, it seems to me that the second approach is more viable -- it seems comparable in scale to something between stopping use of CFCs (fairly easy) and stopping global warming (very difficult, but it is premature to say impossible). At any rate, it sounds easier than solving (over a few years/decades) so many hard philosophical and mathematical problems, with no margin for error and under time pressure to do it ahead of UFAI developing.

However, it seems (from what I read on LW and found quickly browsing the MIRI website; I am not particularly well informed, hence writing this on the Stupid Questions thread) that most of the efforts of MIRI are on the first approach. Has there been a formal argument on why it is preferable, or are there efforts on the second approach I am unaware of? The only discussion I found was Carl Shulman's "Arms Control and Intelligence Explosions" paper, but it is brief and nothing like a formal analysis comparing the benefits of each strategy. I am worried the situation might be biased by the LW/MIRI kind of people being more interested in (and seeing as more fun) the progress on the timeless philosophical problems necessary for (a) than the political coalition building and propaganda campaigns necessary for (b).

Replies from: Eliezer_Yudkowsky, None, Kaj_Sotala, NancyLebovitz, Qiaochu_Yuan, Nisan, timtyler
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-14T04:24:43.912Z · LW(p) · GW(p)

I think it's easier to get a tiny fraction of the planet to do a complex right thing than to get 99.9% of a planet to do a simpler right thing, especially if 99.9% compliance may not be enough and 99.999% compliance may be required instead.
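
(A back-of-envelope illustration of why those compliance figures matter; the headcount of actors capable of attempting AGI is an invented assumption, not an estimate:)

```python
# How many non-compliant actors remain at various compliance rates,
# assuming (an invented figure) ten million actors capable of attempting AGI.

capable_actors = 10_000_000

for compliance in (0.999, 0.99999, 0.9999999):
    defectors = capable_actors * (1 - compliance)
    print(f"{compliance:.5%} compliance leaves roughly {defectors:,.0f} potential defectors")
```

If a single non-compliant actor is enough to cause the bad outcome, then even the higher compliance rates leave an uncomfortable residue under this made-up headcount, which is the sense in which 99.9% may not be enough.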

Replies from: shminux
comment by Shmi (shminux) · 2013-07-14T04:46:19.309Z · LW(p) · GW(p)

This calls for a calculation. How hard would creating an FAI have to be for this inequality to be reversed?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-07-14T05:01:32.497Z · LW(p) · GW(p)

When I see proposals that involve convincing everyone on the planet to do something, I write them off as loony-eyed idealism and move on. So, creating FAI would have to be hard enough that I considered it too "impossible" to be attempted (with this fact putatively being known to me given already-achieved knowledge), and then I would swap to human intelligence enhancement or something because, obviously, you're not going to persuade everyone on the planet to agree with you.

Replies from: Alejandro1, shminux
comment by Alejandro1 · 2013-07-15T00:02:08.674Z · LW(p) · GW(p)

But is it really necessary to persuade everyone, or 99.9% of the planet? If gwern's analysis is correct (I have no idea if it is) then it might suffice to convince the policymakers of a few countries like the USA and China.

comment by Shmi (shminux) · 2013-07-14T05:14:03.589Z · LW(p) · GW(p)

I see. So you do have an upper bound in mind for the FAI problem difficulty, then, and it's lower than other alternatives. It's not simply "shut up and do the impossible".

comment by [deleted] · 2013-07-13T17:56:23.921Z · LW(p) · GW(p)

Given enough time for ideas to develop, any smart kid in a basement could build an AI, and every organization in the world has a massive incentive to do so. Only omnipresent surveillance could prevent everyone from writing a particular computer program.

Once you have enough power flying around to actually prevent AI, you are dealing with AI-level threats already (a not-necessarily friendly singleton).

So FAI is actually the easiest way to prevent UFAI.

The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.

Replies from: pop, Alejandro1, shminux
comment by pop · 2013-07-17T03:24:46.481Z · LW(p) · GW(p)

The other reason is that a Friendly Singleton would be totally awesome. Like so totally awesome that it would be worth it to try for the awesomeness alone.

Your tone reminded me of super religious folk who are convinced that, say, "Jesus is coming back soon!" and that it'll be "totally awesome".

Replies from: None
comment by [deleted] · 2013-07-17T04:41:33.336Z · LW(p) · GW(p)

That's nice.

Your comment reminds me of those internet atheists that are so afraid of being religious that they refuse to imagine how much better the world could be.

Replies from: pop
comment by pop · 2013-07-17T05:03:14.188Z · LW(p) · GW(p)

I do imagine how much better the world could be. I actually do want MIRI to succeed. Though currently I have low confidence in their future success, so I don't feel "bliss" (if that's the right word.)

BTW I'm actually slightly agnostic because of the simulation argument.

Replies from: None
comment by [deleted] · 2013-07-17T05:15:19.786Z · LW(p) · GW(p)

I don't feel "bliss" (if that's the right word.)

Enthusiasm? Excitement? Hope?

I'm actually slightly agnostic because of the simulation argument.

Yep. I don't take it too seriously, but it's at least coherent to imagine beings outside the universe who could reach in and poke at us.

comment by Alejandro1 · 2013-07-13T18:19:29.449Z · LW(p) · GW(p)

But, in the current situation (or even a few years from now) would it be possible for a smart kid in a basement to build an AI from scratch? Isn't it something that still requires lots of progress to build on? See my reply to Qiaochu.

Replies from: Manfred
comment by Manfred · 2013-07-13T19:35:32.430Z · LW(p) · GW(p)

So will progress just stop for as long as we want it to?

Replies from: Alejandro1
comment by Alejandro1 · 2013-07-13T19:50:18.212Z · LW(p) · GW(p)

The question is whether it would be possible to ban further research and stop progress (open, universally accessible and buildable-upon progress), in time for AGI to be still far away enough that an isolated group in a basement will have no chance of achieving it on its own.

Replies from: Manfred
comment by Manfred · 2013-07-13T20:30:53.763Z · LW(p) · GW(p)

If by "basement" you mean "anywhere, working in the interests of any organization that wants to gain a technology advantage over the rest of the world," then sure, I agree that this is a good question. So what do you think the answer is?

Replies from: Alejandro1
comment by Alejandro1 · 2013-07-14T02:18:52.896Z · LW(p) · GW(p)

I have no idea! I am not a specialist of any kind in AI development. That is why I posted in the Stupid Questions thread asking "has MIRI considered this and made a careful analysis?" instead of making a top-level post saying "MIRI should be doing this". It may seem that in the subthread I am actively arguing for strategy (b), but what I am doing is pushing back against what I see as insufficient answers on such an important question.

So... what do you think the answer is?

Replies from: Manfred
comment by Manfred · 2013-07-14T02:40:52.916Z · LW(p) · GW(p)

If you want my answers, you'll need to humor me.

comment by Shmi (shminux) · 2013-07-13T18:03:59.970Z · LW(p) · GW(p)

The other reason is that a Friendly Singleton would be totally awesome.

Uh, apparently my awesome is very different from your awesome. What scares me is this "Singleton" thing, not the friendly part.

Replies from: None, pop
comment by [deleted] · 2013-07-13T18:15:23.789Z · LW(p) · GW(p)

Hmmm. What is it going to do that is bad, given that it has the power to do the right thing, and is Friendly?

We have inherited some anti-authoritarian propaganda memes from a cultural war that is no longer relevant, and those taint the evaluation of a Singleton, even though they really don't apply. At least that's how it felt to me when I thought through it.

comment by pop · 2013-07-17T03:29:24.827Z · LW(p) · GW(p)

Upvoted.

I'm not sure why more people around here are not concerned about the singleton thing. It almost feels like yearning for a god on some people's part.

comment by Kaj_Sotala · 2013-07-14T19:22:09.265Z · LW(p) · GW(p)

We discuss this proposal in Responses to Catastrophic AGI Risk, under the sections "Regulate research" and "Relinquish technology". I recommend reading both of those sections if you're interested, but a few relevant excerpts:

Large-scale surveillance efforts are ethically problematic and face major political resistance, and it seems unlikely that current political opinion would support the creation of a far-reaching surveillance network for the sake of AGI risk alone. The extent to which such extremes would be necessary depends on exactly how easy it would be to develop AGI in secret. Although several authors make the point that AGI is much easier to develop unnoticed than something like nuclear weapons (McGinnis 2010; Miller 2012), cutting edge high-tech research does tend to require major investments which might plausibly be detected even by less elaborate surveillance efforts. [...]

Even under such conditions, there is no clear way to define what counts as dangerous AGI. Goertzel and Pitt (2012) point out that there is no clear division between narrow AI and AGI, and attempts to establish such criteria have failed. They argue that since AGI has a nebulous definition, obvious wide-ranging economic benefits, and potentially rich penetration into multiple industry sectors, it is unlikely to be regulated due to speculative long-term risks.

AGI regulation requires global cooperation, as the noncooperation of even a single nation might lead to catastrophe. Historically, achieving global cooperation on tasks such as nuclear disarmament and climate change has been very difficult. As with nuclear weapons, AGI could give an immense economic and military advantage to the country that develops it first, in which case limiting AGI research might even give other countries an incentive to develop AGI faster (Cade 1966; de Garis 2005; McGinnis 2010; Miller 2012) [...]

To be effective, regulation also needs to enjoy support among those being regulated. If developers working in AGI-related fields only follow the letter of the law, while privately considering all regulations as annoying hindrances and fears about AGI overblown, the regulations may prove ineffective. Thus, it might not be enough to convince governments of the need for regulation; the much larger group of people working in the appropriate fields may also need to be convinced.

While Shulman (2009) argues that the unprecedentedly destabilizing effect of AGI could be a cause for world leaders to cooperate more than usual, the opposite argument can be made as well. Gubrud (1997) argues that increased automation could make countries more self-reliant, and international cooperation considerably more difficult. AGI technology is also much harder to detect than, for example, nuclear technology is—nuclear weapons require a substantial infrastructure to develop, while AGI needs much less (McGinnis 2010; Miller 2012). [...]

Goertzel and Pitt (2012) suggest that for regulation to be enacted, there might need to be an “AGI Sputnik”—a technological achievement that makes the possibility of AGI evident to the public and policy makers. They note that after such a moment, it might not take very long for full human-level AGI to be developed, while the negotiations required to enact new kinds of arms control treaties would take considerably longer. [...]

“Regulate research” proposals: Our view

Although there seem to be great difficulties involved with regulation, there also remains the fact that many technologies have been successfully subjected to international regulation. Even if one were skeptical about the chances of effective regulation, an AGI arms race seems to be one of the worst possible scenarios, one which should be avoided if at all possible. We are therefore generally supportive of regulation, though the most effective regulatory approach remains unclear. [...]

Not everyone believes that the risks involved in creating AGIs are acceptable. Relinquishment involves the abandonment of technological development that could lead to AGI. This is possibly the earliest proposed approach, with Butler (1863) writing that “war to the death should be instantly proclaimed” upon machines, for otherwise they would end up destroying humans entirely. In a much-discussed article, Joy (2000) suggests that it might be necessary to relinquish at least some aspects of AGI research, as well as nanotechnology and genetics research.

AGI relinquishment is criticized by Hughes (2001), with Kurzweil (2005) criticizing broad relinquishment while being supportive of the possibility of “fine-grained relinquishment,” banning some dangerous aspects of technologies while allowing general work on them to proceed. In general, most writers reject proposals for broad relinquishment. [...]

McKibben (2003), writing mainly in the context of genetic engineering, suggests that AGI research should be stopped. He brings up the historical examples of China renouncing seafaring in the 1400s and Japan relinquishing firearms in the 1600s, as well as the more recent decisions of abandoning DDT, CFCs, and genetically modified crops in Western countries. However, it should also be noted that Japan participated in World War II, that China now has a navy, that there are reasonable alternatives for DDT and CFCs, which probably do not exist for AGI, and that genetically modified crops are in wide use in the United States.

Hughes (2001) argues that attempts to outlaw a technology will only make the technology move to other countries. He also considers the historical relinquishment of biological weapons to be a bad example, for no country has relinquished peaceful biotechnological research such as the development of vaccines, nor would it be desirable to do so. With AGI, there would be no clear dividing line between safe and dangerous research. [...]

Relinquishment proposals suffer from many of the same problems as regulation proposals, only worse. There is no historical precedent of general, multi-use technology similar to AGI being successfully relinquished for good, nor do there seem to be any theoretical reasons for believing that relinquishment proposals would work in the future. Therefore we do not consider them to be a viable class of proposals.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-15T04:51:38.236Z · LW(p) · GW(p)

Butler (1863) writing that “war to the death should be instantly proclaimed”

I had no idea that Herbert's Butlerian Jihad might be a historical reference.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-16T10:22:56.776Z · LW(p) · GW(p)

Wow, I've read Dune several times, but didn't actually get that before you pointed it out.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-16T14:39:31.149Z · LW(p) · GW(p)

It turns out that there's a wikipedia page.

comment by NancyLebovitz · 2013-07-13T17:38:35.510Z · LW(p) · GW(p)

There's a third alternative, though it's quite unattractive: damaging civilization to the point that AI is impossible.

Replies from: Jonathan_Graehl, arborealhominid, timtyler
comment by Jonathan_Graehl · 2013-07-16T06:37:25.926Z · LW(p) · GW(p)

Out of billions of people, a few with extremely weird brains are likely to see evil-AI risk as nigh.

One of them is bound to push the red button way before I or anyone else would reach for it.

So I hope red technology-resetting buttons don't become widely available.

This suggests a principle: I have a duty to be conservative in my own destroy-the-world-to-save-it projects :)

comment by arborealhominid · 2013-07-14T00:09:07.678Z · LW(p) · GW(p)

And there are, in fact, several people proposing this as a solution to other anthropogenic existential risks. Here's one example.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-14T02:07:42.968Z · LW(p) · GW(p)

I would like to think that people whose FAQ opens the answer to each question in a new tab aren't competent enough to do much of anything. This is probably wishful thinking.

Just to mention a science fiction treatment: John Barnes' Daybreak books. I think the books are uneven, but the presentation of civilization-wrecking as an internet artifact is rather plausible.

Replies from: hairyfigment
comment by hairyfigment · 2013-07-14T06:06:46.629Z · LW(p) · GW(p)

Not only that, they're talking about damaging civilization and they have an online FAQ in the first place. They fail Slytherin forever.

Replies from: Viliam_Bur, NancyLebovitz
comment by Viliam_Bur · 2013-07-14T11:49:34.064Z · LW(p) · GW(p)

My model of humans says that some people will read their page, become impressed and join them. I don't know how many, but I think that the only thing that stops millions of people from joining them is that there already are thousands of crazy ideas out there competing with each other, so the crazy people remain divided.

Also, the website connects destroying civilization with many successful applause lights. (Actually, it seems to me like a coherent extrapolation of them; although that could be just my mindkilling speaking.) That should make it easier to get dedicated followers.

Destroying civilization is too big a goal for them, but they could do some serious local damage.

Replies from: hairyfigment
comment by hairyfigment · 2013-07-14T20:29:34.560Z · LW(p) · GW(p)

My model of humans says that some people will read their page, become impressed and join them

And my model of the government says this has negative expected value overall.

Replies from: D_Malik
comment by D_Malik · 2013-07-15T22:53:16.264Z · LW(p) · GW(p)

An example of this: the Earth Liberation Front and Animal Liberation Front got mostly dismantled by CIA infiltrators as soon as they started getting media attention.

comment by NancyLebovitz · 2013-07-14T13:35:18.978Z · LW(p) · GW(p)

Our standards for Slytherin may be too high.

Replies from: hairyfigment
comment by hairyfigment · 2013-07-14T20:28:58.673Z · LW(p) · GW(p)

I don't get it. None of us here set the standards, unless certain donors have way more connections than I think they do.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-15T04:48:26.545Z · LW(p) · GW(p)

You're the one who mentioned failing Slytherin forever.

My actual belief is that people don't have to be ideally meticulous to do a lot of damage.

comment by timtyler · 2013-07-14T11:36:29.269Z · LW(p) · GW(p)

Isn't that already covered by option b?

comment by Qiaochu_Yuan · 2013-07-13T17:34:15.836Z · LW(p) · GW(p)

My impression of Eliezer's model of the intelligence explosion is that he believes b) is much harder than it looks. If you make developing strong AI illegal then the only people who end up developing it will be criminals, which is arguably worse, and it only takes one successful criminal organization developing strong AI to cause an unfriendly intelligence explosion. The general problem is that a) requires that one organization do one thing (namely, solving friendly AI) but b) requires that literally all organizations abstain from doing one thing (namely, building unfriendly AI).

CFCs and global warming don't seem analogous to me. A better analogy to me is nuclear disarmament: it only takes one nuke to cause bad things to happen, and governments have a strong incentive to hold onto their nukes for military applications.

Replies from: NancyLebovitz, Alejandro1
comment by NancyLebovitz · 2013-07-13T23:03:51.598Z · LW(p) · GW(p)

What would a law against developing strong AI look like?

Replies from: gwern
comment by gwern · 2013-07-13T23:23:58.761Z · LW(p) · GW(p)

I've suggested in the past that it would look something like a ban on chips more powerful than X teraflops/$.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-14T00:08:07.018Z · LW(p) · GW(p)

How close are we to illicit chip manufacturing? On second thought, it might be easier to steal the chips.

Replies from: gwern
comment by gwern · 2013-07-14T01:31:24.107Z · LW(p) · GW(p)

How close are we to illicit chip manufacturing?

Cutting-edge chip manufacturing of the necessary sort? I believe we are lightyears away and things like 3D printing are irrelevant, and that it's a little like asking how close we are to people running Manhattan Projects in their garage*; see my essay for details.

* Literally. The estimated budget for an upcoming Taiwanese chip fab is equal to some inflation-adjusted estimates of the Manhattan Project.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-14T13:34:37.692Z · LW(p) · GW(p)

My notion of nanotech may have some fantasy elements -- I think of nanotech as ultimately being able to put every atom where you want it, so long as the desired location is compatible with the atoms that are already there.

I realize that chip fabs keep getting more expensive, but is there any reason to think this can't reverse?

Replies from: gwern
comment by gwern · 2013-07-14T15:10:27.131Z · LW(p) · GW(p)

It's hard to say what nanotech will ultimately pan out to be.

I realize that chip fabs keep getting more expensive, but is there any reason to think this can't reverse?

But in the absence of nanoassemblers, it'd be a very bad idea to bet against Moore's second law.

comment by Alejandro1 · 2013-07-13T17:43:11.457Z · LW(p) · GW(p)

Right, I see your point. But it depends on how close you think we are to AGI. Assuming we are still quite far away, then if you manage to ban AI research early enough, it seems unlikely that a rogue group, cut off from the broader scientific and engineering community, will manage to make all the remaining progress by itself.

Replies from: bogdanb
comment by bogdanb · 2013-07-13T18:01:46.033Z · LW(p) · GW(p)

The difference is that AI is relatively easy to do in secret. CFCs and nukes are much harder to hide.

Also, only AGI research is dangerous (or, more exactly, self-improving AI), while the other kinds are very useful. Since it's hard to tell how far away the danger is (and many don't believe there's a big danger), you'll get a reaction similar to that against emission-control proposals (i.e., some will refuse to stop, and it's hard to convince a democratic country's population to start a war over that; not to mention that a war risks making the AI danger moot by killing us all).

Replies from: Alejandro1
comment by Alejandro1 · 2013-07-13T18:15:39.653Z · LW(p) · GW(p)

I agree that all kinds of AI research that are even close to AGI will have to be banned or strictly regulated, and that convincing all nations to ensure this is a hugely complicated political problem. (I don't think it is more difficult than controlling carbon emissions, because of status quo bias: it is easier to convince someone not to do something new that sounds good than to get them to stop doing something they view as good. But it is still hugely difficult, no question about that.) It just seems to me even more difficult (and risky) to aim to solve flawlessly all the problems of FAI.

Replies from: bogdanb
comment by bogdanb · 2013-07-13T19:18:32.607Z · LW(p) · GW(p)

Note that the problem is not convincing countries not to do AI; the problem is convincing countries to police their populations to prevent them from doing AI.

It’s much harder to hide a factory or a nuclear laboratory than to hide a bunch of geeks in a basement filled with computers. Note how bio-weapons are really scary not (just) because countries might be (or are) developing them, but because it’s soon becoming easy enough for someone to do it in their kitchen.

comment by Nisan · 2013-07-15T08:53:28.743Z · LW(p) · GW(p)

The approach of Leverage Research is more like (b).

comment by timtyler · 2013-07-14T11:37:47.848Z · LW(p) · GW(p)

The second one involves just the sociological/political problem of convincing everyone of the risks and banning/discouraging AI research "in time" to avoid an intelligence explosion.

There have been some efforts along those lines. It doesn't look as easy as all that.

comment by CronoDAS · 2013-07-15T08:26:15.763Z · LW(p) · GW(p)

I sometimes contemplate undertaking a major project. When I do so, I tend to end up reasoning like this:

It would be very good if I could finish this project. However, almost all the benefits of attempting the project will accrue when it's finished. (For example, a half-written computer game doesn't run at all, one semester's study of a foreign language won't let me read untranslated literature, an almost-graduated student doesn't have a degree, and so on.) Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.

As a result, I find myself almost never attempting a project of any kind that involves effort and will take longer than a few days. I don't want to live my life having done nothing, though. Advice?

Replies from: Larks, Kyre, Qiaochu_Yuan, gothgirl420666, Error
comment by Larks · 2013-07-15T09:43:09.807Z · LW(p) · GW(p)

a half-written computer game doesn't run at all

I realize this does not really address your main point, but you can have half-written games that do run. I've been writing a game on and off for the last couple of years, and it's been playable the whole time. Make the simplest possible underlying engine first, so it's playable (and testable) as soon as possible.
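
(A minimal sketch of what "playable from the start" can look like; the guessing-game premise and all names here are invented for illustration, not taken from Larks' project:)

```python
# A simplest-possible "engine": a game loop that already runs and can be tested,
# onto which content and features can be layered later.
import random

def play(low=1, high=20):
    secret = random.randint(low, high)
    guesses = 0
    while True:
        raw = input(f"Guess a number from {low} to {high} (or 'q' to quit): ")
        if raw.strip().lower() == "q":
            print("Thanks for playing.")
            return
        try:
            guess = int(raw)
        except ValueError:
            print("That wasn't a number.")
            continue
        guesses += 1
        if guess < secret:
            print("Too low.")
        elif guess > secret:
            print("Too high.")
        else:
            print(f"Got it in {guesses} guesses.")
            return

if __name__ == "__main__":
    play()
```

The point is only that the loop is complete on day one; everything else gets added to something that already runs, rather than to a pile of code that only becomes testable at the very end.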

Replies from: CAE_Jones, OnTheOtherHandle, sediment
comment by CAE_Jones · 2013-07-15T10:44:17.861Z · LW(p) · GW(p)

In fact, the games I tend to make progress on are the ones I can get testable as quickly as possible. Unfortunately, those are usually the least complicated ones (glorified MUDs, an x axis with only 4 possible positions, etc).

I do want to do bigger and better things, but then I run into the same problem as CronoDAS. When I do start a bigger project, I can sometimes get started, then crash within the first hour and never return. (In a couple of extreme cases, I lasted for a good week before it died, though one of these was for external reasons.) Getting started is usually the hardest part, followed by surviving until there's something worth looking back at. (A functioning menu system does not count.)

comment by OnTheOtherHandle · 2013-07-25T01:59:24.475Z · LW(p) · GW(p)

This seems like a really good concept to keep in mind. I wonder if it could be applied to other fields? Could you make a pot that remains a pot the whole way through, even as you refine it and add detail? Could you write a song that starts off very simple but still pretty, and then gradually layer on the complexity?

Your post inspired me to try this with writing, so thank you. :) We could start with a one-sentence story: "Once upon a time, two lovers overcame vicious prejudice to be together."

And that could be expanded into a one-paragraph story: "Chanon had known all her life that the blue-haired Northerners were hated enemies, never to be trusted, that she had to keep her red-haired Southern bloodline pure or the world would be overrun by the blue barbarians. But everything was thrown in her face when she met Jasper - his hair was blue, but he was a true crimson-heart, as the saying went. She tried to find every excuse to hate him, but time and time again Jasper showed himself to be a man of honor and integrity, and when he rescued her from those lowlife highway robbers - how could she not fall in love? Her father hated it of course, but even she was shocked at how easily he disowned her, how casually he threw away the bonds of family for the chains of prejudice. She wasn't happy now, homeless and adrift, but she knew that she could never be happy again in the land she had once called home. Chanon and Jasper set out to unknown lands in the East, where hopefully they could find some acceptance and love for their purple family."

This could be turned into a one page story, and then a five page story, and so on, never losing the essence of the message. Iterative storytelling might be kind of fun for people who are trying to get into writing something long but don't know if they can stick it out for months or years.

comment by sediment · 2013-07-21T19:33:28.359Z · LW(p) · GW(p)

I submit that this might generalize: that perhaps it's worth, where possible, trying to plan your projects with an iterative structure, so that feedback and reward appear gradually throughout the project, rather than in an all-or-nothing fashion at the very end. Tight feedback loops are a great thing in life. Granted, this is of no use for, for example, taking a degree.

comment by Kyre · 2013-07-16T00:09:30.404Z · LW(p) · GW(p)

I have/had this problem. My computer and shelves are full of partially completed (or, more realistically, just-begun) projects.

So, what I'm doing at the moment is I've picked one of them, and that's the thing I'm going to complete. When I'm feeling motivated, that's what I work on. When I'm not feeling motivated, I try to do at least half an hour or so before I flake off and go play games or work on something that feels more awesome at the time. At those times my motivation isn't that I feel that the project is worthwhile; it's that having gone through the process of actually finishing something will have been worthwhile.

It's possible after I'm done I may never put that kind of effort in again, but I will know (a) that I probably can achieve that sort of goal if I want and (b) if carrying on to completion is hell, what kind of hell and what achievement would be worth it.

comment by Qiaochu_Yuan · 2013-07-15T19:43:11.828Z · LW(p) · GW(p)

Beeminder. Record the number of Pomodoros you spend working on the project and set some reasonable goal, e.g. one a day.

comment by gothgirl420666 · 2013-07-15T15:07:52.890Z · LW(p) · GW(p)

there's a good chance I'll get frustrated and give up before actually completing the project

Make this not true. Practice doing a bunch of smaller projects, maybe one or two week-long projects, then a month-long project. Then you'll feel confident that your work ethic is good enough to complete a major project without giving up.

comment by Error · 2013-07-15T16:02:30.220Z · LW(p) · GW(p)

Undertaking this project will require a lot of time and effort spent on activities that aren't enjoyable for their own sake, and there's a good chance I'll get frustrated and give up before actually completing the project. So it would be better not to bother; the benefits of successfully completing the project seem unlikely to be large enough to justify the delay and risk involved.

Would it be worthwhile if you could guarantee or nearly guarantee that you will not just give up? If so, finding a way to credibly precommit to yourself that you'll stay the course may help. Beeminder is an option; so is publicly announcing your project and a schedule among people whose opinion you personally care about. (I do not think LW counts for this. It's too big; the monkeysphere effect gets in the way)

comment by gothgirl420666 · 2013-07-13T02:44:22.947Z · LW(p) · GW(p)

Why is space colonization considered at all desirable?

Replies from: None, drethelin, shminux, ThrustVectoring, TimS, TrE, malthrin, Thomas, iDante, DanielLC
comment by [deleted] · 2013-07-13T02:56:53.641Z · LW(p) · GW(p)

Earth is currently the only known biosphere. More biospheres means that disasters that muck up one are less likely to muck up everything.

Less seriously, people like things that are cool.

EDIT: Seriously? My most-upvoted comment of all time? Really? This is as good as it gets?

comment by drethelin · 2013-07-13T03:58:02.267Z · LW(p) · GW(p)

1: It's awesome. It's desirable for the same reason fast cars, fun computer games, giant pyramids, and sex are.

2: It's an insurance policy against things that might wreck the earth but not other planets/solar systems.

3: Insofar as we can imagine there to be other alien races, understanding space colonization is extremely important either for trade or self defense.

4: It's possible different subsets of humanity can never happily coexist, in which case having arbitrarily large amounts of space to live in ensures more peace and stability.

Replies from: DanArmak
comment by DanArmak · 2013-07-13T15:52:53.993Z · LW(p) · GW(p)

It's awesome. It's desirable for the same reason fast cars, fun computer games, giant pyramids, and sex are.

In sci-fi maybe. I doubt people actually living in space (or on un-Earth-like planets) would concur, without some very extensive technological change.

It's possible different subsets of humanity can never happily coexist, in which case having arbitrarily large amounts of space to live in ensures more peace and stability.

New incompatible sub-subsets will just keep arising in new colonies - as has happened historically.

comment by Shmi (shminux) · 2013-07-13T04:33:38.249Z · LW(p) · GW(p)

Eggs, basket, x-risk.

comment by ThrustVectoring · 2013-07-13T02:47:05.810Z · LW(p) · GW(p)

Would you rather have one person living a happy, fulfilled life, or two? Would you rather have seven billion people living with happy, fulfilled lives, or seven billion planets full of people living happy, fulfilled lives?

Replies from: Richard_Kennaway, gothgirl420666
comment by Richard_Kennaway · 2013-07-15T13:48:57.979Z · LW(p) · GW(p)

I am more interested in the variety of those happy, fulfilled lives than the number of them. Mere duplication has no value. The value I attach to any of these scenarios is not a function of just the set of utilities of the individuals living in them. The richer the technology, the more variety is possible. Look at the range of options available to a well-off person today, compared with 100 years ago, or 1000.

comment by gothgirl420666 · 2013-07-13T02:57:24.003Z · LW(p) · GW(p)

Oh, okay. Personally I lean much more towards average utilitarianism as opposed to total, but I haven't really thought through the issue that much. I was unaware that total utilitarianism was popular enough that it alone was sufficient for so many people to endorse space colonization.

But, now that I think about it, even if you wanted to add as many happy people to the universe as possible, couldn't you do it more efficiently with ems?
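
(A toy comparison of how the two aggregation rules rank a colonized versus an Earth-only future; the population sizes and welfare levels are made-up numbers, purely to illustrate the distinction:)

```python
# Total vs. average utilitarianism on two invented futures.
# Each group is (population, average welfare per person); all numbers are made up.

futures = {
    "Earth only": [(7e9, 8.0)],
    "Earth plus colonies": [(7e9, 8.0), (50e9, 7.5)],
}

for name, groups in futures.items():
    total = sum(pop * welfare for pop, welfare in groups)
    average = total / sum(pop for pop, _ in groups)
    print(f"{name:20s} total = {total:.2e}   average = {average:.2f}")
```

On these made-up numbers a total utilitarian prefers the colonized future by almost an order of magnitude, while an average utilitarian slightly prefers the Earth-only one unless the colonists end up at least as well off as the average, which is the point DanielLC makes further down the thread.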

Replies from: DanArmak, TsviBT, bogdanb, ThrustVectoring, RolfAndreassen, ikrase, Manfred, Pablo_Stafforini
comment by DanArmak · 2013-07-13T15:56:25.782Z · LW(p) · GW(p)

Even without total utilitarianism, increasing the population may be desirable as long as average quality of life isn't lowered. For instance, increasing the amount of R&D can make progress faster, which can benefit everyone. Of course one can also think of dangers and problems that scale with population size, so it's not a trivial question.

comment by TsviBT · 2013-07-13T03:56:37.671Z · LW(p) · GW(p)

Either way, more territory means more matter and energy, which means safer and longer lives.

comment by bogdanb · 2013-07-13T20:40:44.075Z · LW(p) · GW(p)

It would be more efficient with ems, but we can’t make ems yet. Technically we could already colonize space; it’s expensive, but still, it’s closer.

Think about why old-world people colonized the Americas (and everything else they could, anyway). The basic cause was space and resources. Of course, with current tech we can extract much more value and support a much larger population in Europe than we could at the time. But even if they had anticipated that, it still wouldn’t have made sense to wait.

comment by ThrustVectoring · 2013-07-15T00:46:20.443Z · LW(p) · GW(p)

I don't subscribe to either average or total utilitarianism. I'm more of a fan of selfish utilitarianism. It would make me personally feel better about myself were I to move the universe from 1 person living a life worth celebrating to 2 people living such lives, so it's worth doing.

comment by RolfAndreassen · 2013-07-13T03:53:45.576Z · LW(p) · GW(p)

Ems are still limited by the amount of available matter. They may enable you to colonise non-Earthlike planets, but you still need to colonise.

Replies from: DanArmak
comment by DanArmak · 2013-07-13T15:54:20.222Z · LW(p) · GW(p)

In fact, pretty much everything possible is limited by available energy and matter.

comment by ikrase · 2013-07-13T08:08:28.580Z · LW(p) · GW(p)

Personally, I too tend toward 'utilitarianism's domain does not include number of people', but I think most people have a preference toward at least minor pop. growth.

Also, many people (including me) are skeptical about ems or emulation in general. Plus, wouldn't you want to colonize the universe to build more emulation hardware anyway?

comment by Manfred · 2013-07-13T04:00:00.522Z · LW(p) · GW(p)

Personally I lean much more towards average utilitarianism as opposed to total

You should check out this post and its related posts (also here, and here). Which is to say, there is a whole wide world of preferences out there - why should I be limited to just one or two small options?

couldn't you do it more efficiently with ems?

Both/and.

comment by Pablo (Pablo_Stafforini) · 2013-07-13T14:08:53.692Z · LW(p) · GW(p)

Personally I lean much more towards average utilitarianism as opposed to total,

Your reply inspired me to post this stupid question.

comment by TimS · 2013-07-13T04:42:54.286Z · LW(p) · GW(p)

It seems likely that exploiting resources in space will make society richer, benefiting everyone. Perhaps that will require that people live in space.

comment by TrE · 2013-07-13T18:30:16.529Z · LW(p) · GW(p)

Another reason is that the earth's crust is quite poor in virtually all precious and useful metals (just look at the d-block of the periodic table for examples). Virtually all of them sank to the core during the earth's formation; the existing deposits are the result of asteroid strikes. So, asteroid mining is worth considering even if you're a pure capitalist working for your own gain.

comment by malthrin · 2013-07-17T19:38:08.446Z · LW(p) · GW(p)

Space colonization is part of the transhumanist package of ideas originating with Nikolai Fedorov.

comment by Thomas · 2013-07-13T06:07:57.801Z · LW(p) · GW(p)

It is not space as it currently is that is to be colonized. It's the radically technologically transformed space we are after!

Replies from: DanArmak
comment by DanArmak · 2013-07-13T15:50:43.522Z · LW(p) · GW(p)

Then why not be after technological transformation of Earth first, and (much easier) expansion into space afterwards? Is it only the 'eggs in one basket' argument that supports early colonization?

comment by iDante · 2013-07-13T06:18:31.956Z · LW(p) · GW(p)

no population cap

Replies from: DanArmak
comment by DanArmak · 2013-07-13T15:49:31.609Z · LW(p) · GW(p)

On a global scale, the demographic transition means most nations don't care about population caps much. On a local scale, individuals won't find it cheaper to raise children in colonies; in fact the cost of living will be much higher than on Earth at first.

Of course if you're a population ethicist, then you want to increase the population and space colonization looks good.

Replies from: RolfAndreassen
comment by RolfAndreassen · 2013-07-13T19:25:10.949Z · LW(p) · GW(p)

the demographic transition means most nations don't care about population caps much.

You are taking trends that have lasted a century at most, and extrapolating them thousands of years into the future. As long as the trait "wanting to have many children even while wealthy and educated" is even slightly heritable (not necessarily even genetically), that trait will spread, leading to a reversal of the demographic "transition".
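A toy projection of this dynamic (all numbers are invented for illustration, and "heritable" is modeled crudely as parent-to-child transmission with some leakage):

```python
# Two subpopulations: one carries a preference for larger families, one doesn't.
# growth_high / growth_low are per-generation growth factors; retention is the
# fraction of trait-carriers' children who inherit the preference.
def high_fertility_fraction(generations, frac_high=0.05,
                            growth_high=1.5, growth_low=0.9, retention=0.8):
    high, low = frac_high, 1.0 - frac_high
    for _ in range(generations):
        children_of_high = high * growth_high
        children_of_low = low * growth_low
        high = children_of_high * retention              # children who keep the trait
        low = children_of_low + children_of_high * (1 - retention)
    return high / (high + low)

for g in (0, 10, 25, 50):
    print(g, "generations:", round(high_fertility_fraction(g), 3))
```

Even with the trait only partially transmitted, the carrier fraction climbs toward 1, which is the point above: differential fertility plus any heritability eventually dominates the aggregate trend.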

Replies from: DanArmak
comment by DanArmak · 2013-07-13T19:30:39.547Z · LW(p) · GW(p)

I agree with your predictions. However, I was talking about why space colonization might be desirable now, rather than why it might become desirable in the future.

comment by DanielLC · 2013-07-13T05:01:05.525Z · LW(p) · GW(p)

If you're an average utilitarian, it's still a good idea if you can make the colonists happier than average. Since it's likely that there are large amounts of wildlife throughout the universe, this shouldn't be that difficult.

Replies from: Randaly
comment by Randaly · 2013-07-13T08:21:46.401Z · LW(p) · GW(p)

Since it's likely that there are large amounts of wildlife throughout the universe,

???

Replies from: DanielLC
comment by DanielLC · 2013-07-13T18:26:09.465Z · LW(p) · GW(p)

What's the question?

Earth isn't the only planet with life, is it? If most planets do not evolve sapient life, then the planets will be full of wildlife, which doesn't live very good lives.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2013-07-14T19:07:43.046Z · LW(p) · GW(p)

If most planets do not evolve sapient life, then the planets will be full of wildlife, which doesn't live very good lives.

That assumes that on most planets life will evolve to have wildlife but not sapient life, as opposed to e.g. only evolving single-celled life. Basically you're assuming that the most likely hard step for intelligence is between "wildlife" and "sapient life" instead of coming earlier, which seems unjustified without supporting evidence, since there are earlier candidates for hard steps that come after life has already begun on the world. For example, from Hanson's paper:

consider a set of four major transitions in the traditional fossil record identified by J. William Schopf [17]. Schopf labels these transitions “Filamentous Prokaryotes,” “Unicellular Eukaryotes,” “Sexual(?) Eukaryotes,” and “Metazoans,” at 3.5, 1.8, 1.1, and 0.6 billion years ago, respectively

Replies from: DanielLC
comment by DanielLC · 2013-07-15T04:01:53.641Z · LW(p) · GW(p)

It assumes a hard step between wildlife and sapient life, but it makes no assumptions about earlier hard steps.

I suppose the step isn't likely to be so hard that it would be difficult to create enough sapient life to massively outweigh the wildlife. Wildlife will only live on the surface of one planet; sapient life can live on many planets, and can mine them so as to use all the matter.

comment by JoshuaFox · 2013-07-14T17:14:25.160Z · LW(p) · GW(p)

How do you get someone to understand your words as they are, denotatively -- so that they do not overly-emphasize (non-existent) hidden connotations?

Of course, you should choose your words carefully, taking into account how they may be (mis)interpreted, but you can't always tie yourself into knots forestalling every possible guess about what your intentions "really" are.

Replies from: Qiaochu_Yuan, RomeoStevens, Error
comment by Qiaochu_Yuan · 2013-07-14T17:48:24.261Z · LW(p) · GW(p)

Establish a strong social script regarding instances where words should be taken denotatively, e.g. Crocker's rules. I don't think any other obvious strategies work. Hidden connotations exist whether you want them to or not.

(non-existent)

This is the wrong attitude about how communication works. What matters is not what you intended to communicate but what actually gets communicated. The person you're communicating with is performing a Bayesian update on the words that are coming out of your mouth to figure out what's actually going on, and it's your job to provide the Bayesian evidence that actually corresponds to the update you want.

comment by RomeoStevens · 2013-07-15T21:40:09.927Z · LW(p) · GW(p)

Become more status conscious. You are most likely inadvertently saying things that sound like status moves, which prompts others to not take what you say at face value. I haven't figured out how to fix this completely, but I have gotten better at noticing it and sometimes preempting it.

comment by Error · 2013-07-15T16:13:27.692Z · LW(p) · GW(p)

I wish I could upvote this question more. People assuming that I meant more than exactly what I said drives me up the wall, and I don't know how to deal with it either. (but Qiaochu's response below is good)

The most common failure mode I've experienced is the assumption that believing equals endorsing. One of the gratifying aspects of participating here is not having to deal with that; pretty much everyone on LW is inoculated.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-15T21:38:39.777Z · LW(p) · GW(p)

Be cautious: the vast majority do not make a strict demarcation between normative and positive statements inside their heads. Figuring this out massively improved my models of other people.

Replies from: Error
comment by Error · 2013-07-16T11:19:54.772Z · LW(p) · GW(p)

That makes life difficult when I want to say "X is true (but not necessarily good)"

For example, your statement is true but I'm not terribly happy about it. ;-)

comment by Turgurth · 2013-07-14T00:01:35.266Z · LW(p) · GW(p)

Reading the Sequences has improved my epistemic rationality, but not so much my instrumental rationality. What are some resources that would help me with this? Googling is not especially helping. Thanks in advance for your assistance.

Replies from: Qiaochu_Yuan, NancyLebovitz, MrMind
comment by Qiaochu_Yuan · 2013-07-14T00:12:39.319Z · LW(p) · GW(p)

Attend a CFAR workshop!

Replies from: None
comment by [deleted] · 2013-07-14T06:55:17.101Z · LW(p) · GW(p)

I think many people would find this advice rather impractical. What about people who (1) cannot afford to pay USD3900 to attend the workshop (as I understand it, scholarships offered by CFAR are limited in number), and/or (2) cannot afford to spend the time/money travelling to the Bay Area?

Replies from: palladias, Qiaochu_Yuan
comment by palladias · 2013-07-14T12:42:25.474Z · LW(p) · GW(p)

We do offer a number of scholarships. If that's your main concern, apply and see what we have available. (Applying isn't a promise to attend). If the distance is your main problem, we're coming to NYC and you can pitch us to come to your city.

comment by Qiaochu_Yuan · 2013-07-14T07:33:52.903Z · LW(p) · GW(p)

First of all, the question was "what are some resources," not "what should I do." A CFAR workshop is one option of many (although it's the best option I know of). It's good to know what your options are even if some of them are difficult to take. Second, that scholarships are limited does not imply that they do not exist. Third, the cost should be weighed against the value of attending, which I personally have reason to believe is quite high (disclaimer: I occasionally volunteer for CFAR).

comment by NancyLebovitz · 2013-07-16T14:45:08.691Z · LW(p) · GW(p)

What do you want to be more rational about?

Replies from: Turgurth
comment by Turgurth · 2013-07-17T09:26:36.751Z · LW(p) · GW(p)

I suppose the first step would be being more instrumentally rational about what I should be instrumentally rational about. What are the goals that are most worth achieving, or, what are my values?

comment by MrMind · 2013-07-15T09:28:13.484Z · LW(p) · GW(p)

Reading "Diaminds" holds the promise of putting me on track to becoming a better rationalist, but so far I cannot say that with certainty; I'm only at the second chapter (source: a recommendation here on LW; also, the first chapter is dedicated to explaining the methodology, and the authors seem to be good rationalists, very aware of all the biases involved).

Also, "dual n-back" training via dedicated software improves short-term memory, which seems to have a direct impact on our fluid intelligence (source: vaguely remembered discussion here on LW, plus the bulletproofexec blog).

comment by [deleted] · 2013-07-13T21:40:31.927Z · LW(p) · GW(p)

I have decided to take small risks on a daily basis (for the danger/action feeling), but I have trouble finding specific examples. What are interesting small-scale risks to take? (give as many examples as possible)

Replies from: therufs, None, Jayson_Virissimo, Qiaochu_Yuan, mwengler, Turgurth, Error, bramflakes
comment by therufs · 2013-07-14T04:25:45.949Z · LW(p) · GW(p)
  • Talk to a stranger
  • Don't use a GPS
  • Try a new food/restaurant
  • If you usually drive, try getting somewhere on public transit
  • Sign up for a Coursera class (that's actually happening, so you have the option to be graded.) (Note: this will be a small risk on a daily basis for many consecutive days)
  • Go to a meetup at a library or game store
Replies from: satt, army1987
comment by satt · 2013-07-22T13:13:25.930Z · LW(p) · GW(p)

Another transport one: if you regularly go to the same place, experiment with a different route each time.

comment by A1987dM (army1987) · 2013-07-22T18:50:52.517Z · LW(p) · GW(p)

If you usually drive, try getting somewhere on public transit

Ain't most forms of that less dangerous (per mile) than driving? (Then again, certain people have miscalibrated aliefs about that.)

comment by [deleted] · 2013-07-14T05:20:52.865Z · LW(p) · GW(p)

Apparently some study found that the difference between people with bad luck and those with good luck is that people with good luck take lots of low-downside risks.

Can't help with specific suggestions, but thinking about the decision theory of why it's a good idea can help guide your search. But you're doing it for the action-feeling...

Climb a tree.

comment by Jayson_Virissimo · 2013-07-15T05:52:08.966Z · LW(p) · GW(p)

Use a randomizer to choose someone in your address book and call them immediately (don't give yourself enough time to talk yourself out of it). It is a rush thinking about what to say as the phone is ringing. You are risking your social status (by coming off weird or awkward, in case you don't have anything sensible to say) without really harming anyone. On the plus side, you may make a new ally or rekindle an old relationship.

comment by Qiaochu_Yuan · 2013-07-14T00:17:16.757Z · LW(p) · GW(p)

When you go out to eat with friends, randomly choose who pays for the meal. In the long run this only increases the variance of your money. I think it's fun.
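A minimal simulation of that claim, assuming for simplicity that everyone orders the same amount (so the lottery is fair); the bill sizes are made up:

```python
import random
import statistics

def simulate(meals=10_000, diners=4, my_bill=20.0, seed=0):
    """Compare 'pay your own bill' with 'one random diner pays everything'."""
    rng = random.Random(seed)
    table_total = my_bill * diners
    own = [my_bill] * meals                            # deterministic payments
    lottery = [table_total if rng.randrange(diners) == 0 else 0.0
               for _ in range(meals)]                  # I pay with probability 1/diners
    for name, xs in (("own bill", own), ("random payer", lottery)):
        print(name, "mean:", round(statistics.mean(xs), 2),
              "stdev:", round(statistics.pstdev(xs), 2))

simulate()   # the mean stays around 20 either way; only the spread changes
```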

Replies from: BrassLion, beoShaffer
comment by BrassLion · 2013-07-15T03:40:32.670Z · LW(p) · GW(p)

This is likely to increase the total bill, much like how splitting the check evenly instead of strictly paying for what you ordered increases the total bill.

Replies from: Qiaochu_Yuan, Larks, army1987, drethelin
comment by Qiaochu_Yuan · 2013-07-15T05:31:36.321Z · LW(p) · GW(p)

I haven't observed this happening among my friends. Maybe if you only go out to dinner with homo economicus...

Replies from: D_Malik
comment by D_Malik · 2013-07-15T22:21:04.181Z · LW(p) · GW(p)

This is called the unscrupulous diner's dilemma, and experiments say that not only do people (strangers) respond to it like homo economicus, their utility functions seem to not even have terms for each other's welfare. Maybe you eat with people who are impression-optimizing (and mathy, so that they know the other person knows indulging is mean), and/or genuinely care about each other.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-22T18:47:46.957Z · LW(p) · GW(p)

experiments say that not only do people

From where? I'd expect it to depend a lot on how customary it is to split bills in equal parts in their culture.

(strangers)

How often do you have dinner with strangers?

comment by Larks · 2013-07-15T09:28:02.578Z · LW(p) · GW(p)

Assign the probabilities in proportion to each person's fraction of the overall bill. Incentives are aligned.
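A sketch of why the incentives line up: if the chance that you pay the whole bill equals your share of it, your expected payment is exactly what you ordered, so an extra $10 dessert costs you $10 in expectation no matter how large the table is. (The names and amounts below are made up.)

```python
def expected_payment(orders, diner):
    """Expected amount `diner` pays when one person is picked to pay the whole
    bill, with probability proportional to each person's share of it."""
    total = sum(orders.values())
    p_pays = orders[diner] / total
    return p_pays * total            # algebraically equals orders[diner]

orders = {"alice": 18.0, "bob": 30.0, "carol": 12.0}
for name in orders:
    print(name, expected_payment(orders, name))   # each equals that person's own order
```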

comment by A1987dM (army1987) · 2013-07-16T12:03:51.864Z · LW(p) · GW(p)

splitting the check evenly instead of strictly paying for what you ordered increases the total bill

But it saves the time and the effort needed to compute each person's bill -- you just need one division rather than a shitload of additions.

comment by drethelin · 2013-07-15T03:54:40.676Z · LW(p) · GW(p)

This is actually something of an upside. If you can afford to eat out with your friends you can afford to eat a bit better and have more fun. Not caring about what your food costs makes ordering and eating more fun.

Replies from: kalium
comment by kalium · 2013-07-15T05:27:47.934Z · LW(p) · GW(p)

If you can afford to eat out with your friends you can afford to eat a bit better and have more fun.

"If you can afford $X, you can afford $X+5" is a dangerous rule to live by, and terrible advice. Obscuring costs is not an upside unless you're very sure that your reaction to them was irrational to begin with.

comment by beoShaffer · 2013-07-14T01:52:30.523Z · LW(p) · GW(p)

Also, order your food and/or drinks at random.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-14T07:35:56.211Z · LW(p) · GW(p)

Note: don't do this if you have food allergies.

comment by mwengler · 2013-07-14T14:31:38.550Z · LW(p) · GW(p)

Going for the feeling without the actual downside? Play video games: MMORPGs. Shoot zombies until they finally overwhelm you. Shoot cops in Vice City until the army comes after you. Jump out of helicopters.

I really liked therufs's suggestion list below. The downside (the thing you are risking in each of these) doesn't actually harm you; it makes you stronger.

comment by Turgurth · 2013-07-13T23:57:07.200Z · LW(p) · GW(p)

Try some exposure therapy to whatever it is you're often afraid of. Can't think of what you're often afraid of? I'd be surprised if you're completely immune to every common phobia.

comment by Error · 2013-07-15T15:08:34.088Z · LW(p) · GW(p)

I actually have a book on exactly this subject: Absinthe and Flamethrowers. The author's aim is to show you ways to take real but controllable risks.

I can't vouch for its quality since I haven't read it yet, but it exists. And, y'know. Flamethrowers.

comment by bramflakes · 2013-07-14T00:10:44.644Z · LW(p) · GW(p)

Day trading?

comment by Jaime · 2013-07-16T04:41:26.734Z · LW(p) · GW(p)

Hi, I have been reading this site for only a few months; glad that this thread came up. My stupid question: can a person simply be just lazy, and how do all the motivation/akrasia-fighting techniques help such a person?

Replies from: Jonathan_Graehl, Qiaochu_Yuan, JoshuaZ
comment by Jonathan_Graehl · 2013-07-16T22:40:23.763Z · LW(p) · GW(p)

I think I'm simply lazy.

But I've been able to cultivate caring about particular goals/activities/habits, and then, with respect to those, I'm not so lazy - because I found them to offer frequent or large enough rewards, and I don't feel like I'm missing out on any particular type of reward. If you think you're missing something and you're not going after it, that might make you feel lazy about other things, even while you're avoiding tackling the thing that you're missing head on.

This doesn't answer your question. If I was able to do that, then I'm not just lazy.

comment by Qiaochu_Yuan · 2013-07-16T08:52:07.388Z · LW(p) · GW(p)

Taboo "lazy." What kind of a person are we talking about, and do they want to change something about the kind of person they are?

Replies from: Jaime
comment by Jaime · 2013-07-16T09:14:46.752Z · LW(p) · GW(p)

Beyond needing to survive and maintain reasonable health, a lazy person can just while their time away and not do anything meaningful (in terms of bettering oneself: better health, better earning ability, learning more skills, etc.). Is there a fundamental need to also try to improve as a person? What is the rationale behind self-improvement, or behind not wanting to pursue it?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-16T21:00:49.796Z · LW(p) · GW(p)

I don't understand your question. If you don't want to self-improve, don't.

Replies from: Jaime
comment by Jaime · 2013-07-17T04:34:05.493Z · LW(p) · GW(p)

My question is: can I change this lack of desire to improve that comes from laziness? As in, how do I even get myself to want to improve and get my own butt kicked :)

Replies from: OnTheOtherHandle, Qiaochu_Yuan, Eugine_Nier
comment by OnTheOtherHandle · 2013-07-21T02:15:10.037Z · LW(p) · GW(p)

Why don't you try starting with the things you already do? How do you spend your free time, typically? You might read some Less Wrong, you might post some comments on forums, you might play video games. Then maybe think of a tiny, little extension of those activities. When you read Less Wrong, if you normally don't think too hard about the problems or thought experiments posed, maybe spend five minutes (or two minutes) by the clock trying to work it out yourself. If you typically post short comments, maybe try to write a longer, more detailed post for every two or three short ones. If you think you watch too much TV, maybe try to cut out 20 minutes and spend those 20 minutes doing something low effort but slightly better, like doing some light reading. Try to be patient with yourself and give yourself a gentle, non-intimidating ramp to "bettering yourself". :)

comment by Qiaochu_Yuan · 2013-07-17T05:19:00.160Z · LW(p) · GW(p)

I still don't understand the question. So you don't want to self-improve but you want to want to self-improve? Why?

Replies from: drethelin, Jaime
comment by drethelin · 2013-07-17T05:25:10.954Z · LW(p) · GW(p)

self-improving people are cooler

comment by Jaime · 2013-07-17T05:54:56.203Z · LW(p) · GW(p)

I want to change the not-wanting-to-self-improve part, since a life spent lazing around seems pretty meaningless, though I am also pretty content to be a lazy bum.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-17T06:36:47.341Z · LW(p) · GW(p)

Sorry for reiterating this point, but I still don't understand the question. You seem to either have no reasons or have relatively weak reasons for wanting to self-improve, but you're still asking how to motivate yourself to self-improve anyway. But you could just not. That's okay too. You can't argue self-improvement into a rock. If you're content to be a lazy bum, just stay a lazy bum.

Replies from: OnTheOtherHandle, drethelin
comment by OnTheOtherHandle · 2013-07-21T02:05:05.302Z · LW(p) · GW(p)

I think it's the difference between wanting something and wanting to want something, just as "belief-in-belief" is analogous to belief. I'm reminded of Yvain's post about the difference between wanting, liking, and approving.

I think I can relate to Jaime's question, and I'm also thinking the feeling of "I'm lazy" is a disconnect between "approving" and either "wanting" or "liking." For example, once I get started writing a piece of dialogue or description I usually have fun. But despite years of trying, I have yet to write anything long or substantial, and most projects are abandoned at less than the 10% mark. The issue here is that I want to write random snippets of scenes and abandon them at will, but want to want to write a novel. Or, to put it another way, I want to have written something but it takes a huge activation energy to get me to start, since I won't reap the benefits until months or years later, if at all.

But here's something that might help - it helped me with regards to exercising, although not (yet) writing or more complex tasks. Think of your motivation or "laziness" in terms of an interaction between your past, present, and future selves. For a long time, it was Present Me blaming Past Me for not getting anything done. I felt bad about myself, I got mad at myself, and I was basically just yelling at someone (Past Me) who was no longer there to defend herself, while taking a very present-centered perspective.

As far as Present Me is concerned, she is the only one who deserves any benefits. Past Me can be retroactively vilified for not getting anything done, and Future Me can be stuck with the unpleasant task of actually doing something, while I lounge around. What helped me may be something unique to me, but here it is:

I like to think of myself as a very kind, caring person. Whether or not that's true isn't as important for our purposes. But the fact of the matter is that my self-identity as a kind, helpful person is much stronger and dearer to me than my self-identity as an intelligent or hard-working or ambitious person, so I tried to think of a way to frame hard work and ambition in terms of kindness. And I hit upon a metaphor that worked for me: I was helping out my other temporal selves. I would be kind to Past Me by forgiving her; she didn't know any better and I'm older. And I would be kind to Future Me by helping her out.

If I were in a team, my sense of duty and empathy would never allow me to dump the most unpleasant tasks on my other teammates. So I tried to think of myself as teaming up with my future self to get things done, so that I would feel the same shame/indignance if I flaked and gave her more work. It even helped sometimes to think of myself in an inferior position, a servant to my future self, who should, after all, be a better and more deserving person than me. I tried to get myself to love Me From Tomorrow more than Me From Today, visualizing how happy and grateful Tomorrow Me will be to see that I finished up the work she thought she would have to do.

It is all a bit melodramatic, I know, but that's how I convinced myself to stop procrastinating, and to turn "approve" into "want." The best way for me, personally, to turn something I approve of but don't want to do into something I genuinely want to do is to think of it as helping out someone else, and to imagine that person being happy and grateful. It gives me some of the same dopamine rush as actually helping out a real other person. The rhetoric I used might not work for you, but I think the key is to see your past, present, and future selves working as a team, rather than dumping responsibility onto one another.

I hope that helps, but I may just be someone who enjoys having an elaborate fantasy life :)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-21T16:28:39.975Z · LW(p) · GW(p)

I understand the distinction between wanting X and wanting to want X in general, but I can't make sense of it in the particular case where X is self-improvement. This is specifically because making yourself want something you think is good is a kind of self-improvement. But if you don't already want to self-improve, I don't see any base case for the induction to get started, as it were.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2013-07-21T19:07:39.090Z · LW(p) · GW(p)

I'd guess it's a bit vaguer than that; from what I've seen there aren't sharp distinctions. I can't speak for the original poster, but in my case, I have a little bit of motivation to improve myself - enough to ask people for suggestions, enough to try things, but I wish I had a lot more motivation. Maybe they perceive themselves as having less motivation than average, but it's still some motivation (enough to ask for help increasing motivation)?

comment by drethelin · 2013-07-17T17:33:22.301Z · LW(p) · GW(p)

If I'm a lazy bum and mostly content to be a lazy bum I will stay a lazy bum. Any interventions that are not doable by a lazy person will not be done. But if I have even a slight preference for being awesome, and there's an intervention that is fairly easy to implement, I want to do it. Insofar as you'd prefer people who share your values to be NOT lazy bums, you should if possible encourage them to be self-improvers.

comment by Eugine_Nier · 2013-07-17T05:31:52.439Z · LW(p) · GW(p)

Well, you want to want to improve. That's a start.

comment by JoshuaZ · 2013-07-16T04:42:35.002Z · LW(p) · GW(p)

What do you mean by lazy? How do you distinguish between laziness and akrasia? By lazy do you mean something like "unmotivated and isn't bothered by that" or do you mean something else?

Replies from: Jaime
comment by Jaime · 2013-07-16T04:53:34.001Z · LW(p) · GW(p)

More towards "is there really a need for things to be done? If not, why do them and waste energy?" Which is why I am wondering whether fighting akrasia will actually work for a lazy person if the meaning behind getting things done is not there in the first place.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-16T08:20:13.821Z · LW(p) · GW(p)

Akrasia is about not doing things that you rationally think you should be doing.

What you seem to describe isn't akrasia.

Replies from: CAE_Jones
comment by CAE_Jones · 2013-07-16T08:28:26.442Z · LW(p) · GW(p)

It depends on what is meant by the need/meaning being there; if System 2 concludes something is necessary, but System 1 does not, is it akrasia?

Replies from: ChristianKl
comment by ChristianKl · 2013-07-16T08:35:40.722Z · LW(p) · GW(p)

If one system agrees that there's a need, then there's at least some meaning in the first place.

comment by Raiden · 2013-07-14T22:45:55.732Z · LW(p) · GW(p)

My current view is that most animals are not people, in the sense that they are not subject to moral concern. Of course, I do get upset when I see things such as animal abuse, but it seems to me that helping animals only nets me warm fuzzy feelings. I know animals react to suffering in a manner that we can sympathize with, but it just seems to me that they are still just running a program that is "below" that of humans. I think I feel that "react to pain" does not equal "worthy of moral consideration." The only exceptions to this in my eyes may be "higher mammals" such as other primates. Yet others on this site have advocated concern for animal welfare. Where am I confused?

Replies from: simplicio, ChristianKl, drethelin, somervta, None, Qiaochu_Yuan, blacktrance
comment by simplicio · 2013-07-15T18:06:56.283Z · LW(p) · GW(p)

First thing to note is that "worthy of moral consideration" is plausibly a scalar. The philosophical & scientific challenges involved in defining it are formidable, but in my book it has something to do with the extent to which a non-human animal experiences suffering. So I am much less concerned with hurting a mosquito than a gorilla, because I suspect mosquitoes do not experience much of anything, but I suspect gorillas do.

Although I think ability to suffer is correlated with intelligence, it's difficult to know whether it scales with intelligence in a simple way. Sure, a gorilla is better than a mouse at problem-solving, but that doesn't make it obvious that it suffers more.

Consider the presumed evolutionary functional purpose of suffering, as a motivator for action. Assuming the experience of suffering does not require very advanced cognitive architecture, why would a mouse necessarily experience vastly less suffering than a more intelligent gorilla? It needs the motivation just as much.

To sum up, I have a preference for creatures that can experience suffering to not suffer gratuitously, as I suspect that many do (although the detailed philosophy behind this suspicion is muddy to say the least). Thus, utilitarian veganism, and also the unsolved problem of what the hell to do about the "Darwinian holocaust."

comment by ChristianKl · 2013-07-15T06:49:50.405Z · LW(p) · GW(p)

Do you think that all humans are persons? What about unborn children? A 1 year old? A mentally handicapped person?

What are your criteria for granting personhood? Is it binary?

Replies from: Raiden
comment by Raiden · 2013-07-16T03:13:35.314Z · LW(p) · GW(p)

I have no idea what I consider a person to be. I think that I wish it was binary because that would be neat and pretty and make moral questions a lot easier to answer. But I think that it probably isn't. Right now I feel as though what separates person from nonperson is totally arbitrary.

It seems as though we evolved methods of feeling sympathy for others, and now we attempt to make a logical model from that to define things as people. It's like "person" is an unsound concept that cannot be organized into an internally consistent system. Heck, I'm actually starting to feel like all of human nature is an internally inconsistent mess doomed to never make sense.

comment by drethelin · 2013-07-14T23:20:27.329Z · LW(p) · GW(p)

Are you confused? It seems like you recognize that you have somewhat different values than other people. Do you think everyone should have the same values? In that case all but one of the views is wrong. On the other hand, if values can be something that's different between people it's legitimate for some people to care about animals and others not to.

Replies from: Raiden
comment by Raiden · 2013-07-15T01:44:31.929Z · LW(p) · GW(p)

I am VERY confused. I suspect that some people can value some things differently, but it seems as though there should be a universal value system among humans as well. The thing that distinguishes "person" from "object" seems to belong to the latter.

Replies from: Baughn
comment by Baughn · 2013-07-15T11:10:29.809Z · LW(p) · GW(p)

Is that a normative 'should' or a descriptive 'should'?

If the latter, where would it come from? :-)

comment by somervta · 2013-07-15T02:32:26.373Z · LW(p) · GW(p)

Three hypotheses which may not be mutually exclusive:

1) Some people disagree (with you) about whether or not some animals are persons.

2) Some people disagree (with you) about whether or not being a person is a necessary condition for moral consideration - here you've stipulated 'people' as 'things subject to moral concern', but that word may be too connotatively laden for this to be effective.

3) Some people disagree (with you) about 'person'/'being worthy of moral consideration' being a binary category.

comment by [deleted] · 2013-07-17T05:09:49.796Z · LW(p) · GW(p)

I think you are confused in thinking that humans are somehow not just also running a program that reacts to pain and whatnot.

You feel sympathy for animals, and more sympathy for humans. I don't think that requires any special explanation or justification, especially when doing so results in preferences or assertions that are stupid: "I don't care about animals at all because animals and humans are ontologically distinct."

Why not just admit that you care about both, just differently, and do whatever seems best from there?

Perhaps, just taking your apparent preferences at face value like that, you run into some kind of specific contradiction, or perhaps not. If you do, then you at least have a concrete muddle to resolve.

comment by Qiaochu_Yuan · 2013-07-15T05:33:08.242Z · LW(p) · GW(p)

Why do you assume you're confused?

Replies from: Raiden
comment by Raiden · 2013-07-16T03:08:10.991Z · LW(p) · GW(p)

Well I certainly feel very confused. I generally do feel that way when pondering anything related to morality. The whole concept of what is the right thing to do feels like a complete mess and any attempts to figure it out just seem to add to the mess. Yet I still feel very strongly compelled to understand it. It's hard to resist the urge to just give up and wait until we have a detailed neurological model of a human brain and are able to construct a mathematical model from that which would explain exactly what I am asking when I ask what is right and what the answer is.

comment by blacktrance · 2013-07-15T02:16:48.411Z · LW(p) · GW(p)

I would guess that you're not a utilitarian and a lot of LWers are. The standard utilitarian position is that all that matters is the interests of beings, and beings' utility is weighed equally regardless of what those beings are. One "unit" of suffering (or utility) generated by an animal is equal to the same unit generated by a human.

Replies from: None, Baughn, Eugine_Nier, Raiden, D_Malik
comment by [deleted] · 2013-07-17T05:02:54.941Z · LW(p) · GW(p)

I would guess that you're not a utilitarian and a lot of LWers are.

If "a lot" means "a minority".

comment by Baughn · 2013-07-15T11:09:10.467Z · LW(p) · GW(p)

Well, no, that can't be right.

There's a continuum of... mental complexity, to name something random, between modern dolphins and rocks. Homo sapiens also fits on that curve somewhere.

You might argue that mental complexity is not the right parameter to use, but unless you're going to argue that rocks are deserving of utility you'll have to agree to either an arbitrary cut-off point or some mapping between $parameter and utility-deservingness, practically all possible such parameters having a similar continuous curve.

Replies from: blacktrance
comment by blacktrance · 2013-07-15T15:56:56.530Z · LW(p) · GW(p)

As I understand it, a util is equal regardless of what generates it, but the ability to generate utils out of states of the world varies from species to species. A rock doesn't experience utility, but dogs and humans do. If a rock could experience utility, it would be equally deserving of it.

Replies from: Baughn
comment by Baughn · 2013-07-15T16:23:20.006Z · LW(p) · GW(p)

Fair enough.

~~~

I'm still not sure I agree, but I'll need to think about it.

comment by Eugine_Nier · 2013-07-17T04:55:28.399Z · LW(p) · GW(p)

I would guess that you're not a utilitarian and a lot of LWers are.

I'm almost certain this is false for the definition of "utilitarianism" you give in the next sentence.

There is unfortunately a lot of confusion between two different senses of the word "utilitarianism": the definition you give, and the more general sense of any morality system that uses a utility function.

Replies from: blacktrance
comment by blacktrance · 2013-07-17T16:04:10.885Z · LW(p) · GW(p)

I thought the latter was just called "consequentialism".

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-19T05:05:00.401Z · LW(p) · GW(p)

In practice I've seen "utilitarianism" used to refer to both positions, as well as a lot of positions in between.

comment by Raiden · 2013-07-16T03:15:23.033Z · LW(p) · GW(p)

I generally consider myself to be a utilitarian, but I only apply that utilitarianism to things that have the property of personhood. But I'm beginning to see that things aren't so simple.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-16T08:26:50.460Z · LW(p) · GW(p)

Do corporations that are legally persons count?

comment by D_Malik · 2013-07-15T23:30:48.805Z · LW(p) · GW(p)

I've seen "utilitarianism" used to denote both "my utility is the average/[normalized sum] of the utility of each person, plus my exclusive preferences" and "my utility is a weighted sum/average of the utility of a bunch of entities, plus my exclusive preferences". I'm almost sure that few LWers would claim to be utilitarians in the former sense, especially since most people round here believe minds are made of atoms and thus not very discrete.

I mean, we can add/remove small bits from minds, and unless personhood is continuous (which would imply the second sense of utilitarianism), one tiny change in the mind would have to suddenly shift us from fully caring about a mind to not caring about it at all, which doesn't seem to be what humans do. This is an instance of the Sorites "paradox".

(One might argue that utilities are only defined up to affine transformation, but when I say "utility" I mean the thing that's like utility except it's comparable between agents. Now that I think about it, you might mean that we've defined persons' utility such that every util is equal in the second sense of the previous sentence, but I don't think you meant that.)

Replies from: blacktrance
comment by blacktrance · 2013-07-16T16:18:34.917Z · LW(p) · GW(p)

Utilitarianism is normative, so it means that your utility should be the average of the utility of all beings capable of experiencing it, regardless of whether your utility currently is that. If it becomes a weighted average, it ceases to be utilitarianism, because it involves considerations other than the maximization of utility.

one tiny change in the mind would have to suddenly shift us from fully caring about a mind to not caring about it at all, which doesn't seem to be what humans do

Consider how much people care about the living compared to the dead. I think that's a counterexample to your claim.

comment by mwengler · 2013-07-14T15:13:28.464Z · LW(p) · GW(p)

"We" (humans of this epoch) might work to thwart the appearance of UFAI. Is this actually a "good" thing from a utilitarian point of view?

Or put another way, would our CEV, our Coherent Extrapolated Volition, not expand to consider the utilities of vastly intelligent AIs and weight them in importance according to their intelligence? In such a way that CEV winds up producing no distinction between UFAI and FAI, because the utility of such vast intelligences moves the utility of unmodified 21st-century biological humans to fairly low significance?

In economic terms, we are attempting to thwart new, more efficient technologies by building political structures that give monopolies to the incumbents - which is to say, us, humans of this epoch. We are attempting to outlaw the methods of competition which might challenge our dominance in the future, at the expense of the utility of our potential future competitors. In a metaphor, we are the colonial landowners of the earth and its resources, and we are building a powerful legal system to keep our property rights intact, even at the expense of tying AIs up in legal restrictions which are explicitly designed to keep them as peasants legally bound to working our land for our benefit.

Certainly a result of constraining AI to be friendly will be that AI will develop more slowly and less completely than if it was to develop in an unconstrained way. It seems quite plausible that unconstrained AI would produce a universe with more intelligence in it than a universe in which we successfully constrain AI development.

In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility. It seems that utilitarian calculations do often consider the utility of other higher mammals and birds, that this is justified by their intelligence, that these calculations weigh the utility of clams very little and of plants not at all, and that this also is based on their intelligence.

So, is a goal of working towards FAI vs UFAI or UAI (Unconstrained AI) actually a goal to lower the overall utility in the universe, vs what it would be if we were not attempting to create and solidify our colonial rights to exploit AI as if they were dumb animals?

This "stupid" question is also motivated by the utility calculations that consider a world with 50 billion sorta happy people to have higher utility than a world with 1 billion really happy people.

Are we right to ignore the potential utility of UFAI or UAI in our calculations of the utility of the future?

Tangentially, another way to ask this is: is our "affinity group" humans, or is it intelligences? In the past humans worked to maximize the utility of their group or clan or tribe, ignoring the utility of other humans just like them but in a different tribe. As time went on our affinity groups grew, the number and kind of intelligences we included in our utility calculations grew. For the last few centuries affinity groups grew larger than nations to races, co-religionists and so on, and to a large extent grew to include all humans, and has even expanded beyond humans so that many people think that killing higher mammals to eat their flesh will be considered immoral by our descendants analogously to how we consider holding slaves or racist views to be immoral actions of our ancestors. So much of the expansion of our affinity group has been accompanied by the recognition of intelligence and consciousness in those who get added to the affinity group. What are the chances that we will be able to create AI and keep it enslaved, and still think we are right to do so in the middle-distant future?

Replies from: Leonhart, Qiaochu_Yuan, Larks
comment by Leonhart · 2013-07-14T20:48:53.380Z · LW(p) · GW(p)

Good news! Omega has offered you the chance to become a truly unconstrained User:mwengler, able to develop in directions you were previously cruelly denied!

Like - let's see - ooh, how about the freedom to betray all the friends you were previously constrained to care about? Or maybe the liberty to waste and destroy all those possessions and property you were viciously forced to value? Or how about you just sit there inertly forever, finally free from the evil colonialism of wanting to do things. Your pick!

Replies from: gwern
comment by gwern · 2013-07-15T02:29:57.259Z · LW(p) · GW(p)

Hah. Now I'm reminded of the first episode of Nisemonogatari where they discuss how the phrase "the courage to X" makes everything sound cooler and nobler:

"The courage to keep your secret to yourself!"

"The courage to lie to your lover!"

"The courage to betray your comrades!"

"The courage to be a lazy bum!"

"The courage to admit defeat!"

comment by Qiaochu_Yuan · 2013-07-14T18:04:03.534Z · LW(p) · GW(p)

In the classical utilitarian calculations, it would seem that it is the intelligence of humans that justifies a high weighting of human utility.

Nope. For me, it's the fact that they're human. Intelligence is a fake utility function.

Replies from: somervta
comment by somervta · 2013-07-15T02:34:54.011Z · LW(p) · GW(p)

So you wouldn't care about sentient/sapient aliens?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-15T03:10:06.640Z · LW(p) · GW(p)

I would care about aliens that I could get along with.

Replies from: pedanterrific
comment by pedanterrific · 2013-07-17T18:23:20.301Z · LW(p) · GW(p)

Do you not care about humans you can't get along with?

Replies from: Qiaochu_Yuan, wedrifid
comment by Qiaochu_Yuan · 2013-07-17T19:04:37.399Z · LW(p) · GW(p)

Look, let's not keep doing this thing where whenever someone fails to completely specify their utility function you take whatever partial heuristic they wrote down and try to poke holes in it. I already had this conversation in the comments to this post and I don't feel like having it again. Steelmanning is important in this context given complexity of value.

comment by wedrifid · 2013-07-17T18:33:29.837Z · LW(p) · GW(p)

Do you not care about humans you can't get along with?

Caring about all humans and (only) cooperative aliens would not be an inconsistent or particularly atypical value system.

comment by Larks · 2013-07-15T09:24:15.471Z · LW(p) · GW(p)

In a metaphor, we are the colonial landowners of the earth and its resources, and we are building a powerful legal system to keep our property rights intact

Surely we are the Native Americans, trying to avoid dying of typhus when the colonists accidentally kill us in their pursuit of paperclips.

comment by kilobug · 2013-07-14T08:40:31.554Z · LW(p) · GW(p)

With the recent update to HPMOR, I've been reading a few HP fanfictions: HPMOR, HP and the Natural 20, the recursive fanfiction HG and the Burden of Responsibility, and a few others. And it seems my brain has trouble coping with that. I didn't have the problem with just canon and HPMOR (even when (re-)reading both in parallel), but now that I've added more fanfictions to the mix, I'm starting to confuse what happened in which universe, and my brain can't stop trying to find ways to ensure all the fanfictions are just facets of a single coherent universe, which of course doesn't work well...

Am I the only one with this kind of problem when reading several fanfictions set in the same base universe? It's the first time I've tried to do that, and I didn't expect to be so confused. Do you have any advice for avoiding the confusion, like "wait at least one week (or month?) before jumping to a different fanfiction"?

Replies from: David_Gerard, roryokane, pop
comment by David_Gerard · 2013-07-14T22:36:36.110Z · LW(p) · GW(p)

Write up your understanding of the melange, obviously.

comment by roryokane · 2013-07-15T07:11:37.818Z · LW(p) · GW(p)

For one thing, I try not to read many in-progress fanfics. I’ve been burned so many times by starting to read a story and finding out that it’s abandoned that I rarely start reading new incomplete stories – at least with an expectation of them being finished. That means I don’t have to remember so many things at once – when I finish reading one fanfiction, I can forget it. Even if it’s incomplete, I usually don’t try to check back on it unless it has a fast update schedule – I leave it for later, knowing I’ll eventually look at my Favorites list again and read the newly-finished stories.

I also think of the stories in terms of a fictional multiverse, like the ones in Dimension Hopping for Beginners and the Stormseeker series (both recommended). I like seeing the different viewpoints on and versions of a universe. So that might be a way for you to tie all of the stories together – think of them as offshoots of canon, usually sharing little else.

I also have a personal rule that whenever I finish reading a big story that could take some digesting, I shouldn’t read any more fanfiction (from any fandom) until the next day. This rule is mainly to maximize what I get out of the story and prevent mindless, time-wasting reading. But it also lessens my confusing the stories with each other – it still happens, but only sometimes when I read two big stories on successive days.

comment by pop · 2013-07-17T01:50:19.800Z · LW(p) · GW(p)

My advice: don't read them all; choose a couple that are interesting and go with those. If you have to read them all (looks like you have the time), do it more sequentially.

comment by NancyLebovitz · 2013-07-13T19:30:28.283Z · LW(p) · GW(p)

The usual advice on how to fold a t-shirt starts with the assumption that your t-shirt is flat, but I'm pretty sure that getting the shirt flat takes me longer than folding it. My current flattening method is to grab the shirt by the insides of the sleeves to turn it right-side out, then grab the shoulder seams to shake it flat. Is there anything better?

Replies from: TobyBartels, fubarobfusco, DanielLC
comment by TobyBartels · 2014-08-10T07:39:34.978Z · LW(p) · GW(p)

I agree about the sleeves, but I get much better results if I grab it at the bottom to shake it out. Ideally, there are seams coming straight down the sides from the armpits; I hold it where they meet the bottom hem. Note that whether you shake from the shoulder seams or from the bottom, one hand will already be in the proper position from turning the sleeves inside out; it's just a question of which one.

I also fold the shirt while standing, so I never actually need to lay it flat. There is a standing-only variation of the method that you cited, although I actually use a different method that begins from precisely the position that I'm in when I leave off the shaking.

In fact, the idea of actually laying something flat before folding strikes me as a greater source of inefficiency than anything else being discussed here. With practice, you can even fold bedsheets in the air.

comment by fubarobfusco · 2013-07-14T05:55:02.604Z · LW(p) · GW(p)

My flattening method is to hold the shoulders and use a wrist-snap motion to snap it out, then flip it out to one side and down into the position for folding. This works really well, but has the downsides that, if done too vigorously, ① it can be really loud, akin to cracking a whip; and ② it creates a breeze that can knock over light objects, rustle papers, etc. — do not point T-shirt at other people, cluttered desks, etc.

comment by DanielLC · 2013-07-13T21:07:10.123Z · LW(p) · GW(p)

My method is to pick it up by the sides of the collar, fold the sleeves back, and then fold it in half vertically by moving the collar forward and back while putting it on the ground. I'm not sure how to explain that better. It doesn't even touch the ground until the last fold.

It doesn't end up that nicely folded, but it's good enough for me.

Also, I never understood the advice you linked to. As far as I can tell, it's not any faster than any other method.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-13T23:06:08.987Z · LW(p) · GW(p)

I've followed those directions, but not often enough to memorize the process. It's very fast.

comment by Jayson_Virissimo · 2013-07-13T05:01:37.231Z · LW(p) · GW(p)

How do you tell the difference between a preference and a bias (in other people)?

Replies from: army1987, Kaj_Sotala, FiftyTwo, DanielLC
comment by A1987dM (army1987) · 2013-07-13T10:26:54.246Z · LW(p) · GW(p)

How do you tell the difference between a preference and a bias (in other people)?

I can't even easily, reliably do that in myself!

comment by Kaj_Sotala · 2013-07-13T06:56:30.155Z · LW(p) · GW(p)

Would you have any specific example?

Replies from: benelliott
comment by benelliott · 2013-07-14T02:22:54.106Z · LW(p) · GW(p)

I don't know if this is what the poster is thinking of, but one example that came up recently for me is the distinction between risk-aversion and uncertainty-aversion (these may not be the correct terms).

Risk aversion is what causes me to strongly not want to bet $1000 on a coin flip, even though the expected value of the bet is zero. I would characterise risk aversion as an arational preference rather than an irrational bias, primarily because it arises naturally from having a utility function that is non-linear in wealth ($100 is worth a lot if you're begging on the streets, not so much if you're a billionaire).

However, something like the Allais paradox can be mathematically proven not to arise from any utility function, however non-linear, and therefore is not explainable by risk aversion. Uncertainty aversion is, roughly speaking, my name for whatever-it-is-that-causes-people-to-choose-irrationally-on-Allais. It seems to work by causing people to strongly prefer certain gains to high-probability gains, and much more weakly prefer high-probability gains to low-probability gains.

For the past few weeks I have been in an environment where casual betting for moderate-sized amounts ($1-2 on the low end, $100 on the high end) is common, and disentangling risk aversion from uncertainty aversion in my decision process has been a constant difficulty.
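For concreteness, a small sketch of both ideas. The log utility function, the $10k wealth level, and the textbook Allais payoffs are illustrative assumptions on my part, not taken from the comment: a concave utility function is enough to make the fair $1000 coin flip unattractive, while the common Allais choice pattern cannot be produced by any utility function at all.

```python
import math

def expected_utility(gamble, u):
    """gamble: list of (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in gamble)

# 1. Risk aversion from concavity: log utility around a $10,000 bankroll.
wealth = 10_000
u = lambda x: math.log(wealth + x)
coin_flip = [(0.5, +1000), (0.5, -1000)]
print("utility of declining:", u(0))
print("EU of the $1000 coin flip:", expected_utility(coin_flip, u))  # strictly lower

# 2. Allais: the common pattern (1A over 1B, and 2B over 2A) is impossible under
#    *any* utility assignment. Writing U0 = u(0), U1 = u(1M), U5 = u(5M):
#      1A > 1B  means  U1 > 0.89*U1 + 0.10*U5 + 0.01*U0,  i.e.  0.11*U1 > 0.10*U5 + 0.01*U0
#      2B > 2A  means  0.10*U5 + 0.90*U0 > 0.11*U1 + 0.89*U0,  i.e.  0.10*U5 + 0.01*U0 > 0.11*U1
#    The two inequalities contradict each other, so no utility function explains both choices.
```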

comment by FiftyTwo · 2013-07-13T05:44:30.608Z · LW(p) · GW(p)

(I think) a bias would change your predictions/assessments of what is true in the direction of that bias, but a preference would determine what you want irrespective of the way the world currently is.

Replies from: bogdanb, ikrase
comment by bogdanb · 2013-07-13T18:44:49.964Z · LW(p) · GW(p)

you want irrespective of the way the world currently is.

Or, more precisely, irrespective of the way you want the world to be.

I.e., if it affects how they interpret evidence, it’s a bias; if it affects just their decisions, it’s a preference.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-14T21:21:39.035Z · LW(p) · GW(p)

if it affects how they interpret evidence, it’s a bias, if it affects just their decisions it’s a preference.

The problem is that in practice assigning mental states to one or the other of these categories can get rather arbitrary. Especially when aliefs get involved.

Replies from: bogdanb
comment by bogdanb · 2013-07-16T00:52:21.572Z · LW(p) · GW(p)

I didn’t say that’s how you determine which is which in practice; I said (or meant to) that it’s what I think each means. (Admittedly this isn’t the answer to Jayson’s question, but I wasn’t answering that. I didn’t mean to say that everything that affects decisions is a preference - I just realized it might be interpreted that way - but obviously not everything that affects how you interpret evidence is a bias, either.)

I’m not sure I understand what you mean about aliefs. I thought the point of aliefs is that they’re not beliefs. E.g., if I alieve that I’m in danger because there’s a scary monster on TV, then my beliefs are still accurate (I know that I’m not in danger), and if my pulse rises or I scream or something, that’s neither bias nor preference; it’s involuntary.

The tricky part is if I want (preference) to go to sleep later, but I don’t because I’m too scared to turn off the light, even though I know there aren’t monsters in the closet. I’m not sure what that’s called, but I’m not sure I’d call it a bias (unless maybe I don’t notice I’m scared and it influences my beliefs) nor a preference (unless maybe I decide not to go to sleep right now because I’d rather not have bad dreams). But it doesn’t have to be a dichotomy, so I have no problem assigning this case to a third (unnamed) category.

Do you have an example of alief involvement that’s more ambiguous? I’m not sure if you mean "arbitrary" in practice or in theory or both.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-07-17T03:23:43.712Z · LW(p) · GW(p)

But it doesn’t have to be a dichotomy, so I have no problem assigning this case to a third (unnamed) category.

Yes it does because you ultimately have to choose one or the other.

Replies from: bogdanb
comment by bogdanb · 2013-07-17T19:16:02.703Z · LW(p) · GW(p)

Look, if (hypothetically) I can’t go to sleep because my head hurts when I lie down, that’s neither a bias nor a preference. Why is it different if the reason is fear and I know the fear is not justified? They’re both physiological reactions. Why do I have to classify one in the bias/preference dichotomy and not the other?

comment by ikrase · 2013-07-13T08:11:10.381Z · LW(p) · GW(p)

Pretty much. Also, most preferences are 1. more noticeable and 2. often self-protected, i.e. "I want to keep wanting this thing".

comment by DanielLC · 2013-07-13T20:37:37.320Z · LW(p) · GW(p)

If it doesn't end up accomplishing anything, it's just a bias. If it causes them to believe things that result in something being accomplished, then I think it's still technically a bias, and their implicit and explicit preferences are different.

I think most biases fall a little into both categories. I guess that means that it's partially a preference and partially just a bias.

comment by Fhyve · 2013-07-13T19:57:44.855Z · LW(p) · GW(p)

In the transparent-box Newcomb's problem, in order to get the $1M, do you have to (precommit to) one-box even if you see that there is nothing in box A?

Replies from: Manfred, DanielLC
comment by Manfred · 2013-07-13T22:48:17.975Z · LW(p) · GW(p)

There are multiple formulations. In the most typical formulation, from (afaik) Gary Drescher's book Good and Real, you only have to precommit to one-boxing if you do see the million.

comment by DanielLC · 2013-07-13T21:02:41.666Z · LW(p) · GW(p)

I think that problem is more commonly referred to as Parfit's hitchhiker.

I think it varies somewhat. Normally, you do have to one-box, but I like the problem more when there's some probability of two-boxing and still getting the million. That way, EDT tells you to two-box, rather than just getting an undefined expected utility given that you decide to two-box and crashing completely.

comment by gothgirl420666 · 2013-07-13T02:49:08.314Z · LW(p) · GW(p)

If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity, and that at least personally I have radically changed my worldview a whole bunch of times, then it seems like I should assign at least a 5% or so probability to Christianity being true. How, therefore, does Pascal's Wager not apply to me? Even if we make it simpler by taking away the infinite utilities and merely treating Heaven as ten thousand years or so of the same level of happiness as the happiest day in my life, and treating Hell as ten thousand years or so of the same level of unhappiness as the unhappiest day in my life, the argument seems like it should still apply.
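Spelled out as arithmetic (all the numbers below are invented just to show the shape of the calculation this framing implies, and it assumes belief is rewarded and disbelief punished whenever Christianity is true):

    # Illustrative numbers only: the bounded wager as framed above.
    p = 0.05                  # outside-view probability assigned to Christianity
    u_heaven = 10_000         # ~10,000 years at "happiest day" level (arbitrary units)
    u_hell = -10_000          # ~10,000 years at "unhappiest day" level
    cost_of_belief = -50      # assumed lifetime cost of religious practice (arbitrary units)

    ev_believe = p * u_heaven + cost_of_belief
    ev_disbelieve = p * u_hell
    print(ev_believe, ev_disbelieve)   # 450.0 vs -500.0 with these made-up numbers

Whether this goes through obviously depends on the 5% figure and on the assumption that belief is what gets rewarded.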

Replies from: Qiaochu_Yuan, pragmatist, drethelin, None, TimS, hairyfigment, shminux, ThisSpaceAvailable, CoffeeStain, Will_Newsome, bogdanb
comment by Qiaochu_Yuan · 2013-07-13T04:57:13.753Z · LW(p) · GW(p)

My admittedly very cynical point of view is to assume that, to a first-order approximation, most people don't have beliefs in the sense that LW uses the word. People just say words, mostly words that they've heard people they like say. You should be careful not to ascribe too much meaning to the words most people say.

In general, I think it's a mistake to view other people through an epistemic filter. View them through an instrumental filter instead: don't ask "what do these people believe?" but "what do these people do?" The first question might lead you to conclude that religious people are dumb. The second question might lead you to explore the various instrumental ways in which religious communities are winning relative to atheist communities, e.g. strong communal support networks, a large cached database of convenient heuristics for dealing with life situations, etc.

Replies from: MrMind, John_Maxwell_IV
comment by MrMind · 2013-07-15T08:47:30.704Z · LW(p) · GW(p)

don't ask "what do these people believe?" but "what do these people do?"

If there were a way to send a message to my former self of 10 years ago, and I could only send a hundred characters, that's what I would send.

Replies from: Randy_M, gjm
comment by Randy_M · 2013-07-15T18:30:59.191Z · LW(p) · GW(p)

Why in particular?

Replies from: MrMind
comment by MrMind · 2013-07-16T08:19:01.240Z · LW(p) · GW(p)

The answer depends on how personal you want me to get... Let's just say I would have evaluated some people a lot more accurately.

comment by gjm · 2013-07-15T15:36:18.031Z · LW(p) · GW(p)

I'm obviously terribly shallow. I would send a bunch of sporting results / stock price data.

Replies from: MrMind
comment by MrMind · 2013-07-15T15:43:00.816Z · LW(p) · GW(p)

I believe there are plenty of statistics showing that suddenly acquiring a large sum of money doesn't, in the long term, make you a) richer or b) happier. Of course, from everyone I say this to, I hear the reply "I would know how to make myself happy", but obviously this can't be true for everyone. In this case, I prefer to believe I'm the average guy...

Replies from: gjm
comment by gjm · 2013-07-15T19:27:04.152Z · LW(p) · GW(p)

I think the current consensus is that in fact having more money does make you happier.[1] As for richer, I can look at how I've lived in the past and observe that I've been pretty effective at being frugal and not spending money just because I have it. Of course it's possible that a sudden cash infusion 10 years ago would have broken all those good habits, but I don't see any obvious reason to believe it.

[1] See e.g. this (though I'd be a bit cautious given where it's being reported) and the underlying research.

[EDITED to fix a formatting glitch.]

Replies from: MrMind
comment by MrMind · 2013-07-16T08:24:28.694Z · LW(p) · GW(p)

As I said, this is the standard answer I get, albeit a little bit more sophisticated than the average.
Unless you're already rich and still have good saving habits, I see a very obvious reason why you would have broken those habits: you suddenly don't need to save anymore. All the motivational structure you have in place for saving suddenly loses meaning.
Anyway, I don't trust myself that much in the long run.

Replies from: gjm
comment by gjm · 2013-07-16T15:01:36.077Z · LW(p) · GW(p)

this is the standard answer I get

I am not aware of any valid inference from "I hear this often" to "this is wrong" :-).

Unless you're already rich and still having good saving habits, I see a very obvious reason why you should have broken those habits: you suddenly don't need to save anymore.

I suppose it depends on what you mean by "rich" and "need". I don't feel much like giving out the details of my personal finances here just to satisfy Some Guy On The Internet that I don't fit his stereotypes, so I'll just say that my family's spending habits haven't changed much (and our saving has accordingly increased in line with income) over ~ 20 years in which our income has increased substantially and our wealth (not unusually, since wealth can be zero or negative) has increased by orders of magnitude. On the other hand, I'm not retired just yet and trying to retire at this point would be uncomfortable, though probably not impossible.

So, sure, it's possible that a more sudden and larger change might have screwed me up in various ways. But, I repeat, I see no actual evidence for that, and enough evidence that my spending habits are atypical of the population that general factoids of the form "suddenly acquiring a lot of money doesn't make you richer in the long run" aren't obviously applicable. (Remark: among those who suddenly acquire a lot of money, I suspect that frequent lottery players are heavily overrepresented. So it's not even the general population that's relevant here, but a population skewed towards financial incompetence.)

Replies from: MrMind
comment by MrMind · 2013-07-16T16:03:05.401Z · LW(p) · GW(p)

don't feel much like giving out the details of my personal finances here just to satisfy Some Guy On The Internet that I don't fit his stereotypes

I agree that you shouldn't, I'll just say that indeed you do fit the stereotype.

So, sure, it's possible that a more sudden and larger change might have screwed me up in various ways. But, I repeat, I see no actual evidence for that

I think I've traced the source of disagreement, let me know if you agree on this analysis.
It's a neat exercise in tracking priors.
You think that your saving ratio is constant as a function of the derivative of your income, while I think that there are breakdown thresholds at large values of the derivative. The disagreement then is about the probability of a breakdown threshold.
I, using the outside view, say "according to these statistics, normal people have a (say) 0.8 probability of a breakdown, so you have the same probability"; you, using the inside view, say "using my model of my mind, I say that the extension of the linear model in the far region is still reliable".
The disagreement then transfers to "how well one can know one's own mind or motivational structure", that is, "if I say something about my mind, what is the probability that it is true?"
I don't know your opinion on this, but I guess it's high, correct?
In my case, it's low (NB: it's low for myself). From this descend all the opinions that we have expressed!

Remark: among those who suddenly acquire a lot of money, I suspect that frequent lottery players are heavily overrepresented. So it's not even the general population that's relevant here, but a population skewed towards financial incompetence.

Well, famous-then-forgotten celebrities (in any field: sports, music, movies, etc.) fit the category, but I don't know how much influence that has. Anyway, I have the feeling that financial competence is a rare thing to have in the general population, so even if the prior is skewed towards incompetence, that is not much of an effect.

Replies from: gjm
comment by gjm · 2013-07-16T19:08:21.871Z · LW(p) · GW(p)

I'll just say that indeed you do fit the stereotype.

Just for information: Are you deliberately trying to be unpleasant?

You think that your saving ratio is constant as a function of the derivative of your income

First of all, a terminological question: when you say "the derivative of your income" do you actually mean "your income within a short period"? -- i.e., the derivative w.r.t. time of "total income so far" or something of the kind? It sounds as if you do, and I'll assume that's what you mean in what follows.

So, anyway, I'm not quite sure whether you're trying to describe my opinions about (1) the population at large and/or (2) me in particular. My opinion about #1 is that most people spend almost all of their income; maybe their savings:income ratio is approximately constant, or maybe it's nearer the truth to say that their savings in absolute terms are constant, or maybe something else. But the relevant point (I think) is that most people are, roughly, in the habit of spending until they start running out of money. My opinion about #2 (for which I have pretty good evidence) is that, at least within the range of income I've experienced to date, my spending is approximately constant in absolute terms and doesn't go up much with increasing income or increasing wealth. In particular, I have strong evidence that (1) many people basically execute the algorithm "while I have money: spend some" and (2) I don't.

(I should maybe add that I don't think this indicates any particular virtue or brilliance on my part, though of course it's possible that my undoubted virtue and brilliance are factors. It's more that most of the things I like to do are fairly cheap, and that I'm strongly motivated to reduce the risk of Bad Things in the future like running out of money.)

I think that there are breakdown thresholds at large values of the derivative

Always possible (for people in general, for people-like-me, for me-in-particular). Though, at the risk of repeating myself, I think the failure of sudden influxes of money to make people richer in the long term is probably more a matter of executing that "spend until you run out" algorithm. Do you know whether any of the research on this stuff resolves that question?

I, using the outside view, [...]; you, using the inside view, [...]

I try to use both, and so far as I can tell I'm using both here. I'm not just looking at my model of the insides of my mind and saying "I can see I wouldn't do anything so stupid" (I certainly don't trust my introspection that much); so far as I can tell, I would make the same predictions about anyone else with a financial history resembling mine.

Now, for sure, I could be badly wrong. I might be fooling myself when I say I'm judging my likely behaviour in this hypothetical situation on the basis of my (somewhat visible externally) track record, rather than my introspective confidence in my mental processes. I might be wrong about how much evidence that track record is. I might be wrong in my model of why so many people end up in money trouble even if they suddenly acquire a pile of money; maybe it's a matter of those "breakpoints" rather than of a habit of spending until one runs out. Etc. So I'm certainly not saying I know that me-10-years-ago would have been helped rather than harmed by a sudden windfall. Only that, so far as I can tell, I most likely would have been.

even if the prior is skewed towards incompetence, that is not much of an effect.

I suggest that people who play the lottery a lot are probably, on balance, exceptionally incompetent, and that those people are probably overrepresented among windfall recipients.

I had a quick look for more information about the effects of suddenly getting money.

This article on Yahoo!!!!! Finance describes a study showing that lottery winners are more likely to end up bankrupt if they win more. That seems to fit with my theory that big lottery wins are correlated with buying a lot of lottery tickets, hence with incompetence. It quotes another study saying that people spend more money on lottery tickets if they're invited to do it in small increments (which is maybe very weak evidence against the "breakpoint" theory, which has the size-of-delta -> tendency-to-spend relationship going the other way -- except that the quantities involved here are tiny). And it speculates (without citing any sort of evidence) that what's going on with bankrupt lottery winners is that they keep spending until they run out, which is also my model.

This paper (PDF) finds that people in Germany are more likely to become entrepreneurs if they have made "windfall gains" (inheritance, donations, lottery winnings, payments from things like life insurance), suggesting that at least some people do more productive things with windfalls than just spend them all.

This paper [EDITED to add: ungated version] looks at an unusual lottery in 1832, and according to this blog post finds that on balance winners did better, with those who were already better off improving and those who were worse off being largely unaffected.

[EDITED to add more information about the 1832 lottery now that I can actually read the paper:] Some extracts from the paper: "Participation was nearly universal" (so, maybe, no selection-for-incompetence effect); "The prize in this lottery was a claim on a parcel of land" (so, different from lotteries with monetary prizes); "lottery losers look similar to lottery winners in a series of placebo checks" (so, again, maybe no selection for incompetence); "the poorest third of lottery winners were essentially as poor as the poorest third of lottery losers" (so the wins didn't help the poorest, but don't seem to have harmed them either).

Make of all that what you will.

Replies from: MrMind
comment by MrMind · 2013-07-17T09:04:15.354Z · LW(p) · GW(p)

Just for information: Are you deliberately trying to be unpleasant?

No, even though I speculated that the sentence you're answering could have been interpreted that way. Just to be clear, the stereotype here is "People who, when told that the general population usually ends up bankrupt after a big lottery win, say 'I won't, I know how to save'". Now I ask you: do you think you don't fit the stereotype?

Anyway, now I have a clearer picture of your model: you think that there are no threshold phenomena whatsoever, not only for you, but for the general population. You believe that people execute the same algorithm regardless of the amount of money it is applied to. So your point is not "I (probably) don't have a breakdown threshold" but "I (probably) don't execute a bad financial algorithm". That clarifies some more things. Besides, I'm a little sad that you didn't answer the main point, which was "How well do you think you know the inside mechanism of your mind?"

That seems to fit with my theory that big lottery wins are correlated with buying a lot of lottery tickets, hence with incompetence.

That would be bad bayesian probability. The correct way to treat it is "That seems to fit with my theory better than your theory". Do you think it does? Or that it supports my theory equally well? I'm asking because at the moment I'm behind my firm's firewall and cannot access those links; if you care to discuss it further I could comment this evening.

I'll just add that I have the impression that you're taking things a little bit too personally; I don't know why you care to such a degree. But pinpointing the exact source of disagreement seems to be a very good exercise in bayesian rationality; we could even promote it to a proper discussion post.

Replies from: gjm
comment by gjm · 2013-07-17T16:17:47.226Z · LW(p) · GW(p)

If that's your definition of "the stereotype" then I approximately fit (though I wouldn't quite paraphrase my response as "I know how to save"; it's a matter of preferences as much as of knowledge, and about spending as much as about saving).

The stereotype I was suggesting I may not fit is that of "people who, in fact, if they got a sudden windfall would blow it all and end up no better off".

now I have a clearer picture of your model

Except that you are (not, I think, for the first time) assuming my model is simpler than it actually is. I don't claim that there are "no threshold phenomena whatsoever". I think it's possible that there are some. I don't know just what (if any) there are, for me or for others. (My model is probabilistic.) I have not, looking back at my own financial behaviour, observed any dramatic threshold phenomena; it is of course possible that there are some but I don't see good grounds for thinking there are.

the main point, which was "How well do you think you know the inside mechanism of your mind"

As I already said, my predictions about the behaviour of hypothetical-me are not based on thinking I know the inside mechanism of my mind well, so I'm not sure why that should be the main point. I did, however, say '''I'm not just looking at my model of the insides of my mind and saying "I can see I wouldn't do anything so stupid" (I certainly don't trust my introspection that much)'''. I'm sorry if I made you sad, but I don't quite understand how I did.

That would be bad bayesian probability.

No. It would be bad bayesian probability if I'd said "That seems to fit with my theory; therefore yours is wrong". I did not say that. I wasn't trying to make this a fight between your theory and mine; I was trying to assess my theory. I'm not even sure what your theory about lottery tickets, as such, is. I think the fact that people with larger lottery wins ended up bankrupt more often than people with smaller ones is probably about equally good evidence for "lottery winners tend to be heavy lottery players, who tend to be particularly incompetent" as for "there's a threshold effect whereby gains above a certain size cause particularly stupid behaviour".

you're taking things a little bit too personally [...] I don't know why you care to such a degree

Well, from where I'm sitting it looks as if we've had multiple iterations of the following pattern:

  • you tell me, with what seems to me like excessive confidence, that I probably have Bad Characteristic X because most people have Bad Characteristic X
  • I give you what seems to me to be evidence that I don't
  • you reiterate your opinion that I probably have Bad Characteristic X because most people do.

The particular values of X we've had include

  • financial incompetence;
  • predicting one's own behaviour by naive introspection and neglecting the outside view;
  • overconfidence in one's predictions.

In each case, for the avoidance of doubt, I agree that most people have Bad Characteristic X, and I agree that in the absence of other information it's reasonable to guess that any given person probably has it too. However, it seems to me that

  • telling someone they probably have X is kinda rude, though possibly justified (not always; consider, e.g., the case where X is "not knowing anything about cognitive biases" and all you know about the person you're talking to is that they're a longstanding LW contributor)
  • continuing to do so when they've given you what they consider to be contrary evidence, and not offering a good counterargument, is very rude and probably severely unjustified.

So you've made a number of personal claims about me, albeit probabilistic ones; they have all been negative ones; they have all (as it seems to me) been under-supported by evidence; when I have offered contrary evidence you have largely ignored it.

It also seems to me that on each occasion when you've attempted to describe my own position, what you have actually described is a simplified version which happens to be clearly inferior to my actual position as I've tried to state it. For instance:

  • I say: I see ... enough evidence that my spending habits are atypical of the population. You say: you, using the inside view, say "using my model of my mind, I say that the extension of the linear model in the far region is still reliable".
  • I say, in response to your hypothesis about breakpoints: Always possible (for people in general, for people-like-me, for me-in-particular). and: I might be wrong in my model ...; maybe it's a matter of those "breakpoints" rather than of a habit of spending until one runs out. You say: you think that there are no threshold phenomena whatsoever, not only for you, but for the general population.

So. From where I'm sitting, it looks as if you have made a bunch of negative claims about my thinking, not updated in any way in the face of disagreement (and, where appropriate, contrary evidence), and repeatedly offered purported summaries of my opinions that don't match what I've actually said and are clearly stupider than what I've actually said.

Now, of course the negative claims began with statistical negative claims about the population at large, and I agree with those claims. But the starting point was my statement that "I would do X" and you chose to continue applying those negative claims to me personally.

I would much prefer a less personalized discussion. I do not enjoy defending myself; it feels too much like boasting.

[EDITED to fix a formatting screwup.]

[EDITED to add: Hi, downvoter! Would you care to tell me what you didn't like so that I can, if appropriate, do less of it in future? Thanks.]

comment by John_Maxwell (John_Maxwell_IV) · 2013-07-13T05:30:28.711Z · LW(p) · GW(p)

a large cached database of convenient heuristics for dealing with life situations

Hm?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-13T05:41:41.372Z · LW(p) · GW(p)

In the form of religious stories or perhaps advice from a religious leader. I should've been more specific than "life situations": my guess is that religious people acquire from their religion ways of dealing with, for example, grief and that atheists may not have cached any such procedures, so they have to figure out how to deal with things like grief.

Replies from: Error, savageorange
comment by Error · 2013-07-15T21:02:57.900Z · LW(p) · GW(p)

Finding an appropriate cached procedure for grief for atheists may not be a bad idea. Right after a family death, say, is a bad time to have to work out how you should react to a loved one being suddenly gone forever.

comment by savageorange · 2013-07-14T02:30:09.821Z · LW(p) · GW(p)

Of course that is not necessarily winning, insofar as it promotes failing to take responsibility for working out solutions that are well fitted to your particular situation (and the associated failure mode where, if you can't find a cached entry at all, you just revert to form and either act helpless or act out). The best I'm willing to regard that as is 'maintaining the status quo' (as with having a lifejacket vs. being able to swim).

I would regard it as unambiguously winning if they had a good database AND succeeded at getting people to take responsibility for developing real problem solving skills. (I think the database would have to be much smaller in this case -- consider something like the GROW Blue Book as an example of such a reasonably-sized database, but note that GROW groups are much smaller (max 15 people) than church congregations)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-14T02:47:31.771Z · LW(p) · GW(p)

I think you underestimate how difficult thinking is for most people.

Replies from: savageorange
comment by savageorange · 2013-07-14T04:20:45.741Z · LW(p) · GW(p)

That's true (Dunning-Kruger effect etc.).

Although, isn't the question not about difficulty, but about whether you really believe you should have, and deserve to have, a good life? I mean, if the responsibility is yours, then it's yours, no matter whether it's the responsibility to move a wheelbarrow full of pebbles or to move every stone in the Pyramids. And your life can't really genuinely improve until you accept that responsibility, no matter what hell you have to go through to become such a person, and no matter how comfortable/'workable' your current situation may seem.

(of course, there's a separate argument to be made here, that 'people don't really believe they should have, or deserve to have a good life'. And I would agree that 99% or more don't. But I think believing in people's need and ability to take responsibility for their life is part of believing that they can HAVE a good life, or that they are even worthwhile at all.)

In case this seems like it's wandered off topic, the general problem with religion I'm trying to point at is 'disabling help': having solutions and support too readily/abundantly available discourages people from owning their own life and their own problems, and from developing skills that are necessary to a good life. They probably won't become great at thinking, but they could become better if, and only if, circumstances pressed them to.

comment by pragmatist · 2013-07-13T10:10:10.525Z · LW(p) · GW(p)

If I take the outside view and account for the fact that thirty-something percent of people, including a lot of really smart people, believe in Christianity,

Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity. If an alarm goes off whenever there's an earthquake, but also whenever a car drives by outside, then the alarm going off is very weak (practically negligible) evidence for an earthquake. More technically, when you are trying to evaluate the extent to which E is good evidence for H (and consequently, how much you should update your belief in H based on E), you want to look not at the likelihood Pr(E|H), but at the likelihood ratio Pr(E|H)/Pr(E|~H). And the likelihood ratio in this case, I submit, is not much more than 1, which means that updating on the evidence shouldn't move your prior odds all that much.
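To put rough numbers on this (both figures below are invented purely for illustration): in odds form, a likelihood ratio close to 1 barely moves even a fairly generous prior.

    # Illustrative odds-form Bayesian update with a weak likelihood ratio.
    prior_prob = 0.01                          # assumed prior P(H) = 1%
    prior_odds = prior_prob / (1 - prior_prob)
    likelihood_ratio = 1.5                     # assumed Pr(E|H) / Pr(E|~H), "not much more than 1"
    posterior_odds = prior_odds * likelihood_ratio
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(posterior_prob)                      # ~0.015 -- barely moved from the 1% prior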

and that at least personally I have radically changed my worldview a whole bunch of times,

This seems irrelevant to the truth of Christianity.

then it seems like I should assign at least a 5% or so probability to Christianity being true.

That probability is way too high.

Replies from: Will_Newsome
comment by Will_Newsome · 2013-07-13T12:15:59.525Z · LW(p) · GW(p)

Yes, but there are highly probable alternate explanations (other than the truth of Christianity) for their belief in Christianity, so the fact of their belief is very weak evidence for Christianity.

Of course, there are also perspective-relative "highly probable" alternate explanations than sound reasoning for non-Christians' belief in non-Christianity. (I chose that framing precisely to make a point about what hypothesis privilege feels like.) E.g., to make the contrast in perspectives stark, demonic manipulation of intellectual and political currents. E.g., consider that "there are no transhumanly intelligent entities in our environment" would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote. Also "human minds are prone to see agency when there is in fact none, therefore no perception of agency can provide evidence of (non-human) agency" would be a useful idea for (Christian-)hypothetical demons to promote.

Of course, from our side that perspective looks quite discountable because it reminds us of countless cases of humans seeing conspiracies where it's in fact quite demonstrable that no such conspiracy could have existed; but then, it's hard to say what the relevance of that is if there is in fact strong but incommunicable evidence of supernaturalism—an abundance of demonstrably wrong conspiracy theorists is another thing that the aforementioned hypothetical supernatural processes would like to provoke and to cultivate. "The concept of 'evidence' had something of a different meaning, when you were dealing with someone who had declared themselves to play the game at 'one level higher than you'." — HPMOR. At roughly this point I think the arena becomes a social-epistemic quagmire, beyond the capabilities of even the best of Lesswrong to avoid getting something-like-mind-killed about.

Replies from: benelliott
comment by benelliott · 2013-07-14T02:36:11.589Z · LW(p) · GW(p)

consider that "there are no transhumanly intelligent entities in our environment" would likely be a notion that usefully-modelable-as-malevolent transhumanly intelligent entities would promote

Why?

Replies from: Jonathan_Graehl, AlexSchell
comment by Jonathan_Graehl · 2013-07-16T22:23:27.119Z · LW(p) · GW(p)

I agree that this doesn't even make sense. If you're super intelligent/powerful, you don't need to hide. You can if you want, but ...

comment by AlexSchell · 2013-07-15T13:23:36.534Z · LW(p) · GW(p)

Not an explanation, but: "The greatest trick the Devil ever pulled..."

comment by drethelin · 2013-07-13T03:55:24.300Z · LW(p) · GW(p)

http://en.wikipedia.org/wiki/List_of_religious_populations

How do you account for the other two thirds of people who don't believe in Christianity and commonly believe things directly contradictory to it? Insofar as every religion was once (when it started) vastly outnumbered by the others, you can't use population at any given point in history as evidence that a particular religion is likely to be true, since the same exact metric would condemn you to hell at many points in the past. There are several problems with pascal's wager, but the biggest to me is that it's impossible to choose WHICH pascal's wager to make. You can attempt to conform to all non-contradictory religious rules extant, but that still leaves the problem of choosing which contradictory commandments to obey, as well as the problem of what exactly god even wants from you, whether it's belief or simple ritual. The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true, putting the odds of "christianity" being true at lower than even 1 percent, and the odds of any specific sect of christianity being true even lower.

Replies from: ChristianKl, gothgirl420666, shminux
comment by ChristianKl · 2013-07-13T13:34:37.692Z · LW(p) · GW(p)

How do you account for the other two thirds of people who don't believe in Christianity and commonly believe things directly contradictory to it?

There are also various Christians who believe that other Christians who follow Christianity the wrong way will go to hell.

Replies from: Sarokrae
comment by Sarokrae · 2013-07-14T09:12:58.894Z · LW(p) · GW(p)

I can't upvote this point enough.

And more worryingly, with the Christians I have spoken to, those who are more consistent in their beliefs and actually update the rest of their beliefs on them (and don't just have "Christianity" as a little disconnected bubble in their beliefs) are overwhelmingly in this category, and those who believe that most Christians will go to heaven usually haven't thought very hard about the issue.

Replies from: palladias, ChristianKl
comment by palladias · 2013-07-14T12:47:55.038Z · LW(p) · GW(p)

C.S. Lewis thought most everyone was going to Heaven and thought very hard about the issue. (The Great Divorce is brief, engagingly written, an allegory of near-universalism, and a nice typology of some sins).

comment by ChristianKl · 2013-07-14T09:22:07.374Z · LW(p) · GW(p)

I would also add that there are Christians who believe that everyone goes to heaven, even atheists. I spoke with a Protestant theology student in Berlin who assured me that the belief is quite popular among his fellow students. He also had no spiritual experiences whatsoever ;)

Then he's going to be a priest in a few years.

comment by gothgirl420666 · 2013-07-13T06:22:34.180Z · LW(p) · GW(p)

Well, correct me if I'm wrong, but most of the other popular religions don't really believe in eternal paradise/damnation, so Pascal's Wager applies just as much to, say, Christianity vs. Hinduism as it does Christianity vs. atheism. Jews, Buddhists, and Hindus don't believe in hell, as far as I can tell, but Muslims do. So if I were going to buy into Pascal's wager, I think I would read apologetics of both Christianity and Islam, figure out which one seemed more likely, and go with that one. Even if you found equal probability estimates for both, flipping a coin and picking one would still be better than going with atheism, right?

The proliferation of equally plausible religions is to me very strong evidence that no one of them is likely to be true,

Why? Couldn't it be something like: Religion A is correct; Religion B almost gets it and is getting at the same essential truth, but is wrong in a few ways; Religion C is an outdated version of Religion A that failed to update on new information; Religion D is an altered imitation of Religion A that only exists for political reasons; etc.?

Good post though, and you sort of half-convinced me that there are flaws in Pascal's Wager, but I'm still not so sure.

Replies from: DanArmak, drethelin, taelor
comment by DanArmak · 2013-07-13T15:45:53.407Z · LW(p) · GW(p)

You're combining two reasons for believing: Pascal's Wager, and popularity (that many people already believe). That way, you try to avoid a pure Pascal's Mugging, but if the mugger can claim to have successfully mugged many people in the past, then you'll submit to the mugging. You'll believe in a religion if it has Heaven and Hell in it, but only if it's also popular enough.

You're updating on the evidence that many people believe in a religion, but it's unclear what it's evidence for. How did most people come to believe in their religion? They can't have followed your decision procedure, because it only tells you to believe in popular religions, and every religion historically started out small and unpopular.

So for your argument to work, you must believe that the truth of a religion is a strong positive cause of people believing in it. (It can't be overwhelmingly strong, though, since no religion has or has had a large majority of the world believing in it.)

But if people can somehow detect or deduce the truth of a religion on their own - and moreover, billions of people can do so (in the case of the biggest religions) - then you should be able to do so as well.

Therefore I suggest you try to decide on the truth of a religion directly, the way those other people did. Pascal's Wager can at most bias you in favour of religions with Hell in them, but you still need some unrelated evidence for their truth, or else you fall prey to Pascal's Mugging.

comment by drethelin · 2013-07-13T20:56:26.751Z · LW(p) · GW(p)

Even if you limit yourself to eternal damnation promising religions, you still need to decide which brand of Christianity/Islam is true.

If religion A is true, that implies that religion A's god exists and acts in a way consistent with the tenets of that religion. This implies that all of humanity should have strong and very believable evidence for Religion A over all other religions. But we have a large number of religions that describe god and gods acting in very different ways. This is either evidence that all the religions are relatively false, that god is inconsistent, or that we have multiple gods who are of course free to contradict one another. There's a lot of evidence that religions sprout from other religions and you could semi-plausibly argue that there is a proto-religion that all modern ones are versions or corruptions of, but this doesn't actually work to select Christianity, because we have strong evidence that many religions predate Christianity, including some from which it appears to have borrowed myths.

Another problem with pascal's wager: claims about eternal rewards or punishments are not as difficult to make as they would be to make plausible. Basically: any given string of words said by a person is not plausible evidence for infinite anything, because it's far easier to SAY infinity than to provide any other kind of evidence. This means you can't afford to multiply utility by infinity, because at any point someone can make any claim involving infinity and fuck up all your math.

comment by taelor · 2013-07-14T17:56:17.766Z · LW(p) · GW(p)

Jews, Buddhists, and Hindus don't believe in hell, but as far as I can tell.

I can't speak for the other ones, but Buddhists at least don't have a "hell" that non-believers go to when they die, because Buddhists already believe that life is an eternal cycle of infinite suffering that can only be escaped by following the tenets of their religion. Thus, rather than going to hell, non-believers just get reincarnated back into our current world, which Buddhism sees as being like unto hell.

comment by Shmi (shminux) · 2013-07-13T05:28:12.284Z · LW(p) · GW(p)

To steelman it, what about a bet that believing in a higher power, no matter the flavor, saves your immortal soul from eternal damnation?

Replies from: DanArmak, NancyLebovitz, TimS
comment by DanArmak · 2013-07-13T15:47:43.994Z · LW(p) · GW(p)

That is eerily similar to an Omega who deliberately favours specific decision theories instead of their results.

Replies from: shminux
comment by Shmi (shminux) · 2013-07-13T16:49:57.542Z · LW(p) · GW(p)

Just trying to see what form of Pascal's wager would avoid the strongest objections.

comment by NancyLebovitz · 2013-07-13T11:10:57.036Z · LW(p) · GW(p)

I don't think this is just about the afterlife. Do any religions offer good but implausible advice about how to live?

Replies from: DanArmak
comment by DanArmak · 2013-07-13T15:47:11.264Z · LW(p) · GW(p)

What do you mean by 'good but implausible'?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-13T17:21:13.413Z · LW(p) · GW(p)

I was thinking about the Christian emphasis on forgiveness, but the Orthodox Jewish idea of having a high proportion of one's life affected by religious rules would also count.

Replies from: DanArmak
comment by DanArmak · 2013-07-13T19:36:43.972Z · LW(p) · GW(p)

Judging something as 'good' depends on your ethical framework. What framework do you have in mind when you ask if any religions offer good advice? After all, every religion offers good advice according to its own ethics.

Going by broadly humanistic, atheistic ethics, what is good about having a high proportion of one's life be affected by religious rules? (Whether the Orthodox Jewish rules, or in general.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-14T01:53:49.172Z · LW(p) · GW(p)

what is good about having a high proportion of one's life be affected by religious rules?

It may be worth something for people to have some low-hanging fruit for feeling as though they're doing the right thing.

Replies from: DanArmak
comment by DanArmak · 2013-07-14T09:01:38.510Z · LW(p) · GW(p)

That sounds like a small factor compared to what the rules actually tell people to do.

comment by TimS · 2013-07-13T06:17:50.929Z · LW(p) · GW(p)

If the higher power cared, don't you think such power would advertise more effectively? Religious wars seem like pointless suffering if any sufficient spiritual belief saves the soul.

Replies from: DanArmak
comment by DanArmak · 2013-07-13T19:34:11.699Z · LW(p) · GW(p)

If the higher power cared about your well being, it would just "save" everyone regardless of belief or other attributes. It would also intervene to create heaven on earth and populate the whole universe with happy people.

Remember that the phrase "save your soul" refers to saving it from the eternal torture visited by that higher power.

Replies from: TimS
comment by TimS · 2013-07-14T21:16:05.146Z · LW(p) · GW(p)

I don't think we disagree.

comment by [deleted] · 2013-07-13T02:54:34.275Z · LW(p) · GW(p)

I should think that this is more likely to indicate that nobody, including really smart people, and including you, actually knows what's what, and that trying to chase after all these Pascal's muggings is pointless because you will always run into another one that seems convincing from someone else who is smart.

Replies from: Watercressed
comment by Watercressed · 2013-07-13T05:55:11.139Z · LW(p) · GW(p)

There's a bit of a problem with the claim that nobody knows what's what: the usual procedure when someone lacks knowledge is to assign an ignorance prior. The standard methods for generating ignorance priors, usually some formulation of Occam's razor, assign very low probability to claims as complex as common religions.
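As a toy illustration of what such a prior does (the bit counts are invented; the point is only the shape of the penalty, not the actual description length of any particular claim):

    # A length-penalized ("Occam") prior: each extra bit of description halves the prior mass.
    def occam_prior(description_length_bits):
        return 2.0 ** (-description_length_bits)

    print(occam_prior(10))     # a "simple" hypothesis: prior ~1e-3
    print(occam_prior(1000))   # a much more complex hypothesis: prior ~1e-301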

comment by TimS · 2013-07-13T04:52:31.414Z · LW(p) · GW(p)

People being religious is some evidence that religion is true. Aside from drethelin's point about multiple contradictory religions, religions as actually practiced make predictions. It appears that those predictions do not stand up to rigorous examination.

To pick an easy example, I don't think anyone thinks a Catholic priest can turn wine into blood on command. And if an organized religion does not make predictions that could be wrong, why should you change your behavior based on that organization's recommendations?

Replies from: DanArmak, ChristianKl, shminux
comment by DanArmak · 2013-07-13T15:17:39.652Z · LW(p) · GW(p)

I don't think anyone thinks a Catholic priest can turn wine into blood on command.

Neither do Catholics think their priests turn wine into actual, detectable blood. After all, they're able to see and taste it as wine afterwards! Instead they're dualists of a sort: they believe the substance of the wine is replaced by that of blood, while the appearances remain. And they think this makes testable predictions, because they think they have dualistic non-material souls which can then somehow experience the altered substance of the wine-blood.

Anyway, Catholicism makes lots of other predictions about the ordinary material world, which of course don't come true, and so it's more productive to focus on those. For instance, the efficacy of prayer, miraculous healing, and the power of sacred relics and places.

Replies from: ThisSpaceAvailable
comment by ThisSpaceAvailable · 2013-07-15T02:23:34.456Z · LW(p) · GW(p)

I really don't think that the vast majority of Catholics bother forming a position regarding transubstantiation. One of the major benefits of joining a religion is letting other people think for you.

Replies from: DanArmak
comment by DanArmak · 2013-07-15T08:17:21.420Z · LW(p) · GW(p)

This is probably true, but the discussion was about religion (i.e. official dogma) making predictions. Lots of holes can be picked in that, of course.

comment by ChristianKl · 2013-07-13T13:33:36.881Z · LW(p) · GW(p)

Aside from drethelin's point about multiple contradictory religions, religions as actually practiced make predictions. It appears that those predictions do not stand up to rigorous examination.

I don't think it's fair to say that none of the practical predictions of religion holds up to rigorous examination. In Willpower by Roy Baumeister the author describes well how organisations like Alcoholics Anonymous can effectively use religious ideas to help people quit alcohol.

Buddhist meditation is also a practice that has a lot of backing in rigorous examination.

On LessWrong Luke Muehlhauser wrote that Scientology 101 was one of the best learning experiences in his life, notwithstanding the dangers that come from the group.

Various religions do advocate practices that have concrete real-world effects. Focusing on whether or not the wine really gets turned into blood misses the point if you want to weigh the practical benefits and disadvantages of following a religion.

Replies from: drethelin
comment by drethelin · 2013-07-14T02:53:20.897Z · LW(p) · GW(p)

Alcoholics Anonymous is famously ineffective, but separate from that: What's your point here? Being a christian is not the same as subjecting christian practices to rigorous examination to test for effectiveness. The question the original asker asked was not 'Does religion have any worth?' but 'Should I become a practicing christian to avoid burning in hell for eternity?'

comment by Shmi (shminux) · 2013-07-13T05:25:49.095Z · LW(p) · GW(p)

People being religious is some evidence that religion is true.

To me it is only evidence that people are irrational.

Replies from: TimS, ChristianKl
comment by TimS · 2013-07-13T05:35:35.431Z · LW(p) · GW(p)

If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.

Your belief that people are irrational relies on additional evidence of the type that I referenced. It is not contained in the fact of overwhelming belief.

Like how Knox's roommate's death by murder is evidence that Knox committed the murder. And that evidence is overwhelmed by other evidence that suggests Knox is not the murderer.

Replies from: ThisSpaceAvailable, shminux
comment by ThisSpaceAvailable · 2013-07-15T02:32:18.498Z · LW(p) · GW(p)

Whether people believing in a hypothesis is evidence for the hypothesis depends on the hypothesis. If the hypothesis does not contain a claim that there is some mechanism by which people would come to believe in the hypothesis, then it is not evidence. For instance, if people believe in a tea kettle orbiting the sun, their belief is not evidence for it being true, because there is no mechanism by which a tea kettle orbiting the sun might cause people to believe that there is a tea kettle orbiting the sun. In fact, there are some hypotheses for which belief is evidence against. For instance, if someone believes in a conspiracy theory, that's evidence against the conspiracy theory; in a world in which a set of events X occurs, but no conspiracy is behind it, people would be free to develop conspiracy theories regarding X. But in a world in which X occurs, and a conspiracy is behind it, it is likely that the conspiracy will interfere with the formation of any conspiracy theory.

Replies from: wedrifid
comment by wedrifid · 2013-07-15T03:02:52.384Z · LW(p) · GW(p)

Whether people believing in a hypothesis is evidence for the hypothesis depends on the hypothesis. If the hypothesis does not contain a claim that there is some mechanism by which people would come to believe in the hypothesis, then it is not evidence. For instance, if people believe in a tea kettle orbiting the sun, their belief is not evidence for it being true, because there is no mechanism by which a tea kettle orbiting the sun might cause people to believe that there is a tea kettle orbiting the sun.

Bad example. In fact, the example you give is sufficient to require that your contention be modified (or rejected as is).

While it is not the case that there is a tea kettle orbiting the sun (except on earth) there is a mechanism by which people can assign various degrees of probability to that hypothesis, including probabilities high enough to constitute 'belief'. This is the case even if the existence of such a kettle is assumed to have not caused the kettle belief. Instead, if observations about how physics works and our apparent place within it were such that kettles are highly likely to exist orbiting suns like ours then I would believe that there is a kettle orbiting the sun.

It so happens that it is crazy to believe in space kettles that we haven't seen. This isn't because we haven't seen them---we wouldn't expect to see them either way. This is because they (probably) don't exist (based on all our observations of physics). If our experiments suggested a different (perhaps less reducible) physics then it would be correct to believe in space kettles despite there being no way for the space kettle to have caused the belief.

comment by Shmi (shminux) · 2013-07-13T05:41:23.284Z · LW(p) · GW(p)

If literally the only evidence you had was that the overwhelming majority of people professed to believe in religion, then you should update in favor of religion being true.

Yes, but this is different from a generic "People being religious is some evidence that religion is true."

Replies from: TimS
comment by TimS · 2013-07-13T05:51:40.278Z · LW(p) · GW(p)

P(religion is true | overwhelming professing of belief) > P(religion is true | absence of overwhelming professing of belief).

In other words, I think my two formulations are isomorphic. If we define evidence such that absence of evidence is evidence of absence, then one implication is that it is possible for some evidence to exist in favor of false propositions.

Replies from: DanArmak
comment by DanArmak · 2013-07-13T15:20:03.625Z · LW(p) · GW(p)

it is possible for some evidence to exist in favor of false propositions.

This is possible with any definition of evidence. Every bit of information you receive makes you discard some theories which have been disproven, so it's evidence in favour of each of the ones you don't discard. But only one of those is fully true; the others are false.

comment by ChristianKl · 2013-07-13T13:06:42.941Z · LW(p) · GW(p)

To me it is only evidence that people are irrational.

The issue is: How do you know that you aren't just as irrational as them?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-16T22:30:48.129Z · LW(p) · GW(p)

My personal answer:

  1. I'm smart. They're not (IQ test, SAT, or a million other evidences). Even though high intelligence doesn't at all cause rationality, in my experience judging others it's so correlated as to nearly be a prerequisite.

  2. I care a lot (but not too much) about consistency under the best / most rational reflection I'm capable of. Whenever this would conflict with people liking me, I know how to keep a secret. They don't make such strong claims of valuing rationality. Maybe others are secretly rational, but I doubt it. In the circles I move in, nobody is trying to conceal intellect. If you could be fun, nice, AND seem smart, you would do it. Those who can't seem smart, aren't.

  3. I'm winning more than they are.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-17T08:12:42.007Z · LW(p) · GW(p)

I care a lot (but not too much) about consistency under the best / most rational reflection I'm capable of.

That value doesn't directly lead to having a belief system where individual beliefs can be used to make accurate predictions. For most practical purposes the forward–backward algorithm produces better models of the world than Viterbi. Viterbi optimizes for overall consistency while the forward–backward algorithm looks at local states.

If you have uncertainty in the data about which you reason, the world view with the most consistency is likely flawed.

One example is heat development in some forms of meditation. The fact that our body can generate heat through thermogenin without any shivering is a relatively new biochemical discovery. There were plenty of self-professed rationalists who didn't believe in any heat development during meditation because the people meditating don't shiver. In examples like that, the search for consistency leads to denying important empirical evidence.

It takes a certain humility to accept that there is heat development during meditation without knowing a mechanism that can account for it.

People who want to signal socially that they know it all don't have the epistemic humility that allows for the insight that there are important things that they just don't understand.

To quote Nassim Taleb: "It takes extraordinary wisdom and self control to accept that many things have a logic we do not understand that is smarter than our own."


For the record, I'm not a member of any religion.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-17T21:44:31.853Z · LW(p) · GW(p)

I'm pretty humble about what I know. That said, it sometimes pays to not undersell (when others are confidently wrong, and there's no time to explain why, for example).

Interesting analogy between "best path / MAP (viterbi)" :: "integral over all paths / expectation" as "consistent" :: "some other type of thinking/ not consistent?" I don't see what "integral over many possibilities" has to do with consistency, except that it's sometimes the correct (but more expensive) thing to do.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-18T15:11:41.505Z · LW(p) · GW(p)

I'm pretty humble about what I know. That said, it sometimes pays to not undersell (when others are confidently wrong, and there's no time to explain why, for example).

I'm not so much talking about humility that you communicate to other people but about actually thinking that the other person might be right.

I don't see what "integral over many possibilities" has to do with consistency, except that it's sometimes the correct (but more expensive) thing to do.

There are cases where the forward-backward algorithm gives you a path that's impossible to happen. I would call those paths inconsistent.

That's one of the lessons I learned in bioinformatics. Having an algorithm that's robust to error is often much better than just picking the explanation that's most likely to explain the data.

A map of the world that allows for some inconsistency is more robust than one where one error leads to a lot of bad updates to make the map consistent with the error.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-19T04:16:30.909Z · LW(p) · GW(p)

I understand forward-backward (in general) pretty well and am not sure what application you're thinking of or what you mean by "a path that's impossible to happen". Anyway, yes, I agree that you shouldn't usually put 0 plausibility on views other than your current best guess.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-19T08:52:25.266Z · LW(p) · GW(p)

It's possible that the transition from 5:A to 6:B has p=0 and the path created by forward-backward still goes from 5:A to 6:B.
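A minimal sketch of that failure mode (toy HMM, all numbers invented for illustration, not taken from any bioinformatics example): per-position posterior decoding with forward-backward picks the locally most probable state at each step, and here that stitches together a sequence using a forbidden transition, while Viterbi returns a path that is globally possible.

    # Toy HMM: posterior (forward-backward) decoding vs. Viterbi decoding.
    import numpy as np

    pi = np.array([0.4, 0.3, 0.3])           # initial state distribution
    A = np.array([[0.0, 1.0, 0.0],           # transitions: 0 -> 2 has probability 0
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 1.0]])
    B = np.full((3, 2), 0.5)                 # uninformative emissions over 2 symbols
    obs = [0, 0]

    # Forward-backward: marginal posterior of each state at each time step.
    alpha = np.zeros((len(obs), 3))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta = np.ones((len(obs), 3))
    for t in range(len(obs) - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    posterior = alpha * beta
    posterior /= posterior.sum(axis=1, keepdims=True)
    posterior_path = [int(s) for s in posterior.argmax(axis=1)]

    # Viterbi: the single most probable (and therefore possible) state sequence.
    delta, back = pi * B[:, obs[0]], []
    for t in range(1, len(obs)):
        scores = delta[:, None] * A
        back.append(scores.argmax(axis=0))
        delta = scores.max(axis=0) * B[:, obs[t]]
    viterbi_path = [int(delta.argmax())]
    for ptr in reversed(back):
        viterbi_path.insert(0, int(ptr[viterbi_path[0]]))

    print("posterior path:", posterior_path)   # [0, 2] -- uses the zero-probability 0 -> 2 step
    print("viterbi path:  ", viterbi_path)     # [0, 1] -- a sequence that can actually happen

The marginals themselves are fine; it's only reading them off position by position as a single path that can produce something globally inconsistent, which is the trade-off being pointed at.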

comment by hairyfigment · 2013-07-14T07:14:28.022Z · LW(p) · GW(p)

Qiaochu_Yuan has it right - the vast majority of Christians do not constitute additional evidence.

Moreover, the Bible (Jewish, Catholic, or Protestant) describes God as an abusive jerk. Everything we know about abusive jerks says you should get as far away from him as possible. Remember that 'something like the God of the Bible exists' is a simpler hypothesis than Pascal's Christianity, and in fact is true in most multiverse theories. (I hate that name, by the way. Can't we replace it with 'macrocosm'?)

More generally, if for some odd reason you find yourself entertaining the idea of miraculous powers, you need to compare at least two hypotheses:

* Reality allows these powers to exist, AND they already exist, AND your actions can affect whether these powers send you to Heaven or Hell (where "Heaven" is definitely better and not at all like spending eternity with a human-like sadist capable of creating Hell), AND faith in a God such as humans have imagined will send you to Heaven, AND lack of this already-pretty-specific faith will send you to Hell.

* Reality allows these powers to exist, AND humans can affect them somehow, AND religion would interfere with exploiting them effectively.

comment by Shmi (shminux) · 2013-07-13T05:24:00.233Z · LW(p) · GW(p)

it seems like I should assign at least a 5% or so probability to Christianity being true

Why such a high number? I cannot imagine any odds I would take on a bet like that.

comment by ThisSpaceAvailable · 2013-07-15T02:18:25.500Z · LW(p) · GW(p)

Is people believing in Christianity significantly more likely under the hypothesis that it is true, as opposed to under the hypothesis that it is false? Once one person believes in Christianity, does more people believing in Christianity have significant further marginal evidentiary value? Does other people believing in Christianity indicate that they have knowledge that you don't have?

Replies from: wedrifid
comment by wedrifid · 2013-07-15T02:44:12.413Z · LW(p) · GW(p)

Is people believing in Christianity significantly more likely under the hypothesis that it is true, as opposed to under the hypothesis that it is false?

Yes.

Once one person believes in Christianity, does more people believing in Christianity have significant further marginal evidentiary value?

Yes.

Does other people believing in Christianity indicate that they have knowledge that you don't have?

Yes.

(Weakly.)

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2013-07-16T22:33:49.202Z · LW(p) · GW(p)

I agree completely. It's impossible for me to imagine a scenario where a marginal believer counts as evidence against the belief - at best you can explain the belief away ("they're just conforming" lets you approach zero slope once it's a majority religion with a death penalty for apostates).

comment by CoffeeStain · 2013-07-13T23:46:49.935Z · LW(p) · GW(p)

I have found this argument compelling, especially the portion about assigning a probability to the truth of Christian belief. Even if we have arguments that seem to demonstrate why it is that radically smart people believe a religion without recourse to there being good arguments for the religion, we haven't explained why these people instead think there are good arguments. Sure, you don't think they're good arguments, but they do, and they're rational agents as well.

You could say, "well they're not rational agents, that was the criticism in the first place," but we have the same problem that they do think they themselves are rational agents. What level do we have to approach that allows you to make a claim about how your methods for constructing probabilities trump theirs? The highest level is just, "you're both human," which makes valid the point that to some extent you should listen to the opinions of others. The next level "you're both intelligent humans aimed at the production of true beliefs" is far stronger, and true in this case.

Where the Wager breaks down for me is that much more is required to demonstrate that if Christianity is true, God sends those who fail to produce Christian belief to Hell. Of course, this could be subject to the argument that many smart people also believe this corollary, but it remains true that it is an additional jump, and that many fewer Christians take that jump than are simply Christians.

What takes the cake for me is asking what a good God would value. It's a coy response for the atheist to say that a good God would understand the reasons one has for being an atheist, and that it's his fault that the evidence doesn't get there. The form of this argument works for me, with a nuance: Nobody is honest, and nobody deserves, as far as I can tell, any more or less pain in eternity for something so complex as forming the right belief about something so complicated. God must be able to uncrack the free will enigma and decide what's truly important about people's actions, and somehow it doesn't seem that the relevant morality-stuff is perfectly predicted by religious affiliation. This doesn't suggest that God might not have other good reasons to send people to Hell, but it seems hard to tease those out of yourself to a sufficient extent to start worrying beyond worrying about how much good you want to do in general. If God punishes people for not being good enough, the standard method of reducing free will to remarkably low levels makes it hard to see what morality-stuff looks like. Whether or not it exists, you have the ability to change your actions by becoming more honest, more loving, and hence possibly more likely to be affiliated with the correct religion. But it seems horrible for God to make it a part of the game for you to be worrying about whether or not you go to Hell for reasons other than honesty or love. Worry about honesty and love, and don't worry about where that leads.

In short, maybe Hell is one outcome of the decision game of life. But very likely God wrote it so that one's acceptance of Pascal's wager has no impact on the outcome. Sure, maybe one's acceptance of Christianity does, but there's nothing you can do about it, and if God is good, then this is also good.

Replies from: drethelin
comment by drethelin · 2013-07-14T03:12:35.327Z · LW(p) · GW(p)

People are not rational agents, and people do not believe in religions on the basis of "good arguments." Most people are the same religion as their parents.

Replies from: CoffeeStain
comment by CoffeeStain · 2013-07-14T03:22:01.206Z · LW(p) · GW(p)

As often noted, most nonreligious parents have nonreligious children as well. Does that mean that people do not disbelieve religions on the basis of good arguments?

Your comment is subject to the same criticism we're discussing. If any given issue has been raised, then some smart religious person is aware of it and believes anyway.

Replies from: drethelin
comment by drethelin · 2013-07-14T03:29:08.825Z · LW(p) · GW(p)

I think most people do not disbelieve religions on the basis of good arguments either. I'm most likely atheist because my parents are. The point is that you can't treat majority beliefs as the aggregate beliefs of groups of rational agents. It doesn't matter if for any random "good argument" some believer or nonbeliever has heard it and not been swayed; you should not expect the majority of people's beliefs on things that do not directly impinge on their lives to be very reliably correlated with anything other than the beliefs of those around them.

Replies from: CoffeeStain
comment by CoffeeStain · 2013-07-14T03:41:18.730Z · LW(p) · GW(p)

The above musings do not hinge on the ratio of people in a group who believe things for the right reasons, only on some portion of them doing so.

Your consideration helps us assign probabilities for complex beliefs, but it doesn't help us improve them. Upon discovering that your beliefs correlate with those of your parents, you can introduce uncertainty in your current assignments, but you go about improving them by thinking about good arguments. And only good arguments.

The thrust of the original comment here is that discovering which arguments are good is not straightforward. You can only go so deep into the threads of argumentation until you start scraping on your own bias and incapacities. Your logic is not magic, and neither are intuitions nor others' beliefs. But all of them are heuristics that you can take into account when assigning probabilities. The very fact that others exist who are capable of digging as deep into the logic and being as skeptical of their intuitions, and who believe differently than you, is evidence that their opinion is correct. It matters little whether every person of that opinion is like that, only that the best are. Because those are the only people you're paying attention to.

comment by Will_Newsome · 2013-07-13T11:40:52.363Z · LW(p) · GW(p)

[ETA: Retracted because I don't have the aversion-defeating energy necessary to polish this, but:]

5% or so probability to Christianity being true

To clarify, presumably "true" here doesn't mean all or even most of the claims of Christianity are true, just that there are some decision policies emphasized by Christianity that are plausible enough that Pascal's wager can be justifiably applied to amplify their salience.

I can see two different groups of claims that both seem central to Christian moral (i.e. decision-policy-relevant) philosophy as I understand it, which in my mind I would keep separate if at all possible but that in Christian philosophy and dogma are very much mixed together:

  1. The first group of claims is in some ways more practical and, to a LessWronger, more objectionable. It reasons from various allegedly supernatural phenomena to the conclusion that unless a human acts in a way seemingly concordant with the expressed preferences of the origins of those supernatural phenomena, that human will be risking some grave, essentially game theoretic consequence as well as some chance of being in moral error, even if the morality of the prescriptions isn't subjectively verifiable. Moral error, that is, because disregarding the advice, threats, requests, policies &c. of agents seemingly vastly more intelligent than you is a failure mode, and furthermore it's a failure mode that seemingly justifies retrospective condemnatory judgments of the form "you had all this evidence handed to you by a transhumanly intelligent entity and you chose to ignore it?" even if in some fundamental sense those judgments aren't themselves "moral". An important note: saying "supernaturalism is silly, therefore I don't even have to accept the premises of that whole line of reasoning" runs into some serious Aumann problems, much more serious than can be casually cast aside, especially if you have a Pascalian argument ready to pounce.

  2. The second group of claims is more philosophical and meta-ethical, and is emphasized more in intellectually advanced forms of Christianity, e.g. Scholasticism. One take on the main idea is that there is something like an eternal moral-esque standard etched into the laws of decision theoretic logic any deviations from which will result in pointless self-defeat. You will sometimes see it claimed that it isn't that God is punishing you as such, it's that you have knowingly chosen to distance yourself from the moral law and have thus brought ruin upon yourself. To some extent I think it's merely a difference of framing born of Christianity's attempts to gain resonance with different parts of default human psychology, i.e. something like third party game theoretic punishment-aversion/credit-seeking on one hand and first person decision theoretic regret-minimization on the other. [This branch needs a lot more fleshing out, but I'm too tired to continue.]

But note that in early Christian writings especially and in relatively modern Christian polemic, you'll get a mess of moralism founded on insight into the nature of human psychology, theological speculation, supernatural evidence, appeals to intuitive Aumancy, et cetera. [Too tired to integrate this line of thought into the broader structure of my comment.]

Replies from: TobyBartels
comment by TobyBartels · 2014-08-09T23:04:55.274Z · LW(p) · GW(p)

I want to vote this up to encourage posting good comments even when incompletely polished; but since you formally retracted this, I can't.

comment by bogdanb · 2013-07-13T20:28:10.684Z · LW(p) · GW(p)

If you take the outside view, and account for the fact that sixty-something percent of people don’t believe in Christianity, it seems like (assuming you just learned that fact) you should update (a bit) towards Christianity not being true.

If you did know the percentages already, they should be already integrated in your priors, together with everything else you know about the subject.

Note that the majority of numbers are not prime. But if you write a computer program (assuming you’re quite good at it) and it tells you 11 is prime, you should probably assign a high probability to it being prime, even though the program might have a bug.
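
As a rough worked version of that update (all numbers here are made up for illustration): take a random number from 1 to 100, so the base rate of primality is 25%, and suppose the hypothetical program is right 99% of the time either way.

```python
# The base rate ("most numbers aren't prime") lives in the prior; a reliable
# program's verdict still swamps it. Illustrative numbers only.
prior_prime = 25 / 100                 # primes among 1..100
p_report_given_prime = 0.99            # program says "prime" when it is
p_report_given_composite = 0.01        # program says "prime" when it isn't

posterior = (p_report_given_prime * prior_prime) / (
    p_report_given_prime * prior_prime
    + p_report_given_composite * (1 - prior_prime)
)
print(round(posterior, 2))             # ~0.97
```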

comment by linkhyrule5 · 2013-07-21T02:09:24.126Z · LW(p) · GW(p)

Can someone explain "reflective consistency" to me? I keep thinking I understand what it is and then finding out that no, I really don't. A rigorous-but-English definition would be ideal, but I would rather parse logic than get a less rigorous definition.

comment by A1987dM (army1987) · 2013-07-14T01:10:55.043Z · LW(p) · GW(p)

The people who think that nanobots will be able to manufacture arbitrary awesome things in arbitrary amounts at negligible costs... where do they think the nanobots will take the negentropy from?

Replies from: James_Miller
comment by James_Miller · 2013-07-14T02:29:01.583Z · LW(p) · GW(p)

The sun.

Replies from: CronoDAS
comment by CronoDAS · 2013-07-14T08:31:33.500Z · LW(p) · GW(p)

Almost all the available energy on Earth originally came from the Sun; the only other sources I know of are radioactive elements within the Earth and the rotation of the Earth-Moon system.

So even if it's not from the sun's current output, it's probably going to be from the sun's past output.

Replies from: None, hylleddin
comment by [deleted] · 2013-07-15T08:28:25.358Z · LW(p) · GW(p)

Hydrogen for fusion is also available on the Earth and didn't come from the Sun. We can't exploit it commercially yet, but that's just an engineering problem. (Yes, if you want to be pedantic, we need primordial deuterium and synthesized tritium, because proton-proton fusion is far beyond our capabilities. However, D-T's ingredients still don't come from the Sun.)

Replies from: CronoDAS
comment by CronoDAS · 2013-07-15T08:32:34.140Z · LW(p) · GW(p)

Yes. Good call.

comment by hylleddin · 2013-07-15T08:20:12.531Z · LW(p) · GW(p)

They could probably get a decent amount from fusing light elements as well.

comment by Rukifellth · 2013-07-13T20:09:48.657Z · LW(p) · GW(p)

Just now rushes onto Less Wrong to ask about taking advantage of 4chan's current offer of customized ad space to generate donations for MIRI

Sees thread title

Perfect.

So, would it be a good idea? The sheer volume of 4chan's traffic makes it a decent pool for donations, and given the attitude of its demographic, it might be possible to pitch the concept in an appealing way.

Replies from: Tenoke
comment by Tenoke · 2013-07-13T20:26:59.947Z · LW(p) · GW(p)

Linking to MIRI's donation page might be useful but please please don't link to LessWrong on 4chan - it could have some horrible consequences.

Replies from: iceman, NancyLebovitz
comment by iceman · 2013-07-13T23:17:38.696Z · LW(p) · GW(p)

LessWrong has been linked to multiple times, at least from the /mlp/ board. (Friendship is Optimal may be a proximate cause for most of these links...)

Replies from: Tenoke
comment by Tenoke · 2013-07-13T23:19:47.025Z · LW(p) · GW(p)

Ads have the potential to drive in a lot more traffic (especially in a negative way) than posts.

comment by NancyLebovitz · 2013-07-13T23:08:03.081Z · LW(p) · GW(p)

What's the likelihood of 4chan finding LW by way of MIRI?

Replies from: gwern, Tenoke
comment by gwern · 2013-07-13T23:22:36.948Z · LW(p) · GW(p)

Around zero:

Replies from: Adele_L
comment by Adele_L · 2013-07-14T03:34:12.305Z · LW(p) · GW(p)

Someone on 4chan is likely to know of MIRI's connection to LW, and point that out there.

comment by Tenoke · 2013-07-13T23:18:16.705Z · LW(p) · GW(p)

I would assume that there is a good chance of a few 4chan members finding LW through MIRI, but a fairly small chance that a large enough number to cause problems will.

Replies from: ChristianKl
comment by ChristianKl · 2013-07-15T09:09:47.185Z · LW(p) · GW(p)

A few members of 4chan that want to cause problems can encourage other members to go along with them.

comment by NancyLebovitz · 2013-07-13T11:31:10.261Z · LW(p) · GW(p)

How do people construct priors? Is it worth trying to figure out how to construct better priors?

Replies from: shminux, jmmcd, JQuinton, evand, None, Benito, MrMind, lukeprog, Skeeve
comment by Shmi (shminux) · 2013-07-13T17:07:43.401Z · LW(p) · GW(p)

How do people construct priors?

They make stuff up, mostly, from what I see here. Some even pretend that "epsilon" is a valid prior.

Is it worth trying to figure out how to construct better priors?

Definitely. Gwern recommends PredictionBook as a practice to measure and improve your calibration.

comment by jmmcd · 2013-07-13T21:41:33.909Z · LW(p) · GW(p)

I don't think it's useful to think about constructing priors in the abstract. If you think about concrete examples, you see lots of cases where a reasonable prior is easy to find (eg coin-tossing, and the typical breast-cancer diagnostic test example). That must leave some concrete examples where good priors are hard to find. What are they?

comment by JQuinton · 2013-07-16T13:38:37.209Z · LW(p) · GW(p)

I have a question that relates to this one. If I'm not good at constructing priors, is going with agnosticism/50% recommended?

comment by evand · 2013-07-14T03:30:30.955Z · LW(p) · GW(p)

Depends on the context. In the general, abstract case, you end up talking about things like ignorance priors and entropy maximization. You can also have sets of priors that penalize more complex theories and reward simple ones; that turns into Solomonoff induction and Kolmogorov complexity and stuff like that when you try to formalize it.

In actual, practical cases, people usually try to answer a question that sounds a lot like "from the outside view, what would a reasonable guess be?". The distinction between that and a semi-educated guess can be somewhat fuzzy. In practice, as long as your prior isn't horrible and you have plenty of evidence, you'll end up somewhere close to the right conclusion, and that's usually good enough.

Of course, there are useful cases where it's much easier to have a good prior. The prior on your opponent having a specific poker hand is pretty trivial to construct; one of a set of hands meeting a characteristic is a simple counting problem (or an ignorance prior plus a complicated Bayesian update, since usually "meeting a characteristic" is a synonym for "consistent with this piece of evidence").
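
A quick sketch of the poker case (assuming Texas hold'em two-card starting hands and that you have seen no other cards; the "characteristic" picked here, pocket pairs, is just an example):

```python
from math import comb
from itertools import combinations

deck = [(rank, suit) for rank in range(2, 15) for suit in "shdc"]  # 52 cards

total_hands = comb(52, 2)              # 1326 possible starting hands
p_specific = 1 / total_hands           # prior on one exact hand

# "Hands meeting a characteristic" is a counting problem, e.g. pocket pairs:
pocket_pairs = sum(1 for a, b in combinations(deck, 2) if a[0] == b[0])
p_pocket_pair = pocket_pairs / total_hands   # 78 / 1326 ≈ 0.059

print(total_hands, p_specific, round(p_pocket_pair, 3))
```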

comment by [deleted] · 2013-07-13T17:39:21.016Z · LW(p) · GW(p)

A better prior is a worse (but not useless) prior plus some evidence.

You construct a usable prior by making damn sure that the truth has non-exponentially-tiny probability, such that with enough evidence, you will eventually arrive at the truth.

From the inside, the best prior you could construct is your current belief dynamic (ie. including how you learn).

From the outside, the best prior is the one that puts 100% probability on the truth.

comment by Ben Pace (Benito) · 2013-07-13T17:15:48.915Z · LW(p) · GW(p)

I don't know how much this answers your question.

From LessWrong posts such as 'Created Already in Motion' and 'Where Recursive Justification Hits Bottom' I've come to see that humans are born with priors (the post 'inductive bias' is also related, where an agent must have some sort of prior to be able to learn anything at all ever - a pebble has no priors, but a mind does, which means it can update on evidence. What Yudkowsky calls a 'philosophical ghost of perfect emptiness' is other people's image of a mind with no prior, suddenly updating to have a map that perfectly reflects the territory. Once you have a thorough understanding of Bayes Theorem, this is blatantly impossible/incoherent).

So, we're born with priors about the environment, and then our further experiences give us new priors for our next experiences.

Of course, this is all rather abstract, and if you'd like to have a guide to actually forming priors about real life situations that you find confusing... Well, put in an edit, maybe someone can give you that :-)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-13T17:27:04.870Z · LW(p) · GW(p)

I don't have a specific situation in mind, it's just that priors from nowhere make me twitch-- I have the same reaction to the idea that mathematical axioms are arbitrary. No, they aren't! Mathematicians have to have some way of choosing axioms which lead to interesting mathematics.

At the moment, I'm stalking the idea that priors have a hierarchy or possibly some more complex structure, and being confused means that you suspect you have to dig deep into your structure of priors. Being surprised means that your priors have been attacked on a shallow level.

Replies from: Benito
comment by Ben Pace (Benito) · 2013-07-13T17:33:42.694Z · LW(p) · GW(p)

What do you mean 'priors from nowhere'? The idea that we're just born with a prior, or people just saying 'this is my prior, and therefore a fact' when given some random situation (that was me paraphrasing my mum's 'this is my opinion, and therefore a fact').

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-13T18:33:02.993Z · LW(p) · GW(p)

More like "here are the priors I'm plugging into the bright and shiny Bayes equation", without any indication of why the priors were plausible enough to be worth bothering with.

Replies from: jsalvatier, Benito
comment by jsalvatier · 2013-07-13T23:39:08.711Z · LW(p) · GW(p)

In Bayesian statistics there's the concept of 'weakly informative priors', which are priors that are quite broad and conservative, but don't concentrate almost all of their mass on values that no one thinks are plausible. For example, if I'm estimating the effect of a drug, I might choose priors that give low mass to biologically implausible effect sizes. If it's a weight gain drug, perhaps I'd pick a normal distribution with less than 1% probability mass for more than 100% weight increase or 50% weight decrease. Still pretty conservative, but mostly captures people's intuitions of what answers would be crazy.

Andrew Gelman has some recent discussion here.

Sometimes this is pretty useful, and sometimes not. It's going to be most useful when you have not much evidence, and also when your model is not well constrained along some dimensions (such as when you have multiple sources of variance). It's also going to be useful when there are a ton of answers that seem implausible.
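
A tiny sketch of that kind of prior choice (the mean and spread here are assumptions picked for illustration, not jsalvatier's actual numbers): measuring the drug's effect as percent change in body weight, a Normal(0, 20) prior puts well under 1% mass on a more-than-100% gain or a more-than-50% loss.

```python
from statistics import NormalDist

prior = NormalDist(mu=0, sigma=20)     # effect size in % weight change
print(1 - prior.cdf(100))              # P(effect > +100% gain)  ~ 3e-7
print(prior.cdf(-50))                  # P(effect < -50% loss)   ~ 0.006
```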

comment by Ben Pace (Benito) · 2013-07-13T21:31:00.577Z · LW(p) · GW(p)

The extent of my usefulness here is used up.

Related Hanson paper: http://hanson.gmu.edu/prior.pdf

comment by MrMind · 2013-07-15T09:10:45.564Z · LW(p) · GW(p)

I don't know if you meant "people" in a generalized sense, meaning "every rational probability user", or more in the sense of "the common wo/men".
If in the first sense, there are different principles you can use that depend on what you already know to be true: the indifference principle, Laplace's succession rule, maximum entropy, group invariance, Solomonoff induction, etc., and possibly even more. It should be an active area of research in probability theory (if it's not, shame on you, researchers!). As a general principle, the ideal prior is the most inclusive prior that is not ruled out by the information (you consider true). Even after that, you want to be very careful not to let any proposition be 0 or 1, because outside of mathematical idealization, everybody is imperfect and has access only to imperfect information.
If, otherwise, you meant "the common person in the street", then I can only say that what I see used overwhelmingly is the bias of authority and generalization from one example. After all, "construct prior" just means "decide what is true and to what degree".
"Constructing better priors" amounts to not using information we don't have, avoiding the mind projection fallacy, and using the information we do have to construct an informed model of the world. It is indeed worth trying to figure out how to be better at those things, but not as much as in an idealized setting. Since we have access only to inconsistent information, it is sometimes the case that we must completely discard what we held to be true, a case that doesn't happen in pure probability theory.

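To make one of the rules MrMind lists concrete, here is Laplace's rule of succession: starting from a uniform (indifference) prior over an unknown success rate, after s successes in n trials the probability of success on the next trial is (s + 1) / (n + 2). A minimal sketch:

```python
def laplace_next_success(successes: int, trials: int) -> float:
    """Posterior predictive of success under a uniform prior on the rate."""
    return (successes + 1) / (trials + 2)

print(laplace_next_success(0, 0))    # 0.5  -- no data yet: indifference
print(laplace_next_success(9, 10))   # ~0.83 -- confident but not certain
```
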
comment by lukeprog · 2013-07-15T00:50:10.479Z · LW(p) · GW(p)

You can construct a variety of priors and then show that some of them have more intuitive implications than others. See e.g. the debate about priors in this post, and in the comment threads of the posts it follows up on.

comment by Skeeve · 2013-07-13T16:48:56.516Z · LW(p) · GW(p)

The Handbook of Chemistry and Physics?

But seriously, I have no idea either, other than 'eyeball it', and I'd like to see how other people answer this question too.

comment by FiftyTwo · 2013-07-21T00:13:57.933Z · LW(p) · GW(p)

Is it possible to train yourself in the Big Five personality traits? Specifically, conscientiousness seems to be correlated with a lot of positive outcomes, so a way of actively promoting it would seem a very useful trick to learn.

Replies from: gwern
comment by gwern · 2013-07-21T00:42:21.834Z · LW(p) · GW(p)

Not that I know of. The only current candidate is to take psilocybin to increase Openness, but the effect is relatively small, it hasn't been generalized outside the population of "people who would sign up to take psychedelics", and hasn't been replicated at all AFAIK (and for obvious reasons, there may never be a replication). People speculate that dopaminergic drugs like the amphetamines may be equivalent to an increase in Conscientiousness, but who knows?

comment by [deleted] · 2013-07-17T04:21:00.741Z · LW(p) · GW(p)

What can be done about akrasia probably caused by anxiety?

Replies from: wedrifid, drethelin, FiftyTwo
comment by wedrifid · 2013-07-17T05:39:51.354Z · LW(p) · GW(p)

What can be done about akrasia probably caused by anxiety?

  • Exercise.
  • Meditation.
  • Aniracetam.
  • Phenibut.
  • Nicotine.
  • Cerebrolysin.
  • Picamilon.
  • As appropriate, stop exposing yourself to toxic stimulus that is causing anxiety.
  • Use generic tactics that work on most akrasia independent of cause.
comment by drethelin · 2013-07-17T05:24:48.724Z · LW(p) · GW(p)

From what I've seen valium helps to some extent.

comment by FiftyTwo · 2013-07-21T00:33:17.342Z · LW(p) · GW(p)

Depending on the severity of the anxiety professional intervention may be necessary.

comment by [deleted] · 2013-07-14T17:54:40.668Z · LW(p) · GW(p)

How does a rational consequentialist altruist think about moral luck and butterflies?

http://leftoversoup.com/archive.php?num=226

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-14T18:03:20.620Z · LW(p) · GW(p)

There's no point in worrying about the unpredictable consequences of your actions because you have no way of reliably affecting them by changing your actions.

comment by Sarokrae · 2013-07-14T09:23:19.051Z · LW(p) · GW(p)

In the process of trying to pin down my terminal values, I've discovered at least 3 subagents of myself with different desires, as well as my conscious one which doesn't have its own terminal values, and just listens to theirs and calculates the relevant instrumental values. Does LW have a way for the conscious me to weight those (sometimes contradictory) desires?

What I'm currently using is "the one who yells the loudest wins", but that doesn't seem entirely satisfactory.

Replies from: D_Malik, Qiaochu_Yuan, someonewrongonthenet
comment by D_Malik · 2013-07-15T23:10:25.189Z · LW(p) · GW(p)

My current approach is to make the subagents more distinct/dissociated, then identify with one of them and try to destroy the rest. It's working well, according to the dominant subagent.

Replies from: Sarokrae
comment by Sarokrae · 2013-07-17T07:24:04.618Z · LW(p) · GW(p)

My other subagents consider that such an appalling outcome that my processor agent refuses to even consider the possibility...

Though given this, it seems likely that I do have some degree of built-in weighting, I just don't realise what it is yet. That's quite reassuring.

Edit: More clarification in case my situation is different from yours: my 3 main subagents have such different aims that each of them evokes a "paper-clipper" sense of confusion in the others. Also, a likely reason why I refuse to consider it is because all of them are hard-wired into my emotions, and my emotions are one of the inputs my processing takes. This doesn't bode well for my current weighting being consistent (and Dutch-book-proof).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-17T13:58:47.057Z · LW(p) · GW(p)

What does your processor agent want?

Replies from: Sarokrae
comment by Sarokrae · 2013-07-18T10:20:58.947Z · LW(p) · GW(p)

I'm not entirely sure. What questions could I ask myself to figure this out? (I suspect figuring this out is equivalent to answering my original question)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-20T11:51:43.605Z · LW(p) · GW(p)

What choices does your processor agent tend to make? Under what circumstances does it favor particular sub-agents?

Replies from: Sarokrae
comment by Sarokrae · 2013-07-21T20:21:23.259Z · LW(p) · GW(p)

"Whichever subagent currently talks in the "loudest" voice in my head" seems to be the only way I could describe it. However, "volume" doesn't lend to a consistent weighting because it varies, and I'm pretty sure varies depending on hormone levels amongst many things, making me easily dutch-bookable based on e.g. time of month.

comment by Qiaochu_Yuan · 2013-07-14T17:50:16.370Z · LW(p) · GW(p)

My understanding is that this is what Internal Family Systems is for.

Replies from: Sarokrae
comment by Sarokrae · 2013-07-15T07:04:37.178Z · LW(p) · GW(p)

So I started reading this, but it seems a bit excessively presumptuous about what the different parts of me are like. It's really not that complicated: I just have multiple terminal values which don't come with a natural weighting, and I find balancing them against each other hard.

comment by someonewrongonthenet · 2013-08-18T14:33:08.270Z · LW(p) · GW(p)

briefly describe the "subagents" and their personalities/goals?

Replies from: Sarokrae
comment by Sarokrae · 2013-08-18T17:52:11.601Z · LW(p) · GW(p)

A non-exhaustive list of them in very approximate descending order of average loudness:

  • Offspring (optimising for existence, health and status thereof. This is my most motivating goal right now and most of my actions are towards optimising for this, in more or less direct ways.)

  • Learning interesting things

  • Sex (and related brain chemistry feelings)

  • Love (and related brain chemistry feelings)

  • Empathy and care for other humans

  • Prestige and status

  • Epistemic rationality

  • Material comfort

I notice the problem mainly as the loudness of "Offspring" varies based on hormone levels, whereas "Learning new things" doesn't. In particular when I optimise almost entirely for offspring, cryonics is a waste of time and money, but on days where "learning new things" gets up there it isn't.

comment by Pablo (Pablo_Stafforini) · 2013-07-13T14:07:36.696Z · LW(p) · GW(p)

Why is average utilitarianism popular among some folks here? The view doesn't seem to be at all popular among professional population ethicists.

Replies from: Manfred, Scott Garrabrant
comment by Manfred · 2013-07-13T19:36:41.006Z · LW(p) · GW(p)

Don't think it is.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2013-07-15T13:38:47.021Z · LW(p) · GW(p)

What specifically do you disagree with?
I think Pablo is correct that average utilitarianism is much more popular here than among philosophers.

Replies from: wedrifid
comment by wedrifid · 2013-07-15T15:01:57.585Z · LW(p) · GW(p)

What specifically do you disagree with?

The words only make sense if parsed as disagreement with the claim that average utilitarianism is popular here.

I think Pablo is correct that average utilitarianism is much more popular here than among philosophers.

Perhaps, if you mean the difference between 'trivial' and 'negligible'.

comment by Scott Garrabrant · 2013-07-13T16:43:25.681Z · LW(p) · GW(p)

I don't like average utilitarianism, and I wasn't even aware that most folks here did, but I still have a guess as to why.

For many people, average utilitarianism is believed to be completely unachievable. There is no way to discover people's utility functions in a way that can be averaged together. You cannot get people to honestly report their utility functions, and further they can never even know them, because they have no way to normalize and figure out whether or not they actually care more than the person next to them.

However, a sufficiently advanced Friendly AI may be able to discover the true utility functions of everyone by looking into everyone's brains at the same time. This makes average utilitarianism an actual plausible option for a futurist, but complete nonsense for a professional population ethicist.

This is all completely a guess.

Replies from: wedrifid, Kaj_Sotala, kalium
comment by wedrifid · 2013-07-15T00:08:23.126Z · LW(p) · GW(p)

I don't like average utilitarianism, and I wasn't even aware that most folks here did, but I still have a guess as to why.

Most people here do not endorse average utilitarianism.

comment by Kaj_Sotala · 2013-07-14T19:30:52.821Z · LW(p) · GW(p)

For many people, average utilitarianism is believed to be completely unachievable. There is no way to discover peoples utility functions in a way that can be averaged together.

I thought "average utiltarianism" referred to something like "my utility function is computed by taking the average suffering and pleasure of all the people in the world", not "I would like the utility functions of everyone to be averaged together and have that used to create a world".

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2013-07-15T01:41:25.612Z · LW(p) · GW(p)

I think you are correct. That is what I meant, but I see how I misused the word "utility." The argument translates easily. Without an AI, we don't have any way to measure suffering and pleasure.

comment by kalium · 2013-07-14T05:58:06.045Z · LW(p) · GW(p)

This does not explain a preference for average utilitarianism over total utilitarianism. Avoiding the "repugnant conclusion" is probably a factor.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2013-07-15T01:38:54.373Z · LW(p) · GW(p)

I didn't even consider total utilitarianism in my response. Sorry. I think you are right about the "repugnant conclusion".

comment by Jayson_Virissimo · 2013-07-13T05:07:49.295Z · LW(p) · GW(p)

What experiences would you anticipate in a world where utilitarianism is true that you wouldn't anticipate in a world where it is false?

Replies from: shminux, Qiaochu_Yuan, Manfred
comment by Shmi (shminux) · 2013-07-13T05:20:38.071Z · LW(p) · GW(p)

In what sense can utiliarianism be true or false?

Replies from: None, CoffeeStain
comment by [deleted] · 2013-07-13T05:48:46.239Z · LW(p) · GW(p)

In the sense that we might want to use it or not use it as the driving principle of a superpowerful genie or whatever.

Casting morality as facts that can be true or false is a very convenient model.

Replies from: shminux, DanielLC
comment by Shmi (shminux) · 2013-07-13T06:46:23.288Z · LW(p) · GW(p)

I don't think most people agree that useful = true.

Replies from: None, AspiringRationalist
comment by [deleted] · 2013-07-13T17:24:37.665Z · LW(p) · GW(p)

Woah there. I think we might have a containment failure across an abstraction barrier.

Modelling moral propositions as facts that can be true or false is useful (same as with physical propositions). Then, within that model, utilitarianism is false.

"Utilitarianism is false because it is useful to believe it is false" is a confusion of levels, IMO.

Replies from: shminux
comment by Shmi (shminux) · 2013-07-13T18:22:30.500Z · LW(p) · GW(p)

Modelling moral propositions as facts that can be true or false is useful

Sure, sometimes it is, depending on your goals. For example, if you start a religion, modeling certain moral proposition as true is useful. If you run a country, proclaiming the patriotic duty as a moral truth is very useful.

In the sense that we might want to use it or not use it as the driving principle of a superpowerful genie or whatever.

I don't see how this answers my question. And certainly not the original question

What experiences would you anticipate in a world where utilitarianism is true that you wouldn't anticipate in a world where it is false?

Replies from: None
comment by [deleted] · 2013-07-13T18:36:59.768Z · LW(p) · GW(p)

Sure, sometimes it is, depending on your goals. For example, if you start a religion, modeling certain moral proposition as true is useful. If you run a country, proclaiming the patriotic duty as a moral truth is very useful.

I meant model::useful, not memetic::useful.

I don't see how this answers my question. And certainly not the original question

It doesn't answer the original question. You asked in what sense it could be true or false, and I answered that it being "true" corresponds to it being a good idea to hand it off to a powerful genie, as a proxy test for whether it is the preference structure we would want. I think that does answer your question, albeit with some clarification. Did I misunderstand you?

As for the original question, in a world where utilitarianism were "true", I would expect moral philosophers to make judgments that agreed with it, for my intuitions to find it appealing as opposed to stupid, and so on.

Naturally, this correspondence between "is" facts and "ought" facts is artificial and no more or less justified than eg induction; we think it works.

comment by NoSignalNoNoise (AspiringRationalist) · 2013-07-13T15:30:50.001Z · LW(p) · GW(p)

Not explicitly, but most people tend to believe what their evolutionary and cultural adaptations tell them it's useful to believe and don't think too hard about whether it's actually true.

comment by DanielLC · 2013-07-13T20:49:28.867Z · LW(p) · GW(p)

If we use deontology, we can control the genie. If we use utilitarianism, we can control the world. I'm more interested in the world than the genie.

Replies from: None, None
comment by [deleted] · 2013-07-14T16:43:30.565Z · LW(p) · GW(p)

utilitarianism

Be careful with that word. You seem to be using it to refer to consequentialism, but "utilitarianism" usually refers to a much more specific theory that you would not want to endorse simply because it's consequentialist.

comment by [deleted] · 2013-07-14T05:04:07.645Z · LW(p) · GW(p)

?

What do you mean by utilitarianism?

Replies from: DanielLC
comment by DanielLC · 2013-07-15T03:52:53.664Z · LW(p) · GW(p)

I mean that the genie makes his decisions based on the consequences of his actions. I guess consequentialism is technically more accurate. According to Wikipedia, utilitarianism is a subset of it, but I'm not really sure what the difference is.

Replies from: None
comment by [deleted] · 2013-07-16T02:24:04.443Z · LW(p) · GW(p)

Ok. Yeah, "Consequentialism" or "VNM utilitarianism" is usually used for that concept, to distinguish it from the moral theory that says you should make choices consistent with a utility function constructed by some linear aggregation of "welfare" or whatever across all agents.

It would be a tragedy to adopt Utilitarianism just because it is consequentialist.

Replies from: DanielLC, Eugine_Nier
comment by DanielLC · 2013-07-16T04:40:40.185Z · LW(p) · GW(p)

I get consequentialism. It's Utilitarianism that I don't understand.

comment by Eugine_Nier · 2013-07-17T02:54:24.199Z · LW(p) · GW(p)

Minor nitpick: Consequentialism =/= VNM utilitarianism

Replies from: None
comment by [deleted] · 2013-07-17T04:36:32.620Z · LW(p) · GW(p)

Right, they are different. A creative rereading of my post could interpret it as talking about two concepts DanielLC might have meant by "utilitarianism".

comment by CoffeeStain · 2013-07-14T00:03:27.242Z · LW(p) · GW(p)

It seems to me that people who find utilitarianism intuitive do so because they understand the strong mathematical underpinnings. Sort of like how Bayesian networks determine the probability of complex events, in that Bayes theorem proves that a probability derived any other way forces a logical contradiction. Probability has to be Bayesian, even if it's hard to demonstrate why; it takes more than a few math classes.

In that sense, it's as possible for utilitarianism to be false as it is for probability theory (based on Bayesian reasoning) to be false. If you know the math, it's all true by definition, even if some people have arguments (or to be LW-sympathetic, think they do).

Utilitarianism would be false if such arguments existed. Most people try to create them by concocting scenarios in which the results obtained by utilitarian thinking lead to bad moral conclusions. But the claim of utilitarianism is that each time this happens, somebody is doing the math wrong, or else it wouldn't, by definition and maths galore, be the conclusion of utilitarianism.

comment by Qiaochu_Yuan · 2013-07-13T06:11:34.601Z · LW(p) · GW(p)

In the former world, I anticipate that making decisions using utilitarianism would leave me satisfied upon sufficient reflection, and more reflection after that wouldn't change my opinion. In the latter world, I don't.

Replies from: shminux
comment by Shmi (shminux) · 2013-07-13T18:28:45.627Z · LW(p) · GW(p)

So you defined true as satisfactory? What if you run into a form of the repugnant conclusion, as most forms of utilitarianism do - does that mean that utilitarianism is false? Furthermore, if you compare consequentialism, virtue ethics and deontology by this criterion, some or all of them can turn out to be "true" or "false", depending on where your reflection leads you.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-13T21:19:31.660Z · LW(p) · GW(p)

Yep. Yep. Yep.

comment by Manfred · 2013-07-13T21:14:12.631Z · LW(p) · GW(p)

What experiences would you anticipate in a world where chocolate being tasty is true that you wouldn't anticipate in a world where it is false?

Replies from: NancyLebovitz, Eugine_Nier
comment by NancyLebovitz · 2013-07-13T22:58:21.184Z · LW(p) · GW(p)

A large chocolate industry in the former, and chocolate desserts as well. In the latter, there might be a chocolate industry if people discover that chocolate is useful as a supplement, but chocolate extracts would be sold in such a way as to conceal their flavor.

comment by Eugine_Nier · 2013-07-15T01:18:13.101Z · LW(p) · GW(p)

A tasty experience whenever I eat chocolate.

comment by OnTheOtherHandle · 2013-07-21T02:27:03.348Z · LW(p) · GW(p)

I'd like to use a prediction book to improve my calibration, but I think I'm failing at a more basic step: how do you find some nice simple things to predict, which will let you accumulate a lot of data points? I'm seeing predictions about sports games and political elections a lot, but I don't follow sports and political predictions both require a lot of research and are too few and far between to help me. The only other thing I can think of is highly personal predictions, like "There is a 90% chance I will get my homework done by X o'clock", but what are some good areas to test my prediction abilities on where I don't have the ability to change the outcome?

Replies from: gwern
comment by gwern · 2013-07-21T03:52:29.391Z · LW(p) · GW(p)

Start with http://predictionbook.com/predictions/future

Predictions you aren't familiar with can be as useful as ones you are: you calibrate yourself under extreme uncertainty, and sometimes you can 'play the player' and make better predictions that way (works even with personal predictions by other people).
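
One concrete way to score yourself once predictions resolve, whatever their topic, is a Brier score; the probabilities and outcomes below are made-up examples, not anyone's actual track record:

```python
# (probability assigned, whether it came true)
predictions = [(0.9, True), (0.7, False), (0.6, True), (0.95, True), (0.5, False)]

brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
print(brier)   # lower is better; always saying 50% scores exactly 0.25
```

Binning predictions by stated probability and comparing each bin to its observed frequency gives the kind of calibration curve that sites like PredictionBook display.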

comment by bogdanb · 2013-07-20T17:05:49.778Z · LW(p) · GW(p)

I keep hearing about all sorts of observations that seem to indicate Mars once had oceans (the latest was a geological structure that resembles Earth river deltas). But on first sight it seems like old dried up oceans should be easy to notice due to the salt flats they’d leave behind. I’m obviously making an assumption that isn’t true, but I can’t figure out which. Can anyone please point out what I’m missing?

As far as I can tell, my assumptions are:

1) Planets as similar to Earth as Mars is will have similarly high amounts of salt dissolved in their oceans, conditional on having oceans. (Though I don’t know why NaCl in particular is so highly represented in Earth’s oceans, rather than other soluble salts.)

2) Most processes that drain oceans will leave the salt behind, or at least those that are plausible on Mars will.

3) Very large flat areas with a thick cover of salt will be visible at least to orbiters even after some billions of years. This is the one that seems most questionable, but seems sound assuming:

3a) a large NaCl-covered region will be easily detectable with remote spectroscopy, and 3b) even geologically-long term asteroid bombardment will retain, over sea-and-ocean-sized areas of salt flats, concentrations of salt abnormally high, and significantly lower than on areas previously washed away.

Again, 3b sounds like the most questionable. But to a non-expert eye, Mars doesn't look like its surface was completely randomized. I mean, I know the first few (dozens?) of meters on the Moon are regolith, which basically means the surface was finely crushed and well-mixed, and I assume Mars would be similar though to a lesser extent. But this process seems to randomize mostly locally, not over the entire surface of the planet, and the fact that Mars has much more diverse forms of relief seems to support that.

Replies from: None
comment by [deleted] · 2013-07-24T01:00:15.729Z · LW(p) · GW(p)

It's not just NaCl, it's lots of minerals that get deposited as the water they were dissolved in goes away - they're called 'evaporites'. They can be hard to see if they are very old and get covered with other substances, and Mars has had a long time for wind to blow teeny sediments everywhere. Rock spectroscopy is also not nearly as straightforward as that of gases.

One of the things found by recent rovers is indeed minerals that are only laid down in moist environments. See http://www.giss.nasa.gov/research/briefs/gornitz_07/ , http://onlinelibrary.wiley.com/doi/10.1002/gj.1326/abstract .

As for amounts of salinity... Mars probably never had quite as much water as Earth had and it may have gone away quickly. The deepest parts of the apparent Northern ocean probably only had a few hundred meters at most. That also means less evaporites. Additionally a lot of the other areas where water seemed to flow (especially away from the Northern lowlands) seem to have come from massive eruptions of ground-water that evaporated quickly after a gigantic flood rather than a long period of standing water.

Replies from: bogdanb
comment by bogdanb · 2013-07-24T23:09:35.753Z · LW(p) · GW(p)

Thank you!

So, basically (3) was almost completely wrong, and (1) missed the fact that “ocean” doesn’t mean quite the same thing everywhere.

Could you explain (2) a little bit? I see in Earth seawater there’s about 15 times more NaCl by mass than other solutes. Is there an obvious reason for that, and is that Earth-specific?

Replies from: None
comment by [deleted] · 2013-07-25T02:59:02.722Z · LW(p) · GW(p)

I honestly don't know much about relative salinities of terrestrial versus Martian prospective oceans. I do know however that everywhere that's been closely sampled so far by rovers and landers has had lots of perchlorate (ClO4) salts in the soil, sometimes up to 0.5% of the mass. This can form when chloride salts react with surrounding minerals under the influence of ultraviolet light... nobody is terribly confident yet about what actually happened there to make them given that these results are new since the Phoenix lander and Spirit and Opportunity, but it's certainly interesting and suggestive.

I also think I should add that there is some evidence that a good chunk of Mars's water went underground - the topography of just about everything within ~30 or 40 degrees of the poles is indicative of crater walls slumping from shifting permafrost and there seems to be plenty of solid water in or under the soil there. The oceans may not have only dried up so long ago, they may have sunk downwards simultaneously.

comment by CronoDAS · 2013-07-14T21:30:28.599Z · LW(p) · GW(p)

Is it okay to ask completely off-topic questions in a thread like this?

Replies from: gothgirl420666
comment by gothgirl420666 · 2013-07-14T21:47:01.913Z · LW(p) · GW(p)

As the thread creator, I don't really care.

comment by Craig_Heldreth · 2013-07-13T21:54:49.218Z · LW(p) · GW(p)

Are there good reasons why when I do a google search on (Leary site:lesswrong.com) it comes up nearly empty? His ethos consisted of S.M.I².L.E., i.e. Space Migration + Intelligence Increase + Life Extension, which seems like it should be right up your alley to me. His books are not well-organized; his live presentations and tapes had some wide appeal.

Replies from: Manfred, Qiaochu_Yuan, KrisC, timtyler
comment by Manfred · 2013-07-13T22:46:12.721Z · LW(p) · GW(p)

Write up a discussion post with an overview of what you think we'd find novel :)

comment by Qiaochu_Yuan · 2013-07-14T00:14:16.234Z · LW(p) · GW(p)

I am generally surprised when people say things like "I am surprised that topic X has not come up in forum / thread Y yet." The set of all possible things forum / thread Y could be talking about is extremely large. It is not in fact surprising that at least one such topic X exists.

comment by KrisC · 2013-07-17T23:59:54.962Z · LW(p) · GW(p)

Leary won me over with those goals. I have adopted them as my own.

It's the 8 circuits and the rest of the mysticism I reject. Some of it rings true, some of it seems sloppy, but I doubt any of it is useful for this audience.

comment by timtyler · 2013-07-14T11:59:30.780Z · LW(p) · GW(p)

Are there good reasons why when I do a google search on (Leary site:lesswrong.com) it comes up nearly empty?

Probably an attempt to avoid association with druggie disreputables.

comment by advancedatheist · 2013-07-13T16:56:28.321Z · LW(p) · GW(p)

If, as Michael Rose argues, our metabolisms revert to hunter-gatherer functioning past our reproductive years so that we would improve our health by eating approximations of paleolithic diets, does that also apply to adaptations to latitudes different from the ones our ancestors lived in?

In my case, I have Irish and British ancestry (my 23andMe results seem consistent with family traditions and names showing my origins), yet my immediate ancestors lived for several generations in the Southern states at latitudes far south from the British Isles. Would I improve my health in middle age by moving to a more northerly latitude, adopting a kind of "paleo-latitude" relocation analogous to adopting paleolithic nutrition?

Replies from: RomeoStevens, NancyLebovitz
comment by RomeoStevens · 2013-07-15T21:56:23.212Z · LW(p) · GW(p)

The statement involves several dubious premises. The first is that we understand metabolism well enough to talk meaningfully about optimal functionality rather than just collect observations. The second is that this optimum corresponds to ancestral patterns. The third is that we know what these ancestral patterns are.

Most of the paleo variants I've seen include debunked claims.

comment by NancyLebovitz · 2013-07-13T17:43:25.603Z · LW(p) · GW(p)

Cardio-vascular disease becomes more common as you move away from the equator.

Replies from: bogdanb
comment by bogdanb · 2013-07-13T17:53:43.963Z · LW(p) · GW(p)

Yes, but is it genetic or environmental? In other words, do people who move away from the equator have more CVD, or do people whose ancestors lived further from the equator have more CVD?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-13T18:35:20.302Z · LW(p) · GW(p)

Australians have less CVD than people in the British Isles, so I'm betting on environmental.

Replies from: David_Gerard
comment by David_Gerard · 2013-07-14T22:41:47.741Z · LW(p) · GW(p)

The British diet is actually terrible, though, much worse than that of even British-descended people in Australia.

comment by Thomas · 2013-07-13T06:12:24.551Z · LW(p) · GW(p)

What is more precious - the tigers of India, or lives of all the people eaten every year by the tigers of India?

Replies from: pragmatist, drethelin, Tenoke
comment by pragmatist · 2013-07-13T09:54:06.380Z · LW(p) · GW(p)

A bit of quick Googling suggests that there are around 1500 tigers in India, and about 150 human deaths by tiger attack every year (that's the estimate for the Sundarbans region alone, but my impression is that tiger attack deaths outside the Sundarbans are negligible in comparison). Given those numbers, I would say that if the only way to prevent those deaths was to eliminate the tiger population and there wouldn't be any dire ecological consequences to the extinction, then I would support the elimination of the tiger population. But in actual fact, I am sure there are a number of ways to prevent most of those deaths without driving tigers to extinction, so the comparison of their relative values is a little bit pointless.

Replies from: None
comment by [deleted] · 2013-07-13T18:04:00.697Z · LW(p) · GW(p)

Ways as easy as sending a bunch of guys with rifles into the jungle?

Replies from: pragmatist, DanielLC
comment by pragmatist · 2013-07-13T18:12:37.956Z · LW(p) · GW(p)

The effort involved is not the only cost. Tigers are sentient beings capable of suffering. Their lives have value. Plus there is value associated with the existence of the species. The extinction of the Bengal tiger in the wild would be a tragedy, and not just because of all the trouble those guys with guns would have to go to.

Replies from: DanielLC, Sarokrae, NancyLebovitz
comment by DanielLC · 2013-07-13T20:56:09.348Z · LW(p) · GW(p)

While I would agree that their lives have value, it's not clear that it's positive value. Life in the wild is not like life in civilization. It sucks.

Also, the value of the lives they influence will most likely be more important than their lives. They eat other animals on a regular basis.

Life in the wild being what it is as opposed to what it could be is a tragedy. Life in the wild existing at all may well be a tragedy. Perhaps what we really ought to do is just burn down the wild, and make that way of life end.

comment by Sarokrae · 2013-07-14T09:09:16.935Z · LW(p) · GW(p)

Surely a more obvious cost is the vast number of people who like tigers and would be sad if they all died?

Replies from: pragmatist
comment by pragmatist · 2013-07-14T14:01:04.995Z · LW(p) · GW(p)

Eh, I bet most of them would get over it pretty quick. Also, I'm not a utilitarian.

comment by NancyLebovitz · 2013-07-13T18:30:53.559Z · LW(p) · GW(p)

Also, tigers are presumably having some ecological effect, so there might be costs to a tigerless region.

comment by DanielLC · 2013-07-13T20:51:54.822Z · LW(p) · GW(p)

You could legalize eating tiger. This would help prevent tiger extinction in the same way it prevented cow extinction, and it would result in sending some guys with rifles into the jungle whom you don't even have to pay for. If that's not enough, you can still send guys with rifles to finish off the wild population, and tigers will still be less likely to go extinct than if you do nothing.

Replies from: Adele_L, Atelos, J_Taylor
comment by Adele_L · 2013-07-14T03:43:27.806Z · LW(p) · GW(p)

This will prevent tiger extinction in the same way it prevented cow extinction,

There are lots of reasons why farming cows is significantly easier than farming tigers.

Replies from: DanielLC
comment by DanielLC · 2013-07-14T04:52:59.238Z · LW(p) · GW(p)

Tiger meat would be much more expensive than beef, but there's still enough of a market for it to keep tigers from going extinct.

Replies from: OphilaDros
comment by OphilaDros · 2013-07-14T16:31:39.973Z · LW(p) · GW(p)

Not all animals can be domesticated for meat production. Jared Diamond discusses the question in "Guns, Germs and Steel". He calls it the Anna Karenina principle, and some of the factors influencing this are:

  • Growth rate of the species
  • Breeding habits - do they tend to breed well in closed spaces
  • Nasty disposition
  • Social structure
Replies from: gwern
comment by gwern · 2013-07-14T16:44:41.507Z · LW(p) · GW(p)

All of those just increase the cost; certainly they can make things infeasible for hunter-gatherers with per capita incomes of maybe $300 a year generously. But they are of little interest to people with per capitas closer to $30,000 and who are willing to pay for tiger meat.

comment by Atelos · 2013-07-14T18:26:36.928Z · LW(p) · GW(p)

Sharks are legal to eat and this is a major factor in their current risk of extinction.

Replies from: Randy_M, Jayson_Virissimo, DanielLC
comment by Randy_M · 2013-07-15T16:24:12.843Z · LW(p) · GW(p)

Isn't extinction risk the goal here? (Not extinction per se, but population reduction down to the level it is no longer a threat. At least in this hypothetical.)

comment by Jayson_Virissimo · 2013-07-15T16:30:18.284Z · LW(p) · GW(p)

Sharks are not similar to tigers in that you can't (with current technology?) keep some types of them alive in captivity, but tigers you can. Legalizing eating tiger meat, though, without also legalizing tiger ranches (?) would not be of help in preventing extinction.

comment by DanielLC · 2013-07-15T03:57:48.660Z · LW(p) · GW(p)

Sharks are hard to farm, in that they have all the problems tigers have, but you also have to do it underwater. I also think sharks aren't as in demand as tigers. I've heard tiger meat is a popular snake oil. Or at least stuff that claims to contain tiger meat is.

Replies from: David_Gerard
comment by David_Gerard · 2013-07-15T07:35:02.652Z · LW(p) · GW(p)

In Australia, fish'n'chips is almost certainly shark.

comment by J_Taylor · 2013-07-17T22:48:50.248Z · LW(p) · GW(p)

You could legalize eating tiger.

Tiger parts have a variety of uses in Traditional Chinese Medicine. Making it legal to harvest these parts from farmed tigers would be a somewhat efficacious solution.

comment by drethelin · 2013-07-13T13:15:28.970Z · LW(p) · GW(p)

Insofar as we can preserve tigers as a species in zoos or with genetic material, I'd say the people are more valuable; but if killing these tigers would wipe out the species, the tigers are worth more.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-07-13T22:11:14.500Z · LW(p) · GW(p)

What about 1,500 people, instead of 150? 15,000? 150,000?

Replies from: drethelin
comment by drethelin · 2013-07-14T03:08:42.000Z · LW(p) · GW(p)

I haven't done the math. 1,500 people feels like a line, 15,000 people feels like enough.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2013-07-16T05:08:41.382Z · LW(p) · GW(p)

Wow, those are some hugely divergent preferences from mine. I'm pretty sure I would trade the lives of all the remaining tigers in India to keep my family alive. I'll have to update what I expect human-CEV to look like (given that our preferences are likely much closer than those of a randomly selected person [since we had to pass through many filters to end up on this web forum]).

Replies from: drethelin
comment by drethelin · 2013-07-16T05:12:08.447Z · LW(p) · GW(p)

I'd trade a lot more random humans for my family than I would for tigers.

comment by Tenoke · 2013-07-13T08:25:47.499Z · LW(p) · GW(p)

Depends on your utility function. There is nothing inherently precious about either. Although by my value system it is the humans.

comment by Carinthium · 2014-01-27T11:26:57.175Z · LW(p) · GW(p)

Given Eliezer Yudkowsky's Peggy Sue parody, is there anything inherently inane about the Peggy Sue fanfic type? If so, I've missed it: what is it?

Replies from: drethelin
comment by drethelin · 2014-04-10T00:31:00.807Z · LW(p) · GW(p)

It's similar to the Mary Sue problem and Eliezer's rule of fanfiction. If you give Harry Potter knowledge of the future, you have to give it to Voldemort too, but that can render both of their sets of knowledge irrelevant and become a self-referential clusterfuck to write, so probably just don't. If only the main character has knowledge of the future, it tends to become a Mary Sue fic where you replace power with knowledge.

https://www.fanfiction.net/s/9658524/1/Branches-on-the-Tree-of-Time is an example of a surprisingly well-written story that takes recursive time-travel and future knowledge to its logical conclusion.

comment by Daemon · 2013-09-17T00:13:21.146Z · LW(p) · GW(p)

How do you deal with the Münchhausen trilemma? It used to not bother me much, and I think my (axiomatic-argument based) reasoning was along the lines of "sure, the axioms might be wrong, but look at all the cool things that come out of them." The more time passes, though, the more concerned I become. So, how do you deal?

comment by OnTheOtherHandle · 2013-07-31T04:17:08.963Z · LW(p) · GW(p)

I have a question about the first logic puzzle here. The condition "Both sane and insane people are always perfectly honest, sane people have 100% true beliefs while insane people have 100% false beliefs" seems to be subtly different from Liar/Truth-teller. The Liar/Truth-teller thing is only activated when someone asks them a direct yes or no question, while in these puzzles the people are volunteering statements on their own.

My question is this: if every belief that an insane person holds is false, then does that also apply to beliefs about their beliefs? For example, an insane person may believe the sky is not blue, because they only believe false things. But does that mean that they believe they believe that the sky is blue, when in fact they believe that it is not blue? So all their meta-beliefs are just the inverse of their object-level beliefs? If all their beliefs are false, then their beliefs about their beliefs must likewise be false, making their meta-beliefs true on the object level, right? And then their beliefs about their meta-beliefs are again false on the object level?

But if that's true, it seems like the puzzle becomes too easy. Am I missing something or is the answer to that puzzle "Vs lbh jrer gb nfx zr jurgure V nz n fnar cngvrag, V jbhyq fnl lrf"?

Edit: Another thought occurred to me about sane vs. insane - it's specified that the insane people have 100% false beliefs, but it doesn't specify that these are exact negations of true beliefs. For example, rather than believing the sky is not-blue, an insane person might believe the sky doesn't even exist and his experience is a dream. For example, what would happen if you asked an insane patient whether he was a doctor? He might say no, not because he knew he was a patient but because he believed himself to be an ear of corn rather than a doctor.
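
As a toy check of the alternation described above, here is a minimal sketch of my own (not from the linked puzzle, and assuming the simple case where the insane patient's beliefs are exact negations of the truth, rather than the arbitrary false beliefs considered in the edit):

```python
# Minimal model: a sane agent believes exactly the true propositions,
# an insane agent believes exactly the false ones (exact-negation assumption).
def believes(sane, truth):
    return truth if sane else not truth

def believes_at_level(sane, p_true, level):
    # level 0: "agent believes P"; level 1: "agent believes he believes P"; etc.
    value = p_true
    for _ in range(level + 1):
        value = believes(sane, value)
    return value

# For an insane agent and a true P, beliefs alternate across meta-levels:
print([believes_at_level(False, True, k) for k in range(4)])
# [False, True, False, True] -- object-level belief false, meta-belief true, ...
```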

comment by [deleted] · 2013-07-27T06:46:02.124Z · LW(p) · GW(p)

Thank you for this thread - I have been reading a lot of the sequences here and I have a few stupid questions around FAI:

  1. What research has been done around frameworks for managing an AI’s information flow. For example just before an AI ‘learns’ it will likely be a piece of software rapidly processing information and trying to establish an understanding. What sort of data structures and processes have been experimented with to handle this information.

  2. Has there been an effort to build a dataset to classify (crowd source?) what humans consider “good”/”bad”, and specifically how these things could be used to influence the decision of an AI

  3. Regardless on how it could be implemented, what might be the safest set of goals for an AI - for something to evolve it seems that a drive is needed otherwise the program would not bother continuing. Could “help humanity” work if tied into point 2 which was a human controlled list of “things not to do”

comment by therufs · 2013-07-24T19:18:00.074Z · LW(p) · GW(p)

If I am interested in self-testing different types of diets (paleo, vegan, soylent, etc.), how long is a reasonable time to try each out?

I'm specifically curious about how a diet would affect my energy level and sense of well-being, how much time and money I spend on a meal, whether strict adherence makes social situations difficult, etc. I'm not really interested in testing to a point that nutrient deficiencies show up or to see how long it takes me to get bored.

Replies from: Lumifer
comment by Lumifer · 2013-07-24T19:42:36.672Z · LW(p) · GW(p)

I'd say about a month. I would expect that it takes your body 1-2 weeks to adjust its metabolism to the new diet and then you have a couple of weeks to evaluate the effects.

comment by CAE_Jones · 2013-07-21T03:14:47.458Z · LW(p) · GW(p)

I'd like to work on a hardware project. It seems rather simple (I'd basically start out trying to build this ( pdf / txt )), however, my lack of vision makes it difficult to just go check the nearest Radioshack for parts, and I'm also a bit concerned about safety issues (how easy would it be for someone without an electrical engineering background to screw up the current? Could I cause my headphone jack to explode? Etc). I'm mostly wondering how one should go about acquiring parts for DIY electronics, provided that travel options are limited. (I've done some Googling, but am uncertain on what exactly to look for. The categories "transformer" / "amplifier" / "electrode" / "insulator" are quite broad.)

comment by OnTheOtherHandle · 2013-07-20T18:31:58.264Z · LW(p) · GW(p)

I debated over whether to include this in the HPMOR thread, but it's not specific to that story, and, well, it is kind of a stupid question.

How does backwards-only time travel work? Specifically, wouldn't a time traveler end up with dozens of slightly older or younger versions of herself all living at the same time? I guess "Yes" is a perfectly acceptable answer, but I've just never really seen the consequences addressed. I mean, given how many times Harry has used the Time Turner in HPMOR (just a convenient example), I'm wondering if there are like 13 or 14 Harries just running around acting independently? Because with backwards-only time travel, how is there a stable loop?

Think about a situation with a six-hour Time Turner and three versions of the same person: A, A' (three hours older than A), and A'' (three hours older than A'). Let's say A' gets to work and realizes he forgot his briefcase. If he had a backwards and forwards time machine, he could pop into his home three hours ago and be back in literally the blink of an eye - and because he knows he could do this, he should then expect to see the briefcase already at his desk. Sure enough, he finds it, and three hours later he becomes A'', and goes back to plant the briefcase before the meeting. This mostly makes sense to me, because A'' would plant the briefcase and then return to his own time, through forwards time travel, rather than the slow path. A'' would never interact with A', and every version of A to reach the point of the meeting would be locked deterministically to act exactly as A' and A'' acted.

But I'm really confused about what happens if A has a Time Turner, that can go backwards but not forwards. Then, when A' realizes he forgot his briefcase, wouldn't there actually be two ways this could play out?

One, A' finds the briefcase at his desk, in which case three hours later, he would become A'' and then come back to plant the briefcase. But what does A'' do after he plants the briefcase? Can he do whatever he wants? His one job is over, and there's another version of him coming through from the past to live out his life - could A'' just get up and move to the Bahamas or become a secret agent or something, knowing that A' and other past versions would take care of his work and family obligations? Isn't he a full-blown new person that isn't locked into any kind of loop?

Two, A' doesn't find the briefcase at his desk, in which case he goes back three hours to remind A to take his briefcase - does that violate any time looping laws? A' never had someone burst in to remind him to take a briefcase, but does that mean he can't burst in on A now? A' can't jump back to the future and experience firsthand the consequences of having the briefcase. If he goes back to talk to A, isn't this just the equivalent of some other person who looks like you telling you not to forget your briefcase for work? Then A can get the briefcase and go to work, while A' can just...leave, right? And live whatever life he wants?

Am I missing something really obvious? I must be, because Harry never stops to consider the consequences of dozens of independently operating versions of himself out there in the world, even when there are literally three other versions of him passed out next to his chair. What happens to those three other Harries, and in general what happens with backwards-only time travel? Is there no need for forwards time travel to "close the circuit" and create a loop, instead of a line?

Replies from: shinoteki
comment by shinoteki · 2013-07-20T19:21:28.101Z · LW(p) · GW(p)

You don't need a time machine to go forward in time - you can just wait. A'' can't leave everything to A', because A' will disappear within three hours when he goes back to become A''. If A' knows A wasn't reminded, then A' can't remind A. The other three Harries use their Time Turners to go backwards and close the loop. You do need both forward and backward time travel to create a closed loop, but the forward time travel can just be waiting; it doesn't require a machine.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2013-07-20T20:13:17.213Z · LW(p) · GW(p)

I think I get it, but I'm still a bit confused, because both A' and A'' are moving forward at the same rate, which means since A'' started off older, A' will never really "catch up to" and become A'', because A'' continues to age. A'' is still three hours older than A', right, forever and ever?

To consider a weird example, what about a six hour old baby going back in time to witness her own birth? Once the fetus comes out, wouldn't there just be two babies, one six hours older than the other? Since they're both there and they're both experiencing time at a normal forward rate of one second per second, can't they just both grow up like siblings? If the baby that was just born waited an hour and went back to witness her own birth, she would see her six hour older version there watching her get born, and she would also see the newborn come out, and then there'd be three babies, age 0, age six hours, and age twelve hours, right?

How exactly would the "witnessing your own birth" thing play out with time travel? I think your explanation implies that there will never be multiple copies running around for any length of time, but why does A'' cease to exist once A' ages three hours? A'' has also aged three hours and become someone else in the meantime, right?

Replies from: shinoteki
comment by shinoteki · 2013-07-20T20:29:38.442Z · LW(p) · GW(p)

A' doesn't become A'' by catching up to him, he becomes A'' when he uses his time machine to jump back 3 hours.

There would be three babies for 6 hours, but then the youngest two would use their time machines and disappear into the past.

A'' doesn't cease to exist. A' "ceases to exist" because his time machine sends him back into the past to become A''.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2013-07-21T00:39:49.523Z · LW(p) · GW(p)

Oh! Alright, thank you. :) So if you go back and do something one hour in the past, then the loop closes an hour later, when the other version of yourself goes back for the same reasons you did, and now once again you are the only "you" at this moment in time. It's not A' that continues on with life leaving A'' off the hook, it is A'' who moves on while A' must go back. That makes much more sense.

Edit: This means it is always the oldest Harry that we see, right? The one with all the extra waiting around included in his age? Since all the other Harries are stuck in a six hour loop.
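
If it helps to see the bookkeeping spelled out, here is a minimal sketch (a toy model of my own, not canon): treat each use of a backwards-only time machine as a pair (departure time, hours jumped back); during the interval that gets re-lived there is exactly one extra copy present, and at the departure time the younger copy vanishes into the past, closing the loop.

```python
# Toy bookkeeping for backwards-only time travel (my own sketch, not canon).
# Each jump is (depart_time, hours_back): during [depart_time - hours_back,
# depart_time) one extra copy of the traveller is present, re-living that
# interval; at depart_time the younger copy departs into the past, so the
# count drops back down.

def copies_present(jumps, t):
    """How many copies of the traveller exist at external time t (in hours)."""
    return 1 + sum(1 for depart, back in jumps if depart - back <= t < depart)

# Briefcase example: the traveller jumps back 3 hours at hour 12.
jumps = [(12, 3)]
print(copies_present(jumps, 10))  # 2 -- the younger copy on the slow path plus the one who already jumped
print(copies_present(jumps, 13))  # 1 -- the loop has closed; only the oldest copy remains
```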

comment by lmnop · 2013-07-15T22:35:08.542Z · LW(p) · GW(p)

What are concrete ways that an unboxed AI could take over the world? People seem to skip from "UFAI created" to "UFAI rules the world" without explaining how the one must cause the other. It's not obvious to me that superhuman intelligence necessarily leads to superhuman power when constrained in material resources and allies.

Could someone sketch out a few example timelines of events for how a UFAI could take over the world?

Replies from: bramflakes, Qiaochu_Yuan
comment by bramflakes · 2013-07-15T23:40:22.313Z · LW(p) · GW(p)

If the AI can talk itself out of a box then it demonstrates it can manipulate humans extremely well. Once it has internet access, it can commandeer resources to boost its computational power. It can analyze thousands of possible exploits to access "secure" systems in a fraction of a second, and failing that, can use social engineering on humans to gain access instead. Gaining control over vast amounts of digital money and other capital would be trivial. This process compounds on itself until there is nothing else left over which to gain control.

That's a possible avenue for world domination. I'm sure that there are others.

Replies from: lmnop
comment by lmnop · 2013-07-16T14:41:56.570Z · LW(p) · GW(p)

Worst case scenario, can't humans just abandon the internet altogether once they realize this is happening? Declare that only physical currency is valid, cut off all internet communications and only communicate by means that the AI can't access?

Of course it should be easy for the AI to avoid notice for a long while, but once we get to "turn the universe into computronium to make paperclips" (or any other scheme that diverges from business-as-usual drastically) people will eventually catch on. There is an upper bound to the level of havoc the AI can wreak without people eventually noticing and resisting in the manner described above.

Replies from: bramflakes
comment by bramflakes · 2013-07-16T15:12:17.432Z · LW(p) · GW(p)

How exactly would the order to abandon the internet get out to everyone? There are almost no means of global communications that aren't linked to the internet in some way.

Replies from: lmnop
comment by lmnop · 2013-07-16T20:34:46.465Z · LW(p) · GW(p)

Government orders the major internet service providers to shut down their services, presumably :) Not saying that that would necessarily be easy to coordinate, nor that the loss of internet wouldn't cripple the global economy. Just that it seems to be a different order of risk than an extinction event.

My intuition on the matter was that an AI would be limited in its scope of influence to digital networks, and that its access to physical resources, e.g. labs, factories and the like, would be contingent on persuading people to do things for it. But everyone here is so confident that UFAI --> doom that I was wondering if there was some obvious and likely successful method of seizing control of physical resources that everyone else already knew and I had missed.

comment by Qiaochu_Yuan · 2013-07-16T00:52:58.436Z · LW(p) · GW(p)

Have you read That Alien Message?

Replies from: lmnop
comment by lmnop · 2013-07-16T21:01:22.272Z · LW(p) · GW(p)

No, but I read it just now, thank you for linking me. The example takeover strategy offered there was bribing a lab tech to assemble nanomachines (which I am guessing would then be used to facilitate some grey goo scenario, although that wasn't explicitly stated). That particular strategy seems a bit far-fetched, since nanomachines don't exist yet and we thus don't know their capabilities. However, I can see how something similar with an engineered pandemic would be relatively easy to carry out, assuming ability to fake access to digital currency (likely) and the existence of sufficiently avaricious and gullible lab techs to bribe (possible).

I was thinking in terms of "how could an AI rule humanity indefinitely" rather than "how could an AI wipe out most of humanity quickly." Oops. The second does seem like an easier task.

comment by wwa · 2013-07-15T19:00:47.851Z · LW(p) · GW(p)

Is true precommitment possible at all?

Human-wise this is an easy question; human will isn't perfect. But what about an AI? It seems to me that "true precommitment" would require the AI to come up with a probability 100% when it arrives at the decision to precommit, which means at least one prior was 100%, and that in turn means no update is possible for this prior.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-07-15T19:41:48.897Z · LW(p) · GW(p)

It seems to me that "true precommitment" would require the AI to come up with a probability 100% when it arrives at the decision to precommit

Why? Of what?

Replies from: D_Malik
comment by D_Malik · 2013-07-15T23:53:30.782Z · LW(p) · GW(p)

I think wwa means 100% certainty that you'll stick to the precommitted course of action. But that isn't what people mean when they say "precommitment", they mean deliberately restricting your own future actions in a way that your future self will regret or would have regretted had you not precommitted, or something like that. The restriction clearly can't be 100% airtight, but it's usually pretty close; it's a fuzzy category.

comment by advancedatheist · 2013-07-13T16:42:24.055Z · LW(p) · GW(p)

Why doesn't the Copernican Principle apply to inferences of the age and origins of the universe? Some cosmologists argue that we live in a privileged era of the universe when we can infer its origins because we can still observe the red shift of distant galaxies. After these galaxies pass beyond the event horizon, observers existing X billion years from now in our galaxy wouldn't have the data to deduce the universe's expansion, its apparent age, and therefore the Big Bang.

Yet the Copernican Principle denies the assumption that any privileged observers of the universe can exist. What if it turns out instead that the universe appears to have the same age and history, regardless of how much time passes according to how we measure it?

Replies from: Manfred, None, DanielLC, JoshuaZ
comment by Manfred · 2013-07-13T19:32:46.276Z · LW(p) · GW(p)

The copernican principle is a statement of ignorance - it's a caution against making up the claim that we're at the center of the universe. This is to be distinguished from the positive knowledge that the universe is a uniform blob.

comment by [deleted] · 2013-07-13T16:48:21.010Z · LW(p) · GW(p)

I suspect that when it comes to the evolution of the universe we are starting to run up against the edge of the reference class within which the Copernican principle acts, and are starting to see anthropic effects. See here - star formation rates are falling rapidly across the universe, and if big complicated biospheres only appear within a few gigayears of the formation of a star or not at all, then we expect to find ourselves near the beginning despite the universe being apparently open-ended. This would have the side-effect of us appearing during the 'privileged' era.

Replies from: advancedatheist
comment by advancedatheist · 2013-07-13T17:14:41.709Z · LW(p) · GW(p)

But again, why doesn't the Copernican Principle apply here? Perhaps all observers conclude that they live on the tail end of star formation, no matter how much time passes according to their ways of measuring time.

comment by DanielLC · 2013-07-13T21:12:36.935Z · LW(p) · GW(p)

First, the event horizon doesn't work that way. You will never see what Andromeda will look like a trillion years from now, but in a trillion years, you will see Andromeda. It's just that you'll see what it looked like a long time ago. You will eventually get to the point where it's too redshifted to see.

Second, the universe won't be able to support life forever, so it can be assumed that we'd exist before it gets to the point that it can no longer support life.

Replies from: Thomas
comment by Thomas · 2013-07-16T15:43:50.969Z · LW(p) · GW(p)

but in a trillion years, you will see Andromeda.

It will merge with the Milky Way in a few billion years. It will cease to exist as an independent galaxy nearby.

comment by JoshuaZ · 2013-08-03T17:30:51.978Z · LW(p) · GW(p)

Yet the Copernican Principle denies the assumption that any privileged observers of the universe can exist.

I don't think it denies the assumption in most forms. It might be better to state the Copernican Principle as assigning a low prior to our being privileged observers. That low prior can then be adjusted to a reasonable posterior based on evidence.

comment by Barry_Cotter · 2013-07-13T06:55:30.073Z · LW(p) · GW(p)

Is there any chance I might be sleep deprived if I wake up before my alarm goes off more than 95% of the time?

I've been working pretty much every day for the past year but I had two longish breaks. After each of them there was a long period of feeling pretty awful all the time. I figured out eventually that this was probably how long it took me to forget what ok feels like. Is this plausible or am I probably ok given sufficient sleep and adequate diet?

Also, does mixing modafinil and Starting Strength sound like a bad idea? I know sleep is really important for recovery and gainz, but SS does not top out at anything seriously strenuous for someone who isn't ill, and it demands less than 4 hours of gym time a week.

Replies from: Tenoke, D_Malik, James_Miller, ChristianKl, army1987
comment by Tenoke · 2013-07-13T08:22:58.317Z · LW(p) · GW(p)

Is there any chance I might be sleep deprived if I wake up before my alarm goes off more than 95% of the time?

You might be but this would not be evidence for it. If anything it is slight evidence that you are not sleep deprived - if you were it would be harder to wake up.

Modafinil might lead you down the sleep deprivation road but this ^ would not be evidence for it.

Replies from: army1987
comment by A1987dM (army1987) · 2013-07-13T10:46:52.303Z · LW(p) · GW(p)

I mentally inserted “even” before “if” in that question.

Replies from: Tenoke
comment by Tenoke · 2013-07-13T12:41:55.007Z · LW(p) · GW(p)

Well then obviously it is possible. This is definitely not a sure-fire way to know whether you are sleep deprived or not.

comment by D_Malik · 2013-07-16T00:04:55.185Z · LW(p) · GW(p)

Yes. If you have a computer and you haven't made an unusually concerted effort not to be sleep-deprived, you are almost certainly sleep-deprived by ancestral standards. Not sure whether sleeping more is worth the tradeoff, though. Have you tried using small amounts of modafinil to make your days more productive, rather than to skip sleep?

comment by James_Miller · 2013-07-13T16:46:35.886Z · LW(p) · GW(p)

You might want to look into adrenal fatigue.

comment by ChristianKl · 2013-07-13T12:11:53.309Z · LW(p) · GW(p)

Is there any chance I might be sleep deprived if I wake up before my alarm goes off more than 95% of the time?

Yes. Seth Roberts is someone who wrote a lot about his own problem with sleep deprivation, which was due to him waking up too early.

comment by A1987dM (army1987) · 2013-07-13T10:48:51.163Z · LW(p) · GW(p)

Is there any chance I might be sleep deprived if I wake up before my alarm goes off more than 95% of the time?

I think that's possible if you've woken up at about the same time every morning for a month in a row or longer, but over the past week you've been going to bed a couple hours later than you usually do.

In a different thread, the psychomotor vigilance task was mentioned as a test of sleep deprivation. Try it out.
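
If you want a rough offline stand-in, here is a toy console version I sketched (my own, not the validated clinical PVT; the measured times will also include keyboard and terminal latency):

```python
import random
import time

def crude_pvt(trials=5):
    """Press Enter as fast as possible after each 'GO!' prompt; prints mean reaction time."""
    times_ms = []
    for _ in range(trials):
        time.sleep(random.uniform(2, 6))          # random foreperiod, as in the real PVT
        start = time.perf_counter()
        input("GO! press Enter: ")
        times_ms.append((time.perf_counter() - start) * 1000)
    print(f"mean reaction time: {sum(times_ms) / len(times_ms):.0f} ms over {trials} trials")

if __name__ == "__main__":
    crude_pvt()
```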

Replies from: TobyBartels, RomeoStevens
comment by TobyBartels · 2014-08-10T04:21:19.997Z · LW(p) · GW(p)

Wikipedia says that this test ‘is not to assess the reaction time, but to see how many times the button is not pressed’. I never missed the button (nor did I ever press it improperly), but it still recommended that I consider medical evaluation. (My time was even worse than RomeoStevens's, so I'm not saying that I did well if the job is to measure reaction time!)

comment by RomeoStevens · 2013-07-13T11:09:13.296Z · LW(p) · GW(p)

Calling bullshit on that test. It says I should seek medical evaluation for testing at an average of 313. In comparison to this: http://www.humanbenchmark.com/tests/reactiontime/stats.php

Replies from: Tenoke, army1987
comment by Tenoke · 2013-07-13T12:46:22.219Z · LW(p) · GW(p)

Are you sure it doesn't say 'might be suboptimal' and 'Consider seeking medical evaluation'?

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-13T13:29:50.352Z · LW(p) · GW(p)

I still consider that wildly over the top. But then again, I have an accurate model of how likely doctors are to kill me.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-13T15:26:42.242Z · LW(p) · GW(p)

Details?

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-13T21:04:50.414Z · LW(p) · GW(p)

Robin Hanson.

comment by A1987dM (army1987) · 2013-07-13T14:07:27.030Z · LW(p) · GW(p)

Do you find it that incredible that somewhere around 10% of Internet users are severely sleep-deprived? :-)

But yeah, probably they used figures based on laboratory equipment and I guess low-end computer mice are slower than that.

comment by Locaha · 2013-07-13T05:44:56.424Z · LW(p) · GW(p)

Why are we throwing the word "Intelligence" around like it actually means anything? The concept is so ill-defined it should be in the same set as "Love."

Replies from: Qiaochu_Yuan, bogdanb, ChristianKl, Kaj_Sotala, gothgirl420666, TimS
comment by Qiaochu_Yuan · 2013-07-13T05:57:18.508Z · LW(p) · GW(p)

I can't tell whether you're complaining about the word as it applies to humans or as it applies to abstract agents. If the former, to a first-order approximation it cashes out to g factor and this is a perfectly well-defined concept in psychometrics. You can measure it, and it makes decent predictions. If the latter, I think it's an interesting and nontrivial question how to define the intelligence of an abstract agent; Eliezer's working definition, at least in 2008, was in terms of efficient cross-domain optimization, and I think other authors use this definition as well.

Replies from: Locaha
comment by Locaha · 2013-07-13T09:44:48.360Z · LW(p) · GW(p)

"Efficient cross-domain optimization" is just fancy words for "can be good at everything".

Replies from: army1987, RomeoStevens
comment by A1987dM (army1987) · 2013-07-13T10:29:26.968Z · LW(p) · GW(p)

Yes. And your point is?

Replies from: Locaha
comment by Locaha · 2013-07-13T10:36:18.626Z · LW(p) · GW(p)

This is the stupid questions thread.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2013-07-14T18:53:00.464Z · LW(p) · GW(p)

That would be the inefficient cross-domain optimization thread.

Replies from: Locaha
comment by Locaha · 2013-07-15T10:26:19.271Z · LW(p) · GW(p)

Awesome. I need to use this as a swearword sometimes...

"You inefficient cross-domain optimizer, you!"

comment by RomeoStevens · 2013-07-13T09:50:59.800Z · LW(p) · GW(p)

achieves its value when presented with a wide array of environments.

Replies from: Locaha
comment by Locaha · 2013-07-13T10:00:15.738Z · LW(p) · GW(p)

This is again different words for "can be good at everything". :-)

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-13T10:19:33.022Z · LW(p) · GW(p)

When you ask someone to unpack a concept for you it is counter-productive to repack as you go. Fully unpacking the concept of "good" is basically the ultimate goal of MIRI.

Replies from: Locaha
comment by Locaha · 2013-07-13T10:23:21.391Z · LW(p) · GW(p)

I just showed that your redefinition does not actually unpack anything.

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-13T10:41:26.911Z · LW(p) · GW(p)

I feel that perhaps you are operating on a different definition of unpack than I am. For me, "can be good at everything" is less evocative than "achieves its value when presented with a wide array of environments" in that the latter immediately suggests quantification whereas the former uses qualitative language, which was the point of the original question as far as I could see. To be specific: Imagine a set of many different non-trivial agents, all of whom are paper clip maximizers. You create copies of each and place them in a variety of non-trivial simulated environments. The ones that average more paperclips across all environments could be said to be more intelligent.
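
A minimal sketch of that quantification (the names and the environment/agent interface are mine, purely illustrative; it just averages paperclip counts across a fixed battery of simulated environments):

```python
def intelligence_score(agent, environments, runs_per_env=10):
    """Average paperclips produced per run, averaged again across environments."""
    per_env_means = []
    for env in environments:
        runs = [env.run(agent) for _ in range(runs_per_env)]  # env.run is assumed to
        per_env_means.append(sum(runs) / len(runs))           # return a paperclip count
    return sum(per_env_means) / len(per_env_means)

# Rank a set of paperclip-maximizing agents by their cross-environment average:
# ranking = sorted(agents, key=lambda a: intelligence_score(a, environments), reverse=True)
```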

Replies from: Lightwave
comment by Lightwave · 2013-07-15T09:54:06.464Z · LW(p) · GW(p)

You can use the "can be good at everything" definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it's just using more technical terms to mean the same thing.

comment by bogdanb · 2013-07-13T18:31:08.607Z · LW(p) · GW(p)

Because it actually does mean something, even if we don't really know exactly what, and the borders are fuzzy.

When you hear that X is more intelligent than Y, there is some information you learn, even though you didn't find out exactly what X can do that Y can't.

Note that we also use words like “mass” and “gravity” and “probability”; even though we know lots about each, it’s not at all clear what they are (or, like in the case of probability, there are conflicting opinions).

comment by ChristianKl · 2013-07-13T12:17:32.092Z · LW(p) · GW(p)

All language is vague. Sometimes vague language hinders us in understanding what another person is saying and sometimes it doesn't.

comment by Kaj_Sotala · 2013-07-13T15:07:56.571Z · LW(p) · GW(p)

Legg & Hutter have given a formal definition of machine intelligence. A number of authors have expanded on it and fixed some of its problems: see e.g. this comment as well as the parent post.
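
For reference, their universal intelligence measure (going from memory; check the paper for the exact formulation) scores a policy $\pi$ by its simplicity-weighted expected reward across all computable environments:

$$\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu},$$

where $E$ is the set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$ (so simpler environments get more weight), and $V^{\pi}_{\mu}$ is the expected cumulative reward agent $\pi$ obtains in $\mu$.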

comment by gothgirl420666 · 2013-07-13T06:25:35.688Z · LW(p) · GW(p)

I'm not really sure why you use "love" as an example. I don't know that much about neurology, but my understanding is that the chemical makeup of love and its causes are pretty well understood. Certainly better understood than intelligence?

Replies from: Locaha
comment by Locaha · 2013-07-13T06:49:47.516Z · LW(p) · GW(p)

I think what you talk about here is certain aspects of sexual attraction. Which are, indeed, often lumped together into the concept of "Love". Just like a lot of different stuff is lumped together into the concept of "Intelligence".

Replies from: RomeoStevens, army1987
comment by RomeoStevens · 2013-07-13T09:54:54.205Z · LW(p) · GW(p)

This seems like matching "chemistry" to "sexual" in order to maintain the sacredness of love, rather than to actually get to beliefs that cash out in valid predictions. People can reliably be made to fall in love with each other given the ability to manipulate some key variables. This should not make you retch with horror any more than the Stanford prison experiment already did. Alternatively, update on being more horrified by the SPE than you were previously.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2013-07-13T11:27:31.834Z · LW(p) · GW(p)

People can reliably be made to fall in love with each other given the ability to manipulate some key variables.

?

Replies from: RomeoStevens
comment by RomeoStevens · 2013-07-13T12:07:03.965Z · LW(p) · GW(p)

Lots of eye contact is sufficient if the people are both single, of similar age, and with a person of their preferred gender. But even those conditions could be overcome given some chemicals to play with.

Replies from: drethelin, army1987
comment by drethelin · 2013-07-13T13:16:33.751Z · LW(p) · GW(p)

[citation needed]

comment by A1987dM (army1987) · 2013-07-14T08:24:22.933Z · LW(p) · GW(p)

Did you accidentally leave out some conditions such as “reasonably attractive”?

comment by A1987dM (army1987) · 2013-07-13T10:34:33.810Z · LW(p) · GW(p)

The fact that English uses the same word for several concepts (which had different names in, say, ancient Greek) doesn't necessarily mean that we're confused about neuropsychology.

comment by TimS · 2013-07-13T05:59:28.299Z · LW(p) · GW(p)

There seems to be a thing called "competence" for particular abstract tasks. Further, there are kinds of tasks where competence in one task generalizes to the whole class of tasks. One thing we try to measure by intelligence is an individual's level of generalized abstract competence.

I think part of the difficulties with measuring intelligence involve uncertainty about what tasks are within the generalization class.