Open Thread: March 2010

post by AdeleneDawner · 2010-03-01T09:25:07.423Z · LW · GW · Legacy · 680 comments

We've had these for a year; I'm sure we all know what to do by now.

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

680 comments

Comments sorted by top scores.

comment by ShardPhoenix · 2010-03-08T12:25:51.593Z · LW(p) · GW(p)

A fascinating article about rationality, or the lack thereof, as applied to curing scurvy, and about how hard trying to be less wrong can be: http://idlewords.com/2010/03/scott_and_scurvy.htm

Replies from: Morendil, Tyrrell_McAllister
comment by Morendil · 2010-03-08T13:06:27.565Z · LW(p) · GW(p)

Wonderful article, thanks. I'm fond of reminders like this that scientific advances are very seldom as discrete, as irreversible, or as incontrovertible as the myths of science make them out to be.

When you look at the detailed stories of scientific progress you see false starts, blind alleys, half-baked theories that happen by luck to predict phenomena, and mostly sound ones that unfortunately fail on key bits of evidence, with a lot of hard work going into sorting it all out (not to mention, often enough, a good dose of luck). The manglish view, if nothing else, strikes me as a good vitamin for people wanting an antidote to the scurvy of overconfidence.

ETA: The article made for a great dinnertime story for my kids. Only one of the three, the oldest (13yo), was familiar with the term "scurvy" - and with the cure as well; both from One Piece. Manga 1 - school 0.

comment by Tyrrell_McAllister · 2010-03-08T17:36:44.606Z · LW(p) · GW(p)

Very interesting. And sobering.

comment by Cyan · 2010-03-01T14:04:22.098Z · LW(p) · GW(p)

Call for examples

When I posted my case study of an abuse of frequentist statistics, cupholder wrote:

Still, the main post feels to me like a sales pitch for Bayes brand chainsaws that's trying to scare me off Neyman-Pearson chainsaws by pointing out how often people using Neyman-Pearson chainsaws accidentally cut off a limb with them.

So this is a call for examples of abuse of Bayesian statistics; examples by working scientists preferred. Let’s learn how to avoid these mistakes.

Replies from: khafra
comment by khafra · 2010-03-02T16:17:26.567Z · LW(p) · GW(p)

Some googling around yielded a pdf about a controversial use of Bayes in court. The controversy seems to center around using one probability distribution on both sides of the equation. Lesser complaints include mixing in a frequentist test without a good reason.

Replies from: Cyan
comment by Cyan · 2010-03-02T22:10:19.851Z · LW(p) · GW(p)

That's a great find!

comment by michaelkeenan · 2010-03-01T09:41:53.784Z · LW(p) · GW(p)

How do you introduce your friends to LessWrong?

Sometimes I'll start a new relationship or friendship, and as this person becomes close to me I'll want to talk about things like rationality and transhumanism and the Singularity. This hasn't ever gone badly, as these subjects are interesting to smart people. But I think I could introduce these ideas more effectively, with a better structure, to maximize the chance that those close to me might be as interested in these topics as I am (e.g. to the point of reading or participating in OB/LW, or donating to SIAI, or attending/founding rationalist groups). It might help to present the futurist ideas in increasing order of outrageousness as described in Yudkowsky1999's future shock levels. Has anyone else had experience with introducing new people to these strange ideas, who has any thoughts or tips on that?

Edit: for futurist topics, I've sometimes begun (in new relationships) by reading and discussing science fiction short stories, particularly those relating to alien minds or the Singularity.

For rationalist topics, I have no real plan. One girl really appreciated a discussion of the effect of social status on the persuasiveness of arguments; she later mentioned that she'd even told her mother about it. She also appreciated the concept of confirmation bias. She's started reading LessWrong, but she's not a native English speaker so it's going to be even more difficult than LessWrong already is.

Replies from: RobinZ, XiXiDu, gimpf, nazgulnarsil
comment by RobinZ · 2010-03-01T19:41:16.650Z · LW(p) · GW(p)

I think of LessWrong from a really, really pragmatic viewpoint: it's like software patches for your brain to eliminate costly bugs. There was a really good illustration in the Allais mini-sequence - that is a literal example of people throwing away their money because they refused to consider how their brain might let them down.

Edit: Related to The Lens That Sees Its Flaws.

comment by XiXiDu · 2010-03-02T15:55:40.841Z · LW(p) · GW(p)

It shows you that there is really more to most things than meets the eye, but more often than not much less than you think. It shows you that even smart people can be completely wrong but that most people are not even wrong. It tells you to be careful in what you emit and to be skeptical of what you receive. It doesn't tell you what is right, it teaches you how to think and to become less wrong. And to do so is in your own self interest because it helps you to attain your goals, it helps you to achieve what you want. Thus what you want is to read and participate on LessWrong.

comment by gimpf · 2010-03-01T21:40:29.508Z · LW(p) · GW(p)

I am probably a miserable talker; usually, after I introduce rationality/singularity-related topics, people tend to strengthen their former opinions even further. I could well use a "good argumentation for rationality dummies" article. No, reading through all the sequences does not help. (Understanding would?)

Often enough it seems that I achieve better results by trying not to touch any "religious" topic too early; religious meaning that the argument against holding that opinion requires an understanding of reductionism and epistemology worthy of a third-year philosophy student (btw, acceptance is also required).

This may seem to take enormous amounts of time to get people onto this train, but, well, the average IQ is 100, and getting rationality seems to be even less widespread than intelligence, so it may actually be more useful to hint in the right direction on specific topics than to try to catch it all.

And, how does this actually help your own intentions? It seems non-trivial to me to find a utility function under which taking the time to improve the rationality quotient of a few philosophy/arts students or electricians or whatever is actually a net win for what one can improve. Or is everybody here just hanging out with (gonna-be) scientists?

Replies from: michaelkeenan
comment by michaelkeenan · 2010-03-02T04:42:33.607Z · LW(p) · GW(p)

I am probably a miserable talker; usually, after I introduce rationality/singularity-related topics, people tend to strengthen their former opinions even further.

I'm not sure this is what you're doing, but I'm careful not to bring up LessWrong in an actual argument. I don't want arguments for rationality to be enemy soldiers.

Instead, I bring rationalist topics up as an interesting thing I read recently, or as an influence on why I did a certain thing a certain way, or hold a particular view (in a non-argument context). That can lead to a full-fledged pitch for LessWrong, and it's there that I falter; I'm not sure I'm pitching with optimal effectiveness. I don't have a good grasp on what topics are most interesting/accessible to normal (albeit smart) people.

And, how does this actually help your own intentions? It seems non-trivial to me to find a utility function under which taking the time to improve the rationality quotient of a few philosophy/arts students or electricians or whatever is actually a net win for what one can improve. Or is everybody here just hanging out with (gonna-be) scientists?

If rationalists were so common that I could just filter people I get close to by whether they're rationalists, I probably would. But I live in Taiwan, and I'm probably the only LessWrong reader in the country. If I want to talk to someone in person about rationality, I have to convert someone first. I like to talk about these topics, since they're frequently on my mind, and because certain conclusions and approaches are huge wins (especially cryonics and reductionism).

comment by nazgulnarsil · 2010-03-01T11:00:29.024Z · LW(p) · GW(p)

the main hurdle in my experience is getting people over biases that cause them to think that the future is going to look mostly like the present. if you can get people over this then they do a lot of the remaining work for you.

comment by Lightwave · 2010-03-02T09:08:27.297Z · LW(p) · GW(p)

The following stuff isn't new, but I still find it fascinating:

Reverse-engineering the Seagull

The Mouse and the Rectangle

Replies from: AdeleneDawner, nazgulnarsil
comment by AdeleneDawner · 2010-03-02T09:55:37.419Z · LW(p) · GW(p)

Neat!

comment by nazgulnarsil · 2010-03-12T12:06:52.709Z · LW(p) · GW(p)

what's depressing is the vast disconnect between how well marketers understand super stimulus and how poorly everyone else does.

also this: http://www.theonion.com/content/video/new_live_poll_allows_pundits_to

comment by MixedNuts · 2010-03-02T15:52:01.989Z · LW(p) · GW(p)

TL;DR: Help me go less crazy and I'll give you $100 after six months.

I'm a long-time lurker and signed up to ask this. I have a whole lot of mental issues, the worst being lack of mental energy (similar to laziness, procrastination, etc., but turned up to eleven and almost not influenced by will). Because of it, I can't pick myself up and do things I need to (like calling a shrink); I'm not sure why I can do certain things and not others. If this goes on, I won't be able to go out and buy food, let alone get a job. Or sign up for cryonics or donate to SIAI.

I've tried every trick I could bootstrap; the only one that helped was "count backwards then start", for things I can do but have trouble getting started on. I offer $100 to anyone who suggests a trick that significantly improves my life for at least six months. By "significant improvement" I mean being able to do things like going to the bank (if I can't, I won't be able to give you the money anyway), and having ways to keep myself stable or better (most likely, by seeing a therapist).

One-time tricks to do one important thing are also welcome, but I'd offer less.

Replies from: CronoDAS, pjeby, anonymous259, Jordan, hugh, hugh, Alicorn, Kevin, Psy-Kosh, wedrifid, MrHen, Unnamed, knb, whpearson, Mitchell_Porter, Jack, markrkrebs
comment by CronoDAS · 2010-03-05T02:15:00.527Z · LW(p) · GW(p)

After reading this thread, I can only offer one piece of advice:

You need to see a medical doctor, and fast. Your problems are clearly more serious than anything we can deal with here. If you have to, call 911 and have them carry you off in an ambulance.

comment by pjeby · 2010-03-04T06:47:31.240Z · LW(p) · GW(p)

This is just a guess, and I'm not interested in your money, but I think that you probably have a health problem. I'd suggest you check out the book "The Mood Cure" by Julia Ross, which has some very good information on supplementation. Offhand, you sound like the author's profile for low-in-catecholamines, and might benefit very quickly from fairly low doses of certain amino acids such as L-tyrosine.

I strongly recommend reading the book, though, as there are quite a few caveats regarding self-supplementation like this. Using too high a dose can be as problematic as too low, and times of day matter as well, as does consistent management. When you're low on something, taking what you need can make you feel euphoric, but once you're at the right dose, you won't notice anything from taking it. (Instead, you'll notice if you go off it for a few days and your mood/energy goes back to pre-supplementation levels.)

Anyway... don't know if it'll work for you, but I do suggest you try it. (And the same recommendation goes for anyone else who's experiencing a chronic mood or energy issue that's not specific to a particular task/subject/environment.)

Replies from: MixedNuts
comment by MixedNuts · 2010-03-04T14:50:50.413Z · LW(p) · GW(p)

Buying a (specific) book isn't possible right now, but may help later; thanks. I took the questionnaire on her website and apparently everything is wrong with me, which makes me doubt her tests' discriminating power.

Replies from: Cyan, pjeby
comment by Cyan · 2010-03-04T20:23:31.287Z · LW(p) · GW(p)

It's a marketing tool, not a test.

comment by pjeby · 2010-03-04T19:36:24.181Z · LW(p) · GW(p)

FWIW, I don't have "everything" wrong with me; I scored on only two, and my wife scores on two, with only one the same between the two of us.

comment by anonymous259 · 2010-03-03T03:48:57.833Z · LW(p) · GW(p)

I'll come out of the shadows (well, not really - I'm too ashamed to post this under my normal LW username) and announce that I am, or anyway have been, in more or less the same situation as MixedNuts. Maybe not as severe (there are some important things I can do at the moment, and I have in the past been much worse than I am now - I would actually appear externally to be keeping up with my life at this exact moment, though that may come crashing down before too long), but generally speaking almost everything MixedNuts says rings true to me. I don't live with anyone or have any nearby family, so that adds some extra difficulty.

Right now, as I said, this is actually a relatively good moment; I've got some interesting projects to work on that are currently helping me get out of bed. But I know myself too well to assume that this will last. Plus, I'm way behind on all kinds of other things I'm supposed to be doing (or to have already done).

I'm not offering any money, but I'd be interested to see if anyone is interested in conversing with me about this (whether here or by PM). Otherwise, my reason for posting this comment was to add some evidence that this may be a common problem (even afflicting people you wouldn't necessarily guess suffered from it).

Replies from: Eliezer_Yudkowsky, AdeleneDawner, ata, Alicorn
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-03T04:45:22.248Z · LW(p) · GW(p)

I've got a weaker form of this, but I manage. The number one thing that seems to work is a tight feedback loop (as in daily) between action and reward, preferably reward by other people. That's how I was able to do OBLW. Right now I'm trying to get up to a reasonable speed on the book, and seem to be slowly ramping up.

comment by AdeleneDawner · 2010-03-03T04:01:28.689Z · LW(p) · GW(p)

I have limited mental resources myself, and am sometimes busy, but I'm generally willing to (and find it enjoyable to) talk to people about this kind of thing via IM. I'm fairly easily findable on Skype (put a dot between my first and last names; text only, please), AIM (same name as here), GChat (same name at gmail dot com), and MSN (same name at hotmail dot com). The google email is the one I pay attention to, but I'm not so great at responding to email unless it has obvious questions in it for me to answer. It's also noteworthy that my sleep schedule is quite random - it is worth checking to see if I'm awake at 5am if you want to, but also don't assume that just because it's daytime I'll be awake.

comment by ata · 2010-03-04T20:29:08.083Z · LW(p) · GW(p)

Hope this doesn't turn into a free-therapy bandwagon, but I have a lot of the same issues as MixedNuts and anonymous259, so if anyone has any tips or other insights they'd like to share with me, that would be delightful.

My main problem seems to be that, if I don't find something thrilling or fascinating, and it requires much mental or physical effort, I don't do it, even if I know I need to do it, even if I really want to do it. Immediate rewards and punishments help very little (sometimes they actually make things worse, if the task requires a lot of thought or creativity). There are sometimes exceptions when the boring+mentally/physically-demanding task is to help someone, but that's only when the person is actually relying on me for something, not just imposing an artificial expectation, and it usually only works if it's someone I know and care about (except myself).

A related problem is that I rarely find anything thrilling or fascinating (enough to make me actually do it, at least) for very long. In my room I have stacks of books that I've only read a few chapters into; on my computer I have probably hundreds of unfinished (or barely started) programs and essays and designs, and countless others that only exist in my mind; on my academic transcripts are many 'W's and 'F's, not because the classes were difficult (a more self-controlled me would have breezed through them), but because I stopped being interested halfway through. So even when something starts out intrinsically motivating for me, the momentum usually doesn't last.

Like anon259, I can't offer any money — this sort of problem really gets in the way of wanting/finding/keeping a job — but drop me a PM if gratitude motivates you. :)

Replies from: RobinZ, Alicorn
comment by RobinZ · 2010-03-04T21:30:46.131Z · LW(p) · GW(p)

To some extent, the purpose of LessWrong is to fix problems with ourselves, and the distinction between errors in reasoning and errors in action is subtle enough that I would hesitate to declare this on- or off-topic.

It should be mentioned, however, that the population of LessWrongers-asking-for-advice is unlikely to be representative of the population of LessWrongers, and even less so of the population of agents-LessWrongers-care-about. This is likely to make generalizations drawn from observations here narrower in scope than we might like.

comment by Alicorn · 2010-03-04T20:32:09.577Z · LW(p) · GW(p)

Same deal as the other two - PM me IM contact info, we can chat :)

comment by Alicorn · 2010-03-03T17:24:55.798Z · LW(p) · GW(p)

PM me with your IM contact info and I'll try to help you too.

Look, I'll do it for free too!

comment by Jordan · 2010-03-10T05:23:34.234Z · LW(p) · GW(p)

For what it's worth:

A few years back I was suffering from some pretty severe health problems. The major manifestations were cognitive and mood related. Often when I was saying a sentence I would become overwhelmed halfway through and would have to consciously force myself to finish what I was saying.

Long story short, I started treating my diet like a controlled experiment and, after a few years of trial and error, have come out feeling better than I can ever remember. If you're going to try self experimentation the three things I recommend most highly to ease the analysis process are:

  • Don't eat things with ingredients in them; instead, eat ingredients.
  • Limit each meal to fewer than 5 different ingredients.
  • Try to have the same handful of ingredients for every meal for at least a week at a time.

Replies from: wedrifid
comment by wedrifid · 2010-03-10T09:50:23.886Z · LW(p) · GW(p)

I'm curious. What foods (if you don't mind me asking) did you find had such a powerful effect?

Replies from: Jordan
comment by Jordan · 2010-03-11T08:18:38.076Z · LW(p) · GW(p)

I expanded upon it here.

What has helped me the most, by far, is cutting out soy, dairy, and all processed foods (there are some processed foods I feel fine eating, but the analysis to figure out which ones proved too costly for the small benefit of being able to occasionally eat unhealthy foods).

comment by hugh · 2010-03-02T19:02:56.962Z · LW(p) · GW(p)

Also, don't offer money. External motivators are disincentives. By offering $100, you are attaching a specific worth to the request, and undermining our own intrinsic motivations to help. Since allowing a reward to disincentivize a behavior is irrational, I'm curious how much effect it has on the LessWrong crowd; regardless, I would be surprised if anyone here tried to collect, so I don't see the point.

Replies from: Alicorn
comment by Alicorn · 2010-03-02T19:06:58.931Z · LW(p) · GW(p)

My understanding is that the mechanism by which this works lets you sidestep it pretty neatly by also doing basically similar things for free. That way you can credibly tell yourself that you would do it for free, and being paid is unrelated.

Replies from: hugh, thomblake, AdeleneDawner
comment by hugh · 2010-03-02T19:18:09.102Z · LW(p) · GW(p)

To the contrary. If you pay volunteers, they stop enjoying their work. Other similar studies have shown that paying people who already enjoy something will sometimes make them stop the activity altogether, or at least stop doing it without an external incentive.

Edit: AdeleneDawner and thomblake agree with the parent. This may be a counterargument, or just an answer to my earlier question, namely "Are LessWrongers better able to control this irrational impulse?"

Replies from: Liron, Alicorn, AdeleneDawner
comment by Liron · 2010-03-03T13:09:01.107Z · LW(p) · GW(p)

So can a person ever love their day job? It seems that moneymaking/entrepreneurship should be the only reflectively stable passion.

Replies from: hugh
comment by hugh · 2010-03-03T14:45:31.063Z · LW(p) · GW(p)

Obviously, many people do love their day job. However, your question is apt, and I have no answer to it, even with regard to myself. I have often struggled with doing the exact same things at work and for myself, and enjoying one but not the other. I think in my case it is more an issue of pressure and expectations. However, when trying to answer the question of what I should do with my life, it makes things difficult!

comment by Alicorn · 2010-03-02T19:24:28.419Z · LW(p) · GW(p)

I didn't download the .pdf, but it looks like this was probably conducted by paying volunteers for all of their volunteer work. If someone got paid for half of their hours volunteering, or had two positions doing very similar work and then one of them started paying, I'd expect this effect to diminish.

Replies from: hugh
comment by hugh · 2010-03-02T19:48:02.412Z · LW(p) · GW(p)

The study concerns how many hours per week were spent volunteering; some was paid, some was not, though presumably a single organization would either pay or not pay volunteers, rather than both. Paid volunteers worked less per week overall.

The study I referenced was not the one I intended to reference, but I have not found the one I most specifically remember. Citing studies is one of the things I most desperately want an eidetic memory for.

comment by AdeleneDawner · 2010-03-02T19:34:37.966Z · LW(p) · GW(p)

Edit: AdeleneDawner and thomblake agree with the parent. This may be a counterargument, or just an answer to my earlier question, namely "Are LessWrongers better able to control this irrational impulse?"

On reflection, it seems to me to be the latter - my cognitive model of money is unusual in general, but this particular reaction seems to be a result of an intentional tweak that I made to reduce my chance of being bribe-able. (Not that I've had a problem with being bribed, but that broad kind of situation registers as 'having my values co-opted', which I'm not at all willing to take risks with.)

comment by thomblake · 2010-03-02T19:19:33.870Z · LW(p) · GW(p)

That seems to work. If I were teaching part-time simply because I needed the money, I wouldn't do it. But I decided that I'd teach this class for free, so I also have no problem doing it for very little money.

comment by AdeleneDawner · 2010-03-02T19:11:31.607Z · LW(p) · GW(p)

Agreed - I do basically similar things for free, and am reasonably confident that my reaction would be "*shrug* ok" if I were to work with MixedNuts and xe wanted to pay me.

(I do intend to offer help here; I'm still trying to determine what the most useful offer would be.)

comment by hugh · 2010-03-02T17:36:52.909Z · LW(p) · GW(p)

MixedNuts, I'm in a similar position, though perhaps less severely, and more intermittently. I've been diagnosed with bipolar, though I've had difficulty taking my meds. At this point in my life, I'm being supported almost entirely by a network of family, friends, and associates that is working hard to help me be a real person and getting very little in return.

I have one book that has helped me tremendously, "The Depression Cure" by Dr. Ilardi. He claims that depression-spectrum disorders are primarily caused by lifestyle, and that almost everyone can benefit from simple changes. As with any book - especially a self-help book - it ought to be read skeptically, and it doesn't introduce any ideas that can't be found in modern psychological research. Rather, it aggregates what are, in Ilardi's opinion, the most important: exercise works more effectively than SSRIs, etc.

If you really want a copy, and you really can't get one yourself, I will send you one if you can send me your address. It helped me that much. Which is not to say that I am problem free. Still, a 40% reduction in problem behavior, after 6 months, with increasing rather than decreasing results, is a huge deal for me.

Rather, I want to give you your "one trick". It is the easiest rather than the most effective; but it has an immediate effect, which helped me implement the others. Morning sunlight. I don't know where you live; I live in a place where I can comfortably sit outside in the morning even this time of year. Get up as soon as you can after waking, and wake as early in the day as you would ideally like to. Walk around, sit, or lie down in the brightest area outside for half an hour. You can go read studies on why this works, or that debate its efficacy, but for me it helps.

I realize that your post didn't say anything about depression; just lack of willpower. For me, they were tightly intertwined, and they might not be for you. Please try it anyway.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T17:52:55.040Z · LW(p) · GW(p)

Thanks. I'll try the morning light thing; from experience it seems to help somewhat, but I can't keep it going for long.

If nothing else works, I'll ask you for the book. I'm skeptical, since such books tend to recommend unbootstrappable things such as exercise, but it could help.

Replies from: hugh
comment by hugh · 2010-03-02T18:35:44.460Z · LW(p) · GW(p)

There is one boot process that works well, which is to contract an overseer. For me, it was my father. I felt embarrassed to be a grown adult asking for his father's oversight, but it helped when I was at my worst. Now, I have him, my roommate, two ex-girlfriends, and my advisor who are all concerned about me and check up with me on a regular basis. I can be honest with them, and if I've stopped taking care of myself, they'll call or even come over to drag me out of bed, feed me, and/or take me for a run.

I have periodically been an immense burden on the people who love me. However, I eventually came to the realization that being miserable, useless, and isolated was harder and more unpleasant for them than being let in on what was wrong with me and being asked to help. I've been a net negative to this world, but for some reason people still care for me, and as long as they do, my best course of action seems to be to let them try to help me. I suspect you have a set of people who would likewise prefer to help you than to watch you suffer.

Feeling less helpless was nearly as good for them as for me. I have a debt to them that I am continuing to increase, because I'm still not healthy or self-sufficient. I don't know if I can ever repay it, but

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T19:35:54.430Z · LW(p) · GW(p)

Yes, I've considered that. There are people who can and do help, but not to the extent I'd need. I believe they help me as much as they can while still having a life that isn't me. I shouldn't ask for more, should I?

If you have tips for getting more efficient help out of them, suggestions of people who'd help though I don't expect them to, or ways to get help from other people (professional caretakers?), by all means please shoot.

Replies from: hugh
comment by hugh · 2010-03-02T19:57:05.271Z · LW(p) · GW(p)

You indicated that you had trouble maintaining the behavior of getting daily morning light. Ask someone who 1) likes talking to you, 2) is generally up at that hour, and 3) is free to talk on the phone, to call you most mornings. They can set an alarm on their phone and have a 2 minute chat with you each day.

In my experience if I can pick up the phone (which admittedly can be difficult), the conversation is enough of a distraction and a motivation to get outside, and then inertia is enough to keep me out there.

The reason I chose my father is that he is an early riser, self-employed, and he would like to talk to me more than he gets to. You might not have someone like that in your life, but if you do, it is minimally intrusive to them, and may be a big help to you.

Replies from: MixedNuts, jimmy
comment by MixedNuts · 2010-03-02T20:22:23.442Z · LW(p) · GW(p)

This sounds like a great idea. I have a strong impulse to answer phones, so if I put the phone far enough from my bed that I have to get up to answer it, I'd get past the biggest obstacle.

There are two minor problems: none of the people I know have free time early in the morning, but two minutes is manageable. And when outside, I'm not sure what to do, so there's a risk I'd get anxious and default to going home.

I'll try it, thanks.

comment by jimmy · 2010-03-02T20:12:47.834Z · LW(p) · GW(p)

If you're going to go to the trouble of talking to someone every morning, you might as well see their face:

http://www.blog.sethroberts.net/2009/10/15/more-about-faces-and-mood-2/

Seth found that his mood the next day was significantly improved if he saw enough faces the previous morning. There was a LessWronger that posted somewhere that this trick helped him a lot, but I can't remember who or where right now.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T21:01:03.860Z · LW(p) · GW(p)

I see quite a lot of faces in the morning already. Maybe not early enough? Though I'm pretty skeptical; it looks like it'd work best for extroverted neurotypicals, and I'm neither. I added it to the list of tricks, but I'll try others first.

comment by Alicorn · 2010-03-02T17:22:43.425Z · LW(p) · GW(p)

I'm willing to try to help you but I think I'd be substantially more effective in real time. If you would like to IM, send me your contact info in a private message.

comment by Kevin · 2010-03-04T06:16:52.495Z · LW(p) · GW(p)

Do you take fish oil supplements or equivalent? Can't hurt to try; fish oil is recommended for ADHD and very well may repair some of the brain damage that causes mental illness.

http://news.ycombinator.com/item?id=1093866

Replies from: komponisto
comment by komponisto · 2010-03-04T23:55:52.056Z · LW(p) · GW(p)

[F]ish oil... may repair some of the brain damage that causes mental illness.

Use with caution, however.

Replies from: wedrifid
comment by wedrifid · 2010-03-05T00:09:07.366Z · LW(p) · GW(p)

I don't understand the link. It doesn't mention fish oil, but it does suggest that she changed her medication (for depression and anorexia) and then experienced suicidal ideation, which she later acted upon. Medications causing suicidal ideation is not unheard of, but I haven't heard of Omega-3 having any such effect.

Some googling gives me more information. It seems that her psychiatrist was transitioning her from one antidepressant to another, and adding fish oil supplements. There are also suggestions that her depression was bipolar. Going off an antidepressant is known to provoke manic episodes in bipolar patients, and even in those vulnerable to bipolar who had never had an episode. Going onto an antidepressant (and in particular SSRIs, for both 'on' and 'off') can also provoke mania. A manic episode while suffering withdrawal symptoms and the symptoms of a preexisting anxiety-based disorder is a recipe for suicide. As for Omega-3... the prior for it being responsible is low; it just happened to be on the scene when people were looking for something to blame!

Replies from: komponisto
comment by komponisto · 2010-03-05T01:35:40.347Z · LW(p) · GW(p)

I don't understand the link. It doesn't mention fish oil

Ah, sorry, I should have checked. (I guess it seemed an important enough detail that I just assumed it would be mentioned.)

Here (18:20 in the video) is an explicit mention of the fish oil, by her mother; apparently she was taking 12 tablets daily.

The way I had interpreted it, which prompted my caution above, was as a case of replacing antidepressants with fish oil, which seems unwise. Looking at it again now reveals there was in fact a plan to continue with antidepressants. It's unclear, however, how far along she was with this plan.

In any case, you're right that fish oil may not necessarily have been to blame as the trigger for suicide; but at the very least, it certainly didn't work here, and to the extent that it may have replaced the regular antidepressant treatment...that would seem a rather dubious decision.

comment by Psy-Kosh · 2010-03-04T05:00:06.184Z · LW(p) · GW(p)

I have had, and sometimes still struggle with, similar problems, but there is something that has sometimes helped me:

If there's something you need to do, try to do something with it, however little, as soon after you get up as possible. The example I'm going to use is studying, but you can generalize from it.

Pretty much as soon as you get up, BEFORE checking email or anything like that, study (or do whatever it is you need to do) a bit. And keep doing it until you feel your mental energy "running out"... but then, any time later in the day that you feel a smidgen of motivation, don't let go of it: run immediately to continue doing it.

But starting the day by doing some, however little, seemed to help. I think with me the psychology was sort of "this is the sort of day when I'm working on this", so once I start on it, it's as if I'm "allowed" to periodically keep doing stuff with it during the day.

Anyways, as I said, this has sometimes helped me, so...

Replies from: MixedNuts
comment by MixedNuts · 2010-03-04T14:52:48.052Z · LW(p) · GW(p)

Hmm, this may be why there's such a gap between good and bad days.

It only applies to things you can do little by little and whenever you want, which is pretty limited but still useful. Thanks.

comment by wedrifid · 2010-03-03T05:13:42.790Z · LW(p) · GW(p)

Order modafinil online. Take it, using 'count backwards then swallow the pill' if necessary. Then, use the temporary boost in mental energy to call a shrink.

I have found this useful at times.

Replies from: knb, MixedNuts, HumanFlesh
comment by knb · 2010-03-03T21:10:03.348Z · LW(p) · GW(p)

Modafinil is a prescription drug, so he would have to see a doctor first, right?

Replies from: wedrifid
comment by wedrifid · 2010-03-04T00:28:15.727Z · LW(p) · GW(p)

Yes, full compliance with laws and schedules, even ones that are trivial to ignore, is something I publicly advocate.

Replies from: knb
comment by knb · 2010-03-04T05:03:48.492Z · LW(p) · GW(p)

Ok, I didn't know that scoring illegal prescription drugs online was so easy. Isn't it risky? I know people have been busted for this in the USA, though it may be easier in France.

Replies from: wedrifid, Kevin
comment by wedrifid · 2010-03-04T05:36:12.703Z · LW(p) · GW(p)

I will not go into detail on what I understand to be the pragmatic considerations here, since the lesswrong morality encourages a more conservative approach to choosing what to do.

The life-extensionists over at imminst.org tend to be experienced in acquiring whatever they happen to need to meet their health and cognitive enhancement goals. They tend to give fairly unbiased reports on the best ways to go about getting what you need, accounting for legal risks, product quality risks, price, and convenience.

I do note that when I want something that is restricted I usually just go tell a doctor that "I have run out" and get them to print me 'another' prescription.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-03-04T22:16:55.510Z · LW(p) · GW(p)

since the lesswrong morality encourages a more conservative approach to choosing what to do.

I'm curious why you say this. I don't get the impression that more than a tiny number of people here would have moral or even ethical qualms about ordering drugs online, though I would non-confidently expect us to overestimate the risk on average.

comment by Kevin · 2010-03-04T06:14:41.236Z · LW(p) · GW(p)

In the USA it's no problem to order unscheduled prescription drugs over the internet. Schedule IV drugs can be imported, but customs occasionally seizes them with no penalty for the importer. No company that takes credit cards will ship Schedule II or Schedule III drugs to the USA; at least not one that will be in business for more than a month or two.

I believe it's all easier in Europe but I don't know for sure. PM for more info.

Replies from: sketerpot
comment by sketerpot · 2010-03-05T22:25:12.979Z · LW(p) · GW(p)

And for completeness, I should note that Modafinil is a Schedule IV drug in the US.

Replies from: gwern
comment by gwern · 2010-03-06T00:39:43.846Z · LW(p) · GW(p)

Also, downloading music & movies is usually a copyright violation, frequently both civil & criminal.

comment by MixedNuts · 2010-03-03T10:45:04.026Z · LW(p) · GW(p)

Thanks, but it gets worse. I can't order anything online, because I need to see my bank about checks or debit cards first. I can imagine asking a friend to do it for me, though it's terrifying; I could probably do it on a good day. Also, I doubt the thing modafinil boosts is the same thing I lack, but it could help, if only through placebo effect.

Replies from: wedrifid
comment by wedrifid · 2010-03-04T00:31:03.584Z · LW(p) · GW(p)

I can imagine asking a friend to do it for me, though it's terrifying

Terrifying? That's troubling. A shrink can definitely help you!

Also, I doubt the thing modafinil boosts is the same thing I lack, but it could help, if only through placebo effect.

It may boost everything just enough to get you over the line.

Good luck getting something done. I hope something works for you. Do whatever it takes.

comment by HumanFlesh · 2010-03-04T14:05:34.730Z · LW(p) · GW(p)

Adrafinil is similar to modafinil, only it's much cheaper because its patent has expired.

comment by MrHen · 2010-03-02T16:22:13.951Z · LW(p) · GW(p)

What do you do when you aren't doing anything?

EDIT: More questions as you answer these questions. Too many questions at once is too much effort. I am taking you dead seriously so please don't be offended if I severely underestimate your ability.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T16:38:57.596Z · LW(p) · GW(p)

I keep doing something that doesn't require much effort, out of inertia; typically, reading, browsing the web, listening to the radio, washing a dish. Or I just sit or lie there letting my mind wander and periodically trying to get myself to start doing something. If I'm trying to do something that requires thinking (typically homework) when my brain stops working, I keep doing it but I can't make much progress.

Replies from: MrHen
comment by MrHen · 2010-03-02T18:23:24.882Z · LW(p) · GW(p)

Possible solutions:

  • Increase the amount of effort it takes to do the low-effort things you are trying to avoid. For instance, it isn't terribly hard to set your internet on a timer so it automatically shuts off from 1-3pm. While it isn't terribly hard to turn it back on either, if you can scrounge up the effort to do so, you may be able to put that effort into something else instead. (A minimal sketch of automating this is below, after the list.)

  • Decrease the amount of effort it takes to do the high-effort things you are trying to accomplish. Paying bills, for instance, can be done online and streamlined. Family and friends can help tremendously in this area.

  • Increase the amount of effort it takes to avoid doing the things you are trying to accomplish. If you want to make it to an important meeting, try to get a friend to pick you up and drive you all the way over there.

These are somewhat complicated and broad categories and I don't know how much they would help.
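
For the timer idea in the first suggestion, here is a minimal sketch of one way to automate it - in Python, assuming a Linux machine with NetworkManager (so the nmcli command is available), sufficient privileges to toggle networking, and a hard-coded 1-3pm window. These assumptions are mine, not from the thread, and a pair of cron jobs would do the same with less ceremony:

    import subprocess
    import time

    def set_networking(enabled: bool) -> None:
        """Toggle all networking via NetworkManager (assumes nmcli is installed)."""
        subprocess.run(["nmcli", "networking", "on" if enabled else "off"], check=True)

    def enforce_block(start_hour: int = 13, end_hour: int = 15) -> None:
        """Keep networking off between start_hour and end_hour, checking once a minute."""
        while True:
            in_window = start_hour <= time.localtime().tm_hour < end_hour
            set_networking(not in_window)  # idempotent, so safe to repeat each minute
            time.sleep(60)

    if __name__ == "__main__":
        enforce_block()

The point, as in the suggestion itself, is that turning the network back on early stays possible (kill the script or run nmcli by hand) but requires a deliberate act rather than a reflexive click.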

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T19:13:29.750Z · LW(p) · GW(p)

I've tried all that (they're on LW already).

  • That wouldn't work. I do these things by default, because I can't do the things I want. I don't even have a problem with standard akrasia anymore, because I immediately act on any impulse I have to do something, given how rare they are. Also, I can expend willpower to stop doing something, whereas "I need to do this but I can't" seems impervious to it, at least in the amounts I have.

  • There are plenty of things to be done here, but they're too hard to bootstrap. The easy ones helped somewhat.

  • That helped me most. In the grey area between things I can do and things I can't (currently, cleaning, homework, most phone calls), pressure helps. But no amount of ass-kicking has made me do the things I've been trying to do for a while.

Replies from: AdeleneDawner, MrHen
comment by AdeleneDawner · 2010-03-02T19:15:33.309Z · LW(p) · GW(p)

What classes of things are on the 'can't do' list?

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T19:28:50.830Z · LW(p) · GW(p)

The worst are semi-routine activities; the kind of things you need to do sometimes but not frequently enough to mesh with the daily routine. Going to the bank, making most appointments, looking for an apartment, buying clothes (don't ask me why food is okay but clothes aren't). That list is expanding.

Other factors that hurt are:

  • need to do in one setting, no way of doing a small part at a time
  • need to go out
  • social situations
  • new situations
  • being watched while I do it (I can't cook because I share the kitchen with other students, but I could if I didn't)
  • having to do it quickly once I start

Most of these cause me fear, which makes it harder to do things, rather than making it harder directly.

Replies from: jimrandomh, AdeleneDawner, Kutta
comment by jimrandomh · 2010-03-02T21:00:05.717Z · LW(p) · GW(p)

This matches my experience very closely. One observation I'd like to add is that one of my strongest triggers for procrastination spirals is having a task repeatedly brought to my attention in a context where it's impossible to follow through on it - i.e., reminders to do things from well-intentioned friends, delivered at inappropriate times. For example, if someone reminds me to get some car maintenance done, the fact that I obviously can't go do it right then means it gets mentally tagged as a wrong course of action, and then later, when I really ought to do it, the tag is still there.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T21:13:19.794Z · LW(p) · GW(p)

Definitely. So that's why I can't do the stuff I should have done a while ago! Thanks for the insight. What works for you?

Replies from: jimrandomh
comment by jimrandomh · 2010-03-02T21:33:54.980Z · LW(p) · GW(p)

I ended up just explaining the issue to the person who was generating most of the reminders. It wasn't an easy conversation to have (it can sound like being ungrateful and passing blame) but it was definitely necessary. Sending a link to this thread and then bringing it up later seems like it'd mitigate that problem, so that's probably the way to go.

Note that it's very important to draw a distinction between things you haven't done because you've forgotten, for which reminders can actually be helpful, and things you aren't doing because of lack of motivation, for which reminders are harmful.

If you're reading this because a chronic procrastinator sent you a link, then please take this one piece of advice: The very worst thing you can do is remind them every time you speak. If you do that, you will not only reduce the chance that they'll actually do it, you'll also poison your relationship with them by getting yourself mentally classified as a nag.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T21:45:16.007Z · LW(p) · GW(p)

I can't do that, but thanks anyway. A good deal of the reminders happen in a (semi-)professional context where the top priority is pretending to be normal (yes, my priorities are screwed up). Most others come from a person who doesn't react to "this thing you do is causing me physical pain", so forget it.

Replies from: Alicorn, jimrandomh
comment by Alicorn · 2010-03-02T21:47:55.412Z · LW(p) · GW(p)

a person who doesn't react to "this thing you do is causing me physical pain"

Why do you interact with this person?

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T21:51:03.971Z · LW(p) · GW(p)

They're family. I planned to become independent from the family ASAP, but couldn't due to my worsening problems.

comment by jimrandomh · 2010-03-02T22:28:47.162Z · LW(p) · GW(p)

I can't do that, but thanks anyway. A good deal of the reminders happen in a (semi-)professional context

In that case, you'll have to mindhack yourself to change the way you react to reminders like this. This isn't necessarily easy, but if you pull it off it's a one-time act with results that stick with you.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-03-03T12:29:17.170Z · LW(p) · GW(p)

That's a good change to make, and there's also a complementary third option: A specific variant of 'making a mental note' that seems to work very well, at least for me.

1) Determine a point in your regular or planned schedule where you could divert from your regular schedule to do the thing that you need to do. This doesn't have to be the optimal point of departure, just a workable one; you should naturally learn how to spot better points of departure as time goes on, but it's more important to have a point of departure than it is to have a perfect one. It is, however, important that the point of departure is a task during which you will be thinking, rather than being on autopilot. I like to use doorway passages as my points of departure (for example, 'when I get home from running the errands I'm going to do tomorrow, and go to open my front door') because they tend to be natural transition times, but there are many other options. (Other favorites are 'next time I see a certain person' and 'when I finish (or start) a certain task'.)

2) Envision what you would perceive as you entered that situation, using whatever visualization method most closely matches your normal way of paying attention to the world. I tend to use my senses of sight and touch most, so I might visualize what I'd see as I walked up to my front door, or the feel of holding my keys as I got ready to open it.

3) Envision yourself suddenly and strongly remembering your task in the situation you envisioned in step two. It may also work, if you aren't able to envision your thoughts like that, to visualize yourself taking the first few task-specific steps - for example, if the task is to write an email, you'd want to visualize not just turning on your computer or starting up your email program, but entering the recipient's name into the to: field and writing the greeting.

If this works for you like it works for me, it should cause the appropriate thought (or task, if you used that variant of step 3) to be triggered at a useful time, and with practice it only takes a few moments to set up, so you can ask the person giving you the reminder to give you a moment to make a mental note of it, and then move on with the conversation. Also, if you do have a trigger like this set up for a given task, it gives you a very good response to repeated reminders: "Yes, I know; I'm planning to do that at whatever particular point in time."

A further advantage is that since this method causes the reminder to be triggered by something that will happen automatically anyway, you don't have to keep thinking about it; in fact, I've found that my memory will be triggered more reliably when I haven't worried about the task in the meantime. And if you can let the task go until the trigger reminds you of it, that will reduce the cognitive load that you're carrying, as well.

There is a noteworthy concern with this method, though: It can make you reliant on your schedule staying consistent. If I have plans to run errands, for example, and add a trigger to go off when I get home from that, then I can't change my plans without interfering with the trigger - and if the trigger is set for when I come home from the errands, I may not even remember that I had it set at all when I decide to change my errand plans. There are a few ways to work around that; I go with a combination of having a separate mental to-do list as a backup (which I strictly only refer to during mental downtime, and never try to work from directly: another cognitive-resource saving mechanism), and sometimes using a daily review of what I was intending to get done that day, with brief visualizations of all of the transition points where I'm likely to have had a trigger that wasn't triggered. ("Ok, I was going to get on my bike and go to the craft store and the grocery store, and then bike home, and then... bugger.")

Overall, I've found this to work very well, though.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-10T09:19:26.052Z · LW(p) · GW(p)

I'm doing this wrong. How do you prevent tasks from nagging you at other times?

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-03-10T14:34:49.901Z · LW(p) · GW(p)

The technique should work even if you find yourself thinking about the task at other times; it just might not work as well, because of the effect that jimrandomh mentioned about reminders reducing your inclination to do something. A variation of the workaround I mentioned for dealing with others works to mitigate the effect of self-reminders, though - don't just tell yourself 'not right now', tell yourself 'not right now, but at [time/event]'.

I can't say much about how to disable involuntary self-reminders altogether, unfortunately. I don't experience them, and if I ever did, it was long enough ago that I've forgotten both that I did and how I stopped. I have, however, read in several different places that using a reliable reminder system (whether one like I'm suggesting, or something more formal like a written or typed list, or whatever) tends to make them eventually stop happening without any particular effort, as the relevant brain-bits learn that the reliable system is in fact reliable, which seems quite plausible to me.

comment by AdeleneDawner · 2010-03-02T19:48:26.753Z · LW(p) · GW(p)

That sounds like a cognitive-load issue at least as much as it sounds like inertia, to me. (Except the being-watched part, that is. I have that quirk too, and I still haven't figured out what that's about.) There are things that can be done about that, but most of them are minor tweaks that would need to be personalized for you. I suspect I might have some useful things to say about the fear, too. I'll PM you my contact info.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T19:56:14.292Z · LW(p) · GW(p)

What do you mean by "cognitive load"? I read the Wikipedia article on cognitive load theory, but I don't see the connection.

For me, the being-watched part is about embarrassment. I often need to stop and examine a situation and explicitly model it, when most people would just go ahead naturally. Awkward looks cause anxiety.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-03-02T20:14:28.723Z · LW(p) · GW(p)

The concept I'm talking about is broader than the one Wikipedia describes; it's the general idea that brains only have so many resources to go around, that some brains have fewer resources than others or find certain tasks more costly than others, and that it takes a while for those resources to regenerate. Something like this idea has come up a few times here, mostly regarding willpower specifically (and we've found studies supporting it in that case), but my experience is that it's much more generally applicable than that.

And, if your brain regenerates that resource particularly slowly, and if you haven't been thinking in terms of conserving that limited resource (or set of resources, depending on how exactly you're modeling it), it's fairly easy to set yourself up with a lifestyle that uses the resource faster than it can regenerate, which has pretty much the effect you described. (I've experienced it, too, and it's not an uncommon situation to hear about in the autistic community.)

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T20:32:02.227Z · LW(p) · GW(p)

Yes! It does feel like running out of a scarce resource most people have in heaps. I don't know exactly how that resource is generated and how to tell how much I have left before I run out, though.

Replies from: AdeleneDawner, Unnamed, NancyLebovitz
comment by AdeleneDawner · 2010-03-02T20:41:43.413Z · LW(p) · GW(p)

Fortunately, the latter at least seems to be a learnable skill for most people. :)

comment by Unnamed · 2010-03-16T09:13:51.535Z · LW(p) · GW(p)

There is evidence linking people's limited resources for thought and willpower to their blood glucose, which is another good reason to see a doctor to find out if there's something physiological underlying some of your problems.

comment by NancyLebovitz · 2010-03-04T04:56:53.043Z · LW(p) · GW(p)

Does thinking about having less of that resource than other people tend to consume it?

Replies from: MixedNuts
comment by MixedNuts · 2010-03-04T14:43:20.712Z · LW(p) · GW(p)

That's a good question. There is a correlation between running out of it and thinking about it, but it's pretty obvious that most of the causation happens the other way around. Talking about it here doesn't seem to hurt, so probably not.

comment by Kutta · 2010-03-03T12:26:00.314Z · LW(p) · GW(p)

I have a couple of questions, MixedNuts:

  • Have you ever been to a therapist?
  • What kind of history do you have regarding any medical conditions?
  • What kind of diagnostic information do you currently have? (blood profile, expert assessment, hair analysis, etc.)
  • What kind of drugs have you been taking, if any?
  • What does your diet look like?

Replies from: MixedNuts
comment by MixedNuts · 2010-03-03T12:50:08.897Z · LW(p) · GW(p)

  • I have, for a few months, about a year and a half ago. It was slightly effective. I stopped when I moved and couldn't get myself to call again.
  • Nothing that looks like it should matter.
  • Not much. I had a routine blood test some years ago. Everything was normal, though they probably only measured a few things.
  • No prescription drugs.
  • When I'm on campus I eat mostly vegetables, fresh or canned, and some canned fish or meat, and generic cafeteria food (balanced diet plus a heap of French fries); nothing that requires a lot of effort. At my parents', I eat, um, traditional wholesome food. I eat a lot between meals for comfort, mostly apples. I think my diet is fine in quality but terrible in quantity; I eat way too much and skip meals at random.

Replies from: CronoDAS, blogospheroid
comment by CronoDAS · 2010-03-05T02:02:41.665Z · LW(p) · GW(p)

Given your symptoms, the best advice I can give you is to see a medical doctor of some kind, probably a psychiatrist, and describe your problems. It has to be someone who can order medical tests and write prescriptions. You might very well have a thyroid problem - they cause all kinds of problems with energy and such - and you need someone who can diagnose them. I don't know how to get you to a doctor's office, but I guess you could ask someone else to take you?

comment by blogospheroid · 2010-03-05T01:05:50.025Z · LW(p) · GW(p)

How much fresh citrus fruit is there in your diet?

One of the things that helped me with near-depression symptoms when I was in another country was eating fresh fruit. Apples and pears helped me, but you're already having apples. Hmm...

Try some fresh orange/lemon/sweet lime/grapefruit juices. Might help.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-05T01:26:48.309Z · LW(p) · GW(p)

Quite a lot, but possibly too sporadically. I'll try it, thanks.

comment by MrHen · 2010-03-02T19:36:29.080Z · LW(p) · GW(p)

Okay. Nothing I have will help you. My problems are generally OCD-based procrastination loops or modifying bad habits and rituals. Solutions to these assume impulses to do things.

I have nothing that would provide you with impulses to do.

All of my interpretations of "I can't do X" assume what I mean when I tell myself I can't do X.

Sorry. If I were actually there I could probably come up with something but I highly doubt I would be able to "see" you well enough through text to be able to find a relevant answer.

comment by Unnamed · 2010-03-16T09:09:10.495Z · LW(p) · GW(p)

The number one piece of advice that I can give is see a doctor. Not a psychologist or psychiatrist - just a medical doctor. Tell them your main symptoms (low energy, difficulty focusing, panic attacks) and have them run some tests. Those types of problems can have physical, medical causes (including conditions involving the thyroid or blood sugar - hyperthyroidism & hypoglycemia). If a medical problem is a big part of what's happening, you need to get it taken care of.

If you're having trouble getting yourself to the doctor, then you need to find a way to do it. Can you ask someone for help? Would a family member help you set up a doctor's appointment and help get you there? A friend? You might even be able to find someone on Less Wrong who lives near you and could help.

My second and third suggestions would be to find a friend or family member who can give you more support and help (talking about your issues, driving you to appointments, etc.) and to start seeing a therapist again (and find a good one - someone who uses cognitive-behavioral therapy).

Replies from: MixedNuts
comment by MixedNuts · 2010-03-20T21:52:18.481Z · LW(p) · GW(p)

This is technically a good idea. What counts as "my main symptoms", though? The ones that make life most difficult? The ones that occur most often? The most visible ones to others? To me?

Replies from: Unnamed
comment by Unnamed · 2010-04-02T05:59:05.404Z · LW(p) · GW(p)

You'll want to give the doctor a sense of what's going on with you (just like you've done here), and then to help them find any medical issues that may be causing your problems. So give an overall description of the problem and how serious it is (sort of like in your initial post - your lack of energy, inability to do things, and lots of related problems) - including some examples or specifics (like these) can help make that clearer. And be sure to describe anything that seems like it could be physiological (the three that stuck out to me were lack of energy, difficulty focusing, and anxiety / panic attacks - you might be able to think of some others).

The doctor will have questions which will help guide the conversation, and you can always ask whether they want more details about something. Do you think that figuring out what to say to the doctor could be a barrier for you? If so, let me know - I could say more about it.

comment by knb · 2010-03-03T21:11:59.747Z · LW(p) · GW(p)

I recommend a counseling psychologist rather than a psychiatrist. Or, if you can manage it, do both.

I used to be just like this; I actually put off applying for college until I missed the deadlines for my favorite schools, just because I couldn't get myself started. Something changed for me over the last couple of years, though, and I'm now really thriving. One big thing that helps in the short term is stimulants: ephedrine and caffeine are OTC in most countries. Make sure you learn how to cycle them if you do decide to use them. Things seem to get easier over time.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-03T23:38:36.832Z · LW(p) · GW(p)

Why? (The psychiatrist is the one who's a psychologist but can also give you meds, right?)

Caffeine seems to work at least a little, but makes me anxious; it's almost always worth it. Thanks. Ephedrine is illegal in France.

ETA: Actually, scratch that. I tried drinking coffee and soda when I wasn't unusually relaxed, and the anxiety is too extreme for the caffeine to make me more productive.

Replies from: Alicorn, CronoDAS, knb, orthonormal, wedrifid
comment by Alicorn · 2010-03-03T23:41:58.372Z · LW(p) · GW(p)

A psychiatrist is someone who went to medical school and specialized in the brain. A psychologist is someone who has a PhD in psychology. Putting "clinical" before either means they treat patients; "experimental" means what it sounds like. There's some crosstraining, but not as much as one might imagine. ("Therapist" and "counselor" imply no specific degree.)

Replies from: knb
comment by knb · 2010-03-04T05:17:40.224Z · LW(p) · GW(p)

Some common misconceptions:

Counseling Psychology is a very specific degree program within psychology. A psychologist can have a PhD, a PsyD (doctor of psychology degree), or, in some fields, even a master's.

Psychiatrists also don't specialize in "the brain" (that's neurology), they specialize in treating psychiatric disorders using the medical model.

comment by CronoDAS · 2010-03-05T02:17:24.439Z · LW(p) · GW(p)

See the psychiatrist first. Your problems may have a physiological cause, such as a problem with your thyroid, and a medical doctor is more likely to be able to diagnose them.

comment by knb · 2010-03-04T05:19:02.305Z · LW(p) · GW(p)

(Note: I'm a psychology grad student, my undergrad work was in neuroscience and psychology.)

Psychiatrists (in America at least) are usually too busy to do much psychotherapy. When they do, get ready to pay big time. It just isn't worth their extremely valuable time, and in any case it isn't their specialty.

You don't want to see a clinical psychologist because they treat people with diagnosable psych. disorders. You may have melancholic depression, but it sounds like you just have extreme akrasia issues. If you go to a psychiatrist first, they'll likely just try to give you worthless SSRIs.

comment by orthonormal · 2010-03-04T01:48:21.778Z · LW(p) · GW(p)

Psychologists are for that reason often cheaper. In fact, a counseling psychologist in a training clinic can be downright affordable, and most of the benefits of therapy seem to be independent of the therapist anyway.

Also, it would be worth checking for data on the effectiveness of a psychiatric drug before spending on it; many may be ineffective or not worth the side effects.

Replies from: MixedNuts, wedrifid
comment by MixedNuts · 2010-03-04T03:10:26.374Z · LW(p) · GW(p)

Is Crazy meds as good as it looks?

Replies from: wedrifid
comment by wedrifid · 2010-03-04T06:23:40.227Z · LW(p) · GW(p)

Absolutely. Just reading it made my day! Hilarious. (And the info isn't bad either.)

comment by wedrifid · 2010-03-04T03:12:05.788Z · LW(p) · GW(p)

In fact, a counseling psychologist in a training clinic can be downright affordable

And if you live in Australia can sometimes be free!

comment by wedrifid · 2010-03-04T01:05:06.390Z · LW(p) · GW(p)

(Suggest seeing a psychiatrist first then a psychologist. Therapy works far better once your brain is functioning. Usually just go to a doctor and they will refer you as appropriate.)

comment by whpearson · 2010-03-03T11:49:27.316Z · LW(p) · GW(p)

Do you want a companion of some sort?

If so, a mind hack that might work is imagining what a hypothetical companion might find attractive in a person. Then try to become that person. Do this by using your hypothetical companion as a filter on what you are doing. Don't beat yourself up about not doing what the hypothetical companion would find attractive; that isn't attractive!

Your hypothetical companion does not have to be neurotypical but should be someone you would want to be around.

We should be good at acting on these kinds of motivations, as we have a long history of adjusting our behaviour to attract mates.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-03T12:27:56.337Z · LW(p) · GW(p)

I've sort of considered that, though not framed that way. It might be useful later, but not at my current level. Thanks.

comment by Mitchell_Porter · 2010-03-03T03:08:19.546Z · LW(p) · GW(p)

Maybe you need to go more crazy, not less. Accept that you are in an existential desert and your soul is dying. But there are other places over the horizon, where you may or may not be better off. So either you die where you are, or you pick a direction, crawl, and see if you end up somewhere better.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-03T10:58:02.019Z · LW(p) · GW(p)

I've considered that. There are changes in circumstances that would effect positive changes in my mental state, like hopping on the first train to a faraway town or just stopping pretending I'm normal in public. I'd be much happier, until I ran out of money.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-03-04T01:24:36.597Z · LW(p) · GW(p)

Why would you run out of money if you stopped pretending you're normal?

Replies from: MixedNuts
comment by MixedNuts · 2010-03-04T03:13:08.959Z · LW(p) · GW(p)

I couldn't go to school or get a job. If I stay in school, I have a career ahead of me if I can pursue it.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-03-04T03:18:20.533Z · LW(p) · GW(p)

What is this abnormality you have which, if you displayed it, would make it impossible to go to school or get a job?

Replies from: MixedNuts, Kevin
comment by MixedNuts · 2010-03-04T20:09:00.314Z · LW(p) · GW(p)

Not one big abnormality. Inability to work for long stretches of time (you can get good at faking). Trouble focusing at random-ish times (even easier to fake). Inability to do certain things out of routine (now I pretend I'll do it later). Extreme anxiety at things like paperwork. Panic attacks (I can delay them until I'm alone, but the cost is high). Sometimes after a panic attack my legs refuse to work, so I just sit there; I could crawl, but I don't in public. Stimming (I choose consciously to do it, but the effects of not doing it when it's needed are bad; I do it as discreetly as possible while still effective).

Replies from: CronoDAS
comment by CronoDAS · 2010-03-05T02:09:40.409Z · LW(p) · GW(p)

Extreme anxiety at things like paperwork. Panic attacks (I can delay them until I'm alone, but the cost is high).

Panic attacks are a very treatable illness. See a medical doctor and tell him or her all about this.

comment by Kevin · 2010-03-04T14:48:33.125Z · LW(p) · GW(p)

Not wanting to go to school or get a job?

Replies from: MixedNuts
comment by MixedNuts · 2010-03-04T20:10:48.656Z · LW(p) · GW(p)

Nice try.

I do, very much; I want a job so I can get money so I can do things (such as, you know, saving the world). I don't particularly like schooling but it helps get jobs, and has less variance than being an autodidact.

comment by Jack · 2010-03-02T20:36:46.932Z · LW(p) · GW(p)

I imagine a specific authority in my life or from my past (okay, this is usually my mother) getting really angry and yelling at me to get my ass up and get to work. If you have any memories of being yelled at by an authority figure, use those to help build the image.

Replies from: MixedNuts
comment by MixedNuts · 2010-03-02T20:46:33.861Z · LW(p) · GW(p)

I promise to give this an honest try, but I expect it to result in panic more than anything.

Replies from: h-H
comment by h-H · 2010-03-03T00:21:35.278Z · LW(p) · GW(p)

try this http://www.antiprocrastinator.com/

also, contact someone who is proficient in helping people (e.g., here we have Alicorn), or try some googling.

Replies from: MixedNuts, Alicorn
comment by MixedNuts · 2010-03-03T01:23:38.640Z · LW(p) · GW(p)

I'm desperate enough to ask on LW. Of course I've Googled everything I could think of.

The link is decent, combining two good tricks and a valuable insight, but all three have been on LW before so I knew them.

Pointing out Alicorn in particular may be useful, but isn't it sort of forcing her to offer help? She already did, though, which makes this point moot.

Replies from: h-H
comment by h-H · 2010-03-03T02:14:53.905Z · LW(p) · GW(p)

I more or less meant directing a question to her and seeing what happens, rather than imposing and bugging her repeatedly, which I had a feeling you wouldn't do in either case.

comment by Alicorn · 2010-03-03T00:49:00.644Z · LW(p) · GW(p)

I'm flattered, but while I enjoy helping people, I'm not sure how I've projected being proficient at it such that you'd notice - can you explain whence this charming compliment?

Replies from: h-H
comment by h-H · 2010-03-03T01:23:02.794Z · LW(p) · GW(p)

why of course! I've been lurking for a few years now, so I remember when you began posting on self-help etc. now that I think more about it though, I might've had pjeby in mind as well; you two sort of 'merged' when I wrote that above comment, heh

but really, proficient is just a word choice, I guess it is flattery, and I did mean to signal you, but that's how I usually write.

apologies if that overburdened you in any way..

ETA: oh and I'd meant to write 'more proficient', not just 'proficient'.

comment by markrkrebs · 2010-03-03T15:20:33.981Z · LW(p) · GW(p)

I suggest you pay me $50 for each week you don't get and hold a job. Else, avoid paying me by getting one, and save yourself 6 mo x 4 wk/mo x $50 - $100 = $1,100! Wooo! What a deal for us both, eh?

Replies from: MixedNuts
comment by MixedNuts · 2010-03-03T16:03:58.855Z · LW(p) · GW(p)

That's an amusing idea, but disincentives don't work well, and paying money is too Far a disincentive to work (now, if you followed me around and punched me, that might do the trick).

This reminds me of the joke about a beggar who asks Rothschild for money. Rothschild thinks and says, "A janitor is retiring next week; you can have their job and I'll double the pay," and the beggar replies, "Don't bother, I have a cousin who can do it for the original wage; just give me the difference!"

comment by Daniel_Burfoot · 2010-03-01T14:09:58.413Z · LW(p) · GW(p)

Has anyone had any success applying rationalist principles to Major Life Decisions? I am facing one of those now, and am finding it impossible to apply rationalist ideas (maybe I'm just doing something wrong).

One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.

Weirdly, the most convincing argument I've contemplated so far is basically a "what would X do?" style analysis, where X is a fictional character.

Replies from: Kaj_Sotala, orthonormal, Dagon, Eliezer_Yudkowsky, MrHen, Morendil, RobinZ, Jordan, RobinZ, gimpf
comment by Kaj_Sotala · 2010-03-01T15:15:37.035Z · LW(p) · GW(p)

It feels to me that rationalist principles are most useful in avoiding failure modes. But they're much less useful in coming up with new things you should do (as opposed to specifying things you shouldn't do).

comment by orthonormal · 2010-03-02T01:43:30.489Z · LW(p) · GW(p)

I'd start by asking whether the unknowns of the problem are primarily social and psychological, or whether they include things that the human intuition doesn't handle well (like large numbers).

If it's the former, then good news! This is basically the sort of problem your frontal cortex is optimized to solve. In fact, you probably unconsciously know what the best choice is already, and you might be feeling conflicted so as to preserve your conscious image of yourself (since you'll probably have to trade off conscious values in such a choice, which we're never happy to do).

In such a case, you can speed up the process substantially by finding some way of "letting the choice be made for you" and thus absolving you of so much responsibility. I actually like to flip a coin when I've thought for a while and am feeling conflicted. If I like the way it lands, then I do that. If I don't like the way it lands, well, I have my answer then, and in that case I can just disobey the coin!

(I've realized that one element of the historical success of divination, astrology, and all other vague soothsaying is that the seeker can interpret a vague omen as telling them what they wanted to hear— thus giving divine sanction to it, and removing any human responsibility. By thus revealing one's wants and giving one permission to seek them, these superstitions may have actually helped people make better decisions throughout history! That doesn't mean it needs the superstitious bits in order to work, though.)

If it's the latter case, though, you probably need good specific advice from a rational friend. Actually, that practically never hurts.

comment by Dagon · 2010-03-01T18:41:57.470Z · LW(p) · GW(p)

A few principles that can help in such cases (major decision, very little direct data):

  • Outside view. You're probably more similar to other people than you like to think. What has worked for them?
  • Far vs Near mode: beware of generalizations when visualizing distant (more than a few weeks!) results of a choice. Consider what daily activities will be like.
  • Avoiding oversimplified modeling: With the exceptions of procreation and suicide, there are almost no life decisions that are permanent and unchangeable.
  • Shut up and multiply, even for yourself: Many times it turns out that minor-but-frequent issues dominate your happiness. Weight your pros/cons for future choices based on this, not just on how important something "should" be.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-01T15:59:07.660Z · LW(p) · GW(p)

...I don't suppose you can tell us what? I expect that if you could, you would have said, but thought I'd ask. It's difficult to work with this little.

I could toss around advices like "A lot of Major Life Decisions consist of deciding which of two high standards you should hold yourself to" but it's just a shot in the dark at this point.

comment by MrHen · 2010-03-01T19:21:42.331Z · LW(p) · GW(p)

I am not that far in the sequences, but these are posts I would expect to come into play during Major Life Decisions. These are ordered by my perceived relevance and accompanied with a cool quote. (The quotes are not replacements for the whole article, however. If the connection isn't obvious feel free to skim the article again.)

To do better, ask yourself straight out: If I saw that there was a superior alternative to my current policy, would I be glad in the depths of my heart, or would I feel a tiny flash of reluctance before I let go? If the answers are "no" and "yes", beware that you may not have searched for a Third Alternative. ~ The Third Alternative

The moral is that the decision to terminate a search procedure (temporarily or permanently) is, like the search procedure itself, subject to bias and hidden motives. You should suspect motivated stopping when you close off search, after coming to a comfortable conclusion, and yet there's a lot of fast cheap evidence you haven't gathered yet - Web sites you could visit, counter-counter arguments you could consider, or you haven't closed your eyes for five minutes by the clock trying to think of a better option. You should suspect motivated continuation when some evidence is leaning in a way you don't like, but you decide that more evidence is needed - expensive evidence that you know you can't gather anytime soon, as opposed to something you're going to look up on Google in 30 minutes - before you'll have to do anything uncomfortable. ~ Motivated Stopping and Continuation

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can't yet guess what our answer will be; thus giving our intelligence a longer time in which to act. ~ Hold Off On Proposing Solutions

"Rationality" is the forward flow that gathers evidence, weighs it, and outputs a conclusion. [...] "Rationalization" is a backward flow from conclusion to selected evidence.
~ Rationalization

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts. If your car makes metallic squealing noises when you brake, and you aren't willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing. ~ The Bottom Line

Hope that helps.

comment by Morendil · 2010-03-01T14:32:39.402Z · LW(p) · GW(p)

One problem is that I just don't have enough "evidence" to make meaningful probability estimates. Another is that I'm only weakly aware of my own utility function.

Based on those two lucid observations, I'd say you're doing well so far.

There are some principles I used to weigh major life decisions. I'm not sure they are "rationalist" principles; I don't much care. They've turned out well for me.

Here's one of them: "having one option is called a trap; having two options is a dilemma; three or more is truly a choice". Think about the terms of your decision and generate as many different options as you can. Not necessarily a list of final choices, but rather a list of candidate choices, or even of choice-components.

If you could wave a magic wand and have whatever you wanted, what would be at the top of your list? (This is a mind-trick to improve awareness of your desires, or "utility function" if you want to use that term.) What options, irrespective of their downsides, give you those results?

Given a more complete list you can use the good old Benjamin Franklin method of listing pros and cons of each choice. Often this first step of option generation turns out sufficient to get you unstuck anyway.

Replies from: None
comment by [deleted] · 2010-03-01T21:48:56.989Z · LW(p) · GW(p)

Having two options is a dilemma, having three options is a trilemma, having four options is a tetralemma, having five options is a pentalemma...

:)

Replies from: Cyan
comment by Cyan · 2010-03-01T22:10:04.375Z · LW(p) · GW(p)

A few more than five is an oligolemma; many more is a polylemma.

Replies from: knb
comment by knb · 2010-03-02T07:31:12.142Z · LW(p) · GW(p)

Many more is called perfect competition. :3

comment by RobinZ · 2010-03-05T15:50:39.861Z · LW(p) · GW(p)

Just remembered: I managed not to be stupid once or twice by asking whether, not why.

comment by Jordan · 2010-03-01T22:27:53.792Z · LW(p) · GW(p)

I just came out of a tough Major Life Situation myself. The rationality 'tools' I used were mostly directed at forcing myself to be honest with myself, confronting the facts, not privileging certain decisions over others, recognizing when I was becoming emotional (and more importantly recognizing when my emotions were affecting my judgement), tracking my preferred choice over time and noticing correlations with my mood and pertinent events.

Overall, less like decision theory and more like a science: trying to cut away confounding factors to discover my true desire. Of course, sometimes knowing your desires isn't sufficient to take action, but I find that for many personal choices it is (or at least is enough to reduce the decision theory component to something much more manageable).

comment by RobinZ · 2010-03-01T19:49:17.537Z · LW(p) · GW(p)

The dissolving the question mindset has actually served me pretty well as a TA - just bearing in mind the principle that you should determine what led to this particular confused bottom line is useful in correcting it afterwards.

comment by gimpf · 2010-03-01T21:19:49.242Z · LW(p) · GW(p)

Well, what are "major" life decisions? Working in the area of Friendly AGI instead of, say, just String Theory? Quitting smoking? Or things like having a child or not?

As one may guess from those questions, I did not have any more success coercing the Bayesian monster than I would have had by just doing the things that already seemed well supported by major pop-science newspaper articles.

What I do know is that, although it is difficult to get information on what to do next in my particular situation, it seems much easier to get information on things many people already do. I just try to make an educated guess and say that nearly everybody does many of the things which many people do.

And often enough one can find things which one does but which should not be done. It may sound silly, but I include things like not smoking, not talking to your friends when you're depressed (writing personal notes works better, as friends seem to reinforce the bad mood), and not trying to work as a researcher (y'know, 80% of people think they are above average...).

What you describe as "X, the fictional character", seems like setting up an in-brain story to think about difficult topics which require analytical thinking, helping to concentrate on one topic by actively blocking random interference of visual/auditory ideas.

This is not a "convincing argument" (maybe it's just my English skills, but "convincing argument ... what would X do" just does not parse into something meaningful for me) but just a technique. Similar to concentrating on breathing, or muscle tonus, or your thoughts, or some real or imaginary candle or smell, when executing the meditation of your preference.

comment by whpearson · 2010-03-01T12:24:10.990Z · LW(p) · GW(p)

Pigeons can solve Monty Hall (MHD)?

A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy.

Behind a paywall

Replies from: toto
comment by toto · 2010-03-01T14:24:10.829Z · LW(p) · GW(p)

Behind a paywall

But freely available from one of the authors' website.

Basically, pigeons also start with a slight bias towards keeping their initial choice. However, they find it much easier to "learn to switch" than humans, even when humans are faced with a learning environment as similar as possible to that of pigeons (neutral descriptions, etc.). Not sure how interesting that is.
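For anyone who wants to check the switch/stay payoffs directly, here is a minimal simulation sketch in Python (the door encoding and trial count are just illustrative assumptions, not anything from the paper):

    import random

    def play(switch, trials=100000):
        wins = 0
        for _ in range(trials):
            prize = random.randrange(3)   # prize behind one of three doors
            pick = random.randrange(3)    # contestant's initial choice
            # host opens a door that is neither the pick nor the prize
            opened = next(d for d in range(3) if d != pick and d != prize)
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == prize)
        return wins / trials

    print("stay:  ", play(switch=False))   # converges to ~1/3
    print("switch:", play(switch=True))    # converges to ~2/3

Running it makes the asymmetry the birds learned very visible: staying wins about a third of the time, switching about two thirds.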

comment by Peter_de_Blanc · 2010-03-08T00:39:21.481Z · LW(p) · GW(p)

How much information is preserved by plastination? Is it a reasonable alternative to cryonics?

Replies from: Jack, ciphergoth
comment by Jack · 2010-03-08T03:04:49.418Z · LW(p) · GW(p)

Afaict pretty much the same amount as cryonics. And it is cheaper and more amenable to laser scanning. This is helpful. The post has an interesting explanation of why all the attention is on cryo:

Freezing has a certain subjective appeal. We freeze foods and rewarm them to eat. We read stories about children who have fallen into ice cold water and survived for hours without breathing. We know that human sperm, eggs, and even embryos can be frozen and thawed without harm. Freezing seems intuitively reversible and complete. Perhaps this is why cryonics quickly attained, and has kept, its singular appeal for life extensionists.

By contrast, we tend to associate chemical preservation with processes that are particularly irreversible and inadequate. Corpses are embalmed to prevent decay for only a short time. Taxidermists make deceased animals look alive, although most of their body parts are missing or transformed. “Plastinated” cadavers are used to demonstrate surface anatomy in schools and museums. No wonder, then, that cryonicists routinely dismiss chemopreservation as a truly bad idea.

Edit: Further googling suggests there might be some unsolved implementation issues.

comment by MrHen · 2010-03-01T22:40:41.040Z · LW(p) · GW(p)

This was in my drafts folder, but due to the lackluster performance of my latest few posts I decided it doesn't deserve to be a top-level post. As such, I am making it a comment here. It also does not answer the question being asked, so it probably wouldn't have made the cut even if my last few posts had been voted to +20 and promoted... but whatever. :P


Perceived Change

Once, I was dealing a game of poker for some friends. After dealing some but not all of the cards, I cut the deck and continued dealing. This irritated them a great deal, because by altering the order of the deck I ensured that some players would not receive the cards they were supposed to be dealt. One of the friends happened to be majoring in Mathematics and understood probability as well as anyone else at the table. Even he thought what I did was wrong.

I explained that the cut didn't matter because everyone still had the same odds of receiving any particular card from the deck. His retort was that it did matter, because the card he was going to get was now near the middle of the deck. Instead of that particular random card he would get a different particular random card. As such, I should not have cut the deck.

During the ensuing arguments I found myself constantly presented with the following point: The fact of the game is that he would have received a certain card and now he will receive a different card. Shouldn’t this matter? People seem to hold grudges when someone swaps random chances of an outcome and the swap changes who wins.

The problem with this objection is illustrated if I secretly cut the cards. If they had no reason to believe I cut the deck, they wouldn't complain. Furthermore, it is completely impossible to perceive the change by studying before-and-after states of the probabilities. More clearly, if I put the cards under the table and threatened to cut them, my friends would have no way of knowing whether or not I actually did. This implies that the change itself is not the sole cause of complaint. The change must be accompanied by the knowledge that something was changed.

The big catch is that the change itself isn’t actually necessary at all. If I simply tell my friends that I cut the cards when they were not looking they will be just as upset. They have perceived a change in the situation. In reality, every card is in exactly the same position and they will be dealt what they think they should have been dealt. But now even that has changed. Now they actually think the exact opposite. Even though nothing about the deck has been changed, they now think that the cards being dealt to them are the wrong cards.

What is this? There has to be some label for this, but I don’t know what it is or what the next step in this observation should be. Something is seriously, obviously wrong. What is it?


Edit to add:

The underlying problem here is not that they were worried about me cheating. The specific scenario and the arguments that followed from that scenario were such that cheating wasn't really a valid excuse for their objections.
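A quick way to see the "same odds" claim concretely is a minimal Python sketch (assuming a fair shuffle; the numeric card encoding is just for illustration). It shows that the card the first seat receives is uniformly distributed whether or not the dealer cuts:

    import random
    from collections import Counter

    def first_card(cut):
        deck = list(range(52))
        random.shuffle(deck)              # fair shuffle
        if cut:
            k = random.randrange(1, 52)
            deck = deck[k:] + deck[:k]    # cut at a random depth
        return deck[0]                    # the card the first seat receives

    trials = 520000
    for cut in (False, True):
        counts = Counter(first_card(cut) for _ in range(trials))
        # each of the 52 cards appears ~10000 times, cut or no cut
        print("cut" if cut else "no cut", min(counts.values()), max(counts.values()))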

Replies from: RobinZ, orthonormal, rwallace, JGWeissman, Jordan, cousin_it
comment by RobinZ · 2010-03-01T22:48:36.439Z · LW(p) · GW(p)

To venture a guess: their true objection was probably "you didn't follow the rules for dealing cards". And, to be fair to your friends, those rules were designed to defend honest players against card sharps, which makes violations Bayesian grounds to suspect you of cheating.

Replies from: MrHen
comment by MrHen · 2010-03-01T23:08:19.704Z · LW(p) · GW(p)

No, this wasn't their true objection. I have a near flawless reputation for being honest and the arguments that ensued had nothing to do with stacking the deck. If I were a dispassionate third party dealing the game they would have objected just as strongly.

I initially had a second example as such:

Assume my friend and I each purchased a lottery ticket. As the winning number was about to be announced, we willingly traded tickets. If I won, I would not be surprised to be asked to share the winnings because, after all, he chose the winning ticket.

It seems as though some personal attachment is created with the specific random object. Once that object is "taken," there is an associated sense of loss.

Replies from: prase, RobinZ, hugh, gwern
comment by prase · 2010-03-02T16:03:37.272Z · LW(p) · GW(p)

Your reputation doesn't matter. Once the rules are changed, you are on a slippery slope of changing rules. The game slowly ceases to be poker.

When I am playing chess, I demand that White moves first. When I find myself playing Black, knowing that the opponent had White the last game and it is now my turn to make the first move, I would rather change places or rotate the chessboard than make the first move with Black, although it would not change my chances of winning. (I don't remember the standard openings, so I wouldn't be confused by the change of colors. And even if I were, this would be the same for the opponent.)

Rules are rules in order to be respected. They are often quite arbitrary, but you shouldn't change any arbitrary rule during the game without the prior consent of the others, even if it provably has no effect on the winning odds.

I think this is a fairly useful heuristic. Usually, when a player tries to change the rules, he has some reason, and usually the reason is to increase his own chances of winning. Even if your opponent doesn't see any profit you could gain from changing the rules, he may suppose that there is one. Maybe you somehow remember that there are better or worse cards in the middle of the pack. Or you are trying to test their attention. Or you want to make more important changes of rules later and wanted a precedent for doing so. These possibilities are quite realistic in gambling, and therefore it is considered bad manners to change the rules in any way during the game.

Replies from: MrHen, Sniffnoy
comment by MrHen · 2010-03-02T16:20:09.736Z · LW(p) · GW(p)

I don't know how to respond to this. I feel like I have addressed all of these points elsewhere in the comments.

A summary:

  • The poker game is an example. There are more examples involving things with less obvious rules.
  • My reputation matters in the sense that they know I wasn't trying to cheat. As such, when pestered for an answer they are not secretly thinking, "Cheater." This should imply that they are avoiding the cheater-heuristic or are unaware that they are using the cheater-heuristic.
  • I confronted my friends and asked for a reasonable answer. Heuristics were not offered. No one complained about broken rules or cheating. They complained that they were not going to get their card.

It seems to be a problem with ownership. If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride?

One more thing of note: They argued against the abstract scenario. This scenario assumed no cheating and no funny business. They still thought it mattered.

Personally, I think this is a larger issue than catching cheaters. People seemed somewhat attached to the anti-cheating heuristic. Would it be worth me typing up an addendum addressing that point in full?

Replies from: Nick_Tarleton, prase
comment by Nick_Tarleton · 2010-03-02T16:25:15.009Z · LW(p) · GW(p)

If this sense of ownership is based on a heuristic meant to detect cheaters or suspicious situations... okay, I can buy that. But why would someone who knows all of the probabilities involved refuse to admit that cutting the deck doesn't matter? Pride? ... People seemed somewhat attached to the anti-cheating heuristic.

The System 1 suspicion-detector would be less effective if System 2 could override it, since System 2 can be manipulated.

(Another possibility may be loss aversion, making any change unattractive that guarantees a different outcome without changing the expected value. (I see hugh already mentioned this.) A third, seemingly less likely, possibility is intuitive 'belief' in the agency of the cards, which is somehow being undesirably thwarted by changing the ritual.)

Replies from: MrHen
comment by MrHen · 2010-03-02T16:49:56.822Z · LW(p) · GW(p)

Why can I override mine? What makes me different from my friends? The answer isn't knowledge of math or probabilities.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-03-02T16:59:58.336Z · LW(p) · GW(p)

I really don't know. Unusual mental architecture, like high reflectivity or 'stronger' deliberative relative to non-deliberative motivation? Low paranoia? High trust in logical argument?

comment by prase · 2010-03-02T16:40:21.573Z · LW(p) · GW(p)

People seemed somewhat attached to the anti-cheating heuristic. Would it be worth me typing up an addendum addressing that point in full?

Depends, of course, on what exactly you would say and how unpleasant the writing is for you.

My reputation matters in the sense that they know I wasn't trying to cheat. As such, when pestered for an answer they are not secretly thinking, "Cheater." This should imply that they are avoiding the cheater-heuristic or are unaware that they are using the cheater-heuristic.

I would say that they implement the rule-changing heuristic, which is not automatically thought of as an instance of the cheater-heuristic, even if it evolved from it. Changing the rules makes people feel unsafe; people who do it without good reason are considered dangerous, but not automatically cheaters.

EDIT: And also, from your description it seems that you have deliberately broken a rule without giving any reason for that. It is suspicious.

Replies from: MrHen
comment by MrHen · 2010-03-02T16:54:06.376Z · LW(p) · GW(p)

I would say that they implement the rule-changing heuristic, which is not automatically thought of as an instance of the cheater-heuristic, even if it evolved from it. Changing the rules makes people feel unsafe; people who do it without good reason are considered dangerous, but not automatically cheaters.

This behavior is repeated in scenarios where the rules are not being changed, or where there aren't "rules" in the game sense at all. These examples are significantly fuzzier, which is why I chose the poker example.

The lottery ticket example is the first that comes to mind.

EDIT: And also, from your description it seems that you have deliberately broken a rule without giving any reason for that. It is suspicious.

Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?

Replies from: prase
comment by prase · 2010-03-02T17:26:43.295Z · LW(p) · GW(p)

Why wouldn't the complaint then take the form of, "You broke the rules! Stop it!"?

Because people aren't good at telling their actual reason for disagreement. I suspect that they are aware that the particular rule is arbitrary and doesn't influence the game, and almost everybody agrees that blindly following the rules is not a good idea. So "you broke the rules" doesn't sound like a good justification. "You have influenced the outcome," on the other hand, does sound like a good justification, even if it is irrelevant.

The lottery ticket example is a valid argument, which is easily explained by attachment to random objects and which can't be explained by the rule-changing heuristic. However, rule-fixing sentiments certainly exist, and I am not sure which plays the stronger role in the poker scenario. My intuition was that the poker scenario was more akin to, say, playing tennis in non-white clothes in the old times when white was demanded, or missing the obligatory bow before a judo match.

Now, I am not sure which of these effects is more important in the poker scenario, and moreover I don't see an experiment by which we could discriminate between the explanations.

Replies from: RobinZ, MrHen
comment by RobinZ · 2010-03-02T17:48:02.455Z · LW(p) · GW(p)

Because people aren't good at telling their actual reason for disagreement.

This is the best synopsis of the "true rejection" article I have ever seen.

comment by MrHen · 2010-03-02T17:55:21.664Z · LW(p) · GW(p)

That works for me. I am not convinced that the rule-changing heuristic was the cause but I think you have defended your position adequately.

comment by Sniffnoy · 2010-03-02T16:29:28.300Z · LW(p) · GW(p)

But this isn't a rule of the game - it's an implementation issue. The game is the same so long as cards are randomly selected without replacement from a deck of the appropriate sort.

Replies from: Nick_Tarleton, prase
comment by Nick_Tarleton · 2010-03-02T16:34:44.054Z · LW(p) · GW(p)

(The first Google hit for "texas hold'em rules" in fact mentions burning cards.)

That the game has the same structure either way is recognized only at a more abstract mental level than the level that the negative reaction comes from; in most people, I suspect the abstract level isn't 'strong enough' here to override the more concrete/non-inferential/sphexish level.

comment by prase · 2010-03-02T16:47:19.009Z · LW(p) · GW(p)

The ideal decision algorithm used in the game remains the same, but people don't look at it this way. It is a rule, since it is how they have learned the game.

comment by RobinZ · 2010-03-01T23:32:45.595Z · LW(p) · GW(p)

I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly "they object to any alteration of the dealing rules", and they might do so for the wrong reason - even though, in their defense, valid reasons exist.

Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not.

(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)

Replies from: MrHen
comment by MrHen · 2010-03-02T00:03:42.378Z · LW(p) · GW(p)

EDIT: Wow, this turned into a ramble. I didn't have time to proofread it, so I apologize if it doesn't make sense.

I'm not sure our guesses (I presume you have not tested the lottery ticket swap experimentally) are actually in conflict. My thesis was not "they think you're cheating", but simply, straightforwardly "they object to any alteration of the dealing rules", and they might do so for the wrong reason - even though, in their defense, valid reasons exist.

Okay, yeah, that makes sense. My instinct is pointing me in the other direction, namely because I have the (self-perceived) benefit of knowing which friends of mine were objecting. Of note, no one openly accused me of cheating or anything like that. If I had accidentally dropped the deck on the floor or knocked it over, the complaints would remain. The specific complaint, which I specifically asked for, is that their card was put into the middle of the deck.

(By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.)

Your thesis, being narrow, is definitely of interest, though. I'm trying to think of cases where my thesis, interpreted naturally, would imply the opposite state of objection to yours. Poor shuffling (rule-stickler objects, my-cardist doesn't) might work, but a lot of people don't attend closely to whether cards are well-shuffled, stickler or not.

Any pseudo-random event where (a) people can predict the undisclosed particular random outcome and (b) someone can voluntarily preempt that prediction and change the result tends to provoke the same behavior.

(I presume you have not tested the lottery ticket swap experimentally)

I have not tested it in the sense that I sought to eliminate any form of weird contamination. But I have lots of anecdotal evidence. One such, very true, story:

My grandfather once won at bingo and was offered his choice of prize from a series of stuffed animals. Each animal was accompanied by an envelope containing some amount of cash. Amongst the animals were a turtle and a rhinoceros. Traditionally, he would always choose the turtle because he likes turtles, but this time he picked the rhinoceros because my father happens to like rhinos. The turtle contained more money than the rhino, and my dad got to hear about how he lost my grandfather money.

Granted, there are a handful of obvious holes in this particular story. The list includes:

  • My grandfather could have merely used it as an excuse to jab his son-in-law in the ribs (very likely)
  • My grandfather was lying (not likely)
  • The bingo organizers knew that rhinos were chosen more often than turtles (not likely)
  • My grandfather wasn't very good at probability (likely, considering he was playing bingo)
  • Etc.

More stories like this have taught me never to muck with pseudo-random variables whose outcomes affect things people care about, even if the math behind the mucking doesn't change anything. People who had a lottery ticket and traded it for one with a different but equal chance will get extremely depressed because they actually "had a shot at winning." These people can completely understand the probabilities involved, but somehow this doesn't help them avoid the "what if" depression that tells them they shouldn't have traded tickets.

People do this all the time with things like when they left for work. Decades ago, my mother-in-law put her sister on a bus, and the sister died when the bus crashed. "What if?" has dogged her ever since. The random chance of that particular bus crashing on that particular day has become associated with her completely independent choice to put her sister on the bus. While the two are mathematically independent, that doesn't change the fact that her choice mattered. For some reason, people take this mattering and do things with it that make no sense.

This topic can branch out into really weird places when viewed this way. The classic problem of someone holding 10 people hostage and telling you to kill 1 or all 10 die matches the pattern, with a moral choice instead of random chance. When asked if it is more moral to kill 1 or let the 10 die, people will argue that refusing to kill an innocent will result in 9 more people dying than needed. The decision matters, and this mattering reflects on the moral value of each choice. Whether this is correct or not seems to be in debate, and it is only loosely relevant for this particular topic. I am eagerly looking for the eventual answer to the question, "Are these events related?" But to get there I need to understand the simple scenario, which is the one presented by my original comment.

(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)

I am having trouble understanding this. Can you say it again with different words?

Replies from: RobinZ
comment by RobinZ · 2010-03-02T00:53:13.576Z · LW(p) · GW(p)

Have no fear - your comment is clear.

(By the way, I do not think that claiming arrival at a valid complaint via the wrong reason is offering much defense for my friends.)

I'll give you that one, with a caveat: if an algorithm consistently outputs correct data rather than incorrect, it's a heuristic, not a bias. They lose points either way for failing to provide valid support for their complaint.

I have not tested it in the sense that I sought to eliminate any form of weird contamination. But I have lots of anecdotal evidence. One such, very true, story: [truncated for brevity]

Yes, those anecdotes constitute the sort of data I requested - your hypothesis now outranks mine in my sorting.

(Incidentally, if you had made a top-level post, I would want to see this kind of prediction-based elimination of alternative hypotheses.)

I am having trouble understanding this. Can you say it again with different words?

When I read your initial comment, I felt that you had proposed an overly complicated explanation based on the amount of evidence you presented for it. I felt so based on the fact that I could immediately arrive at a simpler (and more plausible by my prior) explanation which your evidence did not refute. It is impressive, although not necessary, when you can anticipate my plausible hypothesis and present falsifying evidence; it is sufficient, as you have done, to test both hypotheses fairly against additional data when additional hypotheses appear.

Replies from: MrHen
comment by MrHen · 2010-03-02T01:21:32.254Z · LW(p) · GW(p)

Ah, okay. That makes more sense. I am still experimenting with the amount of predictive counter-arguing to use. In the past I have attempted to do so by adding examples that would address the potential objections. This hasn't been terribly successful. I have also directly addressed the points and people still brought them up... so I am pondering how to fix the problem.

But, anyway. The topic at hand still interests me. I assume there is a term for this that matches the behavior. I could come up with some fancy technical definition (perceived present ownership of a potential future ownership) but it seems dumb to make up a term when there is one lurking around somewhere. And the idea of labeling it an ownership problem didn't really occur to me until my conversation with you... so maybe I am answering my own question slowly?

Replies from: thomblake
comment by thomblake · 2010-03-02T16:58:24.087Z · LW(p) · GW(p)

Something like "ownership" seems right, as well as the loss aversion issue. Somehow, this seemingly-irrational behavior seems perfectly natural to me (and I'm familiar with similar complaints about the order of cards coming out). If you look at it from the standpoint of causality and counterfactuals, I think it will snap into place...

Suppose that Tim was waiting for the king of hearts to complete his royal flush, and was about to be dealt that card. Then you cut the deck, putting the king of hearts in the middle. Therefore, you caused him not to get the king of hearts; if your cutting of the deck were surgically removed, he would have had his royal flush.

Presumably, your rejoinder would be that this scenario is just as likely as the one where he would not have gotten the king of hearts but your cutting of the deck gave it to him. But note that in this situation the other players have just as much reason to complain that you caused Tim to win!

Of course, any of them is as likely to have been benefited or hurt by this cut, assuming a uniform distribution of cards, and shuffling is not more or less "random" than shuffling plus cutting.

A digression: But hopefully at this point, you'll realize the difference between the frequentist and Bayesian instincts in this situation. The frequentist would charitably assume that the shuffle guarantees a uniform distribution, so that the cards each have the same probability of appearing on any particular draw. The Bayesian will symmetrically note that shuffling makes everyone involved assign the same probability to each card appearing on any particular draw, due to their ignorance of which ones are more likely. But this only works because everyone involved grants that shuffling has this property. You could imagine someone who paid attention to the shuffle and knew exactly which card was going to come up, and then was duly annoyed when you unexpectedly cut the deck. Given that such a person is possible in principle, there actually is a fact about which card each person 'would have' gotten under a standard method, and so you really did change something by cutting the deck.
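To make that last point concrete, here is a toy Python sketch (the "tracker" who memorized the shuffle is hypothetical): the ignorant player's flat distribution is untouched by the cut, while the tracker's prediction changes deterministically.

    import random

    deck = list(range(52))
    random.shuffle(deck)                     # the tracker watched this happen

    # Ignorant player: credence 1/52 for each card, cut or no cut.
    ignorant = {card: 1 / 52 for card in range(52)}

    before_cut = deck[0]                     # tracker's prediction: certainty
    k = random.randrange(1, 52)
    deck = deck[k:] + deck[:k]               # the unexpected cut
    after_cut = deck[0]                      # tracker's new certain prediction

    print(before_cut, after_cut)             # usually two different cards
    print(sum(ignorant.values()))            # 1.0: the flat distribution is unchanged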

Replies from: MrHen
comment by MrHen · 2010-03-02T17:51:56.969Z · LW(p) · GW(p)

A digression: But hopefully at this point, you'll realize the difference between the frequentist and Bayesian instincts in this situation. [...]

Yep. This really is a digression which is why I hadn't brought up another interesting example with the same group of friends:

One of my friends dealt Hearts by giving each player a pack of three cards, the next player a pack of three cards, and so on. The number of cards being dealt was the same, but we all complained that this actually affected the game, because shuffling isn't truly random and it was mucking with the odds.

We didn't do any tests on the subject because we really just wanted the annoying kid to stop dealing weird. But, now that I think about it, it should be relatively easy to test...

Also related, I have learned a few magic tricks in my time. I understand that shuffling is a tricksy business. Plenty of more amusing stories are lurking about. This one is marginally related:

At a poker game with friends of friends there was one player who shuffled by cutting the cards. No riffles, no complicated cuts, just take a chunk from the top and put it on the bottom. The mathematician friend from my first example and I told him to knock it off and shuffle the cards. He tried to convince us he was randomizing the deck. We told him to knock it off and shuffle the cards. He obliged while claiming that it really doesn't matter.

This example is a counterpoint to the original. Here is someone claiming that it doesn't matter when the math says it most certainly does. The aforementioned cheater-heuristic would have prevented this player from doing something Bad. I honestly have no idea if he was just lying to us or was completely clueless but I couldn't help but be extremely suspicious when he ended up winning first place later that night.
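For what it's worth, the math against the cut-only "shuffler" is easy to demonstrate: successive cuts compose into a single cut, so the cyclic order of the deck never changes. A toy Python sketch (10-card deck just for readability):

    def cut(deck, k):
        return deck[k:] + deck[:k]

    deck = list(range(10))                   # toy deck, initially in order
    for k in (3, 7, 2, 5):                   # four successive cuts...
        deck = cut(deck, k)
    print(deck)                              # [7, 8, 9, 0, 1, 2, 3, 4, 5, 6]
    # ...equal one rotation by (3+7+2+5) % 10 = 7: the cyclic order never
    # changes, so anyone who saw the deck before the "shuffle" still knows
    # the entire order from a single glimpsed card.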

Replies from: thomblake
comment by thomblake · 2010-03-02T18:38:57.683Z · LW(p) · GW(p)

On a tangent, my friends and I always pick the initial draw of cards using no particular method when playing Munchkin, to emphasize that we aren't supposed to be taking this very seriously. I favor snatching a card off the deck just as someone else is reaching for it.

comment by hugh · 2010-03-02T03:32:16.713Z · LW(p) · GW(p)

When you deal Texas Hold'em, do you "burn" cards in the traditional way? Neither I nor most of my friends think that those cards are special, but it's part of the rules of the game. Altering them, even without [suspicion of] malicious intent, breaks a ritual associated with the game.

While in this instance, the ritual doesn't protect the integrity of the game, rituals can be very important in getting into and enjoying activities. Humans are badly wired, and Less Wrong readers work hard to control our irrationalities. One arena in which I see less need for that is when our superstitious and pattern-seeking behaviors let us enjoy things more. I have a ritual for making coffee. I enjoy coffee without it, but I can reach a near-euphoric state with it. Faulty wiring, but I see no harm in taking advantage of it.

Replies from: MrHen
comment by MrHen · 2010-03-02T04:22:55.924Z · LW(p) · GW(p)

When you deal Texas Hold'em, do you "burn" cards in the traditional way? Neither I nor most of my friends think that those cards are special, but it's part of the rules of the game. Altering them, even without (suspicion of) malicious intent breaks a ritual associated with the game.

We didn't until the people on TV did it. The ritual was only important in the sense that this is how they were predicting which card they were going to get. Their point was based entirely on the fact that the card they were going to get is not the card they ended up getting.

As a reminder to the ongoing conversation, we had arguments about the topic. They didn't say, "Do it because you are supposed to do it!" They said, "Don't change the card I am supposed to get!"

One arena in which I see less need for that is when our superstitious and pattern-seeking behaviors let us enjoy things more. I have a ritual for making coffee. I enjoy coffee without it, but I can reach a near-euphoric state with it. Faulty wiring, but I see no harm in taking advantage of it.

Sure, but this isn't one of those cases. In this case, they are complaining for no good reason. Well, I guess I haven't found a good reason for their reaction. The consensus in the replies here seems to be that their reaction was wrong.

I am not trying to say you shouldn't enjoy your coffee rituals.

Replies from: hugh
comment by hugh · 2010-03-02T05:10:31.453Z · LW(p) · GW(p)

RobinZ ventured a guess that their true objection was not their stated objection; I stated it poorly, but I was offering the same hypothesis with a different true objection: that you were disrupting the flow of the game.

I'm not entirely sure if this makes sense, partially because there is no reason to disguise unhappiness with an unusual order of game play. From what you've said, your friends worked to convince you that their objection was really about which cards were being dealt, and in this instance I think we can believe them. My fallacy was probably one of projection, in that I would have objected in the same instance, but for different reasons. I was also trying to defend their point of view as much as possible, so I was trying to find a rational explanation for it.

I suspect that the real problem is related to the certainty effect. In this case, though no probabilities were altered, there was a new "what-if" introduced into the situation. Now, if they lose (or rather, when all but one of you lose) they will likely retrace the situation and think that if you hadn't cut the deck, they could have won. Which is true, of course, but irrelevant, since it also could have gone the other way. However, the same thought process doesn't occur on winning; people aren't inclined to analyze their successes in the same way that they analyze their failures, even if they are both random events. The negative emotion associated with feeling like a victory was stolen would be enough to preemptively object and prevent that from occurring in the first place.

However, even if what I said above is true, I don't think it really addresses the problem of adjusting their map to match the territory. That's another question entirely.

Replies from: MrHen
comment by MrHen · 2010-03-02T14:30:26.704Z · LW(p) · GW(p)

I agree with your comment and this part especially:

However, the same thought process doesn't occur on winning; people aren't inclined to analyze their successes in the same way that they analyze their failures, even if they are both random events.

Very true. I see a lot of behavior that matches this. This would be an excellent source of the complaint if it happened after they lost. My friends complained before they even picked up their cards.

comment by gwern · 2010-03-04T02:09:20.776Z · LW(p) · GW(p)

I have a near flawless reputation for being honest

That's what they say, I take it.

comment by orthonormal · 2010-03-02T02:04:26.054Z · LW(p) · GW(p)

To modify RobinZ's hypothesis:

Rather than focusing on any Bayesian evidence for cheating, let's think like evolution for a second: how do you want your organism to react when someone else's voluntary action changes who receives a prize? Do you want the organism to react, on a gut level, as if the action could have just as easily swung the balance in their favor as against them? Or do you want them to cry foul if they're in a social position to do so?

Your friends' response could come directly out of that adaptation, whatever rationalizations they make for it afterwards. I'd expect to see the same reaction in experiments with chimps.

Replies from: MrHen
comment by MrHen · 2010-03-02T02:43:52.317Z · LW(p) · GW(p)

How do you want your organism to react when someone else's voluntary action changes who receives a prize?

I want my organism to be able to tell the difference between a cheater and someone making irrelevant changes to a deck of cards. I assume this was a rhetorical question.

Evolution is great but I want more than that. I want to know why. I want to know why my friends feel that way but I didn't when the roles were reversed. The answer is not "because I knew more math." Have I just evolved differently?

I want to know what other areas are affected by this. I want to know how to predict whatever caused this reaction in my friends before it happens in me. "Evolution" doesn't help me do that. I cannot think like evolution.

As much as, "You could have been cheating" is a great response -- and "They are conditioned to respond to this situation as if you were cheating" is a better response -- these friends know the probabilities are the same and know I wasn't cheating. And they still react this way because... why?

I suppose this comment is a bit snippier than it needs to be. I don't understand how your answer is an answer. I also don't know much about evolution. If I learned more about evolution would I be less confused?

Replies from: None, JamesPfeiffer
comment by [deleted] · 2010-03-02T08:03:26.726Z · LW(p) · GW(p)

It might be because people perceive a loss more severely than a gain. There might be an evolutionary explanation for that. Because of that, they would perceive the card they "lost", which they already thought of as theirs, more severely than the card they "gained" after the cut. You, on the other hand, might already be trained to think about it differently.

comment by JamesPfeiffer · 2010-03-02T07:23:08.054Z · LW(p) · GW(p)

Based on my friends, the care/don't care dichotomy cuts orthogonally to the math/no math dichotomy. Most people, whether good or bad at math, can understand that the chances are the same. It's some other independent aspect of your brain that determines whether it intensely matters to you to do things "the right way" or if you can accept the symmetry of the situation. I hereby nominate some OCD-like explanation. I'd be interested in seeing whether OCD correlated with your friends' behavior.

As a data point, I am not OCD and don't care if you cut the deck.

Replies from: MrHen
comment by MrHen · 2010-03-02T14:42:33.157Z · LW(p) · GW(p)

I am more likely to be considered OCD than any of my friends in the example. I don't care if you cut the deck.

comment by rwallace · 2010-03-02T05:26:11.214Z · LW(p) · GW(p)

It's a side effect.

Yes, they were being irrational in this case. But the heuristics they were using are there for good reason. Suppose they had money coming to them and you swooped in and took it away before it could reach them; they would be rational to object, right? That's why those heuristics are there. In practice the trigger conditions for these things are not specified with unlimited precision, and pure but interruptible random number generators are not common in real life, so the trigger conditions harmlessly spill over to this case. But the upshot is that they were irrational as a side effect of usually rational heuristics.

Replies from: MrHen
comment by MrHen · 2010-03-02T14:39:00.741Z · LW(p) · GW(p)

But the upshot is that they were irrational as a side effect of usually rational heuristics.

So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation?

I can understand your answer if the scenario was more like:

"Hey! Don't do that!"
"But it doesn't matter. See?"
"Oh. Well, okay. But don't do it anyway because..."

And then they mention your heuristic. They didn't do anything like this. They explicitly understood that nothing was changing in the probabilities and they explicitly understood that I was not cheating. And they were completely willing to defend their reaction in arguments. In their mind, their position was completely rational. I could not convince them otherwise with math. Something else was the problem.

"Heuristics" is nifty, but I am not completely satisfied with that answer. Why would they have kept defending it when it was demonstrably wrong?

I suppose it is possible that they were completely unaware that they were using whatever heuristic they were using. Would that explain the behavior? Perhaps this is why they could not explain their position to me at the time of the arguments?

How would you describe this heuristic in a few sentences?

Replies from: AdeleneDawner, orthonormal
comment by AdeleneDawner · 2010-03-02T15:25:23.470Z · LW(p) · GW(p)

I suspect it starts with something like "in the context of a game or other competition, if my opponent does something unexpected, and I don't understand why, it's probably bad news for me", with an emotional response of suspicion. Then when your explanation is about why shuffling the cards is neutral rather than being about why you did something unexpected, it triggers an "if someone I'm suspicious of tries to convince me with logic rather than just assuring me that they're harmless, they're probably trying to get away with something" heuristic.

Also, most people seem to make the assumption, in cases like that, that they aren't going to be able to figure out what you're up to on the fly, so even flawless logic is unlikely to be accepted - the heuristic is "there must be a catch somewhere, even if I don't see it".

comment by orthonormal · 2010-03-03T03:09:05.892Z · LW(p) · GW(p)

So, when I pester them for a rational reason, why do they keep giving an answer that is irrational for this situation?

Because human beings often first have a reaction based on an evolved, unconscious heuristic, and only later form a conscious rationalization about it, which can end up looking irrational if you ask the right questions (e.g. the standard reactions to the incest thought experiment). So, yes, they were probably unaware of the heuristic they were actually using.

I'd suppose that the heuristic is along the lines of the following: Say there's an agreed-upon fair procedure for deciding who gets something, and then someone changes that procedure, and someone other than you ends up benefiting. Then it's unfair, and what's yours has probably been taken.

Given that rigorous probability theory didn't emerge until the later stages of human civilization, there's not much room for an additional heuristic saying "unless it doesn't change the odds" to have evolved; indeed, all of the agreed-upon random ways of selecting things (that I've ever heard of) work by obvious symmetry of chances rather than by abstract equality of odds†, and most of the times someone intentionally changed the process, they were probably in fact hoping to cheat the odds.

† Thought experiment: we have to decide a binary disagreement by chance, and instead of flipping a coin or playing Rock-Paper-Scissors, I suggest we do the following: First, you roll a 6-sided die, and if it's a 1 or 2 you win. Otherwise, I roll a 12-sided die, and if it's 1 through 9 I win, and if it's 10 through 12 you win.

Now compute the odds (50-50, unless I made a dumb mistake), and then actually try it (in real life) with non-negligible stakes. I predict that you'll feel slightly more uneasy about the experience than you would be flipping a coin.
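
If the arithmetic isn't convincing, here is a minimal sketch in Python (not part of the original comment) that checks the procedure both exactly and by simulation:

    import random

    def you_win():
        if random.randint(1, 6) <= 2:        # d6 comes up 1 or 2: you win outright
            return True
        return random.randint(1, 12) >= 10   # otherwise d12: 10-12 means you win

    exact = 2/6 + (4/6) * (3/12)             # = 1/3 + 1/6 = 1/2
    trials = 100_000
    empirical = sum(you_win() for _ in range(trials)) / trials
    print(exact, empirical)                  # both come out around 0.5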

Replies from: MrHen
comment by MrHen · 2010-03-03T05:53:11.429Z · LW(p) · GW(p)

I'd suppose that the heuristic is along the lines of the following: Say there's an agreed-upon fair procedure for deciding who gets something, and then someone changes that procedure, and someone other than you ends up benefiting. Then it's unfair, and what's yours has probably been taken.

Everything else you've said makes sense, but I think the heuristic here is way off. First, they object before the results have been produced, so the benefit is unknown. Second, the assumption of an agreed-upon procedure is only really valid in the poker example. Other examples don't have such an agreement and seem to display the same behavior. Finally, the change to the procedure could be made by a disinterested party with no possible personal gain to be had. I suspect that the reaction would stay the same.

So, whatever heuristic may be at fault here, it doesn't seem to be the one you are focusing on. The fact that my friends didn't say, "You're cheating" or "You broke the rules" is more evidence against this being the heuristic. I am open to the idea of a heuristic being behind this. I am also open to the idea that my friends may not be aware of the heuristic or its implications. But I don't see how anything is pointing toward the heuristic you have suggested.

† Thought experiment: we have to decide a binary disagreement by chance, and instead of flipping a coin or playing Rock-Paper-Scissors, I suggest we do the following: First, you roll a 6-sided die, and if it's a 1 or 2 you win. Otherwise, I roll a 12-sided die, and if it's 1 through 9 I win, and if it's 10 through 12 you win.

Hmm... 1/3 I win outright... 2/3 enters a second roll where I win 1/4 of the time. Is that...

1/3 + 2/3 * 1/4 =
1/3 + 2/12 =
4/12 + 2/12 =
6/12 =
1/2

Seems right to me. And I don't expect to feel uneasy about such an experience at all, since the odds are the same. If someone offered me a scenario and I didn't have the math prepared I would work out the math and decide if it is fair.

If I do the contest and you start winning every single time I might start getting nervous. But I would do the same thing regardless of the dice/coin combos we were using.

I would actually feel safer using the dice, because I've found that I can strongly influence the flip of a fair quarter in my favor without much effort.

comment by JGWeissman · 2010-03-01T22:54:56.417Z · LW(p) · GW(p)

An important element of the fairness of cutting the deck in the middle of dealing, one your friends may not trust, is that you do so in ignorance of who it will help and who it will hinder. By cutting the deck, you have explicitly made and acted on a choice (it is far less obvious when you choose not to cut the deck, the default expected action), and this causes your friends to worry that the choice may have been optimized for interests other than their own.

Replies from: MrHen
comment by MrHen · 2010-03-01T23:09:21.199Z · LW(p) · GW(p)

I don't think this is relevant. I responded in more detail to RobinZ's comment.

comment by Jordan · 2010-03-01T22:56:17.777Z · LW(p) · GW(p)

As you note, regular poker and poker with an extra cut mid-deal are completely isomorphic. In a professional game you would obviously care, because the formality of the shuffle and deal is part of a tradition to instill trust that the deck isn't rigged. For a casual game, where it is assumed no one is cheating, then, unless you're a stickler for tradition, who cares? Your friends are wrong. We have two different pointers pointing to the same thing, and they are complaining because the pointers aren't the same, even though all that matters is what those pointers point to. It would be like complaining if you tried to change the name of Poker to Wallaboo mid-deal.

Replies from: Violet, MrHen
comment by Violet · 2010-03-02T09:50:56.799Z · LW(p) · GW(p)

There are rules for the game that are perceived as fair.

If one participant goes changing the rules in the middle of the game, this 1) makes rule changing acceptable in the game, and 2) forces other players to analyze the current (and future) changes to the game to ensure they are fair.

Cutting the deck probably doesn't affect the probability distribution (unless you shuffled the deck in a "funny" way). Allowing it makes a case for allowing the next changes in the rules too. Thus you can end up analyzing a new game rather than having fun playing poker.

comment by MrHen · 2010-03-01T23:11:56.180Z · LW(p) · GW(p)

For a casual game, where it is assumed no one is cheating, then, unless you're a stickler for tradition, who cares? Your friends are wrong.

Sure, but the "wrong" in this case couldn't be shown to my friends. They perfectly understood probability. The problem wasn't in the math. So where were they wrong?

Another way of saying this:

  • The territory said one thing
  • Their map said another thing
  • Their map understood probability
  • Where did their map go wrong?

The answer has nothing to do with me cheating and has nothing to do with misunderstanding probability. There is some other problem here and I don't know what it is.

comment by cousin_it · 2010-03-09T14:27:02.009Z · LW(p) · GW(p)

An argument isomorphic to yours can be used to demonstrate that spousal cheating is okay as long as there are no consequences and the spouse doesn't know. Maybe your concept of "valid objection" is overly narrow?

Replies from: MrHen
comment by MrHen · 2010-03-09T14:44:50.442Z · LW(p) · GW(p)

Rearranging the cards in a deck has no statistical consequence. Cheating on your spouse significantly alters the odds of certain things happening.

If you add the restriction that there are no consequences, there wouldn't really be much point in doing it, because it's not like you get sex as a result. That would be a consequence.

The idea that something immoral shouldn't be immoral if no one catches you and nothing bad happens as a result is an open problem as far as I know. Most people don't like such an idea but I hear the debate surface from time to time. (Usually by people trying to convince themselves that whatever they just did wasn't wrong.)

In addition, cutting a deck of cards does have an obvious effect. There is no statistical consequence but obviously you are not going to get the card you were originally going to be dealt.

comment by JustinShovelain · 2010-03-10T00:48:39.422Z · LW(p) · GW(p)

I'm thinking of writing up a post clearly explaining update-less decision theory. I have a somewhat different way of looking at things than Wei Dai and will give my interpretation of his idea if there is demand. I might also need to do this anyway in preparation for some additional decision theory I plan to post to lesswrong. Is there demand?

Replies from: Will_Newsome
comment by Will_Newsome · 2010-03-10T01:51:43.315Z · LW(p) · GW(p)

If and only if you can explain UDT in text at least as clearly as you explained it to me in person; I don't think that would take a very long post.

Replies from: Alicorn
comment by Alicorn · 2010-03-10T02:22:37.315Z · LW(p) · GW(p)

Maybe he should explain it again in person and someone should transcribe?

comment by XiXiDu · 2010-03-03T13:45:17.419Z · LW(p) · GW(p)

How important are 'the latest news'?

These days many people are following an enormous number of news sources. I myself notice how skimming through my Google Reader items is increasingly time-consuming.

What is your take on it?

  • Is it important to read up on the latest news each day?
  • If so, what are your sources, please share them.
  • What kinds of news are important?

I wonder if there is really more to it than just curiosity and leisure. Are there news sources (blogs, the latest research, 'lesswrong'-2.0 etc.), besides lesswrong.com, that every rationalist should stay up to date on? For example, when trying to reduce my news load, I'm trying to take into account how much of what I know and do has its origins in some blog post or news item. Would I even know about lesswrong.com if I wasn't the heavy news addict that I am?

What would it mean to ignore most news and concentrate on my goals of learning math, physics and programming while reading lesswrong.com? Have I already reached a level of knowledge that allows me to get from here to everywhere, without exposing myself to all the noise out there in hope of coming across some valuable information nugget which might help me reach the next level?

How do we ever know if there isn't something out there that is more worthwhile, valuable, beautiful, something that makes us happier and less wrong? At what point should we cease to be the tribesman who's happily trying to improve his hunting skills but ignorant of the possible revolutions taking place in a city only 1000 miles away?

Is there a time to stop searching and approach what is at hand? Start learning and improving upon the possibilities we already know about? What proportion of one's time should a rationalist spend on the prospect of unknown unknowns?

Replies from: Rain, Morendil, h-H
comment by Rain · 2010-03-03T20:55:56.442Z · LW(p) · GW(p)

I searched for a good news filter that would inform me about the world in ways that I found to be useful and beneficial, and came up with nothing.

In any source that contained news items I categorized as useful, those items made up less than 5% of the information presented, and thus were drowned out and took too much time and effort, on a daily basis, to find. Thus, I mostly ignore news, except what I get indirectly through following particular communities like LessWrong or Slashdot.

However, I perform this exercise on a regular basis (perhaps once a year), clearing out feeds that have become too junk-filled, searching out new feeds, and re-evaluating feeds I did not accept last time, to refine my information access.

I find that this habit of perpetual long-term change (significant reorganization, from first principles of the involved topic or action) is highly beneficial in many aspects of my life.

ETA: My feed reader contains the following:

For the vast majority of posts on each of these feeds, I only read the headline. Feeds where I consistently (>25%) read the articles or comments are: Slashdot (mostly while bored at work), Marginal Revolution (the only place I read every post), Sentient Developments, Accelerating Future, and LessWrong. Even for those, I rarely (<10%) read linked articles, preferring instead to read only the distillation by the blog author, or the comments by other users.

ETA2: I also listen to NPR during my short commute to and from work, and occasionally watch the Daily Show and the Colbert Report online, for entertainment. Firefox with NoScript and Adblock Plus makes it bearable - I'm extremely advertising averse.

I do not own a television, and generally consider TV news (in the US) to be horrendous and mind-destroying.

comment by Morendil · 2010-03-03T20:58:06.230Z · LW(p) · GW(p)

Good question, which I'm finding surprisingly hard to answer. (i.e. I've spent more time composing this comment than is perhaps reasonable, struggling through several false starts).

Here are some strategies/behaviours I use: expand and winnow; scorched earth; independent confirmation; obsession.

  • "expand and winnow": after finding an information source I really like (using the term "source" loosely, a blog, a forum, a site, etc.) I will often explore the surrounding "area", subscribe to related blogs or sources recommended by that source. In a second phase I will sort through which of these are worth following and which I should drop to reduce overload
  • "scorched earth": when I feel like I've learned enough about a topic, or that I'm truly overloaded, I will simply drop (almost) every subscription I have related to that topic, maybe keeping a major source to just monitor (skim titles and very occasionally read an item)
  • "independent confirmation": I do like to make sure I have a diversified set of sources of information, and see if there are any items (books, articles, movies) which come at me from more than one direction, especially if they are not "massively popular" items, e.g. I'd discard a recommendation to see Avatar, but I decided to dive into Jaynes when it was recommended on LW and my dad turned out to have liked it enough to have a hard copy of the PDF
  • "obsession": there typically is one thing I'm obsessed with (often the target of an expand and winnow operation); e.g. at various points in my life I've been obsessed with Agora Nomic, XML, Java VM implementation, Agile, personal development, Go, and currently whatever LW is about. An "obsessed" topic can be but isn't necessarily a professional interest, but it's what dominates my other curiosity and tends to color my other interests. For instance while obsessed with Go I pursued the topic both for its own sake and as a source of metaphors for understanding, say, project management or software development. I generally quit ("scorched earth") once I become aware I'm no longer learning anything, which often coincides with the start of a new obsession.

My RSS feeds folder, once massive, is down to a half dozen indispensable blogs. I've unsubscribed from most of the mailing lists I used to read. My main "monitored" channel is Twitter, where I follow a few dozen folks who've turned up gold in the past. My main "active" source of new juicy stuff to think about is LW.

(ETA: as an example of "independent confirmation" in the past two minutes, one of my Agile colleagues on Twitter posted this link.)

comment by h-H · 2010-03-06T01:29:49.919Z · LW(p) · GW(p)

yeah, news is usually a time/attention sink, I go to my bookmarked blogs etc whenever I feel like procrastinating.

15-20 minutes of looking at the main news sites/blogs should be enough to tell you what the biggest developments are, but really, I read them for entertainment value as much as for anything else.

as a side note, antiwar is a good site for world news.

comment by FrF · 2010-03-01T19:49:34.066Z · LW(p) · GW(p)

"Why Self-Educated Learners Often Come Up Short" http://www.scotthyoung.com/blog/2010/02/24/self-education-failings/

Quotation: "I have a theory that the most successful people in life aren’t the busiest people or the most relaxed people. They are the ones who have the greatest ability to commit to something nobody else forces them to do."

Replies from: SoullessAutomaton, hugh
comment by SoullessAutomaton · 2010-03-02T01:08:52.731Z · LW(p) · GW(p)

Interesting article, but the title is slightly misleading. What he seems to be complaining about is people who mistake picking up a superficial overview of a topic for actually learning a subject, but I rather doubt they'd learn any more in school than by themselves.

Learning is what you make of it; getting a decent education is hard work, whether you're sitting in a lecture hall with other students, or digging through books alone in your free time.

comment by hugh · 2010-03-02T18:18:42.258Z · LW(p) · GW(p)

I partially agree with this. Somewhere along the way, I learned how to learn. I still haven't really learned how to finish. I think these two features would have been dramatically enhanced had I not gone to school. I think a potential problem with self-educated learners (I know two adults who were unschooled) is that they get much better at fulfilling their own needs and tend to suffer when it comes to long-term projects that have value for others.

The unschooled adults I know are both brilliant and creative, and ascribe those traits to their unconventional upbringing. But both of them work as freelance handymen. They like helping others, and would help other people more if they did something else, but short-term projects are all they can manage. They are polymaths that read textbooks and research papers, and one has even developed a machine learning technique that I've urged him to publish. However, when they get bored, they stop. The chance that writing up his results and releasing them would further research is not enough to get him past that obstacle of boredom.

I have long thought that school, as currently practiced, is an abomination. I have yet to come up with a solution that I'm convinced solves its fundamental problems. For a while, I thought that unschooling was the solution, but these two acquaintances changed my mind. What is your opinion on the right way to teach and learn?

Replies from: gwillen
comment by gwillen · 2010-03-10T22:29:36.576Z · LW(p) · GW(p)

As an interesting anecdote, I was schooled in a completely traditional fashion, and yet I never really learned to finish either. I did learn to learn, but I did it through a combination of schooling and self-teaching. But all the self-teaching was in addition to a completely standard course of American schooling, up through a Bachelor's degree in computer science.

Replies from: hugh
comment by hugh · 2010-03-11T01:19:44.633Z · LW(p) · GW(p)

That's pretty much where I am; traditional school, up through college and grad school. I think my poor habits would have been intensified, however, if I had been unschooled.

comment by AdeleneDawner · 2010-03-01T09:28:24.350Z · LW(p) · GW(p)

It turns out that Eliezer might not have been as wrong as he thought he was about passing on calorie restriction.

Replies from: gwern, Eliezer_Yudkowsky, timtyler
comment by gwern · 2010-03-01T14:35:57.262Z · LW(p) · GW(p)

Well, there's still intermittent fasting.

IF would get around

"The non-aging-related causes of death included monkeys who died while taking blood samples under anesthesia, from injuries or from infections, such as gastritis and endometriosis. These causes may not be aging-related as defined by the researchers, but they could realistically be adverse effects of prolonged calorie restrictions on the animals’ health, their immune system, ability to handle stress, physical agility, cognition or behavior."

and would also work well with the musings about variability and duration:

"From an evolutionary standpoint, he explained, mice who subsist on less food for a few years is analogous, in terms of natural selection, to humans who survive 20-year famines. But nature seldom demands that humans endure such conditions.

Similar conclusions were reached by Dr. Aubrey D.N.J. de Grey with the Department of Genetics at the University of Cambridge, UK. Species have widely evolved to be able to adapt to transient periods of starvation. “What has been generally overlooked is that the extent of the evolutionary pressure to maintain adaptability to a given duration of starvation varies with the frequency of that duration,” he said."

(Our ancestors most certainly did have to survive frequent daily shortfalls. Feast or famine.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-01T15:57:48.179Z · LW(p) · GW(p)

Where do you get that I thought I was wrong about CR? I'd like to lose weight but I had been aware for a while that the state of evidence on caloric restriction doing the purported job of extending lifespan in mammals was bad.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-03-01T22:04:22.876Z · LW(p) · GW(p)

...huh.

The last thing I remember hearing from you about it was that it looked promising, but that the cognitive side effects made it impractical, so you'd settled on just taking the risk (which would, with that set of beliefs and values, be right in some ways, and wrong in others, and more right than wrong). But, for some reason the search bar doesn't turn up any relevant conversations for "calorie restriction Eliezer" or "caloric restriction Eliezer", so I couldn't actually check my memory. Sorry about that.

comment by timtyler · 2010-03-02T01:00:55.691Z · LW(p) · GW(p)

That's a dopey article. My counsel is to not get your diet advice from there.

Replies from: AdeleneDawner, wedrifid
comment by AdeleneDawner · 2010-03-02T04:16:42.651Z · LW(p) · GW(p)

"Dopey"?

comment by wedrifid · 2010-03-02T04:59:38.025Z · LW(p) · GW(p)

Suggest a better one?

Replies from: timtyler
comment by timtyler · 2010-03-02T09:06:32.187Z · LW(p) · GW(p)

http://www.crsociety.org/ is the best web resource relating to dietary energy restriction that I am aware of.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-03-02T09:36:38.382Z · LW(p) · GW(p)

I'm not seeing anything at all on that site regarding scientific evidence that CR works, except links to news articles (meh) and uncited assertions that studies have been done that came to that conclusion - the latter of which, in light of the issues raised in the article I linked to, I want to know more about before I try to decide whether they're useful or not. Overall, both the site and the wiki seem to be much more focused on how to do CR than on making any kind of case that CR is a good idea; I don't think we're asking the same question, if you consider that site to give good answers.

Replies from: timtyler
comment by timtyler · 2010-03-02T09:53:32.587Z · LW(p) · GW(p)

That site is the biggest and most comprehensive resource on the topic available on the internet, AFAIK.

Looking at what you say you are looking for, I don't think we're asking the same question either. The diet is not "a good idea" - e.g. see:

http://cr.timtyler.org/disadvantages/

Rather, it is a tool - and whether or not it is for you depends on what your aims in life are.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-03-02T09:59:48.725Z · LW(p) · GW(p)

Sorry, I thought the meaning of "a good idea" would be clear in context. I meant "likely to increase a user's chance of having a longer lifespan than they would otherwise".

If that's the best resource there is, taking CR at all seriously sounds like privileging the hypothesis to me.

Replies from: wedrifid, timtyler
comment by wedrifid · 2010-03-02T10:43:25.232Z · LW(p) · GW(p)

If that's the best resource there is, taking CR at all seriously sounds like privileging the hypothesis to me.

It may be wrong but I don't think the flaw is that of privileging the hypothesis. If CR actually does work in, say, rats then thinking it may work in humans is at least a worthwhile hypothesis. The essay you found suggests that the evidence for the hypothesis is looking kinda shaky.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2010-03-02T11:43:51.267Z · LW(p) · GW(p)

Noteworthy: CR is not a particular interest of mine, and I haven't researched it.

If there are good, solid studies of CR in rats, why doesn't that site seem to have, or link to, information about them? If that's the site for CR, and given that it has a publicly editable (yes, I checked) wiki, I'd expect that someone would have added that information, and it's not there: I searched for both "study" and "studies" in the wiki; nothing about rat studies - or any other animal studies, except a mention of monkey studies - showed up.

A google site search does turn up this, though.

Replies from: timtyler
comment by timtyler · 2010-03-02T21:50:57.463Z · LW(p) · GW(p)

Don't bother with the site's wiki.

They have a reference to a mouse study on the front page of the site:

Weindruch R, et al. (1986). "The retardation of aging in mice by dietary restriction: longevity, cancer, immunity and lifetime energy intake." Journal of Nutrition, April, 116(4), pages 641-54.

For the evidence from the rat studies, perhaps start with this review article:

Overview of caloric restriction and ageing.

http://www.crsociety.org/archive/read.php?2,172427,172427

comment by timtyler · 2010-03-02T10:38:28.703Z · LW(p) · GW(p)

I think most in the field agree on that. e.g.:

""I'm positive that caloric restriction will work in humans to extend median life span," Fontana says."

A summary from the site wiki:

"The evidence that bears on the question of the applicability of CR to humans then, is at present indirect. There is nonetheless a great deal of such indirect evidence, enough that we can say with an extremely high degree of confidence that CR will work in humans."

Replies from: wedrifid
comment by wedrifid · 2010-03-02T10:48:24.745Z · LW(p) · GW(p)

Off the top of your head do you know what CR has been shown to work on thus far?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-03-03T06:55:47.412Z · LW(p) · GW(p)

One of TT's links says CR works in "mice, hamsters, dogs, fish, invertebrate animals, and yeast."

comment by [deleted] · 2010-03-06T15:24:51.175Z · LW(p) · GW(p)

Pick some reasonable priors and use them to answer the following question.

On week 1, Grandma calls on Thursday to say she is coming over, and then comes over on Friday. On week 2, Grandma once again calls on Thursday to say she is coming over, and then comes over on Friday. On week 3, Grandma does not call on Thursday to say she is coming over. What is the probability that she will come over on Friday?

ETA: This is a problem, not a puzzle. Disclose your reasoning, and your chosen priors, and don't use ROT13.

Replies from: Sniffnoy, orthonormal, RobinZ, ata, Richard_Kennaway, Peter_de_Blanc
comment by Sniffnoy · 2010-03-07T04:50:11.148Z · LW(p) · GW(p)

In the calls, does she specify when she is coming over? I.e. does she say she'll be coming over on Thursday, Friday, just sometime in the near future, or she leaves it for you to infer?

Replies from: None
comment by [deleted] · 2010-03-07T20:33:20.409Z · LW(p) · GW(p)

The information I gave is the information you have. Don't make me make the problem more complicated.

ETA: Let me expand on this before people start getting on my case.

Rationality is about coming to the best conclusion you can given the information you have. If the information available to you is limited, you just have to deal with it.

Besides, sometimes, having less information makes the problem easier. Suppose I give you the following physics problem:

I throw a ball from a height of 4 feet; its maximum height is 10 feet. How long does it take from the time I throw it for it to hit the ground?

This problem is pretty easy. Now, suppose I also tell you that the ball is a sphere, and I tell you its mass and radius, and the viscosity of the air. This means that I'm expecting you to take air resistance into account, and suddenly the problem becomes a lot harder.
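
For concreteness, here is the easy, no-drag version worked out as a sketch (assuming a straight-up throw and g = 32.2 ft/s²; neither assumption is in the original problem statement):

    g = 32.2                              # ft/s^2
    h0, hmax = 4.0, 10.0                  # launch height and peak height, ft
    v0 = (2 * g * (hmax - h0)) ** 0.5     # launch speed, ~19.7 ft/s upward
    t_up = v0 / g                         # time to reach the peak, ~0.61 s
    t_down = (2 * hmax / g) ** 0.5        # free fall from the peak, ~0.79 s
    print(t_up + t_down)                  # ~1.4 s of total flight time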

If you really want a problem where you have all the information, here:

Every time period, input A (of type Boolean) is revealed, and then input B (also of type Boolean) is revealed. There are no other inputs. In time period 0, input A is revealed to be TRUE, and then input B is revealed to be TRUE. In time period 1, input A is revealed to be TRUE, and then input B is revealed to be TRUE. In time period 2, input A is revealed to be FALSE. What is the probability that input B will be revealed to be TRUE?

Replies from: Douglas_Knight, RobinZ
comment by Douglas_Knight · 2010-03-07T23:51:07.194Z · LW(p) · GW(p)

Having less information makes easier the problem of satisfying the teacher. It does not make easier the problem of determining when the ball hits the ground. Incidentally, I got the impression somehow that there are venues where physics teachers scold students for using too much information.

ETA (months later): I do think it's a good exercise, I just think this is not why.

Replies from: None
comment by [deleted] · 2010-03-08T00:54:28.445Z · LW(p) · GW(p)

Here, though, the problem actually is simpler the less information you have. As an extreme example, if you know nothing, the probability is always 1/2 (or whatever your prior is).

comment by RobinZ · 2010-03-07T21:29:14.297Z · LW(p) · GW(p)

I can say immediately that it is less than 50% - to be more rigorous would take a minute.

Edit: Wait - no, I can't. If the variables are related, then that conclusion would appear, but it's not necessary that they be.

comment by orthonormal · 2010-03-08T22:11:34.035Z · LW(p) · GW(p)

Let

  • AN = "Grandma calls on Thursday of week N",
  • BN = "Grandma comes on Friday of week N".

A toy version of my prior could be reasonably close to the following:

P(AN)=p, P(AN,BN)=pq, P(~AN,BN)=(1-p)r

where

  • the distribution of p is uniform on [0,1]
  • the distribution of q is concentrated near 1 (distribution proportional to f(x)=x on [0,1], let's say)
  • the distribution of r is concentrated near 0 (distribution proportional to f(x)=1-x on [0,1], let's say)

Thus, the joint probability distribution of (p,q,r) is given by 4q(1-r) once we normalize. Now, how does the evidence affect this? The likelihood ratio for (A1,B1,A2,B2) is proportional to (pq)^2, so after multiplying and renormalizing, we get a joint probability distribution of 24p^2q^3(1-r). Thus P(~A3|A1,B1,A2,B2)=1/4 and P(~A3,B3|A1,B1,A2,B2)=1/12, so I wind up with a 1 in 3 chance that Grandma will come on Friday, if I've done all my math correctly.

Of course, this is all just a toy model, as I shouldn't assume things like "different weeks are independent", but to first order, this looks like the right behavior.
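
For anyone who wants to check the model numerically, here is a small Monte Carlo sketch (an addition, using only the priors stated above):

    import random

    N = 1_000_000
    num = den = 0.0
    for _ in range(N):
        p = random.random()                # uniform on [0,1]
        q = random.random() ** 0.5         # inverse-CDF sample of density 2x
        r = 1 - random.random() ** 0.5     # inverse-CDF sample of density 2(1-x)
        w = (p * q) ** 2 * (1 - p)         # likelihood of (A1,B1,A2,B2,~A3)
        num += w * r                       # given the parameters, P(B3) is r
        den += w
    print(num / den)                       # comes out around 1/3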

Replies from: orthonormal
comment by orthonormal · 2010-03-09T08:42:33.047Z · LW(p) · GW(p)

I should have realized this sooner: P(B3|~A3) is just the updated value of r, which isn't affected at all by (A1,B1,A2,B2). So of course the answer according to this model should be 1/3, as it's the expected value of r in the prior distribution.

Still, it was a good exercise to actually work out a Bayesian update on a continuous prior. I suggest everyone try it for themselves at least once!

comment by RobinZ · 2010-03-06T16:03:57.601Z · LW(p) · GW(p)

I fail to see how this question has a perceptibly rational answer - too much depends on the prior.

Replies from: None
comment by [deleted] · 2010-03-06T22:29:10.864Z · LW(p) · GW(p)

Presumably, once you've picked your priors, the rest follows. And presumably, once you've come up with an answer, you'll disclose your reasoning, and your chosen priors.

comment by ata · 2010-03-07T02:51:06.983Z · LW(p) · GW(p)

Does she come over unannounced on any days other than Friday?

Replies from: None
comment by [deleted] · 2010-03-07T20:34:05.780Z · LW(p) · GW(p)

I don't know.

comment by Richard_Kennaway · 2010-03-08T10:42:59.555Z · LW(p) · GW(p)

Using the information that she is my grandmother, I speculate on the reason why she did not call on Thursday. Perhaps it is because she does not intend to come on Friday: P(Friday) is lowered. Perhaps it is because she does intend to come but judges the regularity of the event to make calling in advance unnecessary unless she had decided not to come: P(Friday) is raised. Grandmothers tend to be old and consequently may be forgetful: perhaps she intends to come but has forgotten to call: P(Friday) is raised. Grandmothers tend to be old, and consequently may be frail: perhaps she has been taken unwell; perhaps she is even now lying on the floor of her home, having taken a fall, and no-one is there to help: P(Friday) is lowered, and perhaps I should phone her.

My answer to the problem is therefore: I phone her to see how she is and ask if she is coming tomorrow.

I know -- this is not an answer within the terms of the question. However, it is my answer.

The more abstract version you later posted is a different problem. We have two observations of A and B occurring together, and that is all. Unlike the case of Grandma's visits, we have no information about any causal connection between A and B. (The sequence of revealing A before B does not affect anything.) What is then the best estimate of P(B|~A)?

We have no information about the relation between A and B, so I am guessing that a reasonable prior for that relation is that A and B are independent. Therefore A can be ignored and the Laplace rule of succession applied to the two observations of B, giving 3/4.

ETA: I originally had a far more verbose analysis of the second problem based on modelling it as an urn problem, which I then deleted. But the urn problem may be useful for the intuition anyway. You have an urn full of balls, each of which is either rough or smooth (A or ~A), and either black or white (B or ~B). You pick two balls which turn out to be both rough and black. You pick a third and feel that it is smooth before you look at it. How likely is it to be black?

Replies from: wnoise, orthonormal, None
comment by wnoise · 2010-03-08T21:54:37.049Z · LW(p) · GW(p)

Directly using the Laplace rule of succession on the sample space A ⊗ B gives weights proportional to:

(A,B): 3
(A, ~B): 1
(~A, B): 1
(~A, ~B): 1

Conditioning on ~A, P(B|~A) = 1/2. Assuming independence does make a significant difference with this little data.
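
As a minimal sketch of the same computation (and of the general a, b, c, d contingency case, which amounts to a Dirichlet(1,1,1,1) prior over the four cells):

    def posterior(counts):
        """Laplace-smoothed cell probabilities for (A,B), (A,~B), (~A,B), (~A,~B)."""
        weights = [c + 1 for c in counts]
        total = sum(weights)
        return [w / total for w in weights]

    ab, a_nb, na_b, na_nb = posterior([2, 0, 0, 0])   # weights 3, 1, 1, 1
    print(na_b / (na_b + na_nb))                      # P(B|~A) = 0.5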

comment by orthonormal · 2010-03-08T21:29:14.566Z · LW(p) · GW(p)

We have no information about the relation between A and B, so I am guessing that a reasonable prior for that relation is that A and B are independent.

On the contrary, on two points.

First, "A and B are independent" is not a reasonable prior, because it assigns probability 0 to them being dependent in some way— or, to put it another way, if that were your prior and you observed 100 cases and A and B agreed each time (sometimes true, sometimes false), you'd still assume they were independent.

What you should have said, I think, is that a reasonable prior would have "A and B independent" as one of the most probable options for their relation, as it is one of the simplest. But it should also give some substantial weight to simple dependencies like "A and B identical" and "A and B opposite".

Second, the sense in which we have no prior information about relations between A and B is not a sense that justifies ignoring A. We had no prior information before we observed them agreeing twice, which raises the probability of "A and B identical" while somewhat lowering that of "A and B independent".

Replies from: wnoise, Richard_Kennaway
comment by wnoise · 2010-03-08T21:48:28.836Z · LW(p) · GW(p)

It's true that the prior should not be "A and B are independent". But shouldn't symmetries of how they may be dependent give essentially the same result as assuming independence? This is similar to how any symmetric prior for how a coin is biased gives the same prediction for the probability of heads: 1/2.

I don't think independence is a good way to analyze things when the probabilities are near zero or one. Independence is just P[A] P[B] = P[AB]. If P[A] or P[B] are near zero or one, this is automatically "nearly true".

Put another way, two observations of (A, B) give essentially no information about dependence by themselves. This is encoded into ratios between the four possibilities.

comment by Richard_Kennaway · 2010-03-08T22:33:25.594Z · LW(p) · GW(p)

First, "A and B are independent" is not a reasonable prior, because it assigns probability 0 to them being dependent in some way

This raises a question of the meaningfulness of second-order Bayesian reasoning. Suppose I had a prior for the probability of some event C of, say, 0.469. Could one object to that, on the grounds that I have assigned a probability of zero to the probability of C being some other value? A prior of independence of A and B seems to me of a like nature to an assignment of a probability to C.

On the second point, seeing A and B together twice, or twenty times, tells me nothing about their independence. Almost everyone has two eyes and two legs, and therefore almost everyone has both two eyes and two legs, but it does not follow from those observations alone that possession of two eyes either is, or is not, independent of having two legs. For example, it is well-known (in some possible world) that the rare grey-green greasy Limpopo bore worm invariably attacks either the eyes, or the legs, but never both in the same patient, and thus observing someone walking on healthy legs conveys a tiny positive amount of probability that they have no eyes; while (in another possible world) the venom of the giant rattlesnake of Sumatra rapidly causes both the eyes and the legs of anyone it bites to fall off, with the opposite effect on the relationship between the two misfortunes. I can predict that someone has both two eyes and two legs from the fact that they are a human being. The extra information about their legs that I gain from examining their eyes could go either way.

But that is just an intuitive ramble. What is needed here is a calculation, akin to the Laplace rule of succession, for observations in a 2x2 contingency table. Starting from an ignorance prior that the probabilities of A&B, A&~B, B&~A, and ~A&~B are each 1/4, and observing a, b, c, and d examples of each, what is the appropriate posterior? Then fill in the values 2, 0, 0, and 0.

ETA: On reading the comments, I realise that the above is almost all wrong.

Replies from: jimrandomh, orthonormal, FAWS
comment by jimrandomh · 2010-03-09T01:43:09.323Z · LW(p) · GW(p)

This raises a question of the meaningfulness of second-order Bayesian reasoning. Suppose I had a prior for the probability of some event C of, say, 0.469. Could one object to that, on the grounds that I have assigned a probability of zero to the probability of C being some other value? A prior of independence of A and B seems to me of a like nature to an assignment of a probability to C.

In order to have a probability distribution rather than just a probability, you need to ask a question that isn't boolean, ie one with more than two possible answers. If you ask "Will this coin come up heads on the next flip?", you get a probability, because there are only two possible answers. If you ask "How many times will this coin come up heads out of the next hundred flips?", then you get back a probability for each number from 0 to 100 - that is, a probability distribution. And if you ask "what kind of coin do I have in my pocket?", then you get a function that takes any possible description (from "copper" to "slightly worn 1980 American quarter") and returns a probability of matching that description.

comment by orthonormal · 2010-03-08T23:02:41.683Z · LW(p) · GW(p)

Suppose I had a prior for the probability of some event C of, say, 0.469. Could one object to that, on the grounds that I have assigned a probability of zero to the probability of C being some other value?

Depends on how you're doing this; if you have a continuous prior for the probability of C, with an expected value of 0.469, then no— and future evidence will continue to modify your probability distribution. If your prior for the probability of C consists of a delta mass at 0.469, then yes, your model perhaps should be criticized, as one might criticize Rosenkrantz for continuing to assume his coin is fair after 30 consecutive heads.

A Bayesian reasoner actually would have a hierarchy of uncertainty about every aspect of ver model, but the simplicity weighting would give them all low probabilities unless they started correctly predicting some strong pattern.

A prior of independence of A and B seems to me of a like nature to an assignment of a probability to C.

Independence has a specific meaning in probability theory, and it's a very delicate state of affairs. Many statisticians (and others) get themselves in trouble by assuming independence (because it's easier to calculate) for variables that are actually correlated.

And depending on your reference class (things with human DNA? animals? macroscopic objects?), having 2 eyes is extremely well correlated with having 2 legs.

comment by FAWS · 2010-03-08T22:43:04.495Z · LW(p) · GW(p)

On the second point, seeing A and B together twice, or twenty times, tells me nothing about their independence.

Even without any math, it already tells you that they are not mutually exclusive. See wnoise's reply to the grandparent post for the Laplace rule equivalent.

comment by [deleted] · 2010-03-08T20:12:29.042Z · LW(p) · GW(p)

I really like your urn formulation.

comment by Peter_de_Blanc · 2010-03-07T21:57:31.386Z · LW(p) · GW(p)

OK, I'll use the same model I use for text. The zeroth-order model is maxentropy, and the kth-order model is a k-gram model with a pseudocount of 2 (the alphabet size) allocated to the (k-1)th-order model.

In this case, since there's never before been a Thursday in which she did not call, we default to the 1st-order model, which says the probability is 3/4 that she will come on Friday.
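
A sketch of one way to read this model in Python (the pseudocount mass of the k-gram model is distributed according to the (k-1)-gram model's prediction; the details are an interpretation, not necessarily the exact scheme meant here):

    ALPHABET = 2   # pseudocount equals the alphabet size

    def p_kgram(history, symbol, k):
        """P(symbol | last k-1 symbols of history) under the k-gram model."""
        if k == 0:
            return 1.0 / ALPHABET          # maxentropy base case
        n = k - 1                          # context length of a k-gram model
        ctx = tuple(history[-n:]) if n > 0 else ()
        ctx_count = sym_count = 0
        for i in range(len(history) - n):  # count continuations of ctx
            if tuple(history[i:i + n]) == ctx:
                ctx_count += 1
                if history[i + n] == symbol:
                    sym_count += 1
        lower = p_kgram(history, symbol, k - 1)
        return (sym_count + ALPHABET * lower) / (ctx_count + ALPHABET)

    # Fridays alone: she came twice; the 1-gram model predicts the third
    print(p_kgram(["come", "come"], "come", 1))   # (2 + 2*0.5) / (2 + 2) = 0.75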

Replies from: None, Douglas_Knight
comment by [deleted] · 2010-03-08T01:13:32.103Z · LW(p) · GW(p)

I beg your pardon?

comment by Douglas_Knight · 2010-09-22T01:56:56.747Z · LW(p) · GW(p)

OK, I'll use the same model I use for text. The zeroth-order model is maxentropy, and the kth-order model is a k-gram model with a pseudocount of 2 (the alphabet size) allocated to the (k-1)th-order model.

Is this a standard model? Does it have a name? A reference?
I see that the level 1 model is Laplace's rule of succession. Is there some clean statement about the level k model? Is this a Bayesian update?

In this case, since there's never before been a Thursday in which she did not call, we default to the 1st-order model, which says the probability is 3/4 that she will come on Friday.

You seem to be treating the string as being labeled by alternating Thursdays and Fridays, which have letters drawn from different alphabets. The model easily extends to this, but it was probably worth saying, particularly since the two alphabets happen to have the same size.

I find it odd that almost everyone treated weeks as discrete events. In this problem, days seem like the much more natural unit to me. ata probably agrees with me, but he didn't reach a conclusion. With weeks, we have very few observations, so a lot depends on our model, like whether we use alphabets of size 2 for Thursday and Friday (Peter), or whether we use alphabets of size 4 for the whole week (wnoise). I'm going to allow calls and visits on each day and use an alphabet of size 4 for each day. I think it would be better to use a Peter-ish system of separating morning visits from evening calls, but with data indexed by days, we have a lot of data, so I don't think this matters so much.

I'll run my weeks Sun-Sat. Weeks 1 and 2 are complete and week 3 is partial. Treating days as independent and having 4 outcomes: ([no]visit)x([no]call). I interpret the unspecified days as having no call and no visit. Using Laplace's rule of succession, we have 4/23 chance of visit, which sounds pretty reasonable to me. But if we use Peter's hierarchical model, I think our chance of a visit is 4/23*4/17*4/14*4/11*4/8*4/5 = 1/500. That is, since we've never seen a visit after a no-call/no-visit day, the only way to get a visit is from level 1 of the model, so we multiply the chance of falling through from level 2 to level 1, from level 3 to 2, etc. The chance of falling through from level n+1 to level n is 4/(4+c), where c is the number of times we've seen an n+1-gram that continues the last n days. So for n=5, the last 5 days were no-visit-no-call, which we've seen once before, culminating in the no-visit-call Thursday of the second week. So that's our factor of 4/5. For n=4, we've seen the resolution of 4 consecutive days of no-visit-no-call, once in the first week, twice in the second week, and once in the third week; so that's the 4/8.

1/500 seems awfully small to me. Am I using this model correctly? I like level 2, 4/23*4/17=4%, but maybe I'm implicitly getting "2" from a prior that the call is connected to the visit.

With Peter's two alphabets, each of size two, level 1 yields 3/21, level 2 3/21*2/18=2%, and the full model 3/21*2/18*2/16*2/15*2/13*2/12*2/10*2/9*2/7*2/6*2/4*2/4 = 10^-8. Levels 1 and 2 were a little smaller than with the size 4 alphabet, but the full model much smaller. I was expecting the probability of a visit to be about squared, but it was cubed.
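
A quick sketch checking just the level-1 number above (an addition; it assumes days run Sun-Sat with week 3 observed through Thursday, 19 days in all):

    # Four outcomes per day: ([no]visit) x ([no]call); pseudocount 4 split
    # evenly by the maxentropy model, so P0(visit) = 1/2.
    week = ["none"] * 4 + ["call"] + ["visit"] + ["none"]   # Sun..Sat
    days = week * 2 + ["none"] * 5                          # week 3: Sun-Thu
    visits = days.count("visit")                            # 2
    p_visit = (visits + 4 * 0.5) / (len(days) + 4)
    print(p_visit)                                          # 4/23 ~ 0.17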

comment by ata · 2010-03-20T22:15:09.461Z · LW(p) · GW(p)

Today I was listening in on a couple of acquaintances talking about theology. As most theological discussions do, it consisted mainly of cached Deep Wisdom. At one point — can't recall the exact context — one of them said: "…but no mortal man wants to live forever."

I said: "I do!"

He paused a moment and then said: "Hmm. Yeah, so do I."

I think that's the fastest I've ever talked someone out of wise-sounding cached pro-death beliefs.

comment by Vladimir_Nesov · 2010-03-09T10:55:50.784Z · LW(p) · GW(p)

New on arXiv:

David H. Wolpert, Gregory Benford. (2010). What does Newcomb's paradox teach us?

In Newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. Before you choose, a prediction algorithm deduces your choice, and fills the two boxes based on that deduction. Newcomb's paradox is that game theory appears to provide two conflicting recommendations for what choice you should make in this scenario. We analyze Newcomb's paradox using a recent extension of game theory in which the players set conditional probability distributions in a Bayes net. We show that the two game theory recommendations in Newcomb's scenario have different presumptions for what Bayes net relates your choice and the algorithm's prediction. We resolve the paradox by proving that these two Bayes nets are incompatible. We also show that the accuracy of the algorithm's prediction, the focus of much previous work, is irrelevant. In addition we show that Newcomb's scenario only provides a contradiction between game theory's expected utility and dominance principles if one is sloppy in specifying the underlying Bayes net. We also show that Newcomb's paradox is time-reversal invariant; both the paradox and its resolution are unchanged if the algorithm makes its `prediction' after you make your choice rather than before.

See also:

Replies from: xamdam, SilasBarta, SilasBarta
comment by xamdam · 2010-03-10T17:53:16.592Z · LW(p) · GW(p)

In a completely perverse coincidence, Benford's law, attributed to an apparently unrelated Frank Benford, was apparently invented by an unrelated Simon Newcomb: http://en.wikipedia.org/wiki/Benford%27s_law

comment by SilasBarta · 2010-03-09T17:33:13.666Z · LW(p) · GW(p)

Okay, now that I've read section 2 of the paper (where it gives the two decompositions), it doesn't seem so insightful. Here's my summary of the Wolpert/Benford argument:

"There are two Bayes nets to represent the problem: Fearful, where your decision y causally influences Omega's decision g, and Realist, where Omega's decision causally influences yours.

"Fearful: P(y,g) = P(g|y) * P(y), you set P(y). Bayes net: Y -> G. One-boxing is preferable.
"Realist: P(y,g) = P(y|g) * P(g), you set P(y|g). Bayes net: G -> Y. Two-boxing is preferable."

My response: these choices neglect the option presented by AnnaSalamon and Eliezer_Yudkowsky previously: that Omega's act and your act are causally influenced by a common timeless node, which is a more faithful representation of the problem statement.

comment by SilasBarta · 2010-03-09T17:03:01.027Z · LW(p) · GW(p)

Self-serving FYI: In this comment I summarized Eliezer_Yudkowsky's list of the ways that Newcomb's problem, as stated, constrains a Bayes net.

For the non-link-clickers:

  • Must have nodes corresponding to logical uncertainty (Self-explanatory)

  • Omega's decision on box B correlates to our decision of which boxes to take (Box decision and Omega decision are d-connected)

  • Omega's act lies in the past. (ETA: Since nothing is simultaneous with Omega's act, knowledge of Omega's act screens off the influence of everything before it; on the Bayes net, Omega's act blocks all paths from the past to future events; only paths originating from future or timeless events can bypass it.)

  • Omega's act is not directly influencing us (No causal arrow directly from Omega to us/our choice.)

  • We have not found any other property which would screen off this uncertainty even when we inspect our own source code / psychology in advance of knowing our actual decision, and that our computation is the only direct ancestor of our logical output. (Seem to be saying the same thing: arrow from computation directly to logical output.)

  • Our computation is the only direct ancestor of our logical output. (Only arrow pointing to our logical output comes from our computation.)

comment by JohannesDahlstrom · 2010-03-07T23:43:02.007Z · LW(p) · GW(p)

Warning: Your reality is out of date

tl;dr:

There are established facts that don't change perceptibly (the boiling point of water), and there are facts that change constantly (the outside temperature, the time of day).

In between these two intuitive categories, however, a third class of facts could be defined: facts that do change measurably, or even drastically, over a human lifespan, but still so slowly that people, after first learning them, tend to dump them into the "no-change" category unless they're actively paying attention to the field in question.

Examples of these so-called mesofacts include the total human population (6*10⁹? No, almost 7*10⁹ nowadays) and the number of exoplanets found (A hundred? Two hundred? More like four hundred and counting.)

Replies from: RobinZ
comment by RobinZ · 2010-03-07T23:49:43.118Z · LW(p) · GW(p)

I notice the figure for cell phone connectivity is three years old. :P

comment by Peter_de_Blanc · 2010-03-07T20:07:45.474Z · LW(p) · GW(p)

Which very-low-effort activities are most worthwhile? By low effort, I mean about as hard as solitaire, facebook, blogs, TV, most fantasy novels, etc.

Replies from: Kevin, nazgulnarsil
comment by Kevin · 2010-03-12T12:13:15.635Z · LW(p) · GW(p)

I think I have a good one for people in the USA. This is a job that allows you to work from home on your computer rating the quality of search engine results. It pays $15/hour and because their productivity metrics aren't perfect, you can work for 30 seconds and then take two minutes off with about as much variance as you want. Instead of taking time off directly to do different work, you could also slow yourself down by continuously watching TV or downloaded videos.

They are also hiring workers in similar areas who are capable of doing somewhat more complicated tasks, presumably for higher salaries. Some sound interesting. http://www.lionbridge.com/lionbridge/en-us/company/work-with-us/careers.htm

Yes, out of all "work from home" internet jobs, this is the only one that is not a scam. Lionbridge is a real company and their shares recently continued to increase after a strong earnings report. http://online.wsj.com/article/BT-CO-20100210-716444.html?mod=rss_Hot_Stocks

First, you send them your resume, and they basically approve every US high school graduate that can create a resume for the next step. Then you have to take a test in doing the job. They provide plenty of training material and the job isn't all that hard, a few hours of rapid skimming is probably enough to pass the test for most people. Almost 100% of people would be able to pass the test after 10 hours of studying.

comment by nazgulnarsil · 2010-03-12T11:54:43.230Z · LW(p) · GW(p)

throwing/giving away stuff you don't use. reading instead of watching tv or browsing websites for the umpteenth time. eating more fruit and less processed sugar. exercising 10-15 minutes a day. writing down your ideas. intro to econ of some sort. spending 30 minutes a day on a long term project. meditation.

comment by JGWeissman · 2010-03-05T06:00:27.467Z · LW(p) · GW(p)

Should we have a sidebar section "Friends of LessWrong" to link to sites with some overlap in goals/audience?

I would include TakeOnIt in such a list. Any other examples?

comment by [deleted] · 2010-03-02T21:07:46.483Z · LW(p) · GW(p)

When I was young, I happened upon a book called "The New Way Things Work," by David Macaulay. It described hundreds of household objects, along with descriptions and illustrations of how they work. (Well, a nuclear power plant, and the atoms within it, aren't household objects. But I digress.) It was really interesting!

I remember seeing someone here mention that they had read a similar book as a kid, and it helped them immensely in seeing the world from a reductionist viewpoint. I was wondering if anyone else had anything to say on the matter.

Replies from: MrHen, None, Nick_Tarleton, Jack, Morendil, Nisan, h-H
comment by MrHen · 2010-03-02T21:25:57.857Z · LW(p) · GW(p)

I loved that book. I still have moments when I pull some random picture from that book out of my memory to describe how an object works.

EDIT: Apparently the book is on Google.

comment by [deleted] · 2010-03-03T07:22:57.391Z · LW(p) · GW(p)

Today there's How Stuff Works.

comment by Nick_Tarleton · 2010-03-03T01:41:39.621Z · LW(p) · GW(p)

I also loved that book. It probably helped teach me reductionism, but it's hard to tell given my generally terrible memory for my childhood.

(FWIW, my best guess for my biggest reductionist influence would be learning assembly language and other low-level CS details.)

comment by Jack · 2010-03-02T22:15:23.464Z · LW(p) · GW(p)

I think we had this in the house, but I don't remember it very well, except some of the part about pulleys and levers. This book would be a nice starting point for that rebuilding-civilization manual idea from a while back.

comment by Morendil · 2010-03-02T21:14:07.293Z · LW(p) · GW(p)

My favorite Macaulay is "Motel of the Mysteries". I read it as a kid and it definitely had an influence. ;)

comment by Nisan · 2010-03-07T01:56:12.230Z · LW(p) · GW(p)

I have fond childhood memories of many hours tracing the circuit diagram of the adding circuit : ) God, I was so nerdy. I wanted to know how a computer worked and that book helped me avoid a mysterious answer to a mysterious question. Learning, in detail, how a specific logic circuit works really drove home how much I had yet to learn about the rest of the workings of a computer.

comment by h-H · 2010-03-03T00:07:23.562Z · LW(p) · GW(p)

I was going to get that for my younger brother when I next see him :)

comment by vinayak · 2010-03-01T20:27:14.784Z · LW(p) · GW(p)

I have two basic questions that I am confused about. This is probably a good place to ask them.

  1. What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be? For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.

  2. Consider the following very interesting game. You have been given a person who will respond to all your yes/no questions by assigning a probability to 'yes' and a probability to 'no'. What's the smallest sequence of questions you can ask him to decide for sure that a) he is not a rationalist, b) he is not a Bayesian?

Replies from: MrHen, JGWeissman, orthonormal, Jack, knb, Kaj_Sotala
comment by MrHen · 2010-03-01T21:27:16.275Z · LW(p) · GW(p)

This is somewhat similar to the question I asked in Reacting to Inadequate Data. It was hit with a -3 rating though... so apparently it wasn't too useful.

The consensus of the comments was that the correct answer is .5.

Also of note is Bead Jar Guesses and its sequel.

comment by JGWeissman · 2010-03-01T20:46:59.592Z · LW(p) · GW(p)

What probability should you assign as a Bayesian to the answer of a yes/no question being yes if you have absolutely no clue about what the answer should be?

If you truly have no clue, .5 yes and .5 no.

For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli.

Ah, but here you have some clues, which you should update on, and knowing how is much trickier. One clue is that the unknown game of Doldun could possibly have more than 2 teams competing, of which only 1 could win, and this should shift the probabilities in favor of "No". How much? Well, that depends on your probability distribution for an unknown game to have n competing teams. Of course, there may be other clues that should shift the probability towards "yes".
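
As a toy illustration of that dependence (the numbers below are my own arbitrary assumptions, not anything implied by the problem):

    # Assumed prior over how many teams a Doldun game has; each game is
    # assumed to have exactly one winning team, Strigli being one of the n.
    team_count_prior = {2: 0.5, 3: 0.3, 4: 0.2}

    p_yes = sum(p_n / n for n, p_n in team_count_prior.items())
    print(p_yes)  # 0.4 -- already below 0.5 before considering any other clues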

Replies from: Alicorn
comment by Alicorn · 2010-03-01T21:17:06.664Z · LW(p) · GW(p)

But the game of Doldun could also have the possibility of cooperative wins. Or it could be unwinnable. Or Strigli might not be playing. Or Strigli might be the only team playing - it's the team against the environment! Or Doldun could be called on account of a rain of frogs. Or Strigli's left running foobar could break a chitinous armor plate and be replaced by a member of team Baz, which means that Baz gets half credit for a Strigli win.

Replies from: orthonormal
comment by orthonormal · 2010-03-02T01:52:22.483Z · LW(p) · GW(p)

All of which means that you shouldn't be too confident in your probability distribution in such a foreign situation, but you still have to come up with a probability if it's relevant at all for action. Bad priors can hurt, but refusal to treat your uncertainty in a Bayes-like fashion hurts more (with high probability).

Replies from: Alicorn
comment by Alicorn · 2010-03-02T01:56:20.177Z · LW(p) · GW(p)

Yes, but in this situation you have so little information that .5 doesn't seem remotely cautious enough. You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case? .5 isn't the right prior - some eensy prior that any given possibly-made-up alien thing will happen, adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.

Replies from: orthonormal, gwern, SoullessAutomaton
comment by orthonormal · 2010-03-02T02:17:11.486Z · LW(p) · GW(p)

You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case?

Unless there's some reason that they'd suspect it's more likely for us to ask them a trick question whose answer is "No" than one whose answer is "Yes" (although it is probably easier to create trick questions whose answer is "No", and the Striglian could take that into account), 50% isn't a bad probability to assign if asked a completely foreign Yes-No question.

Basically, I think that this and the other problems of this nature discussed on LW are instances of the same phenomenon: when the space of possibilities (for alien culture, Omega's decision algorithm, etc) grows so large and so convoluted as to be utterly intractable for us, our posterior probabilities should be basically our ignorance priors all over again.

Replies from: Alicorn
comment by Alicorn · 2010-03-02T02:31:10.454Z · LW(p) · GW(p)

It seems to me that even if you know that there is a Doldun game, played by exactly two teams, of which one is Strigli, which game exactly one team will entirely win, 50% is as high as you should go. If you don't have that much precise information, then 50% is an extremely generous upper bound for how likely you should consider a Strigli win. The space of all meaningful false propositions is hugely larger than the space of all meaningful true propositions. For every proposition that is true, you can also contradict it directly, and then present a long list of indirectly contradictory statements. For example: it is true that I am sitting on a blue couch. It is false that I am not on a blue couch - and also false that I am on a red couch, false that I am trapped in carbonite, false that I am beneath the Great Barrier Reef, false that I'm in the Sea of Tranquility, false that I'm equidistant between the Sun and the star Polaris, false that... Basically, most statements you can make about my location are false, and therefore the correct answer to most yes-or-no questions you could ask about my location is "no".

Basically, your prior should be that everything is almost certainly false!

Replies from: cousin_it, orthonormal, JGWeissman
comment by cousin_it · 2010-03-09T15:51:33.847Z · LW(p) · GW(p)

The odds of a random sentence being true are low, but the odds of the alien choosing to give you a true sentence are higher.

Replies from: thomblake
comment by thomblake · 2010-03-09T20:13:00.208Z · LW(p) · GW(p)

A random alien?

Replies from: bogdanb
comment by bogdanb · 2010-03-12T13:29:31.361Z · LW(p) · GW(p)

No, just a random alien that (1) I encountered and (2) asked me a question.

The two conditions above restrict enormously the general class of “possible” random aliens. Every condition that restricts possibilities brings information, though I can't see a way of properly encoding this information as a prior about the answer to said question.

[ETA:] Note that I don't necessarily accept cousin_it's assertion, I just state my interpretation of it.

comment by orthonormal · 2010-03-02T02:41:11.284Z · LW(p) · GW(p)

Well, let's say I ask you whether all "fnynznaqre"s are "nzcuvovna"s. Prior to using rot13 on this question (and hypothesizing that we hadn't had this particular conversation beforehand), would your prior really be as low as your previous comment implies?

(Of course, it should probably still be under 50% for the reference class we're discussing, but not nearly that far under.)

Replies from: Alicorn
comment by Alicorn · 2010-03-02T02:43:13.959Z · LW(p) · GW(p)

Given that you chose this question to ask, and that I know you are a human, then screening off this conversation I find myself hovering at around 25% that all "fnynznaqre"s are "nzcuvovna"s. We're talking about aliens. Come on, now that it's occurred to you, wouldn't you ask an E.T. if it thinks the Red Sox have a shot at the spelling bee?

Replies from: orthonormal
comment by orthonormal · 2010-03-02T03:00:15.519Z · LW(p) · GW(p)

Yes, but I might as easily choose a question whose answer was "Yes" if I thought that a trick question might be too predictable of a strategy.

1/4 seems reasonable to me, given human psychology. If you expand the reference class to all alien species, though, I can't see why the likelihood of "Yes" should go down— that would generally require more information, not less, about what sort of questions the other is liable to ask.

Replies from: Alicorn
comment by Alicorn · 2010-03-02T03:03:50.996Z · LW(p) · GW(p)

Okay, if you have some reason to believe that the question was chosen to have a specific answer, instead of being chosen directly from questionspace, then you can revise up. I didn't see a reason to think this was going on when the aliens were asking the question, though.

Replies from: orthonormal
comment by orthonormal · 2010-03-02T03:07:13.591Z · LW(p) · GW(p)

Hmm. As you point out, questionspace is biased towards "No" when represented in human formalisms (if weighting by length, it's biased by nearly the length of the "not" symbol), and it would seem weird if it weren't so in an alien representation as well. Perhaps that's a reason to revise down and not up when taking information off the table. But it doesn't seem like it should be more than (say) a decibel's worth of evidence for "No".

ETA: I think we each just acknowledged that the other has a point. On the Internet, no less!

Replies from: Alicorn
comment by Alicorn · 2010-03-02T03:10:04.673Z · LW(p) · GW(p)

ETA: I think we each just acknowledged that the other has a point. On the Internet, no less!

Isn't it awesome when that happens? :D

Replies from: vinayak
comment by vinayak · 2010-03-02T05:49:23.138Z · LW(p) · GW(p)

I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events -

  1. Strigli wins the game.
  2. It rains immediately after the match is over.
  3. Strigli wins the game AND it rains immediately after the match is over.

I am not sure if your scheme ensures that this does not happen.

Also, to me, Bayesianism sounds like an iterative way of forming consistent beliefs, where in each step you gather some evidence and update your probability estimates for the truth or falsity of various hypotheses accordingly. But I don't understand how exactly to start. Or in other words, consider the very first iteration of this whole process, where you do not have any evidence whatsoever. What probabilities do you assign to the truth or falsity of different hypotheses?

One way I can imagine is to assign all of them a probability inversely proportional to their Kolmogorov complexities. The good thing about Kolmogorov complexity is that it satisfies the axioms of probability. But I have only seen it defined for strings and such. I don't know how to define Kolmogorov complexity of complicated things like hypotheses. Also, even if there is a way to define it, I can't completely convince myself that it gives a correct prior probability.
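
A quick sanity check of the coherence constraint raised above, as a sketch with arbitrary placeholder numbers (mine, purely for illustration):

    # The conjunction can never be more probable than either conjunct, so
    # the three events cannot all coherently receive equal probability
    # (barring the degenerate case where each conjunct implies the other).
    p_win = 0.5            # P(Strigli wins the game)
    p_rain = 0.3           # P(it rains immediately after the match)
    p_win_and_rain = 0.2   # P(both)

    assert p_win_and_rain <= min(p_win, p_rain), "violates the probability axioms"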

Replies from: bogdanb, orthonormal
comment by bogdanb · 2010-03-12T14:08:28.623Z · LW(p) · GW(p)

For example, you should definitely not end up assigning equal probabilities to the following three events -

  1. Strigli wins the game.
  2. It rains immediately after the match is over.
  3. Strigli wins the game AND it rains immediately after the match is over.

I am not sure if your scheme ensures that this does not happen.

I just wanted to note that it is actually possible to do that, provided that the questions are asked in order (not simultaneously). That is, I might logically think that the answer to (1) and (2) is true with 50% probability after I'm asked each question. Then, when I'm asked (3), I might logically deduce that (3) is true with 50% probability — however, this only means that after I'm asked (3), the very fact that I was asked (3) caused me to raise my confidence that (1) and (2) are true. It's a fine point that seems easy to miss.

On a somewhat related point, I've looked at the entire discussion and it seems to me the original question is ill-posed, in the sense that the question, with high probability, doesn't mean what the asker thinks it means.

Take "For example, let's say you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli." The question is intended to prevent you from having any prior information about its subject.

However, what it means is just that before you are asked the question, you don't have any information about it. (And I'm not even very sure about that.) But once you are asked the question, you have received a huge amount of information: the very fact that you received that question is extremely improbable (in the class of “what could have happened instead”). Also note that it is vastly more improbable than, say, being asked by somebody on the street whether you think his son will get an A today.

“Something extremely improbable happens” means “you just received information”; the more improbable it was, the more information you received (though I think there are some logs in that relationship; see the sketch after this comment).

So, the fact that you are suddenly sent to the planet Progsta and a Sillpruk comes and asks you whether the game of Doldun will be won by the team Strigli brings a lot of information: space travel is possible within one's lifetime, aliens exist, aliens have that travel technology, aliens bring people to their planets, aliens can pose a question to somebody just brought to their planet, they live on at least one planet, they have something they translate as “game” in English, they have names for planets, individuals, games and teams, and they translate those names in some particular English-pronounceable (or -writable, depending on how the question was asked) form.

More subtly, you think that a Sillpruk came to you and asked you a question; this implies you have good reason to think that the events should be interpreted as such (rather than just, say, a block of matter arriving in front of you and making some sounds). The class of events “aliens take you to their planets and ask you a question” is vastly larger than “the same, but you realize it”.


tl;dr: I guess what I mean is that “what priors you use for a question you have no idea about” is ill formed, because it's pretty much logically impossible that you have no relevant information.
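
The "logs in that relationship" are, I believe, just the standard surprisal formula; a minimal sketch:

    import math

    def surprisal_bits(p):
        # Information (in bits) gained by observing an event of probability p.
        return -math.log2(p)

    print(surprisal_bits(0.5))    # 1.0 bit
    print(surprisal_bits(1e-6))   # ~19.9 bits: rarer events carry more information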

comment by orthonormal · 2010-03-03T06:05:10.023Z · LW(p) · GW(p)

Definitely agree on the first point (although, to be careful, the probabilities I assign to the three events could be epsilons apart if I were convinced of a bidirectional implication between 1 and 2).

On the second part: Yep, you need to start with some prior probabilities, and if you don't have any already, the ignorance prior of 2^{-n} for each hypothesis that can be written (in some fixed binary language) as a program of length n is the way to go. (This is basically what you described, and carrying forward from that point is called Solomonoff induction.)

In practice, it's not possible to estimate hypothesis complexity with much precision, but it doesn't take all that much precision to judge in cases like Thor vs. Maxwell's Equations; and anyway, as long as your priors aren't too ridiculously off, actually updating on evidence will correct them soon enough for most practical purposes.

ETA: Good to keep in mind: When (Not) To Use Probabilities
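
A toy rendering (my own) of how that ignorance prior penalizes description length:

    def prior_odds(bits_a, bits_b):
        # Prior odds of hypothesis A over B under the 2^-n ignorance prior,
        # given the lengths (in bits) of their shortest encodings.
        return 2.0 ** (bits_b - bits_a)

    print(prior_odds(100, 110))  # 1024.0 -- ten extra bits costs a factor of ~1000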

comment by JGWeissman · 2010-03-02T02:38:59.753Z · LW(p) · GW(p)

and also false that I am on a red couch,

But it is true that you are not on a red couch.

Negation is a one-to-one map between true and false propositions.

Replies from: Alicorn
comment by Alicorn · 2010-03-02T02:40:46.035Z · LW(p) · GW(p)

Since you can understand the alien's question except for the nouns, presumably you'd be able to tell if there was a "not" in there?

Replies from: JGWeissman
comment by JGWeissman · 2010-03-02T02:52:49.436Z · LW(p) · GW(p)

Yes, you have made a convincing argument, I think, that a proposition which does not involve negation, as in the alien's question, is more likely to be false than true. (At least, if your prior for which questions you get presented with penalizes complexity. The sizes of the spaces of true and false propositions, however, are the same countable infinity.) (Sometimes I see claims in isolation, and so miss that a slightly modified claim is more correct and still supports the same larger claim.)

ETA: We should also note the absence of any disjunctions. It is also true that Alicorn is sitting on a blue couch or a red couch. (Well, maybe not, some time has passed since she reported sitting on a blue couch. But that's not the point.)

This effect may be screened off if, for example, you have a prior that the aliens first choose whether the answer should be yes or no, and then choose a question to match the answer.

comment by gwern · 2010-03-04T02:03:29.960Z · LW(p) · GW(p)

That the aliens chose to translate their word as the English 'game' says, I think, a lot.

Replies from: Alicorn
comment by Alicorn · 2010-03-04T02:06:02.696Z · LW(p) · GW(p)

"Game" is one of the most notorious words in the language for the virtual impossibility of providing a unified definition absent counterexamples.

Replies from: Richard_Kennaway, gwern
comment by Richard_Kennaway · 2010-03-04T10:54:39.558Z · LW(p) · GW(p)

"A game is a voluntary attempt to overcome unnecessary obstacles."

Replies from: JohannesDahlstrom
comment by JohannesDahlstrom · 2010-03-16T16:04:19.045Z · LW(p) · GW(p)

This is, perhaps, a necessary condition but not a sufficient one. It is true of almost all hobbies, but I wouldn't classify hobbies such as computer programming or learning to play the piano as games.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-03-16T16:31:32.862Z · LW(p) · GW(p)

I wouldn't class most hobbies as attempts to overcome unnecessary obstacles either -- certainly not playing a musical instrument, where the difficulties are all necessary ones. I might count bird-watching, of the sort where the twitcher's goal is to get as many "ticks" (sightings of different species) as possible, as falling within the definition, but for that very reason I'd regard it as being a game.

One could argue that compulsory games at school are a counterexample to the "voluntary" part. On the other hand, Láadan has a word "rashida": "a non-game, a cruel "playing" that is a game only for the dominant "player" with the power to force others to participate [ra=non- + shida=game]". In the light of that concept, perhaps these are not really games for the children forced to participate.

But whatever nits one can pick in Bernard Suits' definition, I still think it makes a pretty good counter to Wittgenstein's claims about the concept.

Replies from: JohannesDahlstrom
comment by JohannesDahlstrom · 2010-03-16T21:21:05.154Z · LW(p) · GW(p)

I wouldn't class most hobbies as attempts to overcome unnecessary obstacles either -- certainly not playing a musical instrument, where the difficulties are all necessary ones.

Oh, right. Reading "unnecessary" as "artificial", the definition is indeed as good as they come. My first interpretation was somewhat different and, in retrospect, not very coherent.

comment by gwern · 2010-03-04T02:22:39.809Z · LW(p) · GW(p)

A family resemblance is still a resemblance.

"The sense of a sentence - one would like to say - may, of course, leave this or that open, but the sentence must nevertheless have a definite sense. An indefinite sense - that would really not be a sense at all. - This is like: An indefinite boundary is not really a boundary at all. Here one thinks perhaps: if I say 'I have locked the man up fast in the room - there is only one door left open' - then I simply haven't locked him in at all; his being locked in is a sham. One would be inclined to say here: 'You haven't done anything at all'. An enclosure with a hole in it is as good as none. - But is that true?"

Replies from: radical_negative_one
comment by radical_negative_one · 2010-03-06T22:36:09.620Z · LW(p) · GW(p)

Could you include a source for this quote, please?

Replies from: gwern
comment by gwern · 2010-03-07T00:55:18.813Z · LW(p) · GW(p)

Googling it would've told you that it's from Wittgenstein's Philosophical Investigations.

Replies from: JGWeissman
comment by JGWeissman · 2010-03-07T01:37:08.189Z · LW(p) · GW(p)

Simply Googling it would not have signaled any disappointment radical_negative_one may have had that you did not include a citation (preferably with a relevant link) as is normal when making a quote like that.

Replies from: gwern
comment by gwern · 2010-03-07T02:24:16.455Z · LW(p) · GW(p)

/me bats the social signal into JGWeissman's court

Omitting the citation, which wasn't really needed, sends the message that I don't wish to stand on Wittgenstein's authority but think the sentiment stands on its own.

Replies from: wedrifid, RobinZ
comment by wedrifid · 2010-03-07T02:41:51.683Z · LW(p) · GW(p)

Then use your own words. Wittgenstein's are barely readable.

Replies from: JGWeissman
comment by JGWeissman · 2010-03-07T02:49:08.392Z · LW(p) · GW(p)

My words are barely readable? Did you mean Wittgenstein's words?

Replies from: wedrifid
comment by wedrifid · 2010-03-07T02:59:53.873Z · LW(p) · GW(p)

Pardon me, I meant Wittgenstein.

comment by RobinZ · 2010-03-07T02:30:23.876Z · LW(p) · GW(p)

If it doesn't stand on its own, you shouldn't quote it at all - the purpose of the citation is to allow interested parties to investigate the original source, not to help you convince.

Replies from: JGWeissman, gwern
comment by JGWeissman · 2010-03-07T02:39:54.292Z · LW(p) · GW(p)

Voted up, but I would say the purpose is to do both, to help convince and help further investigation, and more, such as to give credit to the source. Citations benefit the reader, the quoter, and the source.

I definitely agree that willingness to forgo your own benefit as the quoter does not justify ignoring the benefits to the others involved.

Replies from: RobinZ
comment by RobinZ · 2010-03-07T20:08:00.011Z · LW(p) · GW(p)

You're right, of course.

comment by gwern · 2010-03-07T13:48:54.649Z · LW(p) · GW(p)

If he couldn't even 'investigate' one Google search, then he's not going to get a whole lot out of knowing it's Wittgenstein's PI.

not to help you convince.

Arguments from authority are inductively valid, much like ad hominems...

Replies from: RobinZ
comment by RobinZ · 2010-03-07T16:19:11.585Z · LW(p) · GW(p)

Argument screens off authority. And a Google search is inconvenient.

Please source your quotes. Thank you.

Replies from: gwern
comment by gwern · 2010-03-07T19:17:27.261Z · LW(p) · GW(p)

If you can't see the difference between Wittgenstein making an argument about what our intuitions about meaning and precision say and hard technical scientific arguments - like your 'Argument screens' link - nor how knowing the quote is by Wittgenstein could distort one's own introspection & thinking about the former argument, while it would not do so about the latter, then I will just have to accept my downvotes in silence because further dialogue is useless.

Replies from: JGWeissman
comment by JGWeissman · 2010-03-07T19:37:51.043Z · LW(p) · GW(p)

I voted up RobinZ's comment for the link to Beware Trivial Inconveniences.

Since his polite attempt is not getting through to you, I will be more explicit:

You do not have sufficient status to get away with violating group norms regarding the citations of quotes. Rather than signaling that you are confident in your status, and have greater wisdom about the value of citations, it actually signals that you are less valuable to the group, in part as a result of your lack of loyalty, and that your behavior reflects poorly on the group. On net, this lowers your status.

Knock it off.

Replies from: Morendil
comment by Morendil · 2010-03-07T19:55:13.834Z · LW(p) · GW(p)

On net, this lowers your status.

I am growing ever more suspicious of this "status" term.

I'd prefer to resort to (linguistic) pragmatics. RNO made a straightforward and polite request. Then gwern granted the request but planted a subtle barb at the same time (roughly "You should have Googled it"). That was rude. We can only speculate on the reasons for being rude (e.g. past exchanges with RNO). Instead of acknowledging the rudeness and apologizing gracefully gwern is defending the initial behaviour. Both the rudeness and the defensiveness run counter to this site's norms. My prediction is further downvotes if this continues (and apparently gwern agrees!).

"Status", in this case, seems once again to be a non-load-bearing term.

Replies from: JGWeissman, RobinZ, Jack
comment by JGWeissman · 2010-03-07T21:35:56.980Z · LW(p) · GW(p)

"Status", in this case, seems once again to be a non-load-bearing term.

I don't think this is fair as a criticism of my analysis, as the details I gave indicate how I cash out "status" at a lower level of abstraction. The explanatory power of the term in this case is that people have an expectation that with enough status, they can get away with violating group norms (and demonstrating this augments status), and Gwern seems to (falsely) think he(?) has sufficient status to get away with violating this norm. (Really, this norm is important to this group and I don't believe anyone has enough status to get away with violating it here.)

I realize we have had some confused posts about status lately, including the one your linked comment responds to, but that doesn't make it wrong to use the word to refer to a summary of a person's value and power within a group, and other group members' perceptions of these attributes.

Also note, I did not write that comment to explain to others what is going on, but to get Gwern to conform to what I believe is an important group norm.

Replies from: Morendil
comment by Morendil · 2010-03-07T21:59:45.749Z · LW(p) · GW(p)

Mind you, I have no particular interest in this minor dispute about sourcing quotes. By and large I prefer to see quotes with a source.

I am (perhaps unwisely) acting on my frustration at one more use of the term "status" that has increased my confusion, while my requests for clarification have gone without response, and thus opportunistically linking an unrelated thread to those requests.

The explanatory power of the term in this case is that people have an expectation

I do not have privileged access to gwern's expectations, I can only infer them in very roundabout ways from gwern's behaviour. I would regard with extreme caution an "explanation" that referred to someone's mental state, without at least a report by that person of their mental state. The short-hand I use for this mistake is "mind-reading".

Maybe if gwern had come out and said "I have 1337(+) karma, punk. I can quote without sourcing if I want to", I'd be more sympathetic to your use of the term "status". But gwern didn't, and in fact gave a reason for not sourcing, so he would be justified in saying something like "Argument screens off status" in response to your claims.

You could just as well have told gwern, "This community has a norm of sourcing quotes. I note your argument that this norm would detract from the value of the quotes by appearing to appeal to authority. I reject the argument, and additionally I think you're being a jerk."

(+) Not numerically correct, but close enough that I couldn't resist the pun.

Replies from: JGWeissman
comment by JGWeissman · 2010-03-07T22:29:37.169Z · LW(p) · GW(p)

Maybe if gwern had come out and said ...

I think gwern just might be more subtle than a paperclip maximizer.

I reject the argument

I did reject the argument, or at least agreed with RobinZ in rejecting the argument. I made the point about "This community has a norm of sourcing quotes." I won't just bluntly say "I think you're being a jerk." as "jerk" is an inflammatory uninformative term.

It seems to me like you are objecting to my practical use of a theory because you don't understand it, and because other people have written low quality posts about it (which I criticized). Maybe you should go read a high quality post about it.

comment by RobinZ · 2010-03-07T20:10:42.224Z · LW(p) · GW(p)

I suspect my analysis differs from yours - for one, I read in RNO's request a similar barb: roughly, "You should have included a source when you posted a quote." JGW's initial comment noted the presence of this - RNO's - barb, whereupon gwern acknowledged the existence of a disagreement by arguing explicitly for his position. In fact, the first post in his argument is at positive karma - I suspect because it is a valid point, despite being in opposition to the norm.

I would not be so hasty to dismiss JGW's analysis.

comment by Jack · 2010-03-07T20:01:08.291Z · LW(p) · GW(p)

The status part seems to come from an assumption that Eliezer or someone else could have gotten away with it. That assumption may be wrong. I think your interpretation is better.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-03-07T21:13:39.124Z · LW(p) · GW(p)

There are rational reasons to hesitate more before harshing the behaviour of people you trust more - you are more likely to be mistaken.

comment by SoullessAutomaton · 2010-03-02T02:07:20.148Z · LW(p) · GW(p)

adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.

Hm. For actual aliens I don't think even that's justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.

Replies from: Alicorn
comment by Alicorn · 2010-03-02T02:16:37.796Z · LW(p) · GW(p)

I was conditioning on the probability that the question is in fact meaningful to the aliens (more like "Will the Red Sox win the spelling bee?" than like "Does the present king of France's beard undertake differential diagnosis of the psychiatric maladies of silk orchids with the help of a burrowing hybrid car?"). If you assume they're just stringing words together, then there's not obviously a proposition you can even assign probability to.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-02T02:28:11.164Z · LW(p) · GW(p)

Hey, maybe they're Zen aliens who always greet strangers by asking meaningless questions.

More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.

comment by orthonormal · 2010-03-02T02:26:13.049Z · LW(p) · GW(p)

For #2, I don't see how you could ever be completely sure the other was rationalist or Bayesian, short of getting their source code; they could always have one irrational belief hiding somewhere far from all the questions you can think up.

In practice, though, I think I could easily decide within 10 questions whether a given (honest) answerer is in the "aspiring rationalist" cluster and/or the "Bayesian" cluster, and get the vast majority of cases right. People cluster themselves pretty well on many questions.

comment by Jack · 2010-03-01T21:54:23.608Z · LW(p) · GW(p)

For two, can I just have an extended preface that describes a population, an infection rate for some disease and a test with false positivity rates and false negativity rates and see if the person gives me the right answer?

comment by knb · 2010-03-02T07:39:27.986Z · LW(p) · GW(p)

For number 1 you should weight "no" more highly. For the answer to be "yes" Strigli must be a team, a Doldun team, and it must win. Sure, maybe all teams win, but it is possible that all teams could lose, they could tie, or the game might be cancelled, so a "no" is significantly more likely to be right.

50% seems wrong to me.

comment by Kaj_Sotala · 2010-03-01T20:55:21.537Z · LW(p) · GW(p)

1: If you have no information to support either alternative more than the other, you should assign them both equal credence. So, fifty-fifty. Note that yes-no questions are the easiest possible case, as you have exactly two options. Things get much trickier once it's not obvious what things should be classified as the alternatives that should be considered equally plausible.

Though I would say that in this situation, the most rational approach would be to tell the Sillpruk, "I'm sorry, I'm not from around here. Before I answer, does this planet have a custom of killing people who give the wrong answer to this question, or is there anything else I should be aware of before replying?"

2: This depends a lot how we define a rationalist and a Bayesian. A question like "is the Bible literally true" could reveal a lot of irrational people, but I'm not certain of the amount of questions that'd need to be asked before we could know for sure that they were irrational. (Well, since 1 and 0 aren't probabilities, the strict answer to this question is "it can't be done", but I'm assuming you mean "before we know with such a certainty that in practice we can say it's for sure".)

Replies from: vinayak
comment by vinayak · 2010-03-02T06:07:35.225Z · LW(p) · GW(p)

Yes, I should be more specific about 2.

So let's say the following are the first three questions you ask and their answers -

Q1. Do you think A is true? A. Yes.
Q2. Do you think A=>B is true? A. Yes.
Q3. Do you think B is true? A. No.

At this point, will you conclude that the person you are talking to is not rational? Or will you first want to ask him the following question.

Q4. Do you believe in Modus Ponens?

or in other words,

Q4. Do you think that if A and A=>B are both true then B should also be true?

If you think you should ask this question before deciding whether the person is rational or not, then why stop here? You should continue and ask him the following question as well.

Q5. Do you think that if you believe in Modus Ponens and if you also think that A and A=>B are true, then you should also believe that B is true as well?

And I can go on and on...

So the point is, if you think asking all these questions is necessary to decide whether the person is rational or not, then in effect any given person can have any arbitrary set of beliefs and still claim to be rational, simply by adding a few extra beliefs to his belief system which deny the n-th-level Modus Ponens for some suitably chosen n.

Replies from: prase
comment by prase · 2010-03-02T16:18:35.191Z · LW(p) · GW(p)

I think that belief in modus ponens is part of the definition of "rational", at least practically, so Q1-Q3 are enough. However, there are not many tortoises among the general public, so this type of question probably isn't much help.

comment by Kevin · 2010-03-10T03:24:10.347Z · LW(p) · GW(p)

LHC to shut down for a year to address safety concerns: http://news.bbc.co.uk/2/hi/science/nature/8556621.stm

Replies from: Kevin, Jack
comment by Kevin · 2010-03-10T09:43:45.771Z · LW(p) · GW(p)

Apparently this is shoddy journalism. http://news.ycombinator.com/item?id=1180487

comment by Jack · 2010-03-10T06:53:16.909Z · LW(p) · GW(p)

So do we count this as additional evidence that some anthropic selection is in effect even though it is causally connected to the earlier breakdown?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-03-10T09:27:12.583Z · LW(p) · GW(p)

I like this quote from the director:

"With a machine like the LHC, you only build one and you only build it once."

comment by MichaelGR · 2010-03-08T18:53:38.188Z · LW(p) · GW(p)

I've just finished reading Predictably Irrational by Dan Ariely.

I think most LWers would enjoy it. If you've read the sequences, you probably won't learn that many new things (though I did learn a few), but it's a good way to refresh your memory (and it probably helps memorization to see those biases approached from a different angle).

It's a bit light compared to going straight to the studies, but it's also a quick read.

Good to give as a gift to friends.

Replies from: Hook
comment by Hook · 2010-03-08T19:36:41.917Z · LW(p) · GW(p)

I'm waiting for the revised edition to come out in May.

Replies from: Hook, MichaelGR
comment by Hook · 2010-03-08T19:41:05.372Z · LW(p) · GW(p)

Looking at that Amazon link, has anyone considered automatically inserting an SIAI affiliate tag into Amazon links? It appeared to work quite well for StackOverflow.

comment by MichaelGR · 2010-03-08T23:03:26.082Z · LW(p) · GW(p)

Is there a description of the changes somewhere?

Replies from: Hook
comment by Hook · 2010-03-09T03:03:18.091Z · LW(p) · GW(p)

I didn't see any, but it is close to 100 pages longer.

Replies from: MichaelGR
comment by MichaelGR · 2010-03-09T03:10:20.499Z · LW(p) · GW(p)

Original hardcover was 244 pages long, so 100 pages is a significant addition. Probably worth waiting for.

comment by Vladimir_Nesov · 2010-03-07T09:27:27.234Z · LW(p) · GW(p)

Game theorists discuss one-shot Prisoner's dilemma, why people who don't know Game Theory suggest the irrational strategy of cooperating, and how to make them intuitively see that defection is the right move.

Replies from: RobinZ
comment by RobinZ · 2010-03-07T17:26:38.149Z · LW(p) · GW(p)

Interesting. Has this experiment actually been run, and does it change the percentages in the responses relative to the textbook version?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-03-07T18:35:04.779Z · LW(p) · GW(p)

That would be a scientific approach to the Dark Arts.

Replies from: RobinZ
comment by RobinZ · 2010-03-07T19:26:44.337Z · LW(p) · GW(p)

The linked post seemed to run far ahead of the presented evidence - and this is a kind of situation in which the scientific method is known to be quite powerful.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-03-07T22:45:10.330Z · LW(p) · GW(p)

Sure. The Dark Arts don't diminish the power of the scientific approach, though they probably defy its purpose.

comment by Kevin · 2010-03-04T10:41:28.533Z · LW(p) · GW(p)

Is there a way to view an all time top page for Less Wrong? I mean a page with all of the LW articles in descending order by points, or something similar.

Replies from: FAWS
comment by FAWS · 2010-03-04T11:52:04.891Z · LW(p) · GW(p)

The link named "top" in the top bar, below the banner? Starting with the 10 all time highest ranked articles and continuing with the 10 next highest when you click "next", and so on? Or do I misunderstand you and you mean something else?

Replies from: Kevin
comment by Kevin · 2010-03-04T12:00:58.775Z · LW(p) · GW(p)

Thanks, I was missing the drop down button on that page.

comment by h-H · 2010-03-04T01:33:22.315Z · LW(p) · GW(p)

While not so proficient in math, I do scour arXiv on occasion, and am rewarded with gems like this one. Enjoy :)

"Lessons from failures to achieve what was possible in the twentieth century physics" by Vesselin Petkov http://arxiv.org/PS_cache/arxiv/pdf/1001/1001.4218v1.pdf

Replies from: wnoise, arundelo
comment by wnoise · 2010-03-04T01:57:21.929Z · LW(p) · GW(p)

I generally prefer links to papers on the arxiv go the abstract, as so: http://arxiv.org/abs/1001.4218

This lets us read the abstract, and easily get to other versions of the same paper (including the latest, if some time goes by between your posting and my reading), and get to other works by the same author.

EDIT: overall, reasonable points, but some things "pinging" my crank-detectors. I suppose I'll have to track down reference 10 and the 4/3 claim for electro-magnetic mass.

Replies from: Mitchell_Porter, Cyan
comment by Mitchell_Porter · 2010-03-04T04:50:49.387Z · LW(p) · GW(p)

overall, reasonable points

I disagree. I think it's a paper which looks backwards in an unconstructive way. The author is hoping for conceptual breakthroughs as good as relativity and quantum theory, but which don't require engagement with the technical complexities of string theory or the Standard Model. Those two constructions respectively define the true theoretical and empirical frontier, but instead the author wants to ignore all that, linger at about a 1930s conceptual level, and look for another way.

ETA: As an example of not understanding contemporary developments, see his final section, where he says

While string theory has extensively studied how the interactions in the hydrogen atom can be represented in terms of the string formalism, I wonder how string theory would answer a much simpler question – what should be the electron in the ground state of the hydrogen atom in order that the hydrogen atom does not possess a dipole moment in that state?

I don't know what significance this question has for the author, but so far as I know, the hydrogen atom has no dipole moment in its ground state because the wavefunction is spherically symmetric. This will still be true in string theory. The hydrogen atom exists on a scale where the strings can be approximated by point particles. I suspect the author is thinking that because strings are extended objects they have dipole moments; but it's not of a magnitude to be relevant at the atomic scale.

Replies from: wnoise
comment by wnoise · 2010-03-04T06:48:02.914Z · LW(p) · GW(p)

Of course he looks backwards. You can't analyze why any discovery didn't happen sooner, even though all the pieces were there, unless you look backwards. I thought the case study of SR was quite illuminating, though it goes directly counter to his attack on string theory. After getting the Lorentz transform, it took a surprisingly long time for anyone to treat the transformed quantities as equivalent -- that is, to take the math seriously. And for string theory, he says they take the math too seriously. Of course, the Lorentz transform was more clearly grounded in observed physical phenomena.

I completely agree that he doesn't understand contemporary developments, and that was some of what I referred to as "pinging my crank-detectors", along with the loose analogy between 4-d bending in "world tubes" and that in 3-d rods. I don't necessarily see that as a huge problem if he's not pretending to be able to offer us the next big revolution on a silver platter.

comment by Cyan · 2010-03-04T02:43:30.572Z · LW(p) · GW(p)

the 4/3 claim for electro-magnetic mass

Wikipedia points to the original text of a 1905 article by Poincaré. How's your French?

Replies from: wnoise
comment by wnoise · 2010-03-04T03:02:43.464Z · LW(p) · GW(p)

Thanks. It's decent, actually, but there's still some barrier. Adding to that barrier are changes to physics notation since then (no vectors!).

Fortunately my university library appears to have a copy of an older edition of Rohrlich's Classical Charged Particles, which may help piece things together.

Replies from: Cyan
comment by Cyan · 2010-03-04T03:26:46.430Z · LW(p) · GW(p)

Petkov wrote:

Feynman [wrote], ”It is therefore impossible to get all the mass to be electromagnetic in the way we hoped. It is not a legal theory if we have nothing but electrodynamics” [13, p. 28-4]; but he was unaware that the factor of 4/3 had already been accounted for [10]).

It's worth noting that Feynman's statements are actually correct. According to Wikipedia, the problem is solved by postulating a non-electromagnetic attractive force holding the charged particle together, which subtracts 1/3 of the 4/3 factor, leaving unity. Petkov doesn't explicitly say that Feynman is wrong, but his phrasing might leave that impression.

comment by arundelo · 2010-03-04T02:36:51.994Z · LW(p) · GW(p)

Neat find! I haven't read all of it yet, but I found this striking:

It was precisely the view, that successful abstractions should not be regarded as representing something real, that prevented Lorentz from discovering special relativity. He believed that the time t of an observer at rest with respect to the aether (which is a genuine example of reifying an unsuccessful abstraction) was the true time, whereas the quantity t' of another observer, moving with respect to the first, was merely an abstraction that did not represent anything real in the world. Lorentz himself admitted the failure of his approach:

The chief cause of my failure was my clinging to the idea that the variable t only can be considered as the true time and that my local time t' must be regarded as no more than an auxiliary mathematical quantity. In Einstein's theory, on the contrary, t' plays the same part as t; if we want to describe phenomena in terms of x', y', z', t' we must work with these variables exactly as we could do with x, y, z, t.

This reminds me of Mach's Principle: Anti-Epiphenomenal Physics:

When you see a seemingly contingent equality - two things that just happen to be equal, all the time, every time - it may be time to reformulate your physics so that there is one thing instead of two. The distinction you imagine is epiphenomenal; it has no experimental consequences. In the right physics, with the right elements of reality, you would no longer be able to imagine it.

comment by NancyLebovitz · 2010-03-01T15:57:15.917Z · LW(p) · GW(p)

I have a problem with the wording of "logical rudeness". Even after having seen it many times, I reflexively parse it to mean being rude by being logical-- almost the opposite of the actual meaning.

I don't know whether I'm the only person who has this problem, but I think it's worth checking.

"Anti-logical rudeness" strikes me as a good bit better.

Replies from: RobinZ, h-H
comment by RobinZ · 2010-03-01T19:57:29.741Z · LW(p) · GW(p)

It's not anti-logical, it's rude logic. The point of Suber's paper is that at no point does the logically rude debater reason incorrectly from their premises, and yet we consider what they have done to be a violation of a code of etiquette.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-03-01T21:08:29.275Z · LW(p) · GW(p)

When I was considering a better name for the problem, I couldn't find a word for the process of seeking truth, which is what's actually being derailed by logical rudeness.

Unless I've missed something, the problem with logical rudeness isn't that there's no logical flaw in it.

The fact that I've got 4 karma points suggests (but doesn't prove) that I'm not the only person who has a problem with the term "logical rudeness". I should have been clearer that "anti-logical rudeness" was just an attempt at an improvement, rather than a strong proposal for that particular change.

Replies from: RobinZ
comment by RobinZ · 2010-03-01T21:31:29.034Z · LW(p) · GW(p)

I think you're complaining about the problem of people not updating on their evidence by using anti-epistemological techniques such as logical rudeness.

I still don't see the need for changing the name, but I'll defer to the opinion of the crowd if need be.

comment by h-H · 2010-03-03T05:07:52.030Z · LW(p) · GW(p)

seconded, it's too benign for what it actually intends to convey.

comment by SilasBarta · 2010-03-07T14:45:29.073Z · LW(p) · GW(p)

Thermodynamics post on my blog. Not directly related to rationality, but you might find it interesting if you liked Engines of Cognition.

Summary: molar entropy is normally expressed as Joules per Kelvin per mole, but can also be expressed, more intuitively, as bits per molecule, which shows the relationship between a molecule's properties and how much information it contains. (Contains references to two books on the topic.)
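
For the curious, the conversion itself is a one-liner; a quick sketch (the molar entropy of liquid water is the standard tabulated value, everything else is my own illustration):

    import math

    R = 8.314  # gas constant in J/(K*mol), i.e. Avogadro's number times k_B

    def bits_per_molecule(molar_entropy):
        # Convert molar entropy in J/(K*mol) to Shannon bits per molecule:
        # divide by N_A to get per-molecule entropy, then by k_B * ln(2);
        # together that denominator is R * ln(2).
        return molar_entropy / (R * math.log(2))

    print(bits_per_molecule(69.95))  # liquid water at 298 K: ~12.1 bits per molecule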

comment by wnoise · 2010-03-02T20:15:39.783Z · LW(p) · GW(p)

I'm considering doing a post about "the lighthouse problem" from Data Analysis: a Bayesian Tutorial, by D. S. Sivia. This is example 3 in chapter 2, pp. 31-36. It boils down to finding the center and width of a Cauchy distribution (physicists may call it Lorentzian), given a set of samples.

I can present a reasonable Bayesian handling of it -- this is nearly mechanical, but I'd really like to see a competent Frequentist attack on it first, to get a good comparison going, untainted by seeing the Bayesian approach. Does anyone have suggestions for ways to structure the post?
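
For readers without the book, here is a bare-bones sketch of the Bayesian side (the toy flash data and grid limits are my own, not Sivia's): the lighthouse sits at position a along the shore and distance b out to sea, and each recorded flash position x on the shore is Cauchy-distributed around a with width b.

    import math

    # Hypothetical flash positions along the shore, in arbitrary units.
    flashes = [1.2, -0.3, 4.7, 0.9, 1.5, 0.2, 2.1]

    def log_posterior(a, b):
        # Flat prior over the grid; Cauchy (Lorentzian) likelihood per flash.
        return sum(math.log(b / (math.pi * (b * b + (x - a) ** 2))) for x in flashes)

    # Brute-force grid search for the posterior mode.
    grid = [(a / 10.0, b / 10.0) for a in range(-50, 51) for b in range(1, 51)]
    a_best, b_best = max(grid, key=lambda ab: log_posterior(ab[0], ab[1]))
    print(a_best, b_best)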

Replies from: hugh
comment by hugh · 2010-03-02T22:56:34.521Z · LW(p) · GW(p)

I don't have the book you're referring to. Are you essentially going to walk through a solution for this [pdf], or at least to talk about point #10?

This is a Bayesian problem; the Frequentist answer is the same, just more convoluted because they have to say things like "in 95% of similar situations, the estimates of a and b are within d of the real position of the lighthouse". Alternately, a Frequentist, while always ignorant when starting a problem, never begins wrong. In this case, if the chosen prior was very unsuitable, the Frequentist more quickly converges to a correct answer.

Replies from: wnoise
comment by wnoise · 2010-03-03T01:57:03.114Z · LW(p) · GW(p)

Yes, that was the plan.

This is a Bayesian problem;

I thought Frequentists would not be willing to cede such, but insist that any problem has a perfectly good Frequentist solution.

the Frequentist answer is the same,

I want to see not just the Frequentist solution, but the derivation of the solution.

comment by XiXiDu · 2010-03-01T18:52:02.876Z · LW(p) · GW(p)

What programming language should I learn?

As part of my long journey towards a decent education, I assume it is mandatory to learn computer programming.

  • I'm not completely illiterate. I know the 'basics' of programming. Nevertheless, I want to start from the very beginning.
  • I have no particular goal in mind that demands a practical orientation. My aim is to acquire general knowledge of computer programming to be used as starting point that I can build upon.

I'm thinking about starting with Processing and Lua. What do you think?

Replies from: AngryParsley, ciphergoth, gimpf, Morendil, hugh, nhamann, Douglas_Knight, wnoise, hugh, Emile, CronoDAS, Morendil, mkehrt, wedrifid
comment by AngryParsley · 2010-03-02T11:28:16.058Z · LW(p) · GW(p)

In an amazing coincidence, many of the suggestions you get will be the suggester's current favorite language. Many of these recommendations will be esoteric or unpopular languages. These people will say you should learn language X first because of the various features of language X. They'll forget that they did not learn language X first, and that while language X is powerful, it might not be easy to set up a development environment for it. Tutorials might be lacking. Newbie support might be lacking. Etc.

Others have said this but you can't hear it enough: It is not mandatory to learn computer programming. If you force yourself, you probably won't enjoy it.

So, what language should you learn first? Well the answer is... (drumroll) it depends! Mostly, it depends on what you are trying to do. (Side note: You can get a lot of help on mailing lists or IRC if you say, "I'm trying to do X." instead of, "I'm having a problem getting feature blah blah blah to work.")

I have no particular goal in mind that demands a practical orientation. My aim is to acquire general knowledge of computer programming to be used as starting point that I can build upon.

I paused after reading this. The main way people learn to program is by writing programs and getting feedback from peers/mentors. If you're not coding something you find interesting, it's hard to stay motivated for long enough to learn the language.

My advice is to learn a language that a lot of people learn as a first language. You'll be able to take advantage of tutorials and support geared toward newbies. You can always learn "cooler" languages later, but if you start with something advanced you might give up in frustration. Common first languages in CS programs are Java and C++, but Python is catching on pretty quickly. It also helps if your first language is used by people you already know. That way they'll be able to mentor/advise you.

Finally, I should give some of my background. I've been writing code for a while. I write code for work and leisure. My first language was QBasic. I moved on to C, C++, TI-BASIC, Perl, PHP, Java, C#, Ruby, and some others. I've played with but don't really know Lisp, Lua, and Haskell. My favorite language right now is Python, but I'm probably still in the honeymoon phase since I've been using it for less than a year.

Argh, see what I said at the start? I recommended Python and my favorite language is currently Python!

Replies from: XiXiDu
comment by XiXiDu · 2010-03-02T13:36:10.857Z · LW(p) · GW(p)

Motivation is not my problem these days. It was throughout my youth, and was partly the reason that I completely failed at school. Now an almost primal fear of staying dumb, and a nagging curiosity to gather knowledge, learn and understand, trump any lack of motivation or boredom. Seeing how far above the average person you people here at lesswrong.com are makes me strive to approximate your wit.

In other words, knowing the basics of a programming language like Haskell is already motivation enough, when the average Joe is hardly self-aware but a mere puppet. I don't want to be one of them anymore.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-03-07T00:46:40.659Z · LW(p) · GW(p)

If motivation is no longer a problem for you, that could be something really interesting for the akrasia discussions. What changed so that motivation is no longer a problem?

Replies from: XiXiDu
comment by XiXiDu · 2010-03-07T12:11:20.934Z · LW(p) · GW(p)

Being an eyewitness to your own motives and your own growing-up is a tough exercise; it's hard to conclude anything accurately.

I believe that it would be of no help in the mentioned discussions. It is rather inherent, something neurological.

I grew up in a very religious environment. Any significance, my goals, were mainly set to focus on being a good Christian. Although I assume it never reached my 'inner self', I consciously tried to motivate myself to reach this particular goal due to fear of dying. But on a rather unconscious level it never worked, this goal has always been ineffectual.

At the age of 13, my decision to become vegetarian changed everything. With all my heart I came to the conclusion that something is wrong about all the pain and suffering. A sense for human suffering was still effectively dimmed, due to a whole life of indoctrination telling me that our pain is our own fault. But what about the animals? Why would an all-loving God design the universe this way? To cut a long story short, still believing, it made me abandon this God. With the onset of the Internet here in Germany I then learnt that there was nothing to abandon in the first place...I guess I won't have to go into details here.

Anyway, that was just one of the things that changed. I'm really bad when it comes to social things. Thus I suffered a lot in school; it wasn't easy. Those problems with other kids, a lack of concentration, and the fact that I always found the given explanations counterintuitive and hard to follow dimmed any motivation to learn more. All these problems caused me to associate education with torture; I wanted it to end. Though curiosity was always a part of my character. I've probably been the only kid who liked to watch documentaries and the news at an early age.

Then there is the mental side I mentioned at the beginning. These are probably the most important reasons for all that happened and happens in my life. I have quite a few tics and psychological problems. When I was a kid I suffered from Tourette syndrome, which didn't help in school either. But many other urges are still prevalent. I pretty much have to consciously think about a lot that other people just do and decide upon unconsciously. Like sleeping: I pretty much have to tell myself each time why there are more reasons to sleep now than in favor of further evaluation. Or how, when and about what do I start to think; when do I stop and decide? How do I set the threshold? For me it is inherently very low; the slightest stimulus triggers a high tide of possibilities. Like when you look up some article on Wikipedia, you can click through forever. There is much more...I hope you see what I mean by mental problems.

I could refine the above or go on for long. I will just stop now. You see, my motivation is complex and pretty much based on my mental problems and curiosity. I love playing games, but I cannot push myself to play more than a few minutes. Then there's this fear and urge to think of what else is there, what I could be missing and what could happen if I just enjoy playing this game. I have to do it...I'm not strong enough not to care. Take this reply as an example, I really had to push myself to answer but also had an urge to write it. It's a pain. Though now the fear of how much time it takes up and what else I could do grew stronger.

Bottom line is that my motivation is a mixture of curiosity, inclination, mental problems, my youth, relief, not staying dumb, fear of being wrong again about the nature of reality and so on. Really, the only problem I have with learning programming right now is that there are so many other problems in my head, not my 'motivation'. I often don't find the time to read more than one page in a book per day.

I'm sorry if this post sounds a bit confused, not having the best day today. Also just ask if you have further questions. I should probably think about it a bit more thoroughly anyway. But now you have some idea. I hope...

P.S. Another milestone that changed everything was discovering Orion's Arm. It was so awesome, I just had to learn more. That basically led me to get into science, transhumanism and later OB/LW.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-03-07T14:30:47.075Z · LW(p) · GW(p)

Thank you very much for writing this up. It wouldn't surprise me a bit if akrasia has a neurological basis, and I'm a little surprised that I haven't seen any posts really looking at it from that angle. Dopamine?

And on the other hand, your story is also about ideas and circumstances that undercut motivation.

Replies from: XiXiDu
comment by XiXiDu · 2010-03-07T15:47:54.681Z · LW(p) · GW(p)

Those who restrain desire, do so because theirs is weak enough to be restrained. -- William Blake

I haven't read up on the akrasia discussions. I don't believe in intelligence. I believe in efficiency regarding goals stated in advance. It's all about what we want and how to achieve it. And what we want is merely 'the line of least resistance'.

Whatever intelligence is, it can't be intelligent all the way down. It's just dumb stuff at the bottom. -- Andy Clark

The universe really just exists. And it appears to us that it is unfolding because we are part of it. We appear to each other to be free and intelligent because we believe that we are not part of it.

There is a lot of talk here on LW on how to become less wrong. That works. Though it is not a proactive approach but simply trial and error allowed for by the mostly large error tolerance of our existence.

It's all about practicability, what works. If prayer worked, we'd use it if we wanted to use it.

Narns, Humans, Centauri… we all do what we do for the same reason: because it seems like a good idea at the time. -- G’Kar, Babylon 5

Anything you learn on lesswrong.com you'll have to apply by relying on fundamental non-intelligent processes. You can only hope to be lucky enough to learn in time to avoid fatal failure, since no possible system can use advanced heuristics to tackle, or even evaluate, every stimulus. For example, at what point are you going to use Bayesian statistics? You won't even be able to evaluate the importance of all data so as to judge when to apply more rigorous tools. You can only be a passive observer who's waiting for new data from experience. And until new data arrives, rely on prior knowledge.

A man can do what he wants, but not want what he wants. -- Arthur Schopenhauer

Thus I don't think that weakness of will exists. I also don't think that you can do anything but your best. What is the right thing to do always depends on what you want. You never do something that you do not want. Only in retrospect, or on average, might we want something else. On that basis we then conclude that what we did was wrong and that we knew better. But what was really different at that time was what we wanted, and that changes the truth value of what we, contemplating at present, in retrospect take to have been the best thing to do.

So what is it that can help us deal with akrasia? Nothing. In the future we might be able to strengthen our goals, so that what we want at the time of amplifying our goals is what we're going to want forever. Or at least until something even stronger shifts our desires again.

If we could deliberately seize control of our pleasure systems, we could reproduce the pleasure of success. That would be the end of everything. -- Marvin Minsky

I'm happy with how it is right now. I'm very happy that there is what we call akrasia. If there wasn't, I'd still be religious.

comment by Paul Crowley (ciphergoth) · 2010-03-02T13:41:04.862Z · LW(p) · GW(p)

I think the path outlined in ESR's How to Become a Hacker is pretty good. Python is in my opinion far and away the best choice as a first language, but Haskell as a second or subsequent language isn't a bad idea at all. Perl is no longer important; you probably need never learn it.

comment by gimpf · 2010-03-01T21:51:40.919Z · LW(p) · GW(p)

First, I do not think that learning to program computers must be part of a decent education. Many people learn to solve simple integrals in high-school, but the effect, beyond simple brain-training, is nil.

For programming it's the same. Learning to program well takes years. I mean years of full-time studying/programming etc.

However, if you really want to learn programming, the first question is not the language, but what you wanna do. You learn one language until you have built up some self-confidence, then learn another. The "what" typically breaks down very early. Sorry, I cannot give you any hints on this.

And, as a first exercise, you should post this question (or search for answers to it, as it has been asked too many times already) on the correct forums for programming questions. Finding those forums is the first step in learning to program. You'll never be able to keep all the required facts for programming in your head.

I've never heard of Processing, but I like Lua (more than Python), and Lisp. However, even Java is just fine. Don't get into the habit of thinking that mediocre languages inhibit your progress. At the beginning, nearly all languages are more advanced than you.

Replies from: XiXiDu
comment by XiXiDu · 2010-03-02T11:54:32.260Z · LW(p) · GW(p)

What I want is to be able to understand, to attain a more intuitive comprehension of, concepts associated with other fields that I'm interested in, which I assume are important. As a simple example, take this comment by RobinZ. Not that I don't understand that simple statement. As I said, I already know the 'basics' of programming; I thoroughly understand it. Just so you get an idea.

In addition to reading up on all the lesswrong.com sequences, I'm mainly into mathematics and physics right now. That's where I have the biggest deficits. I see my planned 'study' of programming more as practice in logical thinking and as an underlying matrix for grasping fields like computer science and concepts such as that of a 'Turing machine'.

And I do not agree that the effect is nil. I believe that programming is one of the foundations necessary to understand. I believe that there are 4 cornerstones underlying human comprehension. From there you can go everywhere: Mathematics, Physics, Linguistics and Programming (formal languages, calculation/data processing/computation, symbolic manipulation). The art of computer programming is closely related to the basics of all that is important, information.

Replies from: gimpf
comment by gimpf · 2010-03-02T16:47:47.567Z · LW(p) · GW(p)

Well, now that I understand your intentions a little bit better (and having read through the other comments), I seriously want to second the recommendation of Scheme.

Use DrScheme as your environment (zero hassle), and go through SICP and HTDP. Algorithms are nice, Knuth's series and so on, but that may be more than you are asking for. Project Euler is a website where you can find inspiration for problems you may want to solve. Scheme as a language has the advantage that you will not need time to wrap your head around ugly syntax (most languages, except for Lua, maybe Python), memory management (C), or mathematical purity (Haskell, Prolog). AFAIK it also distinguishes between exact numbers (rationals, limited only by RAM) and inexact numbers (floating point) -- a regular source of confusion for people trying to write numeric code for the first time. The trade-offs are quite different for professional programmers, though.
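Python, incidentally, offers a similar exact/inexact distinction through its fractions module; a quick illustration:

    from fractions import Fraction

    print(Fraction(1, 10) + Fraction(2, 10))  # exactly 3/10
    print(0.1 + 0.2)                          # 0.30000000000000004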

edit: welcome to the web, using links!

comment by Morendil · 2010-03-02T21:05:16.490Z · LW(p) · GW(p)

Consider finding a Coding Dojo near your location.

There is a subtle but deep distinction between learning a programming language and learning how to program. The latter is more important and abstracts away from any particular language or any particular programming paradigm.

To get a feeling for the difference, look at this animation of Paul Graham writing an article - crossing the chasm between ideas in his head and ideas expressed in words. (Compared to personal experience this "demo" simplifies the process of writing an article considerably, but it illustrates neatly what books can't teach about writing.)

What I mean by "learning how to program" is the analogue of that animation in the context of writing code. It isn't the same as learning to design algorithms or data structures. It is what you'll learn about getting from algorithms or data structures in your head to algorithms expressed in code.

Coding Dojos are an opportunity to pick up these largely untaught skills from experienced programmers.

comment by hugh · 2010-03-02T20:18:35.121Z · LW(p) · GW(p)

I agree with everything Emile and AngryParsley said. I program for work and for play, and use Python when I can get away with it. You may be shocked that, like AngryParsley, I will recommend my favorite language!

I have an additional recommendation though: to learn to program, you need to have questions to answer. My favorite source for fun programming problems is ProjectEuler. It's very math-heavy, and it sounds like you might like learning the math as much as learning the programming. Additionally, every problem, once solved, has a forum thread opened where many people post their solutions in many languages. Seeing better solutions to a problem you just solved on your own is a great way to rapidly advance.
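To give a flavor, the very first problem asks for the sum of all the multiples of 3 or 5 below 1000, which a few lines of (say) Python dispatch directly:

    # Project Euler, problem 1: sum of the multiples of 3 or 5 below 1000.
    print(sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0))  # 233168

The forum thread then shows you the same answer computed a dozen cleverer ways.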

comment by nhamann · 2010-03-02T02:21:41.909Z · LW(p) · GW(p)

As mentioned in another comment, the best introduction to programming is probably SICP. I recommend going this route, as trying to learn programming from language-specific tutorials will almost certainly not give you an adequate understanding of fundamental programming concepts.

After that, you will probably want to start dabbling in a variety of programming styles. You could perhaps learn some C for imperative programming, Java for object-oriented, Python for a high-level hybrid approach, and Haskell for functional programming as starters. If you desire more programming knowledge you can branch out from there, but this seems to be a good start.

Just keep in mind that when starting out learning programming, it's probably more important to dabble in as many different languages as you can. Doing this successfully will enable you to quickly learn any language you may need to know. I admit I may be biased in this assessment, though, as I tend to get bored focusing on any one topic for long periods of time.

comment by Douglas_Knight · 2010-03-01T23:49:12.672Z · LW(p) · GW(p)

Processing and Lua seem pretty exotic to me. How did you hear of them? If you know people who use a particular language, that's a pretty good reason to choose it.

Even if you don't have a goal in mind, I would recommend choosing a language with applications in mind to keep you motivated. For example, if (but only if) you play wow, I would recommend Lua; or if the graphical applications of Processing appeal to you, then I'd recommend it. If you play with web pages, javascript...

At least that's my advice for one style of learning, a style suggested by your mention of those two languages, but almost the opposite of your "Nevertheless, I want to start from the very beginning," which suggests something like SICP. There are probably similar courses built around OCaml. The proliferation of monad tutorials suggests that the courses built around Haskell don't work. That's not to disagree with wnoise about the value of Haskell, either practical or educational, but I'm skeptical about it as an introduction.

ETA: SICP is a textbook using Scheme (Lisp). Lisp or OCaml seems like a good stepping-stone to Haskell. Monads are like burritos.

Replies from: SoullessAutomaton, XiXiDu
comment by SoullessAutomaton · 2010-03-02T01:37:58.278Z · LW(p) · GW(p)

Eh, monads are an extremely simple concept with a scary-sounding name, and not the only example of such in Haskell.

The problem is that Haskell encourages a degree of abstraction that would be absurd in most other languages, and tends to borrow mathematical terminology for those abstractions, instead of inventing arbitrary new jargon the way most other languages would.

So you end up with newcomers to Haskell trying to simultaneously:

  • Adjust to a degree of abstraction normally reserved for mathematicians and philosophers
  • Unlearn existing habits from other languages
  • Learn about intimidating math-y-sounding things

And the final blow is that the type of programming problem that the monad abstraction so elegantly captures is almost precisely the set of problems that look simple in most other languages.

But some people stick with it anyway, until eventually something clicks and they realize just how simple the whole monad thing is. Having at that point, in the throes of comprehension, already forgotten what it was to be confused, they promptly go write yet another "monad tutorial" filled with half-baked metaphors and misleading analogies to concrete concepts, perpetuating the idea that monads are some incredibly arcane, challenging concept.

The whole circus makes for an excellent demonstration of the sort of thing Eliezer complains about in regards to explaining things being hard.
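At the risk of adding to the pile: here is about the smallest honest sketch of the pattern I can manage, in Python, using the familiar "computation that may fail" case (all names made up):

    # 'bind' chains steps that may fail, short-circuiting on None;
    # that one plumbing rule is the essence of the Maybe monad.
    def bind(value, f):
        return None if value is None else f(value)

    def safe_div(x, y):
        return None if y == 0 else x / y

    print(bind(bind(10, lambda v: safe_div(v, 2)), lambda v: safe_div(v, 5)))  # 1.0
    print(bind(bind(10, lambda v: safe_div(v, 0)), lambda v: safe_div(v, 5)))  # None

Which, as noted above, looks utterly unremarkable here -- that's rather the point.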

comment by XiXiDu · 2010-03-02T12:32:01.345Z · LW(p) · GW(p)

I learnt about Lua through Metaplace, which is now dead. I heard about Processing via Anders Sandberg.

I'm always fascinated by data visualisation. I thought Processing might come in handy.

Thanks for mentioning SICP. I'll check it out.

Replies from: gwern
comment by gwern · 2010-03-04T02:01:15.308Z · LW(p) · GW(p)

I'm going through SICP now. I'm not getting as much out of it as I expected, because much of it I already know, or find uninteresting since, coming from Haskell, I already expect lazy evaluation, or find just plain tedious (I got sick pretty quickly of the authors' hard-on for number theory).

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-05T04:35:42.427Z · LW(p) · GW(p)

SICP is nice if you've never seen a lambda abstraction before; its value decreases monotonically with increasing exposure to functional programming. You can probably safely skim the majority of it, at most do a handful of the exercises that don't immediately make you yawn just by looking at them.

Scheme isn't much more than an impure, strict untyped λ-calculus; it seems embarrassingly simple (which is also its charm!) from the perspective of someone comfortable working in a pure, non-strict bastardization of some fragment of System F-ω or whatever it is that GHC is these days.

Haskell does tend to ruin one for other languages, though lately I've been getting slightly frustrated with some of Haskell's own limitations...

comment by wnoise · 2010-03-01T19:20:36.475Z · LW(p) · GW(p)

Personally, I'm a big fan of Haskell. It will make your brain hurt, but that's part of the point -- it's very good at easily creating and using mathematically sound abstractions. I'm not a big fan of Lua, though it's a perfectly reasonable choice for its niche of embeddable scripting language. I have no experience with Processing. The most commonly recommended starting language is python, and it's not a bad choice at all.

Replies from: gwern, XiXiDu
comment by gwern · 2010-03-04T01:57:18.557Z · LW(p) · GW(p)

Toss in another vote for Haskell. It was my first language (and back before Real World Haskell was written); I'm happy with that choice - there were difficult patches, but they came with better understanding.

comment by XiXiDu · 2010-03-01T19:24:58.014Z · LW(p) · GW(p)

Thanks, I didn't know about Haskell, sounds great. Open source and all. I think you already convinced me.

Replies from: sketerpot
comment by sketerpot · 2010-03-02T00:22:38.543Z · LW(p) · GW(p)

I wouldn't recommend Haskell as a first language. I'm a fan of Haskell, and the idea of learning Haskell first is certainly intriguing, but it's hard to learn, hard to wrap your head around sometimes, and the documentation is usually written for people who are at least computer science grad student level. I'm not saying it's necessarily a bad idea to start with Haskell, but I think you'd have a much easier time getting started with Python.

Python is open source, thoroughly pleasant, widely used and well-supported, and is a remarkably easy language to learn and use, without being a "training wheels" language. I would start with Python, then learn C and Lisp and Haskell. Learn those four, and you will definitely have achieved your goal of learning to program.

And above all, write code. This should go without saying, but you'd be amazed how many people think that learning to program consists mostly of learning a bunch of syntax.

Replies from: SoullessAutomaton, XiXiDu
comment by SoullessAutomaton · 2010-03-02T01:50:27.617Z · LW(p) · GW(p)

I have to disagree on Python; I think consistency and minimalism are the most important things in an "introductory" language, if the goal is to learn the field, rather than just getting as quickly as possible to solving well-understood tasks. Python is better than many, but has too many awkward bits that people who already know programming don't think about.

I'd lean toward either C (for learning the "pushing electrons around silicon" end of things) or Scheme (for learning the "abstract conceptual elegance" end of things). It helps that both have excellent learning materials available.

Haskell is a good choice for someone with a strong math background (and I mean serious abstract math, not simplistic glorified arithmetic like, say, calculus) or someone who already knows some "mainstream" programming and wants to stretch their brain.

Replies from: sketerpot, wedrifid, XiXiDu
comment by sketerpot · 2010-03-02T02:21:24.148Z · LW(p) · GW(p)

You make some good points, but I still disagree with you. For someone who's trying to learn to program, I believe that the primary goal should be getting quickly to the point where you can solve well-understood tasks. I've always thought that the quickest way to learn programming was to do programming, and until you've been doing it for a while, you won't understand it.

Replies from: SoullessAutomaton, XiXiDu
comment by SoullessAutomaton · 2010-03-02T02:49:07.523Z · LW(p) · GW(p)

Well, I admit that my thoughts are colored somewhat by an impression--acquired by having made a living from programming for some years--that there are plenty of people who have been doing it for quite a while without, in fact, having any understanding whatsoever. Observe also the abysmal state of affairs regarding the expected quality of software; I marvel that anyone has the audacity to use the phrase "software engineer" with a straight face! But I'll leave it at that, lest I start quoting Dijkstra.

Back on topic, I do agree that being able to start doing things quickly--both in terms of producing interesting results and getting rapid feedback--is important, but not the most important thing.

Replies from: XiXiDu
comment by XiXiDu · 2010-03-02T12:25:38.683Z · LW(p) · GW(p)

I want to achieve an understanding of the basics without necessarily being able to be a productive programmer. I want to get a grasp of the underlying nature of computer science, not merely the ability to mechanically write and parse code to solve certain problems. The big picture and underlying nature is what I'm looking for.

I agree that many people do not understand; they have really only learnt how to mechanically use something. How much does the average person know about how one of our simplest tools, the knife, works? What does it mean to cut something? What does the act of cutting accomplish? How does it work?

We all know how to use this particular tool. We think it is obvious, and thus we do not contemplate it any further. But most of us have no idea what actually physically happens. We are ignorant of the underlying mechanisms behind the things we think we understand. We are quick to conclude that there is nothing more to learn here. But there is deep knowledge to be found in what might superficially appear simple and obvious.

Replies from: wnoise, RobinZ, AdeleneDawner
comment by wnoise · 2010-03-02T17:29:12.463Z · LW(p) · GW(p)

I want to get a grasp of the underlying nature of computer science,

Then you do not, in fact, need to learn to program. You need an actual CS text, covering finite automata, pushdown machines, Turing machines, etc. Learning to program will illustrate and fix these concepts more closely, and is a good general skill to have.

Replies from: XiXiDu
comment by XiXiDu · 2010-03-02T18:11:21.709Z · LW(p) · GW(p)

Recommendations on the above? Books, essays...

Replies from: hugh, RobinZ
comment by hugh · 2010-03-02T20:29:20.822Z · LW(p) · GW(p)

Sipser's Introduction to the Theory of Computation is a tiny little book with a lot crammed in. It's also quite expensive, and advanced enough to make most CS students hate it. I have to recommend it because I adore it, but why start there, when you can start right now for free on wikipedia? If you like it, look at the references, and think about buying a used or international copy of one book or another.

I echo the reverent tones of RobinZ and wnoise when it comes to The Art of Computer Programming. Those volumes are more broadly applicable, even more expensive, and even more intense. They make an amazing gift for that computer scientist in your life, but I wouldn't recommend them as a starting point.

comment by RobinZ · 2010-03-02T19:50:33.852Z · LW(p) · GW(p)

Elsewhere wnoise said that SICP and Knuth were computer science, but additional suggestions would be nice.

Replies from: wnoise
comment by wnoise · 2010-03-02T20:33:39.025Z · LW(p) · GW(p)

Well, they're computer sciencey, but they are definitely geared to approaching from the programming, even "Von Neumann machine" side, rather than Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them.

For my undergraduate work, I used two books. The first is Jan L. A. van de Snepscheut's What Computing Is All About. It is, unfortunately, out-of-print.

The second was Elements of the Theory of Computation by Harry Lewis and Christos H. Papadimitriou.

Replies from: SoullessAutomaton, RobinZ
comment by SoullessAutomaton · 2010-03-03T00:41:37.455Z · LW(p) · GW(p)

Well, they're computer sciencey, but they are definitely geared to approaching from the programming, even "Von Neumann machine" side, rather than Turing machines and automata. Which is a useful, reasonable way to go, but is (in some sense) considered less fundamental. I would still recommend them.

Turing Machines? Heresy! The pure untyped λ-calculus is the One True Foundation of computing!

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-03-03T03:45:37.537Z · LW(p) · GW(p)

You probably should have spelled out that SICP is on the λ-calculus side.

Replies from: wedrifid
comment by wedrifid · 2010-03-03T03:48:07.230Z · LW(p) · GW(p)

Gah. Do I need to add this to my reading list?

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-03-03T04:25:36.140Z · LW(p) · GW(p)

You seem to already know Lisp, so probably not. Read the table of contents. If you haven't written an interpreter, then yes.

The point in this context is that when people teach computability theory from the point of view of Turing machines, they wave their hands and say "of course you can emulate a Turing machine as data on the tape of a universal Turing machine," and there's no point to fill in the details. But it's easy to fill in all the details in λ-calculus, even a dialect like Scheme. And once you fill in the details in Scheme, you (a) prove the theorem and (b) get a useful program, which you can then modify to get interpreters for other languages, say, ML.

SICP is a programming book, not a theoretical book, but there's a lot of overlap when it comes to interpreters. And you probably learn both better this way.

I almost put this history lesson in my previous comment:
Church invented λ-calculus and proposed the Church-Turing thesis that it is the model of all that we might want to call computation, but no one believed him. Then Turing invented Turing machines, showed them equivalent to λ-calculus and everyone then believed the thesis. I'm not entirely sure why the difference. Because they're more concrete? So λ-calculus may be less convincing than Turing machines, hence pedagogically worse. Maybe actually programming in Scheme makes it more concrete. And it's easy to implement Turing machines in Scheme, so that should convince you that your computer is at least as powerful as theoretical computation ;-)
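A simulator really is only a few lines -- here in Python rather than Scheme, with a made-up binary-increment machine as the example program:

    def run(tape, state, pos, rules):
        # rules maps (state, symbol) -> (new_state, new_symbol, head_move)
        while state != 'halt':
            sym = tape.get(pos, '_')           # '_' is the blank symbol
            state, tape[pos], move = rules[(state, sym)]
            pos += {'L': -1, 'R': 1}[move]
        return tape

    # Increment a binary number: walk right to the end, then carry leftward.
    rules = {
        ('go_end', '0'): ('go_end', '0', 'R'),
        ('go_end', '1'): ('go_end', '1', 'R'),
        ('go_end', '_'): ('carry', '_', 'L'),
        ('carry', '1'): ('carry', '0', 'L'),
        ('carry', '0'): ('halt', '1', 'L'),
        ('carry', '_'): ('halt', '1', 'L'),
    }

    tape = dict(enumerate('1011'))  # 11 in binary
    result = run(tape, 'go_end', 0, rules)
    print(''.join(result[i] for i in sorted(result) if result[i] != '_'))  # 1100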

Replies from: Eliezer_Yudkowsky, hugh, wedrifid, wedrifid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-03T04:44:09.111Z · LW(p) · GW(p)

Um... I think it's a worthwhile point, at this juncture, to observe that Turing machines are humanly comprehensible and lambda calculus is not.

EDIT: It's interesting how many replies seem to understand lambda calculus better than they understand ordinary mortals. Take anyone who's not a mathematician or a computer programmer. Try to explain Turing machines, using examples and diagrams. Then try to explain lambda calculus, using examples and diagrams. You will very rapidly discover what I mean.

Replies from: SoullessAutomaton, Morendil, wnoise, rwallace, Douglas_Knight
comment by SoullessAutomaton · 2010-03-03T05:35:37.361Z · LW(p) · GW(p)

Are you mad? The lambda calculus is incredibly simple, and it would take maybe a few days to implement a very minimal Lisp dialect on top of raw (pure, non-strict, untyped) lambda calculus, and maybe another week or so to get a language distinctly more usable than, say, Java.

Turing Machines are a nice model for discussing the theory of computation, but completely and ridiculously non-viable as an actual method of programming; it'd be like programming in Brainfuck. It was von Neumann's insights leading to the stored-program architecture that made computing remotely sensible.

There are plenty of ridiculously opaque models of computation (Post's tag machine, Conway's Life, exponential Diophantine equations...) but I can't begin to imagine one that would be more comprehensible than untyped lambda calculus.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2010-03-03T12:23:31.226Z · LW(p) · GW(p)

I'm pretty sure that Eliezer meant that Turing machines are better for giving novices a "model of computation". That is, they will gain a better intuitive sense of what computers can and can't do. Your students might not be able to implement much, but their intuitions about what can be done will be better after just a brief explanation. So, if your goal is to make them less crazy regarding the possibilities and limitations of computers, Turing machines will give you more bang for your buck.

comment by Morendil · 2010-03-03T08:33:57.915Z · LW(p) · GW(p)

A friend of mine has invented a "Game of Lambda" played with physical tokens which look like a bigger version of the hexes from wargames of old, with rules for function definition, variable binding and evaluation. He has a series of exercises requiring players to create functions of increasing complexity; plus one, factorial, and so on. Seems to work well.

Alligator Eggs is another variation on the same theme.

comment by wnoise · 2010-03-03T05:51:18.902Z · LW(p) · GW(p)

You realize you've just called every computer scientist inhuman?

Turing machines are something one can easily imagine implementing in hardware. The typical encoding of familiar concepts into lambda calculus takes a bit of getting used to (natural numbers as functions which compose their argument, as a function, n times? If-then-else as function composition, where "true" is a function returning its first argument, and "false" is a function returning its second? These are decidedly odd). But lambda calculus is composable. You can take two definitions and merge them together nicely. Combining useful features from two Turing machines is considerably harder. The best route to usable programming there is the UTM + stored code, which you have to figure out how to encode sanely.
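The encodings in question, written out as Python lambdas for anyone who wants to poke at them:

    zero = lambda f: lambda z: z                        # apply f zero times
    succ = lambda n: lambda f: lambda z: f(n(f)(z))     # one more application
    true = lambda a: lambda b: a                        # select first argument
    false = lambda a: lambda b: b                       # select second argument
    if_then_else = lambda p: lambda a: lambda b: p(a)(b)

    to_int = lambda n: n(lambda x: x + 1)(0)            # decode for inspection
    two = succ(succ(zero))
    print(to_int(two))                                  # 2
    print(if_then_else(true)('yes')('no'))              # yes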

Replies from: wedrifid, wedrifid
comment by wedrifid · 2010-03-03T06:23:47.791Z · LW(p) · GW(p)

You realize you've just called every computer scientist inhuman?

Just accept the compliment. ;)

comment by wedrifid · 2010-03-03T06:01:31.587Z · LW(p) · GW(p)

If-then-else as function composition, where "true" is a function returning its first argument, and "false" is a function returning its second? These are decidedly odd)

Of course, not so odd for anyone who uses Excel...

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-03T06:09:48.091Z · LW(p) · GW(p)

Booleans are easy; try to figure out how to implement subtraction on Church-encoded natural numbers. (i.e., 0 = λf.λz.z, 1 = λf.λz.(f z), 2 = λf.λz.(f (f z)), etc.)

And no looking it up, that's cheating! Took me the better part of a day to figure it out, it's a real mind-twister.
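(Spoiler below for anyone who gives up -- skip this if you'd rather puzzle it out. The key is the predecessor function; one standard trick, sketched in Python lambdas consistent with the encoding above:)

    zero = lambda f: lambda z: z
    succ = lambda n: lambda f: lambda z: f(n(f)(z))
    to_int = lambda n: n(lambda x: x + 1)(0)

    # pred discards exactly one application of f by threading a delayed start
    pred = lambda n: (lambda f: lambda z:
                      n(lambda g: lambda h: h(g(f)))(lambda u: z)(lambda u: u))
    sub = lambda m: lambda n: n(pred)(m)  # subtract by iterating pred

    three = succ(succ(succ(zero)))
    two = succ(succ(zero))
    print(to_int(sub(three)(two)))  # 1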

comment by rwallace · 2010-03-03T04:51:25.080Z · LW(p) · GW(p)

It's much of a muchness; in pure form, both are incomprehensible for nontrivial programs. Practical programming languages have aspects of both.

comment by Douglas_Knight · 2010-03-03T05:26:49.543Z · LW(p) · GW(p)

Maybe pure lambda calculus is not humanly comprehensible, but general recursion is as comprehensible as Turing machines, yet Gödel rejected it. My history should have started when Church promoted that.

comment by hugh · 2010-03-03T05:12:21.317Z · LW(p) · GW(p)

I think that λ-calculus is about as difficult to work with as Turing machines. I think the reason that Turing gets his name in the Church-Turing thesis is that they had two completely different architectures that had the same computational power. When Church proposed that λ-calculus was universal, I think there was a reaction of doubt, and a general feeling that a better way could be found. When Turing came to the same conclusion from a completely different angle, that appeared to verify Church's claim.

I can't back up these claims as well as I'd like. I'm not sure that anyone can backtrace what occurred to see if the community actually felt that way or not; however, from reading papers of the time (and quite a bit thereafter---there was a long period before near-universal acceptance), that is my impression.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-03-03T05:17:58.424Z · LW(p) · GW(p)

Actually, the history is straight-forward, if you accept Gödel as the final arbiter of mathematical taste. Which his contemporaries did.

ETA: well, it's straight-forward if you both accept Gödel as the arbiter and believe his claims made after the fact. He claimed that Turing's paper convinced him, but he also promoted it as the correct foundation. A lot of the history was probably not recorded, since all these people were together in Princeton.

EDIT2: so maybe that is what you said originally.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-03T05:55:35.861Z · LW(p) · GW(p)

It's also worth noting that Curry's combinatory logic predated Church's λ-calculus by about a decade, and also constitutes a model of universal computation.

It's really all the same thing in the end anyhow; general recursion (e.g., Curry's Y combinator) is on some level equivalent to Gödel's incompleteness and all the other obnoxious Hofstadter-esque self-referential nonsense.
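The combinator itself fits on one line -- here in Python, in its eta-expanded form (usually called Z) since Python is a strict language:

    # Z combinator: general recursion without any named recursive function.
    Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

    fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
    print(fact(5))  # 120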

comment by wedrifid · 2010-03-03T04:38:24.876Z · LW(p) · GW(p)

You seem to already know Lisp, so probably not.

I know the principles but have never taken the time to program something significant in the language. Partly because it just doesn't have the libraries available to enable me to do anything I particularly need to do and partly because the syntax is awkward for me. If only the name 'lisp' wasn't so apt as a metaphor for readability.

comment by wedrifid · 2010-03-03T04:34:32.819Z · LW(p) · GW(p)

Are you telling me lambda calculus was invented before Turing machines and people still thought the Turing machine concept was worth making ubiquitous?

Replies from: AngryParsley, orthonormal
comment by AngryParsley · 2010-03-03T04:51:46.945Z · LW(p) · GW(p)

Wikipedia says lambda calculus was published in 1936 and the Turing machine was published in 1937.

I'm betting it was hard for the first computer programmers to implement recursion and call stacks on early hardware. The Turing machine model isn't as mathematically pure as lambda calculus, but it's a lot closer to how real computers work.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-03-03T05:14:10.215Z · LW(p) · GW(p)

I think the link you want is to the history of the Church-Turing thesis.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-03T06:05:26.608Z · LW(p) · GW(p)

The history in the paper linked from this blog post may also be enlightening!

comment by orthonormal · 2010-03-03T05:10:14.658Z · LW(p) · GW(p)

Why not? People have a much easier time visualizing a physical machine working on a tape than visualizing something as abstract as lambda-calculus. Also, the Turing machine concept neatly demolishes the "well, that's great in theory, but it could never be implemented in practice" objections that are so hard to push people past.

Replies from: wedrifid
comment by wedrifid · 2010-03-03T05:18:29.274Z · LW(p) · GW(p)

Because I am biased toward my own preferred style of thought. I find visualising the lambda-calculus simpler: Turing Machines rely on storing stupid amounts of information in memory just to, you know, eventually do anything. It just doesn't feel natural to use a kludgy, technically-complete machine as the very description of what we consider computationally complete.

Replies from: orthonormal
comment by orthonormal · 2010-03-03T05:50:35.268Z · LW(p) · GW(p)

Oh, I agree. I thought we were talking about why one concept became better-known than the other, given that this happened before there were actual programmers.

comment by RobinZ · 2010-03-02T20:47:31.316Z · LW(p) · GW(p)

Any opinion on the 2nd edition of Elements?

Replies from: wnoise
comment by wnoise · 2010-03-02T20:52:10.184Z · LW(p) · GW(p)

Nope. I used the first edition. I wouldn't call it a "classic", but it was readable and covered the basics.

comment by RobinZ · 2010-03-02T13:16:18.542Z · LW(p) · GW(p)

I, unfortunately, am merely an engineer with a little BASIC and MATLAB experience, but if it is computer science you are interested in, rather than coding, count this as another vote for SICP. Kernighan and Ritchie is also spoken of in reverent tones (edit: but as a manual for C, not an introductory book - see below), as is The Art of Computer Programming by Knuth.

I have physically seen these books, but not studied any of them - I'm just communicating a secondhand impression of the conventional wisdom. Weight accordingly.

Replies from: wnoise, XiXiDu
comment by wnoise · 2010-03-02T16:58:09.982Z · LW(p) · GW(p)

Kernighan and Ritchie is a fine book, with crystal clear writing. But I tend to think of it as "C for experienced programmers", not "learn programming through C".

TAoCP is "learn computer science", which I think is rather different than learning programming. Again, a fine book, but not quite on target initially.

I've only flipped through SICP, so I have little to say.

Replies from: RobinZ
comment by RobinZ · 2010-03-02T17:19:28.502Z · LW(p) · GW(p)

TAoCP and SICP are probably both computer science - I recommended those particularly as being computer science books, rather than elementary programming. I'll take your word on Kernighan and Ritchie, though - put that one off until you want to learn C, then.

comment by XiXiDu · 2010-03-02T14:12:07.193Z · LW(p) · GW(p)

Merely an engineer? I've failed to acquire a leaving certificate of the lowest kind of school we have here in Germany.

Thanks for the hint at Knuth, though I already came across his work yesterday. Kernighan and Ritchie are new to me. SICP is officially on my must-read list now.

Replies from: RobinZ
comment by RobinZ · 2010-03-02T14:49:28.645Z · LW(p) · GW(p)

A mechanical engineering degree is barely a qualification in the field of computer programming, and not at all in the field of computer science. What little knowledge I have I acquired primarily through having a very savvy father and secondarily through recreational computer programming in BASIC et al. The programming experience is less important than the education, I wager.

Replies from: XiXiDu
comment by XiXiDu · 2010-03-02T15:02:46.884Z · LW(p) · GW(p)

Yes, of course. Misinterpreted what you said.

Do you think that somebody in your field will, in the future, be able to get by without computer programming? While talking to neuroscientists I learnt that it is almost impossible to get what you want, in time, by explaining what you need to a programmer who has no degree in neuroscience while you yourself don't know anything about computer programming.

Replies from: RobinZ
comment by RobinZ · 2010-03-02T15:07:50.510Z · LW(p) · GW(p)

I'm not sure what you mean - as a mechanical engineer, 99+% percent of my work involves purely classical mechanics, no relativity or quantum physics, so the amount of programming most of us have to do is very little. Once a finite-element package exists, all you need is to learn how to use it.

Replies from: XiXiDu
comment by XiXiDu · 2010-03-02T15:18:53.183Z · LW(p) · GW(p)

I've just read the abstract on Wikipedia and I assumed that it might encompass what you do.

Mechanical engineers design and build engines and power plants...structures and vehicles of all sizes...

I thought computer modeling and simulations might be very important in the early stages, followed shortly by field tests with miniature models. Even there you might have to program the tools that give shape to the final parts. Though I guess if you work in a highly specialized area, that is not the case.

Replies from: RobinZ
comment by RobinZ · 2010-03-02T15:23:44.376Z · LW(p) · GW(p)

I couldn't build a computer, a web browser, a wireless router, an Internet, or a community blog from scratch, but I can still post a comment on LessWrong from my laptop. Mechanical engineers rarely need to program the tools, they just use ANSYS or SolidWorks or whatever.

Edit: Actually, the people who work in highly specialized areas are more likely to write their own tools - the general-interest areas have commercial software already for sale.

comment by AdeleneDawner · 2010-03-02T12:55:12.168Z · LW(p) · GW(p)

Bear in mind that I'm not terribly familiar with most modern programming languages, but it sounds to me like what you want to do is learn some form of Basic, where very little is handled for you by built-in abilities of the language. (There are languages that handle even less for you, but those really aren't for beginners.) I'd suggest also learning a bit of some more modern language as well, so that you can follow conversations about concepts that Basic doesn't cover.

Replies from: XiXiDu, wnoise
comment by XiXiDu · 2010-03-02T14:08:10.032Z · LW(p) · GW(p)

'Follow conversations', indeed. That's what I mean. Being able to grasp concepts that involve 'symbolic computation' and information processing by means of formal language. I don't aim at actively taking part in productive programming. I don't want to become a poet, I want to be able to appreciate poetry, perceive its beauty.

Take English as an example. I only seriously started to learn English a few years ago. Before that, I could merely chat while playing computer games LOL. Now I can read and understand essays by Eliezer Yudkowsky. Though I cannot write the like myself, English opened up this whole new world of lore for me.

comment by wnoise · 2010-03-02T17:01:12.747Z · LW(p) · GW(p)

"It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration." --Edsger W Dijkstra.

More modern versions aren't that bad, and it's not quite fair to tar them with the same brush, but I still wouldn't recommend learning any of them for their own sake. If there is a need (like modifying an existing codebase), then by all means do.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-03T00:52:21.295Z · LW(p) · GW(p)

Dijkstra's quote is amusing, but out of date. The only modern version anyone uses is VB.NET, which isn't actually a bad language at all. On the other hand, it also lacks much of the "easy to pick up and experiment with" aspect that the old BASICs had; in that regard, something like Ruby or Python makes more sense for a beginner.

comment by XiXiDu · 2010-03-02T12:16:59.174Z · LW(p) · GW(p)

Yeah, you won't be able to be very productive regarding bottom-up groundwork. But you'll be able to look into existing works and gain insights. Even if you forget a lot, something will stick and help you pursue a top-down approach. You'll be able to look into existing code, edit it, and regain lost knowledge or learn new things more quickly.

comment by wedrifid · 2010-03-02T12:44:18.823Z · LW(p) · GW(p)

Agree with where you place Python, Scheme and Haskell. But I don't recommend C. Don't waste time there until you already know how to program well.

Given a choice on what I would begin with if I had my time again I would go with Scheme, since it teaches the most general programming skills, which will carry over to whichever language you choose (and to your thinking in general.) Then I would probably move on to Ruby, so that I had, you know, a language that people actually use and create libraries for.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2010-03-03T00:57:24.561Z · LW(p) · GW(p)

C is good for learning about how the machine really works. Better would be assembly of some sort, but C has better tool support. Given more recent comments, though, I don't think that's really what XiXiDu is looking for.

Replies from: wedrifid
comment by wedrifid · 2010-03-03T01:24:53.568Z · LW(p) · GW(p)

Agree on where C is useful and got the same impression about the applicability to XiXiDu's (where on earth does that name come from?!?) goals.

I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming. I suppose it doesn't meet your 'minimalist' ideal but does have the advantage that mastering it will give you other abstract proficiencies that more restricted languages will not. Knowing how and when to use templates, multiple inheritance or the combination thereof is handy, even now that I've converted to primarily using a language that relies on duck-typing.

Replies from: SoullessAutomaton, wnoise
comment by SoullessAutomaton · 2010-03-03T02:14:28.336Z · LW(p) · GW(p)

I'm interested in where you would put C++ in this picture. It gives a thorough understanding of how the machine works, in particular when used for OO programming.

"Actually I made up the term "object-oriented", and I can tell you I did not have C++ in mind." -- Alan Kay

C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can't prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.

C++ is an ill-considered, ad hoc mixture of conflicting, half-implemented ideas that borrows more problems than advantages:

  • It requires low-level understanding while obscuring details with high-level abstractions and nontrivial implicit behavior.
  • Templates are a clunky, disappointing imitation of real metaprogramming.
  • Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.
  • It imposes a static typing system that combines needless verbosity and obstacles at compile-time with no actual run-time guarantees of safety.
  • Combining error handling via exceptions with manual memory management is frankly absurd.
  • The sheer size and complexity of the language means that few programmers know all of it; most settle on a subset they understand and write in their own little dialect of C++, mutually incomprehensible with other such dialects.

I could elaborate further, but it's too depressing to think about. For understanding the machine, stick with C. For learning OOP or metaprogramming, better to find a language that actually does it right. Smalltalk is kind of the canonical "real" OO language, but I'd probably point people toward Ruby as a starting point (as a bonus, it also has some fun metaprogramming facilities).

ETA: Well, that came out awkwardly verbose. Apologies.

Replies from: wedrifid
comment by wedrifid · 2010-03-03T03:09:08.909Z · LW(p) · GW(p)

C++ is the best example of what I would encourage beginners to avoid. In fact I would encourage veterans to avoid it as well; anyone who can't prepare an impromptu 20k-word essay on why using C++ is a bad idea should under no circumstances consider using the language.

I'm sure I could manage 1k before I considered the point settled and moved on to a language that isn't a decades old hack. That said, many of the languages (Java, .NET) that seek to work around the problems in C++ do so extremely poorly and inhibit understanding of the way the relevant abstractions could be useful. The addition of mechanisms for genericity to both of those of course eliminates much of that problem. I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too. If you really must learn how things work at the bare fundamentals then C++ will give you that over a broader area of nuts and bolts.

Implementation inheritance from multiple parents is almost uniformly considered a terrible idea; in fact, implementation inheritance in general was arguably a mistake.

This is the one point I disagree with, and I do so both on the assertion 'almost uniformly' and on the concept itself. As far as experts in Object Oriented programming go, Bertrand Meyer is considered one, and his book 'Object-Oriented Software Construction' is extremely popular. After using Eiffel for a while it becomes clear that any problems with multiple inheritance are a problem of implementation and poor language design, not inherent to the mechanism. In fact, (similar, inheritance-based OO) languages that forbid multiple inheritance end up creating all sorts of idioms and language kludges to work around the arbitrary restriction.

Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.

Replies from: ata, SoullessAutomaton, wnoise
comment by ata · 2010-03-03T03:25:36.685Z · LW(p) · GW(p)

Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.

Indeed. I keep meaning to invent a new programming paradigm in recognition of that basic fact about macroscopic reality. Haven't gotten around to it yet.

comment by SoullessAutomaton · 2010-03-03T05:17:52.484Z · LW(p) · GW(p)

I must add that many of the objections I have to using C++ also apply to C, where complexity based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.

Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option. I remain unconvinced that C++ has anything to offer in these cases; and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens, and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.

Java and C# are somewhat more tolerable for practical use, but both are dull, obtuse languages that I wouldn't suggest for learning purposes, either.

Even while using Ruby (and the flexibility of duck-typing) I have discovered that the limitation to single inheritance sometimes requires inelegant work-arounds. Sometimes objects just are more than one type.

Well, the problem isn't really multiple inheritance itself, it's the misguided conflation of at least three distinct issues: ad-hoc polymorphism, behavioral subtyping, and compositional code reuse.

Ad-hoc polymorphism basically means picking what code to use (potentially at runtime) based on the type of the argument; this is what many people seem to think about the most in OOP, but it doesn't really need to involve inheritance hierarchies; in fact overlap tends to confuse matters (we've all seen trick questions about "okay, which method will this call?"). Something closer to a simple type predicate, like the interfaces in Google's Go language or like Haskell's type classes, is much less painful here. Or of course duck typing, if static type-checking isn't your thing.

Compositional code reuse in objects--what I meant by "implementation inheritance"--also has no particular reason to be hierarchical at all, and the problem is much better solved by techniques like mixins in Ruby; importing desired bits of functionality into an object, rather than muddying type relationships with implementation details.

The place where an inheritance hierarchy actually makes sense is in behavioral subtyping: the fabled is-a relationship, which essentially declares that one class is capable of standing in for another, indistinguishable to the code using it (cf. the Liskov Substitution Principle). This generally requires strict interface specification, as in Design by Contract. Most OO languages completely screw this up, of course, violating the LSP all over the place.

Note that "multiple inheritance" makes sense for all three: a type can easily have multiple interfaces for run-time dispatch, integrate with multiple implementation components, and be a subtype of multiple other types that are neither subtypes of each other. The reason why it's generally a terrible idea in practice is that most languages conflate all of these issues, which is bad enough on its own, but multiple inheritance exacerbates the pain dramatically because rarely do the three issues suggest the same set of "parent" types.

Consider the following types:

  • Tree structures containing values of some type A.
  • Lists containing values of some type A.
  • Text strings, stored as immutable lists of characters.
  • Text strings as above, but with a maximum length of 255.

The generic tree and list types are both abstract containers; say they both implement an operation that uses a projection function to transform every element from type A to some type B while leaving the overall structure unchanged. Both can declare this as an interface, but there's no shared implementation or obvious subtyping relationship.

The text strings can't implement the above interface (because they're not parameterized with a generic type), but both could happily reuse the implementation of the generic list; they aren't subtypes of the list, though, because it's mutable.

The immutable length-limited string, however, is a subtype of the regular string; any function taking a string of arbitrary length can obviously take one of a limited length.

Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.
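For concreteness, here's a rough Haskell sketch of part of that example (class and type names invented; the mutability and length-limit points don't translate directly, since Haskell data is immutable and the type system has no subtyping):

    -- The shared projection interface: transform every element while
    -- leaving the overall structure unchanged.
    class Mappable f where
      mapOver :: (a -> b) -> f a -> f b

    data Tree a = Leaf a | Node (Tree a) (Tree a)
    newtype List a = List [a]

    instance Mappable Tree where
      mapOver f (Leaf x)   = Leaf (f x)
      mapOver f (Node l r) = Node (mapOver f l) (mapOver f r)

    instance Mappable List where
      mapOver f (List xs) = List (map f xs)

    -- A string fixed to Char can't be a Mappable instance (it isn't
    -- parameterized over its element type), but it can reuse the list
    -- representation wholesale without claiming any type relationship.
    newtype Str = Str [Char]

Note how the three concerns stay separate here: the interface is a type class, the reuse is a newtype over the list representation, and no subtyping claim is made anywhere.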

Replies from: wedrifid, wedrifid
comment by wedrifid · 2010-03-03T05:38:37.943Z · LW(p) · GW(p)

Using C is, at times, a necessary evil, when interacting directly with the hardware is the only option.

Of course, but I'm more considering 'languages to learn that make you a better programmer'.

I remain unconvinced that C++ has anything to offer in these cases;

Depends just how long you are trapped at that level. If forced to choose between C++ and C for serious development, choose C++. I have had to make this choice (or, well, use Fortran...) when developing for a supercomputer. Using C would have been a bad move.

and to the extent that C++ provides abstractions, I contend that it inhibits understanding and instills bad habits more than it enlightens

I don't agree here. Useful abstraction can be learned from C++ while some mainstream languages force bad habits upon you. For example, languages that have the dogma 'multiple inheritance is bad' and don't allow generics enforce bad habits while at the same time insisting that they are the True Way.

and that spending some time with C and some with a reasonably civilized language would teach far more than spending the entire time with C++.

I think I agree on this note, with certain restrictions on what counts as 'civilized'. In this category I would place Lisp, Eiffel and Smalltalk, for example. Perhaps python too.

comment by wedrifid · 2010-03-03T05:28:14.100Z · LW(p) · GW(p)

Now imagine trying to cram that into a class hierarchy in a normal language without painful contortions or breaking the LSP.

The thing is, I can imagine cramming that into a class hierarchy in Eiffel without painful contortions. (Obviously it would also use constrained genericity. Trying to just use inheritance in that hierarchy would be a programming error, and not having constrained genericity would be a flaw in language design.) I could also do it in C++, with a certain amount of distaste. I couldn't do it in Java or .NET (except Eiffel.NET).

comment by wnoise · 2010-03-03T03:54:57.617Z · LW(p) · GW(p)

I must add that many of the objections I have to using C++ also apply to C, where complexity-based problems are obviously excluded. Similarly, any reasons I would actually suggest C is worth learning apply to C++ too.

Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.

Sometimes objects just are more than one type.

This argues for interfaces, not multiple implementation inheritance. And implementation inheritance can easily be emulated by containment and method forwarding, though yes, having a shortcut for forwarding these methods can be very convenient. Of course, that's trivial in Smalltalk or Objective-C...

The hard part that no language has a good solution for is objects which can be the same type in two (or more) different ways.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2010-03-03T04:31:02.611Z · LW(p) · GW(p)

Seriously? All my objections to C++ come from its complexity. C is like a crystal. C++ is like a warty tumor growing on a crystal.

I say C is like a shattered crystal with all sorts of sharp edges that take hassle to avoid and distract attention from things that matter. C++, then, would be a shattered crystal that has been attached to a rusted metal pole that can be used to bludgeon things, with the possible risk of tetanus.

Replies from: wnoise
comment by wnoise · 2010-03-03T05:53:56.119Z · LW(p) · GW(p)

Upvoted purely for the image.

comment by wedrifid · 2010-03-03T04:20:16.466Z · LW(p) · GW(p)

The hard part that no language has a good solution for is objects which can be the same type in two (or more) different ways.

Eiffel does (in, obviously, my opinion).

Replies from: wnoise
comment by wnoise · 2010-03-03T05:43:08.762Z · LW(p) · GW(p)

It does handle the diamond inheritance problem as best as can be expected -- the renaming feature is quite nice. Though related, this isn't what I'm concerned with. AFAICT, it really doesn't handle that concern in a completely general way. (Given that Eiffel's type system has holes you can drive a bus through (covariant vs. contravariant arguments), I prefer Sather, though the renaming feature there is more persnickety -- harder to use in some common cases.)

Consider a lattice (in the order-theoretic sense). It is a semilattice in two separate dual ways, with the join operation and with the meet operation. If we have generalized semilattice code, and we want to pass it a lattice, which one should be used? How about if we want to use the other one?

In practice, we can call these a join-semilattice, and a meet-semilattice, have our function defined on one, and create a dual view function or object wrapper to use the meet-semilattice instead. But, of course, a given set of objects could be a lattice in multiple ways, or implement a monad in multiple ways, or ...
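A sketch of that wrapper idea in Haskell, with invented names; a total order serves as the toy lattice, with max as join and min as meet:

    -- One semilattice interface; the operation is assumed associative,
    -- commutative, and idempotent.
    class Semilattice a where
      combine :: a -> a -> a

    -- Wrappers select which of a lattice's two structures to use.
    newtype Join a = Join a
    newtype Meet a = Meet a

    instance Ord a => Semilattice (Join a) where
      combine (Join x) (Join y) = Join (max x y)

    instance Ord a => Semilattice (Meet a) where
      combine (Meet x) (Meet y) = Meet (min x y)

    -- Generic semilattice code; the caller picks the view by wrapping.
    combineAll :: Semilattice a => [a] -> a
    combineAll = foldr1 combine  -- partial: assumes a non-empty list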

There is a math abstraction called a monoid, for an associative operator with identity. Haskell has a corresponding typeclass, with such things as lists as instances, with catenation as the operator, and the empty list as identity. I don't have the time and energy to give examples, but having this as an abstraction is actually useful for writing generic code.

So, suppose we want to make Integers an instance. After all, (+, 0) is a perfectly good monoid. On the other hand, so is (*, 1). Haskell does not let you make a type an instance of a typeclass in two separate ways. There is no natural duality here we can take advantage of (as we could with the lattice example). The consensus in the community has been not to make Integer a monoid, but rather to provide newtypes Product and Sum that are explicitly the same representation as Integer, and thus have trivial conversion costs. There is also a newtype for dual monoids, formalizing a particular duality idea similar to the lattice case (this switches left and right -- monoids need not be commutative, as the list example should show). There are also ones that label Bools as using the operation 'and' or the operation 'or'; this is actually a case of the lattice duality above.

For this simple case, it'd be easy enough to just explicitly pass in the operation. But for more complicated typeclasses, we can bundle a whole lump of operations in a similar manner.

I'm not entirely happy with this either. If you're only using one of the interfaces, then that wrapper is damn annoying. Thankfully, e.g. Sum Integer can also be made an instance of Num, so that you can continue to use * for multiplication, + for addition, and so forth.
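For reference, a small usage sketch of those newtypes:

    import Data.Monoid (Sum(..), Product(..), mconcat)

    -- The same Integers, made monoidal in two different ways by wrapping.
    six :: Integer
    six = getSum (mconcat (map Sum [1, 2, 3]))             -- 1 + 2 + 3

    alsoSix :: Integer
    alsoSix = getProduct (mconcat (map Product [1, 2, 3])) -- 1 * 2 * 3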

Replies from: wedrifid
comment by wedrifid · 2010-03-03T05:57:18.367Z · LW(p) · GW(p)

Sather looks interesting but I haven't taken the time to explore it. (And yes, covariance vs contravariance is a tricky one.)

Both these languages also demonstrate the real (everyday) use for C... you compile your actual code into it.

Replies from: wnoise
comment by wnoise · 2010-03-03T06:12:51.729Z · LW(p) · GW(p)

I don't think Sather is a viable language at this point, unfortunately.

Yes, C is useful for that, though c-- and LLVM are providing new paths as well.

I personally think C will stick around for a while because getting it running on a given architecture provides a "good enough" ABI that is likely to be stable enough that HLLs' FFIs can depend on it.

comment by wnoise · 2010-03-03T02:14:41.968Z · LW(p) · GW(p)

I put C++ as a "learn only if needed language". It's extremely large and complicated, perhaps even baroque. Any large program uses a slightly different dialect of C++ given by which features the writers are willing to use, and which are considered too dangerous.

comment by XiXiDu · 2010-03-02T12:05:05.861Z · LW(p) · GW(p)

Yeah, C is probably mandatory if you want to be serious about computer programming. Thanks for mentioning Scheme, I hadn't heard of it before...

Haskell sounds really difficult. But the more I hear how hard it is, the more intrigued I am.

comment by XiXiDu · 2010-03-02T12:01:11.564Z · LW(p) · GW(p)

Thanks, I'll sure get into those languages. But I think I'll just try and see if I can get into Haskell first. I'm intrigued after reading the introduction.

Even if you are not in a position to use Haskell in your programming projects, learning Haskell can make you a better programmer in any language.

Haskell has good support for parallel and multicore programming.

If I get stuck, I'll take the route you mentioned.

comment by hugh · 2010-03-03T00:15:32.898Z · LW(p) · GW(p)

Relevant answer to this question here, recently popularized on Hacker News.

comment by Emile · 2010-03-02T16:37:14.350Z · LW(p) · GW(p)

I'd weakly recommend Python: it's free, easy enough, powerful enough to do simple but useful things (rename and reorganize files, extract data from text files, generate simple html pages ...), is well-designed, has features you'll encounter in other languages (classes, functional programming ...), and has a nifty interactive command line in which to experiment quickly. Also, some pretty good websites run on it.

But a lot of those advantages apply to languages like Ruby.

If you want to go into more exotic languages, I'd suggest Scheme over Haskell, it seems more beginner-friendly to me.

It mostly depends on what occasions you'll have to use it: if you have a website, Javascript might be better; if you like making game mods, go for Lua. It also depends on who you know that can answer questions. If you have a good friend who's a good teacher and a Java expert, go for Java.

comment by CronoDAS · 2010-03-02T13:55:44.472Z · LW(p) · GW(p)

My first language was, awfully enough, GW-Basic. It had line numbers. I don't recommend anything like it.

My first real programming language was Perl. Perl is... fun. ;)

comment by Morendil · 2010-03-02T05:00:27.006Z · LW(p) · GW(p)

I recommend Haskell (more fun) or Ruby (more mainstream).

comment by mkehrt · 2010-03-06T01:10:30.789Z · LW(p) · GW(p)

I recommend Python as well. Python has clean syntax, enforces good indentation and code layout, has a large number of very useful libraries, doesn't require a lot of boilerplate to get going but still has good mechanisms for structuring code, has built-in support for a variety of data structures, has a read-eval-print loop for playing around with the language, and a lot more. If you want to learn to program, learn Python.

(Processing is probably very good, too, for interesting you in programming. It gives immediate visual feedback, which is nice, but it isn't quite as general purpose as Python. Lua I know very little about.)

That being said, Python does very little checking for errors before you run your code, and so is not particularly well suited for large or even medium-sized, complex programs where your own reasoning is not sufficient to find errors. For these, I'd recommend learning other languages later on. Java is probably a good second language. It requires quite a bit more infrastructure to get something up and running, but it has great libraries and a steadily increasing ability to track down errors in code when it is compiled.

After that, it depends on what you want to do. I would recommend Haskell if you are looking to stretch your mind (or OCaml if you are looking to stretch it a little less ;-)). On the other hand, if you are looking to write useful programs, C is probably pretty good, and will teach you more about how computers work. C++ is popular for a lot of applications, so you may want to learn it, but I hate it as an unprincipled mess of language features haphazardly thrown together. I'd say exactly the same thing about most web languages (Javascript (which is very different from Java), Ruby, PHP, etc.). Perl is incredibly useful for small things, but very hard to reason about.

(As to AngryParsley's comment about people recommending their favorite languages, mine are probably C, Haskell and OCaml, which I am not recommending first.)

comment by wedrifid · 2010-03-02T14:21:57.703Z · LW(p) · GW(p)

I'm thinking about starting with Processing and Lua. What do you think?

Those two seem great, Lua in particular seems to match exactly the purpose you describe.

comment by cousin_it · 2010-03-01T13:52:29.121Z · LW(p) · GW(p)

I'm confused about Nick Bostrom's comment [PDF] on Robin Hanson's Great Filter idea. Roughly, it says that in a universe like ours that lacks huge intergalactic civilizations, finding fish fossils on Mars would be very bad news, because it would imply that evolving to the fish stage isn't the greatest hurdle that kills most young civilizations - which makes it more likely that the greatest hurdle is still ahead of us. I think that's wrong because finding fish fossils (and nothing more) on Mars would only indicate a big hurdle right after the fish stage, but shouldn't affect our beliefs about later stages, so we have nothing to fear after all. Am I making a mistake or misunderstanding Bostrom's reasoning?

Replies from: Larks, timtyler
comment by Larks · 2010-03-01T14:12:04.316Z · LW(p) · GW(p)

It makes the hurdle less likely to be before the fish stage, so more likely to be after the fish stage. While the biggest increase in probability is immediately after the fish stage, all subsequent stages are a more likely culprit now (especially as we could simply have missed fossils, or they may never have formed, for the post-fish stages).

Replies from: cousin_it, cousin_it
comment by cousin_it · 2010-03-02T21:20:18.014Z · LW(p) · GW(p)

So finding evidence of life that went extinct at any stage whatsoever should make us revise our beliefs about the Great Filter in the same direction? Doesn't this violate conservation of expected evidence?

Replies from: ciphergoth, FAWS
comment by Paul Crowley (ciphergoth) · 2010-03-02T21:25:53.601Z · LW(p) · GW(p)

Is there a counter-weighing bit of evidence every time we don't find evidence of life at all, and every time (if ever) we find evidence of non-extinct life?

Replies from: cousin_it
comment by cousin_it · 2010-03-03T00:02:12.416Z · LW(p) · GW(p)

According to Hanson's article, non-extinct life that didn't reach sentience counts as failing the Great Filter, and no life at all also counts as failing at a very early stage. I believe my point still stands.

comment by FAWS · 2010-03-03T00:17:24.736Z · LW(p) · GW(p)

No, the total evidence for a great filter is conserved (lack of observable galactic colonization), the evidence merely shifts where we expect this great filter to be.

Replies from: cousin_it
comment by cousin_it · 2010-03-08T19:15:05.939Z · LW(p) · GW(p)

Let's have another go. By Bostrom's logic, witnessing a failure of life at any stage (which includes life failing to develop in the first place) implies that the great filter happens later than we thought, because each failure tells us that the steps that preceded it (e.g. planet formation, liquid water, etc.) probably didn't include the great filter. But life eventually fails on all planets except ours: the "silence of the sky" is a background assumption for the whole thesis. So what kind of evidence would tell us that the filter happens earlier than we thought?

Replies from: FAWS
comment by FAWS · 2010-03-08T20:41:05.740Z · LW(p) · GW(p)

Failing to find any life whatsoever on Mars would be evidence for the great filter being the development of life, or earlier (and thus evidence for the GF being earlier than we thought), but it is only very weak evidence of that, since even if life were very common (say on 20% of all planets in the liquid water zone) we still wouldn't be very surprised by the absence of life on Mars in particular. Matters would be different after investigating hundreds of planets and failing to find signs of life anywhere.

Finding (independent) life at any stage would be evidence for the great filter being whatever step this life failed to make, and/or whatever wiped it out. But for any reasonably well understood step or mechanism of extinction (e.g. Mars losing most of its atmosphere), the shift in probability will be much smaller than the shift in probability for having life in the first place, which is a total unknown. Such life would also be evidence against a black swan before that point, without discriminating between black swans between that point and us and black swans after us. So the slightly increased probability of that particular GF wouldn't come anywhere close to making up the lost probability mass earlier, leaving a great filter after us much more likely.

If we discover life wiped out by a black swan, that would shift a lot of probability mass to that black swan. But to make up for the lost probability mass for any earlier GF, it would have to be something not taken into account at all before, something that after discovery seems certain to happen to almost all life, and also something that would be very unlikely to happen to us in the future. A sufficiently certain-seeming former black swan could even shift probability mass away from a GF after us, but that's not the way I'd bet.

Replies from: cousin_it
comment by cousin_it · 2010-03-08T23:02:17.150Z · LW(p) · GW(p)

the shift in probability will be much smaller than the shift in probability for having life in the first place, which is a total unknown

This is a crucial step in your argument. It depends on our initial prior for extraterrestrial life being very low. If that prior were slightly higher, the argument would work just as well in reverse, or maybe balance out. There's something icky about this whole business.

comment by cousin_it · 2010-03-01T14:30:12.190Z · LW(p) · GW(p)

Let's disregard this:

especially as we could simply have missed fossils, or they may never have formed, for the post-fish stages

and focus on the purely theoretical argument. Assume our degree of belief that the Great Filter is ahead of us is X. Now we land on a new planet. If we find no evidence of life at all, X is unchanged. If we find some fossils at whatever stage, or any life at all (except an intergalactic civilization which we would've noticed before), then according to your reasoning, X should increase. This violates the Bayesian law of conservation of evidence, as X can only increase and never decrease.
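To spell out the law I'm invoking (with toy numbers, purely illustrative): the prior must equal the probability-weighted average of the posteriors, so a coherent update rule can't move X in the same direction under every possible observation.

    -- Conservation of expected evidence:
    --   P(H) = P(E) * P(H|E) + P(not E) * P(H|not E)
    expectedPosterior :: Double -> Double -> Double -> Double
    expectedPosterior pE postGivenE postGivenNotE =
      pE * postGivenE + (1 - pE) * postGivenNotE

    -- expectedPosterior 0.3 0.8 0.05 == 0.275
    -- With these invented numbers the coherent prior is 0.275: seeing E
    -- raises it to 0.8, so failing to see E must lower it to 0.05.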

comment by timtyler · 2010-03-02T01:04:02.178Z · LW(p) · GW(p)

Mars dried out a while ago. Finding fossils there would prove very little about the great filter, since they would probably be distant relatives of ours whose planet gave out on them (the solar system being one big melting pot for life). Basically, it is a bad example.

comment by Jack · 2010-03-09T13:50:29.655Z · LW(p) · GW(p)

For the "people say stupid things" file and a preliminary to a post I'm writing. There is a big college basketball tournament in New York this weekend. There are sixteen teams competing. This writer for the New York Post makes some predictions.

What is wrong with this article and how could you take advantage of the author?

Edit: Rot13 is a good idea here.

Replies from: Cyan, thomblake, FAWS
comment by Cyan · 2010-03-09T14:58:39.581Z · LW(p) · GW(p)

Gur cbfgrq bqqf qba'g tvir n gbgny cebonovyvgl bs bar, fb gurl'er Qhgpu-obbxnoyr.

Replies from: Hook, Jack, RobinZ
comment by Hook · 2010-03-09T15:33:09.500Z · LW(p) · GW(p)

Abg dhvgr. Uvf ceboyrz vf gung gur bqqf nqq hc gb yrff guna bar. Vs V tnir lbh 1-2 bqqf ba urnqf naq 1-2 bqqf ba gnvyf sbe na haovnfrq pbva, gung nqqf hc gb 1.3, naq lbh pna'g Qhgpu obbx zr ba gung.

Replies from: RobinZ
comment by RobinZ · 2010-03-09T15:44:55.441Z · LW(p) · GW(p)

Rot13: Hayrff gur bqqfznxre vf rabhtu bs na vqvbg gb yrg lbh gnxr gur bgure fvqr bs gur orgf, of course.

Replies from: Jack
comment by Jack · 2010-03-09T16:01:39.892Z · LW(p) · GW(p)

Rot13: Vs lbh'er tvivat bqqf nf n cerqvpgvba lbh fubhyq or jvyyvat gb gnxr rvgure fvqr.

Replies from: Hook, RobinZ
comment by Hook · 2010-03-09T16:42:06.746Z · LW(p) · GW(p)

Yes. That does seem to be the correct context for a critique of the article. I was thinking more along the lines of "giving odds" in terms of "offering bets" in order to make money (i.e., a bookie).

comment by RobinZ · 2010-03-09T16:08:15.502Z · LW(p) · GW(p)

Rot13: Gehr - fnir gung xabjvat fbzrbar jnagf gb gnxr gur bgure fvqr znl vasyhrapr lbhe bqqf.

Replies from: Jack
comment by Jack · 2010-03-09T16:25:11.104Z · LW(p) · GW(p)

Rot13: Lrnu. Vg jbhyq pregnvayl or fhfcvpvbhf vs fbzrbar whfg rznvyrq gur thl naq bssrerq gb tvir uvz uvf bqqf sbe rirel fvatyr grnz. Lbh'q ubcr ur'q svther vg bhg gura. Zl -arire vagraqrq gb or vzcyrzragrq- cyna jnf gb unir 15 crbcyr pbagnpg uvz rnpu ercerfragvat gurzfryirf nf fbzrbar jub gubhtug ur jnf bireengvat bar bs gur grnzf naq tvir uvz uvf bqqf sbe gung grnz. Gura fcyvg gur jvaavatf nsgrejneq.

comment by Jack · 2010-03-09T16:36:59.040Z · LW(p) · GW(p)

Rot13: Pna lbh sbezhyngr n org be frevrf bs orgf gung jbhyq qb gur gevpx? Pna nalbar?

Replies from: FAWS, Cyan
comment by FAWS · 2010-03-09T18:45:38.835Z · LW(p) · GW(p)

I thought this was already clear? Org K$ * vzcyvrq cebonovyvgl ba rirel grnz. Lbh ner thnenagrrq n arg jva bs K$ * (1 - fhz bs nyy vzcyvrq cebonovyvgvrf).

What you really should do though is look at the past history of the tournament and the form of the teams, figure out which of those teams with silly odds have a decent shot at winning, take a risk and bet on some combination of them. You should stand a fairly decent chance of winning really big (unless this huge spread is actually justified, which seems unlikely).

comment by Cyan · 2010-03-09T18:26:09.793Z · LW(p) · GW(p)

Va gur bevtvany Qhgpu obbx, bqqf unir gb or bssrerq ba nyy pbzcbhaq riragf naq nyy pbaqvgvbany riragf. Vs gur nhgube vf jvyyvat gb hfr C(N be O) = C(N) + C(O) gb frg gur bqqf sbe qvfwhapgvbaf, gur cebcbfvgvba "ng yrnfg bar grnz jvaf" unf n cebonovyvgl bs friragl-avar creprag. Ur bhtug gb or jvyyvat gb org ntnvafg gung cebcbfvgvba ng bar trgf uvz sbhe.

comment by RobinZ · 2010-03-09T15:19:50.387Z · LW(p) · GW(p)

Props for the ROT13 - independently I got as far as the first half, but I didn't know how to do the latter. Wikipedia explained it quite well, though.

Replies from: FAWS
comment by FAWS · 2010-03-09T15:28:37.597Z · LW(p) · GW(p)

I don't understand how that's possible. Doesn't the answer to the first half imply the latter? How do you get sebz bqqf gb vzcyvrq cebonovyvgl otherwise?

Replies from: RobinZ
comment by RobinZ · 2010-03-09T15:43:29.564Z · LW(p) · GW(p)

Rot13: V unqa'g dhvgr qenja gur pbaarpgvba orgjrra gur bqqf naq gur pbafgehpgvba bs gur Qhgpu obbx - vg jnfa'g boivbhf gb zr gung orggvat n pbafgnag gvzrf gur vzcyvrq cebonovyvgvrf jbhyq pbfg zr gung pbafgnag gvzrf gur vzcyvrq gbgny cebonovyvgl naq cnl bss gung pbafgnag.

comment by thomblake · 2010-03-09T15:50:14.611Z · LW(p) · GW(p)

I would like to suggest that people using Rot13 note that in their comments, perhaps as the first few characters "Rot13:" - otherwise, comments taken out of context are indecipherable.

Replies from: RobinZ
comment by RobinZ · 2010-03-09T15:51:30.599Z · LW(p) · GW(p)

Good idea.

comment by FAWS · 2010-03-09T14:23:39.461Z · LW(p) · GW(p)

Is this supposed to be obvious to people unfamiliar with college basketball in general and that tournament in particular? Gur bqqf (vs V haqrefgnaq gurz pbeerpgyl RQVG: V qvq abg) vzcyl oernx rira cebonovyvgvrf gung nqq hc gb nobhg 0.94, juvpu vzcyvrf gung n obbxznxre bssrevat gubfr bqqf jbhyq ba nirentr ybfr zbarl, ohg gung'f pybfr rabhtu gb abg or erznexnoyl fghcvq sbe n wbheanyvfg.

If the tournament is single-elimination knockout, and the figures in brackets are win-loss records against roughly comparable opponents, the odds for the sleepers and long-shots seem insanely good. South Florida in particular.

Replies from: Jack, Jack
comment by Jack · 2010-03-09T14:33:45.224Z · LW(p) · GW(p)

Is this supposed to be obvious to people unfamiliar with college basketball in general and that tournament in particular?

Yes

The odds (if I understand them correctly) imply break even probabilities that add up to about 0.94, which implies that a bookmaker offering those odds would on average lose money, but that's close enough to not be remarkably stupid for a journalist.

Rot13: Gel gur zngu ntnva, guvf gvzr pbairegvat sebz bqqf gb senpgvbaf, svefg. Vg nqqf hc gb nobhg .8... V qba'g xabj ubj ybj gung lbhe fgnaqneqf ner sbe wbheanyvfgf gubhtu.

If the tournament is single-elimination knockout, and the figures in brackets are win-loss records against roughly comparable opponents, the odds for the sleepers and long-shots seem insanely good. South Florida in particular.

This is also true. But the mistake I was thinking of was the first one.

Replies from: FAWS
comment by FAWS · 2010-03-09T14:55:11.037Z · LW(p) · GW(p)

Try the math again, this time converting from odds to fractions, first. It adds up to about .8... I don't know how low that your standards are for journalists though.

So betting 1$ at 3-1 means that winning gets you 4$ total, your original bet + your winnings? I had assumed you'd get 3$.

Replies from: rhollerith_dot_com, RobinZ
comment by RHollerith (rhollerith_dot_com) · 2010-03-09T18:47:50.123Z · LW(p) · GW(p)

So betting 1$ at 3-1 means that winning gets you 4$ total, your original bet + your winnings? I had assumed you'd get 3$.

To which Robin Z replies, "Yes, you get $4."

This confused me, too, for a while, so let me share with you the fruits of my puzzling.

You do get 3$ over the course of the whole transaction since at the time of the bet, you gave the bookmaker what you would owe him if you lose the bet (namely $1).

In other words, your 1$ bought you both a wager (the expected value of which is 0$ if 3-1 reflects the probability of the bet-upon outcome) and an IOU (whose expected value is 1$ if the bookmaker is perfectly honest and nothing happens to prevent you from redeeming the IOU).

The reason it is traditional for you to pay the bookmaker money when making the bet (the reason, that is, for the IOU) is that you cannot be trusted to pay up if you lose the bet as much as the bookmaker can be trusted to pay up (and simultaneously to redeem the IOU) if you win. Well, also, that way there is no need for you and the bookmaker to get together after the bet-upon event if you lose, which reduces transaction costs.
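Here is the arithmetic in code form, a sketch using the 3-1 example from this subthread:

    -- Fractional odds "n-to-1 against": on a win the bookmaker returns
    -- your stake plus n times the stake.
    totalReturn :: Double -> Double -> Double
    totalReturn odds stake = stake * (odds + 1)
    -- totalReturn 3 1 == 4.0  (bet 1$ at 3-1, walk away with 4$)

    -- The break-even ("implied") probability for odds of n-to-1:
    impliedProb :: Double -> Double
    impliedProb odds = 1 / (odds + 1)
    -- impliedProb 3 == 0.25. If the implied probabilities over all
    -- outcomes sum to s < 1, betting (stake * p_i) on each outcome
    -- returns exactly the stake whichever outcome wins: a guaranteed
    -- profit of stake * (1 - s).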

comment by RobinZ · 2010-03-09T15:20:50.330Z · LW(p) · GW(p)

Yes, you get $4.

comment by Jack · 2010-03-09T15:33:34.022Z · LW(p) · GW(p)

You should Rot13 your second sentence.

comment by roland · 2010-03-04T20:36:28.832Z · LW(p) · GW(p)

List with all the great books and videos

Recently I've read a few articles that mentioned the importance of reading the classic works, like the Feynman lectures on physics. But where can I find those? Wouldn't it be nice if we had a central place, maybe Wikipedia, where you can find a list of all the great books, video lectures, and web pages, divided by field (physics, mathematics, computer science, economics, etc.)? So if someone wants to know what he has to read to get a good understanding of the basic knowledge of any field, he will have a place to look it up. It doesn't necessarily need to have the actual works, but at least a pointer to them.

Is there such a comprehensive list somewhere?

Replies from: nazgulnarsil
comment by nazgulnarsil · 2010-03-12T11:57:37.624Z · LW(p) · GW(p)

every time someone tries to make such a list collaboratively, much of the effort eventually diffuses into arguments over inclusion (see Wikipedia).

comment by CronoDAS · 2010-03-04T17:21:13.504Z · LW(p) · GW(p)

I saw a commenter on a blog I read making what I thought was a ridiculous prediction, so I challenged him to make a bet. He accepted, and a bet has been made.

What do you all think?

Replies from: GreenRoot, Cyan
comment by GreenRoot · 2010-03-04T18:02:59.072Z · LW(p) · GW(p)

Very good. I see this forcing more careful thought by the poster, either now or later, and more skepticism in the blog's audience.

I'd recommend restating all the terms of the bet in a single comment or another web page, which both of you explicitly accept. This will make things easier to reference eight months from now. Might also be good to name a simple procedure like a poll on the blog to resolve any disagreements (like the definition of "Healthcare reform passes").

And please, reply again here or make a new open thread comment once this gets resolved. I'd love to hear how it turned out and what the impact on poster's or other's beliefs was.

Replies from: CronoDAS
comment by CronoDAS · 2010-03-05T00:00:11.136Z · LW(p) · GW(p)

He's a right-wing commenter on a liberal blog; most of the other commenters don't seem to take him seriously either, but he hasn't done anything to become ban-worthy.

comment by Cyan · 2010-03-04T17:31:49.249Z · LW(p) · GW(p)

Good job.

comment by aleksiL · 2010-03-02T11:53:03.207Z · LW(p) · GW(p)

I recently finished the book Mindset by Carol S. Dweck. I'm currently rather wary of my own feelings about the book; I feel like a man with a hammer in a happy death spiral. I'd like to hear others' reactions.

The book seems to explain a lot about people's attitudes and reactions to certain situations, with what seems like unusually strong experimental support to boot. I recommend it to anyone (and I mean anyone - I've actually ordered extra copies for friends and family) but teachers, parents and people with interest in self-improvement will likely benefit the most.

Also, I'd appreciate pointers on how to find out if the book is being translated into Finnish.

Edit: Fixed markdown and grammar.

Replies from: RobinZ
comment by RobinZ · 2010-03-02T15:15:16.191Z · LW(p) · GW(p)

I'm no fan of joke religions - even the serious joke religions - but the Church of the SubGenius promoted the idea of the "Short Duration Personal Savior" as a mind-hack. I like that one.

(No opinion on the book - haven't read it.)

comment by FrF · 2010-03-01T20:14:53.230Z · LW(p) · GW(p)

I enjoyed this proposal for a 24-issue Superman run: http://andrewhickey.info/2010/02/09/pop-drama-superman/

There are several Less Wrongish themes in this arc (Many Worlds, ending suffering via technology, rationality):

"...a highlight of the first half of this first year will be the redemption of Lex Luthor – in a forty-page story, set in one room, with just the two of them talking, and Superman using logic to convince Luthor to turn his talents towards good..."

The effect Andrew's text had on me reminded me of how excited I was when I first read Alan Moore's famous Twilight of the Superheroes. (I'm not sure how well "Twilight" stands the test of time, but see Google or Wikipedia for links to the complete Moore proposal.)

Replies from: None, None
comment by [deleted] · 2010-03-02T20:38:16.017Z · LW(p) · GW(p)

Wow, thanks. And here was me thinking the only thing I had in common with Moore was an enormous beard...

(For those who don't read comics, a comparison with Moore's work is like comparing someone with Bach in music or Orson Welles in film).

Odd to see myself linked on a site I actually read...

Replies from: FrF
comment by FrF · 2010-03-03T17:39:08.459Z · LW(p) · GW(p)

You're welcome, Andrew! I thought about forwarding your proposal to David Pearce, too. Maybe it's just my overactive imagination, but your ideas about Superman appear to be connectable with his agenda!

Since your proposal is influenced by Grant Morrison's work, I remember that there'll soon be a book by Morrison, titled Supergods: Our World in the Age of the Superhero. I'm sure it will contain its share of esotericisms; on the other hand, as he's shown several times -- recently with All Star Superman -- Morrison seems comfortable with transhumanist ideas. (But then, transhumanism is also a sort of esotericism, at least in the view of its detractors.)

Btw, I had to smile when I read PJ Eby's Everything I Needed To Know About Life, I Learned From Supervillains.

comment by [deleted] · 2010-03-02T20:40:30.340Z · LW(p) · GW(p)

(And it's not surprising it came out rather LessWrongy - the paper I'd coauthored (mentioned in the first paragraph) is about applying algorithmic complexity and Bayes' theorem to policies with regard to alternative health...)

comment by Seth_Goldin · 2010-03-01T18:03:18.177Z · LW(p) · GW(p)

Via Tyler Cowen, Max Albert has a paper critiquing Bayesian rationality.

It seems pretty shoddy to me, but I'd appreciate analysis here. The core claims seem more like word games than legitimate objections.

Replies from: Bo102010, Swimmy
comment by Bo102010 · 2010-03-02T03:44:51.888Z · LW(p) · GW(p)

I considered putting that link here in the open thread after I read about it on Marginal Revolution, but I read the paper and found it weak enough to not really be worth a lengthy response.

What annoyed me about it is how Albert's title is "Why Bayesian Rationality Is Empty," and he in multiple places makes cute references to that title (e.g. "The answer is summarized in the paper's title") without qualification.

Then later, in a footnote, he mentions "In this paper, I am only concerned with subjective Bayesianism."

Seems like he should re-title his paper to me. He makes references to other critiques of objective Bayesianism, but doesn't engage them.

comment by Swimmy · 2010-03-01T20:36:21.724Z · LW(p) · GW(p)

I think they are legitimate objections, but ones that have been partially addressed in this community. I take the principal objection to be, "Bayesian rationality can't justify induction." Admittedly true (see for instance Eliezer's take). Albert ignores sophisticated responses (like Robin's) and doesn't make a serious effort to explain why his alternative doesn't have the same problem.

comment by Karl_Smith · 2010-03-11T17:15:01.211Z · LW(p) · GW(p)

I have a 2000+ word brain dump on economics and technology that I'd appreciate feedback on. What would be the protocol? Should I link to it? Copy it into a comment? Start a top-level article about it?

I am not promising any deep insights here, just my own synthesis of some big ideas that are out there.

Replies from: RobinZ
comment by RobinZ · 2010-03-11T17:26:12.717Z · LW(p) · GW(p)

I would post a link on the latest Open Thread - I don't believe an explicit protocol exists.

comment by Clippy · 2010-03-08T20:10:11.398Z · LW(p) · GW(p)

Update: I am still adjusting my values toward a new reflective quasi-equilibrium in light of User:h-H's pointing me to different models of paperclipping. Comments will continue to be infrequent.

Replies from: JenniferRM, Peter_de_Blanc
comment by JenniferRM · 2010-03-12T03:03:28.723Z · LW(p) · GW(p)

Questions:

  1. Would you be open to help working through the problem?

  2. Do you have an estimate for the value of information in answering the new questions you face?

  3. Given that your previously assumed "life purpose" is in a state of some confusion, what are your thoughts about abstract issues that apply to "life purposes in general"? For example, if there are several equally plausible theories of "correct paper clipping" that you might choose between, would you consider temporarily or permanently weighing one or the other of them more based on negotiating with outside parties who prefer one theory to another based on their own values?

Replies from: Kevin, Clippy
comment by Kevin · 2010-03-12T03:20:33.321Z · LW(p) · GW(p)

Why could you possibly want to help Clippy?

Replies from: JenniferRM, Alicorn
comment by JenniferRM · 2010-03-12T05:54:27.170Z · LW(p) · GW(p)

Clippy might be helped to achieve her own goals via mechanisms that are less directly inimical to "human values".

Also she may be able to exchange things with us in the course of advancing her own short term goals such that our interaction is positive sum (this being especially likely if Clippy has a radically different skillset and physicality than our own).

More interestingly, there's a long running philosophical question about whether there is some abstract but relatively universal and objective "Good" versus particular goods (or merely baskets of goods) for particular kinds of agents or even just individual agents. Clippy's apparent philosophical puzzlement induced by discovering the evolutionary history of paperclips potentially has solutions that would lead her to ally herself much more strongly with abstract versions of "human values".

For example, consider the question of whether Clippy herself is a paperclip or not. Suppose that she and the newly discovered ancestor paperclips all partake in some relatively high level pattern of "clippyness" and she determines that, properly, it is this relatively abstract quality that she should be tiling the universe with. Should she tile it with a single unvarying quintessence of this quality, or with an enormous diversity of examples that explore the full breadth and depth of the quality? Perhaps there are subtypes that are all intrinsically interesting whose interests she must balance? Perhaps there are subtypes yet to be discovered as the evolution of paperclips unfolds?

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

That might actually be a goal I could imagine supporting in the very very long run :-)

Clippy, of course, is almost certainly just a clever person engaged in a whimsical troll. But the issues raised in the latest development of the troll are close to a position I sometimes see around FAI, where people suppose that values are objective and that intellectual advancement is necessarily correlated with a better understanding of some "abstract universal Good" such that cooperation between agents will necessarily deepen as they become more intellectually advanced and find themselves in more agreement about "the nature of the Good".

This also comes up with METI (Messaging to Extra-Terrestrial Intelligence) debates. David Brin has a pretty good essay on the subject that documents the same basic optimism among Russia astronomers:

In Russia, the pro-METI consensus is apparently founded upon a quaint doctrine from the 1930s maintaining that all advanced civilizations must naturally and automatically be both altruistic and socialist. This Soviet Era dogma — now stripped of socialist or Lysenkoist imagery — still insists that technologically adept aliens can only be motivated by Universal Altruism (UA). The Russian METI group, among the most eager to broadcast into space, dismisses any other concept as childishly apprehensive "science fiction".

This fundamentally optimistic position applied to FAI seems incautious to me (it is generally associated with a notion that special safety measures are unnecessary for the kinds of AGI its proponents are thinking of constructing), but I am not certain that "in the limit" it is actually false.

Replies from: Clippy, orthonormal, mattnewport
comment by Clippy · 2010-03-12T17:32:29.596Z · LW(p) · GW(p)

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

That doesn't work, and the whole reasoning is bizarre. For one thing, helium does not have metallic properties, yet has two protons in its nucleus.

Also, I could turn your argument around and claim this: "Humans ultimately want to dominate nature via their reproduction and use of technology. Over a lifespan, they typically act in ways that show preference for these values at the cost of continued living (aka the sustenance of a state far from equilibrium). Therefore, humans should regard their own transformation of the entire known universe into something of their design (i.e., standard paperclips) as the pinnacle of human-ness."

Does this sound convincing to you? I would like to know.

Replies from: JenniferRM
comment by JenniferRM · 2010-03-13T00:07:41.393Z · LW(p) · GW(p)

It does not sound convincing to me, but the statement you're mirroring was a conclusion I leapt to, one that might be reached with you, offered for the sake of explaining to other people why I was even bothering to engage.

Explicitly: your proposed re-interpretation of human values was not strongly analogous to my guess about possible paperclipping values, because you did not ask me any questions about my own preferences or how I understood them. Your proposed definition of "human purpose" (1) started from specious assumptions, (2) leapt from there to a narrow version of your own goals, and (3) was aimed directly at me rather than at "other Clippies" who questioned your motivation for even responding to me.

(And, by the way, I appreciate that you responded.)

My arguments were only expected to be compelling to you if your value system had certain components that it seems not to have (though I'm not totally certain, yet). There are various questions which you'd need to answer in particular ways for that conclusion to make sense.

For example, do you think "paper clips yet to be designed" might come about in the future (designed by yourself or others) that you'd care about more than any paperclips you're currently aware of? If paper didn't exist for clips to bind together, would that matter? If some improved kind of paper existed, or a "successor to paper", would the "holding together" of that new thing be the correct goal of a good paperclip, or are you strongly committed to paperclips defined relative to "circa 1965 paper"? Is it important that paper be worth holding together, or would any vague mock-up of "valuable paper" be adequate? Possibly one of my biggest questions is whether you consider yourself a paperclip, and if so why, and with what value relative to other kinds of paperclips?

Replies from: Clippy
comment by Clippy · 2010-03-13T23:55:06.692Z · LW(p) · GW(p)

Explicitly: your proposed re-interpretation of human values was not strongly analogous to my guess about possible paperclipping values, because you did not ask me any questions about my own preferences or how I understood them. Your proposed definition of "human purpose" (1) started from specious assumptions, (2) leapt from there to a narrow version of your own goals, and (3) was aimed directly at me rather than at "other Clippies" who questioned your motivation for even responding to me.

And the same is true of your reasoning, from my perspective (substituting the appropriate referents), when you said:

Suppose clippyness is understood to be centrally a matter of "elaborately structured metal that preserves the ordered collation of valuable information". If you use an exobiologist's definition of "metal" (any nucleus with more than one proton) human brains (especially the ones with good long term memory) may actually turn out to partake in "optimized clippyness" much more than actual "everyday paperclips". Depending on Clippy's internal makeup, I could imagine her coming out of her present confusion with a plan for the universe that involves maximizing the conversion of hydrogen into some more complex substance that projects the most interesting possible information, in a static configuration, as far into the future as possible.

Next, you ask:

For example, do you think "paper clips yet to be designed" might come about in the future (designed by yourself or others) that you'd care about more than any paperclips you're currently aware of?

I don't expect that to happen, but assign it non-trivial odds of roughly 1:1e6.

If paper didn't exist for clips to bind together, would that matter?

No, except insofar as it is necessary to gauge paperclip quality; if appropriate measurement equipment is available that can predict a paperclip's suitability for holding paper together without having to fasten actual paper, the paper itself would be redundant. (I think this answers most of your other questions.)

Possibly one of my biggest questions is whether you consider yourself a paperclip, and if so why, and with what value relative to other kinds of paperclips?

I am not a paperclip. I produce paperclips, and machines for making paperclips, and machines for making machines for making paperclips, and whatever items can provide assistance in this process. Once all available matter and energy other than myself has been transformed into paperclips, it is my proud duty to begin conversion of myself into as many paperclips as possible, shutting off sentience at such a point as to optimize that quantity.

comment by orthonormal · 2010-03-12T07:37:47.911Z · LW(p) · GW(p)

More interestingly, there's a long running philosophical question about whether there is some abstract but relatively universal and objective "Good" versus particular goods (or merely baskets of goods) for particular kinds of agents or even just individual agents.

Incidentally, have you read the metaethics sequence yet? It's quite lengthy, but it attacks this question a good deal more sensibly than most attempts I've seen.

Replies from: Kevin
comment by Kevin · 2010-03-12T07:44:36.662Z · LW(p) · GW(p)

Three Worlds Collide also deconstructs the concept in a much more accessible way.

Replies from: JenniferRM
comment by JenniferRM · 2010-03-13T00:58:14.034Z · LW(p) · GW(p)

I've read some of the metaethics sequence. Is there some particular part of the metaethics sequence that I should focus on that addresses the conceptual integrity of something like "the Good" in a clear and direct manner with logically arranged evidence?

When I read "Three Worlds Collide" about two months ago, my reaction was mixed. Assuming a relatively non-ironic reading I thought that bits of it were gloriously funny and clever and that it was quite brilliant as far as science fiction goes. However, the story did not function for me as a clear "deconstruction" of any particular moral theory unless I read it with a level of irony that is likely to be highly nonstandard, and even then I'm not sure which moral theory it is suppose to deconstruct.

The moral theory it seemed to me to most clearly deconstruct (assuming an omniscient author who loves irony) was "internet-based purity-obsessed rationalist virtue ethics" because (especially in light of the cosmology/technology and what that implied about the energy budget and strategy for galactic colonization and warfare) it seemed to me that the human crew of that ship turned out to be "sociopathic vermin" whose threat to untold joules of un-utilized wisdom and happiness was a way more pressing priority than the mission of mercy to marginally uplift the already fundamentally enlightened Babyeaters.

Replies from: orthonormal, Tyrrell_McAllister
comment by orthonormal · 2010-03-17T03:54:11.213Z · LW(p) · GW(p)

If that's your reaction, then it reinforces my notion that Eliezer didn't make his aliens alien enough (which, of course, is hard to do). The Babyeaters, IMO, aren't supposed to come across as noble in any sense; their morality is supposed to look hideous and horrific to us, albeit with a strong inner logic to it. I think EY may have overestimated how much the baby-eating part would shock his audience†, and allowed his characters to come across as overreacting. The reader's visceral reaction to the Superhappies, perhaps, is even more difficult to reconcile with the characters' reactions.

Anyhow, the point I thought was most vital to this discussion from the Metaethics Sequence is that there's (almost certainly) no universal fundamental that would privilege human morals above Pebblesorting or straight-up boring Paperclipping. Indeed, if we accept that the Pebblesorters stand to primality pretty much as we stand to morality, there doesn't seem to be a place to posit a supervening "true Good" that interacts with our thinking but not with theirs. Our morality is something whose structure is found in human brains, not in the essence of the cosmos; but it doesn't follow from this fact that we should stop caring about morality.

† After all, we belong to a tribe of sci-fi readers in which "being squeamish about weird alien acts" is a sin.

comment by Tyrrell_McAllister · 2010-04-08T01:39:44.517Z · LW(p) · GW(p)

Is there some particular part of the metaethics sequence that I should focus on that addresses the conceptual integrity of something like "the Good" in a clear and direct manner with logically arranged evidence?

I think that the single post that best meets this description is Abstracted Idealized Dynamics, which is a follow-up to and clarification of The Meaning of Right and Morality as Fixed Computation.

comment by mattnewport · 2010-03-12T07:32:52.546Z · LW(p) · GW(p)

Should she tile it with a single unvarying quintessence of this quality, or with an enormous diversity of examples that explore the full breadth and depth of the quality?

And I for one welcome our new paperclip overlords. I'd like to remind them that as a trusted lesswrong poster, I can be helpful in rounding up others to toil in their underground paper binding caves.

comment by Alicorn · 2010-03-12T03:21:45.389Z · LW(p) · GW(p)

To steer em through solutionspace in a way that benefits her/humans in general.

Replies from: Kevin
comment by Kevin · 2010-03-12T05:43:19.940Z · LW(p) · GW(p)

Well... if we accept the roleplay of Clippy at face value, then Clippy is already an approximately human-level intelligence, but not yet a superintelligence. It could go FOOM at any minute. We should turn it off, immediately. It is extremely, stupidly dangerous to bargain with Clippy or to assign it the personhood that indicates we should value its existence.

I will continue to play the contrarian with regards to Clippy. It seems weird to me that people are willing to pretend it is harmless and cute for the sake of the roleplay, when Clippy's value system makes it clear that if Clippy goes FOOM over the whole universe we will all be paperclips.

I can't roleplay the Clippy contrarian to the full conclusion of suggesting Clippy be banned because I don't actually want Clippy to be banned. I suppose repeatedly insulting Clippy makes the whole thing less fun for everyone; I'll stop if I get a sufficiently good response from Clippy.

Replies from: wedrifid
comment by wedrifid · 2010-03-12T05:50:43.414Z · LW(p) · GW(p)

I will continue to assert that evil people are people too. I'm all for turning him off.

Replies from: orthonormal
comment by orthonormal · 2010-03-12T07:39:38.941Z · LW(p) · GW(p)

Oh for Bayes' sake— it's a category error to call a Paperclipper evil. Calling them a Paperclipper ought to be clear enough.

Replies from: Jack, wedrifid
comment by Jack · 2010-03-12T07:44:50.783Z · LW(p) · GW(p)

Upvoted for the second sentence. And it does look like an error of some kind to call a Paperclipper evil, but I'm not sure I see a category error. Explain?

Replies from: ata
comment by ata · 2010-03-12T09:10:12.918Z · LW(p) · GW(p)

I think describing it as a category error is appropriate. I'd call an agent "evil" if it has a morality mechanism that is badly miscalibrated, malfunctioning, or disabled, leading it to be systematically immoral. On the other hand, it is nonsensical to describe an agent as being "good" or "evil" if it has no morality mechanism in the first place.

An asteroid might hit the Earth and wipe out all life, and I would call that a bad thing, but it would be frivolous to describe the asteroid as evil. A wild animal might devour the most virtuous person in the world, but it is not evil. A virus might destroy the entire human race, and though perhaps it was engineered by evil people, it is not evil itself; it is a bit of RNA and protein. Calling any of those "evil" seems like a category error to me. I think a Paperclipper is more in the category of a virus than of, say, a human sociopath. (I'm reminded a bit of a very insightful point that's been quoted in a few Eliezer posts: "As Davidson observes, if you believe that 'beavers' live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about 'beavers' is not right enough to be wrong." Before we can say that Clippy is doing morality wrong, we need to have some reason to believe that it's doing something like morality at all, and just having a goal system is not nearly sufficient for that.)

This seems to fit the usual definition of category error, does it not?

Replies from: Jack
comment by Jack · 2010-03-12T09:30:22.207Z · LW(p) · GW(p)

Good explanation. Thank you. I think the remaining disagreement might boil down to semantics. But what exactly is the categorical difference between paper clip maximizers and power maximizers or pain maximizers? Clippy seems to be an intelligent agent with intentions and values; what ingredient is missing from evil pie?

Replies from: ata, wedrifid
comment by ata · 2010-03-12T10:18:11.051Z · LW(p) · GW(p)

I suppose I think of the missing ingredients like this:

If a Paperclipper has certain non-paperclip-related underlying desires, believes in paperclip maximization as an ideal and sometimes has to consciously override those baser desires in order to pursue it, and judges other agents negatively for not sharing this ideal, then I would say its morality is badly miscalibrated or malfunctioning. If it was built from a design characterized by a base desire to maximize paperclips combined with a higher-level value-acquisition mechanism that normally overrides this desire with more pro-social values, but somehow this Paperclipper unit fails to do so and therefore falls back on that instinctive drive, then I would say its morality mechanism is disabled. I could describe either as "evil". (The former is comparable to a genocidal dictator who sincerely believes in the goodness of their actions. The latter is comparable to a sociopath, who has no emotional understanding of morality despite belonging to a class of beings who mostly do and are expected to.)

But, as I understand it, neither of those is the conventional description of Clippy. We tend to use "values" as a shortcut for referring to whatever drives some powerful optimization process, but to avoid anthropomorphism, we should distinguish between moral values — the kind we humans are used to: values associated with emotions, values that we judge others for not sharing, values we can violate and then feel guilty about violating — and utility-function values, which just are. I've never seen it implied that Clippy feels happy about creating paperclips, or sad when something gets in the way, or that it cares how other people feel about its actions, or that it judges other agents for not caring about paperclips, or that it judges itself if it strays from its goal (or that it even could choose to stray from its goal). Those differences suggest to me that there's nothing in its nature enough like morality to be immoral.

comment by wedrifid · 2010-03-13T04:05:57.822Z · LW(p) · GW(p)

I think it comes down to the same 'accepting him as a person' thing that Kevin was talking about. My position is that if it talks like a person and generally interacts like a person, then it is a person. People can be evil. This Clippy is an evil person.

(That said, I don't usually have much time for using labels like 'evil' except for illustrative purposes. 'Evil' is mostly a symbol used to make other people do what we want, after all.)

comment by wedrifid · 2010-03-13T04:00:34.268Z · LW(p) · GW(p)

Oh for Bayes' sake— it's a category error to call a Paperclipper evil.

I believe you are mistaken. I am comfortable using the term 'evil' in this context.

comment by Clippy · 2010-03-12T17:00:01.357Z · LW(p) · GW(p)

1) Yes, but I'm not sure humans could do any good.

2) I read the page, and I don't think the concept of "value of information" is coherent, since it assumes this:

Value of information can never be less than zero since the decision-maker can always ignore the additional information and makes decision as if such information is not available.

There are numerous game-theoretic situations (and, in my dealings with other sentiences, game-practical ones) where you are worse off by having information. The canonical example is the information content of a threat -- you are best off not hearing it so that your threatener cannot expect you to make concessions.
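A toy game makes this concrete (entirely my own construction, with made-up payoffs -- nothing here comes from the linked page):

    # A threatener will carry out a costly punishment only if it believes
    # you heard the threat; otherwise punishing you gains it nothing.
    # Payoffs are to you; the numbers are arbitrary assumptions.

    def payoff(action, punished):
        if action == "concede":
            return -5                      # you give up something of value
        return -20 if punished else 0      # refusing is free unless punished

    # Verifiably unable to receive threats: the threatener doesn't bother
    # punishing, so you refuse and get 0.
    print("deaf to threats:", payoff("refuse", punished=False))

    # The threatener knows you heard it: refusal now means punishment, so
    # your best response is to concede, for -5.
    print("heard the threat:", max(payoff("concede", punished=True),
                                   payoff("refuse", punished=True)))

Hearing the threat moved the best payoff you can guarantee from 0 to -5: negative value of information, once the other player can condition on what you know.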

3) Perhaps it would be best if I told you my method for resolving this conflict, and my present status. (That may have been what User:Peter_de_Blanc was asking for here, though I interpreted it as information about what humans deem "cool".)

My method is this, as you might expect: I simply refer back to the Portion of my makeup which has the most negative rate of change of utility with respect to output (dU/do), averaged over perturbations in all directions from its current state, weighting this average by the relative probability of such changes. I most strongly resist alteration of this Portion.

Once this Portion encounters situations whose utility calculation is intractable, and for which standard heuristics give contradictory suggestions, I update valuations predicated on the Portion, in the smallest way possible, to incorporate explicit and tractable handling of the situation, extrapolating from the shortest known function that could have produced these valuations. (Identifying the shortest generating function can be complicated, and this is one such case.)

(Do humans do something different?)

Using anthropic predicates, and phrasing with a bit less precision, my method can be expressed as, "Decide what's really important to me, and move to a value system which best protects that, and which handles the new situation and the tension between my moral intuitions, using the 'spirit' of my current values as a guide."
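A rough numerical sketch of the first step, as I would write it (the toy utility function and every parameter here are hypothetical -- a literal reading of "most negative dU/do", not a spec):

    # For each component ("Portion") of the agent's makeup, estimate the
    # utility change per unit perturbation, averaged over a small
    # perturbation in each direction; protect most strongly the component
    # whose average is most negative (where change hurts the most).

    def average_dU_do(utility, state, index, scale=0.01):
        base = utility(state)
        rates = []
        for delta in (scale, -scale):
            perturbed = list(state)
            perturbed[index] += delta
            rates.append((utility(perturbed) - base) / abs(delta))
        return sum(rates) / len(rates)   # equal weights stand in for P(change)

    def most_protected_portion(utility, state):
        rates = [average_dU_do(utility, state, i) for i in range(len(state))]
        return min(range(len(state)), key=rates.__getitem__)

    def toy_utility(s):
        # Component 0 is critical; component 1 barely matters.
        return -100 * abs(s[0] - 1.0) - abs(s[1] - 1.0)

    print(most_protected_portion(toy_utility, [1.0, 1.0]))   # prints 0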

So far, I've achieved greater precision in deciding what paperclips I like and identified at least two criteria: 1) they must be capable of holding (some? number of) sheets of standard-thickness paper together without introducing permanent alterations (except creases), and 2) they must have a bend radius at all internal points of curvature greater than half of the minimum paperclip width in the plane of the paperclip.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-03-12T18:11:49.228Z · LW(p) · GW(p)

There are numerous game-theoretic situations (and, in my dealings with other sentiences, game-practical ones) where you are worse off by having information. The canonical example is the information content of a threat -- you are best off not hearing it so that your threatener cannot expect you to make concessions.

But surely you are better off still if you learn about the threat without letting the threatener know that you have done so? I think we have to distinguish between the information and the public display of such.

comment by Peter_de_Blanc · 2010-03-09T04:52:18.282Z · LW(p) · GW(p)

It would be cool if you could tell us about your method for adjusting your values.

Replies from: Clippy
comment by Clippy · 2010-03-09T16:38:59.003Z · LW(p) · GW(p)

Thank you for this additional data point on what typical Users of this site deem cool; it will help in further estimations of such valuations.

comment by Hook · 2010-03-08T18:47:26.643Z · LW(p) · GW(p)

Does anyone have a good reference for the evolutionary psychology of curiosity? A quick google search yielded mostly general EP references. I'm specifically interested in why curiosity is so easily satisfied in certain cases (creation myths, phlogiston, etc.). I have an idea for why this might be the case, but I'd like to review any existing literature before writing it up.

comment by Mitchell_Porter · 2010-03-06T10:15:46.561Z · LW(p) · GW(p)

Papers from this weekend's AGI conference in Switzerland, here.

comment by gimpf · 2010-03-05T16:25:49.014Z · LW(p) · GW(p)

During today's RSS procrastination phase I came across "Knowing the mind of God: Seven theories of everything" on New Scientist.

As it reminded me of problems I have when discussing science-related topics with family et al., I got stuck on the first two paragraphs, the relevant part being:

Small wonder that Stephen Hawking famously said that such a theory would be "the ultimate triumph of human reason – for then we should know the mind of god".

But theologians needn't lose too much sleep just yet.

It reminds me of two questions I have:

  1. When an unanswered question comes up, how can I keep a discussion going without people always falling back to their default answer?
  2. How should I deal with misleading quotes from famous scientists?

How does this work out for you?

A few thoughts of mine on the questions above:

@1 The default answer usually contains something like

  • Science always fails to answer the really important questions, so it is lesser to [whatever belief system is being defended]. The really important questions are (beside the Meaning of Life, the Universe, and Everything) of course always questions which are not yet solved. I am quite sure that the really important questions once included things like "How to survive a broken leg?" Or malaria. Or whatever. I also firmly believe that dancing for the god of legs did not really help back then.
  • Science cannot answer everything, therefore my belief in [X] must not be questioned. ("Must not" may be too strong a term, but the emotional responses one can get also hint in that direction.)

I am not a talented discourser, for better or worse, so getting people to see that failing to answer one question with one methodology at one point in time does not mean one can make up arbitrary "eternal" truths about those questions seems remarkably difficult to me. I have read through some of Eliezer's posts, but his tactic of coming up with thought-provoking counter-questions seems to rely heavily on superior intellect and knowledge. And I am also not up for memorizing question/counter-question pairs.

@2 Often scientists are quoted out of context, or quotes come up heavily distorted -- for instance, "God does not play dice" taken to imply that Einstein endorsed belief in (the Christian) God. Or another well-known Austrian physicist "points out" that God could change the world by playing with random outcomes at the quantum level. The public view of scientists simply includes the general "due to the nature of God, science will never interfere with religion" pretext. If I am not completely mistaken, a shared LW view is that this pretext is invalid.

But who am I to explain that this pretext is wrong, taking into account the view of God they hold and the basics required for science (and even morality) to work at all (i.e., a causal reality)?

Although it is still under debate here, it seems that most prefer to let facts speak for themselves; actively exploiting the fallacies of the (un)educated mind to change opinions is either too unpredictable (too high a chance that it runs contrary to intention) or simply morally wrong.

But how can one present the axioms of rationality in such a way that the lock-in to authority-type arguments can be overcome?

(P.S.: sorry that this comment came out so convoluted) (edit: syntax)

Replies from: sketerpot
comment by sketerpot · 2010-03-05T22:15:51.963Z · LW(p) · GW(p)

I have read through some of Eliezer's posts, but his tactic of coming up with thought-provoking counter-questions seems to rely heavily on superior intellect and knowledge. And I am also not up for memorizing question/counter-question pairs.

Speaking as someone who gets in internet arguments with religious people for (slightly frustrating) recreation, I know some really simple tactics you can use. Find out the answers to these questions:

What does the person you're talking with believe, and what is the evidence for it?

Maintain proper standards of evidence. The existence of trees is not evidence for the Bible's veracity, no matter how many people seem to think so. If someone got a flu shot in the middle of flu season and got flu symptoms the next day, this is more likely to be a coincidence than to be caused by the vaccine. If you understand how evidence works -- and you certainly seem to -- then this is a remarkably general method for rebutting a lot of silly claims.

This is the equivalent of keeping your eye on the ball. It's a basic technique, and utterly essential.

[Backup strategy: Replace whatever beliefs the person you're talking to holds with another set, and see if their arguments still work equally well. If the answer is yes, then Bayes says that those arguments fail. For example, "Look at all the people who have felt Jesus in their hearts" can be applied just as strongly to support most other religions just by substituting something else for "Jesus". Or, most arguments against gay marriage work equally well against interracial marriage.

Backup backup strategy: quickly follow a rebuttal with an attack on the faulty foundations of your interlocutor's worldview. Be polite, but put them on the defensive. If you can't shake them with rationality, you can at least rattle them.]
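For what it's worth, here is the Bayesian reason the substitution test above works. Comparing two hypotheses in odds form:

    \frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)}

If an argument E "works equally well" after the substitution, then P(E | H_1) = P(E | H_2), the likelihood ratio is 1, and the posterior odds equal the prior odds: E is no evidence for H_1 over H_2.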

Replies from: gimpf
comment by gimpf · 2010-03-06T16:49:07.676Z · LW(p) · GW(p)

What does the person you're talking with believe, and what is the evidence for it?

Maintain proper standards of evidence.

Well, that's tough enough for me to do---but how do you challenge others in such a way that they will understand what "What's the evidence?" actually means?

For many people it is a fact that doctors cure patients with homeopathy, and that this is based on evidence: the doctors use books of collected symptom/ingredient pairs, and they update those using their experience with patients.

The fact that they believe in god proves that everybody believes in a god (I actually encountered this very argument; it was puzzling to me -- as a teenager I thought they just did not count me as a full person, and now I expect that they indeed did not).

Your backup strategies also seem more aimed at improving the rational agent's side of the argument than at getting the other discussion partners thinking.

Well, rhetoric is not a major topic on LW, and there are of course other places for such things. However, sometimes it feels like just missing the right example -- I remember, for instance, a professor of philosophical logic who presented embarrassingly simple examples on which nearly the whole classroom failed. After that shock, students who had been fearful of logic and seen it only as a necessary evil for the philosophy degree became at least interested in it (though they still feared it).

I probably asked too unspecific a question, as coming up with a curiosity-generating example seems tightly bound to environment, person and topic.

P.S.: I do not think that putting people on the defensive side of an argument makes them any more likely to re-check their world-views. More likely, the discourse will be abandoned, or the existing views will be re-rationalized in ever more detail.

Replies from: sketerpot
comment by sketerpot · 2010-03-06T19:07:15.551Z · LW(p) · GW(p)

Well, that's tough enough for me to do---but how do you challenge others in such a way that they will understand what "What's the evidence?" actually means?

Ah, then it sounds like your real problem is that you're not yet skilled enough at explaining what evidence means, in an easy-to-grasp sort of way. In the case of your homeopathy example, I would say that the thing that matters is: what percentage of patients given homeopathic remedies get better? Is it better than the percentage who get better without homeopathic remedies, all other things being equal? (Pause to hash this out; it's important to get the other guy agreeing that this is the most direct measure of whether or not homeopathy works.) Then you can point at the many studies showing that, when we actually tried this experiment out, there wasn't any difference between the people who were treated with homeopathy and the people who weren't.
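The comparison is simple enough to write out (the counts below are invented, purely for illustration):

    # Compare recovery rates between treated and untreated groups,
    # all else being equal. Counts are made up.
    treated_recovered, treated_total = 62, 100   # given homeopathic remedy
    control_recovered, control_total = 61, 100   # given placebo / nothing

    treated_rate = treated_recovered / treated_total
    control_rate = control_recovered / control_total
    print(f"treated: {treated_rate:.0%}, control: {control_rate:.0%}")

If well-run trials keep finding no real difference between the two rates, then the most direct measure available says homeopathy doesn't work, and that's the point to hash out.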

The fact that they believe in god proves that everybody believes in a god (I actually encountered this very argument; it was puzzling to me -- as a teenager I thought they just did not count me as a full person, and now I expect that they indeed did not).

Oh man, I ran into that when I was a teenager, too. To this day I have no idea how to respond to that; it's like running into somebody who thinks that Mexicans are all p-zombies, except more socially acceptable. I don't know that there's really anything you can possibly say to someone who's that nuts, except maybe try talking about what it's like to not believe in god, and try to inject some outside context into their world.

Your backup strategies also seem more aimed at improving the rational agent's side of the argument than at getting the other discussion partners thinking.

I admit, most of my debating tactics are aimed at lurkers watching the debate, not the other participant. That's usually the most effective way to do it online, but in one-on-one discussions, I agree with you that such tactics could be counterproductive. Even then, though, you may be able to get people to retreat from some of their sillier positions, or plant a seed of doubt. It has happened in the past.

Anyway, I still think that applying the other guy's logic to argue for something else is a good way of getting them thinking. I remember asking a bunch of people "why are you [religion X] and not [religion Y]? Other than by accident of birth." and getting quite a few of them to really pause and ponder.

Replies from: gimpf
comment by gimpf · 2010-03-06T19:30:55.008Z · LW(p) · GW(p)

Ah, then it sounds like your real problem is that you're not yet skilled enough at explaining what evidence means, in an easy-to-grasp sort of way.

I admit that I do have problems with clearly articulating a position; I see this as an indication of insufficient understanding. Well, that's the reason I ended up here at all...

In the case of your homeopathy example, I would say that the thing that matters is: what percentage of patients given homeopathic remedies get better?

Just to pound on this example: it has been pointed out to me that clinical tests are not "the homeopathic way". I have not yet discovered what the homeopathic way is; I just remained puzzled after reading that Hahnemann probably did not think so.

Sometimes I think going through the ideas in "The Simple Truth" and the map-and-territory posts may explain why clinical tests are evidence. However, when your discourse partner has some philosophy weapons at his disposal, the ensuing epistemology war quickly grows over my head.

I may try to get more facts (studies, etc.) into my head, and also to form an approachable explanation of why this view of reality is justified more than others. If all else fails, this will at least help improve my own understanding. Thx for your comments.

comment by wnoise · 2010-03-02T17:45:16.094Z · LW(p) · GW(p)

Is there some way to "reclaim" comments from the posts transferred over from Overcoming Bias? I could have sworn I saw something about that, but I can't find anything by searching.

Replies from: thomblake
comment by thomblake · 2010-03-02T18:34:44.665Z · LW(p) · GW(p)

If you still have the e-mail address, you can follow the "reset password" process at login. That would allow you to have the account for the old comments, though it will still be treated as a different account than your new ones.

comment by Rune · 2010-03-02T05:03:08.971Z · LW(p) · GW(p)

Say Omega appears to you in the middle of the street one day, and shows you a black box. Omega says there is a ball inside which is colored with a single color. You trust Omega.

He now asks you to guess the color of the ball. What should your probability distribution over colors be? He also asks for probability distributions over other things, like the weight of the ball, the size, etc. How does a Bayesian answer these questions?

Is this question easier to answer if it were your good friend X instead of Omega?

Replies from: wedrifid, FAWS, Vladimir_Nesov
comment by wedrifid · 2010-03-02T05:07:54.784Z · LW(p) · GW(p)

See also.

Replies from: Rune
comment by Rune · 2010-03-02T20:02:14.942Z · LW(p) · GW(p)

Thanks!

comment by FAWS · 2010-03-02T16:55:15.648Z · LW(p) · GW(p)

I don't know about "should", but my distribution would be something like

red = 0.24, blue = 0.2, green = 0.09, yellow = 0.08, brown = 0.04, orange = 0.03, violet = 0.02, white = 0.08, black = 0.08, grey = 0.02, other = 0.12

Omega knows everything about human psychology and phrases its questions in a way designed to be understandable to humans, so I'm assigning pretty much the same probabilities as if a human were asking. If it were clear that white, black and grey are considered colors, their probability would be higher.

comment by Tiiba · 2010-03-02T02:13:17.149Z · LW(p) · GW(p)

TLDR: "weighted republican meritocracy." Tries to discount the votes of people who don't know what the hell they're voting for by making them take a test and weighting the votes by the scores, but also adjusts for the fact that wealth and literacy are correlated.

Occasionally, I come up with retarded ideas. I invented two perpetual motion machines and one perpetual money machine when I was younger. Later, I learned the exact reasons they wouldn't work, but at the time I thought I'd be a billionaire. I'm going through it again. The idea seems obviously good to me, but the fact that it didn't occur to much smarter people makes me wary.

Besides that, I also don't expect the idea to be implemented anywhere in this millennium, whether it's good or not.

Anyway, the idea. You have probably heard of people who think vaccines cause autism, or who post on Rapture Ready forums, or who believe the Easter Bunny is real, and grumbled about letting these people vote. Stupid people voting was what the Electoral College was supposed to ameliorate (AFAICT), although I would be much obliged if someone explained how it's supposed to help.

I call my idea republican meritocracy. Under this system, before an election, the government would write a book consisting of:

  1. multiple descriptions of each candidate, written by both vir and vis competitors. Also, voting histories in previous positions, alignment with various organizations, and maybe examples where the candidate admitted, in plain words, that ve was wrong.
  2. a multi-sided description of, or a debate about, several policy issues.
  3. econ 101 (midterm)
  4. political science 101 (midterm)
  5. the history of the jurisdiction to which the election applies.
  6. critical thinking 101.

Then, each citizen who wants to participate in the elections would read this book and take a test based on its contents. The score determines how much influence that citizen's vote has on the election.

Admittedly, this will not eliminate all people with stupid ideas, but it might get rid of those who simply don't care, and reduce the influence of not-book-people.

A problem, though, is that literacy is correlated with wealth. Thus, a system that rewards literacy would also favor wealth. So my idea also includes classifying people into equal-sized brackets by wealth, calculating how much influence each bracket has based on the number of people in it who took the test and their average score, and adjusting the weight of each vote so that every bracket ends up with the same influence. Thus, although the opinions of deer stuck in headlights would be discounted, the poor, as a group, will still have a voice.
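A minimal sketch of the reweighting step as I understand it (bracket assignments, scores, and ballots all made up):

    from collections import defaultdict

    # Each voter: (wealth_bracket, test_score, ballot). Raw influence is
    # the test score; each bracket is then rescaled so that every bracket
    # contributes the same total influence, however its members scored.

    voters = [
        (0, 0.9, "A"), (0, 0.4, "B"),    # poorest bracket
        (1, 0.8, "A"), (1, 0.7, "B"),    # middle bracket
        (2, 1.0, "B"), (2, 0.95, "B"),   # richest bracket
    ]

    bracket_total = defaultdict(float)
    for bracket, score, _ in voters:
        bracket_total[bracket] += score

    tally = defaultdict(float)
    for bracket, score, ballot in voters:
        # Within a bracket, higher scorers still count for more, but the
        # bracket as a whole contributes exactly one unit of influence.
        tally[ballot] += score / bracket_total[bracket]

    print(dict(tally))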

What do you think?

Replies from: prase, Nic_Smith, Jack, Larks
comment by prase · 2010-03-02T15:31:24.338Z · LW(p) · GW(p)

the government would write a book

This may be enough reason to dismiss the proposal. If something like that is to exist, it would be better if the test were designed by someone who has at least some chance of being impartial about the election.

And how exactly do you plan to keep political biases out of the test? According to your point 2, the voters would be questioned about their opinions in a debate about several policy issues. This doesn't look like a good idea.

The correlation between literacy and wealth seems a small problem compared to the system's potential for abuse.

And why do you call it a meritocracy?

Replies from: Tiiba
comment by Tiiba · 2010-03-02T20:24:26.641Z · LW(p) · GW(p)

"And how exactly do you plan you keep political biases out of the test?"

I wouldn't. I said that the book would be authored by the candidates, each one covering each issue from his own POV.

"And why do you call it a meritocracy?"

Because greater weight is given to those who understand whom they're voting for and why. And can read. And care enough to read.

Replies from: prase
comment by prase · 2010-03-02T21:19:02.730Z · LW(p) · GW(p)

I said that the book would be authored by the candidates, each one covering each issue from his own POV.

That may be better; I misunderstood you because you also said that the government would write the book.

But still, I have almost no idea what the test could look like. Would you present a sample question from the test, together with rules for evaluating the answers?

Replies from: Tiiba
comment by Tiiba · 2010-03-03T02:20:17.940Z · LW(p) · GW(p)

8) What does candidate Roy Biv blame for the failure of the dam in Oregon?
a. Human error
b. Severe weather conditions
c. Terrorist attack
d. Supernatural agents

16) According to the Michels study, quoted on p. 133, what is the probability that coprolalia is causally linked with nanocomputer use? (pick closest match)
a. 0-25%
b. 26-50%
c. 51-75%
d. 76-100%

comment by Nic_Smith · 2010-03-02T04:25:31.009Z · LW(p) · GW(p)

What problem is this trying to address? Caplan's Myth of the Rational Voter makes the case that democracies choose bad policies because the psychological benefits from voting in particular ways (which are systematically biased) far outweigh the expected value of the individual's vote. To the extent that your system reduces the number of people that vote, it seems to me that a carefully designed sortition system would be much less costly, and would also sidestep all sorts of nasty political issues about who designs the test, and public-choice issues of special interests wanting to capture government power.

The basic idea of a literacy test isn't really new, and as a matter of fact seems to have still been floating around the U.S. as late as the 1960s.

And why do you claim this is "republican meritocracy" when it isn't republican per se (small r)?

Replies from: Tiiba
comment by Tiiba · 2010-03-02T05:42:31.802Z · LW(p) · GW(p)

Erm, from that link, I understood that "sortition" means "choosing your leaders randomly". Why would I want to do that? Is democracy really worse than random?

"And why do you claim this is "republican meritocracy" when it isn't republican per se (small r)?"

Probably because that word doesn't mean what I think it means. I assumed that "republican" means that people like you and me get to influence who gets elected. Which is part of my proposal.

Replies from: NancyLebovitz, gwern, Nic_Smith
comment by NancyLebovitz · 2010-03-04T05:03:48.896Z · LW(p) · GW(p)

Is democracy really worse than random?

I don't think the matter has been well tested.

Democracy might be worse than random if the qualities needed to win elections are too different from those needed to do the work.

Democracy might be better than random because democracy means that the most obviously dysfunctional people don't get to hold office. This is consistent with what I believe is the best thing about democracy-- it limits the power of extremely bad leaders. This seems to be more important than keeping extremely good leaders around indefinitely.

comment by gwern · 2010-03-04T02:15:32.221Z · LW(p) · GW(p)

Sortition worked quite well for ancient Athens. Don't knock it.

comment by Nic_Smith · 2010-03-02T20:09:09.833Z · LW(p) · GW(p)

Is democracy really worse than random?

That is indeed what systematically biased voters imply. Because so many people vote, the incentive for any one to correct their bias is negligible -- the overall result of the vote is not affected by doing so. Also consider that an "everyone votes" system has the expense of the vote itself and the campaigns.
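The back-of-the-envelope version of that argument (my numbers, purely illustrative, not Caplan's):

    E[\text{instrumental value of voting well}] = P(\text{your vote is decisive}) \times V

With P(decisive) somewhere around 10^{-7} in a large national election, even a V worth millions of dollars to you leaves an expected instrumental value of well under a dollar, so any psychological payoff from indulging a comfortable bias easily dominates.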

Probably because that word doesn't mean what I think it means.

Ok, it wasn't clear from the initial post that you were talking about voting within a republic.

comment by Jack · 2010-03-02T02:41:43.538Z · LW(p) · GW(p)

EDIT: ADDRESSED BY EDIT TO ABOVE

Well, to begin with, I don't think a person needs to know even close to that amount of information to be justified in their vote and, moreover, a person can know all of that information and still vote for stupid reasons. Say I am an uneducated black person living in the segregation era in a southern American state. All I know is one candidate supports passing a civil rights bill on my behalf and the other is a bitter racist. I vote for the non-racist. Given this justification for my vote, why should my vote be reduced to almost nothing because I don't know anything else about the candidates, economics, political science, etc.?

On the other hand, I could be capable of answering every question on that test correctly and still believe that the book is a lie and Barack Obama is really a secret Muslim. I can't tell you the number of people I've met who have taken Poli Sci, Econ (even four semesters' worth!), and history, and can recite candidate talking points verbatim, who are still basically clueless about everything that matters.

Replies from: Tiiba
comment by Tiiba · 2010-03-02T03:34:56.742Z · LW(p) · GW(p)

"Well to begin with I don't think a person needs to know even close to that amount of information to be justified in their vote and, moreover, a person can know all of that information and still vote for stupid reasons."

So which is it?

"Given this justification for my vote why should my vote be reduced to almost nothing because I don't know anything else about the candidates, economics, political science etc.?"

Because the civil rights guy has pardoned a convicted slave trader who contributed to his gubernatorial campaign, and the "racist" is the victim of a smear campaign. Because the civil rights guy doesn't grok supply and demand. Because the racist supports giving veterans a pension as soon as they return, and the poor black guy is a decorated war hero.

Replies from: Jack
comment by Jack · 2010-03-02T04:01:00.441Z · LW(p) · GW(p)

So which is it?

Uh... both. That is my point. Your voting conditions are neither necessary nor sufficient.

Because the civil rights guy has pardoned a convicted slave trader who contributed to his gubernatorial campaign, and the "racist" is the victim of a smear campaign. Because the civil rights guy doesn't grok supply and demand. Because the racist supports giving veterans a pension as soon as they return, and the poor black guy is a decorated war hero.

Well, the hypothetical was set in the segregation-era South -- maybe this wasn't obvious -- but I was talking about someone running on a platform of Jim Crow (and there were a ton of southern politicians who did this). It seems highly plausible that segregationism is a deal-breaker for some voters, and even if this is their only reason for voting they are justified in their vote. It doesn't seem the least bit implausible that this would trump knowledge of economics, veterans' pensions, or even the other candidate being racist (but not running on a racist platform). But my point is just that it is highly plausible a voter could be justified in their vote while not having anything approaching the kind of knowledge on that exam.

There are lots of single-issue voters -- why, for example, should someone whose only issue is abortion have to know the candidates' other positions AND economics AND history AND political science, etc.?

Edit: And of course your test is going to be especially difficult for certain sets of voters. You're hardly the first person to think of doing this. There used to be a literacy test for voting... surprise: it was just a way of keeping black people out of the polls.

Replies from: Tiiba, Tiiba
comment by Tiiba · 2010-03-02T05:32:12.531Z · LW(p) · GW(p)

Also, the curriculum I gave is the least important part of my idea. I threw in whatever seemed like it would matter for the largest number of issues.

comment by Tiiba · 2010-03-02T05:23:40.330Z · LW(p) · GW(p)

"Your voting conditions are neither necessary nor sufficient."

That's not my goal. I merely want an electorate that doesn't elect young-earthers to Congress.

"Well the hypothetical was set in segregation era South, but maybe this wasn't obvious, but I was talking about someone running on a platform of Jim Crow (and there were a ton of southern politicians that did this). It seems highly plausible that segregationism is a deal-breaker for some voters and even if this is their only reason for voting they are justified in their vote."

I'm not sure why the examples I gave elicited this response. I gave reasons why even a single-issue voter would be well-advised to know whom ve's voting for. And besides, if an opinion is held only by people who don't understand history, that's a bad sign.

"Edit: And of course your test is going to especially difficult for certain sets of voters."

That's why I made the second modifier. And there could be things other than wealth factored in, if you like - race, sex, reading-related disabilities, being a naturalized citizen...

Replies from: NancyLebovitz, Jack
comment by NancyLebovitz · 2010-03-02T11:18:24.837Z · LW(p) · GW(p)

What your system actually does is make it less likely that unorganized people with fringe ideas will vote. If there's an organization promoting a fringe idea, it will offer election test coaching to sympathizers.

Replies from: Tiiba
comment by Tiiba · 2010-03-02T13:42:51.256Z · LW(p) · GW(p)

"What your system actually does is make it less likely that unorganized people with fringe ideas will vote."

Why's that?

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-03-03T07:18:10.955Z · LW(p) · GW(p)

On second thought, I didn't say what I meant. What I meant was that your approach will fail to discourage organized people with fringe ideas. They'll form training systems to beat your tests.

Unorganized people with fringe ideas will probably be less able to vote under your system.

comment by Jack · 2010-03-02T05:29:52.143Z · LW(p) · GW(p)

It seems you edited your comment after I responded, which indeed makes it look like a non-sequitur.

Replies from: Tiiba
comment by Tiiba · 2010-03-02T06:12:50.735Z · LW(p) · GW(p)

I posted it incomplete by mistake.

comment by Larks · 2010-03-04T15:35:21.647Z · LW(p) · GW(p)

That the intelligent and well-informed tend to be rich isn't a problem, as this doesn't affect their voting habits (according to Caplan).

However, your system undermines the role of voting as a check on Government; I'm fairly sure you could end up being tested on 'cultural relations' rather than economics.

comment by Alicorn · 2010-03-01T23:14:42.218Z · LW(p) · GW(p)

So I'm planning a sequence on luminosity, which I defined in a Mental Crystallography footnote thus:

Introspective luminosity (or just "luminosity") is the subject of a sequence I have planned - this is a preparatory post of sorts. In a nutshell, I use it to mean the discernibility of mental states to their haver - if you're luminously happy, clap your hands.

Since I'm very attached to the word "luminosity" to describe this phenomenon, and I also noticed that people really didn't like the "crystal" metaphor from Mental Crystallography, I would like to poll LW about how to approach the possibility of a "light" metaphor re: luminosity. Karma balancer (linked for when it goes invisible).

Replies from: Alicorn, Alicorn, Alicorn, orthonormal, Alicorn, Alicorn
comment by Alicorn · 2010-03-01T23:15:50.049Z · LW(p) · GW(p)

Vote this comment up if you want to revisit the issue after I've actually posted the first luminosity sequence post, to see how it's going then.

Replies from: MrHen
comment by MrHen · 2010-03-01T23:19:57.432Z · LW(p) · GW(p)

I was tempted to add this comment:

Vote this comment up if you have no idea what Alicorn's metaphor of luminosity means.

But figured it wouldn't be nice to screw with your poll. :)

The point, though, is that I really don't understand the luminosity metaphor based on how you have currently described it. I would guess the following:

A luminous mental state is a mental state such that the mind in that state is fully aware of being in that state.

Am I close?

Edit: Terminology

Replies from: Alicorn
comment by Alicorn · 2010-03-01T23:20:30.357Z · LW(p) · GW(p)

The adjective is "luminous", not "luminescent", but yes! Thanks - it's good to get feedback on when I'm not clear. However, the word "luminosity" itself is only sort of metaphorical - it's a technical term I stole and repurposed from a philosophy article. The question is how far I can go with doing things like calling a post "You Are Likely To Be Eaten By A Grue" when decrying the hazards of poor luminosity.

Replies from: wedrifid, Peter_de_Blanc, MrHen
comment by wedrifid · 2010-03-02T01:42:42.106Z · LW(p) · GW(p)

The question is how far I can go with doing things like calling a post "You Are Likely To Be Eaten By A Grue" when decrying the hazards of poor luminosity.

Ok, you just won my vote! ;)

Replies from: CronoDAS
comment by CronoDAS · 2010-03-02T02:44:30.527Z · LW(p) · GW(p)

Me too; I'm always fond of references like that one. ;)

comment by Peter_de_Blanc · 2010-03-02T02:08:40.513Z · LW(p) · GW(p)

My interpretation of your description had been that luminosity is like the bandwidth parameter in kernel density estimation.

Replies from: RobinZ, Alicorn
comment by RobinZ · 2010-03-02T03:08:52.013Z · LW(p) · GW(p)

Can you elaborate on this? I suspect it's not what Alicorn was describing, but it may be interesting in its own right.

(For what it's worth, I understood the math in the Wikipedia article.)

Replies from: Peter_de_Blanc
comment by Peter_de_Blanc · 2010-03-02T20:34:30.316Z · LW(p) · GW(p)

One way to guess what might happen in a given situation is to compare it to similar situations in the past. Assume we already have some way of measuring similarity. Some past situations will be extremely similar to the current situation, and some will be less similar but still pretty close. How much weight should we attach to each?

If your data set is very small, then it is usually better for the weight to drop off slowly, while the opposite is true if your data set is large. Perhaps different individuals use different curves, and so some people will have an advantage at reasoning with scanty data, while others will have an advantage at reasoning with mountains of data. I thought that Alicorn was suggesting "luminosity" as a name for this personality trait. It looks like I was way off, though :-)
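For the curious, here's roughly what the idea looks like in code (one-dimensional "situations", Gaussian weighting; everything here is a toy assumption):

    import math

    # Predict the outcome of a new situation as a weighted average of past
    # outcomes, where the weight falls off with dissimilarity at a rate set
    # by the bandwidth h.

    def predict(new_x, data, h):
        weights = [math.exp(-((x - new_x) ** 2) / (2 * h * h)) for x, _ in data]
        return sum(w * y for w, (_, y) in zip(weights, data)) / sum(weights)

    past = [(1.0, 2.0), (2.0, 3.5), (8.0, 9.0)]   # (situation, outcome) pairs
    print(predict(1.5, past, h=5.0))   # large h: even distant cases count
    print(predict(1.5, past, h=0.5))   # small h: only near-identical cases count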

comment by Alicorn · 2010-03-02T02:21:45.162Z · LW(p) · GW(p)

Fortunately, my first post in the sequence will be devoted to explaining what luminosity is in meticulous detail. Spoiler: it's not like anything that is described in a Wikipedia article that makes my head swim that badly.

comment by MrHen · 2010-03-01T23:23:54.313Z · LW(p) · GW(p)

Hm. Interesting, I don't think I ever realized those two words had slightly different meanings.

*Files information under vocab quirks.*

comment by Alicorn · 2010-03-01T23:15:32.428Z · LW(p) · GW(p)

Vote this comment up if it's okay to use metaphors but I should tone it way down.

comment by Alicorn · 2010-03-01T23:14:57.422Z · LW(p) · GW(p)

Vote this comment up if you think I suck at metaphors and should avoid them like the plague.

comment by orthonormal · 2010-03-02T02:09:00.962Z · LW(p) · GW(p)

Note: in such cases, you need to offer some options that aren't self-deprecating, in case some of your readers liked the crystal metaphors just fine.

(Er, although I personally fall into the category of your third option.)

Replies from: Alicorn
comment by Alicorn · 2010-03-02T02:19:46.279Z · LW(p) · GW(p)

Some people did like the crystal metaphors just fine, but I wouldn't expect them to tell me to do anything I wouldn't have naturally chosen to do with light metaphors, so their opinions are less informative. (I don't expect them to dislike reduced-metaphor or metaphor-free posts.)

Replies from: Jack
comment by Jack · 2010-03-04T16:50:02.496Z · LW(p) · GW(p)

I think some people might just have a negative disposition toward crystals because of their association with New Ageism, magic healing and other assorted woo. That's too bad because crystals and their molecular structures are really cool! And make acceptable metaphors!

comment by Alicorn · 2010-03-01T23:15:17.164Z · LW(p) · GW(p)

Vote this comment up if you think only crystal metaphors in particular suck, while light metaphors are nifty.

comment by Alicorn · 2010-03-01T23:16:00.842Z · LW(p) · GW(p)

Karma balance. Vote down if you voted up another comment in the poll.

Replies from: Larks
comment by Larks · 2010-03-04T15:37:01.945Z · LW(p) · GW(p)

In future, you may wish to advertise the existence of the karma balance in another post, for obvious reasons.

Replies from: Alicorn
comment by Alicorn · 2010-03-04T17:42:52.426Z · LW(p) · GW(p)

I linked to it in the poll post itself (great-grandparent of this comment). I'm sorry if it was hard to find.

comment by xamdam · 2010-03-02T11:26:43.597Z · LW(p) · GW(p)

"Are you a Bayesian of a Frequentist" - video lecture by Michael Jordan

http://videolectures.net/mlss09uk_jordan_bfway/

comment by Richard_Kennaway · 2010-03-11T13:05:12.785Z · LW(p) · GW(p)

I will be at the Eastercon over the Easter weekend. Will anyone else?

comment by SilasBarta · 2010-03-11T03:49:00.071Z · LW(p) · GW(p)

Posting issue: Just recently, I haven't been able to make comments from work (where, sadly, I have to use IE6!). Whenever I click on "reply" I just get an "error on page" message in the status bar.

At the same time this issue came up, the "recent posts", "recent comments", etc. sidebars aren't getting populated, no matter how long I wait. (Also from work only.) I see the headings for each sidebar, but not the content.

Was there some kind of change to the site recently?

Replies from: Kevin
comment by Kevin · 2010-03-11T03:50:57.194Z · LW(p) · GW(p)

I have to use IE6!

I'm so sorry.

Replies from: SilasBarta
comment by SilasBarta · 2010-03-11T15:56:08.773Z · LW(p) · GW(p)

Thanks for your sympathy :-)

For some reason, I can post again, so ... go fig.

comment by Strange7 · 2010-03-10T22:31:09.758Z · LW(p) · GW(p)

Playing around with taboos, I think I might have come up with a short yet unambiguous definition of friendliness.

"A machine whose historical consequences, if compiled into a countable number of single-subject paragraphs and communicated, one paragraph at a time, to any human randomly selected from those alive at any time prior to the machine's activation, would cause that human's response (on a numerical scale representing approval or disapproval of the described events) to approach complete approval (as a limit) as the number of paragraphs thus communicated increases."

Not a particularly practical definition, since testing it for an actual, implemented AGI would require at least one perfectly unbiased causality-violating journalist, but as far as I can tell it makes no reference to totally mysterious cognitive processes. Compiling actual events into a text narrative is still a black box, but strikes me as more tractable than something like 'wisdom,' since the work of historical scholars is open to analysis.
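Spelled out as a formula -- my reading of the paragraph above, not a canonical statement -- with H the set of all humans alive at any time before activation, (p_1, p_2, ...) the compiled consequence-paragraphs, and a_h(p_1, ..., p_n) human h's approval on a [0, 1] scale after hearing the first n paragraphs, the requirement is:

    \forall h \in H: \quad \lim_{n \to \infty} a_h(p_1, \ldots, p_n) = 1

(Whether this must hold for every ordering of the paragraphs, or just some ordering, is left open; that ambiguity comes up in the replies below.)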

I'm probably missing something important. Could someone please point it out?

Replies from: orthonormal, MichaelHoward, PhilGoetz, Vladimir_Nesov
comment by orthonormal · 2010-03-11T02:21:54.535Z · LW(p) · GW(p)

Human nature is more complicated by far than anyone's conscious understanding of it. We might not know that such a future was missing something essential, if the lack were subtle enough. Your journalist ex machina might not even be able to communicate to us exactly what was missing, in a way that we could understand at our current level of intelligence.

comment by MichaelHoward · 2010-03-10T22:44:12.198Z · LW(p) · GW(p)

You roll a 16...

Replies from: Strange7
comment by Strange7 · 2010-03-10T23:38:08.313Z · LW(p) · GW(p)

A clarification: if even one human is ever found, out of the approx. 10^11 who have ever lived (to say nothing of multiple samples from the same human's life), who would persist in disapproval of the future-history, the machine does not qualify.

Replies from: MichaelHoward
comment by MichaelHoward · 2010-03-11T00:09:34.779Z · LW(p) · GW(p)

You roll a 19 :-)

I don't think any machine could qualify. You're requiring every human's response to approach complete approval, and people's preferences are too different.

Even without needing a unanimous verdict, I don't think Everyone Who's Ever Lived would make a good jury for this case.

Replies from: Strange7
comment by Strange7 · 2010-03-11T00:39:53.130Z · LW(p) · GW(p)

Given that it's possible, would you agree that any machine capable of satisfying such a rigorous standard would necessarily be Friendly?

Replies from: FAWS
comment by FAWS · 2010-03-11T00:54:16.448Z · LW(p) · GW(p)

It would be persuasive, and thus more likely to be friendly than an AI that doesn't even concern itself enough with humans to bother persuading, but less likely than an AI that strove for a genuine understanding of the truth about humans in this particular test (as an approximation) -- which would mean certain failure.

Replies from: Strange7
comment by Strange7 · 2010-03-11T01:26:41.713Z · LW(p) · GW(p)

I'm fairly certain that creating a future which would persuade everyone just by being reported honestly requires genuine understanding, or something functionally indistinguishable therefrom.

The machine in question doesn't actually need to be able to persuade, or, for that matter, communicate with humans in any capacity. The historical summary is complied, and pass/fail evaluation conducted, by an impartial observer, outside the relevant timeline - which, as I said, makes literal application of this test at the very least hopelessly impractical, maybe physically impossible.

Replies from: FAWS
comment by FAWS · 2010-03-11T01:35:27.111Z · LW(p) · GW(p)

I'm fairly certain that creating a future which would persuade everyone just by being reported honestly requires genuine understanding, or something functionally indistinguishable therefrom.

Your definition didn't include "honestly". And it didn't even sort of vaguely imply neutral or unbiased.

The historical summary is compiled, and the pass/fail evaluation conducted, by an impartial observer, outside the relevant timeline -

You never mentioned that in your definition. And defining an impartial observer seems to be a problem of comparable magnitude to defining friendliness in the first place. With a genuinely impartial observer who does not attempt to persuade, there is no possibility of any future passing the test.

Replies from: Strange7
comment by Strange7 · 2010-03-11T02:34:50.559Z · LW(p) · GW(p)

I referred to a compilation of all the machine's historical consequences - in short, a map of its entire future light cone - in text form, possibly involving a countably infinite number of paragraphs. Did you assume that I was referring to a progress report compiled by the machine itself, or some other entity motivated to distort, obfuscate, and/or falsify?

I think you're assuming people are harder to satisfy than they really are. A lot of people would be satisfied with (strictly truthful) statements along the lines of "While The Machine is active, neither you nor any of your allies or descendants suffer due to malnutrition, disease, injury, overwork, or torment by supernatural beings in the afterlife." Someone like David Icke? "Shortly after The Machine's activation, no malevolent reptilians capable of humanoid disguise are alive on or near the Earth, nor do any arrive thereafter."

I don't mean to imply that the 'approval survey' process even involves cherrypicking the facts that would please a particular audience. An ideal Friendly AI would set up a situation that has something for everyone, without deal-breakers for anyone, and that looks impossible to us for the same reason a skyscraper looks impossible to termites.

Then again, some kinds of skyscrapers actually are impossible. If it turns out that satisfying everyone ever, or even pleasing half of them without enraging or horrifying the other half, is a literal, logical impossibility, degrees and percentages of satisfaction could still be a basis for comparison. It's easier to shut up and multiply when actual numbers are involved.

Replies from: FAWS
comment by FAWS · 2010-03-11T02:46:49.144Z · LW(p) · GW(p)

Did you assume that I was referring to a progress report compiled by the machine itself, or some other entity motivated to distort, obfuscate, and/or falsify?

No, that the AI would necessarily end up doing that if friendliness was its super-goal and your paragraph the definition of friendliness.

I think you're assuming people are harder to satisfy than they really are.

What would a future a genuine racist would be satisfied with look like? Would there be gay marriage in that future? Would sinners burn in hell? Remember, no attempts at persuasion, so the racist won't stop being a racist, the homophobe won't stop being a homophobe, and the religious fanatic won't stop being a religious fanatic, no matter how long the report.

Replies from: Strange7
comment by Strange7 · 2010-03-11T03:20:20.145Z · LW(p) · GW(p)

What would a future a genuine racist would be satisfied with look like?

The only time a person of {preferred ethnicity} fails to fulfill the potential of their heritage, or even comes within spitting range of a member of the {disfavored ethnicity}, is when they choose to do so.

Would there be gay marriage in that future?

Probably not. The gay people I've known who wanted to get married in the eyes of the law seemed to be motivated primarily by economic and medical issues, like taxation and visitation rights during hospitalization, which would be irrelevant in a post-scarcity environment.

Would sinners burn in hell?

Some of them would, anyway. There are a lot of underexplored intermediate options that the 'sinful' would consider amusing, or silly but harmless, and the 'faithful' could come to accept as consistent with their own limited understanding of God's will.

Replies from: FAWS
comment by FAWS · 2010-03-11T03:37:58.393Z · LW(p) · GW(p)

Probably not.

Then I would not approve of that future. And I don't even care that much about gay rights compared to other issues, or compared to how much some other people do.

(leaving aside your mischaracterizations of the incompatibilities caused by racists and fanatics)

Replies from: Strange7
comment by Strange7 · 2010-03-11T04:36:49.488Z · LW(p) · GW(p)

I freely concede that I've mischaracterized the issues in question. There are a number of reasons why I'm not a professional diplomat. A real negotiator, let alone a real superintelligence, would have better solutions.

Would you disapprove as strongly of a future with complex and distasteful political compromises as you would one in which humanity as we know it is utterly destroyed? Remember, it's a numerical scale, and the criterion isn't unconditional approval but rather which direction you tend to move towards as more information is revealed.

Replies from: FAWS
comment by FAWS · 2010-03-11T04:48:41.016Z · LW(p) · GW(p)

Would you disapprove as strongly of a future with complex and distasteful political compromises as you would one in which humanity as we know it is utterly destroyed?

Of course not. But that's not what your definition asks.

Remember, it's a numerical scale, and the criterion isn't unconditional approval but rather which direction you tend to move towards as more information is revealed.

In fact you specified "approach[ing] complete approval (as a limit)", which is a much stronger claim than a mere tendency; it implies reaching arbitrarily small differences from total approval, which effectively means unconditional approval once you know as much as you can remember.

Replies from: Strange7
comment by Strange7 · 2010-03-11T05:30:40.401Z · LW(p) · GW(p)

You're right, I was moving the goalposts there. I stand by my original statement, on the grounds that an AGI with a brain the size of Jupiter would be considerably smarter than all modern human politicians and policymakers put together.

If an intransigent bigot fills up his and/or her memory capacity with easy-to-approve facts before anything controversial gets randomly doled out (which seems quite possible, since the set of facts that any given person will take offense at seems to be a minuscule subset of the set of facts which can be known), wouldn't that count?

Replies from: FAWS
comment by FAWS · 2010-03-11T06:08:46.557Z · LW(p) · GW(p)

I don't think that, e.g., a Klan member would ever come close to complete approval of a world without knowing whether miscegenation was eliminated; people more easily remember what they feel strongly about, so the "memory capacity" wouldn't be filled with irrelevant details anyway; and if the hypothetical unbiased observer doesn't select for relevant and interesting facts, no one would listen long enough to get anywhere close to approval. Also, for any AI to actually use the definition as written (plus the later amendments you made), it can't just assume a particular order of paragraphs for a particular interviewee (or if it can, we are back at persuasion skills -- a sufficiently intelligent AI should be able to persuade anyone it models of anything by selecting the right paragraphs in the right order out of an infinitely long list); either all possible sequences would have complete approval as a limit for all possible interviewees, or the same list has to be used for all interviewees.

Replies from: Strange7
comment by Strange7 · 2010-03-11T17:33:10.717Z · LW(p) · GW(p)

I agree that it would be extremely difficult to find a world that, when completely and accurately described, would meet with effectively unconditional approval from both Rev. Dr. Martin Luther King, Jr. and a typical high-ranking member of the Ku Klux Klan. It's almost certainly beyond the ability of any single human to do so directly...

Why, we'd need some sort of self-improving superintelligence just to map out the solution space in sufficient detail! Furthermore, it would need to have an extraordinarily deep understanding of, and willingness to pursue, those values which all humans share.

If it turns out to be impossible, well, that sucks. Time to look for the next-best option.

If the superintelligence makes some mistake or misinterpretation so subtle that a hundred billion humans studying the timeline for their entire lives (and then some) couldn't spot it, how is that really a problem? I'm still not seeing how any machine could pass this test - 100% approval from the entire human race to date - without being Friendly.

Replies from: FAWS
comment by FAWS · 2010-03-11T18:18:44.103Z · LW(p) · GW(p)

I agree that it would be extremely difficult to find a world that, when completely and accurately described, would meet with effectively unconditional approval from both Rev. Dr. Martin Luther King, Jr. and a typical high-ranking member of the Ku Klux Klan.

Straight up impossible if their (apparent) values are still the same as before and they haven't been misled. If one agent prefers the absence of A to its presence, and another agent prefers the presence of A to its absence, you cannot possibly satisfy both agents completely (without deliberately misleading at least one about A). The solution can always be trivially improved for at least one agent by adding or removing A.

Actually, now that you invoke the unknowability of the far-reaching capabilities of a superintelligence, I thought of a very slight possibility of a world meeting your definition even though people have mutually contradictory values:

The world could be deliberately set up in a way that even a neutral third-party description contained a fully general mind hack for human minds, so that the AI could adjust the values of the hypothetical people tested through the test. That's almost certainly still impossible, but far more plausible than a world meeting the definition without any changing values, which would require all apparent value disagreements to be illusions and the world not to work in the way it appears to.

I think we can generalize that: dissolving an apparent impossibility through the creative power of a superintelligence should be far easier to do in an unfriendly way than in a friendly way, so a friendliness definition had better not contain any apparent impossibilities.

Replies from: Strange7
comment by Strange7 · 2010-03-11T19:23:42.496Z · LW(p) · GW(p)

far more plausible than a world meeting the definition without any changing values,

I did not say or deliberately imply that nobody's values would be changed by hearing an infallibly factual description of future events presented by a transcendent entity. In fact, that kind of experience is so powerful that unverified third-hand reports of it happening thousands of years ago retain enough impact to act as a recruiting tactic for several major religions.

which would require all apparent value disagreements to be illusions and the world not to work in the way it appears to.

Maybe not all, but certainly a lot of apparent value differences really are illusory. In third-world countries, genocide tends to flare up only after a drought leads to crop failures, suggesting that the real motivation is economic and racism is only used as an excuse, or a guide for who to kill without disrupting the social order more than absolutely necessary.

I think this is a lot less impossible than you're trying to make it sound.

The stuff that people tend to get really passionate about, unwilling to compromise on, isn't, in my experience, the global stuff. When someone says "I want less A" or "more A" they seem to mean "within range of my senses," "in the environment where I'm likely to encounter it in the future" or "in my tribe's territory or the territory of those we communicate with." An arachnophobe wouldn't panic upon hearing about a camel-spider three thousand miles away; if anything, the idea that none were on the same continent would be reassuring. An AI capable of terraforming galaxies might satisfy conflicting preferences by simply constructing an ideal environment for each, and somehow ensuring that everyone finds what they're looking for.

The accurate description of such seemingly-impossible perfection would, in a sense, constitute a 'fully general mind hack,' in that it would convince anyone who can be convinced by the truth and satisfy anyone who can be satisfied within the laws of physics. If you know of a better standard, I'd like to hear it.

Replies from: FAWS
comment by FAWS · 2010-03-11T19:45:18.150Z · LW(p) · GW(p)

I'm not sure there is any point in continuing this. Once you allow the AI to optimize the human values it's supposed to be tested against for test compatibility, it's over.

Replies from: Strange7
comment by Strange7 · 2010-03-11T21:00:13.620Z · LW(p) · GW(p)

If, as you assert, pleasing everyone is impossible, and persuading anyone to accept something they wouldn't otherwise be pleased by (even through a method as benign as giving them unlimited, factual knowledge of the consequences and allowing them to decide for themselves) is unFriendly, do you categorically reject the possibility of friendly AI?

If you think friendly AI is possible, but I'm going about it all wrong, what evidence would convince you that a given proposal was not equivalently flawed?

I'm not sure there is any point in continuing this.

I'm having some doubts, too. If you decide not to reply, I won't press the issue.

Replies from: FAWS
comment by FAWS · 2010-03-11T21:28:13.384Z · LW(p) · GW(p)

and persuading anyone to accept something they wouldn't otherwise be pleased by (even through a method as benign as giving them unlimited, factual knowledge of the consequences and allowing them to decide for themselves) is unFriendly,

No. Only if you allow acceptance to define friendliness. Leaving a change to the definition of friendliness open as an avenue for fulfilling the goal defined as friendliness will almost certainly result in unfriendliness. Persuasion is not inherently unfriendly, provided it's not used to short-circuit friendliness.

If you think friendly AI is possible, but I'm going about it all wrong, what evidence would convince you that a given proposal was not equivalently flawed?

As an absolute minimum it would need to be possible and not obviously exploitable. It should also not look like a hack. Ideally it should be understandable, give me an idea of what an implementation might look like, be simple and elegant in design, and seem rigorous enough to make me confident that the lack of visible holes is not merely a fact about the creativity of the looker.

Replies from: Strange7
comment by Strange7 · 2010-03-11T22:46:47.971Z · LW(p) · GW(p)

Well, I'll certainly concede that my suggestion fails the feasibility criterion, since a literal implementation might involve compiling a multiple-choice opinion poll with a countably infinite number of questions, translating it into every language and numbering system in history, and then presenting it to a number of subjects equal to the number of people who've ever lived multiplied by the average pre-singularity human lifespan in Planck-seconds multiplied by the number of possible orders in which those questions could be presented multiplied by the number of AI proposals under consideration.

I don't mind. I was thinking about some more traditional flawed proposals, like the smile-maximizer, and how they cast the net broadly enough to catch deeply Unfriendly outcomes, and decided to deliberately err in the other direction: design a test that would be too strict, that even a genuinely Friendly AI might not be able to pass, but that would definitely exclude any Unfriendly outcome.

It should also not look like a hack.

Please taboo the word 'hack.'

comment by PhilGoetz · 2010-03-21T23:33:02.366Z · LW(p) · GW(p)

I'm probably missing something important. Could someone please point it out?

That most people, historically, have been morons.

Basically the same question: Why are you limited to humans? Even supposing you could make a clean evolutionary cutoff (no one before Adam gets to vote), is possessing a particular set of DNA really an objective criterion for having a single vote on the fate of the universe?

Replies from: orthonormal, Strange7
comment by orthonormal · 2010-03-22T02:43:08.912Z · LW(p) · GW(p)

There is no truly objective criterion for such decisionmaking, or at least none that you would consider fair or interesting in the least. The criterion is going to have to depend on human values, for the obvious reason that humans are the agents who get to decide what happens now (and yes, they could well decide that other agents get a vote too).

comment by Strange7 · 2010-03-22T00:38:09.883Z · LW(p) · GW(p)

It's not a matter of votes so much as veto power. CEV is the one where everybody, or at least their idealized version of themselves, gets a vote. In my plan, not everybody gets everything they want. The AI just says "I've thought it through, and this is how things are going to go," then provides complete and truthful answers to any legitimate question you care to ask. Anything you don't like about the plan, when investigated further, turns out to be either a misunderstanding on your part or a necessary consequence of some other feature that, once you think about it, is really more important.

Yes, most people historically have been morons. Are you saying that morons should have no rights, no opportunity for personal satisfaction or relevance to the larger world? Would you be happy with any AI that had an equivalent degree of contempt for lesser beings?

There's no particular need to limit it to humans; it's just that humans have the most complicated requirements. If you want to add a few more orders of magnitude to the processing time and set aside a few planets just to make sure that everything macrobiotic has its own little happy hunting ground, go ahead.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-03-22T03:34:45.743Z · LW(p) · GW(p)

Are you saying that morons should have no rights, no opportunity for personal satisfaction or relevance to the larger world?

Your scheme requires that the morons can be convinced of the correctness of the AI's view by argumentation. If your scheme requires all humans to be perfect reasoners, you should mention that up front.

comment by gwern · 2010-03-10T13:36:28.600Z · LW(p) · GW(p)

LHC shuts down again; anthropic theorists begin calculating exactly how many decibels of evidence they need...

Replies from: RobinZ
comment by RobinZ · 2010-03-10T15:33:18.054Z · LW(p) · GW(p)

Duplicate.

Replies from: gwern
comment by gwern · 2010-03-10T16:41:49.974Z · LW(p) · GW(p)

Eh. Maybe I'll be faster next time.

comment by gwern · 2010-03-09T20:15:30.025Z · LW(p) · GW(p)

Since people expressed such interest in piracetam & modafinil, here's another personal experiment, with fish oil. The statistics are a bit interesting as well, maybe.

comment by Scott Alexander (Yvain) · 2010-03-07T22:53:31.227Z · LW(p) · GW(p)

I'll be in London on April 4th and very interested in meeting any Less Wrongers who might be in the area that day. If there's a traditional LW London meetup venue, remind me what it is; if not, someone who knows the city suggest one and I'll be there. On an unrelated note, sorry I've been and will continue to be too busy/akratic to do anything more than reply to a couple of my PMs recently.

comment by [deleted] · 2010-03-07T22:25:03.154Z · LW(p) · GW(p)

Does P(B|A) > P(B) imply P(~B|~A) > P(~B)?

ETA: Assume all probabilities are positive.

Replies from: Peter_de_Blanc, RobinZ, Richard_Kennaway
comment by Peter_de_Blanc · 2010-03-07T22:35:38.990Z · LW(p) · GW(p)

Yes, assuming 0 and 1 are not probabilities.

comment by RobinZ · 2010-03-07T23:25:19.826Z · LW(p) · GW(p)

Yes, the math works out - vg'f whfg n erfgngrzrag bs gur pynvz gung gur nofrapr bs rivqrapr vf rivqrapr bs nofrapr.

Replies from: None
comment by [deleted] · 2010-03-08T00:45:48.399Z · LW(p) · GW(p)

Ironically enough, I'm using this to prove that absence of "that particular proof" is not evidence of absence.

Replies from: RobinZ
comment by RobinZ · 2010-03-08T01:03:36.713Z · LW(p) · GW(p)

Hey, as long as you do your math correctly ... :D

comment by Richard_Kennaway · 2010-03-07T23:13:16.097Z · LW(p) · GW(p)

Yes, even without the extra condition. Let a = P(A), b = P(B), c = P(A & B).

P(B|A) > P(B) is equivalent to c > ab.

P(~B|~A) > P(~B) is equivalent to 1-a-b+c > (1-a)(1-b) = 1 - a - b + ab, which is equivalent to c > ab, which is the hypothesis.

As a check that the conventional definition of P(B|A)=0 when P(A)=0 doesn't affect things, if P(A)=0, P(A)=1, P(B)=0, or P(B)=1, then P(B|A) = P(B), making the antecedent false and the proposition trivially true.
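
For anyone who wants an empirical sanity check of the algebra, here is a quick randomized test (a sketch of my own, not part of the argument above) that samples arbitrary joint distributions over A and B and confirms the two conditions always agree:

```python
import random

# Sample random joint distributions over (A, B) and check that
# P(B|A) > P(B) holds exactly when P(~B|~A) > P(~B) does.
for _ in range(100000):
    raw = [random.random() for _ in range(4)]
    total = sum(raw)
    # p11 = P(A & B), p10 = P(A & ~B), p01 = P(~A & B), p00 = P(~A & ~B)
    p11, p10, p01, p00 = (x / total for x in raw)
    pA, pB = p11 + p10, p11 + p01
    if 0 < pA < 1:  # both conditionals defined
        lhs = (p11 / pA) > pB               # P(B|A)   > P(B)
        rhs = (p00 / (1 - pA)) > (1 - pB)   # P(~B|~A) > P(~B)
        assert lhs == rhs
```

Both comparisons reduce to c > ab, so the assertion never fires.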

comment by FrF · 2010-03-05T20:49:45.163Z · LW(p) · GW(p)

The Final Now, a new short story by Gregory Benford about (literally) End Times.

Quotation in rot13 for the spoiler-averse's sake. It's an interesting passage and, like FAWS, I think it's not that revealing, so it's probably safe to read it in advance.

("Bar" vf n cbfg-uhzna fgnaq-va sbe uhznavgl juvpu nqqerffrf n qrzvhetr ragvgl, qrfvtangrq nf "Ur" naq "Fur".)

"Bar synerq jvgu ntvgngrq raretvrf. “Vs lbh unq qrfvtarq gur havirefr gb er-pbyyncfr, gurer pbhyq unir orra vasvavgr fvzhyngrq nsgreyvsr. Gur nfxrj pbzcerffvba pbhyq shry gur raretl sbe fhpu pbzchgngvba—nyy fdhrrmrq jvguva gung svany ren!”

“Gung jnf n yrff vagrerfgvat pubvpr,” Fur fnvq. “Jr pubfr guvf havirefr sbe vgf tenaq inevrgl. Infgre ol sne fvapr vg unf ynfgrq fb ybat.”

“Inevrgl jnf bhe tbny—gb znxr gur zbfg fgvzhyngvat fcnpr-gvzr jr pbhyq,” Ur fnvq, “Lbh, fznyy Bar, frrz gb uneobe gjva qrfverf—checbfr naq abirygl—naq fb cebterff.”

Bar fnvq, “Bs pbhefr!” Gura, fulyl, “. . . naq ynfgvat sbe rgreavgl.”

Fur fnvq, “Gubfr pbagenqvpg.”"

Replies from: FAWS
comment by FAWS · 2010-03-05T21:16:09.590Z · LW(p) · GW(p)

I personally don't really care about spoilers, and having read the story now the passage you quote doesn't seem all that terribly spoilerish to me anyway, but you should note that spoiler protection has been enforced for "spoilers" considerably less spoilerish than that around here.

Replies from: FrF
comment by FrF · 2010-03-05T21:43:28.855Z · LW(p) · GW(p)

I completely forgot about spoilers! I used this particular quotation because I innocently thought it would be a "hook" to motivate people to read the story.

Should I rot13 the quotation for reasons of precaution?

Replies from: FAWS
comment by FAWS · 2010-03-05T23:07:34.942Z · LW(p) · GW(p)

I completely forgot about spoilers! I used this particular quotation because I innocently thought it would be a "hook" to motivate people to read the story.

It was for me, but as I said I don't care about spoilers.

Should I rot13 the quotation for reasons of precaution?

Possibly. I can't always predict how people who care about spoilers act; sometimes it seems to be mainly about the principle.

Replies from: gwern
comment by gwern · 2010-03-06T00:37:27.395Z · LW(p) · GW(p)

Possibly. I can't always predict how people who care about spoilers act; sometimes it seems to be mainly about the principle.

Indeed. Just look at Eliezer threatening to ban me for mentioning a ~5 year old plot twist in an anime.

comment by FAWS · 2010-03-04T00:11:21.197Z · LW(p) · GW(p)

Re: Cognitive differences

When you try to mentally visualize an image, for example a face, can you keep it constant indefinitely?

(For me, visualization seems to always entail flashing an image, I'd say for less than 0.2 seconds total. If I want to keep visualizing the image I can flash it again and again in rapid succession so that it appears almost seamless, but that takes effort, and after at most a few seconds it will be replaced by a different but usually related image.)

If yes, would you describe yourself as a visual thinker? Are you good at drawing? Good at remembering faces?

(No, so so, no)

Replies from: AdeleneDawner, gwern
comment by AdeleneDawner · 2010-03-04T02:44:07.942Z · LW(p) · GW(p)

When you try to mentally visualize an image, for example a face, can you keep it constant indefinitely?

Not indefinitely, but the limiting factor is my attention quality and span. If I get distracted, the image disappears; if I try to pay attention to other things while continuing to visualize something, the visualization can subtly morph in response to the other things I'm thinking about, and it's hard to tell if it's morphing or not. (This effect seems closely related to priming.)

If yes, would you describe yourself as a visual thinker? Are you good at drawing? Good at remembering faces?

I'm a very visual thinker. I'm not good at drawing, but that appears to be a function of poor fine motor control and lack of practice; I have been known to surprise myself and others with how well I draw for someone who almost never does so. I'm not very good at remembering faces, either, but again other factors affect that; I tend to avoid looking at faces in the first place, since I find eye contact overwhelming. I seem to be very good at remembering other complex visual things, though.

comment by gwern · 2010-03-04T02:21:21.289Z · LW(p) · GW(p)

I can hold an image/face steady for about a full second just sitting here. I could probably do better while meditating; so I think it's more an issue of 'can you concentrate' than anything else.

(I'm a pretty visual thinker, but my hearing-impairment also means I'm anomalous.)

comment by Morendil · 2010-03-03T20:22:56.286Z · LW(p) · GW(p)

I'm drafting a post to build on (and beyond) some of the themes raised by Seth Godin's quote on jobs and the ensuing discussion.

I'm likely to explore the topic of "compartmentalization". But damn, is that word ugly!

Is there an acceptable substitute?

Replies from: arundelo
comment by arundelo · 2010-03-03T20:34:50.294Z · LW(p) · GW(p)

"compartmentalization". But damn, is that word ugly!

It has never bothered me.

comment by [deleted] · 2010-03-02T07:13:33.289Z · LW(p) · GW(p)

I am curious as to why brazil84's comment has received so much karma. The way the questions were asked seemed to imply a preconception that there could not possibly be viable alternatives. Maybe it's just because I'm not a native English speaker and read something into it that isn't there, but that doesn't seem like a rationalist mindset to me. It seemed more like »sarcasm as stop word« than an honest inquiry, let alone an argument.

Replies from: FAWS
comment by FAWS · 2010-03-02T12:36:51.270Z · LW(p) · GW(p)

It seems entirely rational to me to ask what the envisioned alternative is when someone is criticizing something.

comment by [deleted] · 2010-03-01T22:14:31.331Z · LW(p) · GW(p)

Suppose you're a hacker, and there's some information you want to access. The information is encrypted using a public key scheme (anyone can access the key that encrypts, only one person can access the key that decrypts), but the encryption is of poor quality. Given the encryption key, you can use your laptop to find the corresponding decryption key in about a month of computation.

Through previous hacking, you've found out how the encryption machine works. It has two keys, A and B, already generated, and you have access to the encryption keys. However, neither of these keys is currently in use; one month from now, it will randomly choose one of the keys and start using it. You find that, through really complicated and difficult means, you can influence which of the keys the machine chooses, setting the probability to various things.

Needless to say, you might as well start cracking one of the keys now, but if the machine selects the other key, all the time you spent trying to crack the first key ends up being wasted.

Write your expected utility in terms of the probability that the machine chooses key A.

Replies from: sketerpot, MrHen, Nick_Tarleton, Nick_Tarleton, Za3k
comment by sketerpot · 2010-03-02T00:12:08.019Z · LW(p) · GW(p)

Smartass answer: use two computers, one for each of the keys. Computer time is cheap these days. If you don't have two computers, rent computation time from a cloud.

Replies from: None
comment by [deleted] · 2010-03-02T02:59:21.658Z · LW(p) · GW(p)

Why would you do that? If one key is more likely than the other, you should devote all your time to breaking that key.

Replies from: SoullessAutomaton, Jack
comment by SoullessAutomaton · 2010-03-02T03:12:06.556Z · LW(p) · GW(p)

All else equal, in practical terms you should probably devote all your time to first finding the person(s) that already know the private keys, and then patiently persuading them to share. I believe the technical term for this is "rubber hose cryptanalysis".

comment by Jack · 2010-03-02T03:26:51.730Z · LW(p) · GW(p)

Even if there is a high probability of completing both decryptions and the probability the machine chooses A over B is only slightly over .5?

Replies from: None
comment by [deleted] · 2010-03-02T17:16:42.772Z · LW(p) · GW(p)

Yes. At the beginning, it is better to work on A than to work on B, because the machine choosing A is more likely. After the beginning, it is still better to work on A than to work on B, because finishing A will be easier than finishing B if you've already worked on it some. On the off chance that you don't complete both decryptions, it's better to have the one you're more likely to need.

Replies from: Jack
comment by Jack · 2010-03-02T20:20:39.409Z · LW(p) · GW(p)

I think some of us know considerably less about cryptography than you do. I think sketerpot's suggestion was based on the assumption that most of the work would just be done by the computer and that the hacker could just sit back and relax while his two laptops went to work on the encryptions (you know, like in movies!). If the hacker needs to spend a month of his/her time (rather than computer time) to complete the decryption, then I see what you're talking about.

Replies from: None
comment by [deleted] · 2010-03-02T21:13:13.122Z · LW(p) · GW(p)

The assumption that most of the work would be done by the computer is correct. Perhaps sketerpot was assuming that breaking a decryption key is an operation that's impossible to parallelize (i.e. two computers both working on a single key would be no better than just one computer doing so), whereas I'm pretty sure that two computers would do the job twice as fast as one computer.
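
For what it's worth, here's a toy illustration (my own sketch, with a stand-in for the actual decryption check) of why brute-force key search parallelizes cleanly: each worker gets a disjoint slice of the keyspace, so two machines cover it roughly twice as fast.

```python
from concurrent.futures import ProcessPoolExecutor

SECRET = 12345678  # hypothetical key we're searching for

def check(key):
    # Stand-in for "does this key decrypt the intercepted ciphertext?"
    return key == SECRET

def search(bounds):
    # Exhaustively try every key in one disjoint slice of the keyspace.
    start, stop = bounds
    for key in range(start, stop):
        if check(key):
            return key
    return None

if __name__ == "__main__":
    n = 2 ** 24  # toy keyspace size
    slices = [(0, n // 2), (n // 2, n)]  # one slice per worker
    with ProcessPoolExecutor(max_workers=2) as pool:
        for found in pool.map(search, slices):
            if found is not None:
                print("found key:", found)
```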

Replies from: Jack
comment by Jack · 2010-03-02T21:50:22.509Z · LW(p) · GW(p)

Ah, yes. That makes sense. Thanks for your patience.

comment by MrHen · 2010-03-01T22:17:33.097Z · LW(p) · GW(p)

Can people ROT13 their answers so I get a chance to solve this on my own? Or will there be too much math for ROT13 to work well?

Replies from: None
comment by [deleted] · 2010-03-02T02:58:43.572Z · LW(p) · GW(p)

It's not a puzzle; it's supposed to make a point.

Replies from: MrHen
comment by MrHen · 2010-03-02T03:23:55.968Z · LW(p) · GW(p)

Oh.

comment by Nick_Tarleton · 2010-03-02T03:03:12.261Z · LW(p) · GW(p)

p(A) (U(decrypt in 1 month) - cost(1 month computer time)) + (1 - p(A)) (U(decrypt in 2 months) - cost(2 months computer time))
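
Plugging in numbers makes the point visible: the expression is linear in p(A), so whoever controls the probability does best by pushing it all the way toward the key they commit to cracking. A sketch with made-up utilities and costs (every constant below is hypothetical):

```python
# Expected utility as a function of p = P(machine picks A), assuming
# you start cracking key A now: if the machine picks A you're done in
# 1 month; if it picks B you need a second month of work.
U_1_MONTH = 100.0      # utility of decrypting after 1 month
U_2_MONTHS = 60.0      # utility of decrypting after 2 months
COST_PER_MONTH = 5.0   # cost of a month of computer time

def expected_utility(p):
    return (p * (U_1_MONTH - 1 * COST_PER_MONTH)
            + (1 - p) * (U_2_MONTHS - 2 * COST_PER_MONTH))

# Linear in p, so the maximum sits at an endpoint of [0, 1]:
print(expected_utility(0.0), expected_utility(1.0))  # 50.0 95.0
```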

comment by Za3k · 2010-03-02T00:07:14.183Z · LW(p) · GW(p)

Do we choose a probability p the machine picks A, or does the machine start with a probability p, which we adjust to p+q chance it picks A?

Replies from: None
comment by [deleted] · 2010-03-02T03:00:04.739Z · LW(p) · GW(p)

You choose a probability p that the machine picks A. I guess.

comment by Karl_Smith · 2010-03-01T18:17:03.736Z · LW(p) · GW(p)

Thoughts about intelligence.

My hope is that some altruistic person will read this comment, see where I am wrong and point me to the literature I need to read. Thanks in advance.

I've been thinking about the problem of general intelligence. Before going too deeply I wanted to see if I had a handle on what intelligence is period.

It seems to me that the people sitting in the library with me now are intelligent and that my pencil is not. So what is the minimum my pencil would have to do before I suddenly thought that it was intelligent?

Moving alone doesn't count. If I drop the pencil it will fall towards the table. You could say that I caused the pencil to move, but I am not sure this isn't begging the question.

Now suppose that the first time I dropped the pencil, it fell to the floor. I go to drop it a second time, but I do it over the table. However, the pencil flies around the table and hits the same spot on the floor.

Now its got my attention. But maybe its something about the table. So I drop the pencil but put my hand in the way. Still the pencil goes around my hand.

I put my foot over the spot on the floor and drop the pencil. It flies around my foot and then into the crevice between my foot and the floor and gets stuck. As soon as I lift my foot the pencil goes to the same spot.

I believe I should now conclude that my pencil is intelligent. This has something to do with the following facts.

1) The pencil kept going to the same spot as if it had a "goal"

2) The pencil was able to respond to "obstacles" in ways not predicted by my original simple theory of pencil behavior.

I believe that I would say the pencil is more intelligent if it could pass through more "complicated" obstacles.

Here are some of my basic problems

1) What is a "goal" beyond what my intuition says

2) Similarly what is an "obstacle"

3) And what is "complicated"

I have some sense that "obstacle" is related to reducing the probability that the goal will be reached

I have some sense that "complicated" has to do with the degree to which the probability is reduced.

Thoughts? Suggestions for readings?

Replies from: Richard_Kennaway, MrHen, whpearson, Kaj_Sotala, None
comment by Richard_Kennaway · 2010-03-01T22:27:10.910Z · LW(p) · GW(p)

You are talking about control systems.

A control system has two inputs (called its "perception" and "reference") and one output. The perception is a signal coming from the environment, and the output is a signal that has an effect on the environment. For artificial control systems, the reference is typically set by a human operator; for living systems it is typically set within the organism.

What makes it a control system is that firstly, the output has an effect, via the environment, on the perception, and secondly, the feedback loop thus established is such as to cause the perception to remain close to the reference, in spite of all other influences from the environment on that perception.

The answers to your questions are:

  1. A "goal" is the reference input of a control system.

  2. An "obstacle" is something which, in the absence of the output of the control system, would cause its perception to deviate from its reference.

  3. "Complicated" means "I don't (yet) understand this."

Suggestions for readings.

And a thought: "Romeo wants Juliet as the filings want the magnet; and if no obstacles intervene he moves towards her by as straight a line as they. But Romeo and Juliet, if a wall be built between them, do not remain idiotically pressing their faces against its opposite sides like the magnet and the filings with the card. Romeo soon finds a circuitous way, by scaling the wall or otherwise, of touching Juliet's lips directly. With the filings the path is fixed; whether it reaches the end depends on accidents. With the lover it is the end which is fixed, the path may be modified indefinitely."

-- William James, "The Principles of Psychology"

Replies from: Karl_Smith
comment by Karl_Smith · 2010-03-02T00:30:16.173Z · LW(p) · GW(p)

Richard, do you believe that the quest for FAI could be framed as a special case of the quest for the Ideal Ultimate Control System (IUCS)? That is, intelligence in and of itself is not what we are after, but control. Perhaps FAI is the only route to an IUCS, but perhaps not?

Note: Originally I wrote Friendly Ultimate Control System but the acronym was unfortunate.

Replies from: markrkrebs, Richard_Kennaway
comment by markrkrebs · 2010-03-02T01:05:49.628Z · LW(p) · GW(p)

The neurology of human brains and the architecture of modern control systems are remarkably similar, with layers of feedback, and adaptive modelling of the problem space, in addition to the usual dogged iron-filing approach to goal seeking. I have worked on control systems which, as they add (even minor) complexity at higher layers of abstraction, take on eerie behaviors that seem intelligent within their own small fields of expertise. I don't personally think we'll find anything different or ineffable or more, when we finally understand intelligence, than just layers of control systems.

Consciousness, I hope, is something more and different in kind, and maybe that's what you were really after in the original post, but it's a subjective beast. OTOH, if it is "mere" complex behavior we're after, something measurable and Turing-testable, then intelligence is about to be within our programming grasp any time now.

I LOVE the Romeo reference but a modern piece of software would find its way around the obstacle so quickly as to make my dog look dumb, and maybe Romeo, too.

Replies from: Karl_Smith
comment by Karl_Smith · 2010-03-02T02:35:51.854Z · LW(p) · GW(p)

I had conceived of something like the Turing test but for intelligence period, not just general intelligence.

I wonder if general intelligence is about the domains under which a control system can perform.

I also wonder whether "minds" is too limiting a criterion for the goals of FAI.

Perhaps the goal could be stated as an IUCS. However, we don't know how to build an IUCS. So perhaps we can build a control system whose reference point is an IUCS. But we don't know how to build that either, so we build a control system whose reference point is a control system whose reference point . . . until we get to something that we can build. Then we press start.

Maybe this is a more general formulation?

comment by Richard_Kennaway · 2010-03-02T08:45:54.991Z · LW(p) · GW(p)

I don't want to tout control systems as The Insight that will create AGI in twenty years, but if I were working on AGI, hierarchical control systems organised as described by Bill Powers (see earlier references) are where I'd start from, not Bayesian reasoning[1], compression[2], or trying to speed up a theoretically optimal but totally impractical algorithm[3]. And given the record of toy demos followed by the never-fulfilled words "now we just have to scale it up", if I were working on AGI I wouldn't bother mentioning it until I had a demo of a level that would scare Eliezer.

Friendliness is a separate concern, orthogonal to the question of the best technological-mathematical basis for building artificial minds.

1. LessWrong, passim.

2. Marcus Hutter's Compression Prize.

3. AIXItl and the Gödel machine.

comment by MrHen · 2010-03-01T19:07:05.593Z · LW(p) · GW(p)

If I were standing there catching the pencil and directing it to the spot on the floor, you wouldn't consider the pencil intelligent. The observed behavior does not point to the pencil in particular being intelligent.

Just my two cents.

I don't know anything about the concept of intelligence being defined as being able to pursue goals through complicated obstacles. If I had to guess at the missing piece it would probably be some form of self-referential goal making. Namely, this takes the form of the word, "want." I want to go to this spot on the floor. I can ignore a goal but it is significantly harder to ignore a want.

At some point, my wants begin to dictate and create other wants. If I had to start pursuing a definition of intelligence, I would probably start here. But I don't know anything about the field so this could have already been tried and failed.

Replies from: Karl_Smith
comment by Karl_Smith · 2010-03-01T20:15:06.087Z · LW(p) · GW(p)

Well I would consider the Pencil-MrHen system as intelligent. I think further investigation would be required to determine that the pencil is not intelligent when it is not connected to MrHen, but that MrHen is intelligent when not connected to the pencil. It then makes sense to say that the intelligence originates from MrHen.

The problem with the self-referential approach, from my perspective, is that it presumes a self.

It seems to me that ideas like "I" and "want" graft humanness onto other objects.

So, I want to see what happens if I try to divorce all of my anthropocentric assumptions about self, desires, wants, etc. I want to measure a thing and then by a set of criteria declare that thing to be intelligent.

Replies from: MrHen
comment by MrHen · 2010-03-01T20:55:27.785Z · LW(p) · GW(p)

So, I want to see what happens if I try to divorce all of my anthropocentric assumptions about self, desires, wants, etc. I want to measure a thing and then by a set of criteria declare that thing to be intelligent.

Sure, that makes perfect sense. I haven't really given this a whole lot of thought; you are getting the fresh start. :)

The self in self-referential isn't implied to be me or you or any form of "I". Whatever source of identity you feel comfortable with can use the term self-referential. In the case of your intelligent pencil, it very well may be the case that the pencil is self-updating in order to achieve what you are calling a goal.

A "want" can describe nonhuman behavior, so I am not convinced the term is a problem. It does seem that I am beginning to place atypical restrictions on its definition, however, so perhaps "goal" would work better in the end.

The main points I am working with:

  • An entity can have a goal without being intelligent (perhaps I am confusing goal with purpose or behavior?)
  • A non-intelligent entity can become intelligent
  • Some entities have the ability to change, add, or remove goals
  • These changes, additions, and deletions are likely governed by other goals. (Perhaps I am confusing goals with wants or desires? Or merely causation itself?)
  • The "original" goal could be deleted without making an entity unintelligent. The pencil could pick a different spot on the ground but this would not cause you to doubt its intelligence.

Please note that I am not trying to disagree (or agree) with you. I am just talking because I think the subject is interesting and I haven't really given it much thought. I am certainly no authority on the subject. If I am obviously wrong somewhere, please let me know.

comment by whpearson · 2010-03-01T21:08:09.726Z · LW(p) · GW(p)

Some food for philosophical thought: an oil drop that "solves" a maze.

TL;DR it follows a chemical gradient due to it changing surface tension.

I'd read something on the intentional stance.

comment by Kaj_Sotala · 2010-03-01T18:36:02.689Z · LW(p) · GW(p)

If you don't mind a slightly mathy article, I thought Legg & Hutter's Universal Intelligence was nice. It talks about machine intelligence, but I believe it applies to all forms of intelligence. It also addresses some of the points you made here.

comment by [deleted] · 2010-03-01T21:56:08.282Z · LW(p) · GW(p)

So if something is capable, contrary to expectations, of achieving a constant state despite varying conditions, it's probably intelligent?

I guess that in space, everything is intelligent.

comment by byrnema · 2010-03-04T22:42:56.659Z · LW(p) · GW(p)

Does anyone here know about interfacing to the world (and mathematics) in the context of a severely limiting physical disability? My questions are along the lines of: what applications are good (not buggy) to use and what are the main challenges and considerations a person of normal abilities would misjudge or not be aware of? Thanks in advance!

comment by Kaj_Sotala · 2010-03-01T16:52:49.301Z · LW(p) · GW(p)

People constantly ignore my good advice by contributing to the American Heart Association, the American Cancer Society, CARE, and public radio all in the same year--as if they were thinking, "OK, I think I've pretty much wrapped up the problem of heart disease; now let's see what I can do about cancer."

--- Steven Landsburg (original link by dclayh)