Open Thread, May 16-31, 2012

post by OpenThreadGuy · 2012-05-16T07:36:59.871Z · LW · GW · Legacy · 122 comments

If it's worth saying, but not worth its own post, even in Discussion, it goes here.

122 comments

Comments sorted by top scores.

comment by Grognor · 2012-05-17T15:32:35.186Z · LW(p) · GW(p)

Walt Whitmanisms
The original:

Do I contradict myself? Very well then I contradict myself. I am large, I contain multitudes.

Sark Julian:

Do I make tradeoffs? Very well then I make tradeoffs. I am poor, I need to make compromises.

HonoreDB:

Do I repeat myself? Very well then, I repeat myself. You are large, you contain multitudes.

Me:

Am I signaling? Very well then, I am signaling. I am human; I am part of a tribe.

Steven Kaas:

Do I contradict myself? Very well, then I contradict myself. I am large, I can beat up anyone who calls me on it. #whitmanthebarbarian

Steven Kaas:

Do I have no opinion? Very well, then I have no opinion. I am small, I do not contain a team of pundits.

comment by Kaj_Sotala · 2012-05-16T12:39:27.790Z · LW(p) · GW(p)

Welcome to Life: the singularity, ruined by lawyers.

(Humor, three-minute YouTube clip.)

Replies from: Tuxedage, TimS, RobertLumley
comment by Tuxedage · 2012-05-18T01:32:52.888Z · LW(p) · GW(p)

The dark arts are in action here. Beware, lest you generalize from fiction.

comment by TimS · 2012-05-18T13:36:13.411Z · LW(p) · GW(p)

It's an interesting vision, but lawyers have nothing to do with the problem. The problem is the commercialization of something that our moral intuitions say should not be commercialized.

Being upset at lawyers about this state of affairs is like being angry at a concrete truck for building the foundation of a building in an offensive location.

comment by RobertLumley · 2012-05-17T22:22:37.140Z · LW(p) · GW(p)

The majority of the stuff on that guy's website is pretty interesting. He's got several TED talks, one of which is essentially on prediction markets.

comment by Viliam_Bur · 2012-05-16T11:43:12.500Z · LW(p) · GW(p)

If you had to pick exactly 20 articles from LessWrong to provide the greatest added value for a reader, which 20 articles would you select?

In other words, I am asking you to pick "Sequences: Micro Edition" for new readers, or old readers who feel intimidated by the size and structure of Sequences. No sequences and subsequences, just 20 selected articles that should be read in the given order.

It is important to consider that some information is distributed across many articles, and some articles use information explained in previous articles. Your selection should make sense for people who have read nothing else on LW, and cannot click on hyperlinks for explanation (as if they were reading the articles on paper, without comments). Do the introductory articles provide enough value even if you don't put the whole sequence into the selected 20? Is it better to pick examples from more topics, or focus on one?

Yes, I am hoping that reading those 20 articles would encourage the reader to read more, perhaps even the whole Sequences. But the 20 articles should provide enough value when taken alone; they should be a "food", not just an "appetizer".

It is OK to also pick LW articles that are not part of the traditional Sequences. It is OK to suggest fewer than 20 articles. (Suggesting more than 20 is not OK, because the goal is to select a small number of articles that provide value without reading anything more.)

Replies from: Viliam_Bur, Viliam_Bur, David_Gerard, None, JoachimSchipper, djcb
comment by Viliam_Bur · 2012-05-17T15:03:53.700Z · LW(p) · GW(p)

Now let's try it differently. Even if you feel that 20 articles is too small a subset to capture the richness of this site, let's push it even further. Imagine that you can only list 10 articles, or 7 articles, 5 articles, 3 articles, or just the single best article on LessWrong. It will be painful, but please do your best.

Why? Well, unless one of us puts their selection of 20 articles on the wiki ignoring the others, the resulting selection will be a mix of something that you would select and something that you wouldn't. The resulting 20 articles will contain only 10 or maybe fewer articles from your personal "top 20" selection. So let's make it the best 10 articles.

However, I ask you to avoid using strategies like this: "I think articles A and B are good. A is better than B, so if I have to choose only one article, I should choose A. But article A is widely popular, and most other people will probably choose it too, therefore I will pick B, which maximizes the chance that both A and B will be in the final selection." Please avoid this. Just pretend that the remaining articles will be chosen randomly (even if other people have already posted their choices), so you should really choose what you prefer most. Please cooperate on this Prisoner's Dilemma.

Also, please explain your reasons for selecting those articles. Maybe you see an aspect others are missing. Maybe others can suggest another article which fulfills your goal better. (In other words, if you explain yourself, others can extrapolate your volition.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-05-17T16:00:46.549Z · LW(p) · GW(p)

My choice, the most important three articles:

  • Why truth? And... -- it contains motivation for doing what we do, and explains the "Spock Rationality" misunderstanding
  • An Intuitive Explanation of Bayes' Theorem -- a biology/medicine example focusing on women, and an interactive math textbook (great to balance the LW bias: male, sci-fi, computers, impractical philosophy, nonstandard science)
  • Why Our Kind Can't Cooperate -- a frequent fail mode of unknowingly trying to reverse stupidity in real life, important for those who hope to have a rational community

then these:

  • How to Be Happy -- a lot of low-hanging fruit for a new reader, applying science to everyday life; bonus points for being written by someone else
  • Something to Protect -- bringing the motivation to the near mode; the moral aspect of becoming rational
  • Well-Kept Gardens Die By Pacifism -- a frequent fail mode of online communities; explanation of the LW moderation system

and then these:

Note: I think that each of these articles can be read and understood separately, which in my opinion is good for total newbies. People expect short inferential distances, and you must first gain their attention before you can lead them further. If they enjoy the MicroSequences, they will be more likely to continue with the Sequences. I also think these articles are not controversial or weird, so they will give a good impression to an outsider. The selection includes math, instrumental rationality, and the social aspects of rationality.

Funny thing: it was rather painful to reduce my suggested list to only 10 articles, but now I feel happy and satisfied with the result. Please make your own list, independently of this one. (Imagine that you have to select 10 or fewer articles for your friend.)

comment by Viliam_Bur · 2012-05-16T13:33:27.927Z · LW(p) · GW(p)

OK, my first shot, probably just to encourage people to do better than this:

EDIT: Oops, it was more than 20, I was in a hurry. The more important (IMHO) articles are now marked by a bold font, with explanation added.

Replies from: RobertLumley, Grognor, TimS, JoachimSchipper
comment by RobertLumley · 2012-05-16T17:16:25.626Z · LW(p) · GW(p)

Seems to me like you need Mysterious Answers to Mysterious Questions in there. That's far and away one of my favorites.

comment by Grognor · 2012-05-16T14:45:45.365Z · LW(p) · GW(p)

(You can single-space your posts by putting two spaces at the end of each line. Do this, for it will save scrolltime.)

I'm going to avoid repeating ones on your list, entirely because I think repetition is bad. Here I go:

The trouble with picking stand-alone posts is that Eliezer's sequences of posts are so much better.

Replies from: dbaupp
comment by dbaupp · 2012-05-17T07:29:47.004Z · LW(p) · GW(p)

What are the ones you would include if you were including repeats? (Viliam_Bur is asking for an absolute top 20, not several independent lists of good posts.)

comment by TimS · 2012-05-18T13:46:30.808Z · LW(p) · GW(p)

Who exactly is "Simple Truth" aimed at? As far as I can tell, the message is that worrying about cashing out the meaning of truth is not worth the effort in ordinary circumstances. That's true, but it is a fully generalizable counter-argument to studying anything - worrying about the meaning of "quantum configuration" has no practical payoff, even though building things like computers relies on studying those sorts of things. Likewise, the meaning of truth is really hard if you actually examine it.

Put differently, religious people don't disagree with us about what truth means, they disagree about what is actually true. And they are wrong, for the reasons detailed in "Making Beliefs Pay Rent." In short, no real person is analogous to Mark, so no real person's philosophical positions are contradicted by the story.

To repeat, the story doesn't solve any real questions about truth, it simply says they are practically [Edit] unimportant (which is true, but makes the story itself pretty unhelpful).

Replies from: Viliam_Bur, TheOtherDave
comment by Viliam_Bur · 2012-05-18T15:36:21.833Z · LW(p) · GW(p)

For me the message of "Simple Truth" was that the intelligence should not be used to defeat itself. To be right, even if you can't define it to a philosopher's satisfaction, is better than to be wrong, even if you can find some smart words to support that. The truth business is not about words (that's the signalling business); when you are right, nature rewards you, and when you are wrong, nature punishes you. (Although among humans, speaking truth can cause you a lot of trouble.) At the same time it explains the origins of our ability to understand truth -- we have this ability because having it was an evolutionary advantage.

Or maybe I just like that the annoying wise-ass guy dies in the end.

This is not about religious people, who disagree about what is actually true, as you said. This is about people who try to do "philosophy" by inventing more complex ways to sound stupid... errr... profound, and perhaps they even sometimes succeed in convincing themselves. People who say things like "there is no truth", because for anything you say they can generate a long sequence of words that you just don't have time to analyze and debunk (and even if you did, they would just use a fraction of that time to generate a new sequence of words). If you haven't met such people, consider yourself lucky, but I know people who can role-play Mark and thus ruin any chance of a rational discussion, and to a non-x-rational listener it often seems like their arguments are rather important and deep, and should be addressed seriously.

Anyway, "The Simple Truth" is kinda long, which I enjoyed but other people may hate; so there is probably no harm in removing it, as long as "Making Beliefs Pay Rent" and "Something to Protect" stay in the list.

Replies from: TimS
comment by TimS · 2012-05-18T15:54:17.102Z · LW(p) · GW(p)

the intelligence should not be used to defeat itself

I agree with this feeling, but "Do the impossible" or one of the nearby posts raises this point more explicitly and more effectively.

The problem with "Simple Truth" is that - beyond the message I highlighted - the text is too open ended. Mirror-like, the story contains whatever philosophical positions the reader wishes to see in it.

I know people who can role-play Mark

There are two possible kinds of people who can do this. (1) People with useful but complicated theories that you happen not to understand, and (2) stupid people - who might be poorly parroting a useful theory. Please don't let the (negative) halo effect of the second type infect your view of the first type of people.

Generally, your objection pattern matches with the argument that law is too complicated. Respectfully, I disagree.

comment by TheOtherDave · 2012-05-18T15:19:36.614Z · LW(p) · GW(p)

I think you mean "practically unimportant" in your last sentence.

I've always understood the purpose of that article to be to pre-emptively foreclose objections of the form "but being rational is irrelevant, because you can't really know what's true" by declaring them rhetorically out-of-bounds.

Replies from: TimS
comment by TimS · 2012-05-18T15:46:29.768Z · LW(p) · GW(p)

Indeed a typo, thanks.

I've always taken the objection you mentioned as invoking the problem of the reliability of the senses (i.e. Cartesian skepticism), not the meaningfulness of truth. In the story, Mark is no Cartesian skeptic (of course, it's hard to tell, because Mark is a terribly confused person).

I think skeptical objections to Bayesian reasoning are like questions about the origin of life directed at evolutionary theory. The criticisms aren't exactly wrong - it's just that the theory targeted by the criticism is not trying to provide an answer on that issue.

comment by David_Gerard · 2012-05-16T14:17:16.643Z · LW(p) · GW(p)

Which twenty have the highest number of votes?

Replies from: None
comment by [deleted] · 2012-05-16T14:23:28.660Z · LW(p) · GW(p)

These, but that's probably not the best way to go about making a list. Many of the top posts require prerequisites, and there are some equally good posts that are not as heavily upvoted because they were published on OB or in LW's infancy.

comment by [deleted] · 2012-05-16T14:12:45.750Z · LW(p) · GW(p)

I actually started working on something similar, but it never really took off and real-world responsibilities prevented me from working on it for a while. Feel free to pick up where I left off. Anyway, here's my first attempt (I may try again later):

Replies from: moridinamael
comment by moridinamael · 2012-05-16T20:53:05.653Z · LW(p) · GW(p)

I don't know if the intention here is to debate other people's choices, but: my wife started The Simple Truth because it was the first sequence post on the list and quickly became frustrated and annoyed that it didn't seem to lead anywhere and seemed to be composed of "in jokes." She didn't try to read further into the Sequences because of the bad impression she got from this article, which is unusually weird, long, rambling, and quirky.

I actually like The Simple Truth but I don't feel that it makes a good introduction to the Sequences. But hey, this is just one data point.

Replies from: arundelo, beoShaffer, None
comment by arundelo · 2012-05-18T17:06:26.469Z · LW(p) · GW(p)

I predict that when your wife read "The Simple Truth" she was not acquainted with (or was not thinking about) the various theories of truth that philosophers have come up with. I like it a lot, but when I first read it I was able to see it as a defense of a particular theory of truth and a critique of some other ones.

(In particular, it's a defense of the correspondence theory, though see this thread.)

Edit: In other words, I think "The Simple Truth" appeals mainly to people who have read descriptions of the other theories of truth and said to themselves, "People actually believe that?!"

Replies from: moridinamael
comment by moridinamael · 2012-05-18T18:59:23.568Z · LW(p) · GW(p)

You're correct. What I love about the Sequences in general is that they're a colloquial, patient introduction to lots of new concepts. In theory, even somebody with no background in decision theories or quantum mechanics can actually learn these concepts from the Sequences. The Simple Truth is significantly different in tone and style from the majority of Sequence posts, and the concepts which that post satirizes are not really introduced before the comedy begins.

If you go to http://wiki.lesswrong.com/wiki/Sequences and choose the first option (1 Core Sequences), then choose the first listed subsequence (Map and Territory), the very first post is The Simple Truth. The second choice is What Do We Mean by Rationality? which really, really seems like it should be the first thing a newcomer reads.

comment by beoShaffer · 2012-05-16T21:34:08.980Z · LW(p) · GW(p)

I actually like The Simple Truth but I don't feel that it makes a good introduction to the Sequences.

Same here, though I think it does depend on the reader's background. People who strongly disbelieve in the concept of objective truth might find it helpful to have that taken care of before starting the sequences proper, but even then I'm not sure if The Simple Truth is the best way.

comment by [deleted] · 2012-05-16T20:58:48.150Z · LW(p) · GW(p)

You might be right--I'll have to re-read it. I put this list together based on my memory of what these posts are like, and given how volatile memories are, I may be mistaken about their quality.

Edit: You're right. I'll change my list accordingly.

comment by JoachimSchipper · 2012-05-17T06:33:05.344Z · LW(p) · GW(p)

What is your intended audience, and what is the intended effect of reading these sequences? "Politics is the Mind-Killer" and "Well-Kept gardens die by pacifism" seem particularly relevant to online communities, for instance.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-05-17T08:32:30.376Z · LW(p) · GW(p)

It was intended for new people on LW, who should be introduced to our "community values" (even without reading the whole Sequences). Also for smart people outside LW who are curious about what LW is about and might decide to join later.

In both cases, the goal is to make clear what LW (and x-rationality) is, and what it is not, in a short amount of text. Perhaps writing a new text would be better, but making a selection of existing texts should be quicker.

"Politics is the Mind-Killer" and "Well-Kept gardens die by pacifism" seem particularly relevant to online communities, for instance.

Yes, but I think they also apply well offline. People can discuss politics in person, too. The lesson of well-kept gardens is indirect: some people are a net loss, and if you don't filter them out of your social network, your quality of life will go down.

Now I added some explanations to my list, so the message is like this:

  • there is such a thing as truth/territory, and it has consequences in real life
  • to know = to make good predictions
  • it's not about speaking mysteriously or using the right keywords, but about understanding the details
  • protect your values, don't use your intelligence to defeat yourself
  • don't let your emotions and biases make you stupid, but also don't try to reverse stupidity
  • a rational community is a great idea, but it requires specific skills
  • here is how to use rationality to improve your everyday life

comment by djcb · 2012-05-16T18:38:04.271Z · LW(p) · GW(p)

Nice idea - but maybe we should compress things further? I've read most of the sequences, but I think/hope they could be condensed to about 10-20 pages with the core messages, in such a way that they would be more accessible outside these realms.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-05-17T04:58:50.494Z · LW(p) · GW(p)

I guess the idea is to find 20 articles that provide both ideas and arguments out of those that already exist. Once there is some solution, it becomes easier to write those 20 pages than when starting from scratch. Obviously, the 20 paper pages you mention have yet to be written - whereas the "20 best articles for isolated reading" may have been written already.

comment by [deleted] · 2012-05-17T11:46:58.387Z · LW(p) · GW(p)

Stuff by Yvain

On the applications of bad translation.

Replies from: NancyLebovitz, thomblake
comment by NancyLebovitz · 2012-05-21T17:05:04.377Z · LW(p) · GW(p)

I think "words don't have meanings, people have meanings" is overdoing the concept, but not by much.

comment by thomblake · 2012-05-17T19:50:48.194Z · LW(p) · GW(p)

Nice find!

comment by [deleted] · 2012-05-17T07:39:00.276Z · LW(p) · GW(p)

Genes are overrated, genetics is underrated

by Razib Khan

... I agree on one thing in particular: an emphasis on concrete and specific genes for traits is a motif in science journalism that can be very frustrating, and often misleading. Nevertheless, that’s not the only story. I believe our current culture greatly underestimates the power of genetics in shaping broader social patterns.

How can these be reconciled? Do not genes and genetics go together? The resolution is a simple one: when you speak of 1,000 genes, you speak of no genes. You can’t list 1,000 genes in prose, even if you know them. But using standard quantitative and behavior genetic means one can apportion variation in the population of a trait to variation in genes. 1,000 genes added together can be of great effect. The newest findings in genomics are reinforcing assertions of non-trivial heritability of many complex traits, though rendering problematic attributing that heritability to a specific set of genes.

Replies from: billswift
comment by billswift · 2012-05-17T14:00:29.541Z · LW(p) · GW(p)

Genes and genetics go together in very nearly the same way as words and language.

Or, even more closely, as terms in a mass of spaghetti code.

Understanding the genetics of an organism is hard, because what geneticists are trying to do is simultaneously reverse engineer that mass of code and learn what the terms are.

comment by Grognor · 2012-05-20T13:26:59.081Z · LW(p) · GW(p)

A very basic introduction to empiricism and Occam's Razor.

Replies from: tut
comment by tut · 2012-05-22T12:06:29.855Z · LW(p) · GW(p)

They also missed the theory that is shaped like a star, but without the extraneous nonsense in the middle. Which is exactly as simple as their preferred theory.

Replies from: othercriteria
comment by othercriteria · 2012-05-22T13:29:28.722Z · LW(p) · GW(p)

So I'm entering an argument over fictional evidence, which is already a losing move, but who cares.

Taking the convex hull of the observations is obviously the right thing to do!

If you asked a mathematician for the simplest function from a point set in the plane to a point set in the plane, they'd flip a coin and say either the constant function that's always the empty set or the constant function that's always the plane. But that's silly, because those functions don't use your evidence.

(Other constant functions are out, because there's no way to pick between them.)

So if you asked a mathematician for the next simplest function from a point set in the plane to a point set in the plane, they'd say the identity function. That's not silly, but if you want a theory that's not just a recapitulation of your evidence, it won't help you.

(Projections or other ways of taking subsets are out because there's no natural way to pick individual points out.)

(Things like the mean are out because of measure-theoretic difficulties.)

So if you asked a mathematician for the next simplest function from a point set in the plane to a point set in the plane, they'd say the convex hull. It has all sorts of nice properties (idempotent, nondecreasing, etc.) and just sort of feels like the right thing to do with a point set.

On the other hand, sticking line segments between the points (and in a hard to specify order) is a few more "next"s down the list and only makes sense for finite point sets with pretty special geometry anyways.
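
A minimal sketch of the convex hull operation described above, assuming SciPy is available; the point coordinates are made up for illustration:

# Convex hull of a finite point set in the plane (illustrative points).
import numpy as np
from scipy.spatial import ConvexHull

points = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0], [1.0, 0.5]])
hull = ConvexHull(points)

# hull.vertices lists the indices of the points on the hull boundary;
# the interior point [1.0, 0.5] is dropped, so only the three corners remain.
print(points[hull.vertices])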

comment by [deleted] · 2012-05-20T10:01:50.560Z · LW(p) · GW(p)

Ovulation Leads Women to Perceive Sexy Cads as Good Dads (HT: Heartiste)

Why do some women pursue relationships with men who are attractive, dominant, and charming but who do not want to be in relationships—the prototypical sexy cad? Previous research shows that women have an increased desire for such men when they are ovulating, but it is unclear why ovulating women would think it is wise to pursue men who may be unfaithful and could desert them. Using both college-age and community-based samples, in 3 studies we show that ovulating women perceive charismatic and physically attractive men, but not reliable and nice men, as more committed partners and more devoted future fathers. Ovulating women perceive that sexy cads would be good fathers to their own children but not to the children of other women. This ovulatory-induced perceptual shift is driven by women who experienced early onset of puberty. Taken together, the current research identifies a novel proximate reason why ovulating women pursue relationships with sexy cads, complementing existing research that identifies the ultimate, evolutionary reasons for this behaviour.

I think it isn't much disputed that women seem to find dark-triad and some other personality traits sexier when ovulating, so to me the above sounded like a clear example of the halo effect. Sexy men will seem smarter and kinder than they are, because any positive trait seems to beef up our perceptions of people in other areas as well. But even as my mind slowly noted that this should affect how they see the odds of a man caring for other women's children, and that I don't have any info to suggest that women are more prone to the halo effect for male sexiness in general during ovulation, I saw the authors had considered this:

Finally, there were no main effects of fertility or fertility by target male interactions for any of the other positive attributes: attractiveness, financial status, and social status (all ps > .33). Ovulation also had no effect on the perception of men’s attractiveness (M low-fertility dad = 5.06, M high-fertility dad = 4.73; M low-fertility cad = 5.79, M high-fertility cad = 5.65), financial status (M low-fertility dad = 4.76, M high-fertility dad = 4.77; M low-fertility cad = 5.64, M high-fertility cad = 5.64), or social status (M low-fertility dad = 4.82, M high-fertility dad = 4.74; M low-fertility cad = 6.21, M high-fertility cad = 6.07). The ovulatory-induced perception of paternal investment, therefore, is not produced by a halo effect when women evaluate sexy cads at high fertility.

Study 2 also tested whether the ovulatory-induced overperception of paternal investment was a product of a broader ovulatory-induced halo effect that occurs when women evaluate attractive and charismatic men. The results showed that there was no ovulatory effect on women’s perceptions of the sexy cad’s attractiveness, financial status, or social status. Thus, ovulation appears to shift women’s perceptions of a man’s willingness to invest in her offspring specifically, but not his other positive traits.

I guess heterosexual women should be conscious of this bias, especially those hoping to start a family, or perhaps when judging in other contexts which adult men they want their children to interact with. While they obviously aren't wrong about how sexy they find someone, they are biased when it comes to the other traits that, judging from their stated preferences, they seek to maximize in such men.

comment by Oscar_Cunningham · 2012-05-17T11:51:37.365Z · LW(p) · GW(p)

LessWrong's worst posts:

http://lesswrong.com/r/all/top/?count=2811&after=t3_327

Replies from: Desrtopa, NancyLebovitz
comment by Desrtopa · 2012-05-22T13:59:09.678Z · LW(p) · GW(p)

The most heavily downvoted post in Less Wrong history is actually not on that list. Curi's "The Conjunction Fallacy Does Not Exist" was removed by Eliezer on the basis of it being massively downvoted and too stupid to productively discuss.

Replies from: dbaupp
comment by dbaupp · 2012-05-23T09:11:38.520Z · LW(p) · GW(p)

(If anyone wishes to see this article, it can be read on Curi's user page, but one can't view it or its comments directly.)

comment by NancyLebovitz · 2012-05-21T17:06:18.550Z · LW(p) · GW(p)

The link doesn't work.

Replies from: ghf, Oscar_Cunningham
comment by ghf · 2012-05-21T19:19:38.524Z · LW(p) · GW(p)

It works for me, but only after changing my preferences to view articles with lower scores (my cutoff had been set at -2).

comment by Oscar_Cunningham · 2012-05-21T18:16:30.270Z · LW(p) · GW(p)

It works for me. ???

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-21T18:43:17.118Z · LW(p) · GW(p)

I can't get it to work either. Maybe just c&p the text?

comment by Luke_A_Somers · 2012-05-17T14:43:38.321Z · LW(p) · GW(p)

Dinosaur Comics today involves WBE

http://www.qwantz.com/index.php?comic=2208

Replies from: Grognor
comment by Grognor · 2012-05-17T19:47:33.027Z · LW(p) · GW(p)

I once asked Ryan North, via the twitters, if he was a transhumanist. He said he wouldn't accept the label, but T-Rex is obviously a transtyrannosaurist.

comment by cousin_it · 2012-05-16T14:50:06.970Z · LW(p) · GW(p)

Some vague ideas about decision theory math floating in my head right now. Posting them in this raw state because my progress is painfully slow and maybe someone will have the insight that I'm missing.

1) thescoundrel has suggested that spurious counterfactuals can be defined as counterfactuals with long proofs. How far can we push this? Can there be a "complexity-based decision theory"?

2) Can we write a version of this program that would reject at least some spurious proofs?

3) Define problem P1 as "output an action that maximizes utility", and P2 as "output a program that solves P1". Can we write a general enough agent that solves P1 correctly, and outputs its own source code as the answer to P2? To stop the agent from solving P1 as part of solving P2, we can add a resource restriction to P2 but not P1. This is similar to Eliezer's "AI reflection problem".

Replies from: gRR, gRR
comment by gRR · 2012-05-17T03:51:53.969Z · LW(p) · GW(p)

Thoughts on problem 3:

def P1():  
  sumU = 0;  
  for(#U=1; #U<3^^^3; #U++):  
    if(#U encodes a well-defined boundedly-recursive parameterless function,  
           that calls an undefined single-parameter function "A" with #U as a parameter):  
      sumU += eval(#U + #A)  
  return sumU  

def P2():
  sumU = 0;  
  for(#U=1; #U<3^^^3; #U++):  
    if(#U encodes a well-defined boundedly-recursive parameterless function that calls  
           an undefined single-parameter function "A" with #U as a parameter):  
      code = A(#P2)
      sumU += eval(#U + code)  
  return sumU  

def A(#U):  
  Enumerate proofs by length L = 1 ... INF  
    if found any proof of the form "A()==a implies eval(#U + #A)==u, and A()!=a implies eval(#U + #A)<=u"  
      break;  
  Enumerate proofs by length up to L+1 (or more)  
    if found a proof that A()!=x  
      return x  
  return a

Although A(#P2) won't return #A, I think eval(A(#P2)(#P2)) will return A(#P2), which will therefore be the answer to the reflection problem.

comment by gRR · 2012-05-16T22:46:51.318Z · LW(p) · GW(p)

2) Can we write a version of this program that would reject at least some spurious proofs?

It's trivial to do at least some:

def A(P):  
  if P is a valid proof that A(P)==a implies U()==u, and A(P)!=a implies U()<=u  
  and P does not contain a proof step "A(P)=x" or "A(P)!=x" for any x:  
    return a  
  else:  
    do whatever
Replies from: cousin_it
comment by cousin_it · 2012-05-17T00:29:32.686Z · LW(p) · GW(p)

Sure, but that's too trivial for my taste :-( You understand the intent of the question, right? It doesn't call for "an answer", it calls for ideas that might lead toward "the answer".

Replies from: gRR
comment by gRR · 2012-05-17T01:09:18.907Z · LW(p) · GW(p)

To tell the truth, I just wanted to write something, to generate some activity. The original post seems important and useful, in that it states several well-defined and interesting problems. Seeing it sit alone in the relative obscurity of an Open Thread even for a day was a little disheartening :)

comment by gwern · 2012-05-30T19:30:10.617Z · LW(p) · GW(p)

Wikipedia experiment finished: http://www.gwern.net/In%20Defense%20Of%20Inclusionism#sins-of-omission-experiment-2

Close to zero resistance to random deletions. Most disappointing.

Replies from: wedrifid
comment by wedrifid · 2012-05-30T20:10:09.747Z · LW(p) · GW(p)

I was persuaded.

comment by [deleted] · 2012-05-19T20:30:25.573Z · LW(p) · GW(p)

The Essence Of Science Explained In 63 Seconds

A one-minute piece of Feynman lecture candy wrapped in reasonable commentary. Excellent and, most importantly, brief intro-level thinking about science and our physical world. Apologies if it has been linked to before, especially since I can't say I would be surprised if it was.

Here it is, in a nutshell: The logic of science boiled down to one, essential idea. It comes from Richard Feynman, one of the great scientists of the 20th century, who wrote it on the blackboard during a class at Cornell in 1964. YouTube

Think about what he's saying. Science is our way of describing — as best we can — how the world works. The world, it is presumed, works perfectly well without us. Our thinking about it makes no important difference. It is out there, being the world. We are locked in, busy in our minds. And when our minds make a guess about what's happening out there, if we put our guess to the test, and we don't get the results we expect, as Feynman says, there can be only one conclusion: we're wrong.

The world knows. Our minds guess. In any contest between the two, The World Out There wins. It doesn't matter, Feynman tells the class, "how smart you are, who made the guess, or what his name is, if it disagrees with the experiment, it is wrong."

This view is based on an almost sacred belief that the ways of the world are unshakeable, ordered by laws that have no moods, no variance, that what's "Out There" has no mind. And that we, creatures of imagination, colored by our ability to tell stories, to predict, to empathize, to remember — that we are a separate domain, creatures different from the order around us. We live, full of mind, in a mindless place. The world, says the great poet Wislawa Szymborska, is "inhuman." It doesn't work on hope, or beauty or dreams. It just...is.

comment by JoachimSchipper · 2012-05-16T11:29:07.496Z · LW(p) · GW(p)

I know quite a bit about crypto and digital security. If I could find the time to write something, which won't be soon, is there something that would interest LessWrong? (If you just want to read crypto stuff, Matthew Green's blog is good; "how to protect a nascent known-to-be-actually-working GAI from bad guys" will read like "stay the fsck away from any mobile phones and the internet and don't trust your hardware; bring an army", which won't be terribly interesting.)

comment by wedrifid · 2012-05-25T22:39:27.326Z · LW(p) · GW(p)

First time this has happened since the 30-day karma score was implemented. Lesswrong addictions are apparently easy to squelch!

Replies from: wedrifid, TheOtherDave
comment by wedrifid · 2012-05-28T22:51:49.545Z · LW(p) · GW(p)

I also like this one. Lucky timing to check in at the round number!

comment by TheOtherDave · 2012-05-25T22:41:13.513Z · LW(p) · GW(p)

Go you!
I've noticed your absence, FWIW.

comment by RomeoStevens · 2012-05-22T00:49:12.006Z · LW(p) · GW(p)

Can someone help me corrupt this wish?

"Give humans control over their own sensory inputs."

Replies from: JoshuaZ, CuSithBell
comment by JoshuaZ · 2012-05-22T00:55:10.336Z · LW(p) · GW(p)

Everyone falls into a coma where they get to control their own individual apparent reality. Meanwhile they all starve to death or run into other problems because nothing about the wish says they need to stay alive.

Replies from: RomeoStevens
comment by RomeoStevens · 2012-05-22T00:56:35.885Z · LW(p) · GW(p)

Doesn't discontinuation of the sensory experience count as a lack of control?

Replies from: Desrtopa, JoshuaZ
comment by Desrtopa · 2012-05-22T13:47:43.410Z · LW(p) · GW(p)

Well, the wish doesn't say "give me the ability to control my sensory experience forever". If you die, your ability to control your body is discontinued, but that doesn't mean you couldn't control your body.

Replies from: RomeoStevens
comment by RomeoStevens · 2012-05-22T18:55:34.672Z · LW(p) · GW(p)

can you expand a little on this?

Replies from: Desrtopa
comment by Desrtopa · 2012-05-22T19:56:04.955Z · LW(p) · GW(p)

Suppose that a person with locked-in-syndrome wished for voluntary control of their body. Their disorder is completely cured, and they gain the ability to control their body like anyone else. Would you say that their wish wasn't really granted unless they never die?

Replies from: RomeoStevens
comment by RomeoStevens · 2012-05-22T20:24:45.602Z · LW(p) · GW(p)

personally yes, but I realize this is strange.

comment by JoshuaZ · 2012-05-22T01:01:17.146Z · LW(p) · GW(p)

Hmm, possibly. But everyone stuck in their own sensory setting with no connection to anyone else is still pretty bad.

Replies from: RomeoStevens
comment by RomeoStevens · 2012-05-22T01:22:52.127Z · LW(p) · GW(p)

You aren't necessarily stuck anywhere. How the statement "I want to talk to Brian" gets unpacked once the wish has been implemented depends on how "control" gets unpacked. Any statement we make about sensory experiences we wish to have involve control only on one conceptual level. We can't control what Brian says once we're talking to him, but we never specified that we wanted control over it either. I think that you wind up with a conflict where you ask for control on the wrong conceptual level, or two different levels conflict. I'm having trouble coming up with examples though.

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-22T01:49:59.224Z · LW(p) · GW(p)

And if "I want to talk to Brian" is parsed that way doesn't that require telling Brian that someone wants to talk to him, which for at least a few seconds takes control away from Brian of part of his sensory input?

Replies from: RomeoStevens
comment by RomeoStevens · 2012-05-22T05:29:48.732Z · LW(p) · GW(p)

So a problem is that it would be impossible to know what options to make more obviously available to you. If the action space isn't screened off the number of options you have is huge. There's no way to present these options to a person in a way that satisfies "maximum control". As soon as we get into suggesting actions we're back to the problem of optimizing for what makes humans happy.

This is highly helpful BTW.

comment by CuSithBell · 2012-05-22T20:41:31.009Z · LW(p) · GW(p)

None of that control is automated, and this manual control is the only source of input.

Replies from: RomeoStevens
comment by RomeoStevens · 2012-05-22T22:48:55.967Z · LW(p) · GW(p)

hahaha please specify wavelengths of light that will hit each receptor. Very good.

Replies from: CuSithBell
comment by CuSithBell · 2012-05-23T17:58:10.901Z · LW(p) · GW(p)

Exactly! It'd be pretty sucky.

comment by Shmi (shminux) · 2012-05-18T15:03:21.557Z · LW(p) · GW(p)

A low-inferential-distance perspective on the inferential distance concept.

comment by khafra · 2012-05-16T12:38:53.196Z · LW(p) · GW(p)

I like the Operations Research subreddit. Other people looking for applied rationality might like it, too. This probabilistic analysis of problems with federal vanpools is a characteristic example.

Replies from: gwern
comment by gwern · 2012-05-16T16:24:59.683Z · LW(p) · GW(p)

Looks interesting; I've subscribed.

comment by JoshuaZ · 2012-05-29T17:46:08.894Z · LW(p) · GW(p)

Large-scale genetic sequencing has, for the first time, found the cause of a new illness. Short summary here and full article here. In this situation, an individual had a unique set of symptoms, and by doing a full exome scan of him and his parents, they were able to successfully locate the gene that was creating the problem and understand what was going wrong.

comment by NancyLebovitz · 2012-05-21T20:26:58.259Z · LW(p) · GW(p)

Setting up policies to discuss politics without being mind-killed-- I'm linking to this in the early phase because LWers might be interested in following the voluminous discussions on that site to see whether this is possible, and it will be easier to start from the beginning, and also possible to make predictions.

comment by Jabberslythe · 2012-05-18T23:46:22.179Z · LW(p) · GW(p)

I haven't heard this problem mentioned on here yet: http://www.philosophyetc.net/2011/04/puzzle-of-self-torturer.html

What do you think of the puzzle? Do you think the analysis here is correct?

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-05-19T02:29:16.272Z · LW(p) · GW(p)

It's a good puzzle, and the analysis dealing with it is correct.

Replies from: steven0461
comment by steven0461 · 2012-05-19T02:38:15.170Z · LW(p) · GW(p)

How is it even possible for A and B to be indiscriminable, B and C to be indiscriminable, but A and C to be discriminable? It seems like if A and B cause the exact same conscious thoughts (or whatever you're updating on as evidence), and B and C do, then A and C do. I think in practice, what's more likely is that you can very weakly probabilistically discriminate between any two adjacent states.

Replies from: TheOtherDave, crazy88, tut
comment by TheOtherDave · 2012-05-19T15:07:50.618Z · LW(p) · GW(p)

If the difference between A and B is less than the observer's just-noticeable-difference, and the difference between B and C is as well, it doesn't follow that the difference between A and C is.
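
A toy numeric sketch of this point, with made-up stimulus values and a made-up just-noticeable-difference:

# Made-up stimulus intensities A, B, C and a made-up JND threshold.
jnd = 1.0
A, B, C = 0.0, 0.7, 1.4

print(abs(A - B) < jnd)  # True:  A and B are indiscriminable
print(abs(B - C) < jnd)  # True:  B and C are indiscriminable
print(abs(A - C) < jnd)  # False: A and C are discriminable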

comment by crazy88 · 2012-05-28T23:46:57.597Z · LW(p) · GW(p)

Frank Arntzenius (a philosopher at Oxford) has argued something along these lines.

I don't think that article is paywalled (though I'm using a university computer, logged on to my account so I'm not sure whether I automatically get passed through any paywall that may exist).

comment by tut · 2012-05-19T10:00:19.155Z · LW(p) · GW(p)

Chunking of sensory input happens at a lower layer in the brain than consciousness. So if you have learned that two stimuli are the same, then they might be indistinguishable to you unless you spend thousands of hours deliberately practicing distinguishing them, even if there is a detectable difference, and even if you can distinguish stimuli that are just a bit further apart.

comment by David_Gerard · 2012-05-26T23:35:53.777Z · LW(p) · GW(p)

Luke's comment on just how arse-disabled SIAI was until quite recently (i.e., not to any unusual degree) inspired me to read Nonprofit Kit For Dummies, which inspired me to write a blog post telling everyone to buy it. Contains much of my bloviating on the subject of charities from LessWrong over the past couple of years. Includes extensive quote from Luke's comment.

comment by beoShaffer · 2012-05-20T16:35:48.717Z · LW(p) · GW(p)

Does anyone know of any good online resources for Bayesian statistics? I'm looking for something fairly basic, but beyond the here's-what-Bayes'-theorem-is level that Khan Academy offers.

Replies from: RomeoStevens
comment by RomeoStevens · 2012-05-22T18:59:38.469Z · LW(p) · GW(p)

Pick up a used textbook for cheap. I don't remember which one is good, but there's a textbook recommendation thread somewhere.

comment by [deleted] · 2012-05-20T13:31:54.216Z · LW(p) · GW(p)

I would be interested in setting up an online study group, preferably via google hangout or skype for several key sequences that I want to grok and question more fully. Anyone else interested in this?

Replies from: JoachimSchipper
comment by JoachimSchipper · 2012-05-22T18:15:55.382Z · LW(p) · GW(p)

I currently do not have time, but it may be helpful if you state which sequences you intend to look at.

Replies from: None
comment by [deleted] · 2012-05-22T18:27:27.676Z · LW(p) · GW(p)

Meta-ethics for starters.

Replies from: JoachimSchipper
comment by JoachimSchipper · 2012-05-22T18:48:10.613Z · LW(p) · GW(p)

Good choice - I've read all of it, and I still don't have a really good idea what it says. Please do post something if you can make an accessible and concise summary.

comment by Bill_McGrath · 2012-05-19T08:40:54.159Z · LW(p) · GW(p)

I'm hoping to do some reading on music cognition. I've got a pretty busy few months ahead, so I can't say how far I'll get, and I'm not used to reading scientific literature, so it'll be slow going at first I'm sure, but I'd like to get a better grasp of this field.

In the vein of lukeprog's posts on scholarship, does anyone here know anything about this field, or where I might begin to learn about it? I've got access to a library with a few books dealing with the psychology of music, and I can get online access to a small few journals. I've also read most of Levitin's Music and Your Brain, which is a reasonably good pop-science (and largely pop-music) introduction to the topic, and Wikipedia actually has a graded reading list that seems promising.

Any other thoughts?

comment by Shmi (shminux) · 2012-05-19T00:00:36.475Z · LW(p) · GW(p)

Suppose that, after some hard work, EY or someone else proves that a provably-friendly AGI is impossible (in principle, or due to it being many orders of magnitude harder than what can reasonably be achieved, or because a spurious UFAI is created along the way with near certainty, or for some other reason).

What would be a reasonable backup plan?

Replies from: JoshuaZ
comment by JoshuaZ · 2012-05-29T17:43:45.383Z · LW(p) · GW(p)

Try really hard to get reasonably safe oracle AI? Focus on human uploading first?

Replies from: shminux
comment by Shmi (shminux) · 2012-05-29T18:08:56.723Z · LW(p) · GW(p)

All good questions, I hope someone at SI asks them, instead of betting on a single horse.

comment by [deleted] · 2012-05-16T14:15:21.495Z · LW(p) · GW(p)

This play in NYC looks pretty sweet. It looks like it touches on concepts like Godshatter and ideas from Three Worlds Collide, and shows a healthy understanding of the idea that technology could make us very, very different from who we are now.

While exploring many of the common ideas that come attendant with our fascination with A.I., from Borglike interfaced brains to 2001-esque god complexes, DEINDE is particularly focused on two aspects: how to return to being "normal" after experiencing superhuman intelligence, and how, or if we should, return from the experience of being deeply networked with one another. Would we forsake enhanced intellect or profound psychic connection, once felt?

Looks like it's stopped running for now, though.

comment by Zaine · 2012-05-16T10:25:44.135Z · LW(p) · GW(p)

To give potentially interested parties a greater chance of learning about Light Table, I'm reposting about it here:

"I know there are many programmers on LW, and thought they might appreciate word of the following Kickstarter project. I don't code myself, but from my understanding it's like Scrivener for programmers:

http://www.kickstarter.com/projects/ibdknox/light-table?ref=discover_pop"

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-05-16T14:41:44.287Z · LW(p) · GW(p)

It sounds like it might be a useful program for any complicated project, even if the project isn't a program.

Replies from: vi21maobk9vp
comment by vi21maobk9vp · 2012-05-17T05:09:08.306Z · LW(p) · GW(p)

As a programmer, I am tempted to say "unless the project is actually a large program". "Large" is relative, of course.

Of course, I had seen LightTable before the comment on LW, and I tried to imagine applying it to a basically data-crunching (as opposed to mostly UI) program. Visualising computation may look like a good idea. Unfortunately, at the level it is demonstrated in the demo, it is simple enough for anyone who even tries to write a big program to keep in mind anyway.

When you have multiple layers of abstraction and each of them has a reason to do non-trivial double loops (which is not that much, if you can say what each level is doing and why), what we see in the demo would become overcluttered. I am not sure whether LightTable will grow into a tool that makes UI fine-tuning more comfortable, or whether it will try to invent approaches that work for back-ends and isolated data-crunching. In the former case it will stay a niche thing but may become a well-polished narrow-focus tool. In the latter case it will have to transform so much that it is hard to tell whether the current developer will succeed.

comment by knb · 2012-05-30T11:13:41.962Z · LW(p) · GW(p)

I seem to remember someone posting on Less Wrong about software that locks your computer to only doing certain tasks for a given period (to fight web-surfing will-power failures, I guess). After some cursory digging on the site, I couldn't find it. Does anybody remember the thread where this kind of self-binding software was discussed, or at least the name of some brand of this software?

(Ideally I would like to read the thread first, and get a sense of how well this works.)

comment by CWG · 2012-05-30T04:46:32.369Z · LW(p) · GW(p)

How old are you?

I'm 41. I'm curious what the age distribution is in the LW community, having been to one RL meetup and finding I was the oldest one there. (I suspect I was about 1.8 times the median age.)

I love what the LW community stands for, and age isn't a big deal... youthful passion is great (trying to hold onto mine!) and I suspect there isn't a particularly strong correlation between age and rationality, but life experience can be valuable in these discussions. In particular, having done more dumb things and believed more irrational things, and gotten over them.

comment by gwern · 2012-05-29T22:40:42.473Z · LW(p) · GW(p)

Iodine post up: http://www.gwern.net/Nootropics#iodine

I've been working on this off and on for months. I think it's one of my better entries on that page, and I imagine some of the citations there will greatly interest LWers - e.g. not just the general IQ impacts, but that iodization causes voters to vote more liberally.

I also include subsections for a meta-analysis to estimate effect size, a power analysis using said effect size as guidance to designing any iodine experiment, and a section on value of information, tying all the information together.

My general conclusion is that it looks like I should take some iodine, but currently self-experimentation is just too hard to do for iodine.

comment by Kindly · 2012-05-29T15:36:27.231Z · LW(p) · GW(p)

Ever since getting an apartment of my own I've found that, well, I spend more time alone than I used to. Rather than try to take every possible action to ensure that I'm alone as little as possible (which is desperate some of the time and silly a lot of the time) I want to try to learn to like being alone.

So what are some reasons to enjoy spending time alone as opposed to spending it with other people? Or other suggestions about how to self-modify in this way?

Replies from: knb
comment by knb · 2012-05-30T11:05:25.522Z · LW(p) · GW(p)

Not sure if this counts as "alone" but you could schedule regular skype video calls with friends/relatives. It took some doing, but I'm a lot happier living alone when I still see and talk to my family a few times a week. I'm actually surprised more people don't do this.

Replies from: Kindly
comment by Kindly · 2012-05-30T17:09:39.110Z · LW(p) · GW(p)

Thank you for your advice, but I don't think that's exactly what I'm looking for. Rather than seek out human contact because I'm not comfortable being alone, I would rather be comfortable being alone and then seek out human contact for its own sake.

comment by Gastogh · 2012-05-22T22:12:29.797Z · LW(p) · GW(p)

I'm looking for a book recommendation on anthropology. I have almost no prior knowledge of the field. I'm after something roughly equivalent to what The Moral Animal was for evolutionary psychology: from-the-ground-up stuff that works by itself and doesn't assume significant background knowledge or further reading for a payoff. An easily accessible pop-writing approach à la The Moral Animal is a must-have; I can't read academic textbooks.

comment by NancyLebovitz · 2012-05-21T17:42:05.532Z · LW(p) · GW(p)

I'm reading Ursula Vernon's Digger (nominated for the Graphic Novel Hugo), and it's very much in the spirit of extrapolating logically from odd premises. Digger (a wombat) is sensible and pragmatic and known to complain about how irresponsible Dwarves are for using magic to shore up their mines.

comment by Suryc11 · 2012-05-21T00:48:13.178Z · LW(p) · GW(p)

My major (field of study) in college/university is most likely going to be philosophy. I'm an avid reader of this blog, and as such have internalized many LW concepts and terminology, particularly relating to philosophy. In short, should I cite this site if I make use of a LW concept - learnt several years ago on here - in a paper for a philosophy class? If yes (and I'm leaning towards yes), how?

In general, if one internalizes a blog-specific idea off of the Internet and then, perhaps unintentionally, includes it in a somewhat unrelated undergraduate paper, how does one go about referencing the blog - especially if the idea came from a comment that has since disappeared and/or cannot be found?

This is so far hypothetical, but I am sure that this situation will occur at least once in the next few years.

Replies from: beoShaffer
comment by beoShaffer · 2012-05-21T04:34:00.416Z · LW(p) · GW(p)

How you cite it depends on the citation format for the paper as a whole, but most major formats now have instructions on how to cite blogs. So check the reference book/website for whatever formatting style you're using for its section on citing blogs. A decent example is the OWL's guide to citing "electronic resources" in MLA, which is a fairly common style for philosophy papers.

Edit-fixed typo

comment by [deleted] · 2012-05-20T14:34:52.349Z · LW(p) · GW(p)

An excellent debate between SIAI donor Peter Thiel and George Gilder on:

"The Prospects for Technology and Economic Growth"

I suggest skipping the first 8 minutes since they are mostly intro fluff. Thiel makes a convincing case that we are living in a time of technological slowdown. His argument has been discussed on LessWrong before.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-05-22T22:28:59.020Z · LW(p) · GW(p)

I found Gilder so annoying (information does not trump the laws of physics!!) that I listened to this instead-- Thiel and Niall Ferguson at Harvard.

Does Gilder say anything intelligent? If he doesn't, does he get squashed flat?

comment by AlexSchell · 2012-05-17T12:59:29.634Z · LW(p) · GW(p)

There is an obvious-in-retrospect symmetry between overconfidence and underconfidence in one's predictions. Suppose you have made a class of similar predictions of the form A and have on average assigned 0.8 confidence to them, while 60% actually came true. You might say that you are suffering from overconfidence in your predictions. But when you predict A with confidence p, you also predict ~A with confidence (1-p): you have on average assigned 0.2 confidence to your ~A-type predictions, while 40% actually came true. So if you are overconfident in your A-type predictions you're bound to be underconfident in your ~A-type predictions.

Intuitively, overconfidence and underconfidence feel like very different sins. It looks like this is due to systematic tendencies in what we view as a prediction and what we don't -- in the exercise above, assuming the set of A-type beliefs is self-selected, it seems that the A-type beliefs count as "predictions" whereas ~A-type beliefs don't. Some potential factors in what counts as a "prediction": belief > 0.5; hope that the prediction will come true; the prediction is very specific and yet assigned a substantial credence (say, above 0.1), so is supported by a lot of evidence, whereas the negation is a nonspecific catch-all.
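
A toy calculation of this symmetry, using the numbers from the example above:

# Calibration on the A-type predictions and on their negations.
confidence_A = 0.8     # average confidence assigned to A-type predictions
hit_rate_A = 0.6       # fraction of A-type predictions that came true

confidence_notA = 1 - confidence_A   # 0.2, implicitly assigned to ~A
hit_rate_notA = 1 - hit_rate_A       # 0.4 of the ~A predictions came true

print(round(confidence_A - hit_rate_A, 2))        # 0.2: overconfident on A
print(round(confidence_notA - hit_rate_notA, 2))  # -0.2: underconfident on ~A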

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-05-17T13:19:37.889Z · LW(p) · GW(p)

There is an obvious-in-retrospect symmetry between overconfidence and underconfidence in one's predictions. Suppose you have made a class of similar predictions of the form A and have on average assigned 0.8 confidence to them, while 60% actually came true. You might say that you are suffering from overconfidence in your predictions. But when you predict A with confidence p, you also predict ~A with confidence (1-p): you have on average assigned 0.2 confidence to your ~A-type predictions, while 40% actually came true. So if you are overconfident in your A-type predictions you're bound to be underconfident in your ~A-type predictions.

Intuitively, overconfidence and underconfidence feel like very different sins. It looks like this is due to systematic tendencies in what we view as a prediction and what we don't -- in the exercise above, assuming the set of A-type beliefs is self-selected, it seems that the A-type beliefs count as "predictions" whereas ~A-type beliefs don't. Some potential factors in what counts as a "prediction": belief > 0.5; hope that the prediction will come true; the prediction is very specific and yet assigned a substantial credence (say, above 0.1), so is supported by a lot of evidence, whereas the negation is a nonspecific catch-all.

Yeah, we have discussed this before.

comment by maia · 2012-05-16T15:51:20.242Z · LW(p) · GW(p)

Question about anti-akrasia measures and precommitments to yourself.

Suppose you need to do action X to achieve the most utility, but it's somewhat unpleasant. To incentivize yourself, you precommit to give yourself reward Y if and only if you do action X. You then complete action X. But now reward Y has become somewhat inconvenient to obtain.

Should you make the effort to obtain reward Y, in order to make sure your precommitments are still credible?

Replies from: shminux, sixes_and_sevens, beoShaffer
comment by Shmi (shminux) · 2012-05-16T15:57:34.072Z · LW(p) · GW(p)

But now reward Y has become somewhat inconvenient to obtain. Should you make the effort to obtain reward Y, in order to make sure your precommitments are still credible?

Is there an equivalent reward that is easier to obtain?

comment by sixes_and_sevens · 2012-05-16T16:33:52.606Z · LW(p) · GW(p)

Can you provide some specific examples?

Replies from: Grognor
comment by Grognor · 2012-05-16T17:09:28.649Z · LW(p) · GW(p)

Let me make one.

Suppose you are reading your favorite blogs, when the idea strikes you, "Okay, I need to do X, but I can't do it without an incentive. I shall order chicken wings, which are delicious, upon X's completion."

Dozens of minutes later, X is finished! But wait! You fell victim to the planning fallacy! Everywhere in the city that delivers chicken wings is closed now because X took longer than you thought it would.

In this case, it would be fairly senseless to wait until the next day to order the wings, as by then the reward would be completely disconnected from the action. Driving 35 minutes to get them would also be pretty senseless. I don't know about driving 15 minutes.

This seems like a fairly difficult problem, but also one that simply will not occur very often, especially if you make your incentive something that's unlikely to be difficult to obtain by the time you finish X.

Replies from: sixes_and_sevens, None
comment by sixes_and_sevens · 2012-05-17T11:06:17.030Z · LW(p) · GW(p)

That's how I interpreted it as well, but I'm not sure the OP is distinguishing the signalling purpose of pre-commitment strategies from mechanisms of pre-commitment.

Reputations of pre-commitment are about signalling credible consequences in circumstances of asymmetric information. When bargaining with oneself, information is about as symmetric as it can get. It's not like you mistrust your future self's willingness to go through with getting chicken wings. Any obstacle to getting them is transparent to all parties (you), and shouldn't impact your future expectation of being able to reward yourself unless you're staggeringly incompetent at obtaining chicken wings. If that's the case, you'll probably factor this in when planning your incentive.

Mechanisms of pre-commitment are a more salient tool when bargaining with oneself over time (cf. dynamic inconsistency), but only when your goals are inconsistent over time. Post-X you presumably wants chicken wings as much as pre-X you, but is more informed about the cost of obtaining them. There is presumably a level of expense pre-X you would sensibly commit to for the specified reward. If some sort of catastrophe occurred as soon as you'd finished X, pre-X you wouldn't expect post-X you to crawl through the dust with your one remaining limb muttering "must...get...chicken...wings..."

The issue seems to boil down to "are you staggeringly incompetent at rewarding yourself? If not, don't worry."

comment by [deleted] · 2012-05-16T23:16:48.950Z · LW(p) · GW(p)

Are you entering into a subfunction of the original X/Y assessment here? As in, if X is done, then Y, but Y is itself a function of assessing the optimal reward for X?

If it's still important to add a reward of Y (in addition to the personal value of having completed X), you probably need to substitute with something novel and maintain the understanding that it is a reward for X (even if not the originally scoped one).

comment by beoShaffer · 2012-05-16T16:05:20.523Z · LW(p) · GW(p)

It depends on the difficulty of obtaining Y relative to its pleasantness, but in general I would say yes. Specifically, good anti-akrasia measures are valuable enough that you should be willing to go through quite a bit of effort to preserve them. Thus if the effect of obtaining Y in these circumstances is to preserve your precommitment ability, then it is worth expending a large amount of effort on Y. But you should also keep in mind the possibility that you will develop a negative association between fulfilling your precommitments and then having to go through a large amount of effort for a reward that isn't worth it. Is there some "guilty pleasure" or other suitable reward that you could substitute for Y, that would be keeping in the spirit of the bargain you made with yourself?

comment by Multiheaded · 2012-05-18T09:07:43.592Z · LW(p) · GW(p)

It's not very related to LW or rationality (although in technical terms it touches on Pascal's Mugging), but I want to post this underrated "creepypasta" anyway; it's one of my favourites and I remembered it after flipping through that hippie blog that Will linked me to:

On his way home that night, as he walked through town, a man stepped out of an alley in front of him. He tensed to defend himself, but the man just stood there. Looking him over, he realized the man looked like a hippie. Something of a comedy caricature of a hippie, really. Long unwashed hair and beard, sandals...and a sandwich board reading 'THE END IS NIGH'. That, he thought, was unusual, even for a hippie.

"You want something?" he asked.

"The world's ending," said the hippie. "I need your help."

He stepped around the hippie and kept walking. High as a kite, he thought to himself. The hippie started walking after him, and fell into step beside him.

"Please, I need your help," said the hippie.

"Look, man, I'm really not interested," he said, and kept walking.

The hippie leaned against a wall, watching him walk away. The hippie wasn't all that disappointed; lots of people gave this kind of response. Another skeptic, he thought to himself, fingering the ragged holes through the middles of his hands.