Open Thread, July 16-31, 2012

post by OpenThreadGuy · 2012-07-16T12:47:22.516Z · LW · GW · Legacy · 142 comments

If it's worth saying, but not worth its own post, even in Discussion, it goes here.

Comments sorted by top scores.

comment by gwern · 2012-07-24T18:54:48.436Z · LW(p) · GW(p)

Today I got some good news: my counter-signed contract with O'Reilly came back. It's now official - I'm writing an ebook on Quantified Self self-experimentation with sleep.

I look forward to including LW-related material on the perils of self-experimentation and bias. Any suggestions from readers on what they'd like to see or what material should be covered?

Replies from: Andy_McKenzie, FiftyTwo
comment by Andy_McKenzie · 2012-07-31T02:11:32.826Z · LW(p) · GW(p)

Congrats! Some topics I'd like to see, most of which you'll probably already cover (though they may be more on general "sleep" than "self-experimentation and sleep"):

1) distributions of sleep in general population and any information you can find on correlates

2) any well-powered intervention studies testing whether sleeping more improves outcomes in any way (more working memory, decreased bias, real-world measures, etc.)

3) whether lucid dreaming really might make your memory worse because you need that REM sleep to be "random" (generally, trade-offs to lucid dreaming)

4) trade-offs in sleep: what are the upsides/downsides to more, what are the upsides/downsides to less, and how drugs like modafinil might play into this.

Edit 7/30: fixed typo in #4

Replies from: Petra, gwern
comment by Petra · 2012-08-01T01:57:26.205Z · LW(p) · GW(p)

Re: #3, tradeoffs of lucid dreaming,

I don't know of any studies to support this, nor have I done sufficient systematic investigation of the phenomenon, but I am a lucid dreamer, and I have noted a general trend of increased alertness and improved memory of dreams after lucid dreams. I'm not well-versed enough in the science of dreaming to propose any credible explanation for this, nor do I have evidence that memory in general is improved. However, it has been consistently true that I have better memories of lucid dreams than non-lucid ones, and I am more likely to wake up fully alert after a lucid dream than is normal for me.

I know anecdotal evidence may not carry a great deal of weight, but I hope this is helpful.

comment by gwern · 2012-08-01T01:11:58.786Z · LW(p) · GW(p)

Those are some pretty challenging questions. #1 just requires some research into sleep tables and aggregate data, which while tedious is not necessarily difficult. #2 is worth looking into as a way to justify interest in sleep specifically.

But #3 seems impossible to answer now, since there are few enough lucid dreaming studies that likely none of them have investigated it, and I would expect any effect size to be small (which means the existing studies, which use _n_s of <~30 for the obvious reason that lucid dreamers are hard to find, will be badly underpowered to discover it) since lucid dreaming is fundamentally rare and dreams are short in duration. It'd be like asking whether getting 30 minutes less sleep is bad for your memory: possibly, but you're going to need an awful lot of data to spot the ill effects!

Question #4 is somewhat similar but may be answerable for specific values.

Replies from: Andy_McKenzie
comment by Andy_McKenzie · 2012-08-01T04:12:27.260Z · LW(p) · GW(p)

Those are some pretty challenging questions

That's what you get for setting pretty high standards with your previous work.

seems impossible to answer now, since there are few enough lucid dreaming studies that likely none of them have investigated it

Since the book is on self-experimentation, couldn't you see whether you yourself have done worse on DNB/SR flashcards on days after you have some degree of lucid dreaming?

Replies from: gwern
comment by gwern · 2012-08-01T17:00:42.247Z · LW(p) · GW(p)

Since the book is on self-experimentation, couldn't you see whether you yourself have done worse on DNB/SR flashcards on days after you have some degree of lucid dreaming?

I haven't worked on lucid dreaming in years, because I was so unsuccessful; I never got beyond improving my dream recall with a dream journal.

comment by FiftyTwo · 2012-08-01T15:58:50.158Z · LW(p) · GW(p)

Scientifically backed evidence on waking up and falling asleep?

comment by NancyLebovitz · 2012-07-16T17:22:23.179Z · LW(p) · GW(p)

Shouldn't this be Open Thread, July 16-31, 2012?

comment by Adele_L · 2012-07-16T16:55:25.885Z · LW(p) · GW(p)

I've been trying to wrap my head around the SPECKS vs TORTURE argument, and I still haven't been able to convince myself that TORTURE is the right answer.

One idea that I had would be to apply the whole thing to myself. Suppose Omega comes to me and offers me two choices:

  1. I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to be tortured continuously for fifty years first, but with no lasting harm.

  2. I can have a satisfying and fulfilling life for 3^^^3 days, but I'll wake up with a speck in my eye everyday.

I have to say that I would still pick choice 2 for myself. I know that if I add up the utilities in any standard way, that option 2 is going to be way lower, but I still can't get myself to choose 1. Even if you move the torture time so that it's random or at the end (to get rid of near mode thinking), I still intuitively prefer 2 quite strongly.

Even though I can't formalize why I think option 2 is better, feeling that it is the right choice for myself makes me a bit more confident that SPECKS would be the right choice as well. Also, this thought experiment makes me think the intuitive choice for SPECKS is less about fairness than I thought.

If anyone has any more insight about this, that would be helpful.

Replies from: TheOtherDave, army1987, maia, David_Gerard, None, NancyLebovitz, DanielLC, RomeoStevens, Emile
comment by TheOtherDave · 2012-07-16T17:07:45.870Z · LW(p) · GW(p)

No novel insights; you've precisely put your finger on why this example is interesting: it pits our intuitions against the conclusions of a certain flavor of utilitarianism. If we embrace that flavor of utilitarianism, we must acknowledge that our intuitions are unreliable. To accept our intuitions as definitive, we must reject that flavor of utilitarianism. If we wish to keep both, we must find a radically different way of framing the scenario.

The interesting stuff is in what comes next. If I reject that flavor of utilitarianism, what do I use instead, and how does that affect my beliefs about right action? If I reject my intuitions as a reliable source of information about good and bad outcomes, what do I use instead, and how does that affect my beliefs about right action? If I try to synthesize the apparent contradiction, how might I do that, and where does that leave me?

comment by A1987dM (army1987) · 2012-07-17T11:46:24.515Z · LW(p) · GW(p)

It's not obvious that the ‘utilities’ for different people should add as sublinearly as those for one person do. So a better comparison would be whether you prefer to receive a dust speck in the eye with probability p or 50 years of torture with probability p/3^^^3. (This is essentially the veil-of-ignorance thing, where p is 3^^^3 divided by the total population.)

Wow, it sounds terribly like Pascal's Mugging now. Had anyone noticed that before?

comment by maia · 2012-07-16T17:22:04.884Z · LW(p) · GW(p)

How do you feel about this framing? Would you rather have a 1 in 3^^^3 chance of being tortured for 50 years, or get a dust speck in your eye? (This is analogous to jaywalking vs. waiting a minute for a crosswalk.)

comment by David_Gerard · 2012-07-16T22:04:07.498Z · LW(p) · GW(p)

Thinking about it as if it's a meaningful choice may soften you up for Pascal's scams.

comment by [deleted] · 2012-07-16T20:28:27.533Z · LW(p) · GW(p)

but with no lasting harm.

The way you phrase it, it makes me think this caveat is really the key point. Consider if Omega doesn't offer that and says this:

  1. I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to be tortured continuously for fifty years first.

  2. I can have a satisfying and fulfilling life for 3^^^3 days, but I'll wake up with a speck in my eye everyday.

My intuitive response would be "Don't pick 50 years of torture, you'll die!" Which is generally the case. It's explicitly not the case in the first scenario, because of the "but with no lasting harm." caveat. But without that caveat, I doubt I would survive 50 years of torture, which means that what would happen afterwards is useless, since I'd be dead.

For instance, imagine if the torture disutility was something like bleeding.

I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to lose 50 gallons of blood all at once first.

I can have a satisfying and fulfilling life for 3^^^3 days, but one blood cell will be removed from my body every day.

Or alternatively, starving.

I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to go without food for 50 years first.

I can have a satisfying and fulfilling life for 3^^^3 days, but one crumb will be removed from my plate every day.

My intuitive response will successfully allow me to avoid death!

But with the caveat in, your intuitive response consigns you to a greater total inconvenience because it doesn't quite get the caveat or doesn't trust the person giving the caveat.

Now, Omega is defined as generating circumstances which are 100% trustworthy. So to properly grasp the question on an intuitive level means you have to intuitively grasp caveats such as "I am certain that I know that I am talking to Omega, Omega is certainly correct at all times, and Omega said I certainly won't suffer any lasting harm, and I certainly understood Omega correctly when he said that." Because that's all stipulated as the fine print caveats in an Omega problem, in general.

If you think to yourself "Well, I'm NOT certain of any of those, I'm just really really sure of them!" and then rerun the numbers, then I think the intuitive response goes back to being correct. I mean, consider the following question where you aren't certain about that caveat, just really sure:

  1. I can have a satisfying and fulfilling life for 3^^^3 days. However, I have to be tortured continuously for fifty years first, but with a 99.99% chance of no lasting harm and a .01% chance of death.

  2. I can have a satisfying and fulfilling life for 3^^^3 days, but I'll wake up with a speck in my eye everyday.

Does this seem insightful, or am I missing something?

Replies from: fubarobfusco, wedrifid
comment by fubarobfusco · 2012-07-16T22:55:41.579Z · LW(p) · GW(p)

So to properly grasp the question on an intuitive level means you have to intuitively grasp caveats such as "I am certain that I know that I am talking to Omega, Omega is certainly correct at all times, and Omega said I certainly won't suffer any lasting harm, and I certainly understood Omega correctly when he said that."

Trouble is, I am running on corrupted hardware, and am not capable of being in the epistemic state that the problem asks me to occupy. Pretending that I am capable of such epistemic states, when I am not, seems like a pretty bad idea.

comment by wedrifid · 2012-07-16T21:16:04.739Z · LW(p) · GW(p)

My intuitive response would be "Don't pick 50 years of torture, you'll die!" Which is generally the case. It's explicitly not the case in the first scenario, because of the "but with no lasting harm." caveat. But without that caveat, I doubt I would survive 50 years of torture, which means that what would happen afterwards is useless, since I'd be dead.

Being dead does not seem to fit the description "have a satisfying and fulfilling life for 3^^^3 days" with or without the caveat. Instead you should be concerned that the "lasting harm" changes you in such a way that what remains is still 'satisfied and fulfilled' but in such a way that you as of now would not consider the outcome desirable or would consider the person remaining after the torture to be sufficiently not-you-anymore.

comment by NancyLebovitz · 2012-07-16T21:14:27.350Z · LW(p) · GW(p)

Does "no lasting harm" make sense if we're talking about human beings?

Does it matter how long recovery takes? (Probably not, so long as it's not an amount of time which requires special notation.)

Can being removed from your usual life for fifty years count as lasting harm?

comment by DanielLC · 2012-07-16T18:52:30.965Z · LW(p) · GW(p)

Do you feel like baseline happiness makes a difference? If not, imagine starting with 3^^^3+1 people each being tortured for 50 years. You can get one of them out of torture, at the expense of a pain increase equivalent to a speck of dust in the eye for each of the others. If you do this for everyone, each one will have endured the equivalent of 3^^^3 dust specks, an amount of pain far surpassing 50 years of torture.

comment by RomeoStevens · 2012-07-17T00:42:35.601Z · LW(p) · GW(p)

I prefer oblivion to significant amounts of negative utility for any sustained period.

Replies from: billswift
comment by billswift · 2012-07-17T07:55:52.396Z · LW(p) · GW(p)

Except for possible disutility to family and friends, oblivion has a lot to recommend it; not least that you won't be around to regret it afterward. It isn't something to seek, since you won't have any positive utility afterward, but it isn't something that is worth enduring much suffering to avoid either.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-07-17T11:58:29.592Z · LW(p) · GW(p)

Except for possible disutility to family and friends, oblivion has a lot to recommend it; not least that you won't be around to regret it afterward.

I judge that a disadvantage.

Replies from: billswift
comment by billswift · 2012-07-17T12:40:42.964Z · LW(p) · GW(p)

If you read the second sentence, I do too; it's just a very weak disadvantage when compared to almost any suffering. If I didn't consider it at least somewhat disadvantageous, I wouldn't be around now to write about it.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-07-17T12:44:47.254Z · LW(p) · GW(p)

If you read the second sentence, I do too; it's just a very weak disadvantage when compared to almost any suffering.

That seems to imply that you would rather commit suicide than, say, endure a toothache for a few days. Really?

comment by Emile · 2012-07-16T19:42:20.432Z · LW(p) · GW(p)

Putting the torture first changes things: we discount future events, so of course the torture scenario seems worse.

comment by AandNot-A · 2012-07-16T14:24:29.536Z · LW(p) · GW(p)

How exactly is sound additive? There's a festival near my home (maybe some 2 kilometers away) and I know no individual has the capacity to shout at a volume that reaches my house. But when the 80k people that are watching do it, then it reaches my house. So, how does that happen?

Replies from: Barry_Cotter, billswift, ciphergoth
comment by Barry_Cotter · 2012-07-16T14:59:00.107Z · LW(p) · GW(p)

Not a real answer.

Sound is measured in bels. This is a logarithmic scale: 2 bels is 10 times the intensity of 1, 3 bels ten times the intensity of 2, etcetera. Since sound is a wave, I expect the intensity to diminish at constant*(inverse square of distance).

Arbitrarily say each individual emits 1 unit of sound.

80,000 / (2,000^2) = 0.02, so the magnitude of the arbitrary unit is 0.02 at 2 km.
80,000 / (1,000^2) = 0.08, so the magnitude of the arbitrary unit is 0.08 at 1 km.

Basically it's an example of an inverse square law. No real understanding of physics was used in this comment.
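To put rough numbers on this, here's a minimal sketch (mine; the per-person acoustic power of 0.01 W is an assumed round figure, and it treats the crowd as incoherent sources whose intensities add linearly):

```python
import math

# Rough decibel level of N incoherent sources, each of assumed power P,
# heard at distance r with free-field spherical spreading.
def sound_level_db(n_sources, distance_m, source_power_w=0.01):
    # Inverse-square law: I = P / (4 * pi * r^2) per source.
    intensity = n_sources * source_power_w / (4 * math.pi * distance_m ** 2)
    i_ref = 1e-12  # standard reference intensity in W/m^2
    return 10 * math.log10(intensity / i_ref)

print(sound_level_db(1, 2000))      # one shouter at 2 km: ~23 dB, lost in background noise
print(sound_level_db(80000, 2000))  # the whole crowd: ~72 dB, clearly audible
```

The crowd gains 10*log10(80,000), about 49 dB, over a lone shouter at the same distance.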

Replies from: AandNot-A
comment by AandNot-A · 2012-07-16T16:11:07.168Z · LW(p) · GW(p)

This was more specific than I imagined, thank you. The basic intuition should be that dropping two rocks in a pond makes two waves that combine into one bigger wave, right?

comment by billswift · 2012-07-16T16:56:12.584Z · LW(p) · GW(p)

Sounds are waves transmitted by air. Waves can reinforce or cancel each other, but cancelling can only go so far (to zero), so what is left is the sound resulting from the reinforced waves.

comment by Paul Crowley (ciphergoth) · 2012-07-16T17:44:44.567Z · LW(p) · GW(p)

I'm not an expert, but here's what I'd guess:

If 100 people stood exactly the same distance from your house, and sang the same pure note in phase with each other, the resulting sound would arrive with an amplitude 100 times greater than if it was just one person.

If 100 people stood at random distances from your house, and sang the same pure note all with different phases, the resulting sound would arrive with an amplitude only about 10 times greater, on average.

Replies from: DanielLC
comment by DanielLC · 2012-07-16T18:59:54.213Z · LW(p) · GW(p)

Correct, but misleading. The power is proportional to the square of the amplitude. Also, the people being in phase depends on where you're standing. If you stand in just the right (or rather, wrong) spot, you get 100 times the amplitude and 10,000 times the power, but if you average the power output in all directions, it would end up being 100 times the power.
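A quick numerical check of the random-phase case (my sketch, not part of the original exchange):

```python
import cmath
import random

# Sum N unit-amplitude waves with independent random phases, many times,
# and check that the *power* (amplitude squared) averages to N.
def mean_power(n_sources, trials=2000):
    total = 0.0
    for _ in range(trials):
        amplitude = sum(cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
                        for _ in range(n_sources))
        total += abs(amplitude) ** 2
    return total / trials

print(mean_power(100))  # ~100: power adds linearly, amplitude grows as sqrt(N)
```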

comment by MileyCyrus · 2012-07-16T16:55:35.280Z · LW(p) · GW(p)

Do psychology researchers endorse the near/far mode model of thinking, or is that just Robin Hanson's pet theory?

Replies from: None, Douglas_Knight
comment by [deleted] · 2012-07-16T17:10:39.902Z · LW(p) · GW(p)

I don't know how popular it is, but everyone else calls it "construal level theory". I also don't know why he decided to rename their close/distant to near/far.

Replies from: Viliam_Bur, CharlieSheen
comment by Viliam_Bur · 2012-07-18T04:40:31.300Z · LW(p) · GW(p)

I also don't know why he decided to rename their close/distant to near/far.

Perhaps because "close/distant" is far, but "near/far" is near.

This comment was originally meant as a joke, but seriously, the "near/far" pair is easier to remember.

comment by CharlieSheen · 2012-07-16T17:26:10.928Z · LW(p) · GW(p)

I think he came up with his theory first and then found it was basically construal level theory. In any case in the LW/OB social group he is hardly unique when it comes to using quirky terminology for existing stuff.

Replies from: Douglas_Knight, maia
comment by Douglas_Knight · 2012-07-16T21:23:35.094Z · LW(p) · GW(p)

I believe your account of the history is mistaken. Here are the earliest posts tagged Near/Far. In the earliest he cites the Construal Level Theory post, but does not emphasize the terms "near" and "far." In the comments, Roko writes out "near/far," before there has been any later post. A month later is an untagged post that emphasizes "near" and "far." Similarly, the third post emphasizes them while condemning CLT as awkwardly named.

The second tagged post uses the near/far dichotomy, but does not seem to me to be about construal level theory. This is evidence that he adopted the terms from his own theory, but search engines do not provide earlier posts using the terms.

Replies from: CharlieSheen
comment by CharlieSheen · 2012-07-31T12:19:47.178Z · LW(p) · GW(p)

Thank you for the correction and actual info! I should have made it clearer I was speculating.

comment by maia · 2012-07-16T18:39:31.800Z · LW(p) · GW(p)

I think I've heard LW/OB people using near/far mode to talk about how their desire to do certain things depends on how far away they are (e.g. "In far mode I want to have exercised every day a month from now, but in near mode, I don't want to do it today"). Is there actually any connection between this sort of usage and construal level theory? All construal level theory covers, to my knowledge, is how our brains map different kinds of distance into the same buckets.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-07-16T21:43:19.159Z · LW(p) · GW(p)

That example is probably just hyperbolic discounting. But CLT does say that we think differently about near/far things. In particular, we think more abstractly about distant things. That sounds like a stronger claim than yours. Try Robin Hanson's first post on the subject. Do you agree with him? with his source?

Here is an example of hypocrisy where RH goes beyond normal CLT, but where I think it is quite fair to say that there is some connection.

Replies from: maia
comment by maia · 2012-07-17T01:49:51.477Z · LW(p) · GW(p)

His source in the first place is where I learned about construal-level theory, and I find/found it quite convincing. Hanson seems pretty accurate in his summary/analysis there, too.

In the second post: The Good Samaritan experiment seems like a stretch to apply here, but his other source is just the kind of experiment I would have thought should tell you whether CLT does apply to "ideals" or not, and it appears that it does. Thanks for pointing me to these posts.

comment by Douglas_Knight · 2012-07-16T20:15:34.606Z · LW(p) · GW(p)

Psychology is extremely fragmented. I don't think there is any theory well described as "endorsed by psychology researchers."

Replies from: MileyCyrus
comment by MileyCyrus · 2012-07-17T00:06:02.843Z · LW(p) · GW(p)

I'm not asking for a consensus. I just want to know if there's anyone in the field who has endorsed near/far mode.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-07-17T00:14:48.040Z · LW(p) · GW(p)

A fair question. Khoth answered it, but since you still write in the present tense, I'll answer, too: Robin Hanson started with Construal Level Theory and renamed it near/far. In the first link and this summary, he cites reviews, so you can check how closely he matches his sources; and with the name you can find the rest of the literature and check how closely he matches it.

Replies from: MileyCyrus
comment by MileyCyrus · 2012-07-17T00:19:53.492Z · LW(p) · GW(p)

Thanks!

comment by Mitchell_Porter · 2012-07-17T02:37:54.041Z · LW(p) · GW(p)

Gerard 't Hooft, who got the Nobel Prize for work that made the standard model possible, has been writing a series of papers in which he tries to get quantum field theory from a classical cellular automaton. He described a bosonic theory in May and now he has a fermionic theory. It's quite deep, because he's trying to get reality holographically, from the "worldsheet" of a superstring; that is, these cellular automata are 1-dimensional, and describe degrees of freedom along a string, and macroscopic space is built up from these. String theory already works that way (that is, instead of a string moving through space-time, you can view it as fields evolving on the string, one field for each space-time coordinate), but normally one supposes that the fundamental theory is still quantum. It's hard to believe that these ideas could work exactly as outlined, but it's the beginning of yet another line of inquiry.

comment by beoShaffer · 2012-07-16T20:08:44.967Z · LW(p) · GW(p)

Brain plastination in the Chronicle of Higher Education.

comment by FiftyTwo · 2012-08-01T16:04:32.459Z · LW(p) · GW(p)

Could the great filter just be a case of anthropic bias?

  • Assume any interplanetary species will colonise everything within reasonable distance in a time-scale significantly shorter than it takes a new intelligent species to emerge.
  • If a species had colonised our planet their presence would have prevented our evolution as an intelligent species.
  • Therefore we shouldn't expect to see any evidence of other species.

So the universe could be teeming with intelligent life, and there's no good reason there can't be any near us, but if there were, we would not have existed. Hence we don't see any.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-08-01T16:27:05.381Z · LW(p) · GW(p)

I've never heard that argument made before, but it makes a whole lot of sense to me.
If anyone knows of a more formal treatment of it anywhere, I would love a pointer.

comment by Viliam_Bur · 2012-07-19T18:08:11.840Z · LW(p) · GW(p)

Two minor inconveniences using LW site:

In "Inbox" there are two links under each message. Under discussion comments, "Report" is on the right. Under private messages, "Report" is on the left. (I have already clicked it mistakenly; expecting the "Reply" button to be there. Good that is asked whether I am sure.) Please move the links consistently.

On the starting page, there is a list of recent comments. I would like to go to the article where a comment appeared, but the article is not hyperlinked; there are only hyperlinks to the comment and its author. I can click on the comment and from there open the original article, but because that counts as two visits to the same article, I lose the information about which comments are unread. (This can be worked around by clicking on the comment's author instead and reaching the article from there, but I usually forget to do that, and then it's too late.) Please make the article name clickable as well; or don't clear the unread flag on comments when someone is viewing just one selected comment. (Or clear the unread flag only on the comments actually appearing on the screen, if that would be simpler to implement.)

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-07-19T19:54:59.800Z · LW(p) · GW(p)

Yes! Both of these are massively annoying. The last solution you gave is the correct one. Mark as read precisely the comments that were read.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-07-19T20:50:48.667Z · LW(p) · GW(p)

Maybe it would require too much of a change. For example, I can imagine implementing unread comments by just remembering one number per user per article -- the highest seen comment ID in a given article. This would save a lot of database space, so I suspect it is implemented like this.

If this is the case, then please just don't change this value when only selected comments are displayed.
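Here is a minimal sketch of the scheme I'm imagining (names invented for illustration; I have no idea what LW's actual code looks like):

```python
# One integer per (user, article): the highest comment ID already seen.
last_seen = {}  # (user_id, article_id) -> watermark comment ID

def unread_comments(user_id, article_id, comment_ids):
    watermark = last_seen.get((user_id, article_id), 0)
    return [c for c in comment_ids if c > watermark]

def mark_read(user_id, article_id, displayed_comment_ids, full_thread):
    # Only advance the watermark when the full thread is displayed;
    # viewing one selected comment leaves the unread flags untouched.
    if full_thread and displayed_comment_ids:
        key = (user_id, article_id)
        last_seen[key] = max(last_seen.get(key, 0), max(displayed_comment_ids))
```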

comment by Grognor · 2012-07-30T02:07:01.025Z · LW(p) · GW(p)

For a while, I assumed that I would never understand UDT. I kept getting confused trying to understand why an agent wouldn't want or need to act on all available information and stuff. I also assumed that this intuition must simply be wrong because Vladimir Goddamned Nesov and Wei Motherfucking Dai created it or whatever and they are both straight upgrades from Grognor.

Yesterday, I saw an exchange involving Mitchell Porter, Vladimir Nesov, and Dmytry_messaging. The latter of these insisted that one-boxing in transparent Newcomb's (when the box is empty) was irrational, and I downvoted him because of course I knew he was wrong. Today at work (it is a mindless job), I thought for a while about the reply I would have given private_messaging if I did not consider it morally wrong to reply to trolls, and I started thinking things like how he either doesn't understand reflective consistency or doesn't understand why it's important and how if you two-box then Omega predicted correctly and I also thought,

"Well sure the box is empty, but you can't condition on that fact or else-"

It hit me like a lightning bolt. That's why it's called updateless! That's why you need to- oh man I get it I actually get it now!

I think this happened because of how much time I've spent thinking about this stuff and also thanks to just recently having finished reading the TDT paper (which I was surprised to find contained almost solely things I already knew).

comment by Morendil · 2012-07-16T15:21:50.358Z · LW(p) · GW(p)

I took this test for the Big Five twice at an interval of two years. The Big Five traits have been discussed on LW, mostly in a favorable light.

My scores were as follows, showing quite significant variations:

  • O65-C52-E42-A44-N37 on 12/06/2010
  • O30-C41-E31-A44-N32 on 29/06/2012

As a result I have updated quite a bit away from lending much credence to this particular self-administered Big Five test; either the test itself is flawed, or self-administration in general doesn't ensure stability of the measurements, or the Big Five theory has a problem.

Replies from: VincentYu, fubarobfusco, gwern, siodine
comment by VincentYu · 2012-07-16T19:26:39.150Z · LW(p) · GW(p)

I agree with gwern that there's not really much variation apart from the openness domain. It's a bit dangerous to use percentile rankings on internet assessments for longitudinal studies, though – you never know if the norms have been changed, or if you have been normed against a different population when retaking the test due to differences in, e.g., IP address or age. It would be best to record the raw scores as well, if possible.

The test you linked to was created by Gosling et al. (2004) for a study on web-based Big Five tests. (I found it funny that this test was also created for the same study – ignoring the substantial differences in... decoration, they should give similar results.) The inventory in that test is the Big Five Inventory (BFI) (most recent reference: John et al., 2008); it's quite widely used.

I recommend the IPIP-NEO for anyone who wishes to do a self-assessment for the Big Five. That link provides two versions: Goldberg (1999) developed the original 300-item inventory, and Johnson (2011) shortened that to a 120-item inventory. For those who have time, the 300-item version is psychometrically superior, as expected. There are two main advantages to the IPIP-NEO, compared to alternatives like the BFI: (1) It was designed to correlate with the commercially-distributed NEO-PI-R, which remains the most popular inventory in the literature, (2) It gives percentile rankings on the 30 facet-level scales as well as the 5 domain-level scales in the NEO-PI-R (an example report).

(I recently spent ~2 weeks doing a literature review of personality psychology, with a brief focus on internet self-assessments for the Big Five. For a very brief overview of the field, I recommend Robins and Donnellan's (2009) article in The Corsini Encyclopedia of Psychology. For an up-to-date review of the Big Five, I recommend McCrae's (2009) article in The Cambridge Handbook of Personality Psychology.)

comment by fubarobfusco · 2012-07-16T20:12:07.082Z · LW(p) · GW(p)

This test seems to be measuring your self-image, not your behavior.

Consider the difference between asking, "Do you see yourself as someone who starts quarrels with others?" and following the person around for a week and seeing if they start quarrels with others.

I'd like to know how well-calibrated people's self-images are. It seems to me that for some variables, many people's self-images are very poorly calibrated.

comment by gwern · 2012-07-16T15:46:39.908Z · LW(p) · GW(p)

That doesn't seem to be much of a variation, except for your Openness - and without knowing more about you, I couldn't say whether that's good or bad...

comment by siodine · 2012-07-16T19:43:17.651Z · LW(p) · GW(p)

http://neuroskeptic.blogspot.com/2012/03/personality-without-genes.html

The author in the comments:

Scud: I never said it had to be wrong. I'm just saying that if you accept that there are no Big 5 SNPs, then the explanation must be either:

1) Personality is independent of genetics
2) The Big 5 is a poor measure of personality, or
3) We just haven't found the personality SNPs yet because our sample sizes are too small or our stats aren't clever enough.

Or a mixture of those.

Also interesting: http://blogs.discovermagazine.com/gnxp/2012/06/heritability-of-behavioral-traits

Replies from: Douglas_Knight
comment by Douglas_Knight · 2012-07-16T20:11:42.726Z · LW(p) · GW(p)

I believe that the original post mistakenly implies that height SNPs have been replicated. Similarly, IQ SNPs haven't been replicated. Height and IQ have much higher retest correlation and much higher heritability than any big 5 factor. So option 3 is pretty reasonable. Option 2 might be true, but it doesn't help explain the data.

Fitness-relevant variation should be hard to detect, because selection is purifying it. If personality is fitness-neutral, then it might be easier to detect.

Replies from: siodine
comment by siodine · 2012-07-16T20:18:47.870Z · LW(p) · GW(p)

I forgot about that:

Take height. Though health and nutrition can affect stature, height is highly heritable: no one thinks that Kareem Abdul-Jabbar just ate more Wheaties growing up than Danny DeVito. Height should therefore be a target-rich area in the search for genes, and in 2007 a genomewide scan of nearly 16,000 people turned up a dozen of them. But these genes collectively accounted for just 2 percent of the variation in height, and a person who had most of the genes was barely an inch taller, on average, than a person who had few of them. If that’s the best we can do for height, which can be assessed with a tape measure, what can we expect for more elusive traits like intelligence or personality?

From Steven Pinker in http://www.nytimes.com/2009/01/11/magazine/11Genome-t.html

Replies from: billswift, Douglas_Knight
comment by billswift · 2012-07-16T22:57:06.474Z · LW(p) · GW(p)

The genome is the ultimate spaghetti code. I would not be at all surprised if some genes code for the direct opposite characteristics in the presence of other genes. It is going to take more than just running relatively simple correlation studies to untangle the functions of most genes. We are going to have to understand what proteins they code for and how they work.

comment by Douglas_Knight · 2012-07-16T22:59:16.609Z · LW(p) · GW(p)

Yes, that's all people claim, and I believe even this 2% claim failed to replicate.

comment by NancyLebovitz · 2012-07-30T08:59:27.972Z · LW(p) · GW(p)

Something to defend: How Machiavelli's love of Florence led to the invention of utilitarianism and political science.

comment by Multiheaded · 2012-07-22T14:05:43.712Z · LW(p) · GW(p)

Holy shit, I just realized that looking from a TDT perspective, a Parfit's Hitchhiker, a Newcomb's Problem and ye olde one-shot PD... are more or less the same challenge, just with a few variables changed! They all can be resolved optimally and consistently only by being something, not just doing something.

Obvious in hindsight (that's sort of what EY has been meaning with that whole sequence), but there you go.

Replies from: wedrifid
comment by wedrifid · 2012-07-23T01:02:35.421Z · LW(p) · GW(p)

Holy shit, I just realized that looking from a TDT perspective, a Parfit's Hitchhiker, a Newcomb's Problem and ye olde one-shot PD... are more or less the same challenge, just with a few variables changed! They all can be resolved optimally and consistently only by being something, not just doing something.

Sometimes we even just lump them in as "Newcomblike" and be done.

Replies from: Multiheaded
comment by Multiheaded · 2012-07-23T01:16:27.110Z · LW(p) · GW(p)

Yeah, it's also obvious now. I used to think that "Newcomblike" referred strictly to variations upon the "Omega and boxes" set-up.

comment by CronoDAS · 2012-07-17T02:08:06.494Z · LW(p) · GW(p)

I want to give my 13 year old cousin a book on atheism for teenagers. Her mom has been raising her Catholic and had also sent her to a locally well-regarded Jewish preschool, saying things about "heritage" and such. (Her father is a non-believer but apparently hasn't objected to this religious upbringing.)

My parents say that doing so is a bad idea because it will offend my aunt. I feel strongly about my atheism and want to do something in this vein. Any advice?

Replies from: phonypapercut, wedrifid, Xachariah, Bruno_Coelho, maia
comment by phonypapercut · 2012-07-17T02:52:23.476Z · LW(p) · GW(p)

I'd suggest not giving her a book overtly about atheism. Something more broadly about skepticism would be a better choice I think. The Demon-Haunted World gets a lot of recommendations, though I haven't actually read it myself.

Replies from: philh
comment by philh · 2012-07-18T16:53:49.274Z · LW(p) · GW(p)

In non-fiction, The Philosophy Gym might be suitable. I've read the first three chapters and they seemed fairly clear-thinking. (I was reading with an eye to writing a review; it's been a month or so since I picked it up, but there was no particular reason I stopped.)

comment by wedrifid · 2012-07-17T03:41:27.117Z · LW(p) · GW(p)

I feel strongly about my atheism and want to do something in this vein. Any advice?

Get over it. Overt evangelism really isn't worth the hassle if you aren't doing it for the expectation of divine reward. If you want to have influence try to be the coolest seeming older cousin possible and hope your influence rubs off somewhat.

Replies from: CronoDAS
comment by CronoDAS · 2012-07-17T03:49:45.575Z · LW(p) · GW(p)

It's "Someone is wrong on the internet" syndrome, only much worse because it's "Someone I know is wrong right next to me", which makes it much stronger.

ETA: Also, it's just that atheism seems so obvious to me that anyone with a shred of sense should easily be able to see that Christianity is false, just by praying and seeing what happens. I know intellectually that religious people aren't actually idiots or insane people, but I feel like they are.

comment by Xachariah · 2012-07-17T07:49:47.572Z · LW(p) · GW(p)

Have you thought about giving her something with strong rationalist themes instead? HPMoR is what I'm thinking of specifically, but other stuff could work.

If she doesn't learn how to think rationally, an atheist book may or may not work regardless. If she does know how to think rationally, it's a matter of time before she sees the inconsistencies in religion anyhow. Plus, fiction is a much more compelling read than argument books, and you get to keep plausible deniability without offending your aunt.

Replies from: Kaj_Sotala, CronoDAS
comment by Kaj_Sotala · 2012-07-18T07:36:10.769Z · LW(p) · GW(p)

If she does know how to think rationally, it's a matter of time before she sees the inconsistencies in religion anyhow.

Note that this presumes her to be approaching religion from a fact-based perspective: e.g. treating it as just a set of empirical beliefs. This is true for some people, but not for all. There are many people who approach religion from an emotion-based perspective, where they start from the emotion of faith which never goes away, even if they intellectually acknowledge that they have no real justification for it. And then there are people who are somewhere in between: they have an emotion of faith, but one which can be affected by factual knowledge.

It seems to me like a common failing of many atheists, including many LW posters, is that they've never experienced that emotion and therefore presume that all religious people treat their faith as merely a set of beliefs - which seems to me like an utter misunderstanding of the actual psychology of religion. It also disposes them to consider all religious people "stupid" for not seeing what they consider obvious, failing to consider the fact that religious people might see those things but in some cases elect to ignore them. Even if atheists do manage to see this, they call it "belief-in-belief" and say it's not actually real belief at all - which is still missing the point.

Replies from: Nornagest
comment by Nornagest · 2012-07-18T08:08:20.089Z · LW(p) · GW(p)

Is religious faith an emotion? That's not me being a smug empiricist, I'm actually curious. I've talked to enough theists and read enough apologia to understand that a lot of folks have a strong sense of the numinous that doesn't really go away, but I know very little about its actual phenomenology.

Replies from: Kaj_Sotala, Kaj_Sotala
comment by Kaj_Sotala · 2012-07-18T13:06:59.960Z · LW(p) · GW(p)

Here's what one person (Janos Honkonen) commented in that discussion, which I thought was a pretty awesome description:

Okay, hmm. Imagine a situation where you started seeing a new color. It would be damn difficult to describe how it looks and why the heck you feel that seeing that color made things shine just a bit brighter and more beautiful, and somehow that gave you certain kind of serenity that somehow makes you be a bit less of a dick than you'd probably be without it, maybe. Some other people would see that color and make a huge amount of noise and nuisance about it, and others would say that people seeing that color are deluded, idiots or psychotic. You'd sit in the middle wishing everybody would just shut the fuck up, especially those people who forget that the main "use" of the color is to see it and see a different world and you, not to make noise about it existing and harass other people about it.

This is more or less how it works in my head, and it's not like I can fucking help it. Instead of dimming it, learning about biology, the cosmos, neurology and psychology makes that color just burn brighter and more beautiful.

Also, Googling a bit I found this summary of Jonathan Haidt's research into the experience of sacredness. Reading that and Janos' description together made me a little more convinced that actually, just about everyone has experienced something akin to religious belief - it's just that (some varieties of) religious people experience something similar far more often.

ETA: Janos pointed out that the experience of awe in nature and the experience of the divine feel different, and I should clarify that I didn't think that the experience of nature and the experience of the sacred would be exactly the same, just... somewhere in the same rough neighborhood of experience-space, analogous to the way that listening to a good song is quite different from reading a good book, but still closer to reading a good book than getting punched in the face.

comment by Kaj_Sotala · 2012-07-18T10:24:23.877Z · LW(p) · GW(p)

"Emotion" probably isn't the best word. I cross-posted this comment on my Facebook account, where one person commented that

I agree, faith is not an emotion, just as love isn't. They are landscapes of experience which can be ornamented by a wide variety of emotions.

I'm not sure I agree with that entirely either, but it seems to be more in the right direction than just calling it an emotion is.

comment by CronoDAS · 2012-07-17T09:07:40.577Z · LW(p) · GW(p)

Maybe I should just go with the His Dark Materials trilogy first...

Replies from: drethelin
comment by drethelin · 2012-07-18T18:28:34.817Z · LW(p) · GW(p)

It's basically atheist narnia, so that makes sense.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-24T18:31:01.453Z · LW(p) · GW(p)

...it is?

I found it hopelessly nihilistic and self-defeating as a moralistic tale; the big moral struggle is against Original Sin, but it is framed in the end in some New Sin instead.

It only made sense to me as a story about one monolithic Authority being replaced by another, which institutes rules to try to prevent themselves from being supplanted by the same means in the future.

Replies from: drethelin
comment by drethelin · 2012-07-24T19:42:21.612Z · LW(p) · GW(p)

The big moral struggle is against GOD. In the end, they kill god, and then save the universe by having sex, ie by denying puritanical prudishness. Then society goes on to live happily after without god's pernicious influence, instead of everyone living happily in heaven.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-24T20:12:40.406Z · LW(p) · GW(p)

Except that it turns out they weren't fighting god, they were fighting an apparent angel who other angels claimed was god, fighting on the side of angels who were mostly interested in making more angels. And the protagonists didn't go on to live happily, it's implied that they were kind of depressed for the rest of their lives. If it's Narnia, it's William Blake's Narnia, not an atheist's.

Blake's influence is pretty clear (right down to the sex thing), and the whole thing could be interpreted as a Jesus allegory, within the framework of William Blake's belief that Satan, not God, was in the right, but fought the war immorally. Jesus, in Blake's works, was a divine entity who chose to oppose God morally, and so won. (Asriel and Coulter would be Satan; Lyra and Will, Jesus, who is notably absent from the story's religions.)

comment by Bruno_Coelho · 2012-07-17T07:55:23.380Z · LW(p) · GW(p)

I have much the same opinion; the atheism vs. theism debate raises red flags. People end up talking about improbable events, morality, friends, and parents in an emotional mode.

Teach her how to think and some heuristics, not pre-baked beliefs.

comment by maia · 2012-07-17T03:03:54.247Z · LW(p) · GW(p)

You don't mention what your young cousin believes. That's pretty important; if she believes what her parents do, you may offend and alienate her if you're not careful.

Replies from: CronoDAS
comment by CronoDAS · 2012-07-17T04:20:24.215Z · LW(p) · GW(p)

I don't really know, but I suspect that she believes in God with the same sincerity that an 8-year-old believes in Santa Claus... and that she also doesn't know that she has atheist relatives.

comment by FiftyTwo · 2012-07-22T23:46:37.966Z · LW(p) · GW(p)

I've heard some reports that hallucinogenic drugs can be psychologically beneficial and am wondering whether to experiment with them. My personal interest was triggered by articles about LSD being effective in treating depression, which is an ongoing issue for me. However, I'm concerned about interactions with antidepressants (SNRIs) and possible harmful psychological effects.

So two related questions:

  • What are people's anecdotal reports on using hallucinogenics?

  • What would be the best way to go about investigating this in the literature while avoiding obviously pro- or anti-biased positions? There seems to be a shortage of serious studies.

Replies from: gwern
comment by gwern · 2012-07-23T00:25:01.468Z · LW(p) · GW(p)

In general, you're going to have a hard time researching this precisely because hallucinogens are so illegal. For example, when I was researching http://en.wikipedia.org/wiki/LSD_and_schizophrenia the area was just gearing up to really deal with the various confounds and alternate explanations when LSD was outlawed and bam, now the only studies you can do are of illegal users who tend to have huge confounds like abusing all sorts of other drugs or taking extremely nasty psychiatric drugs or just being intrinsically less mentally healthy.

(Likewise, looking into modafinil and schizophrenia was complicated by the fact that no studies directly bore on the problem, which will likely be an issue for you as well once you've exhausted the few studies which are looking for specific benefits.)

Your best bet is probably looking for the modern trials investigating substances like psilocybin for PTSD, fear of death, etc. I know at least a few exist.

comment by iDante · 2012-07-20T05:15:40.579Z · LW(p) · GW(p)

I think the Intuitive Explanation of Bayes' Theorem needs a new introduction.

I'm pretty new to this site and I clearly remember that one of the first articles I attempted to read was this one. At this point, I didn't even know what Bayes' theorem was, so I really didn't care about how Bayes' theorem was often taught wrong, or about how Bayesian reasoning is counterintuitive, or anything in the current intro. I had seen a link on the home page about Bayes and wanted to know what it was, not about how other people teach it wrong. It is one of the first articles most people see on this site.

If the article was short or if it started out more piquant then I wouldn't have hesitated to dive in. I did dip my toe into the first few paragraphs, just to test the water. What I got was a story problem with no context, whose answer was obvious to me (I guessed just under 10% for the first problem; I'm an astronomer so 10% = 7%. Same order of magnitude!). After that I skimmed to the end and left out of boredom. After reading a good chunk of the sequences I realized how powerful Bayes' theorem actually is and went back and read the article.
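(For reference, the first story problem is, if I remember the numbers right, the essay's mammography example, and the calculation itself is three lines:)

```python
# Bayes' theorem on the mammography problem (numbers as I recall them).
prior = 0.01        # P(cancer) among women screened
sensitivity = 0.80  # P(positive | cancer)
false_pos = 0.096   # P(positive | no cancer)

posterior = (sensitivity * prior) / (
    sensitivity * prior + false_pos * (1 - prior))
print(posterior)  # ~0.078, i.e. just under 8%
```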

I'm not a great writer so... yeah. Someone else gotta do that one.

tl;dr: Fix the intro to be an actual introduction to the subject, so that people who have no clue what Bayes' theorem is won't be driven away. Something along the lines of this one would be good.

comment by gwern · 2012-07-16T17:46:01.529Z · LW(p) · GW(p)

I've signed up for GoodReads: http://www.goodreads.com/gwern (Read by top rating)

Have other LWers found this worth doing?

Replies from: listic, iDante
comment by listic · 2012-07-16T20:58:06.054Z · LW(p) · GW(p)

Just made an account there: http://www.goodreads.com/friend/i?i=LTM1OTYwODg2NDI6NDAy

Does it handle foreign language literature?

comment by iDante · 2012-07-16T18:48:45.465Z · LW(p) · GW(p)

I signed up for fantasy recommendations and so far it's been great. I'm much too lazy to add in every book I've ever read though, so mostly it just shows me the most popular books on the site.

http://www.goodreads.com/user/show/10321583-idante

comment by RobertLumley · 2012-07-16T14:02:01.988Z · LW(p) · GW(p)

I posted this in the last open thread, but it received pretty limited response: Is it time for a new welcome thread? The current one says 2012, but it's over 1200 comments largely because of that infanticide discussion.

Replies from: Barry_Cotter, ciphergoth, ciphergoth
comment by Barry_Cotter · 2012-07-16T14:34:40.533Z · LW(p) · GW(p)

Poll time.

If it is time for a new welcome thread upvote this comment. There will be a child comment as a karma sink.

Replies from: ahartell, Normal_Anomaly, Barry_Cotter
comment by ahartell · 2012-07-16T15:32:04.105Z · LW(p) · GW(p)

Shouldn't there be a "if it is not time for a new thread, upvote this comment" option?

Replies from: Normal_Anomaly, ciphergoth
comment by Normal_Anomaly · 2012-07-17T03:23:06.783Z · LW(p) · GW(p)

Good point. Fixed.

comment by Normal_Anomaly · 2012-07-17T03:22:46.429Z · LW(p) · GW(p)

If it is not time for a new welcome thread, upvote this comment and downvote the child.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2012-07-17T03:22:53.507Z · LW(p) · GW(p)

karma sink

comment by Barry_Cotter · 2012-07-16T14:34:56.552Z · LW(p) · GW(p)

Karma sink.

comment by Paul Crowley (ciphergoth) · 2012-07-18T12:44:13.258Z · LW(p) · GW(p)

Great support, no dissent; done. I asked orthonormal if they'd prefer to do it but they handed the torch to me :)

Replies from: RobertLumley
comment by RobertLumley · 2012-07-18T14:20:48.787Z · LW(p) · GW(p)

Thanks. I was a bit uncomfortable making a post that, if it had reached the same karma score as the last one did, would have accounted for over 10% of my current karma.

comment by Paul Crowley (ciphergoth) · 2012-07-16T17:45:48.990Z · LW(p) · GW(p)

Poll time, this time with all the options needed. Please don't comment here; comment here.

Replies from: ciphergoth, ciphergoth, ciphergoth, ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-07-16T17:46:04.337Z · LW(p) · GW(p)

If it is time for a new welcome thread upvote this comment.

comment by Paul Crowley (ciphergoth) · 2012-07-16T17:47:11.008Z · LW(p) · GW(p)

If you'd like to comment on this poll, please comment here.

comment by Paul Crowley (ciphergoth) · 2012-07-16T17:46:18.970Z · LW(p) · GW(p)

If it is not time for a new welcome thread upvote this comment.

comment by Paul Crowley (ciphergoth) · 2012-07-16T17:46:30.025Z · LW(p) · GW(p)

Karma sink.

comment by Dan_Moore · 2012-07-26T19:16:40.291Z · LW(p) · GW(p)

EffectSizeFAQ.com provides a layman's (or newbie statistician's) guide to effect sizes and why they matter. Actually, anyone reading about or doing statistics would do well to be familiar with the concepts outlined there, as there are still current articles that conflate statistical significance with substantive significance.
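A quick illustration of that conflation (my example, not the FAQ's): with a big enough sample, a negligible effect becomes "statistically significant" while its effect size stays trivial.

```python
import math

# Two groups of a million people, mean difference of 0.01 standard deviations.
n, mean_diff, sd = 10**6, 0.01, 1.0
cohens_d = mean_diff / sd                # effect size: 0.01, negligible
z = mean_diff / (sd * math.sqrt(2 / n))  # two-sample z statistic
print(cohens_d, z)  # d = 0.01 but z ~ 7.1, so p << 0.001: "significant"
```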

comment by Kaj_Sotala · 2012-07-22T12:35:08.391Z · LW(p) · GW(p)

I finally updated my website to be a little more 21st century - http://kajsotala.fi . Doesn't have anything that'd be new for someone who's read everything that I've ever written online, but if you haven't and you like my writing in general, you might find something interesting.

Opinions have been divided on the logo graphic, so I'll have to see if I find something that's a little more universally liked. It's not a huge priority, though.

comment by [deleted] · 2012-07-17T11:32:29.499Z · LW(p) · GW(p)

Consider a hypothetical species living in a universe with different physics. They have unlimited living space and resources, no known existential risks to worry about, and see no evidence of other intelligent lifeforms. Their population of 7 billion is growing exponentially at the same rate as ours. The physics of their universe allow for this exponential growth to go on literally forever.

Given the doomsday argument, their mathematicians would compute the same doomsday estimate that ours do, although their actual probability of going extinct is much lower. However, they cannot find fault with the logic of the argument.
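(The computation alluded to, sketched with assumed round numbers: if your birth rank is uniform among everyone who will ever live, then with 95% confidence the total is under 20 times your rank.)

```python
# Standard doomsday-argument bound, with an assumed current birth rank.
birth_rank = 7e9   # roughly the 7-billionth individual of the species
confidence = 0.95
total_upper_bound = birth_rank / (1 - confidence)
print(total_upper_bound)  # 1.4e11 individuals ever, at 95% confidence
```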

What does this thought experiment tell us? The doomsday argument is only valid in some universes?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-07-17T12:38:26.597Z · LW(p) · GW(p)

Your thought experiment is like saying, "suppose I win the lottery, does that disprove probability theory?" You've basically said that their total population over time is infinite, but that you've decided to focus on the very beginning of their history. A randomly selected individual from that possible world should have an immeasurably long history behind them.

Replies from: None
comment by [deleted] · 2012-07-17T20:37:47.034Z · LW(p) · GW(p)

Thanks, this is a good point. I can see that the argument is logically sound but it seems so unscientific - our future will be determined by physics, not a priori logical arguments - so why take it seriously?

comment by Bill_McGrath · 2012-07-30T17:47:16.349Z · LW(p) · GW(p)

Where can I find solid figures on album sales over the past few years? It's for something I'm trying to write; the RIAA website is useless, and Wikipedia seems to reference a varied collection of journalistic articles rather than central sources.

comment by FiftyTwo · 2012-07-20T11:53:37.828Z · LW(p) · GW(p)

Is it worth carrying alcohol hand wash and regularly using it for health reasons? I'm unsure how real the threat actually is for people with normal lifestyles.

Replies from: DaFranker
comment by DaFranker · 2012-07-31T14:51:07.950Z · LW(p) · GW(p)

Small bit of anecdotal evidence: I started using it at my latest work environment as soon as I noticed that the place is basically perpetually contaminated and, well, plain dirty. I was also taught to carry a small bottle around to use whenever I'm likely to end up putting my hands on my face.

It turns out to be useful in both cases for me: with a bit of experimentation and discipline, I found a strong correlation (around 90% prediction accuracy) between the times I don't disinfect my hands before touching my face and getting sick with some germ/virus/bacteria-related illness within the following week.

For me, the "time gained" and general utility of avoiding minor illnesses like a cold or flu far surpasses the cost of the disinfectant, and I've seen no evidence that there are health-detrimental side effects if used "reasonably" (AKA only when strictly needed before eating or some similar criteria of less-than-X-times-a-day).

Mind, though, as stated above: It's only me, so YMMV. Anecdotal evidence and all that.

comment by cousin_it · 2012-07-18T15:00:50.203Z · LW(p) · GW(p)

What does UDT do in asymmetric zero-sum games? Here's a simple such game: player 1 chooses which player is going to pay money to the other, and player 2 chooses the sum of money to be paid, either $100 or $200. The unique Nash equilibrium is player 2 paying $100 to player 1.

Here's a simple formalization of UDT. Let the universe be a program U which returns a utility value, and the agent be a subprogram A within U, which knows the source code of both A and U by quining, and returns an action value. The program A looks for proofs in a formal theory T that the statement A()=a logically implies U()=u, for different values of a and u. Then after spending some time looking for proofs, A returns the value of a that corresponds to the highest value of u found so far.
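Schematically, in code (my rendering of the paragraph above; `proves` stands in for an actual bounded proof search in T and is assumed, not implemented):

```python
# Try to prove "A()=a implies U()=u" for each action a and utility u,
# then return the action with the highest provable utility.
def udt_action(actions, utilities, proves, proof_length_bound):
    best_action, best_utility = None, float("-inf")
    for a in actions:
        for u in sorted(utilities, reverse=True):
            if proves("A()=%s -> U()=%s" % (a, u), proof_length_bound):
                if u > best_utility:
                    best_action, best_utility = a, u
                break  # highest provable u for this action found
    return best_action
```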

If two players play the above game, using identical copies of the above algorithm and treating the game as a universe containing instances of both players, will they find any proofs at all? I suspect that the answer is no, because Nash equilibrium reasoning without perfect symmetry requires you to assume that the other player is rational, which fails by infinite regress, like in Eliezer's "AI reflection problem". But I'd really like someone else to double check this.

ETA: if we make the algorithm look for proofs of statements like "A()=a implies U()≥u" instead, it will minimax in zero-sum games, cooperate in the PD against a logically equivalent copy of itself, and revert to defection if the copy is slightly imperfect. Same in Newcomb's problem, the algorithm one-boxes if Omega is provably right with high enough probability, and two-boxes otherwise. The drawback is that it doesn't lead to Nash equilibrium play against itself, only minimaxing. But it's still nice because it captures "resonant doubt".

comment by anotherblackhat · 2013-07-11T18:55:48.465Z · LW(p) · GW(p)

There's a scam I've heard of;

Mallet, a notorious swindler, picks 10 stocks and generates all 2^10 = 1024 possible combinations of "stock will go up" vs. "stock will go down" predictions. He then emails one prediction sheet to each of 1024 different investors. Exactly one of the investors receives a perfect, 10-out-of-10 prediction sheet and is (Mallet hopes) convinced that Mallet is a stock-picking genius.

Since it's related to the Texas sharpshooter fallacy, I'm tempted to call this the Texas stock-picking scam, but I was wondering if anyone knew a "proper" name for it, and/or any analysis of the scam.
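A minimal sketch of the counting behind the scam, just to make the 1024 figure concrete:

```python
from itertools import product

# Every possible assignment of "up"/"down" to 10 stocks: 2**10 = 1024 sheets.
# Whatever the market actually does matches exactly one of them, so one
# investor is guaranteed to receive a "perfect" prediction sheet.
sheets = list(product(["up", "down"], repeat=10))
print(len(sheets))  # 1024
```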

comment by Nisan · 2012-08-01T06:48:24.564Z · LW(p) · GW(p)

Is there a way to put hyperlinks or images in meetup posts? It seems that when one makes a meetup post, an identical discussion post is automatically generated. You can edit the latter in LessWrong's HTML or WYSIWYG editor, but doing so does not modify the original meetup post. It looks like only a plain-text editor is available for the meetup post itself.

EDIT: It turns out that if you put a url into the text editor for a meetup post, it will turn into a link.

comment by DaFranker · 2012-07-31T18:01:43.551Z · LW(p) · GW(p)

I have a bit of an "Unspoken LessWrong norms" question, which resolves into several questions:

In this comment, I edited the comment itself out of forum-going habit (where it's usually looked-down upon to "doublepost", or post consecutively in the same thread, for various reasons) when I wanted to add my meta-thoughts and analysis of what I discussed. I also do the same thing when I simply want to add more content / thought to a comment, by default.

Is this the preferred method of self-review or content-addition, as might be inferred from the "Edited To Add" acronym I've seen on the wiki jargon page, or would replying to my own comment in this case have been slightly better in some way? The first potential reason that springs to mind is to allow for separate upvoting/downvoting or somesuch, which some might prefer. If I wanted maximal feedback, should I do that from now on? In what situations would replies-to-self (or "doubleposts") be preferable, neutral, or unacceptable?

Replies from: Oscar_Cunningham, TheOtherDave
comment by Oscar_Cunningham · 2012-07-31T19:02:33.492Z · LW(p) · GW(p)

I think that replying to yourself is generally acceptable (although there's no special benefit other than the ones you mentioned). Just don't do it so much that it clogs up a thread.

comment by TheOtherDave · 2012-07-31T18:09:41.746Z · LW(p) · GW(p)

AFAICT, there's no hard consensus here; I've seen both patterns, and I haven't seen either pattern disproportionately criticized. As you say, separating distinct thoughts into multiple posts maximizes the potential clarity of feedback provided through voting. In the case of edits, it also causes the new thought to appear in the Recent Comments queue, which some people read; this makes the new thought more visible to those people.

comment by Vaniver · 2012-07-28T17:52:11.074Z · LW(p) · GW(p)

Are private messages working for other people? I've tried to send three recently that I don't think have been sent. When I send them, I get sent to message/compose and it's blank, and when I check message/sent they aren't there. I'm on a different, dodgier internet connection than normal, and so the problem could very well be on my end.

comment by Multiheaded · 2012-07-24T17:52:02.730Z · LW(p) · GW(p)

OMG RATIONALBROS HELP ME OUT MY NIGGAZ

i need the complete lyrics for skrillex - my name is skrillex, from the album "my name is skrillex". the lyrics author is skrillex i think.

Replies from: Multiheaded
comment by Multiheaded · 2012-07-24T18:05:23.242Z · LW(p) · GW(p)

Nevermind, found them.

http://www.youtube.com/watch?v=LlyJuxjUm1c

Replies from: DaFranker
comment by DaFranker · 2012-07-31T14:36:11.297Z · LW(p) · GW(p)

We are not CrowdGoogle. Use Google itself for your own googling needs.

Replies from: Multiheaded
comment by Multiheaded · 2012-07-31T20:47:17.178Z · LW(p) · GW(p)

I was just feeling frustrated, and wanted to use LW for some instant gratification through herping and derping. I wish I could take this shit to 4chan's /b/, but alas, in these decadent times there's no longer even a /b/ that a self-respecting person could gb2.

I think I'm going to petition the powers-that-be for actually opening a "mental refuse"/"noise sink" space next to LW discussion, open by invitation only to long-time users. MLP pornography, Holocaust jokes, strings of racist invective - that's something for which I'd enjoy a neat, rationalist reservation.

Can you imagine it? 2girls1cup and the Methods of Rationality!

Replies from: wedrifid
comment by wedrifid · 2012-07-31T20:56:51.438Z · LW(p) · GW(p)

MLP pornography

MLP is My Little Pony? How does this manage to make it into the list? Horses mating is neither particularly erotic nor particularly disturbing. And if it's MLP/human pornography, the rather artificial and colorful look of the ponies would give it an aura of "sex toy", so it's not anywhere near 2girls1cup calibre.

comment by Multiheaded · 2012-07-22T19:31:15.727Z · LW(p) · GW(p)

Something I just fucking have to say - not to bring up gender politics or certain communities' habits or any of our mind-killing issues like that - just to express the depths of my frustration and disgust!

If I ever again see anyone online not just using that stupid fucking 1-10 "scale of attractiveness", but referring to a (concrete or hypothetical) person as "a [number]" - e.g. "It's so lame to settle for 3s and 4s when you could get more fit and hook up with 8s" - I'm going to kick them in the genitals over TCP/IP.

When I read such shit, I don't even process it in terms of concepts like "the sexual marketplace" or "objectification", I just feel something like "Graraarragh wharrrrrbl people are not numbers!" For reference, and to further risk mind-killing you, I don't get that fiercely emotional reaction when e.g. seeing men getting off-handedly classified into "alphas", "betas" and "omegas", so it's not just my ideological hostility to the whole, ah, school of thought.

Replies from: shminux
comment by shminux · 2012-07-22T19:49:33.884Z · LW(p) · GW(p)

Can you trace this strong emotional reaction of yours to its origins?

Replies from: Multiheaded
comment by Multiheaded · 2012-07-22T20:14:49.684Z · LW(p) · GW(p)

I... I think I could. Before, I commented a couple of times about how I'm always insecure about my own perceived moral failings and unacceptable/sociopathic urges, and somewhat hypocritically obsessed with interpersonal ethics. Prior reflection suggests that this even extends to my very odd political outlook.

So, by that reasoning, it might well be some part of myself screaming "Rage at the slightest suggestion of viewing people only instrumentally! Bend over backwards with signaling about empathy and human dignity! Because otherwise you'll be pure evil and everyone will hate you!" Seeing in others what you fear in yourself, essentially. I know, this is quite unhinged, but it does explain many of my other moral emotions.

Might be something completely different, though, and I might be under the sway of pop-culture psychology.

comment by FiftyTwo · 2012-07-20T12:30:52.439Z · LW(p) · GW(p)

What do people think of the Myers-Briggs personality system? I've seen it referenced a few times here and have occasionally had interesting insights from it, but I'm unsure about its empirical basis. I'm particularly worried by the self-reporting of people's traits (e.g. if I'm asked whether I'm extroverted, what baseline am I comparing myself to?) and the possibility of people reporting what they want to believe about themselves rather than an objective assessment (e.g. I would want to say rationality is more important than emotion, even if in reality most of my decisions are emotional and I just rationalise them).

Replies from: VincentYu, army1987, Richard_Kennaway
comment by VincentYu · 2012-07-20T17:23:48.100Z · LW(p) · GW(p)

Within the academic literature for personality psychology, the Myers-Briggs Type Indicator (MBTI) is obsolete; the Big Five has been the dominant framework for studies on human personality since about 1990. The most-cited review of the MBTI was very critical of it (McCrae and Costa, 1989) (side-note: there is a conflict of interest in that McCrae and Costa receive royalties from their NEO-PI inventory of the Big Five). Here is the abstract:

The Myers-Briggs Type Indicator (MBTI; Myers & McCaulley, 1985) was evaluated from the perspectives of Jung's theory of psychological types and the five-factor model of personality as measured by self-reports and peer ratings on the NEO Personality Inventory (NEO-PI; Costa & McCrae, 1985b). Data were provided by 267 men and 201 women ages 19 to 93. Consistent with earlier research and evaluations, there was no support for the view that the MBTI measures truly dichotomous preferences or qualitatively distinct types; instead, the instrument measures four relatively independent dimensions. The interpretation of the Judging-Perceiving index was also called into question. The data suggest that Jung's theory is either incorrect or inadequately operationalized by the MBTI and cannot provide a sound basis for interpreting it. However, correlational analyses showed that the four MBTI indices did measure aspects of four of the five major dimensions of normal personality. The five-factor model provides an alternative basis for interpreting MBTI findings within a broader, more commonly shared conceptual framework.

This was interesting (italics mine):

Most conspicuous is the lack of a Neuroticism factor in the MBTI. Its absence is understandable on two counts: first, because emotional instability versus adjustment did not enter into Jung's definitions of the types, and second, because the authors of the test were apparently philosophically committed to a position which saw each type as equally valuable and positive (Myers with Myers, 1980)—a view that is difficult to hold with regard to Neuroticism. (To a lesser extent, the same criticism applies to the TF and JP indices. Descriptions downplay the antagonistic side of Thinking types and the lazy and disorganized side of Perceiving types.) Although it makes interpretation of results palatable to most respondents, this approach also omits information that may be crucial to employers, co-workers, counselors, and the individuals themselves. For many, if not most, applications, some measure of Neuroticism would be useful.

comment by A1987dM (army1987) · 2012-11-02T11:43:41.718Z · LW(p) · GW(p)

e.g. if I'm asked if I'm extroverted what baseline am I comparing myself to?

I took an online Big Five personality test, and it explicitly said I was to compare myself to typical people of the same gender and age as me (which, since there are huge selection effects in the kinds of male twentysomethings I hang around with, meant that in lots of questions I wanted to answer “how the hell should I know?”).

comment by Richard_Kennaway · 2012-07-20T13:03:48.125Z · LW(p) · GW(p)

What do people think of the Myers-Briggs personality system?

Astrology for geeks.

comment by aelephant · 2012-07-19T11:20:58.570Z · LW(p) · GW(p)

I recently went through my wardrobe and inverted all of my stacks of clothes. I realized I have a tendency to wear whatever is on top and neglect to wear things for months at a time that are buried below. Then it occurred to me, maybe it doesn't matter.

Consider: a newly purchased shirt can be worn X number of times before it becomes unwearable. Does it really make a difference if those times are clustered together tightly or spread out?

My original thinking was that if I have several pairs of shoes & wear them in rotation, they will last longer. That might be true because the uses will be spread apart, but if I really kept to the rotation system, wouldn't all of my shoes be ready for replacement at roughly the same time, which is another problem in its own right?

There is also the social aspect to be considered. It seems that it is socially unacceptable in most circles to wear 1 shirt every day of the week and more socially acceptable to cycle through 7 or so. Maybe even more than 7 is "optimally" socially acceptable.

What strategy is better, clustering uses together by wearing your favorite items frequently or cycling through your wardrobe? How many different articles of clothing should one have to cycle through in terms of social acceptability? Do we have any practicing Minimalists here and how do they deal with the social aspect of this?

Replies from: mesilliac
comment by mesilliac · 2012-07-20T05:27:51.532Z · LW(p) · GW(p)

You could probably do an analysis looking at the expected utility in terms of social benefits (people seeing you as well-dressed or fashionable), or performance of the clothing (sports clothing, jeans, work boots), depending on what you wear and do.

In terms of clothing minimalism, it probably depends on your friends and work environment. Many people seem to have multiple similar-looking work outfits, so that they don't have to worry too much about changing their appearance regularly, which leaves others room to imagine any number of identical items of clothing in the wardrobe at home. Without evidence to the contrary, people tend to expect you to have a similar amount of clothing as they themselves do.

A good tactic is to keep a number of discernibly unique and stylish clothing items that you do not wear often at all. These can be rotated for social occasions, so that you aren't seen to be wearing the same thing twice in recent succession. This seems to be all most people look for, although it depends on your associates.

Personally I wear whatever seems comfortable and appropriate, making sure to wash most clothing items after one day of use, and thinking about the social implications of being seen wearing whatever I am wearing. The social implication of wearing the same thing for multiple days is that you are dirty and smelly. In my experience, as long as you keep clean and presentable, and are seen to have multiple different clothing items, most people don't worry about it, but this may be highly dependent on your local culture.

Regarding things wearing out at the same time - well, even if you use them as evenly as possible there should be more than enough randomness in daily usage to mean things wear out at different times. Even if you buy two identical shirts, they won't get the exact same treatment, leading to an increasing difference in the number of wears before each becomes unusable. (see "Random Walk" problems for an interesting mathematical treatment of a similar concept)
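A quick Monte Carlo sketch of that point, with made-up durability numbers (purely illustrative): even two identical shirts in a perfectly even rotation drift apart in failure time, because the damage per wear is random:

```python
import random

# A shirt fails when its durability budget is exhausted; each wear does a
# random amount of damage. Even with perfectly even rotation, the two
# failure times drift apart like a random walk.
def wears_until_failure(durability=200.0):
    wears = 0
    while durability > 0:
        durability -= random.uniform(0.5, 1.5)  # random damage per wear
        wears += 1
    return wears

gaps = [abs(wears_until_failure() - wears_until_failure())
        for _ in range(10_000)]
print(sum(gaps) / len(gaps))  # average gap in failure times
```

On these numbers the average gap comes out to a handful of wears, and it grows roughly with the square root of the shirt's lifetime, which is the random-walk behaviour mentioned above.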

Regarding rotation making things last longer - I think it's fine to think of it as a fixed (but unknowable) number of uses before something wears out. So no matter how you rotate your clothes, each item should have the same base value as it would have otherwise. Thus individual day-to-day changes in clothing (which items are more relevant to the current day's activities) are probably much more important.

One last unmentioned point - clothes sitting at the bottom of a drawer can be easily forgotten. It can be good to upend your drawers every once in a while just to make sure you haven't forgotten any hidden gems. Or, err, moth-eaten horror stories (although it's been a while since this happened to me).

comment by Mitchell_Porter · 2012-07-18T06:54:04.946Z · LW(p) · GW(p)

A question for TDT gurus. Do acausal trade, and other acausal coordinations, require a complete instance of each cooperating agent at each of the acausally connected sites? It seems that there at least has to be a model of each agent at each site, if not a "complete instance".

For example, the TDT solution to Newcomb's problem, as I understand it, amounts to you coordinating with the copy of you or the model of you which exists in Omega. There's only one actively coordinating agent, you (Omega is only reactive to whatever you decide), and there's a copy or a model of you at both ends of the arrangement.

Similarly, when people imagine AIs coordinating acausally - let's say, two AIs in two different Tegmark-level-IV worlds - we can say that at the very least, each AI that is a party to the deal must have a concept of the other one's existence, or else the deal could never get started. (If we imagine equilibria reached by whole populations of AIs scattered throughout a multiverse, then the local model may be of a subpopulation of AIs sharing a characteristic, rather than of individual AIs.) So it's not just a matter of "I'm here and you're there". There has to be a model of you here, and there has to be a model of me there. But how detailed do the models have to be?

Replies from: None
comment by [deleted] · 2012-07-19T11:29:23.628Z · LW(p) · GW(p)

To focus on Newcomb's problem: TDT still one-boxes even if Omega is a little bit bad at predicting whether or not you will one-box. How bad Omega can be while still resulting in TDT one-boxing depends on the precise rewards for one- and two-boxing.
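A back-of-the-envelope sketch of that dependence, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one) and an Omega that is right with probability p regardless of which decision you make:

```python
def one_boxing_pays(p, big=1_000_000, small=1_000):
    """p = probability that Omega's prediction is correct."""
    ev_one_box = p * big                # opaque box full iff predicted one-boxing
    ev_two_box = (1 - p) * big + small  # opaque box full only if Omega erred
    return ev_one_box > ev_two_box

# Break-even accuracy: p = 1/2 + small / (2 * big) = 0.5005
print(one_boxing_pays(0.51))  # True
print(one_boxing_pays(0.50))  # False
```

So with these payoffs Omega only needs to be slightly better than chance for one-boxing to have the higher expected value.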

Shminux asked a similar question a while ago and I forgot to tell him that it's in the TDT paper. Hey, shminux: it's in the TDT paper.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-07-19T12:05:17.832Z · LW(p) · GW(p)

You can indeed figure out what TDT recommends just by knowing that Omega predicts with a certain accuracy; you don't need to know how it makes its prediction.

But what I'm looking for is a realistic idea of the circumstances under which acausal deals can actually happen. People speculate about post-singularity intelligences throughout the multiverse establishing acausal equilibria with each other, and so on. I have in the past insisted that this is nonsense because, in a combinatorially exhaustive multiverse, every possible response to your actions will be taken somewhere. Any deal you think you have made is illusory, because even if there is a being somewhere else who acts like the deal-partner you have in mind, there should also be near-copies of that being which act in all other possible ways.

Also, there is nothing that requires an intelligent agent to engage in acausal dealmaking, even if it's possible. If it's a selfish agent, caring only about what happens to this instance of itself, then it literally has nothing to gain from acausally motivated behavior. It might be imagined that agents with impersonal utility functions, such as happiness maximizers or paperclip maximizers, have a reason to play the acausal game, because they will thereby have an effect on the amount of happiness or amount of paperclips in places beyond their immediate causal reach. But if acausal dealmaking is just an illusion, then even that won't happen. It seems that a minimum necessary criterion, for acausal dealmaking to make sense, is the belief that the deal won't be rendered meaningless by the heterogeneous behavior of our potential negotiating partners.

Returning to single-world acausal deals, how can Newcomb's scenario actually come to pass? It requires an Omega that is a good enough predictor, and the agent who reacts to Omega's offer has to have reason to believe that Omega is a good enough predictor. Presumably we can make this happen if Omega has a copy of the "source code" of the other agent, and if this can be proved to the other agent. One can then ask, how simple can these agents be, for these conditions to hold? Could you have simple Bayesian agents, in a simple software environment, which meet these conditions? And the other question is, how can you implement the weakening of this condition (Omega only a moderately good predictor, and known to be such), and does that affect the simplicity threshold?

comment by Incorrect · 2012-07-17T23:43:11.118Z · LW(p) · GW(p)

"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"

Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker.

- Timeless Decision Theory: Problems I Can't Solve - Eliezer_Yudkowsky

I don't understand why "Yes" is the right answer. It seems to me that an agent that self-modified to answer "Yes" to this sort of question in the future but said "No" this time would generate more utility than an agent that already implemented the policy of saying yes.

If I was going to insert an agent into the universe at the moment the question was posed after the coin flip had occurred, I would place one that answered "No" this time, but answered "Yes" in the future. (Assuming I have no information other than the information provided in the problem description.)

Replies from: Vladimir_Nesov, None
comment by Vladimir_Nesov · 2012-07-17T23:57:07.638Z · LW(p) · GW(p)

When you get to the future, would you regret having to answer "Yes", given that if you weren't so rational you could just answer "No"? If so, you should answer "No" every time. What is the difference between this time and the future? From your present position, you see all future possibilities, and so you make tradeoffs between them, acting on expected utility. But at present, you could similarly take notice of the alternative possibilities that are "sideways" from where you are, things that could happen to you but didn't, and similarly act on expected utility. There doesn't seem to be any fundamental reason for discarding (ceasing to care about) possibilities on the basis of not happening to be located within them; it's just practical to do so, since you normally can't do anything about them.

(See Counterfactual Mugging and UDT for more discussion.)
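For concreteness, here is the ex-ante comparison (evaluated before the coin is flipped, with the stated stakes and a fair coin) that this expected-utility view appeals to:

```python
# Heads: you are asked to pay $1000. Tails: you receive $1,000,000 iff you
# were predicted to pay on heads. Compare the two policies ex ante.
def expected_value(pays_when_asked: bool) -> float:
    ev_heads = -1_000 if pays_when_asked else 0
    ev_tails = 1_000_000 if pays_when_asked else 0
    return 0.5 * ev_heads + 0.5 * ev_tails

print(expected_value(True))   # 499500.0
print(expected_value(False))  # 0.0
```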

comment by [deleted] · 2012-07-17T23:58:46.830Z · LW(p) · GW(p)

It seems to me that an agent that self-modified to answer "Yes" to this sort of question in the future but said "No" this time

This strategy is not reflectively consistent. From the new TDT PDF:

A decision algorithm is reflectively inconsistent whenever an agent using that algorithm wishes she possessed a different decision algorithm.

If an agent implemented your strategy, they would change decision strategies every time they came across a predictor that flipped heads.

comment by Zaine · 2012-07-16T21:48:05.521Z · LW(p) · GW(p)

A new immersive language-learning game has a Kickstarter right now; it's essentially a first-person, story-based game with interactive subtitles.

comment by Oscar_Cunningham · 2012-07-19T11:18:11.973Z · LW(p) · GW(p)

People should stop commenting on posts that have been downvoted; otherwise the attention provides positive feedback for posting such things.

Replies from: drethelin
comment by drethelin · 2012-07-19T15:30:40.735Z · LW(p) · GW(p)

Plenty of things that are downvoted are still worth having conversations about. I think a reasonable job is done of simply downvoting and not commenting on the really trolly ones.