Open Thread, February 1-14, 2012

post by OpenThreadGuy · 2012-02-01T04:57:48.816Z · LW · GW · Legacy · 128 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


comment by gwern · 2012-02-11T15:59:47.102Z · LW(p) · GW(p)

More of my research for Luke, this time looking into the polyamory literature.

I read Opening Up and The Ethical Slut; the former was useful, the latter was not. My general impression of the research is that:

  1. it's all hard to get as the journals are marginal and relevant academics have bad habits of publishing stuff as book chapters or prefaces or books period
  2. the studied polyamorists are distinctly white, educated, urban or coastal, professional, older (how odd), and middle/upper-class.

    This means there is zero generalizability to whether polyamory would work in other groups, and massive selection biases (few other groups are so well-equipped to leave a community that isn't working for them); and even if a survey finds that polyamorists are 'average' in various dysfunctions or pathologies, one needs to check that the average is the right average (i.e. non-polyamorous educated professional whites).

    These two points do not seem to be appreciated at all by many advocates (e.g. the ones saying STDs are not a problem).

  3. the one academic doing good work in the area is Sheffer, who is running a longitudinal survey which may or may not have enough statistical power to rule out particularly dramatic variances in outcomes. (Sheffer mentions the selection bias problem but seems to have the attitude that it's not a problem for her work.)

My notes/bibliography/quotes: http://dl.dropbox.com/u/5317066/2012-gwern-polyamory.txt

Replies from: Grognor, ciphergoth, Larks
comment by Grognor · 2012-02-11T16:03:18.381Z · LW(p) · GW(p)

Why did Luke ask you to research polyamory?

Replies from: gwern
comment by gwern · 2012-02-11T16:07:43.674Z · LW(p) · GW(p)

You know, I never asked. Maybe someone was thinking about using it as an example of broadened possibilities in a transhumanist utopia along with the usual cat-girl-servants-in-a-volcano-lair examples like augmented senses and wanted to know if there were crippling defeaters to the suggestion?

comment by Paul Crowley (ciphergoth) · 2012-08-02T00:51:45.336Z · LW(p) · GW(p)

Only just noticed this. If you or Luke would find it useful to talk to Dr Meg Barker about this I can put you in touch.

Replies from: gwern
comment by gwern · 2013-09-09T22:38:45.795Z · LW(p) · GW(p)

No, that's not necessary. I think we went as far as was profitable: I wouldn't expect Barker to tell me about any major study which I missed.

comment by Larks · 2018-03-06T04:01:16.507Z · LW(p) · GW(p)

the one academic doing good work in the area is Sheffer, who is running a longitudinal survey which may or may not have enough statistical power to rule out particularly dramatic variances in outcomes. (Sheffer mentions the selection bias problem but seems to have the attitude that it's not a problem for her work.)

Was there any follow-up here?

Replies from: gwern
comment by gwern · 2018-03-19T01:02:54.376Z · LW(p) · GW(p)

No idea. Some Google Scholar checks turn up nothing.

comment by Grognor · 2012-02-01T06:18:45.837Z · LW(p) · GW(p)

After pondering for a while on why I'm so fixated on making meaningless numbers (such as LW karma or Khan Academy points) go up as a result of my actions, I came up with a hypothesis: the brain uses such numbers as a proxy for social status. A testable prediction of this idea is that status-seeking people and low-status people try harder (possibly also for longer periods of time) to achieve video-game-style "points".

Just a thought, really. If this experiment has actually been done, it'd be cool to read about it if anyone has a link. I don't have the resources to do it myself. Anyone reading this comment who can do it is certainly free to, but I doubt that's the case.

Came up with the idea while responding to a question on Formspring.

Replies from: siodine, Solvent
comment by Solvent · 2012-02-01T07:02:39.901Z · LW(p) · GW(p)

Good idea. Maybe net worth works the same way, except less meaninglessly?

Other examples:

  • Number of Facebook friends
  • High score in Temple Run

comment by Grognor · 2012-02-02T06:11:20.687Z · LW(p) · GW(p)

Has Michael Vassar published any essays or articles or general things to read anywhere (other than this)? I get the impression that he's this supreme phenomenon from the people who describe his conversational ability. I've watched his Singularity Summit talk, and it was incredible, so I know it's not just in person that he's formidable.

I catch glimpses of his mighty cleverness in comments with phrases like "I always advocate increased comfort with lying." But there's no link to the Michael Vassar essay on All The Very Convincing Reasons You Should Be Comfortable With Lying. I'm assuming that's because it doesn't exist. That's not fair. I want to read his non-existent work.

comment by daenerys · 2012-02-05T01:17:07.918Z · LW(p) · GW(p)

I thought LWers might be interested in the work of Vi Hart (I did a quick search to make sure she hadn't been mentioned before). I think she is a great resource for recruiting people towards rationality. In terms of finding Joy in the Merely Real, she explains natural phenomena rationally, but in a way that literally can bring tears to my eyes.

Here is an example: Doodling in Math: Spirals, Fibonacci, and Being a Plant- Part 3 of 3

This is the final video of a three-parter, but I think most LWers can infer the background knowledge. If you enjoy it, you can go back and watch the first two parts.

A quote that sums up what her vlogs are often all about: "This is why science and mathematics are so much fun; You discover things that seem impossible to be true, and then get to figure out why it's impossible for them NOT to be."

Replies from: Grognor, sixes_and_sevens
comment by Grognor · 2012-02-05T02:13:11.301Z · LW(p) · GW(p)

Those videos are great.

That thing you quoted is worthy of a rationality quote.

Replies from: daenerys
comment by daenerys · 2012-02-05T02:53:12.658Z · LW(p) · GW(p)

Thank you! Consider them quoted.

comment by sixes_and_sevens · 2012-02-15T17:30:56.146Z · LW(p) · GW(p)

She was mentioned in a quotes thread about a year ago (where I first discovered her), and has recently joined the team at Khan Academy. I likened this for my non-nerd friends to Tom Morello from Rage Against The Machine teaming up with Billy Joel to fight dragons.

comment by SilasBarta · 2012-02-02T21:50:02.753Z · LW(p) · GW(p)

FYI: I quit my day job last Monday and flew to San Francisco to start the program described at devbootcamp.com, which starts in full next Monday. Background.

Replies from: arundelo
comment by arundelo · 2012-02-02T22:54:41.808Z · LW(p) · GW(p)

Good luck!

comment by mstevens · 2012-02-02T15:02:33.300Z · LW(p) · GW(p)

I've pondered setting up some kind of "Mindkiller discussion mailing list" and trying to recruit people.

The idea would be to try to practice discussing these topics in a private forum without getting mindkilled. For example, we could try to have a sensible conversation about politics.

The main thing stopping me is that I think to work it'd need an excellent moderator who'd probably burn out quickly.

Replies from: praxis, Emile
comment by praxis · 2012-02-05T01:25:47.145Z · LW(p) · GW(p)

I'm mildly frightened by the prospect, because I'm mildly frightened by the possibility that my political beliefs so far might be built entirely on my ability to mindkill other people with clever argument. So, yes, I think this is a good if not vital idea.

comment by Emile · 2012-02-02T15:37:14.051Z · LW(p) · GW(p)

Konkvistador has been talking about doing something like that ...

Replies from: mstevens
comment by mstevens · 2012-02-02T15:52:09.985Z · LW(p) · GW(p)

I mentioned it to him on irc, he seemed sympathetic, but didn't mention plans of his own.

It'd be interesting whoever does it.

Replies from: None
comment by [deleted] · 2012-02-05T13:34:18.627Z · LW(p) · GW(p)

Indeed I am sympathetic, but people have presented pretty good counterarguments against such a mailing list being formed. There was a wide discussion of this and other ideas in the rational romance thread quite some time ago. It seems many people have recently and independently come up with many of the same proposals.

Replies from: Multiheaded
comment by Multiheaded · 2012-02-10T21:22:11.667Z · LW(p) · GW(p)

If you people do make one and I'm left out, that'd be pretty ugly of you! Please, please, give me membership in the event of you ever getting up to it. I generally disengage from social situations I can't succeed at, but here I'd beg and grovel to be let in.

:D

comment by [deleted] · 2012-02-05T14:30:45.495Z · LW(p) · GW(p)

Too bad this isn't part of any sequence, or else I'd put it up as a rerun:

Can't say no to spending

I'm pretty sure most new posters are not familiar with the data and arguments presented here unless they have started reading LW's sister site Overcoming Bias (which, btw, I think more LW users should). In any case an updated discussion of this 4 years later seems appropriate.

Edit: Made a rerun post of this, please discuss it there.

Replies from: David_Gerard
comment by David_Gerard · 2012-02-11T08:43:30.337Z · LW(p) · GW(p)

11 points so far says you should make this a rerun post in Discussion.

Replies from: None
comment by [deleted] · 2012-02-11T08:54:08.996Z · LW(p) · GW(p)

Well, I don't think it's technically part of any sequence, but people here think it's OK to rerun this, and since there was some interest I guess I may as well do it.

comment by Matt_Simpson · 2012-02-01T21:28:25.187Z · LW(p) · GW(p)

How many people knew that evidential decision theory recommends cooperating in a one-shot prisoner's dilemma where the choices of the two agents playing are highly (positively) correlated?

I apparently just independently invented evidential decision theory while bored in my micro class by thinking "why wouldn't you condition your uncertainty about others' choices on what you choose? Cooperation between rational players in PDs can clearly happen." This sounded suspiciously like what evidential decision theory should be, and lo and behold, after class I found out that it is.
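For concreteness, that reasoning can be sketched numerically (my own illustration; the payoffs and the correlation figure are made up, not from the original comment):

```python
# Evidential decision theory in a one-shot prisoner's dilemma.
# Payoffs to "me", with the standard PD ordering T > R > P > S.
PAYOFF = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: mutual defection
}

def edt_value(my_action, p_same):
    """Expected payoff when the other player's action is conditioned
    on mine: p_same = P(other picks the same action I do)."""
    other_same = my_action
    other_diff = "D" if my_action == "C" else "C"
    return (p_same * PAYOFF[(my_action, other_same)]
            + (1 - p_same) * PAYOFF[(my_action, other_diff)])

# With 90% correlation, cooperating has the higher EDT value:
assert edt_value("C", 0.9) > edt_value("D", 0.9)  # 2.7 > 1.4
# With an uncorrelated (independent) opponent, defection dominates:
assert edt_value("C", 0.5) < edt_value("D", 0.5)  # 1.5 > 3.0 is false
```

The crossover depends on how strongly correlated the players are, which is exactly the condition the comment describes.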

comment by kpreid · 2012-02-02T13:05:31.370Z · LW(p) · GW(p)

There is a new Stack Exchange Q&A site in public beta. It seems quite relevant to our interests.

Cognitive Sciences beta - Stack Exchange

comment by Incorrect · 2012-02-03T20:36:19.002Z · LW(p) · GW(p)

Social dark arts and plotting are among the main themes of HPMoR, and I think they should have more of a place on LessWrong. "Bad signalling" and negative-from-baseline dark-arts failures are often pointed out, but not lost opportunities for manipulation.

What are people's thoughts on this?

Replies from: None
comment by [deleted] · 2012-02-05T14:23:40.936Z · LW(p) · GW(p)

These two discussions might be of some interest to you if you haven't read them already.

Replies from: Anubhav
comment by Anubhav · 2012-02-07T12:10:03.065Z · LW(p) · GW(p)

Also, this and this.

But 3 posts and a thread are hardly satisfying, dammit, the Dark Arts are more important than this! We need a community of Dark Wizards!

.... unless the PUA guys are a community of Dark Wizards. Do they use their Arts for non-dating-related matters?

comment by [deleted] · 2012-02-02T13:18:32.793Z · LW(p) · GW(p)

Feminism and the Disposable Male

Overall this particular video isn't that well made, but I think the basic argument is more or less correct. 7:00 to 8:00 is especially relevant to ethical thinking.

Replies from: mstevens, gwern, Multiheaded, TimS
comment by mstevens · 2012-02-02T14:22:01.472Z · LW(p) · GW(p)

I agreed with the basic idea, although I did have a slight [citation needed] feel. She jumped around a bit without justifying things as much as I'd like. Although perhaps that's ok for what's basically a youtube rant.

comment by gwern · 2012-02-02T17:43:10.881Z · LW(p) · GW(p)

Is there any reason people should watch this rather than read Roy Baumeister's (excellent, IMO) 2010 book Is There Anything Good About Men? (which is available online in the usual sub rosa places)?

Replies from: None
comment by [deleted] · 2012-02-02T17:45:36.191Z · LW(p) · GW(p)

If I thought the video spectacular I would have made a separate post in the discussion section. I clearly think it's not. So why did I post this? Because I don't recall this specific topic being discussed on LessWrong, so when I saw this video, I wondered how posters would respond to it.

If you have read the book might I suggest writing a review for this site?

comment by Multiheaded · 2012-02-10T21:26:02.884Z · LW(p) · GW(p)

Having been born and raised in Russia, this seems so alien to me. I'd say that here we have a sort of make-do gender equality in many respects, partly a heritage of the USSR.

comment by TimS · 2012-02-02T15:26:45.224Z · LW(p) · GW(p)

Oops. On reflection, I misinterpreted her point. I really can't endorse any of the following, except:

Also, I don't think female-autonomy is consistent with the political female-first distribution of benefits she identifies.

I do think her description of the actual success of female-first-ism is not very accurate.

Wrong statements preserved for clarity of thread.


Synopsis: Women are more valuable in society than men because women get lifeboat seats and men don't. This different valuation is justified because women can have children and men cannot.

First, this is equivalent to saying that women's first social purpose is child-rearing.

Second, the Youtube video acknowledges that a society geared towards women-as-reproduction-machines requires a lot of restrictions on female autonomy. Would you trade a substantial portion of your autonomy for increased priority of your life being protected in high-risk situations? I wouldn't, for the same reason that I think an AI implementing the zeroth law of robotics is not Friendly.

Third, one might conclude that restricting female autonomy was necessary for social continuation purposes, but why would individual women want the world to be that way? Society might respond with genuine regret that this is how things must be, but in practice, I've never met anyone who thought that (1) women's autonomy should be restricted, and (2) this was something to regret. In other words, terminal values are not justified by (and don't need justification from) instrumental-value arguments.


I also think that the Youtuber's sex-based child-rearing advice is terrible. As she says, we are teaching men and women to be certain ways. Why should we need to teach what is inherently true?

Also, I don't think female-autonomy is consistent with the political female-first distribution of benefits she identifies.

Replies from: None
comment by [deleted] · 2012-02-02T17:43:55.119Z · LW(p) · GW(p)

I also think that the Youtuber's sex-based child-rearing advice is terrible. As she says, we are teaching men and women to be certain ways. Why should we need to teach what is inherently true?

Uh, she was describing the child-rearing practices in an unsympathetic way quite deliberately. It wasn't advice, it was descriptive.

I think you missed the point.

Replies from: TimS
comment by TimS · 2012-02-02T17:47:07.049Z · LW(p) · GW(p)

Edit: as discussed below, this is an incorrect interpretation of her comments


Wasn't she saying that the child-rearing practices were a net good?

She talked about preparing men to be the solitary guardian with the rifle and women to take the lifeboat seat at the cost of her beloved's life. I thought she was saying the sex-based child-rearing techniques (like being more attentive to female than male crying) advanced that goal.

From my point of view, no child-rearing advice should suggest treating babies less than a year old differently based on the sex of the child, UNLESS the advice is about diapering.

Replies from: None
comment by [deleted] · 2012-02-02T17:48:22.131Z · LW(p) · GW(p)

Wasn't she saying that the child-rearing practices were a net good?

No. She said it was what we used to need. Her entire video is about how little we value male life and we indoctrinate males to sacrifice themselves for others.

Replies from: TimS
comment by TimS · 2012-02-02T17:52:06.418Z · LW(p) · GW(p)

Edit: Yeah, this is all wrong. See my discussion below.


She NEVER said we should stop that kind of indoctrination. She barely acknowledged it was indoctrination.

Replies from: None
comment by [deleted] · 2012-02-02T17:54:12.895Z · LW(p) · GW(p)

I think you need to re-watch the video. It is not explicitly stated, yet it is very hard to miss.

Replies from: TimS
comment by TimS · 2012-02-02T18:19:20.074Z · LW(p) · GW(p)

I looked again.

9:00 to 11:50 - She's saying the child-rearing techniques she describes lead to the "disposable man" attitudes in men and women.

11:50 to 13:10 - Attack on "dismantlers of gender roles" Set-aside programs, women-first policy, etc. reinforce "disposable man."

13:10 to - 14:00 And women-firsters get what they ask for. Feminist ONLY exploits the disposable man dynamic. Feminism = enforced chivalry.

14:00 - 15:00 Society succeeded because women were put first. And we don't need that dynamic any more. Call to action What's the worst that would happen if women no more valuable than men, and men no more valuable than women. If we keep following feminism, society will end by unbalancing.

15:00 - end We should celebrate manhood, and feminists don't want to. Instead, men come in "dead last, every time"


You are correct, in that I misread her call to action, mostly because I was mindkilled by her definition of feminist. I'm not saying that no one acts how she describes from 11:50 - 14:00, but it's just not an inherent property of feminism to act and believe that way.

For example, I don't want to ignore male victims of domestic violence, and I doubt most other feminists want to either. I like her call to action, but I think it is a feminist call, and I think her factual assertions from 14:00 to the end (especially "men come in dead last, every time") are almost entirely false.

comment by Prismattic · 2012-02-01T23:21:57.114Z · LW(p) · GW(p)

I've lately been experimenting with taking different amounts of vitamin D. While I have found a definite improvement in mood and energy during the day when taking vitamin D first thing in the morning, I haven't found much impact on my excessive night-owlishness, such that I still don't get enough sleep and mood/energy are not yet optimal. It occurred to me that I might be subverting the effect by spending too much time at the computer in the evenings, since the monitor emits a lot of blue light.

And lo and behold, I've discovered that you can download a free program that regulates the color of the light your monitor emits based on your latitude and the time of day. This seemed cool enough to merit sharing here.

Replies from: gwern, daenerys
comment by gwern · 2012-02-02T18:06:10.625Z · LW(p) · GW(p)

Yes, I'm a fan of an f.lux equivalent for Linux, Redshift. (Have you considered melatonin?) Incidentally, as far as vitamin D goes, I think it may be harmful for sleep when taken in the evening.

Replies from: army1987
comment by A1987dM (army1987) · 2012-02-02T19:10:53.207Z · LW(p) · GW(p)

Yes, I'm a fan of an f.lux equivalent for Linux, Redshift.

Awesome. Dunno how much this is due to placebo, but using it immediately made me feel more sleepy. (Of course, at 8 p.m. it's too early for that; maybe I'll lie to it about my longitude so it lags a few hours.)

comment by daenerys · 2012-02-04T01:00:14.979Z · LW(p) · GW(p)

Thanks for the download link! I also have "night owl" issues I am currently experimenting to fix. I just installed the f.lux program and will report back on its usefulness by the end of the month. (Immediate reaction- It looked REALLY red for about 5 minutes. Now it just looks a little red.)

Another new hack I'm trying is to take a sleep aid when I think I should go to bed soon.

My impetus for doing this was a mix of seeing people post about melatonin here, and also realizing that when I was sick, I LIKED being able to take PM meds which would knock me out and force me to go to sleep when I thought I should. (but which I of course do NOT want to take when I am not sick!)

Is there a reason LWers recommend melatonin rather than other non-addictive sleep aids?

Also, I would like to thank the LessWrong community in general for giving me the idea to even try these types of self-improvement mods.

Replies from: gwern, Prismattic
comment by gwern · 2012-06-08T21:06:40.546Z · LW(p) · GW(p)

I just installed the f.lux program and will report back on its usefulness by the end of the month.

Update?

Replies from: daenerys
comment by daenerys · 2012-06-08T21:18:03.074Z · LW(p) · GW(p)

Here is a link to the update.

My general view was that it (f.lux) didn't make a noticeable change (I wasn't recording sleep data at the time though. I am now.), but that the cost was so low (about 2 minutes of time) that it was still worth it for people to try.

The sleep aid I had started out trying gave me headaches. I use melatonin now, and it is much better. Still don't manage to get to bed before 1am though.

comment by Prismattic · 2012-02-04T01:16:28.270Z · LW(p) · GW(p)

I tried melatonin for several days and it really didn't seem to do anything for me.

Sometimes I'll take Benadryl (but only half the recommended dose for my weight, so it wears off before morning), which does help but seems like not a good thing to be taking long-term.

Replies from: sketerpot
comment by sketerpot · 2012-02-05T21:46:54.090Z · LW(p) · GW(p)

Tolerance to the sleep-inducing effects of Benadryl builds up fairly quickly. In this double-blind study, people given 50 mg of diphenhydramine (the active ingredient in Benadryl) got really sleepy the first few times, but after doing this for four days in a row, the effects were indistinguishable from a placebo. Benadryl usually comes in 25 mg tablets, so that's two pills per night.

comment by MileyCyrus · 2012-02-01T05:46:36.759Z · LW(p) · GW(p)

Is programming a bad career to get into? Is it true that you can't work in it more than a couple of decades because all your skills will go obsolete and you'll be replaced by someone younger?

Replies from: shminux, Viliam_Bur, sixes_and_sevens, CarlShulman
comment by shminux · 2012-02-01T05:59:24.872Z · LW(p) · GW(p)

Are you serious? If you have an aptitude for coding/design/software architecture, and no other burning passion, programming is an excellent choice. While the field changes rapidly, it is an easy one in which to update your skills cheaply and with almost no red tape. Besides, most people change careers on average more often than every 20 years, so there's no point looking that far ahead.

Just Don't Call Yourself A Programmer.

Replies from: Thomas
comment by Thomas · 2012-02-01T06:57:11.710Z · LW(p) · GW(p)

Coding should be automated, sooner or later. You can't expect that nothing will change, basically, for decades.

Replies from: faul_sname, ShardPhoenix, shminux
comment by faul_sname · 2012-02-01T07:29:10.017Z · LW(p) · GW(p)

Yes. We call that "the singularity".

Replies from: Thomas
comment by Thomas · 2012-02-01T08:24:06.162Z · LW(p) · GW(p)

When I was a few years younger and naiver, I thought this site was all about the Singularity. Now I know it is about akrasia, conditional probability, self-help, and many other things. No wonder not everyone expects the Singularity to render our current coding habits obsolete.

comment by ShardPhoenix · 2012-02-01T07:24:14.975Z · LW(p) · GW(p)

Programming is automation.

Replies from: Thomas
comment by Thomas · 2012-02-01T08:12:45.657Z · LW(p) · GW(p)

Automation of automation. Of course.

comment by shminux · 2012-02-01T07:11:05.552Z · LW(p) · GW(p)

Maybe in the sense that creating a compiler or a high-level API is automation. There will be a need to "code" in the current high-level languages for quite some time. (There is still some demand for people who can code in C and even in assembler, despite the decades that have passed since those languages were first introduced, and despite Moore's law holding steady.)

Replies from: fubarobfusco
comment by fubarobfusco · 2012-02-01T07:33:21.216Z · LW(p) · GW(p)

Programming is the act of deciding exactly what needs to be done by the automation. At whatever level of automation exists, there's still telling that automation what to do.

comment by Viliam_Bur · 2012-02-02T10:34:33.231Z · LW(p) · GW(p)

Is it true that you can't work in it more than a couple of decades because all your skills will go obsolete and you'll be replaced by someone younger?

If this happens (and yes, to some people this happens), then you are doing it wrong. Getting older usually brings some problems, like accumulated bad experience, loss of illusions, less enthusiasm, possible burning out, and starting a family which means that you are less willing to work overtime, etc. But this happens in any profession.

What exactly are your programming skills? (The "larger picture" is already mentioned in shminux's comment, so I focus here only on programming.) If you have memorized a few keywords and function names, then honestly you don't know anything about programming, and a new programming language or technology will make your skills obsolete. Even for a good programmer, having the important keywords in your "memory cache" is useful, but switching to another language is just a matter of time.

After the "memorizing the keywords" level you get to real programming -- you design algorithms and understand design patterns (which simply means: you will need to solve thousands of problems, but then you will see that 99% of them belong to one of roughly a dozen templates, and once you are familiar with the templates, solving these problems becomes very easy), and you will see something really new only once in a while (even most of the new things are just reinventing the wheel). And even when you do see a new thing, it still helps to know the old things, because you will understand why the new thing was designed this way.

You have to develop some meta-skills to make learning easier. For example if you work in multiple programming languages, you often use the same or similar thing with a different syntax. So why not make yourself a cheat-sheet per language per topic? Then if you have to learn a new language, you have to spend one day constructing a new cheat-sheet, and you are fluent in the new language. Using Google and parsing the official documentations are important skills. This can make your learning curve incredibly fast. Are there fresh people coming from university that know more than you? Listen to them, make notes of the keywords they use, ask what tools they use, then read Wikipedia, read a free online book, try the tools, use google, and within a month you will be able to give them good advice.

And then there is the higher meta-level where you decide what exactly you will do and how you will sell yourself.

Replies from: MileyCyrus
comment by MileyCyrus · 2012-02-02T17:05:54.822Z · LW(p) · GW(p)

Thanks for the detailed response :)

comment by sixes_and_sevens · 2012-02-01T12:33:37.069Z · LW(p) · GW(p)

"Programming" isn't really a coherent vocation any more, and will probably become even less so as time passes. By way of analogy, being a scribe was once a trade in its own right, but any contemporary job you're ever likely to want will demand literacy.

Replies from: MileyCyrus
comment by MileyCyrus · 2012-02-01T17:19:04.984Z · LW(p) · GW(p)

Are you saying that all jobs will soon require coding literacy?

Replies from: dbaupp
comment by dbaupp · 2012-02-01T22:43:29.302Z · LW(p) · GW(p)

Jobs might not require coding literacy, but knowing how to write rudimentary code (in a scripting language like Python) makes a computer another tool at your disposal (a very very powerful one!). e.g.

  • one can use a regular expression to find all the telephone numbers in a text document
  • if one has a list of 20 files to download, then knowing how to write a 4- or 5-line script that takes the list and downloads the files makes the job much faster.
  • [edit] scripts are reusable, so an hour investment of time writing a script that cuts 5 minutes off a common task pays for itself quickly

(Also, being able to clarify one's thoughts enough to convey them unambiguously to a computer is possibly a useful skill in itself.)
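The second bullet above might look like this in Python (a sketch only; the filename "urls.txt" and the helper names are made up for illustration):

```python
# Sketch: download every URL listed in a plain-text file, one per line.
import urllib.request

def filename_for(url):
    """Use the last path component of the URL as the local filename,
    falling back to a default when the URL ends in a slash."""
    return url.rsplit("/", 1)[-1] or "index.html"

def download_all(list_path):
    """Download each non-blank URL listed in the file at list_path."""
    with open(list_path) as f:
        for line in f:
            url = line.strip()
            if url:  # skip blank lines
                urllib.request.urlretrieve(url, filename_for(url))
```

A one-off script like this takes a few minutes to write and replaces twenty rounds of click-and-save.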

Replies from: sketerpot, sixes_and_sevens, MileyCyrus
comment by sketerpot · 2012-02-05T21:41:12.144Z · LW(p) · GW(p)

one can use a regular expression to find all the telephone numbers in a text document

Recognizing phone numbers is actually a non-trivial problem, because people write them in so many crazy ways. It's easier if you have a list of phone numbers all formatted in roughly the same way, but that's not always the case.

Replies from: dbaupp
comment by dbaupp · 2012-02-06T03:15:50.869Z · LW(p) · GW(p)

Ah, good point, but something very general like /[0-9+\-() ]{4,}/ will at least reduce the amount of manual filtering required!
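As a runnable sketch of that pattern (the sample text is made up), showing both the hits and the over-matching:

```python
import re

# The deliberately loose pattern from above: runs of digits, '+', '-',
# parentheses, and spaces, at least 4 characters long.
phone_ish = re.compile(r"[0-9+\-() ]{4,}")

text = "Call (555) 123-4567 or +44 20 7946 0958; born in 1984."
candidates = [m.group().strip() for m in phone_ish.finditer(text)]
# Catches both numbers, but also the year -- hence the manual filtering.
print(candidates)  # ['(555) 123-4567', '+44 20 7946 0958', '1984']
```

Loosening the pattern trades precision for recall, which is exactly why it reduces, rather than eliminates, the manual pass.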

In a neat coincidence, I was just reading this article, of which the first 3 paragraphs are most relevant:

Performing manual, repetitive tasks enrages me. I used to think this was a corollary of being a programmer, but I’ve come to suspect (or hope) that this behaviour is inherent in being human.

But being able to hack together scripts simply makes it much easier to go from a state of rage to a basic solution in a very small amount of time. As a side point, this is one of the reasons that teaching the basics of programming in schools is so important. It’s hard to think of any job which wouldn’t benefit from a few simple scripts to perform more automation.

When we’re hiring, even for non-developer roles, we look for this kind of mentality - it’s extremely useful, especially when building a software business, if costs don’t scale linearly with revenue. The more we can invest up-front in automation, the less time our team has to spend on performing stupid, manual tasks. As we add more employees, the benefits are compounded. And less rage generally makes the workplace a much happier place.

comment by sixes_and_sevens · 2012-02-02T12:51:27.457Z · LW(p) · GW(p)

This more or less would have been my response. It may not be worth your while becoming a software developer, but it's definitely worth your while learning to code.

comment by MileyCyrus · 2012-02-02T06:19:57.016Z · LW(p) · GW(p)

That makes sense. Programming as a side dish.

comment by CarlShulman · 2012-02-01T07:23:50.903Z · LW(p) · GW(p)

It depends on other things about you that we don't know. What do you want? What's your skill/ability profile like?

If you're most interested in money, working as a salaried programmer can take you into the six-figure range (the average for Silicon Valley has passed that now). Your skills will obsolesce faster than in other disciplines, and you'll actually be called on it (doctors' skills vary a lot by time of graduation, with older being worse, but patients don't do anything about it), but that's manageable. Unfortunately, as you get older you lose fluid intelligence and so can't learn new skills as easily.

You can make much more money in startups in expectation (from tail outcomes) if you're good, but note that one can be an entrepreneur in other fields (software/web startups are nice in terms of low barriers to entry, low capital requirements, etc, but that also means more competition). With a long time horizon if you're smart enough to reliably graduate medical school and find medicine tolerable you'll make more money as a doctor than an engineer. Likewise for elite law schools, if you both have the credentials to get in and go to the high end places (although that carries more risk). Finance (investment banking, hedge funds, etc) has substantially better financial prospects if you can get into it, although again nontrivial risk.

Other technically demanding jobs (other types of engineering, actuaries, etc) have similar or better aggregate compensation statistics.

In terms of quality of life, some people really like coding, at least compared to the demands of higher-paying fields (risk, self-motivation, management/sales/schmoozing, intense hours, many years of costly schooling, etc). Others don't.

comment by Anubhav · 2012-02-05T10:10:43.980Z · LW(p) · GW(p)

What are the (reasonably) low-cost high-return lifehacks most people probably haven't heard about?

Spaced repetition comes immediately to mind. So do nootropics.

What about speed-reading? It seems to get a bad rap or be dismissed as pseudoscience. So... is it real, and if it is, how useful is it?

These are the three I can think of... Are there any more?

(I seem to remember seeing 'mindfulness meditation' mentioned on LW a few times... No idea what it's actually good for, though.)

[Edited to fix weird prepositional slip-up. Dismissed as pseudoscience, not by pseudoscience.]
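Of the three, spaced repetition is the easiest to illustrate mechanically. Here is a toy SM-2-style interval update (a rough sketch of the classic SuperMemo-2 scheme; the constants are the textbook ones, and real apps tweak them):

```python
def sm2_update(interval_days, ease, quality):
    """Toy SM-2-style update. quality is a 0-5 self-rating of recall.

    Returns (next_interval_days, new_ease). Constants follow the classic
    SM-2 description; real implementations modify them.
    """
    if quality < 3:          # failed recall: the card starts over
        return 1, ease
    # Ease drifts up for easy recalls, down for hard ones, floored at 1.3
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days <= 1:
        return 6, ease       # second successful review
    return round(interval_days * ease), ease

interval, ease = 1, 2.5      # a brand-new card
for q in [5, 4, 5]:          # three successful reviews
    interval, ease = sm2_update(interval, ease, q)
print(interval)              # → 43 (days until next review; roughly geometric growth)
```

The point of the growing intervals is that each review happens just before the forgetting curve would dip, which is why the technique is cheap relative to its returns.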

Replies from: HonoreDB
comment by HonoreDB · 2012-02-10T04:31:39.458Z · LW(p) · GW(p)

Old thread.

Replies from: Anubhav
comment by Anubhav · 2012-02-10T11:13:55.822Z · LW(p) · GW(p)

(sigh) No Rings of Power on that thread, I'm afraid.

Thanks for the link though.

comment by [deleted] · 2012-02-10T13:08:38.468Z · LW(p) · GW(p)

Reading some of Robin Hanson's older writing: If Uploads Come First

What if uploads decide to take over by force, refusing to pay back their loans and grabbing other forms of capital? Well for comparison, consider the question: What if our children take over, refusing to pay back their student loans or to pay for Social Security? Or consider: What if short people revolt tonight, and kill all the tall people?

In general, most societies have many potential subgroups who could plausibly take over by force, if they could coordinate among themselves. But such revolt is rare in practice; short people know that if they kill all the tall folks tonight, all the blond people might go next week, and who knows where it would all end? And short people are highly integrated into society; some of their best friends are tall people.

In contrast, violence is more common between geographic and culturally separated subgroups. Neighboring nations have gone to war, ethnic minorities have revolted against governments run by other ethnicities, and slaves and other sharply segregated economic classes have rebelled.

Thus the best way to keep the peace with uploads would be to allow them as full as possible integration in with the rest of society. Let them live and work with ordinary people, and let them loan and sell to each other through the same institutions they use to deal with ordinary humans. Banning uploads into space, the seas, or the attic so as not to shock other folks might be ill-advised. Imposing especially heavy upload taxes, or treating uploads as property, as just software someone owns or as non-human slaves like dogs, might be especially unwise.

But there is a barrier that is just as vast as any geographical barrier in terms of culture and social contact. Time. Imagine the world was filled with slow and fast people, slower people being just as smart but running at a speed about half that of fast people. Who would you tend to interact with more in your personal life?

Taking his basic Darwinian argument, human uploads will overall run precisely at the speed at which it is most economical to run them, which may differ depending on their particular profession or personality type. Over time any small initial cultural differences would compound.

Replies from: David_Gerard
comment by David_Gerard · 2012-02-11T08:42:28.107Z · LW(p) · GW(p)

Indeed. This is why I have a hard time thinking of ems as "friendly", even as I concede they would be fully human - we have considerable historical precedent as to what happens when one group of humans is much more powerful than a colocated other group of humans.

Frankly, humans aren't human-friendly intelligences. As such, it's not clear to me that "human-friendly intelligence" is even a sufficiently coherent concept to make predictions from; much as "God" isn't a coherent concept.

comment by Anubhav · 2012-02-04T07:19:41.840Z · LW(p) · GW(p)

Apparently, Fuyuki City (the setting of Fate/stay night) is based on Kobe.

Also, the setting of Haruhi is based on Nishinomiya.

Kobe is here. And Nishinomiya is here.

Wonder if Eliezer's planning any epic fanfics after MoR...

Replies from: None
comment by [deleted] · 2012-02-05T13:30:38.592Z · LW(p) · GW(p)

Wonder if Eliezer's planning any epic fanfics after MoR...

Maesters of the Citadel from GRRM's world of Ice and Fire are basically crying out to be remade into a Bayesian conspiracy or better yet an organization fighting magic and banishing it from the world in order to reduce existential risk.

The destruction of Valyria, the Others beyond the Wall, the sheer destruction that Dragons can wreak on human lands, the ability-enhancing (and possibly intelligence-enhancing) capability of the network of Godswoods... if we were living in that universe, wouldn't we perhaps do the same?

Now that I think about it, maybe I'll rather start writing that.

Replies from: Anubhav
comment by Anubhav · 2012-02-05T13:54:48.623Z · LW(p) · GW(p)

Now that I think about it, maybe I'll rather start writing that.

Fanfics of GRRM's works aren't hosted on Fanfiction.net. Finding readers will be difficult. (Also, the man himself seems likely to send a C&D.)

Replies from: None
comment by [deleted] · 2012-02-05T14:01:07.447Z · LW(p) · GW(p)

(Also, the man himself seems likely to send a C&D.)

It seems you are right. Too bad, the universe seemed made for it. Fan fiction is srs bzns apparently.

comment by billswift · 2012-02-04T05:03:15.144Z · LW(p) · GW(p)

A new science journal recently published a seriously crackpot paper; this has the abstract and a link to the PDF. I first heard about it from Derek Lowe, who has also written two follow-up posts. The first has a couple of links discussing how news of the paper spread, while the second includes a link to the journal making excuses for why they published it.

Moreover, members of the Editorial Board have objected to these papers; some have resigned, and others have questioned the scientific validity of the contributions. In response I want to first state some basic facts regarding all publications in this journal. All papers are peer-reviewed, although it is often difficult to obtain expert reviewers for some of the interdisciplinary topics covered by this journal. I feel obliged to stress that although we will strive to guarantee the scientific standard of the papers published in this journal, all the responsibility for the ideas contained in the published articles rests entirely on their authors.

I included the links to all of Derek Lowe's posts because they have other interesting links, including in the comments.

comment by tgb · 2012-02-02T13:39:22.285Z · LW(p) · GW(p)

I recently learned of the startup Knewton. They're an education company that focuses on developing computer-based courses out of textbooks in a manner that lets each student progress at their own pace and learn with methods that have proven successful for them in the past. This project seems like a good way to grab some low-hanging fruit in the education sphere and to start the process of computer-driven personalization of education, which strikes me as potentially quite powerful.

Some other details: my understanding is that they are creating efficient means to convert entire textbooks into a graph of each piece of knowledge along with knowledge of what pieces are prerequisite for other pieces. Then, a student can progress through the graph at varying speeds based on quiz scores and the system will suggest learning methods which have previously given good results for the student (eg: videos versus traditional text versus 'real world' examples).

Can this be effective? Are other companies doing similar projects? What are the other low-hanging fruit of education that computers can let us pick?
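The prerequisite-graph mechanism described above can be sketched in a few lines; the concept names and structure below are made up for illustration and have nothing to do with Knewton's actual system:

```python
# Prerequisite graph: concept -> set of concepts it requires (hypothetical data)
prereqs = {
    "counting": set(),
    "addition": {"counting"},
    "multiplication": {"addition"},
    "fractions": {"multiplication"},
    "exponents": {"multiplication"},
}

def unlocked(mastered):
    """Concepts a student may study next: not yet mastered, all prereqs met."""
    return sorted(c for c, reqs in prereqs.items()
                  if c not in mastered and reqs <= mastered)

print(unlocked({"counting"}))                                # ['addition']
print(unlocked({"counting", "addition", "multiplication"}))  # ['exponents', 'fractions']
```

The quiz scores then decide when a concept moves into the "mastered" set, and a recommender picks the presentation format; the graph itself is just a dependency lookup like this.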

comment by gwern · 2012-02-13T22:36:48.412Z · LW(p) · GW(p)

After 40 days and 40 nights (plus a few), I've finished my little randomized double-blind placebo-controlled vitamin D sleep experiment: http://www.gwern.net/Zeo#vitamin-d

Conclusion: it probably hurts sleep when you take it at night.
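In miniature, the analysis of an experiment like this is a two-sample comparison of blinded nights. The numbers below are synthetic, not gwern's data, and the assumed 5-point sleep-score deficit on vitamin-D nights is purely illustrative:

```python
import random
import statistics

random.seed(0)
# Synthetic Zeo-style sleep-quality scores: 20 placebo nights, 20 vitamin-D nights
placebo = [random.gauss(75, 8) for _ in range(20)]
vitd    = [random.gauss(70, 8) for _ in range(20)]   # assumed ~5-point deficit

def welch_t(a, b):
    """Welch's t statistic for an unequal-variance two-sample comparison."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

t = welch_t(placebo, vitd)
print(round(t, 2))   # a |t| well above ~2 suggests a real difference at this sample size
```

With only 40 nights, an effect has to be fairly large to clear the noise, which is why the randomization and blinding matter so much in self-experiments.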

Replies from: gwern
comment by gwern · 2012-05-02T14:20:05.061Z · LW(p) · GW(p)

Followup: it helps morning mood when taken in the morning: http://www.gwern.net/Zeo#vitamin-d-at-morn-helps

comment by beriukay · 2012-02-10T22:07:39.132Z · LW(p) · GW(p)

Accidental anti-akrasia effect I've recently discovered: I recently set my watch to hourly chime (first time I've used it in over 5 years) so that I could get up at least once an hour and walk around a bit. That's met with some success, but what I've found is that whenever the chime goes off, my sympathetic nervous system takes a jolt, and if I was in the middle of something unproductive, I start to berate myself with statements like "You're going to die someday, what have you got to show for it? Reading your RSS feeds? Writing emails? Come on, ya pansy, do something!" (This didn't start as a conscious decision, so much as a realization that an hour had just gone by without my noticing it, which fed into my death aversion. Maybe that's a small version of what a midlife crisis feels like.) It's not perfect, but it does seem to make me more productive for the first 15 minutes of each hour than I normally am.

Replies from: David_Gerard
comment by David_Gerard · 2012-02-11T08:39:41.789Z · LW(p) · GW(p)

Maybe that's a small version of what a midlife crisis feels like.

In my twenties (late '80s, early '90s), my friends and I used to talk about having a mid-life crisis every six months. Oh, the angst of Generation X, in the now-lost-to-history last years of the pre-Internet era. (I'm quite enjoying my actual middle age.)

comment by [deleted] · 2012-02-07T15:15:42.747Z · LW(p) · GW(p)

Intellectual Interests Genetically Predetermined? via FuturePundit.

From personality to neuropsychiatric disorders, individual differences in brain function are known to have a strong heritable component. Here we report that between close relatives, a variety of neuropsychiatric disorders covary strongly with intellectual interests. We surveyed an entire class of high-functioning young adults at an elite university for prospective major, familial incidence of neuropsychiatric disorders, and demographic and attitudinal questions. Students aspiring to technical majors (science/mathematics/engineering) were more likely than other students to report a sibling with an autism spectrum disorder (p = 0.037). Conversely, students interested in the humanities were more likely to report a family member with major depressive disorder (p = 8.8×10⁻⁴), bipolar disorder (p = 0.027), or substance abuse problems (p = 1.9×10⁻⁶). A combined PREdisposition for Subject MattEr (PRESUME) score based on these disorders was strongly predictive of subject matter interests (p = 9.6×10⁻⁸). Our results suggest that shared genetic (and perhaps environmental) factors may both predispose for heritable neuropsychiatric disorders and influence the development of intellectual interests.

comment by [deleted] · 2012-02-09T16:42:34.148Z · LW(p) · GW(p)

Our brains are paranoid. The feeling illustrated by this comic is, I must unfortunately admit, pretty familiar.

Replies from: TimS
comment by TimS · 2012-02-10T18:00:51.983Z · LW(p) · GW(p)

That's funny. My rationality has conquered my paranoia to the point that I don't fear murders hiding in my house. I fear Cthulhu-ish monstrosities. Such fears have the virtue of being internally consistent, even if they have the vice that they seem inconsistent with our understanding of the physical laws (and thus are extremely improbable). :)

comment by David_Gerard · 2012-02-11T08:38:11.677Z · LW(p) · GW(p)

HP:MoR is now not merely "fanfic", but an example of deconstruction-by-example:

So where do you go when all avenues explored with character and theme? You start tearing down the previous work. Good Fanfiction is a model for this. Harry Potter and the Methods of Rationality good example. Responds to the work by telling a new story while analyzing the nature of the old one, in this case by picking apart the nature of Wizard society. Hell, it's what Watchmen did to comics in the first place.

comment by ahartell · 2012-02-07T00:59:03.009Z · LW(p) · GW(p)

Does anyone know of any studies about the average life expectancy of Native Americans pre-Columbus? Or any information at all better than a post on Yahoo Answers.

comment by Aharon · 2012-02-05T02:29:28.060Z · LW(p) · GW(p)

Hi! This is basically a question about sloppiness. I've recently noticed that I tend not to sufficiently proofread the reports I write as part of my work; I recently sent one to a coworker/supervisor and he criticised it for having too many careless mistakes. I then remembered that the supervisor for my diploma thesis had the same criticism. It may be connected to overconfidence bias - I noticed that when finishing work, it doesn't occur to me to double-check; I just assume I didn't make any mistakes.

Is there any hack that could help me to consistently remember avoiding this behavior pattern? I think I now know where the problem lies, but I don't know how to apply that knowledge to effectively avoid this behavior - it's usually just in retrospect that I notice I shouldn't have submitted something yet.

comment by Incorrect · 2012-02-01T05:40:05.635Z · LW(p) · GW(p)

In HPMoR chapter 60, at the very end of the chapter Quirrell is about to tell Harry why he thinks he is different but I cannot find the rest of the text in the subsequent context switches. Where does he answer Harry?

Replies from: James_Blair
comment by James_Blair · 2012-02-01T05:59:47.464Z · LW(p) · GW(p)

You can find it in chapter 63:

I will say this much, Mr. Potter: You are already an Occlumens, and I think you will become a perfect Occlumens before long. Identity does not mean, to such as us, what it means to other people. Anyone we can imagine, we can be; and the true difference about you, Mr. Potter, is that you have an unusually good imagination. A playwright must contain his characters, he must be larger than them in order to enact them within his mind. To an actor or spy or politician, the limit of his own diameter is the limit of who he can pretend to be, the limit of which face he may wear as a mask. But for such as you and I, anyone we can imagine, we can be, in reality and not pretense. While you imagined yourself a child, Mr. Potter, you were a child. Yet there are other existences you could support, larger existences, if you wished. Why are you so free, and so great in your circumference, when other children your age are small and constrained? Why can you imagine and become selves more adult than a mere child of a playwright should be able to compose? That I do not know, and I must not say what I guess. But what you have, Mr. Potter, is freedom.

comment by shminux · 2012-02-01T06:24:30.936Z · LW(p) · GW(p)

Why I think that the MWI is belief in belief: buy a lottery ticket, suicide if you lose (a version of the quantum suicide/immortality setup), thus creating an outcome pump for the subset of the branches where you survive (the only one that matters). Thus, if you subscribe to the MWI, this is one of the most rational ways to make money. So, if you need money and don't follow this strategy, you are either irrational or don't really believe what you say you do (most likely both).

(I'm not claiming that this is a novel idea, just bringing it up for discussion.)

Possible cop-out: "Oh, but my family will be so unhappy in all those other branches where I die." LCPW: say, no one really cares about you all that much, would you do it?
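The branch-counting at issue can be made concrete with a toy Monte Carlo (branches treated as equally weighted samples; this is a sketch of the arithmetic, not a claim about quantum mechanics, and the odds are illustrative):

```python
import random

random.seed(1)
N = 1_000_000       # toy "branches", assumed equally weighted
p_win = 1e-3        # illustrative lottery odds

wins = sum(random.random() < p_win for _ in range(N))
survivors = wins    # strategy: suicide in every losing branch

# Conditioned on survival, you always hold a winning ticket:
print(wins / survivors if survivors else 0.0)   # 1.0
# But the fraction of branches in which you are alive at all is tiny:
print(wins / N)                                 # roughly p_win, ~0.001
```

The whole dispute in this thread is over which of those two numbers is the decision-relevant one.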

Replies from: rwallace, Solvent, ArisKatsaris, Emile, None, KPier, shminux
comment by rwallace · 2012-02-01T12:35:14.492Z · LW(p) · GW(p)

That's not many worlds, that's quantum immortality. It's true that the latter depends on the former (or would if there weren't other big-world theories, cf. Tegmark), but one can subscribe to the former and still think the latter is just a form of confusion.

comment by Solvent · 2012-02-01T06:36:19.718Z · LW(p) · GW(p)

You're correct that with that outcome pump, some copies of you would win the lottery. However, I disagree that you should kill yourself upon noticing that you'd lost. This has been discussed on LW before here.

Replies from: shminux
comment by shminux · 2012-02-01T07:06:52.344Z · LW(p) · GW(p)

Seems like more cop-outs, instead of LCPWs: the Failure Amplification does not happen in a properly constructed experiment (it is easy to devise a way to die with enough reliability and fewer side effects in case of a failure). If you only find a 99.9% sure kill, then you can still accept bets up to 1000:1. The Quantum Sour Grapes is a math error: the (implicit) expected utility is taken over all branches in the case of win, instead of only those where you survive, as was pointed out in the comments, though the author refuses to acknowledge it. There are more convenient worlds in some of the comments.

Replies from: Solvent
comment by Solvent · 2012-02-01T07:24:26.957Z · LW(p) · GW(p)

Maybe if you're a particularly silly average utilitarian.

comment by ArisKatsaris · 2012-02-02T15:38:06.156Z · LW(p) · GW(p)

This doesn't make sense. If I'm copied 5 times (in this Everett branch, nothing about other branches), and one of my copies wins the lottery, I still wouldn't want to kill myself. This doesn't mean that I wouldn't believe my copies existed -- it's just that their existence wouldn't automatically move me to suicide.

Why then would I want to kill myself if the copies happen to be located in different Everett branches? What does their location have to do with anything?

Replies from: shminux
comment by shminux · 2012-02-03T01:08:57.545Z · LW(p) · GW(p)

I don't follow your example...

Replies from: ArisKatsaris, Anubhav
comment by ArisKatsaris · 2012-02-03T09:41:39.242Z · LW(p) · GW(p)

Here's an example: let's assume for a sec there's no MWI, there's only one world. Let's assume further that you're copied atom-per-atom 5 times and each copy is placed in a different city. One of your copies is guaranteed to win a lottery ticket. A different copy than you wins. Once you find out you lost, do you kill yourself in order to be the one to win the lottery ticket? NO! Killing yourself wouldn't magically transform you to the copy that won the lottery ticket, it would just make you dead.

So why should the logic be different when you apply it to copies in different Everett branches, than when you apply it to copies in different cities of the same Everett branch?

Replies from: shminux
comment by shminux · 2012-02-04T01:09:12.213Z · LW(p) · GW(p)

Once you find out you lost, do you kill yourself in order to be the one to win the lottery ticket? NO! Killing yourself wouldn't magically transform you to the copy that won the lottery ticket, it would just make you dead.

I must be still missing your point. 4/5 of you would be dead, but only the branches where you survive matter. No "magical transportation" required.

Replies from: ArisKatsaris, pedanterrific, Anubhav
comment by ArisKatsaris · 2012-02-04T11:26:20.309Z · LW(p) · GW(p)

but only the branches where you survive matter.

Did you not read the sentence where my hypothetical is placed in a single world, no "branches"? Can you for the moment answer the question in the world in which there are no branches?

In fact forget the multiple copies altogether, think about a pair of twins. Should one twin kill themselves if the other twin won the lottery, just because 1/2 of them would be dead but "only the twin which survives" matters?

Replies from: shminux
comment by shminux · 2012-02-04T19:41:25.945Z · LW(p) · GW(p)

Should one twin kill themselves if the other twin won the lottery

Ah, now I understand your setup. Thank you for simplifying it for me. So the issue here is whether to count multiple copies as one person or separate ones, and your argument with twins is pretty compelling... as far as it goes. Now consider the following experiment (just going down the LCPW road to isolate the potential belief-in-belief component of the MWI):

The lottery is set up in a way that you either win big (the odds are small, but finite) or you die instantly and painlessly the rest of the time, with very high reliability, to avoid the "live but maimed" cop-out. Would you participate? There is no problem with twins: no live-but-winless copies ever exist in this scenario.

Same thing in a fantasy-like setting: there are two boxes in front of you, opening one will fulfill your dreams (in the FAI way, no tricks), opening the other will destroy the world. There is no way to tell which one is which. Should you flip a coin and open a box at random?

You value your life (and the world) much higher than simply fulfilling your dreams, so if you don't believe in the MWI, you will not go for it. If you believe the MWI, then the choice is trivial: one regular world before, one happy world after.

What would you do?

Again, there are many standard cop-outs: "but I only believe in the MWI with 99% probability, not enough to bet the world on it", etc. These can be removed by a suitable tweaking of the odds or the outcomes. The salient feature is that there is no more multiple-copies argument.

Replies from: pedanterrific, ArisKatsaris
comment by pedanterrific · 2012-02-04T19:47:27.629Z · LW(p) · GW(p)

If you believe the MWI, then the choice is trivial: one regular world before, one happy world after.

I think this is where you're losing people. Why isn't it "one regular world before, 999999 horrifying wastelands and 1 happy world after"? (Or, alternately, "one horrifying wasteland with .999999 of the reality fluid and one happy world with .000001 of the reality fluid"?)

comment by ArisKatsaris · 2012-02-08T01:50:56.912Z · LW(p) · GW(p)

The lottery is set up in a way that you either win big (the odds are small, but finite) or you die instantly and painlessly the rest of the time, with very high reliability, to avoid the "live but maimed" cop-out.

I'd need to understand how consciousness works, in order to understand if "I" would continue in this sense. Until then I'm playing it cautious, even if MWI was certain.

What would you do? [...] there are many standard cop-outs: "but I only believe in the MWI with 99% probability, not enough to bet the world on it", etc. These can be removed by a suitable tweaking of the odds or the outcomes.

That's not as easy as you seem to think. If I believe in MWI with my current estimation of about 85%, and you think you can do an appropriate scenario for me by merely adjusting the odds or outcomes, then do you think you can do an appropriate scenario even for someone who only believes in MWI with 10% probability, or 1% probability, or 0.01% probability? What's your estimated probability for the MWI?

Plus I think you overestimate my capacity to figure out what I would do if I didn't care if anyone discovered me dead. There probably were times in my life where I would have killed myself if I didn't care about other people discovering me dead, even without hope of a lottery ticket reward.

Replies from: shminux
comment by shminux · 2012-02-08T03:57:06.806Z · LW(p) · GW(p)

I agree, certainly 85% is not nearly enough. (1 chance out of 7 that I die forever? No, thanks!) I think this is the main reason no one takes quantum immortality seriously enough to set up an experiment: their (probably implicit) utility of dying is extremely large and negative, enough to outweigh any kind of monetary payoff. Personally, I give the MWI in some way a 50/50 chance (not enough data to argue one way or the other), and a much smaller chance to its literal interpretation of worlds branching out every time a quantum measurement happens, making quantum immortality feasible (probably 1 in a million, but the error bars are too large to make a bet).

Unfortunately, you are apparently the first person who admitted to their doubt in the MWI being the reason behind their rejection of experimental quantum suicide. Most other responses are still belief-in-belief.

comment by pedanterrific · 2012-02-04T01:20:24.229Z · LW(p) · GW(p)

only the branches where you survive matter.

To who? The branches in which I end up dead one way or another certainly matter to me. (Which is fortunate, since I don't have any real hope of continuing to live for infinity.)

Replies from: shminux
comment by shminux · 2012-02-04T01:28:50.332Z · LW(p) · GW(p)

Why do they matter to you?

Replies from: pedanterrific
comment by pedanterrific · 2012-02-04T01:41:01.384Z · LW(p) · GW(p)

I'm supposed to know this? My thought process went: 1. The branch in which I currently live (and all its descendants) matters to me; 2. I assign a very low probability to not dying eventually; 3. Believing 2 does not seem to affect 1 at all.

Why does your life matter to you?

comment by Anubhav · 2012-02-04T03:31:12.188Z · LW(p) · GW(p)

only the branches where you survive matter.

Lol.... no?

If you really believe that, there's no need for a lottery ticket. Just kill yourself in every single world where you're not the richest person in the world. Thus the only branch where you survive will be the one in which you're the richest person in the world.

(Assuming an "every conceivable outcome is physically realised" version of MWI, but then, the lottery ticket gedankenexperiment does that as well.)

comment by Anubhav · 2012-02-03T09:02:46.766Z · LW(p) · GW(p)

Quantum immortality: You kill yourself and die in the vast majority of Everett branches. But you find yourself alive, because you continue to observe only the Everett branches where you survive.

Lottery Ticket Win: You kill yourself if you get a losing ticket. By QI, you find yourself alive... with a losing lottery ticket.

The branch where you won the lottery diverged from your current branch before you killed yourself. There's no way to transport yourself into that branch.

(For the record, I believe that QI is pure BS.)

Replies from: shminux
comment by shminux · 2012-02-04T01:03:50.208Z · LW(p) · GW(p)

Lottery Ticket Win: You kill yourself if you get a losing ticket. By QI, you find yourself alive... with a losing lottery ticket.

If killing is more reliable than the odds of winning, in most surviving branches you end up rich.

Replies from: Anubhav, pedanterrific
comment by Anubhav · 2012-02-04T03:21:53.221Z · LW(p) · GW(p)

If killing is more reliable than the odds of winning, in most surviving branches you end up rich.

If the experience of the surviving copies is what's important for you, just do what Aris Katsaris suggests and call it a day. (ie, upload yourself to a million sims, wait to see if one of the copies wins. If none of them does, delete everything and start over. If any of them does, delete all the other copies and then kill yourself. HAPPII ENDO da ze~)

Just don't complain if everyone else reacts with "what an idiot".

ETA: Noticed shminux's response to Aris in the sibling. Continuing the discussion there.

comment by pedanterrific · 2012-02-04T01:12:46.804Z · LW(p) · GW(p)

Yeah, but once you average net worth over reality fluid volume, you end up poorer than before.
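The averaging claim is simple expected-value arithmetic; the ticket price, jackpot, and odds below are made up for illustration:

```python
# Illustrative numbers: $1 ticket, $10,000,000 jackpot, 1-in-100M odds
p_win, jackpot, ticket = 1e-8, 10_000_000, 1.0

# Expected change in net worth averaged over ALL branches, with no suicide:
ev_no_suicide = p_win * jackpot - ticket
# With suicide on a loss, the losing branches' wealth drops to zero (or worse),
# so the all-branch average only falls further; conditioning on survival
# is the step that makes the pump look attractive.
print(round(ev_no_suicide, 2))   # → -0.9
```

In other words, the strategy never raises the branch-weighted average; it only reshuffles who experiences what.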

comment by Emile · 2012-02-02T14:44:58.169Z · LW(p) · GW(p)

Thus, if you subscribe to the MWI, this is one of the most rational ways to make money. So, if you need money and don't follow this strategy, you are either irrational or don't really believe what you say you do (most likely both).

No: if I follow that strategy it makes it more likely that others will follow that strategy; so even if I do successfully end up in a world where I won the lottery, it may also be a world where all my loved ones committed suicide.

Replies from: shminux
comment by shminux · 2012-02-03T01:09:32.517Z · LW(p) · GW(p)

Note that I said:

LCPW: say, no one really cares about you all that much, would you do it?

comment by [deleted] · 2012-02-01T14:54:32.354Z · LW(p) · GW(p)

When considering this, I thought of another related question. If MWI/Quantum Immortality insists that you not die, would it also insist that you come into existence earlier? If you can't die ever (because that keeps you in more branches), then the earlier you're born, the more branches in which you are alive, therefore MWI/Quantum Immortality indicates that if you exist the most likely explanation is... (I don't know. I seem to be confused.)

MWI/Quantum Immortality feels like puddle thinking. But I'm not sure I fully understand puddle thinking either, so me saying MWI/Quantum Immortality feels like puddle thinking feels like me explaining a black box with a smaller black box inside.

Given those thoughts, I think my next step is to ask "In what ways is MWI/Quantum Immortality like puddle thinking and in what ways is it not like puddle thinking?"

Reference to puddle thinking: http://en.wikipedia.org/wiki/Fine-tuned_Universe#In_fiction_and_popular_culture

... imagine a puddle waking up one morning and thinking, 'This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact it fits me staggeringly well, must have been made to have me in it!' This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this world was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.

comment by KPier · 2012-02-03T18:48:28.529Z · LW(p) · GW(p)

I believe that my death has negative utility. (Not just because my family and friends will be upset; also because society has wasted a lot of resources on me and I am at the point of being able to pay them back, I anticipate being able to use my life to generate lots of resources for good causes, etc.)

Therefore, I believe that the outcome (I win the lottery ticket in one world; I die in all other worlds) is worse than the outcome (I win the lottery in one world; I live in all other worlds) which is itself worse than (I don't waste money on a lottery ticket in any world).

Least Convenient Possible World, I assume, would be believing that my life has negative utility unless I won the lottery, in which case, sure, I'd try quantum suicide.

thus creating an outcome pump for the subset of the branches where you survive (the only one that matters).

What? No! All of the worlds matter just as much, assuming your utility function is over outcomes, not experiences.

Replies from: shminux
comment by shminux · 2012-02-04T01:13:38.229Z · LW(p) · GW(p)

The LCPW is the one where your argument fails while mine works: suppose only the worlds where you live matter to you, so you happily suicide if you lose. So any egoist believing the MWI should use quantum immortality early and often if he/she is rational.

Replies from: KPier
comment by KPier · 2012-02-04T02:57:50.922Z · LW(p) · GW(p)

An egoist is generally someone who cares only about their own self-interest; that should be distinct from someone who has a utility function over experiences, not over outcomes.

But a rational agent with a utility function only over experiences would commit quantum suicide if we also assume there's minimal risk of the suicide attempt failing/ the lottery not really being random, etc.

In short, it's an argument that works in the LCPW but not in the world we actually live in, so the absence of suiciding rationalists doesn't imply MWI is a belief-in-belief.

comment by shminux · 2012-02-03T01:06:27.204Z · LW(p) · GW(p)

So far most replies are of the type of the invisible dragons in your garage: multiple reasons why looking for them would never work, so one should not even try. This is a classic signature of belief in belief.

A mildly rational reply from an MWI adept would sound as follows: "While the MWI-based outcome pump has some issues, the concept is interesting enough to try to refine and resolve them."

Replies from: Anubhav
comment by Anubhav · 2012-02-03T09:07:00.394Z · LW(p) · GW(p)

So far most replies are of the type of the invisible dragons in your garage: multiple reasons why looking for them would never work, so one should not even try.

Except that you're the only one who's postulating the dragon, while everyone else is going "Of course dragons don't exist, why'd we look for them? We should look for unicorns, dammit, unicorns! Not fire-breathing lizards!"