Welcome to Less Wrong! (July 2012)

post by Paul Crowley (ciphergoth) · 2012-07-18T17:24:51.381Z · score: 20 (21 votes) · LW · GW · Legacy · 850 comments

Contents

  A few notes about the site mechanics
  A few notes about the community
  A list of some posts that are pretty awesome

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fourth incarnation of the welcome thread, the first three of which now have too many comments. The text is by orthonormal, from an original by MBlume.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax  (you can click the "Help" link below the text box to bring up a primer).
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma; honestly, you don't know what you don't know about the community norms here.)
If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page.
There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address. 
Normal_Anomaly 
Randaly 
shokwave 
Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site.

850 comments

Comments sorted by top scores.

comment by OnTheOtherHandle · 2012-07-19T07:01:10.246Z · score: 50 (52 votes) · LW · GW

Hello!

  • Age: Years since 1995
  • Gender: Female
  • Occupation: Student

I actually started an account two years ago, but after a few comments I decided I wasn't emotionally or intellectually ready for active membership. I was confused and hurt for various reasons that weren't Less Wrong's fault, and I backed away to avoid saying something I might regret. I didn't want to put undue pressure on myself to respond to topics I didn't fully understand. Now, after many thousands of hours reading and thinking about neurology, evolutionary psychology, and math, I'm more confident that I won't just be swept up in the half-understood arguments of people much smarter than I am. :)

Like almost everyone here, I started with atheism. I was raised Hindu, and my home has the sort of vague religiosity that is arguably the most common form in the modern world. For the most part, I figured out atheism on my own, when I was around 11 or 12. It was emotionally painful and socially costly, but I'm stronger for the experience. I started reading various mediocre atheist blogs, but I got bored after a couple of years and wanted to do something more than shoot blind fish in tiny barrels. I wanted to build something up, not just tear something down (no matter how much it really should be torn down.)

The actual direct link to Less Wrong came from TV Tropes. I suspect it's one of the best gateway drugs because TV Tropes, while not explicitly atheist or rationalist, does more to communicate the positive ideals and emotional memes of LW-style rationality than most of the atheosphere does. For the first time, I got the sense that "our" way of thinking could be so much more powerful than simply bashing religion and astrology.

One important truth beyond atheism that I have slowly come to accept is inborn IQ differentials, between individuals and groups of individuals. I had to face the fact that P(male | IQ two standard deviations above the mean) was significantly higher than 50%. I had to deal with the fact that historical oppression probably wasn't the be-all and end-all explanation for why women on average hadn't done as much inventing and discovering and brilliant thinking as men. I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

I further learned that my brain was modular, and the bits of me that I choose to call "I" don't constitute everything. My own brain could sabotage the values and ideals that "I" hold so dearly. For a long time I struggled with the idea that everything I believed in and loved was fake, because I couldn't force my body to actually act accordingly. Did I value human life? Why wasn't I doing everything I possibly could to save lives, all the time? Did I value freedom and autonomy and gender equality? Why could I not help sometimes being attracted to domineering jerks?

It took me a while to accept that the newly-evolved, conscious, abstractly-reasoning, self-reflecting "I" simply did not have the firepower to bully ancient and powerful urges into submission. It took me a while to accept that my values were not lies simply because my monkey brain sometimes contradicted them. The "I" in my brain does not have as much power as she would like; that does not mean she doesn't exist.

Other, non-rationality related information: I love writing, and for a long time I convinced myself that therefore I would love being a novelist. Now, I recognize that I would much rather compose a non-fiction or reflective essay, although ideas for fiction stories still flood in and I rarely do much about it due to laziness and/or fear. I fell in love with Avatar: The Last Airbender for its great storytelling and its combination of intelligence and idealism. I adore Pixar and many Disney movies for the sweetness and heart. I like somewhat traditional-sounding music with easily discernible lyrics that tells a story; I can't get into anything that involves screaming or deliberate disharmony. Show-tunes are great. :)

I don't want to lose the hope/idealism/inner happiness that makes me able to un-ironically enjoy Disney and Pixar and Avatar; I consciously cultivate it and am lucky to have it. If this disposition will be "destroyed by the truth"...well, I have a choice to make then.

comment by Swimmer963 · 2012-07-19T09:03:36.889Z · score: 16 (16 votes) · LW · GW

Welcome to Less Wrong, and I for one am glad to have you here (again)! You sound like someone who thinks very interesting thoughts.

I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

I can't say that this is something that has ever really bothered me. Your IQ is what it is. Whether or not there's an overall gender-based trend in one direction or another isn't going to change anything for you, although it might change how people see you. (If anything, I found that I got more attention as a "girl who was good at/interested in science"...which was irritating and made me want to rebel and go into a "traditionally female" field just because I could.)

Basically, if you want to accomplish greatness, it's about you as an individual. Unless you care about the greatness of others, and feel more pride or solidarity with females than with males who accomplish greatness (which I don't), the statistical tendency doesn't matter.

I don't want to lose the hope/idealism/inner happiness that makes me able to un-ironically enjoy Disney and Pixar and Avatar; I consciously cultivate it and am lucky to have it. If this disposition will be "destroyed by the truth"...well, I have a choice to make then.

I think that more than idealism, what I wouldn't want to lose is a sense of humour. Idealism, in the sense of "believing that the world is good deep down/people will do the best they can/etc", can be broken by enough bad stuff happening. A sense of humour is a lot harder to break.

comment by OnTheOtherHandle · 2012-07-19T16:58:33.495Z · score: 8 (12 votes) · LW · GW

I know that it's not particularly rational to feel more affiliation with women than men, but I do. It's one of the things my monkey brain does that I decided to just acknowledge rather than constantly fight. It's helped me have a certain kind of peace about average IQ differentials. The pain I described in the parent has mellowed. Still, I have to face the fact that if I want to major in, say, applied math, chances are I might be lonely or below-average or both. I wish I had the inner confidence to care about self-improvement more than competition, but as yet I don't.

ETA: I characterize "idealism" as a hope for the future more than a belief about the present.

comment by Viliam_Bur · 2012-07-19T21:11:20.483Z · score: 32 (32 votes) · LW · GW

Still, I have to face the fact that if I want to major in, say, applied math, chances are I might be lonely or below-average or both.

As long as you know your own skills, there is no need to use your gender as a predictor. We use worse information only in the absence of better information, because worse information can still be better than nothing. We don't need to predict the information we already have.

When we already know that e.g. "this woman has IQ 150", or "this woman has won a mathematical olympiad" there is no need to mix general male and female IQ or math curves into the equation. (That's only what you do when you see a random woman and you have no other information.)

If there are hundred green balls in the basket and one red ball, it makes sense to predict that a randomly picked ball will be almost surely green. But once you have randomly picked a ball and it happened to be red... then it no longer makes sense to worry that this specific ball might still be green somehow. It's not; end of story.

If you had no experience with math yet, then I'd say that based on your gender, your chances to be a math genius are small. But that's not the situation; you already had some math experience. So make your guesses based on that experience. Your gender is already included in the probability of you having that specific experience. Don't count it twice!
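The screening-off argument above can be sketched as a toy simulation. All the numbers here are illustrative assumptions, not real IQ statistics: two groups get slightly different ability spreads, a noisy "test" observes ability, and we compare the probability of high ability before and after conditioning on a strong test result. Group membership shifts the prior noticeably, but once the strong result is observed, the group matters very little: the specific evidence has already absorbed it.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """For each hypothetical group, estimate P(high ability) and
    P(high ability | strong test score). Parameters are made up."""
    stats = {}
    for group, sd in (("M", 16), ("F", 14)):  # assumed spreads, same mean
        high_prior = 0
        tested, high_given_test = 0, 0
        for _ in range(n):
            ability = random.gauss(100, sd)
            score = ability + random.gauss(0, 5)  # noisy observation
            if ability >= 130:
                high_prior += 1
            if score >= 135:  # the strong result we condition on
                tested += 1
                if ability >= 130:
                    high_given_test += 1
        stats[group] = (high_prior / n, high_given_test / tested)
    return stats

stats = simulate()
for g, (prior, posterior) in stats.items():
    # Priors differ substantially between groups; posteriors barely do.
    print(g, round(prior, 3), round(posterior, 3))
```

Counting the group difference again on top of the observed score would be exactly the double-counting the comment warns against.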

comment by Bugmaster · 2012-07-26T22:49:54.479Z · score: 5 (5 votes) · LW · GW

If you had no experience with math yet, then I'd say that based on your gender, your chances to be a math genius are small.

To be perfectly accurate, any person's chances of being a math genius are going to be small anyway, regardless of that person's gender. There are very few geniuses in the world.

comment by ViEtArmis · 2012-07-19T17:37:55.164Z · score: 6 (6 votes) · LW · GW

It is particularly irrational to ignore the effect of your unconscious on your relationships. That fight is a losing battle (right now), so if having happy relationships is a goal, pursuing it requires that you pay attention.

There is almost no average IQ differential, since men pad out the bottom as well. Greater chromosomal genetic variations in men lead to stupidity as often as intelligence.

Really, this gender disparity only matters at far extremes. Men may pad out the top and bottom 1% (or something like that) in IQ, but applied mathematicians aren't all top 1% (or even 10%, in my experience). It is easy to mistake finally being around people who think like you do (as in high IQ) with being less intelligent than them, but this is a trick!

comment by OnTheOtherHandle · 2012-07-19T19:17:45.573Z · score: 5 (7 votes) · LW · GW

There is almost no average IQ differential, since men pad out the bottom as well.

Sorry, you're right, I did know that. (And it's exasperating to see highly intelligent men make the rookie mistake of saying "women are stupid" or "most women are stupid" because they happen to be high-IQ. There's an obvious selection bias - intelligent men probably have intelligent male friends but only average female acquaintances - because they seek out the women for sex, not conversation.)

I was thinking about "IQ differentials" in the very broad sense, as in "it sucks that anyone is screwed over before they even start." I also suffer from selection bias, because I seek out people in general for intelligence, so I see the men to the right of the bell curve, while I just sort of abstractly "know" there are more men than women to the left, too.

comment by philh · 2012-07-19T22:22:00.302Z · score: 9 (9 votes) · LW · GW

And it's exasperating to see highly intelligent men make the rookie mistake of saying "women are stupid" or "most women are stupid" because they happen to be high-IQ. There's an obvious selection bias - intelligent men probably have intelligent male friends but only average female acquaintances - because they seek out the women for sex, not conversation.

Another possible explanation comes to mind: people with high IQs consider the "stupid" borderline to be significantly above 100 IQ. Then if they associate equally with men and women, the women will more often be stupid; and if they associate preferentially with clever people, there will be fewer women.

(This doesn't contradict selection bias. Both effects could be at play.)

comment by ViEtArmis · 2012-07-20T14:37:58.135Z · score: 6 (6 votes) · LW · GW

You'd have to raise the bar really far before any actual gender-based differences showed up. It seems far more likely that the cause is a cultural bias against intellectualism in women (women will under-report IQ by 5ish points and men over-report by a similar margin, women are poorly represented in "smart" jobs, etc.). That makes women present themselves as less intelligent and makes everyone perceive them as less intelligent.

comment by juliawise · 2012-07-20T15:46:57.574Z · score: 4 (4 votes) · LW · GW

Does anyone know of a good graph that shows this? I've seen several (none citing sources) that draw the crossover in quite different places. So I'm not sure what the gender ratio is at, say, IQ 130.

comment by Vaniver · 2012-07-20T16:26:33.114Z · score: 2 (2 votes) · LW · GW

La Griffe du Lion has good work on this, but it's limited to math ability, where both the male mean and the male variance are higher than the female ones.

The formulas from the first link work for whatever mean and variance you want to use, and so can be updated with more applicable IQ figures, and you can see how an additional 10 point 'reporting gap' affects things.
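The tail-ratio calculation described above needs nothing beyond the standard library. Here is a sketch using the normal tail probability, with made-up parameters (equal means and a slightly higher male variance; these are illustrative assumptions, not La Griffe du Lion's actual figures):

```python
from math import erfc, sqrt

def tail(mu, sd, t):
    """P(X > t) for X ~ Normal(mu, sd), via the complementary error function."""
    return 0.5 * erfc((t - mu) / (sd * sqrt(2)))

# Hypothetical parameters, for illustration only.
male = dict(mu=100, sd=15.5)
female = dict(mu=100, sd=14.5)

# Male-to-female ratio above each threshold: even with equal means,
# the higher-variance group increasingly dominates as the bar rises.
ratios = {t: tail(t=t, **male) / tail(t=t, **female) for t in (115, 130, 145)}
for t, r in ratios.items():
    print(t, round(r, 2))
```

Shifting `mu` apart, or subtracting a few points from one group's scores to model a reporting gap, shows how sensitive the extreme-tail ratio is to small parameter changes.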

comment by OnTheOtherHandle · 2012-07-21T01:38:51.923Z · score: 2 (2 votes) · LW · GW

Unfortunately, intelligence in areas other than math seems to be an "I know it when I see it" kind of thing. It's much harder to design a good test for some of the "softer" disciplines, like "interpersonal intelligence" or even language skills, and it's much easier to pick a fight with results you don't like.

It could be that because intelligence tests are biased toward easy measurement, they focus too much on math, so they under-predict women's actual performance at most jobs not directly related to abstract math skills.

comment by ViEtArmis · 2012-07-20T17:02:11.156Z · score: 0 (0 votes) · LW · GW

Of course, if you use IQ testing, it is specifically calibrated to remove/minimize gender bias (so are the SAT and ACT), and intelligence testing is horribly fraught with infighting and moving targets.

I can't find any research that doesn't at least mention that social factors likely poison any experimental result. It doesn't help any that "intelligence" is poorly defined and thus difficult to quantify.

Considering that men are more susceptible to critical genetic failure, maybe the mean is higher for men on some tests because the low outliers had defects that made them impossible to test (such as being stillborn)?

comment by OnTheOtherHandle · 2012-07-21T01:40:38.306Z · score: 0 (0 votes) · LW · GW

The SAT doesn't seem to be calibrated to make sure average math scores are the same, at least. As late as 2006, there was still a significant gender gap.

comment by ViEtArmis · 2012-07-22T02:33:50.027Z · score: 0 (0 votes) · LW · GW

Apparently, the correction was in the form of altering essay and story questions to de-emphasize sports and business and ask more about arts and humanities. This hasn't been terribly effective. The gap is smaller in the verbal sections, but it's still there. Given that the entire purpose of the test is to predict college grades directly and women do better in college than men, explanations and theories abound.

comment by Desrtopa · 2012-07-22T14:23:41.881Z · score: 0 (0 votes) · LW · GW

Not a rigorously conducted study, but this (third poll) suggests a rather greater tendency to at least overestimate if not willfully over-report IQ, with both men and women overestimating, but men overestimating more.

comment by OnTheOtherHandle · 2012-07-21T01:53:41.629Z · score: 2 (2 votes) · LW · GW

You're right; my explanation was drawn from many PUA-types who had said similar things, but this effect is perfectly possible in non-sexual contexts, too.

There's actually little use in using words like "stupid", anyway. What's the context? How intelligent does this individual need to be to do what they want to do? Calling people "stupid" says "reaching for an easy insult," not "making an objective/instrumentally useful observation."

Sure, there will be some who say they'll use the words they want to use and rail against "censorship", but connotation and denotation are not so separate. That's why I didn't find the various "let's say controversial, unspeakable things because we're brave nonconformists!" threads on this site to be all that helpful. Some comments certainly were both brave and insightful, but I felt on the whole a little bit of insight was bought at the price of a whole lot of useless nastiness.

comment by Jayson_Virissimo · 2012-07-19T09:39:43.598Z · score: 5 (5 votes) · LW · GW

Idealism, in the sense of "believing that the world is good deep down/people will do the best they can/etc", can be broken by enough bad stuff happening. A sense of humour is a lot harder to break.

Arguably, if it was "broken" this way it would be a mistake (specifically, of generalizing from too small a sample size). I have a job where I am constantly confronted with suffering and death, but at the end of the day, I can still laugh just like everyone else, because I know my experience is a biased sample and that there is still lots of good going on in the world.

comment by Rubix · 2012-07-30T17:45:26.902Z · score: 1 (1 votes) · LW · GW

I like this post more than I like most things; you've helped me, for one, with a significant amount of distress.

comment by shminux · 2012-07-19T17:12:46.634Z · score: 7 (9 votes) · LW · GW

I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

Consciously keeping your identity small and thus not identifying with everyone who happens to have the same internal plumbing might be helpful there.

comment by OnTheOtherHandle · 2012-07-19T19:14:04.002Z · score: 6 (6 votes) · LW · GW

PG is awesome, but his ideas do basically fall into the category of "easier said than done." This doesn't mean "not worth doing," of course, but practical techniques would be way more helpful. It's easier to replace one group with another (arguably better?) group than to hold yourself above groupthink in general.

comment by shminux · 2012-07-19T19:43:33.208Z · score: 5 (7 votes) · LW · GW

easier said than done

My approach is to notice when I want to say/write "we", as opposed to "I", and examine why. That's why I don't personally identify as a "LWer" (only as a neutral and factual "forum regular"), despite the potential for warm fuzzies resulting from such an identification.

There is an occasional worthy reason to identify with a specific group, but gender/country/language/race/occupation/sports team are probably not good criteria for such a group.

comment by OnTheOtherHandle · 2012-07-19T21:15:26.535Z · score: 1 (1 votes) · LW · GW

Thank you! I'll look for that.

comment by shminux · 2012-07-20T00:04:34.788Z · score: 2 (2 votes) · LW · GW

Here is a typical LW comment that raises the "excessive group identification" red flag for me.

comment by ViEtArmis · 2012-07-19T20:55:53.284Z · score: 2 (2 votes) · LW · GW

I always think of that in the context of conflict resolution, and refer to it as "telling someone that what they did was idiotic, not that they are an idiot." Self-identifying is powerful, and people are pretty bad at it because of a confluence of biases.

comment by GLaDOS · 2012-07-19T13:05:34.236Z · score: 5 (7 votes) · LW · GW

Great to see you here, and great to hear you took the time to read up on the relevant material before jumping in. I'm confident you'll find that many people who comment quite a bit lack such prudence, so don't be surprised if you outmatch a long-time commenter. (^_^)

For the first time, I got the sense that "our" way of thinking could be so much more powerful than simply bashing religion and astrology.

Yesss! This is exactly how I felt when I found this community.

comment by Xachariah · 2012-07-20T00:53:10.259Z · score: 4 (6 votes) · LW · GW

I fell in love with Avatar: The Last Airbender for its great storytelling and its combination of intelligence and idealism.

I don't want to lose the hope/idealism/inner happiness that makes me able to in-ironically enjoy Disney and Pixar and Avatar

I'm not sure about Disney, but you should still be able to enjoy Avatar. Avatar (TLA and Korra) is in many ways a deconstruction of magical worlds. They take the basic premise of kung-fu magic and let it propagate to its logical conclusions. The TLA war was enabled by rapid industrialization, once one nation realized it could harness its benders' violation of the laws of thermodynamics for energy. The premise of S1 Korra is exploring social inequality in the presence of randomly distributed magical powers.

In these ways, Avatar is less Harry Potter and more HPMoR.

comment by OnTheOtherHandle · 2012-07-20T17:54:45.326Z · score: 0 (2 votes) · LW · GW

Honestly, I was disappointed with the ending of Season 1 Korra: (rot13)

Nnat zntvpnyyl tvirf Xbeen ure oraqvat onpx nsgre Nzba gbbx vg njnl, naq gurer ner ab creznarag pbafrdhraprf gb nalguvat.

I'm not necessarily idealistic enough to be happy with a world that has no consequences or really difficult choices; I'm just not cynical enough to find misanthropy and defeatism cool. That's why children's entertainment appeals to me - while it can be overly sugary-sweet, adult entertainment often seems to be both narrow and shallow, and at the same time cynical. Outside of science fiction, there doesn't seem to be much adult entertainment that's about things I care about - saving the world, doing something big and important and good.

ETA: What Zach Weiner makes fun of here - that's what I'm sick of. Not just misanthropy and undiscriminating cynicism, but glorifying it as the height of intelligence. LessWrong seemed very pleasantly different in that sense.

comment by Bugmaster · 2012-07-26T22:17:17.748Z · score: 1 (1 votes) · LW · GW

I agree; I found the ending very disappointing, as well.

The authors throw one of the characters into a very powerful personal conflict, making it impossible for the character to deny the need for a total accounting and re-evaluation of the character's entire life and identity. The authors resolve this personal conflict about 30 seconds later with a Deus Ex Machina. Bleh.

comment by Nornagest · 2012-07-20T18:32:39.841Z · score: 0 (0 votes) · LW · GW

Are you sure that's rot13? It's generating gibberish in two different decoders for me, although I'm pretty sure I know what you're talking about anyway.

ETA: Yeah, looks like a shift of three characters right.

ETA AGAIN: Fixed now, thanks.

comment by OnTheOtherHandle · 2012-07-21T01:21:26.104Z · score: 0 (0 votes) · LW · GW

Sorry, I dumped it into Braingle and forgot to change the setting.

comment by Xachariah · 2012-07-20T22:32:27.143Z · score: -1 (1 votes) · LW · GW

Nnat zntvpnyyl tvirf Xbeen ure oraqvat onpx nsgre Nzba gbbx vg njnl, naq gurer ner ab creznarag pbafrdhraprf gb nalguvat.

V gubhtug vg jnf irel rssrpgvir. Gubhtu irvyrq fb xvqf jba'g pngpu vg, univat gur qnevat gb fubj n znva punenpgre pbagrzcyngvat naq nyzbfg nggrzcgvat fhvpvqr jnf n terng jnl gb pybfr gur nep. Gurer'f nyernql rabhtu 'npgvba' pbafrdhraprf qhr gb gur eribyhgvba, fb vg'f avpr onynapvat bhg univat gur irel raq or gur erfhygvat punatrf gb Xbeen'f punenpgre. Jura fur erwrpgf fhvpvqr nf na bcgvba, fur ernyvmrf gung fur ubyqf vagevafvp inyhr nf n uhzna orvat engure guna nf na Ningne. Cyhf nf bar bs gur ener srznyr yrnqf va puvyqera'f gryrivfvba, gur qenzngvp pyvznk bs gur fgbel orvat gur qr-bowrpgvsvpngvba bs gur srznyr yrnq vf uhtr. Nyfb gur nagv-fhvpvqr zrffntr orvat gung onq thlf pbzzvg zheqre/fhvpvqr naq gur tbbq thlf qba'g vf tbbq gb svavfu jvgu. V'z irel fngvfsvrq jvgu gurz raqvat vg gung jnl.

Znal fubjf raq jvgu jvgu ovt onq orvat orngra. Fubjf gung cergraq gb or zngher unir cebgntbavfgf qvr ng gur raq. Ohg Xbeen'f raqvat vf bar bs gur bayl gung fgevxrf zr nf npghnyyl zngher, orpnhfr vg'f qverpgyl n zbeny/cuvybfbcuvpny ceboyrz ng gur raq.

comment by Desrtopa · 2012-07-21T01:51:46.610Z · score: 0 (2 votes) · LW · GW

V funerq BaGurBgureUnaqyr'f qvfnccbvagzrag jvgu gur raqvat, naq V jnfa'g irel vzcerffrq jvgu Xbeen'f rzbgvbany erfbyhgvba ng gur raq. Fur uvgf n anqve bs qrcerffvba, frrzvatyl pbagrzcyngrf fhvpvqr, naq gura... rirelguvat fhqqrayl erfbyirf vgfrys. Fur trgf ure oraqvat onpx, jvgubhg nal rssbeg be cynaavat, naq jvgu ab zber fvtavsvpnag punenpgre qrirybczrag guna univat orra erqhprq gb qrfcrengvba. Gur Ovt Onq vf xvyyrq ol fbzrbar ryfr juvyr gur cebgntbavfgf' nggragvba vf ryfrjurer, naq Xbeen tnvaf gur novyvgl gb haqb nyy gur qnzntr ur pnhfrq va gur svefg cynpr. Gur fbpvrgny vffhrf sebz juvpu ur ohvyg uvf onfr bs fhccbeg jrer yrsg hanqqerffrq, ohg jvgubhg n pyrne nirahr gb erfbyir gurz nf n pbagvahngvba bs gur qenzngvp pbasyvpg.

Vs Xbeen unq orra qevira gb qrfcrengvba, naq nf n erfhyg, frnepurq uneqre sbe fbyhgvbaf naq sbhaq bar, V jbhyq unir sbhaq gung n ybg zber fngvfslvat. Gung'f bar bs gur ernfbaf V engr gur raqvat bs Ningne: Gur Ynfg Nveoraqre uvture guna gung bs gur svefg frnfba bs Xbeen. Vg znl unir orra vanqrdhngryl sberfunqbjrq naq orra fbzrguvat bs n Qrhf Rk Znpuvan, ohg ng yrnfg Nnat qrnyg jvgu n fvghngvba jurer ur jnf snprq jvgu bayl hanpprcgnoyr pubvprf ol frrxvat bgure nygreangvirf, svaqvat, naq vzcyrzragvat bar. Ohg Xbeen'f ceboyrzf jrer fbyirq, abg ol frrxvat fbyhgvbaf, ohg ol pbzvat va gbhpu jvgu ure fcvevghny fvqr ol ernpuvat ure rzbgvbany ybj cbvag.

Jung Fcvevg!Nnat fnvq unf erny jbeyq gehgu gb vg. Crbcyr qb graq gb or zber fcvevghny va gurve ybjrfg naq zbfg qrfcrengr pvephzfgnaprf. Ohg engure guna orvat fbzrguvat gb ynhq, V guvax guvf ercerfragf n sbez bs tvivat hc, jurer crbcyr ghea gb gur fhcreangheny sbe fbynpr be ubcr orpnhfr gurl qba'g oryvrir gurl pna fbyir gurve ceboyrzf gurzfryirf. Fb nf erfbyhgvbaf bs punenpgre nepf tb, V gubhtug gung jnf n cerggl onq bar.

Nyy va nyy V jnf n sna bs gur frevrf, ohg gur raqvat haqrefubg zl rkcrpgngvbaf.

comment by OnTheOtherHandle · 2012-07-21T01:30:17.267Z · score: 0 (2 votes) · LW · GW

Gung'f na vagrerfgvat jnl gb chg vg, naq V guvax V'z unccvre jvgu gur raqvat orpnhfr bs gung. Ubjrire, V jnf rkcrpgvat Frnfba Gjb gb or Xbeen'f wbhearl gbjneq erpbirel (rvgure culfvpny be zragny be obgu) nsgre Nzba gbbx njnl ure oraqvat. Vg'f abg gung V qba'g jnag ure gb or jubyr naq unccl; vg'f whfg gung vg frrzrq gbb rnfl. V gubhtug Nzba/Abngnx naq Gneybpx'f fgbel nep jnf zhpu zber cbjreshy. Va snpg, gurve zheqre/fhvpvqr frrzrq gb unir fb zhpu svanyvgl gung V svtherq vg zhfg or gur raq bs gur rcvfbqr hagvy V ernyvmrq gurer jrer fvk zvahgrf yrsg.

Va bgure jbeqf, vg'f terng gung gur fgbel yraqf vgfrys gb gur vagrecergngvba gung vg jnf nobhg vagevafvp jbegu nf n uhzna orvat qvfgvapg sebz bar'f cbjref, ohg gurl unq n jubyr frnfba yrsg gb npghnyyl rkcyvpvgyl rkcyber gung. Nnat'f wbhearl jnf nobhg yrneavat gb fgbc ehaavat njnl naq npprcg gur snpg gung ur vf va snpg gur Ningne, naq ur pna'g whfg or nal bgure xvq naq sbetrg nobhg uvf cbjre naq erfcbafvovyvgl. Xbeen'f wbhearl jnf gb or nobhg npprcgvat gung whfg orpnhfr fur vf gur Ningne, naq fur ybirf vg naq qrevirf zrnavat sebz vg, qbrfa'g zrna fur'f abguvat zber guna n ebyr gb shysvyy. Vg sryg phg fubeg. Nnat tnir vg gb Xbeen; fur qvqa'g svaq vg sbe urefrys.

comment by Alicorn · 2012-07-20T01:06:56.163Z · score: 0 (2 votes) · LW · GW

randomly distributed magical powers

They run strongly in families (although it's not clear exactly how, since neither of Katara's parents appears to have been a waterbender). It's not really random.

comment by Xachariah · 2012-07-20T03:28:32.793Z · score: 1 (3 votes) · LW · GW

You are correct. I wouldn't consider it much different from personality. It's part heritable, part environmental and upbringing, and part randomness.

Now you've got me wondering whether philosophers in the Avatar universe debate whether one's element/bending is a matter of nature vs. nurture.

comment by OnTheOtherHandle · 2012-07-20T18:47:07.109Z · score: 0 (2 votes) · LW · GW

Now I want an ATLA fanfic infused with Star Trek-style pensive philosophizing. :D

I would argue that it has even more potential than HP for a rationalist makeover. Aang stays in the iceberg and Sokka saves the planet?

comment by Solvent · 2012-07-28T01:20:30.725Z · score: 1 (1 votes) · LW · GW

I wonder why it is that so many people get here from TV Tropes.

Also, you're not the only one to give up on their first LW account.

comment by shokwave · 2012-07-28T18:50:07.052Z · score: 4 (4 votes) · LW · GW

I wonder why it is that so many people get here from TV Tropes.

Possibly: TV Tropes approaches fiction the way LessWrong approaches reality.

comment by Solvent · 2012-07-29T01:18:07.144Z · score: 0 (0 votes) · LW · GW

How do you mean?

comment by OnTheOtherHandle · 2012-07-29T03:32:32.955Z · score: 0 (0 votes) · LW · GW

At a guess, I would say: looking for recurring patterns in fiction, and extrapolating principles/tropes. It's a very bottom-up approach to literature, taking special note of subversions, inversions, aversions, etc, as opposed to the more top-down academic study of literature that loves to wax poetic about "universal truths" while ignoring large swaths of stories (such as Sci Fi and Fantasy) that don't fit into their grand model. Quite frankly, from my perspective, it seems they tend to force a lot of stories into their preferred mold, falling prey to True Art tropes.

comment by [deleted] · 2012-07-29T12:22:02.324Z · score: 2 (2 votes) · LW · GW

I wonder why it is that so many people get here from TV Tropes.

Because it uses as many examples from HP:MoR as it possibly could?

comment by iceman · 2012-07-19T22:46:18.553Z · score: 1 (3 votes) · LW · GW

I adore Pixar and many Disney movies for the sweetness and heart.

Have you seen the new My Little Pony show? It's really good. It's sweet without being twee.

comment by hankx7787 · 2012-07-19T10:55:05.324Z · score: 1 (3 votes) · LW · GW

I further learned that my brain was modular, and the bits of me that I choose to call "I" don't constitute everything. My own brain could sabotage the values and ideals that "I" hold so dearly. For a long time I struggled with the idea that everything I believed in and loved was fake, because I couldn't force my body to actually act accordingly. Did I value human life? Why wasn't I doing everything I possibly could to save lives, all the time? Did I value freedom and autonomy and gender equality? Why could I not help sometimes being attracted to domineering jerks?

It took me a while to accept that the newly-evolved, conscious, abstractly-reasoning, self-reflecting "I" simply did not have the firepower to bully ancient and powerful urges into submission. It took me a while to accept that my values were not lies simply because my monkey brain sometimes contradicted them. The "I" in my brain does not have as much power as she would like; that does not mean she doesn't exist.

I've been through this kind of thing before, and Less Wrong did nothing for me in this respect (although Less Wrong is awesome for many other reasons). Reading Ayn Rand on the other hand made all the difference in the world in this respect, and changed my life.

comment by OnTheOtherHandle · 2012-07-19T17:01:22.536Z · score: 3 (3 votes) · LW · GW

I haven't read Ayn Rand, but those who do seem to talk almost exclusively about the politics, and I just can't work up the energy to get too excited about something I have such little chance of affecting. Would you mind telling me where/how Ayn Rand discussed evolutionary psychology or modular minds? I'm curious now. :)

comment by OrphanWilde · 2012-07-19T17:32:22.965Z · score: 3 (5 votes) · LW · GW

She doesn't, is the short answer.

She does discuss, however, the integration of personal values into one's philosophical system. I was struggling with a possibly similar issue; I had previously regarded rationalism as an end in itself. Emotions were just baggage that had to be overcome in order to achieve a truly enlightened state. If this sounds familiar to you, her works may help.

The short version: You're a human being. An ethical system that demands you be anything else is fatally flawed; there is no universal ethical system: what is ethical for a rabbit is not ethical for a wolf. It's necessary for you to live, not as a rabbit, not as a rock, not as a utility or paperclip maximizer, but as a human being. Pain, for example, isn't to be denied - for to do so is as sensible as denying a rock - but experienced as a part of your existence. (That you shouldn't deny pain is not the same as that you should seek it; it is simply a statement that it's a part of what you are.)

Objectivism, the philosophy she founded, is named for the claim that ethics are objective: not subjective, which is to say, whatever you want them to be; not universal, which is to say, a single ethics system in the whole universe that applies equally to rocks, rabbits, mice, and people; but objective, which is to say, existing as a definable property for a given subject, given certain preconditions (ethical axioms; she chose "Life" as her ethical axiom).

comment by OnTheOtherHandle · 2012-07-19T19:45:38.060Z · score: 4 (4 votes) · LW · GW

I don't know that I would call that "objective." I mean, the laws of physics are objective because they're the same for rabbits and rocks and humans alike.

I honestly don't trust myself to go much more meta than my own moral intuitions. I just try not to harm people without their permission or deceive/manipulate them. Yes, this can and will break down in extreme hypothetical scenarios, but I don't want to insist on an ironclad philosophical system that would cause me to jump to any conclusions on, say, Torture vs. Dust Specks just yet. I suspect that my abstract reasoning will just be nuts.

My understanding of morality is basically that we're humans, and humans need each other, so we worked out ways to help one another out. Our minds were shaped by the same evolutionary processes, so we can agree for the most part. We've always seemed to treat those in our in-group the same way; it's just that those we included in the in-group changed. Slowly, women were added, and people of different races/religions, etc.

comment by thomblake · 2012-07-19T20:33:08.249Z · score: 1 (1 votes) · LW · GW

I don't know that I would call that "objective."

It's a sticky business, and different ethicists will frame the words in different ways. On one view, objective includes "It's true even if you disagree" and subjective includes "You can make up whatever you want". On another, objective includes "It's the same for everybody" and subjective includes "It's different for different people". The first distinction better matches the usual meaning of 'objective', and the second distinction better matches the usual meaning of 'subjective', so I think the terms were just poorly chosen as different sides of a distinction.

Because of this, my intuition these days is to say that ethics is both subjective and objective, or "subjectively objective" as Eliezer has said about probability. Though I'd like it if we switched to using "subject-sensitive" rather than "subjective", as is now commonly used in Epistemology.

comment by TheOtherDave · 2012-07-19T20:53:45.909Z · score: 1 (1 votes) · LW · GW

So, this isn't the first time I've seen this distinction made here, and I have to admit I don't get it.

Suppose I'm studying ballistics in a vacuum, and I'm trying to come up with some rules that describe how projectiles travel, and I discover that the trajectory of a projectile depends on its mass.

I suppose I could conclude that ballistics is "subjectively objective" or "subject-sensitive," since after all the trajectory is different for different projectiles. But this is not at all a normal way of speaking or thinking about ballistics. What we normally say is that ballistics is "objective" and it just so happens that the proper formulation of objective ballistics takes projectile mass as a parameter. Trajectory is, in part, a function of mass.

When we say that ethics is "subject-sensitive" -- that is, that what I ought to do depends on various properties of me -- are we saying it's different from the ballistics example? Or is this just a way of saying that we haven't yet worked out how to parametrize our ethics to take into account differences among individuals?

Similarly, while we acknowledge that the same projectile will follow a different trajectory in different environments, and that different projectiles of the same mass will follow different trajectories in different environments, we nevertheless say that ballistics is "universal", because the equations that predict a trajectory can take additional properties of the environment and the projectile as parameters. Trajectory is, in part, a function of environment.

When we say that ethics is not universal, are we saying it's different from the ballistics example? Or is this just a way of saying that we haven't yet worked out how to parametrize our ethics to take into account differences among environments?

comment by drethelin · 2012-07-22T08:44:59.219Z · score: 0 (0 votes) · LW · GW

I think it's an artifact of how we think about ethics. It doesn't FEEL like a bullet should fly the same exact way as an arrow or a rock, but when you consult your moral intuitions, they seem like they should obviously apply to everyone. Maybe because we learn about throwing things and motion through endlessly iterated trial and error, while we learn about morality from simple commands from our parents and teachers, we come to think about the two in different ways.

comment by TheOtherDave · 2012-07-22T16:50:54.642Z · score: 1 (1 votes) · LW · GW

So, I'm not quite sure I understood you, but you seem to be explaining how someone might come to believe that ethics are universal/objective in the sense of right action not depending on the actor or the situation at all, even at relatively low levels of specification like "eat more vegetables" or whatever.

Did I get that right?

If so... sure, I can see where someone whose moral intuitions primarily derive from obeying the commands of others might end up with ethics that work like that.

comment by hankx7787 · 2012-07-20T01:46:19.925Z · score: 0 (2 votes) · LW · GW

"the proper formulation of objective ballistics takes projectile mass as a parameter"

I think the best analogy here is to say something like, the proper formulation of decision theory takes terminal values as a parameter. Decision theory defines a "universal" optimum (that is, universal "for all minds"... presumably anyway), but each person is individually running a decision theory process as a function of their own terminal values - there is no "universal" terminal value, for example if I could build an AI then I could theoretically put in any utility function I wanted. Ethics is "universal" in the sense of optimal decision theory, but "person dependent" in the sense of plugging in one's own particular terminal values - but terminal values and ethics are not necessarily "mind-dependent", as explained here.

comment by TheOtherDave · 2012-07-20T17:11:09.817Z · score: 0 (0 votes) · LW · GW

I would certainly agree that there is no terminal value shared by all minds (come to that, I'm not convinced there are any terminal values shared by all of any given mind).

Also, I would agree that when figuring out how I should best apply a value-neutral decision theory to my environment I have to "plug in" some subset of information about my own values and about my environment.

I would also say that a sufficiently powerful value-neutral decision theory instructs me on how to optimize any environment towards any value, given sufficiently comprehensive data about the environment and the value. Which seems like another way of saying that decision theory is objective and universal, in the same sense that ballistics is.

How that relates to statements about ethics being universal, objective, person-dependent, and/or mind-dependent is not clear to me, though, even after following your link.

comment by hankx7787 · 2012-07-19T20:25:32.328Z · score: 1 (1 votes) · LW · GW

See this comment regarding this common confusion about 'objective'...

comment by hankx7787 · 2012-07-19T19:39:06.577Z · score: 0 (0 votes) · LW · GW

Surprisingly, this isn't a bad short explanation of her ethics.

I've been reading a lot of Aristotle lately (I highly recommend Aristotle by Randall, for anyone who is into that kind of thing), and Rand mostly just brought Aristotle's philosophy into the 20th century - of course, it's now the 21st century, so she is a little dated at this point. For example, various people offered to fully pay for cryonics for Rand when she was close to death, but for unknown reasons she declined, very sadly (if you're looking for someone to take her philosophy into the 21st century, you will need to talk to, well... ahem... me).

It's important to mention that politics is only one dimension of her philosophy and of her writing (although, naturally, it's the subject that all the pundits and mind-killed partisans obsess over) - and really it is the least important, since it is the most derivative of all of her other more fundamental philosophical ideas on metaphysics, epistemology, man's nature, and ethics.

comment by OrphanWilde · 2012-07-19T19:57:28.244Z · score: 1 (3 votes) · LW · GW

I'll willingly confess to not being interested in Aristotle in the least. Philosophy coursework cured me of interest in Greek philosophy. Give me another twenty years and I might recover from that.

Have you read TVTropes' assessment of Objectivism? It's actually the best summary I've ever read, as far as the core of the philosophy goes.

comment by hankx7787 · 2012-07-19T20:16:11.201Z · score: 0 (0 votes) · LW · GW

No I haven't! That was quite good, thanks.

By the way, I fully share your (and Eliezer's) sentiment in regard to academic philosophy. I took a "philosophy of mind" course in college, thinking it would be extremely interesting, and I ended up dropping the class in short order. It was only after a long study of Rand that I ever became interested in philosophy again, once I realized I had a sane basis on which to proceed.

comment by ViEtArmis · 2012-07-19T20:28:44.261Z · score: 1 (1 votes) · LW · GW

Specifically, her non-fiction work (if you find that sort of thing palatable) provides a lot more concrete discussion of her philosophy.

Unfortunately, Ayn Rand is a little too... abrasive... for many people who don't agree entirely with her. She has a lot of resonant points that get rejected because of all the other stuff she presents along with them.

comment by Jayson_Virissimo · 2012-07-19T08:14:50.151Z · score: 1 (1 votes) · LW · GW

Welcome to Less Wrong! I would say something about a rabbit hole, but it would be pointless, since you already seem to be descending at quite a high speed.

comment by MBlume · 2012-07-19T23:15:45.845Z · score: 0 (2 votes) · LW · GW

We seem to have a lot of Airbender fans here at LW -- Alicorn was the one who started me watching it, and I know SarahC and rubix are fans.

Welcome =)

comment by RobertLumley · 2012-07-19T21:45:50.529Z · score: 0 (2 votes) · LW · GW

I adore Pixar and many Disney movies for the sweetness and heart.

Did you see Brave? I thought it was great.

comment by OnTheOtherHandle · 2012-07-21T01:32:38.214Z · score: 0 (0 votes) · LW · GW

I did. :) I was so happy to see a mother-daughter movie with no romantic angle (other than the happily married king and queen).

comment by RobertLumley · 2012-07-21T01:57:11.528Z · score: 0 (0 votes) · LW · GW

I thought she was going to have to end up married at the end and I was so. angry. Brave ranked up there with Mulan in terms of kids movies that I think actually teach kids good lessons, which is a pretty high honor in my book.

comment by Desrtopa · 2012-07-21T02:50:54.658Z · score: 8 (8 votes) · LW · GW

Personally, for their first female protagonist, I felt like Pixar could have done a lot better than a Rebellious Princess. It's cliche, and I would have liked to see them exercise more creativity, but besides that, I think the instructive value is dubious. Yes, it's awfully burdensome to have one's life direction dictated to an excessive degree by external circumstances and expectations. But on the other hand, Rebellious Princesses, including Merida, tend to rail against the unfairness of their circumstances without stopping to consider that they live in societies where practically everyone has their lives dictated by external circumstances, and there's no easy transition to a social model that allows differently.

Merida wants to live a life where she's free to pursue her love of archery and riding, and to get married when and to whom she wants? Well, she'd be screwed if she were a peasant, since all the necessary house and field work wouldn't leave her the time; her family wouldn't own a horse, and if they did, it would be a ploughhorse she wouldn't be able to take out for pleasure riding; and she'd be married off at an early age out of economic rather than political necessity. And she'd be similarly out of luck if her parents were merchants, or craftsmen, or practically anyone else. Like most Rebellious Princesses, she has modern expectations of entitlement in a society where those expectations don't make sense.

It sucks to be told you can't do something you love because of societal preconceptions; "You shouldn't try to be a mathematician, you're a girl," "You're a black ghetto kid, what are you doing aiming to be a businessman?" etc. But Rebellious Princesses are in a situation more analogous to "You might want not to have to go to school and be able to spend your time partying with friends and maybe make a living drawing pictures of cartoons you like, but there's no social structure to support you if you try to do that."

By the end of the movie, Merida and her mother birepbzr gurve cevqr naq zhghny zvfhaqrefgnaqvat, naq Zrevqn'f zbgure yrneaf gb frr gur vffhr sebz ure Zrevqn'f cbvag bs ivrj naq abg sbepr ure vagb n fhqqra zneevntr sbe cbyvgvpny rkcrqvrapl, juvyr Zrevqn yrneaf... gung fur ybirf ure zbz rabhtu gb abg jnag ure gb or ghearq vagb n orne? Fhccbfvat gur bgure gevorf jrera'g cercnerq gb pnyy bss gur zneevntr, naq fur jnf fghpx pubbfvat orgjrra n cebonoyl haunccl zneevntr naq crnpr, be ab zneevntr naq jne, jbhyq fur unir pubfra nal qvssreragyl guna fur qvq ng gur fgneg bs gur zbivr?

This probably all sounds like I disapproved of the movie a lot more than I really did, but I definitely wouldn't rank it alongside Mulan in terms of positive social message. Mulan wanted to bring her family honor and keep her father safe, so she went and performed a service for her society which demanded great perseverance and courage, which her society neither expected nor encouraged her to perform. Merida wasn't happy with the expectations and duties her society placed on her, so she tried to duck out of them, nearly caused a disaster, and ultimately got what she wanted without having to make a hard choice between personal satisfaction and doing her part for her society.

comment by Bugmaster · 2012-07-26T23:18:23.890Z · score: 3 (3 votes) · LW · GW

I thought that Brave was actually a somewhat subversive movie -- perhaps inadvertently so. The movie is structured and presented in a way that makes it look like the standard Rebellious Princess story, with the standard feminist message. The protagonist appears to be a girl who overcomes the Patriarchy by transgressing gender norms, etc. etc. This is true to a certain extent, but it's not the main focus of the movie.

Instead, the movie is, at its core, a very personal story of a child's relationship with her parent, the conflict between love and pride, and the difference between having good intentions and being able to implement them into practice. By the end of the movie, both Merida and her mother undergo a significant amount of character development. Their relationship changes not because the social order was reformed, or because gender norms were defeated -- but because they have both grown as individuals.

Thus, Brave ends up being a more complex (and IMO more interesting) movie than the standard "Rebellious Princess" cliche would allow. In Brave, there are no clear villains; neither Merida nor her mother are wholly in the right, or wholly in the wrong. Contrast this with something like Disney's Rapunzel, where the mother is basically a glorified plot device, as opposed to a full-fledged character.

comment by wedrifid · 2012-07-27T00:28:07.331Z · score: 0 (2 votes) · LW · GW

In Brave, there are no clear villains; neither Merida nor her mother are wholly in the right, or wholly in the wrong.

How boring. Were there at least some monsters to fight, or an overtly evil usurper to slay? What on earth remains as motivation to watch this movie?

comment by Alicorn · 2012-07-27T00:53:33.805Z · score: 1 (3 votes) · LW · GW

The antagonist is the rapey cultural artifact of forced marriage. Vg vf fynva.

comment by wedrifid · 2012-07-27T02:37:07.201Z · score: 2 (2 votes) · LW · GW

The antagonist is the rapey cultural artifact of forced marriage.

There should be a word for forcing other people to have sex (with each other, not yourself). The connotations of calling a forced arranged marriage 'rapey' should be offensive to the victims. It is grossly unfair to imply that the wife is a 'rapist' just because her husband's father forced his son to marry her for his family's political gain. (Or vice-versa.)

comment by Alicorn · 2012-07-27T08:05:21.219Z · score: 1 (1 votes) · LW · GW

I wasn't specifying who was being rapey. Just that the entire setup was rapey.

comment by wedrifid · 2012-07-27T08:07:34.616Z · score: 2 (2 votes) · LW · GW

I wasn't specifying who was being rapey. Just that the entire setup was rapey.

That was clear and my reply applies.

(The person to whom the term applies is the person who forces the marriage. Rape(y/ist) would also apply if that person was also a participant in the marriage.)

comment by Bugmaster · 2012-07-27T02:05:18.882Z · score: 1 (1 votes) · LW · GW

As per my post above, I'd argue that the "rapey cultural artifact of forced marriage" is less of a primary antagonist, and more of a bumbling comic relief character.

comment by wedrifid · 2012-07-27T02:01:20.674Z · score: 0 (0 votes) · LW · GW

The antagonist is the rapey cultural artifact of forced marriage. Vg vf fynva.

Cute rot13. I never would have predicted that in a Pixar animation!

comment by Desrtopa · 2012-07-27T02:59:32.203Z · score: 0 (0 votes) · LW · GW

There is an evil monster to fight, of a more literal sort, but it would be a bit of a stretch to call it the primary antagonist.

comment by Vaniver · 2012-07-21T06:05:25.801Z · score: 3 (5 votes) · LW · GW

Upvoted. My thoughts on Brave are over here, but basically Merida is actually a really dark character, and it's sort of sickening that she gets away with everything she does.

Interesting enough to repeat is my suggestion for a better setting:

Consider another movie they could have made, Paisley, about a Scottish girl on the cusp of womanhood who gets a job in one of the first textile mills and is able to support herself and live independently through hard work. This story has the supreme virtue of having actually happened: arranged marriage was not done away with because a preteen girl complained that she wasn't ready; it was done away with because people got richer and could afford something better.

Of course, it's difficult to make a movie glorifying sweatshop labor, whereas princesses are distant enough to be a tame example.

comment by OnTheOtherHandle · 2012-07-21T03:39:08.989Z · score: 1 (1 votes) · LW · GW

I understand your critique, and I mostly agree with it. I actually would have been even happier if Merida had bitten the bullet and married the winner - but for different reasons. She would have married because she loved her mother and her kingdom, and understood that peace must come at a cost - it would still very much count as a movie with no romantic angle. She would have been like Princess Yue in Avatar, a character I had serious respect for. When Yue was willing to marry Han for duty, and then was willing to fnpevsvpr ure yvsr gb orpbzr gur zbba, that was the first time I said to myself, "Wow, these guys really do break convention."

Merida would have been a lot more brave to accept the dictates of her society (but for the right reasons), or to find a more substantial compromise than just convincing the other lords to yrg rirelbar zneel sbe ybir. But I still think it was a sweet movie.

comment by Desrtopa · 2012-07-21T05:07:09.884Z · score: 2 (2 votes) · LW · GW

I agree that it was a sweet movie, and overall I enjoyed watching it. The above critique is a lot harsher than my overall impression. But when I heard that Pixar was making their first movie with a female lead, I expected a lot out of them and thought they were going to try for something really exceptional in both character and message, and it ended up undershooting my expectations on those counts.

I can sympathize with the extent to which simply having competent important female characters with relatable goals is a huge step forward for a lot of works. Ironically, I don't think I really grasped how frustrating the lack of them must be until I started encountering works which are supposed to be some sort of wish fulfillment for guys. There are numerous anime and manga, particularly harem series, which are full of female characters graced with various flavors of awesomeness, without any significant male protagonists other than the lead who's a total loser, and I find it infuriating when the closest thing I have to a proxy in the story is such a lousy and overshadowed character. It wasn't until I started encountering works like those that it hit me how painful it must be to be hard pressed to find stories that aren't like that on some level.

comment by OnTheOtherHandle · 2012-07-22T01:00:20.607Z · score: 3 (3 votes) · LW · GW

One thing that disappointed me about this whole story was that it was the one and only Pixar movie that was set in the past. Pixar has always been about sci fi, not fantasy, and its works have been set in contemporary America (with Magic Realism), alternate universes, or the future. Did "female protagonist" pattern-match so strongly with "rebellious medieval princess" that even Pixar didn't do anything really unusual with it?

Even though I was happy Merida wasn't rebelling because of love, it seems like they stuck with the standard old-fashioned feminist story of resisting an arranged marriage, when they could have avoided all of that in a work set in the present or the future, when a woman would have more scope to really be brave.

All in all, it seems like their father-son movie was a lot stronger than their mother-daughter movie.

comment by Nornagest · 2012-07-21T05:26:00.564Z · score: 1 (1 votes) · LW · GW

I don't think "This Loser Is You" is the right trope for that. Actually, I don't think TV Tropes has the right trope for that; as best I can tell, harem protagonists are the way they are not because they're supposed to stand for the audience in a representative sort of way but because they're designed as a receptacle for the audience to pour their various insecurities into. They can display negative traits, because that's assumed to make them more sympathetic to viewers that share them. But they can't display negative traits strong enough to be grounds for actual condemnation, or to define their characters unambiguously; you'll never see Homer Simpson as a harem lead. And they can't show positive traits except for a vague agreeableness and whatever supernatural powers the plot requires, because that breaks the pathos. Yes, Tenchi Muyo, that's you I'm looking at.

More succinctly, we're all familiar with sex objects, right? Harem anime protagonists are sympathy objects.

comment by Desrtopa · 2012-07-21T05:42:49.452Z · score: 0 (0 votes) · LW · GW

I agree that This Loser Is You isn't quite the right trope. There's a more recent launch, Loser Protagonist, which doesn't quite describe it either, but uses the same name as I did when I tried to put the trope which I thought accurately described it through the YKTTW ages ago.

If I understand what you mean by "sympathy objects," I think we have the same idea in mind. I tend to think of them as Lowest Common Denominator Protagonists, because they lack any sort of virtue or achievement that would alienate them from the most insecure or insipid audience members.

comment by RobertLumley · 2012-07-21T03:19:01.854Z · score: 1 (1 votes) · LW · GW

That's a very fair critique. A few things though:

First, you might want to put that in ROT13 or add a [SPOILER](http://lh5.ggpht.com/_VZewGVtB3pE/S5C8VF3AgJI/AAAAAAAAAYk/5LJdTCRCb8k/eliezer_yudkowskyjpg_small.jpg) tag or something.

Zrevqn yrneaf... gung fur ybirf ure zbz rabhtu gb abg jnag ure gb or ghearq vagb n orne?

Merida learned to value her relationship with her mother, which I think a lot of kids need to hear going into adolescence. When you put it this way, it doesn't seem nearly as trite as your phrasing makes it sound.

Merida wants to live a life where she's free to pursue her love of archery and riding, and get married when and to whom she wants? Well she'd be screwed if she were a peasant etc.

Well yeah, but the answer to "society sucks and how can I fix it" isn't "oh it sucks for everyone and even more for others, I'll just sit down and shut up". (Not that you argue it is.)

From TV Tropes:

If she's not the hero, quite often she's the hero's love interest. This will sometimes invoke Marry for Love not only as another way for her to rebel, but to also get out of an Arranged Marriage

This is exactly why I thought Brave was good - it moved away from this trope. It wasn't "I don't love this person, I love this other person!", it was "I don't have to love/marry someone to be a competent and awesome person". She was the hero of her own story, and didn't need anyone else to complete her. That doesn't have to be true for everyone, but the counterpoint needs to be more present in society.

And I said it ranked up there. Not that it passed Mulan. :) It gets that honor by being literally one of only two movies I can think of that have a positive message in this respect. Although I will concede that I'm not familiar with very many kids' movies.

comment by Desrtopa · 2012-07-21T04:09:10.600Z · score: 4 (4 votes) · LW · GW

I edited my comment to rot13 the ending spoilers; I left in the stuff that's more or less advertised as the premise of the movie. You might want to edit your reply so that it doesn't quote the uncyphered text.

Merida learned to value her relationship with her mother, which I think a lot of kids need to hear going into adolescence. When you put it this way, it doesn't seem nearly as trite as your phrasing makes it sound.

I think that's a valuable lesson, but I felt like Brave's presentation of it suffered for the fact that Merida and her mother really only reconcile after Merida essentially gets her way about everything. Teenagers who feel aggrieved in their relationships with their parents and think that they're subject to pointless unfairness are likely to come away with the lesson "I could get along so much better with my parents if they'd stop being pointlessly unfair to me!" rather than "Maybe I should be more open to the idea that my parents have legitimate reasons for not being accommodating of all my wishes, and be prepared to cut them some slack."

A more well rounded version of the movie's approximate message might have been something like "Some burdensome social expectations and life restrictions have good reasons behind them and others don't, learn to distinguish between them so you can focus your effort on solving the right ones." But instead, it came off more like "Kids, you should love and appreciate your parents, at least when you work past their inclination to arbitrarily oppress you."

comment by OnTheOtherHandle · 2012-07-22T01:22:15.511Z · score: 1 (1 votes) · LW · GW

Now that I think about it, very few movies or TV shows actually teach that lesson. There are plenty of works of fiction that portray the whiney teenager in a negative light, and there are plenty that portray the unreasonable parent in a negative light, but nothing seems to change. It all plays out with the boring inevitability of a Greek tragedy.

comment by aaronsw · 2012-08-04T09:56:50.973Z · score: 31 (31 votes) · LW · GW

I'm Aaron Swartz. I used to work in software (including as a cofounder of Reddit, whose software powers this site) and now I work in politics. I'm interested in maximizing positive impact, so I follow GiveWell carefully. I've always enjoyed the rationality improvement stuff here, but I tend to find the lukeprog-style self-improvement stuff much more valuable. I've been following Eliezer's writing since before even the Overcoming Bias days, I believe, but have recently started following LW much more carefully after a couple of friends mentioned it to me in close succession.

I found myself wanting to post but don't have any karma, so I thought I'd start by introducing myself.

I've been thinking on-and-off about starting a LessWrong spinoff around the self-improvement stuff (current name proposal: LessWeak). Is anyone else interested in that sort of thing? It'd be a bit like the Akrasia Tactics Review, but applied to more topics.

comment by Jayson_Virissimo · 2012-08-05T08:50:08.978Z · score: 5 (5 votes) · LW · GW

I've been thinking on-and-off about starting a LessWrong spinoff around the self-improvement stuff (current name proposal: LessWeak). Is anyone else interested in that sort of thing? It'd be a bit like the Akrasia Tactics Review, but applied to more topics.

Instead of a spinoff, maybe Discussion should be split into more sections (one being primarily about instrumental rationality/self-help).

comment by kilobug · 2012-08-24T08:01:04.448Z · score: 2 (2 votes) · LW · GW

Topic-specific sections seem like a good idea to me. Some here may be interested in rationality/cognitive biases but not in AI, or space exploration, or cryonics, ...

This would also make it possible to lift "bans" like "no politics", if such discussion stays in a dedicated section and doesn't "pollute" the experience of those not interested in it.

comment by Jayson_Virissimo · 2012-08-24T08:57:53.049Z · score: 0 (0 votes) · LW · GW

I endorse this idea.

comment by ata · 2012-08-04T23:21:09.172Z · score: 3 (3 votes) · LW · GW

Yay, it is you!

(I've followed your blog and your various other deeds on-and-off since 2002-2003ish and have always been a fan; good to have you here.)

comment by Jonathan_Graehl · 2012-08-05T06:54:28.527Z · score: 2 (2 votes) · LW · GW

LessWeak - good idea. On the name: cute but I imagine it getting old. But it's not as embarrassing as something unironically Courage Wolf, like 'LiveStrong'.

comment by Emile · 2012-08-04T13:23:27.496Z · score: 2 (2 votes) · LW · GW

Welcome to LessWrong!

Apparently I used to comment on your blog back in 2004 - my, how time flies!

comment by the_sober_grudge · 2013-02-23T11:45:58.357Z · score: 0 (0 votes) · LW · GW

Reboot in peace, friend.

comment by Dahlen · 2012-07-18T21:10:02.047Z · score: 21 (21 votes) · LW · GW

'Twas about time that I decided to officially join. I discovered LessWrong in the autumn of 2010, and so far I've felt reluctant to actually contribute -- most people here have far more illustrious backgrounds. But I figured that there are sufficiently few ways in which I could show myself as a total ignoramus in an intro post, right?

I don't consider my gender, age and nationality to be a relevant part of my identity, so instead I'd start by saying I'm INTP. Extreme I (to the point of schizoid personality disorder), extreme T. Usually I have this big internal conflict going on between the part of me that wishes to appear as a wholly rational genius and the other part, who has read enough psychology and LW (you guys definitely deserve credit for this) to know I'm bullshitting myself big time.

My educational background so far is modest, a fact for which procrastination is the main culprit. I'm currently working on catching up with high school level math... so far I've only reviewed trigonometry, so I'm afraid I won't be able to participate in more technical discussions around here. Aside from a few Khan Academy videos, I'm still ignorant about probability; I did try to solve that cancer probability problem though, and when put like that into a word problem, I used Bayes' theorem intuitively. (Funny thing is, I still don't understand the magic behind it, even if I can apply it.) I know no programming beyond really elementary C++ algorithms; I have a pretty good grasp of high school physics, minus relativity and QM. I am seeking to do everything in my power to correct these shortcomings, and when/if I achieve results, I'll be happy to post my findings about motivation & procrastination on LW, if anyone is interested.
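The cancer problem mentioned above is the classic Bayes' theorem exercise. A minimal sketch of the calculation follows; the prevalence and test-accuracy numbers are assumed from the standard mammography version of the problem, not necessarily the exact variant referenced here:

```python
# Classic mammography screening problem (numbers assumed for illustration).
prior = 0.01        # P(cancer): 1% of women screened have breast cancer
sensitivity = 0.8   # P(positive test | cancer)
false_pos = 0.096   # P(positive test | no cancer)

# Bayes' theorem: P(cancer | positive) =
#   P(positive | cancer) * P(cancer) / P(positive)
p_positive = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # ~0.078: even after a positive test, cancer is unlikely
```

The point of the exercise is that the posterior stays low because the disease is rare, so most positive tests are false positives.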

That which I have in common with the rest of this community is a love for rational, intelligent and productive discussions. I'm hugely disappointed with the overwhelming majority of internet and RL debates. Many times I've found myself trying to be the voice of reason and pointing out flaws in people's reasoning, even when I agreed with the core idea, only to have them tell me that I'm being too analytical and that I should... what... close off my mind and stop noticing mistakes, right? So I come here seeking discussions with people who would listen to reason and facilitate intellectually fruitful debates.

I'm very eager to help spread the knowledge about cognitive biases and educate people in the art of good reasoning.

I'm also interested (although not necessarily well-versed, as mentioned above) in most topics people here are interested in -- everything concerning mathematics and science, as well as philosophy and the mind (which are, by comparison, my two strongest points).

There are quite a few ways in which I don't fit the typical LW mold, though, and I'm mentioning this so that I find out whether any of these are going to be problematic in our interaction.

  • For one, I'm not particularly interested in AI and transhumanism. Not opposed to, just indifferent. The only related topic which interests me is life extension research. In the eventuality that some people might try to change my mind about this from the get-go, as I've seen some do with other newbies, I know you probably have some very good arguments for your position, but hopefully nobody's going to mind one less potential AI enthusiast. My interests are spread thin enough as they are.
  • I seem to be significantly more left-leaning than the majority of folks here. I'm decidedly not dogmatic about it, though, and on occasion I speak out against heavily ideological discourse even when it has a central message that I agree with.
  • Kind of clueless and mathematically illiterate at this moment.

This has to be getting rather long, so I'll stop here, hoping that I've said everything that I believed to be relevant to an intro post.

comment by Swimmer963 · 2012-07-20T02:03:12.714Z · score: 2 (2 votes) · LW · GW

Welcome!

Many times I've found myself trying to be the voice of reason and pointing out flaws in people's reasoning, even when I agreed with the core idea, only to have them tell me that I'm being too analytical and that I should... what... close off my mind and stop noticing mistakes, right?

That's interesting... I don't think I've ever had someone respond to my pointing out flaws in this way. I've had people argue back plenty of times, but never tell me that we shouldn't be arguing about it. Can you give some examples of topics where this has happened? I would be curious what kind of topics engender this reaction in people.

comment by juliawise · 2012-07-20T16:00:12.951Z · score: 10 (10 votes) · LW · GW

I've seen this happen where one person enjoys debate/arguing and another does not. To one person it's an interesting discussion, and to the other it feels like a personal attack. Or, more commonly, I've seen onlookers get upset watching such a discussion, even if they don't personally feel targeted. Specifically, I'm remembering three men loudly debating about physics while several of their wives left the room in protest because it felt too argumentative to them.

Body language and voice dynamics can affect this a lot, I think - some people get loud and frowny when they're excited/thinking hard, and others may misread that as angry.

comment by Nornagest · 2012-07-20T18:27:15.063Z · score: 5 (5 votes) · LW · GW

I ended up having to include a disclaimer in the FAQ for an older project of mine, saying that the senior staff tends to get very intense when discussing the project and that this doesn't indicate drama on our part but is actually friendly behavior. That was a text channel, though, so body dynamics and voice wouldn't have had anything to do with it. I think a lot of people just read any intense discussion as hostile, and quality of argument doesn't really enter into it -- probably because they're used to an arguments-as-soldiers perspective.

comment by TheOtherDave · 2012-07-20T18:49:51.253Z · score: 6 (6 votes) · LW · GW

We used to say of two friends of mine that "They don't so much toss ideas back and forth as hurl sharp jagged ideas directly at one another's heads."

comment by gwern · 2012-07-21T02:28:12.788Z · score: 5 (5 votes) · LW · GW

"Wise words are like arrows flung at your forehead. What do you do? Why, you duck of course."

--Steven Erikson, House of Chains (2002)

comment by Dahlen · 2012-07-21T14:37:59.182Z · score: 5 (5 votes) · LW · GW

Oh, it's not topic-specific behavior. Whenever I go too far down a chain of reasoning ("too far" meaning as few as three causal relationships), people start complaining that I'm giving too much thought to it, and imply they are unable to follow the arguments. I'm just not surrounded by a lot of people that like long and intricate discussions.

(Funnily, both my parents are the type that get tired listening to complex reasoning, and I turned out the complete opposite.)

comment by Swimmer963 · 2012-07-22T03:02:34.155Z · score: 6 (6 votes) · LW · GW

I'm just not surrounded by a lot of people that like long and intricate discussions.

That is...intensely frustrating. I've had people tell me that "well, I find all the points you're trying to make really complicated, and it's easier for me to just have faith in God" or that kind of thing, but I've never actually been rebuked for applying an analytical mindset to discussions. Props on having acquired those habits anyway, in spite of what sounds like an unfruitful starting environment!

comment by Dahlen · 2012-07-22T18:58:46.496Z · score: 1 (1 votes) · LW · GW

Thanks! Anyway, there's the internet to compensate for that. The wide range of online forums built around ideas of varied intellectual depth means you even get to choose your difficulty level...

comment by Davidmanheim · 2012-07-20T14:06:32.762Z · score: 2 (2 votes) · LW · GW

This happens frequently in places where reasoning is suspect, or not valued. Kids in poor areas with few scholastic or academic opportunities find more validation in pursuits that are non-academic, and they tend to deride logic. It's parodied well by Colbert, but it's not uncommon.

I just avoid those people, and now know few of them. Most of the crowd here, I suspect, is in a similar position.

comment by Swimmer963 · 2012-07-20T18:13:00.049Z · score: 0 (0 votes) · LW · GW

I just avoid those people, and now know few of them. Most of the crowd here, I suspect, is in a similar position.

I may be in a similar position of never having known anyone who was like this. Also, I'm very conflict averse myself (but like discussing), so any discussion I start is less likely to have any component of raised voices or emotional involvement that could make it sound like an argument.

comment by Davidmanheim · 2012-07-20T01:46:57.738Z · score: 1 (1 votes) · LW · GW

The best way to get good at a particular type of math, or programming, or any skill, in my experience, is to put yourself in a position where you need it for something. Find a job that requires you to do a bit of programming, or pick a task that requires it. Spend time on it, and you'll learn a bit. Then go back, realize you missed some basics, and pick them up. Oh, and read a ton.

You're interested in a lot of things, and trying to catch up with what you feel you should know, which is wonderful. What do you do with your time? Are you working? College?

comment by Dahlen · 2012-07-21T16:00:13.901Z · score: 3 (3 votes) · LW · GW

I prefer the practice-based approach too, but from my position theoretical approaches are cheaper and much more available, if slower and rather tedious. In school they taught us that the only way to get better in an area is to do extra homework, and frankly my methods haven't improved much since. My usual way is to take an exercise book and solve everything in it, if that counts for practice; other than that, I only have the internet and a very limited budget.

You're interested in a lot of things, and trying to catch up with what you feel you should know, which is wonderful. What do you do with your time? Are you working? College?

Senior year in high school. Right now I have 49 vacation days left, after which school will start, studying will get replaced with busywork and my learning rates will have no choice but to fall dramatically. So now I'm trying to maximize studying time while I still can... It's all kind of backwards, isn't it?

comment by Davidmanheim · 2012-07-22T13:39:47.032Z · score: 1 (1 votes) · LW · GW

Where you go to college and the amount of any scholarships you get are a bigger deal for your long term personal growth than any of the specific subjects you will learn right now.

In the spirit of long-term decision making, figure out where you want to go to college, or what your options are, and spend the summer maximizing the odds of getting into your first-choice schools. I cannot imagine that it won't be a better investment of your time than any one subject you are studying (unless you are preparing for the SAT or some such test). So I guess you should spend the summer on Khan Academy, and learning and practicing vocabulary, to get better at taking the tests that will get you into a great college, where your opportunities to learn are greatly expanded.

comment by Dahlen · 2012-07-22T18:49:42.702Z · score: 3 (3 votes) · LW · GW

I'm afraid all of this is not really applicable to me... My country isn't Western enough for such a wide range of opportunities. Here, institutes for higher education range from almost acceptable (state universities) to degree factories (basically all private colleges). Studying abroad in a Western country costs, per semester, somewhere between half and thrice my parents' yearly income. On top of everything, my grades would have to be impeccable and my performances worthy of national recognition for a foreign college to want me as a student so much as to step over the money issue and cover my whole tuition. (They're not, not by a long shot.)

Thanks for the support, in any case...

comment by iceman · 2012-07-19T23:05:25.740Z · score: 20 (20 votes) · LW · GW

I've commented infrequently, but never did one of these "Welcome!" posts.

Way back in the Overcoming Bias days, my roommate raved constantly about the blog and Eliezer Yudkowsky in particular. I pattern-matched his behaviour to being in a cult, and moved on with my life. About two years later (?), a mutual friend of ours recommended Harry Potter and the Methods of Rationality, which I then read, which brought me to Less Wrong, reading the Sequences, etc. About a year later, I signed up for cryonics with Alcor, and I now give more to the Singularity Institute than my former roommate does. (He is very amused by this.)

I spend quite a bit of time working on my semi-rationalist fanfic, My Little Pony: Friendship is Optimal, which I'll hopefully release within a few months. (I previously targeted releasing this damn thing for April, but... planning fallacy. I've whittled my issue list down to three action items, though, and it's been through its first bout of prereading.)

comment by Alicorn · 2012-07-19T23:19:00.773Z · score: 15 (19 votes) · LW · GW

My Little Pony: Friendship is Optimal

Want.

comment by maia · 2012-07-26T00:39:06.085Z · score: 2 (2 votes) · LW · GW

Could I convince you to perhaps post on the weekly rationality diaries about progress, or otherwise commit yourself, or otherwise increase the probability that you'll put this fic up soon? :D

comment by [deleted] · 2012-07-19T22:45:01.168Z · score: 20 (20 votes) · LW · GW

Hello everyone! I've been a lurker on here for awhile, but this is my first post. I've held out on posting anything because I've never felt like I knew enough to actually contribute to the conversation. Some things about me:

I'm currently 22, female, and a recent graduate of college with a degree in computer science. I'm currently employed as a software engineer at a health insurance company, though I am looking into getting into research some day. I mainly enjoy science, playing video games, and drawing.

I found this site through a link on the Skeptics Stack Exchange page. The post was about cryonics, which is how I got over here. I've been reading the site for about six months now and I have found it extremely helpful. It has also been depressing, though, because I've since realized many of the "problems" in the world were caused by the ineptitude of the species and aren't easily fixed. I've had some problems with existential nihilism since then and if anyone has any advice on the matter, I'd love to hear it.

My journey to rationality probably started with atheism and a real understanding of the scientific method and human psychology. I grew up Mormon, which has since given me some interesting perspectives into groupthink and the general problem of humanity. Leaving Mormonism is what prompted me into understanding why and how so many people could be so systematically insane.

In some ways, I've also found this very isolating because I now have a hard time relating to a lot of people. Just sitting back and watching the ways people destroy themselves and others is very frustrating. It's made worse by my knowledge that I must also be doing this to myself, albeit on a smaller level.

Anyway, it's nice to meet you all, and I will try to comment more! I really enjoy this site, and everyone on it seems to have very good comments.

comment by fiddlemath · 2012-07-29T14:11:25.651Z · score: 0 (0 votes) · LW · GW

It has also been depressing, though, because I've since realized many of the "problems" in the world were caused by the ineptitude of the species and aren't easily fixed. I've had some problems with existential nihilism since then and if anyone has any advice on the matter, I'd love to hear it.

You describe "problems with existential nihilism." Are these bouts of disturbed, energy-sucking worry about the sheer uselessness of your actions, each lasting between a few hours and a few days? Moreover, did you have similar bouts of worry about other important seeming questions before getting into LW?

comment by [deleted] · 2012-08-14T20:48:12.629Z · score: 0 (0 votes) · LW · GW

Yes, that is how I would describe it. It normally comes and goes, with the longest period lasting a few weeks. I'm not entirely sure if it's a byproduct of recent life events or if I am suffering from regular depression, but it's something I've had on and off for a few years. LW hasn't specifically made it worse, but it hasn't made it better either.

comment by fiddlemath · 2012-08-15T15:07:47.950Z · score: 0 (0 votes) · LW · GW

In that case, it sounds very, very similar to what I've learned to deal with -- especially as you describe feeling isolated from the people around you. I started to write a long, long comment, and then realized that I'd probably seen this stuff written down better, somewhere. This matches my experience precisely.

For me, the most important realization was that the feeling of nihilism presents itself as a philosophical position, but is never caused or dispelled by philosophy. You can ruminate forever and find no reason to value anything; philosophical nihilism is fully internally consistent. Or, you can get exercise, and spend some time with friends, and feel better due not to philosophy, but to physiology. (I know this is glib, and that getting exercise when you just don't care about anything isn't exactly easy. The link above discusses this.)

That above post, and Alicorn's sequence on luminosity -- effective self-awareness -- probably lay out the right steps to take, if you'd like to most-effectively avoid these crappy moods.

Moreover, if you'd like to chat more, over skype some time, or via pm, or whatever, I'd be happy to. I'm pretty busy, so there may be high latency, but it sounds like you're dealing with things that are very similar to my own experience, and I've partly learned how to handle this stuff over the past few years.

comment by dac69 · 2012-07-18T23:00:22.503Z · score: 20 (22 votes) · LW · GW

Hello, everyone!

I'd been religious (Christian) my whole life, but was always plagued with the question, "How would I know this is the correct religion, if I'd grown up with a different cultural norm?" I concluded, after many years of passive reflection, that, no, I probably wouldn't have become Christian at all, given that there are so many good people who do not. From there, I discovered that I was severely biased toward Christianity, and in an attempt to overcome that bias, I became atheist before I realized it.

I know that last part is a common idiom that's usually hyperbole, but I really did become atheist well before I consciously knew I was. I remember reading HPMOR, looking up lesswrong.com, reading the post on "Belief in Belief", and realizing that I was doing exactly that: explaining an unsupported theory by patching the holes, instead of reevaluating and updating, given the evidence.

It's been more than religion, too, but that's the area where I really felt it first. Next projects are to apply the principles to my social and professional life.

comment by jacoblyles · 2012-07-18T23:43:10.562Z · score: 0 (8 votes) · LW · GW

Welcome!

The least attractive thing about the rationalist life-style is nihilism. It's there, it's real, and it's hard to handle. Eliezer's solution is to be happy and the nihilism will leave you alone. But if you have a hard life, you need a way to spontaneously generate joy. That's why so many people turn to religion as a comfort when they are in bad situations.

The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism. I'm looking into Tai Chi as a replacement for going to church. But that's still eastern mumbo-jumbo as opposed to western mumbo-jumbo. Stoicism might be the most rational joy machine I can find.

Let me know if you ever un-convert.

comment by Oscar_Cunningham · 2012-07-19T10:42:22.574Z · score: 13 (13 votes) · LW · GW

The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism.

What? What about all the usual happiness inducing things? Listening to music that you like; playing games; watching your favourite TV show; being with friends? Maybe you've ruled these out as not being spontaneous? But going to church isn't less effort than a lot of things on that list.

comment by Nornagest · 2012-07-19T00:43:18.685Z · score: 9 (9 votes) · LW · GW

I suspect that a tendency towards mysticism just sort of spontaneously accretes onto anything sufficiently esoteric; you can see this happening over the last few decades with quantum mechanics, and to a lesser degree with results like Gödel's incompleteness theorems. Martial arts is another good place to see this in action: most of those legendary death touch techniques you hear about, for example, originated in strikes that damaged vulnerable nerve clusters or lymph nodes, leading to abscesses and eventually a good chance of death without antibiotics. All very explicable. But layer the field's native traditional-Chinese-medicine metaphor over that and run it through several generations of easily impressed students, partial information, and novelists without any particular incentive to be realistic, and suddenly you've got the Five-Point Palm Exploding Heart Technique.

So I don't think the mumbo-jumbo is likely to be strictly necessary to most eudaemonic approaches, Eastern or Western. I expect it'd be difficult to extract from a lot of them, though.

comment by Oligopsony · 2012-07-19T00:47:44.524Z · score: 1 (1 votes) · LW · GW

So I don't think the mumbo-jumbo is likely to be strictly necessary to most eudaemonic approaches, Eastern or Western. I expect it'd be difficult to extract from a lot of them, though.

It would be difficult to do it on your own, but it's not very hard to find e.g. guides to meditation that have been bowdlerized of all the mysterious magical stuff.

comment by moocow1452 · 2012-08-17T21:25:15.788Z · score: 0 (0 votes) · LW · GW

Maybe it's incomprehensibility itself that makes some people happy? If you don't understand it, you don't feel responsible, and ignorance being bliss, all that weird stuff there is not your problem, and that's the end of it as far as your monkey bits are concerned.

comment by wdmacaskill · 2012-11-09T17:57:42.979Z · score: 19 (19 votes) · LW · GW

Hi All,

I'm Will Crouch. Other than one other, this is my first comment on LW. However, I know and respect many people within the LW community.

I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or what one ought to do. It's difficult to do so, but I argue that you can.

I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hours, dedicated to the idea of effective altruism: that is, using one's marginal resources in whatever way the evidence supports as doing the most good. A lot of LW members support the aims of these organisations.

I wouldn't call myself a 'rationalist' without knowing a lot more about what that means. I do think that Bayesian epistemology is the best we've got, and that rational preferences should conform to the von Neumann-Morgenstern axioms (though I'm uncertain - there are quite a lot of difficulties for that view). I think that total hedonistic utilitarianism is the most plausible moral theory, but I'm extremely uncertain in that conclusion, partly on the basis that most moral philosophers and other people in the world disagree with me. I think that the more important question is what credence distribution one ought to have across moral theories, and how one ought to act given that credence distribution, rather than what moral theory one 'adheres' to (whatever that means).

comment by MixedNuts · 2012-11-09T18:30:02.717Z · score: 6 (6 votes) · LW · GW

Pretense that this comment has a purpose other than squeeing at you like a 12-year-old fangirl: what arguments make you prefer total utilitarianism to average?

comment by wdmacaskill · 2012-11-09T19:42:13.120Z · score: 6 (6 votes) · LW · GW

Haha! I don't think I'm worthy of squeeing, but thank you all the same.

In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:

Population A: 1 person exists, with a life full of horrific suffering. Her utility is -100.

Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is -99.9

Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren't worth living just can't be a good thing.
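The disagreement between the two theories on this case is easy to check directly; a quick sketch using the numbers from the example:

```python
# Numbers from the example above: Population A is one person at utility -100;
# Population B is 100 billion people at utility -99.9 each.
n_b = 100 * 10**9
total_a = -100.0
total_b = -99.9 * n_b
avg_a = total_a / 1
avg_b = total_b / n_b

# Average utilitarianism prefers B (the higher average)...
assert avg_b > avg_a      # -99.9 > -100.0
# ...while total utilitarianism prefers A (vastly less total suffering).
assert total_a > total_b  # -100.0 > about -1e13
```

Duplicating a miserable person's situation a hundred billion times raises the average while multiplying the total suffering, which is exactly where the two views come apart.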

comment by [deleted] · 2012-11-09T23:32:45.219Z · score: 0 (2 votes) · LW · GW

That's not obvious to me. IMO, the reason why in the real world “bringing into existence people whose lives aren't worth living just can't be a good thing” is that they consume resources that other people could use instead; but if in the hypothetical you fix the utility of each person by hand, that doesn't apply to the hypothetical.

I haven't thought about these things that much, but my current position is that average utilitarianism is not actually absurd -- the absurd results of thought experiments are due to the fact that those thought experiments ignore the fact that people interact with each other.

comment by Pablo_Stafforini · 2012-11-10T17:14:57.172Z · score: 1 (1 votes) · LW · GW

I don't understand your comment. Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more. If you don't think that such a world would be better, then you must agree that average utilitarianism is false.

Here's another, even more obviously decisive, counterexample to average utilitarianism. Consider a world A in which people experience nothing but agonizing pain. Consider next a different world B which contains all the people in A, plus arbitrarily many more people all experiencing pain only slightly less intense. Since the average pain in B is less than the average pain in A, average utilitarianism implies that B is better than A. This is clearly absurd, since B differs from A only in containing a surplus of agony.

comment by [deleted] · 2012-11-10T19:32:46.512Z · score: 0 (0 votes) · LW · GW

Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more. If you don't think that such a world would be better, then you must agree that average utilitarianism is false.

I do think that the former is better (to the extent that I can trust my intuitions in a case that different from those in their training set).

comment by wdmacaskill · 2012-11-11T00:01:15.677Z · score: 5 (5 votes) · LW · GW

Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.

"Separability" of value just means being able to evaluate something without having to look at anything else. I think that whether or not it's a good thing to bring a new person into existence depends only on facts about that person (assuming they don't have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn't be relevant what happened in the distant past. But average utilitarianism makes it relevant: because long-dead people affect the average wellbeing, and therefore affect whether it's good or bad to bring that person into existence.

But, let's return to the intuitive case above, and make it a little stronger.

Now suppose:

Population A: 1 person suffering a lot (utility -10)

Population B: That same person, suffering an arbitrarily large amount (utility -n, for any arbitrarily large n), and a very large number, m, of people suffering -9.9.

Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. I.e., average utilitarianism is willing to add horrendous suffering to someone's already horrific life in order to bring into existence many other people with horrific lives.
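The "for any n, there is some m" claim follows from a little algebra: Population B's average is (-n - 9.9m)/(m + 1), which exceeds -10 exactly when m > 10n - 100. A quick check (Python, with a few illustrative values of n):

```python
def avg_b(n, m):
    # one person at utility -n plus m people at utility -9.9
    return (-n - 9.9 * m) / (m + 1)

for n in (100, 10_000, 1_000_000):
    m = 10 * n - 99           # any m above 10n - 100 works
    assert avg_b(n, m) > -10  # so Population B beats A (-10) on averages
```

However large you make the suffering -n, a big enough crowd of -9.9 lives drags the average back above -10.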

Do you still get the intuition in favour of average here?

comment by TorqueDrifter · 2012-11-13T03:36:48.607Z · score: 4 (4 votes) · LW · GW

Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human - as in, in pop A you will get utility -10, in pop B you get an expected (1/m)(-n) + ((m-1)/m)(-9.9). These intuitions could correspond to a straightforward "maximize expected util of 'being someone in this world'", or something like "suppose all consciousness is experienced by a single entity from multiple perspectives, completing all lives and then cycling back again from the beginning, maximize this being's utility". Such perspectives would give the "non-intuitive" result in these sorts of thought experiments.
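A minimal sketch of that calculation (Python; the point it illustrates is just that the expected utility of a uniformly random life equals the population's average utility):

```python
def expected_util(n, m):
    # prospects of a random person in pop B: a 1/m chance of the -n life,
    # otherwise a -9.9 life
    return (1 / m) * (-n) + ((m - 1) / m) * (-9.9)

n, m = 1_000, 10_000
population = [-n] + [-9.9] * (m - 1)
population_average = sum(population) / len(population)

# the "random person" expectation and the population average coincide,
# which is why this perspective reproduces average utilitarianism
assert abs(expected_util(n, m) - population_average) < 1e-9
```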

comment by TorqueDrifter · 2012-11-14T05:46:04.870Z · score: 2 (2 votes) · LW · GW

Hm, a downvote. Is my reasoning faulty? Or is someone objecting to my second example of a metaphysical stance that would motivate this type of calculation?

comment by MugaSofer · 2012-11-14T09:47:04.819Z · score: 0 (0 votes) · LW · GW

Perhaps people simply objected to the implied selfish motivations.

comment by TorqueDrifter · 2012-11-14T17:23:05.230Z · score: 2 (2 votes) · LW · GW

Perhaps! Though I certainly didn't intend to imply that this was a selfish calculation - one could totally believe that the best altruistic strategy is to maximize the expected utility of being a person.

comment by [deleted] · 2012-11-11T00:32:55.235Z · score: 1 (1 votes) · LW · GW

assuming they don't have any causal effects on other people

Once you make such an unrealistic assumption, the conclusions won't necessarily be realistic. (If you assume water has no viscosity, you can conclude that it exerts no drag on stuff moving in it.) In particular, ISTM that as long as my basic physiological needs are met, my utility almost exclusively depends on interacting with other people, playing with toys invented by other people, reading stuff written by other people, listening to music by other people, etc.

comment by drnickbone · 2012-11-14T08:16:30.696Z · score: 0 (0 votes) · LW · GW

When discussing such questions, we need to be careful to distinguish the following:

  1. Is a world containing population B better than a world containing population A?
  2. If a world with population A already existed, would it be moral to turn it into a world with population B?
  3. If Omega offered me a choice between a world with population A and a world with population B, and I had to choose one of them, knowing that I'd live somewhere in the world, but not who I'd be, would I choose population B?

I am inclined to give different answers to these questions. Similarly for Parfit's repugnant conclusion; the exact phrasing of the question could lead to different answers.

Another issue is background populations, which turn out to matter enormously for average utilitarianism. Suppose the world already contains a very large number of people with average utility 10 (off in distant galaxies say) and call this population C. Then the combination of B+C has lower average utility than A+C, and gets a clear negative answer on all the questions, so matching your intuition.
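The reversal is easy to verify (a Python sketch; the head counts are made up, and only the direction of the inequalities matters):

```python
def average(groups):
    # groups: list of (head_count, utility) pairs
    total = sum(count * utility for count, utility in groups)
    people = sum(count for count, _ in groups)
    return total / people

background_c = (10**15, 10.0)    # huge distant population at +10

pop_a = [(1, -100.0)]            # one person at -100
pop_b = [(10**11, -99.9)]        # 100 billion people at -99.9

assert average(pop_b) > average(pop_a)  # without C, averages favour B
with_c_a = average(pop_a + [background_c])
with_c_b = average(pop_b + [background_c])
assert with_c_b < with_c_a              # with C, the ranking reverses
```

B's vast miserable population now drags a near-+10 average down far more than A's single sufferer does, so the background population flips the verdict.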

I suspect that this is the situation we're actually in: a large, maybe infinite, population elsewhere that we can't do anything about, and whose average utility is unknown. In that case, it is unclear whether average utilitarianism tells us to increase or decrease the Earth's population, and we can't make a judgement one way or another.

comment by MugaSofer · 2012-11-10T19:47:50.931Z · score: -1 (1 votes) · LW · GW

Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more.

While I am not an average utilitarian (I think), a world containing only one person suffering horribly does seem kinda worse.

comment by Pablo_Stafforini · 2012-11-10T20:20:42.529Z · score: 0 (0 votes) · LW · GW

Both worlds contain people "suffering horribly".

comment by MugaSofer · 2012-11-10T20:32:41.053Z · score: -1 (1 votes) · LW · GW

One world contains people suffering horribly. The other contains a person suffering horribly. And no-one else.

comment by Pablo_Stafforini · 2012-11-10T21:59:10.135Z · score: 0 (0 votes) · LW · GW

So, the difference is that in one world there are many people, rather than one person, suffering horribly. How on Earth can this difference make the former world better than the latter?!

comment by MugaSofer · 2012-11-10T22:05:02.136Z · score: -2 (2 votes) · LW · GW

Because it doesn't contain anyone else. There's only one human left and they're "suffering horribly".

comment by Pablo_Stafforini · 2012-11-10T22:12:42.281Z · score: 0 (0 votes) · LW · GW

Suppose I publicly endorse a moral theory which implies that the more headaches someone has, the better the world becomes. Suppose someone asks me to explain my rationale for claiming that a world that contains more headaches is better. Suppose I reply by saying, "Because in this world, more people suffer headaches."

What would you conclude about my sanity?

comment by MugaSofer · 2012-11-10T22:22:04.859Z · score: -1 (1 votes) · LW · GW

Most people value humanity's continued existence.

comment by Nisan · 2012-11-09T18:34:35.932Z · score: 4 (4 votes) · LW · GW

I'm glad you're here! Do you have any comments on Nick Bostrom and Toby Ord's idea for a "parliamentary model" of moral uncertainty?

comment by wdmacaskill · 2012-11-09T19:38:52.394Z · score: 4 (4 votes) · LW · GW

Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view than it is according to utilitarianism (even though it's wrong according to both theories). If we can make such comparisons, then we don't need the parliamentary model: we can just use expected utility theory.

Sometimes, though, it seems that such comparisons aren't possible. E.g. I add one person whose life isn't worth living to the population. Is that more wrong according to total utilitarianism or average utilitarianism? I have no idea. When such comparisons aren't possible, then I think that something like the parliamentary model is the right way to go. But, as it stands, the parliamentary model is more of a suggestion than a concrete proposal. In terms of the best specific formulation, I think that you should normalise incomparable theories at the variance of their respective utility functions, and then just maximise expected value. Owen Cotton-Barratt convinced me of that!
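As a toy illustration of that last proposal (Python; the theories, options, scores and credences below are invented purely to show the mechanics of variance normalisation, not taken from Owen Cotton-Barratt's actual formulation):

```python
import statistics

def normalise(scores):
    # rescale a theory's scores to mean 0, variance 1, so that no theory
    # dominates merely by using a bigger numerical scale
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return [(s - mu) / sd for s in scores]

# three options scored by two incomparable theories on different scales
theory_1 = [0.0, 1.0, 10.0]
theory_2 = [-500.0, 0.0, 5.0]
credences = [0.6, 0.4]

norm_1, norm_2 = normalise(theory_1), normalise(theory_2)
expected = [credences[0] * a + credences[1] * b
            for a, b in zip(norm_1, norm_2)]
best_option = expected.index(max(expected))
```

After normalisation each theory's scores have unit variance, and one then simply maximises credence-weighted expected value across the options.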

Sorry if that was a bit of a complex response to a simple question!

comment by beoShaffer · 2012-11-09T18:57:27.399Z · score: 2 (2 votes) · LW · GW

Hi Will,

I wouldn't call myself a 'rationalist' without knowing a lot more about what that means.

I think most LWers would agree that "Anyone who tries to practice rationality as defined on Less Wrong" is a passable description of what we mean by 'rationalist'.

comment by wdmacaskill · 2012-11-09T19:48:26.164Z · score: 2 (2 votes) · LW · GW

Thanks for that. I guess that means I'm not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it's what I happen to value, but because I think it's objectively valuable (and if you value something else, like promoting suffering, then I think you're mistaken!) That is, I'm a moral realist. Whereas the definition given in Eliezer's post suggests that being a rationalist presupposes moral anti-realism. When I talk with other LW-ers, this often seems to be a point of disagreement, so I hope I'm not just being pedantic!

comment by thomblake · 2012-11-09T20:01:30.211Z · score: 5 (5 votes) · LW · GW

Whereas the definition given in Eliezer's post suggests that being a rationalist presupposes moral anti-realism

Not at all. (Eliezer is a sort of moral realist). It would be weird if you said "I'm a moral realist, but I don't value things that I know are objectively valuable".

It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not. Just like math lets you crunch numbers, whether they're real statistics or made up. But believing you shouldn't make up statistics doesn't therefore mean you don't do math.

comment by Pablo_Stafforini · 2012-11-10T17:17:46.746Z · score: 0 (0 votes) · LW · GW

Could you provide a link to a blog post or essay where Eliezer endorses moral realism? Thanks!

comment by thomblake · 2012-11-12T14:17:54.147Z · score: 1 (1 votes) · LW · GW

Sorting Pebbles Into Correct Heaps notes that 'right' is the same sort of thing as 'prime' - it refers to a particular abstraction that is independent of anyone's say-so.

Though Eliezer is also a sort of moral subjectivist; if we were built differently, we would be using the word 'right' to refer to a different abstraction.

Really, this is just shoehorning Eliezer's views into philosophical debates that he isn't involved in.

comment by somervta · 2012-11-10T04:48:55.497Z · score: 0 (0 votes) · LW · GW

"It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not."

It seems to me that moral realism is an epistemic claim - it is a statement about how the world is - or could be - and that is definitely a matter that impinges on rationality.

comment by Kindly · 2012-11-09T20:13:44.734Z · score: 0 (0 votes) · LW · GW

Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering.

This seems to be similar to Eliezer's beliefs. Relevant quote from Harry Potter and the Methods of Rationality:

"No," Professor Quirrell said. His fingers rubbed the bridge of his nose. "I don't think that's quite what I was trying to say. Mr. Potter, in the end people all do what they want to do. Sometimes people give names like 'right' to things they want to do, but how could we possibly act on anything but our own desires?"

"Well, obviously," Harry said. "I couldn't act on moral considerations if they lacked the power to move me. But that doesn't mean my wanting to hurt those Slytherins has the power to move me more than moral considerations!"

comment by somervta · 2012-11-10T04:39:07.952Z · score: 0 (0 votes) · LW · GW

I don't think that's what Harry is saying there. Your quote from HPMOR seems to me to be more about the recognition that moral considerations are only one aspect of a decision-making process (in humans, anyway), and that just because that is true doesn't mean that moral considerations won't have an effect.

comment by AliceKingsley · 2012-07-19T17:57:17.521Z · score: 19 (19 votes) · LW · GW

Hi! I got here from reading Harry Potter and the Methods of Rationality, which I think I found on TV Tropes. Once I ran out of story to catch up on, I figured I'd start investigating the source material.

I've read a couple of sequences, but I'll hold off on commenting much until I've gotten through more material. (Especially since the quality of discussions in the comment sections is so high.) Thanks for an awesome site!

comment by Despard · 2012-07-20T01:13:23.958Z · score: 18 (18 votes) · LW · GW

Hello everyone,

Thought it was about time to do one of these since I've made a couple of comments!

My name's Carl. I've been interested in science and why people believe the strange things they believe for many years. I was raised Catholic but came to the conclusion around the age of ten that it was all a bit silly really, and as yet I have found no evidence that would cause me to update away from that.

I studied physics as an undergrad and switched to experimental psychology for my PhD, being more interested at that point in how people work than how the universe does. I started to study motor control and after my PhD and a couple of postdocs I know way more about how humans move their arms than any sane person probably should. I've worked in behavioural, clinical and computational realms, giving me a wide array of tools to use when analysing problems.

My current postdoc is coming to an end and a couple of months ago I was undergoing somewhat of a crisis. What was I doing, almost 31 and with no plan for my life? I realised that motor control had started to bore me but I had no real idea what to do about it. Stay in science, or abandon it and get a real job? That hurts after almost a decade of high-level research. And then I discovered, on Facebook, a link to HPMOR. And then I read it all, in about a week. And then I found LW, and a job application for curriculum design for a new rationality institute, and I wrote an email, and then flew to San Francisco to participate in the June minicamp...

And now I'm in the midst of writing some fellowship applications to come to Berkeley and study rationality - specifically how the brain is Bayesian in some ways but not in others, and how that can inform the teaching of rationality. (Or something. It's still in the planning stages!) I'm also volunteering for CFAR at the moment by helping to find useful papers on rationality and cognitive science, though that's on somewhat of a back burner since these fellowships are due very soon. Next month, in fact.

I've started a new blog: it's called 'Joy in the Merely Real', and at the moment I'm exploring a few ideas about the Twelve Virtues of Rationality and what I think about them. You can find it at:

themerelyreal.blogspot.com

Looking forward to doing more with this community in the coming months and years. :)

comment by wsean · 2012-07-18T19:22:02.368Z · score: 17 (17 votes) · LW · GW

Hi! Long-time lurker, first-time... joiner?

I was inspired to finally register by this post being at the top of Main. Not sure yet how much I'll actually post, but the passive barrier of, you know, not actually being registered is gone, so we'll see.

Anyway. I'm a dude, live in the Bay Area, work in finance though I secretly think I'm actually a writer. I studied cog sci in college, and that angle is what I tend to find most interesting on Less Wrong.

I originally came across LW via HPMoR back in 2010. Since then, I've read the Sequences, been to a few meetups, and attended the June minicamp (which, P.S., was awesome).

I'm still struggling a bit with actually applying rationality tools in my life, but it's great to have that toolbox ready and waiting. Sometimes... I hear it calling out to me. "Sean! This is an obvious place to apply Bayes! Seaaaaaaan!"

comment by Nisan · 2012-07-18T20:01:00.579Z · score: 5 (5 votes) · LW · GW

Welcome!

comment by [deleted] · 2013-02-01T18:16:34.535Z · score: 16 (18 votes) · LW · GW

Greetings LWers,

I'm an aspiring Friendliness theorist, currently based at the Australian National University -- home to Marcus Hutter, Rachael Briggs and David Chalmers, amongst others -- where I study formal epistemology through the Ph.B. (Hons) program.

I wasn't always in such a stimulating environment -- indeed I grew up in what can only be deemed intellectual deprivation, from which I narrowly escaped -- and, as a result of my disregard for authority and disdain for traditional classroom learning, I am largely self-taught. Unlike most autodidacts, though, I never was a voracious reader; on the contrary, I barely opened books at all, instead preferring to think things over in my head. This has left me an ignorant person -- something I'm constantly striving to improve on -- but has also protected me from many diseased ideas and even allowed me to better appreciate certain notions by having to rediscover them myself. (case in point, throughout my adolescence I took great satisfaction in analysing my mental mechanisms and correcting for what I now know to be biases, yet I never came across the relevant literature, essentially missing out on a wealth of knowledge)

For a long time I've aspired to join a cultural movement modelled on the principles of the Enlightenment and, to my eyes, LW, MIRI, CFAR, FHI and CSER are exactly the kind of community that can impact society through the use of reason. Alas, I was long unaware of their existence and when I first heard about the 'Singularity' I immediately dismissed it as the science fiction it sounds like, but thankfully this is no longer the case and I can now start making my modest contributions to reducing existential risk.

Lastly, I've never had my IQ measured properly -- passing the Mensa admission test places me at least two SDs above the norm, but that's hardly impressive by LW standards -- and, as much as I value such an indicator, I'm too emotionally invested in my intelligence to dare undergo psychometric testing. (for what it's worth, as a child my development was precocious -- e.g. the development of my motor skills was superior to that of the subjects taking part in this well-known longitudinal study)

I've opened up a lot to you, LWers; I hope my only regret will be not having discovered you earlier...

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-01T23:56:02.065Z · score: 6 (6 votes) · LW · GW

Nice! What part of FAI interests you?

comment by [deleted] · 2013-02-02T09:50:21.857Z · score: 2 (2 votes) · LW · GW

Too soon to say, as I discovered FAI a mere two months ago -- this, incidentally, could mean that it's a fleeting passion -- but CEV has definitely caught my attention, while the concept of a reflective decision theory I find really fascinating. The latter is something I've been curious about for quite some time, as plenty of moral precepts seem to break down once an agent -- even a mere homo sapiens -- reaches certain levels of self-awareness and, thus, is able to alter their decision mechanisms.

comment by Kawoomba · 2013-02-01T18:45:01.527Z · score: 1 (3 votes) · LW · GW

Lastly, I've never had my IQ measured properly -- passing the Mensa admission test places me at least two SDs above the norm

Isn't that a proper IQ test? At least it is where I live. Funny how we like to talk about things we're good at. The real test is "time from passing test to time you leave to save the yearly fee."

I'm an aspiring Friendliness theorist, currently based at the Australian National University -- home to Marcus Hutter, Rachael Briggs and David Chalmers, amongst others -- where I study formal epistemology through the Ph.B. (Hons) program.

That's awesome. Don't miss Marcus' lectures, such a sharp mind. Also, midi - Imperial March (used to be?) playing on his home page.

comment by [deleted] · 2013-02-01T19:54:24.143Z · score: 1 (1 votes) · LW · GW

Isn't that a proper IQ test? At least it is where I live.

Yes and no; it's some version of the Cattell, but it's not administered individually, has a lowish ceiling and they don't reveal your exact result.

The real test is "time from passing test to time you leave to save the yearly fee."

For the record, you needn't join in order to take their heavily subsidised admission test.

comment by Kawoomba · 2013-02-01T19:58:28.298Z · score: 0 (0 votes) · LW · GW

(...) has a lowish ceiling and they don't reveal your exact result.

Is your info Aussie-specific? (EDIT: We're not quite antipodes, but not far off, either) They did when I took it, ceiling 145, was administered in a group setting.

For the record, you needn't join in order to take their heavily subsidised admission test.

'Twas free even, in my case, some kind of promo action.

comment by [deleted] · 2013-02-01T20:50:54.959Z · score: 0 (0 votes) · LW · GW

Is your info Aussie-specific? (EDIT: We're not quite antipodes, but not far off, either) They did when I took it, ceiling 145, was administered in a group setting.

Yep I had Australia in mind, though it's by no means the only country where it works that way. Also, various national Mensa chapters have stopped releasing scores -- something to do with egalitarianism, go figure... -- and pardon my imprecise language, but by lowish I meant around 145 SD15. (didn't mean it in a patronising manner, it's just that plenty of tests have a ceiling of 160 SD15 and some, e.g. Stanford-Binet Form L-M, are employed even above that cutoff)

comment by Kawoomba · 2013-02-01T20:54:15.746Z · score: 0 (0 votes) · LW · GW

I do wonder whether someone who'd score, say, 155 on a 160-ceiling test would score 145 on a 145-ceiling test. You project an aura of knowledgeability on the subject, so I'll just go ahead and ask you. Consider yourself asked.

comment by [deleted] · 2013-02-01T21:09:03.667Z · score: 1 (1 votes) · LW · GW

I'm afraid I'm not sufficiently knowledgeable to answer that and I have no intention of becoming one of those self-proclaimed internet experts! (plus the rest of the internet, outside of LW, already does a good enough job at spreading misinformation)

comment by shminux · 2013-02-01T18:53:28.993Z · score: 0 (2 votes) · LW · GW

I'm an aspiring Friendliness theorist

"machine/emergent intelligence theorist" would not box you in as much. Friendliness is only one model, you know, no matter how convincing it may sound.

comment by [deleted] · 2013-02-01T18:55:03.953Z · score: 1 (5 votes) · LW · GW

"machine intelligence researcher" is also much more employable -- which isn't saying much.

comment by [deleted] · 2013-02-01T19:50:24.744Z · score: 4 (4 votes) · LW · GW

One can signal differently to make oneself more palatable to different audiences and, indeed, "machine/emergent intelligence theorist" is less confining, while "machine intelligence researcher" is more suitable for academia or industry; here at LW, however, I needn't conceal my specific interests, which happen to be in AI safety and friendliness.

comment by SamLL · 2013-02-09T02:02:19.955Z · score: 14 (26 votes) · LW · GW

Hello and goodbye.

I'm a 30 year old software engineer with a "traditional rationalist" science background, a lot of prior exposure to Singularitarian ideas like Kurzweil's, with a big network of other scientist friends since I'm a Caltech alum. It would be fair to describe me as a cryocrastinator. I was already an atheist and utilitarian. I found the Sequences through Harry Potter and the Methods of Rationality.

I thought it would be polite, and perhaps helpful to Less Wrong, to explain why I, despite being pretty squarely in the target demographic, have decided to avoid joining the community and would recommend the same to any other of my friends or when I hear it discussed elsewhere on the net.

I read through the entire Sequences and was informed and entertained; I think there are definitely things I took from it that will be valuable ("taboo" this word; the concept of trying to update your probability estimates instead of waiting for absolute proof; etc.)

However, there were serious sexist attitudes that hit me like a bucket of cold water to the face - assertions that understanding anyone of the other gender is like trying to understand an alien, for example.

Coming here to Less Wrong, I posted a little bit about that, but I was immediately struck in the "sequence rerun" by people talking about what a great utopia the gender-segregated "Failed Utopia 4-2" would be.

Looking around the site even further, I find that it is over 90% male as of the last survey, and just a lot of gender essentialist, women-are-objects-not-people-like-us crap getting plenty of upvotes.

I'm not really willing to put up with that and still less am I enthused about identifying myself as part of a community where that's so widespread.

So, despite what I think could be a lot of interesting stuff going on, I think this will be my last comment and I would recommend against joining Less Wrong to my friends. I think it has fallen very squarely into the "nothing more than sexism, the especially virulent type espoused by male techies who sincerely believe that they are too smart to be sexists" cognitive failure mode.

If you're interested in one problem that is causing at least one rationalist to bounce off your site (and, I think the odds are not unreasonable, where one person writes a long heartfelt post, there might be multiple others who just click away) here you go. If not, go ahead and downvote this into oblivion.

Perhaps I'll see you folks in some years if this problem here gets solved, or some more years after that when we're all unfrozen and immortal and so forth.

Sincerely,

Sam

comment by Qiaochu_Yuan · 2013-02-09T02:18:03.432Z · score: 12 (14 votes) · LW · GW

Thanks for writing this. It's true that LW has a record of being bad at talking about gender issues; this is a problem that has been recognized and commented on in the past. The standard response seems to have been to avoid gender issues whenever possible, which is unfortunate but maybe better than the alternative. But I would still like to comment on some of the specific things you brought up:

assertions that understanding anyone of the other gender is like trying to understand an alien, for example.

I think I know the post you're referring to, I didn't read this as sexist, and I don't think that indicates a male-techy failure mode on my part about sexism. Some men are just really, really bad at understanding women (and maybe commit the typical mind fallacy when they try to understand men, and maybe just don't know anyone who doesn't fall into one of those categories), and I don't think they should be penalized for being honest about this.

gender essentialist

I haven't seen too much of this. Edit: Found some more.

women-are-objects-not-people-like-us crap

Where? Edit: Found some of this too.

I think it has fallen very squarely into the "nothing more than sexism, the especially virulent type espoused by male techies who sincerely believe that they are too smart to be sexists" cognitive failure mode.

This is a somewhat dangerous weapon to wield. It is very easy to classify any attempt to counter this argument as falling into the failure mode you describe; please don't use this as a fully general counterargument.

comment by [deleted] · 2013-02-09T16:35:46.315Z · score: 3 (3 votes) · LW · GW

Coming here to Less Wrong, I posted a little bit about that, but I was immediately struck in the "sequence rerun" by people talking about what a great utopia the gender-segregated "Failed Utopia 4-2" would be.

Did you use a Rawlsian veil of ignorance when judging it? From a totally selfish point of view, I would very, very, very much rather be myself in this world than myself in that scenario (given that, among plenty of other things, I dislike most people of my gender), but think of, say, starving African children or people with disabilities. I don't know much about what it feels like to be in such dire straits so I'm not confident that I'd rather be a randomly chosen person in Failed Utopia 4-2 than a randomly chosen person in the actual world, but the idea doesn't sound obviously absurd to me.

comment by Kawoomba · 2013-02-09T17:21:27.603Z · score: 0 (0 votes) · LW · GW

I dislike most people of my gender

Is that ... like ... allowed?

edit: I agree with you and object to all the conditioning against contradicting "sacred" values (sexism = ugh, bad).

comment by [deleted] · 2013-02-09T17:43:50.518Z · score: 1 (1 votes) · LW · GW

By whom? (Of course, that's not literally true, since the overwhelming majority of all 3.5 billion male humans alive are people I've never met or heard of and so I have little reason to dislike, but...)

comment by Kawoomba · 2013-02-09T06:18:19.162Z · score: 3 (5 votes) · LW · GW

Since I cannot imagine anything but a few cherry picked examples that could have led to your impression, let me use some of my own (the number of cases is low):

The extremely positive reception of Alicorn's "Living Luminously" sequence (karma +50 for the main post alone) and Anja's great and technical posts (karmas +13, +34, +29) all indicate that good content is not filtered along gender lines, which it would be if there were some pervasive bias.

Even asserting that understanding anyone of the other gender is "like trying to understand an alien" does not imply any sort of male superiority complex. If merely pointing out that there are differences, both cultural and genetic, counts as sexism, well, you got me there. Quite obviously there are; I assume you don't live in a hermaphrodite community. Why is it bad when/if that comes up? Forbidden knowledge?

If you're interested in one problem that is causing at least one rationalist to bounce off your site (...)

Are you sure that's the rationalist thing to do? Gender imbalance and a few misplaced or easily misinterpreted remarks need not be representative of a community, just as a predominantly male CS program at Caltech and frat jokes need not be representative of college culture.

comment by jooyous · 2013-02-09T07:06:02.052Z · score: 3 (3 votes) · LW · GW

Gender imbalances and the occasional frat joke didn't cause you to leave Caltech.

It's possible that the user is sensitive to gender issues precisely because it's comparatively difficult, and not entirely rationalist, to leave a community like Caltech.

It's generally the stance of gender-sensitive humans that no one should have to listen to the occasional frat joke if they don't want to. I agree with everything else in your post; that final "can't you take a frat joke?" strikes me as defensive and unnecessary.

comment by Kawoomba · 2013-02-09T07:32:12.442Z · score: 1 (1 votes) · LW · GW

You're right, it was too carelessly formulated.

comment by jooyous · 2013-02-09T07:39:58.537Z · score: 1 (1 votes) · LW · GW

Will you fix it? =) Is there an established protocol for fixing these sorts of things?

comment by Manfred · 2013-02-10T19:32:50.911Z · score: 1 (1 votes) · LW · GW

The edit button? :P

comment by Kawoomba · 2013-02-10T19:42:51.401Z · score: 1 (1 votes) · LW · GW

Is that a protocol, strictly speaking? "Pressing the edit button" would be a protocol with only one action (not sufficient).

Maybe there will be a policy post on this soon.

comment by Manfred · 2013-02-10T20:00:35.063Z · score: 1 (1 votes) · LW · GW

You're right; strictly speaking, the protocol would be TCP/IP. :)

(There is no mandatory or even authoritative social protocol for this situation. The typical behavior is editing and then putting an EDIT: brief explanation of edit, but just editing with no explanation is also fine, particularly if nobody's replied yet, or the edit is explained in child comments).

comment by Kawoomba · 2013-02-10T20:06:25.818Z · score: 1 (1 votes) · LW · GW

just editing with no explanation is also fine, particularly if nobody's replied yet

Well, earlier today I clarified (a euphemism for edited) a comment shortly after it was made, then found a reply that cited the old, unclarified version. You know what that looks like, once the tribe finds out? OhgodImdone.

In a hushed voice I just found out that EY can edit his comments without an asterisk appearing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-09T16:57:27.135Z · score: 2 (2 votes) · LW · GW

Try to keep in mind selection effects. The post was titled Failed Utopia - people who agreed with this may have posted less than those who disagreed.

I confess to being somewhat surprised by this reaction. Posts and comments about gender probably constitute around 0.1% of all discussion on LessWrong.

comment by Wei_Dai · 2013-02-10T10:01:41.020Z · score: 12 (12 votes) · LW · GW

Whenever I see a high quality comment made by a deleted account (see for example this thread where the two main participants are both deleted accounts), I'd want to look over their comment history to see if I can figure out what sequence of events alienated them and drove them away from LW, but unfortunately the site doesn't allow that. Here SamLL provided one data point, for which I think we should be thankful, but keep in mind that many more people have left and not left visible evidence of the reason.

Also, aside from the specific reasons for each person leaving, I think there is a more general problem: why do perfectly reasonable people see a need to not just leave LW, but to actively disidentify or disaffiliate with LW, either through an explicit statement (SamLL's "still less am I enthused about identifying myself as part of a community where that's so widespread"), or by deleting their account? Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

comment by Gastogh · 2013-02-10T12:10:08.330Z · score: 10 (10 votes) · LW · GW

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Some possibilities:

  1. There have been deliberate efforts at community-building, as evidenced by all the meetup-threads and one whole sequence, which may suggest that one is supposed to identify with the locals. Even relatively innocuous things like introduction and census threads can contribute to this if one chooses to take a less than charitable view of them, since they focus on LW itself instead of any "interesting idea" external to LW.

  2. Labeling and occasionally hostile rhetoric: Google gives dozens of hits for terms like "lesswrongian" and "LWian", and there have been recurring dismissive attitudes regarding The Others and their intelligence and general ability. This includes all snide digs at "Frequentists", casual remarks to the effect of how people who don't follow certain precepts are "insane", etc.

  3. The demographic homogeneity probably doesn't help.

comment by Wei_Dai · 2013-02-11T03:17:36.243Z · score: 2 (2 votes) · LW · GW

I agree with these, and I wonder how we can counteract these effects. For example I've often used "LWer" as shorthand for "LW participant". Would it be better to write out the latter in full? Should we more explicitly invite newcomers to think of LW in instrumental/consequentialist terms, and not in terms of identity and affiliation? For example, we could explain that "joining the LW community" ought to be interpreted as "making use of LW facilities and contributing to LW discussions and projects" rather than "adopting 'LW member' as part of one's social identity and endorsing some identifying set of ideas", and maybe link to some articles like Paul Graham's Keep Your Identity Small.

comment by [deleted] · 2013-02-11T03:51:21.306Z · score: 16 (16 votes) · LW · GW

"Here at LW, we like to keep our identity small."

comment by shminux · 2013-02-11T04:50:57.237Z · score: 2 (2 votes) · LW · GW

Nice one.

comment by IlyaShpitser · 2013-02-21T09:28:29.256Z · score: 0 (0 votes) · LW · GW

Should we more explicitly invite newcomers to think of LW in instrumental/consequentialist terms, and not in terms of identity and affiliation?

I think so. The other thing about the "snide digs" the grandparent is talking about is that they are not just bad for our image, they are also wrong (as in incorrect). I think the LW "hit rate" on specific enough technical matters is not all that good, to be honest.

comment by shminux · 2013-02-11T04:50:14.685Z · score: 0 (2 votes) · LW · GW

One of the times the issue of overidentifying with LW came up here, about a year ago, I mentioned that my self-description is "LW regular [forum participant]". It means that I post regularly, but does not mean that I derive any sense of identity from it. "LWer" certainly sounds more like "this is my community", so I stay away from using it except toward people who explicitly self-identify as such. I also tend to discount quite a bit of what someone here posts, once I notice them using the pronoun "we" when describing the community, unless I know for sure that they are not caught up in the sense of belonging to a group of cool "rationalists".

comment by satt · 2013-02-11T07:13:38.464Z · score: 3 (3 votes) · LW · GW

I think the "LWer" appellation is just plain accurate (but then I've used the term myself). Any blog with a regular group of posters & commenters constitutes a community, so LW is a community. Posting here regularly makes us members of this community by default, and being coy about that fact would make me feel odd, given that we've strewn evidence of it all over the site. But I suspect I'm coming at this issue from a bit of an odd angle.

comment by prase · 2013-02-11T01:22:11.369Z · score: 5 (5 votes) · LW · GW

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

It may be because a lot of LW regulars visibly think of it in terms of identity. LW is described by most participants as a community rather than a discussion forum, and there has been a lot of explicit effort to strengthen the communitarian aspect.

comment by Eugine_Nier · 2013-02-11T05:47:25.618Z · score: 1 (3 votes) · LW · GW

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Some people come from a background where they're taught to think of everything in terms of identity.

comment by Kawoomba · 2013-02-10T10:32:24.821Z · score: 1 (1 votes) · LW · GW

why do perfectly reasonable people see a need to not just leave LW, but to actively disidentify or disaffiliate with LW

As a hypothesis, they may be ambivalent about discontinuing their hobby ("Two souls, alas! are dwelling in my breast; (...)") and prefer to burn their bridges to avoid further ambivalence and decision pressures. Many prefer a course of action being locked in, as opposed to continually being tempted by the alternative.

comment by Kindly · 2013-02-10T16:36:53.381Z · score: 0 (4 votes) · LW · GW

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

LW is a hub for several abnormal ideas. An implication that you're affiliated with LW is an implication that you take these ideas seriously, which no reasonable person would do.

comment by Kawoomba · 2013-02-09T17:24:03.709Z · score: 4 (4 votes) · LW · GW

Your comment's first sentence answers your second paragraph.

comment by Risto_Saarelma · 2013-02-10T06:55:58.871Z · score: 1 (1 votes) · LW · GW

I guess you get considered fully unclean even if you're only observed breaking a taboo a few times.

comment by earthwormchuck163 · 2013-02-09T04:22:48.412Z · score: 1 (5 votes) · LW · GW

Why not stay around and try to help fix the problem?

comment by Nornagest · 2013-02-09T05:30:11.659Z · score: 7 (9 votes) · LW · GW

Ordinarily I'd leave this for SamLL to respond to, but I'd say the chances of getting a response in this context are fairly low, so hopefully it won't be too presumptuous for me to speculate.

First of all, we as a community suck at handling gender issues without bias. The reasons for this could span several top-level posts and in any case I'm not sure of all the details; but I think a big one is the unusually blurry lines between research and activism in that field and consequent lack of a good outside view to fall back on. I don't think we're methodologically incapable of overcoming that, but I do think that any serious attempt at doing so would essentially convert this site into a gender blog.

To make matters worse, for one inclined to view issues through the lens of gender politics, Failed Utopia 4-2 is close to the worst starting point this site has to offer. Never mind the explicitly negative framing, or its place within the fun theory sequence: we have here a story that literally places men on Mars on gender-essentialist grounds, and doesn't even mention nonstandard genders or sexual preferences. No, that's not meant to be taken all that seriously or to inform people's real behavior. Doesn't matter. We're talking enormously poor associations here.

From there, the damage has basically been done. If you take that as a starting point and look around the site with gender in mind -- perhaps not even consciously trying to vet things in those terms, but having framed things in that way -- you aren't going to go anywhere good with it. Facts like the predominantly male gender mix (which I'd be inclined to explain in terms of background demographics; computer science is the dominant intellectual framework here and that field's even more gender-skewed) or the evopsych reasoning we use occasionally start to look increasingly sinister, and every related data point's going to build on an already dismal impression. These data points are in fact pretty sparse -- we don't talk much about gender here, for what I see as good reasons -- but they're fairly salient if you're looking for them. And there aren't many pointing in the other direction.

I don't agree with the conclusion. But I can see where it's coming from, and once it's been accepted sticking around to fight a presumptively hopeless battle wouldn't be a very smart move. Now, can we prevent impressions like this from being formed without losing sight of our primary goals or engaging in types of moderation that aren't going to happen with our current leadership and culture? That I'm not sure of.

comment by [deleted] · 2013-02-09T16:48:08.111Z · score: 1 (3 votes) · LW · GW

we as a community suck at handling gender issues without bias.

As far as I can tell, we as a species suck at handling gender issues without bias; the closest things to an exception that I recall seeing are some (not all) articles (but usually not the comments) on the Good Men Project and the discussions on Yvain's "The $Nth Meditation on $Thing" blog post series.

comment by Nornagest · 2013-02-09T18:55:39.539Z · score: 2 (4 votes) · LW · GW

Yeah, I was fairly impressed with Yvain's posts on the subject; if we did want to devote some serious effort to tackling this issue, I can think of far worse starting points.

comment by shminux · 2013-02-11T04:55:19.714Z · score: 1 (1 votes) · LW · GW

we as a species suck at handling gender issues without bias

s/gender//

Though I think that this particular forum sucks less at handling at least some issues.

comment by wedrifid · 2013-02-09T06:30:18.261Z · score: 4 (8 votes) · LW · GW

Why not stay around and try to help fix the problem?

Fixing the problem needs fewer people with a highly polarizing agenda, not more.

comment by Davidmanheim · 2012-07-20T00:11:04.807Z · score: 14 (14 votes) · LW · GW

Hi all,

Not quite recently joined, but when I first joined, I read some, then got busy and didn't participate after that.

Age: Not yet 30. Former Occupation: Catastrophe Risk Modeling. New Occupation: Graduate Student, Public Policy, RAND Corporation.

Theist Status: Orthodox Jew, happy with the fact that there are those who correctly claim that I cannot prove that god exists, and very aware of the confirmation bias and lack of skepticism in most religious circles. It's one reason I'm here, actually. And I'll be glad to discuss it in the future, elsewhere.

I was initially guided here, about a year ago, by a link to The Best Textbooks on Every Subject. I was a bit busy working at the time, building biased mathematical models of reality. (Don't worry, they weren't MY biases, they were those of the senior people and those of the insurance industry. And they were normalized to historical experience, so as long as history is a good predictor of the future...) So I decided that I wanted to do something different, possibly something with more positive externalities, less short-term thinking about how the world could be more profitable for my employer, and more long-term thinking about how it could be better for everyone.

Skip forward; I'm going to be going to graduate school for Policy Analysis at RAND, and they asked us to read Thinking, Fast and Slow, by Kahneman - and I'm a big fan of his. While reading and thinking about it, I wanted to reference something I read on here, but couldn't remember the name of the site. I ended up Googling my way to a link to HP:MOR, which I read in about a day (yesterday, actually), and a link back here. So now LW is in my RSS reader, and I'm here to improve myself and my mind, and become a bit less wrong.

comment by maia · 2012-07-19T17:35:41.535Z · score: 13 (13 votes) · LW · GW

I've been commenting for a few months now, but never introduced myself in the prior Welcome threads. Here goes: Student, electrical engineering / physics (might switch to math this fall), female, DC area.

I encountered LW when I was first linked to Methods a couple years ago, but found the Sequences annoying and unilluminating (after having taken basic psych and stats courses). After meeting a couple of LWers in real life, including my now-boyfriend Roger (LessWrong is almost certainly a significant part of the reason we are dating, incidentally), I was motivated to go back and take a look, and found some things I'd missed: mostly, reductionism and the implications of having an Occam prior. This was surprising to me; after being brought up as an anti-religious nut, then becoming a meta-contrarian in order to rebel against my parents, I thought I had it all figured out, and was surprised to discover that I still had attachments to mysticism and agnosticism that didn't really make any sense.

My biggest instrumental rationality challenge these days seems to be figuring out what I really want out of life. Also, dealing with an out-of-control status obsession.

To cover some typical LW clusters: I am not signed up for cryonics, and am not entirely convinced it is worth it. And I am interested in studying AI, but mostly because I think it is interesting and not out of Singularity-related concern. (I get the feeling that people who don't share the prominent belief patterns about AI/cryonics hereabouts think they are much more of a minority than they actually are.)

comment by TheOtherDave · 2012-07-19T17:47:48.718Z · score: 1 (1 votes) · LW · GW

I'm not quite sure what you're referring to by "the prominent belief patterns," but neither low confidence that signing up for cryonics results in life extension, nor low confidence that AI research increases existential risk, is especially uncommon here. That said, high confidence in those things is far more common here than elsewhere.

comment by maia · 2012-07-19T19:04:24.730Z · score: 1 (1 votes) · LW · GW

That is more or less what I am trying to say. It's just that I've noticed several people on Welcome threads saying things like, "Unlike many LessWrongers, I don't think cryonics is a good idea / am not concerned about AI risk."

comment by candyfromastranger · 2012-07-26T04:13:11.719Z · score: 12 (12 votes) · LW · GW

I highly doubt that I'll be posting articles or even joining discussions anytime soon, since right now, I'm just getting started on reading the sequences and exploring other parts of the site, and don't feel prepared yet to get involved in discussions. However, I'll probably comment on things now and then, so because of that (and, honestly, just because I'm a very social person), I figured I might as well post an introduction here.

I appreciate the way that discussions are described as ending on here, because I've noticed in other debates that "tapping out" is seen as running away, and the main trait that gives me problems in my quest for rationality is that I'm inherently a competitive person, and get more caught up in the idea of "winning" than of improving my thinking. I'm working on this, but if I do get involved in discussions, the fact that they aren't seen as much as competitions here compared to other places should be helpful to me.

Anyway, I guess I'll introduce myself. I'm Alexandra, and I'm a seventeen year old high-school student in the United States (I applied to the camp in August, but I never received any news about it, so I assume that I wasn't accepted). Like many people here, I found out about this website through Harry Potter and the Methods of Rationality, but I've been interested in improving my rational thinking since I was young. I grew up in a secular and intellectual home, so seeing the world and myself realistically has always been a major goal of mine, and I've always naturally tried to apply logical thinking and the scientific method to my problems, but I've never really formally studied rationality (though I did take statistics last year).

I'm pretty smart, but as a high school student (especially one who, due to various bad experiences with the school system, only really found motivation and purpose in school-work less than a year ago), I don't have too much technical knowledge, which I hope to change. I'm more experienced in aggressive self-awareness than I am in more technical ideas (such as the contrast between Bella from Luminosity and Harry in HPMOR). I'm not really interested in a future in rationality work (and, while I'm interested in transhumanism, I don't really see myself being pulled in that direction for a career), I just want to improve my own thinking in order to better use my mind as a tool to achieve my goals.

While I might come across that way on here, I actually don't act very intellectual in my usual social interactions (especially compared to my younger brother, who's very openly and almost aggressively rational). I usually keep my rationality to myself except for certain situations, and use it internally to figure out the best way to approach situations, but I usually come across as much more flippant and frivolous than I actually am (especially since I'm very much an extrovert). I'm too misanthropic to expect rationality from others, so I prefer to use my inner logical side to figure out how to interact with people on their respective levels in a way that works best for me. I can understand the desire to appear as rational and intelligent as you truly are; I'm just a very utilitarian person and have found that placing less emphasis on that side of myself works best for me.

I'm used to most people that I debate with being irrational and easily upset. It never used to bother me, because I consider my intelligence to be a mental tool of mine rather than a personality trait, and because my naturally competitive personality meant that I still enjoyed debates that fell into petty conflict, but recently (maybe because I'm maturing, maybe because I'm busier these days), I've found myself getting bored with that sort of thing. So I'm definitely interested in intellectual discussions on here, though I might not involve myself in them until I'm better prepared.

One thing that I've noticed about myself is that, in discussions, I tend to insist on responding to every single point made by others rather than just selecting some to focus on (before I realized that's what people were doing, it used to bother me that others wouldn't respond to every individual point I made). I'm not sure whether that's something shared by other members of this website or just a personal quirk.

This is getting rambly because I'm a long-winded person, but I'll add a bit more (mostly non-rationality-related) information. I'm not a theist or a spiritual person, but atheism seems obvious enough to me that I don't see much point in discussing it anymore (unless the more "New Age"-y members of my family get a little too pushy with me). I'm interested in physics, math, foreign languages, literature, singing, exploring urban areas, climbing things, transhumanism (especially life-extension, because I want to live forever) and throwing parties. I have a strong appreciation for the arts, but I don't personally do anything artistic (other than singing, which is just a hobby), and I'm easily entertained by the small pleasures in life (good food, pretty views, attractive people of either gender, and fluffy blankets). I really like cats and books and the nighttime, and I'm more interested in clothes and makeup than might be expected from an eccentric, science-loving rationalist with quite a few geeky interests, but people are complex. I tend to be a bit surreal when I'm not purposefully trying to be serious.

comment by Bugmaster · 2012-07-26T06:06:29.749Z · score: 2 (2 votes) · LW · GW

I applied to the camp in August, but I never received any news about it, so I assume that I wasn't accepted

I'm not affiliated with SIAI or the summer camps in any way, but IMO this sounds like a breakdown somewhere in the organization's communication protocols. If I were you, I wouldn't just assume that I wasn't accepted, I would ask for an explanation.

comment by candyfromastranger · 2012-07-26T06:23:13.673Z · score: 1 (1 votes) · LW · GW

I'll contact them, then. I wasn't expecting to be accepted, but on the off chance that I was, it's hopefully not too late.

comment by hannahelisabeth · 2012-11-10T22:47:52.404Z · score: 1 (1 votes) · LW · GW

I like your description of yourself. You remind me a bit of myself, actually. I think I'd enjoy conversing with you. Though I have nothing on my mind at the moment that I feel like discussing.

Hm, I kind of feel like my comment ought to have a bit more content than "you seem interesting" but that's really all I've got.

comment by ViEtArmis · 2012-07-19T16:41:25.003Z · score: 12 (14 votes) · LW · GW

Hello! I'm David.

I'm 26 (at the time of writing), male, and an IT professional. I have three (soon to be four) children, three (but not four) of which have a different dad.

My immediate links here were through the Singularity Institute and Harry Potter and the Methods of Rationality, which drove me here when I realized the connection (I came to those things entirely separately!). When I came across this site, I had read through the Wikipedia list of biases several times over the course of years, come to many conscious conclusions about the fragility of my own cognition, and had innumerable arguments with friends and family that changed minds, but I never really considered that there would be a large community of people that got together on those grounds.

I'm going to do the short version of my origin story here, since writing it all out seems both daunting and pretentious. I was raised rich and lucky by an entrepreneur/university professor/doctor father and a mother who always had to be learning something or go crazy (she did some of both). I dropped out of a physics major in college and got my degree in gunsmithing instead, but only after I worked a few years. Along the way, I've politically and morally moved around, but I'm worried that the settling of my moral and political beliefs is a symptom of my brain settling rather than because of all of my rationalizations.

There are a few reasons that I haven't commented on here yet (mostly because I despise any sort of hard work), and this is an attempt to break some of those inhibitions and maybe even get to know some people well enough (i.e. at all) to actively desire discourse.

Ok, David Fun Facts time:

  • I know enough Norwegian, Chinese, Latin, Lojban, and Spanish to do...something useful maybe?

  • I almost never think of what I'm saying before I say it (as in black-box), and I let it continue because it works.

  • Corollary: I curse a lot when I'm comfortable with people.

  • Corollary: My voice is low and loud, so it carries quite far.

  • I play a lot of video games, board games, and thought experiment games.

comment by CoffeeStain · 2013-02-08T11:15:39.505Z · score: 11 (11 votes) · LW · GW

Hey everyone,

As I continue to work through the sequences, I've decided to go ahead and join the forums here. A lot of the rationality material isn't conceptually new to me, although much of the language is very much so, and thus far I've found it to be exceptionally helpful to my thinking.

I'm a 24-year-old video game developer, having worked on graphics for a particular big-name franchise for a couple years now. It's quite the interesting job, and it's definitely one of the realms where I find the heady, abstract rationality tools extremely helpful. Rationality is what it is, and that seems to be acknowledged here, a fact I'm quite grateful for.

When I'm not discussing the down-to-earth topics here, people may find I have a sometimes anxiety-ridden attachment to certain religious ideas. Religious discussion has been extremely normal for me throughout my life, so while the discussion doesn't make me uncomfortable, my inability to come to answers that I'm happy with does, and has caused me a bit of turmoil outside of discussion. Obviously there is much to say about this, and much people may like to say to me, but I'd like to first get through all the sequences, get all of my questions about it all answered, pay attention a bit to the discussions here, and I'll go from there. I have no grand hopes to finally put these beliefs to rest, but I will go to lengths to see whether it is something I should do. To pick either seems to me to suppose I have a Way to rationality, if I understand the point correctly. I would invite any and all discussion on the topic, and I appreciate the little "welcome to Theists" in the main post here. :)

See you all around.

comment by Vaniver · 2013-02-20T19:23:50.984Z · score: 1 (1 votes) · LW · GW

Welcome! Glad to see you here. :D

comment by RobertChange · 2013-01-17T21:35:24.051Z · score: 11 (13 votes) · LW · GW

Hi LWers,

I am Robert and I am going to change the world. Maybe just a little bit, but that’s ok, since it’s fun to do and there’s nothing else I need to do right now. (Yay for mini-retirements!)

I find some of the articles here on LW very useful, especially those on heuristics and biases, as well as the material on self-improvement, although I find it quite scattered among loads of way-too-theoretical stuff. Does it seem odd that I have learned many more useful tricks and gained more insight from reading HPMOR than from reading 30 to 50 high-rated and "foundational" articles on this site? I am sincerely sad that even the leading rationalists on LW seem to struggle to get actual benefits out of their special skills and special knowledge (Yvain: Rationality is not that great; Eliezer: Why aren't "rationalists" surrounded by a visible aura of formidability?) and I would like to help them change that.

My interest is mainly in contributing more structured, useful content and also in banding together with fellow LWers to practice and apply our rationalist skills. As a stretch goal I think that we could pick someone really evil as our enemy and take them down, just to show our superiority. Let me stress that I am not kidding here. If rationality really counts for something (other than being good entertainment for sciency types and sci-fi lovers), then we should be able to find the right leverage and play out a great plot which just leaves everyone gasping "shit!" And then we'll have changed the world, because people will start taking rationality seriously.

Let me send out a warm “thank you” to you all for welcoming me in your rationalist circles!

comment by John_Maxwell (John_Maxwell_IV) · 2013-01-18T06:23:58.912Z · score: 3 (3 votes) · LW · GW

Welcome!

Why aren't "rationalists" surrounded by a visible aura of formidability?

Because they don't project high status with their body language?

Re: Taking out someone evil. Let's be rational about this. Do we want to get press? Will taking them out even be worthwhile? What sort of benefits from testing ideas against reality can we expect?

I think humans who study rationality might be better than other humans at avoiding certain basic mistakes. But that doesn't mean that the study of rationality (as it currently exists) amounts to a "success spray" that you can spray on any goal to make it more achievable.

Also, if the recent survey is to be believed, the average IQ at Less Wrong is very high. So if LW does accomplish something, it could very well be due to being smart rather than having read a bunch about rationality. (Sometimes I wonder if I like LW mainly because it seems to have so many smart people.)

comment by Peterdjones · 2013-01-18T13:51:49.840Z · score: 0 (0 votes) · LW · GW

But that doesn't mean that the study of rationality (as it currently exists) amounts to a "success spray" that you can spray on any goal to make it more achievable.

Some LessWrongians believe it does.

comment by John_Maxwell (John_Maxwell_IV) · 2013-01-19T07:02:42.451Z · score: 0 (0 votes) · LW · GW

That comment doesn't rule out selection effects, e.g. the IQ thing I mentioned.

comment by Peterdjones · 2013-01-19T23:49:51.104Z · score: 0 (2 votes) · LW · GW

IQ without study will not make you a super philosopher or super anything else.

comment by MugaSofer · 2013-01-18T09:33:00.717Z · score: 0 (2 votes) · LW · GW

Don't be too pessimistic toward the newcomer, John. We're not completely useless. It doesn't grant any new abilities as such, admittedly, but if you're interested in making the right decision, then rationality is quite useful; to the extent that choosing correctly can help you, this is the place to be. Of course, how much the right choices can help you varies a bit, but it's hard to know how much you could achieve if you're biased, isn't it?

comment by John_Maxwell (John_Maxwell_IV) · 2013-01-19T07:00:22.399Z · score: 0 (0 votes) · LW · GW

It doesn't grant any new abilities as such, admittedly, but if you're interested in making the right decision, then rationality is quite useful; to the extent that choosing correctly can help you, this is the place to be.

Hm. My correction on that would be: to the extent that your native decision-making mechanisms are broken and can be fixed by reading blog posts on Less Wrong, this is the place to be. In other words, how useful the study of rationality is depends on how important, and how easily beaten, the bugs are that Less Wrong tries to fix in human brains.

Many people are interested in techniques for becoming more successful and getting more out of life. Techniques range from reading The Secret to doing mindfulness meditation to reading Less Wrong. I don't see any a priori reason to believe that the ROI from reading Less Wrong is substantially higher than from other methods. (Though, come to think of it, self-improvement guru Sebastian Marshall gives LW a rave review. So in practice LW might work pretty well, but I don't think that is the sort of thing you can derive from first principles; it's really something you determine through empirical investigation.)

comment by OrphanWilde · 2013-01-17T22:43:49.443Z · score: 2 (2 votes) · LW · GW

I'm evil by some people's standards. You'll have to get a little bit more specific about what you think constitutes evil.

From what I've seen, real evil tends to be petty. Most grand atrocities are committed by people who are simply incorrect about what the right thing to do is.

comment by shminux · 2013-01-17T23:18:01.335Z · score: 1 (1 votes) · LW · GW

If rationality really counts for something (other than being good entertainment for sciency types and sci-fi lovers), then we should be able to find the right leverages and play out a great plot which just leaves everyone gasping “shit!”

You may follow HJPEV in calling world domination "world optimization", but running on some highly unreliable wetware means that grand projects tend to become evil despite best intentions, due to snowballing unforeseen ramifications. In other words, your approach seems to be lacking wisdom.

comment by jpaulson · 2013-01-18T02:09:32.545Z · score: 1 (1 votes) · LW · GW

You seem to be making a fully general argument against action.

comment by shminux · 2013-01-18T03:20:03.917Z · score: 1 (1 votes) · LW · GW

Against any sweeping action without carefully considering and trying out incremental steps.

comment by RobertChange · 2013-01-19T17:10:08.219Z · score: 0 (0 votes) · LW · GW

Thanks to all for the warm welcome and the many curious questions about my ambition! And special thanks to MugaSofer, Peterdjones, and jpaulson for your argumentative support. I am very busy writing right now, and I hope that my posts will answer most of the initial questions. So I'll use the space here instead to write a little more about myself.

I grew up a true Ravenclaw, but after grad school I discovered that Hufflepuff’s modesty and cheerful industry also have their benefits when it comes to my own happiness. HPMOR made me discover my inner Slytherin, because I realized that Ravenclaw knowledge and Hufflepuff goodness do not suffice to bring about great achievements. The word “ambition” in the first line of the comment is therefore meant in Professor Quirrell’s sense. I also have a deep respect for the principles of Gryffindor’s group (of which the likes of A. Swartz and J. Assange have recently caught much mainstream attention), but I can’t find anything of that spirit in myself. If I have ever appeared to be a hero, it was because I accidentally knew something that was of help to someone.

@shminux: I love incremental steps and try to incorporate them into any of my planning and acting! My mini-retirement is actually such a step that, if successful, I’d like to repeat and expand.

@John_Maxwell_IV: Yay for empirical testing of rationality!

@OrphanWilde: “Don't be frightened, don't be sad, We'll only hurt you if you're bad.“ Or to put it into more utilitarian terms: If you are in the way of my ambition, for instance if I would have to hurt your feelings to accomplish any of my goals for the greater good, I would not hesitate to do what has to be done. All I want is to help people to be happy and to achieve their goals, whatever they are. And you’ll probably all understand that I might give a slight preference to helping people whose goals align with mine. ;-)

May you all be happy and healthy, may you be free from stress and anxiety, and may you achieve your goals, whatever they are.

comment by Kawoomba · 2013-01-17T22:23:09.222Z · score: 0 (0 votes) · LW · GW

Let me stress that I am not kidding here.

Anything more specific you have in mind?

comment by kirpi · 2012-07-21T08:18:09.132Z · score: 11 (11 votes) · LW · GW

Hello. I am from Istanbul, Turkey (a Turkish citizen, born and raised). I came across Less Wrong on a popular Turkish website called EkşiSözlük. Since then, this has been the place I check to see what's new when there's nothing worth reading on Google Reader and I have time. (Such long posts you have!)

I am 31 years old, and I have a BSc in Computer Science and an MSc in Computational Sciences (research on bioinformatics). But then, like most of the people in my country do, I've landed in a job where I can't utilize any of this knowledge. Information Security :)

Why did I complain about my job? Here is why:

I've long been looking for "the best way to have lived a life". What I mean by this is that I want to be able to say, at the moment of death, "I lived my life the best way I could, and I can die blissfully". This may come off a bit cliché, but bear in mind that I'm relatively new to this rationality thing.

While I was learning Computer Science for the first time, I saw there was a great opportunity in relating computational sciences to social sciences so as to understand the inner workings of human beings. I realised this when the Law & Ethics instructor asked us to write an essay on what would be "the best way to live your life", at a time when I was learning about greedy algorithms. Granted, there would be many gaps in my arguments, but my case was like this: "You can't predict how long you will live. So the best way to search for the (sub)optimal life is to use a greedy algorithm. That is, at every decision point, you select the alternative that maximizes your utility at that time." You soon come to learn that this is easier said than done (no long-term goals, no relationships, etc.). And greedy algorithms may generate a sub-optimal solution rather than the optimal one, because at some point you chose the wrong path by not looking far enough ahead.
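The greedy-vs-optimal point can be sketched with a toy example (the decision points, options, and utility numbers here are entirely made up for illustration; nothing in the comment specifies them):

```python
# Toy decision tree: each node maps to a list of
# (option, immediate utility, next decision point) tuples.
decision_tree = {
    "start":  [("party", 5, "broke"), ("study", 2, "career")],
    "broke":  [("odd jobs", 1, None)],
    "career": [("good job", 8, None)],
}

def greedy_path(node):
    """Always take the option with the highest immediate utility."""
    total = 0
    while node is not None:
        _option, utility, nxt = max(decision_tree[node], key=lambda o: o[1])
        total += utility
        node = nxt
    return total

def best_path(node):
    """Exhaustive search: maximize total utility over the whole life."""
    if node is None:
        return 0
    return max(u + best_path(nxt) for _o, u, nxt in decision_tree[node])

print(greedy_path("start"))  # 6: "party" (5) then "odd jobs" (1)
print(best_path("start"))    # 10: "study" (2) then "good job" (8)
```

The greedy chooser grabs the 5-utility option and locks itself out of the 8-utility branch, which is exactly the "chose the wrong path by not looking far enough ahead" failure the comment describes.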

I currently suspect that Bayesian (or Laplacian, maybe?) methods have the best chance of increasing the probability that I live a good life. I've written all over the place, but there's one last thing I want to add.

I do not believe in an afterlife, or a soul for that matter. This happened very recently compared to most of you. So I was constantly looking for a "rational" justification for continuing to live a good life. I am on the verge of giving up looking, since there seems to be nothing to find, and just living. Which is a little sad, actually, since I still have the feeling that I could probably do something great with my life. But then, constant questioning also seems to lead to a sub-optimal life (maybe with an even lower utility than the greedy algorithm). I guess what I am trying to say is that I am on the verge of becoming a hedonist.

I'd love to learn your ideas or reading recommendations on how best to live a life. I'd also love to organize meetups of rationalists in Turkey.

P.S. If you haven't seen it yet, there's a book called "The Theory That Would Not Die", which is an excellent source on many (and I mean it when I say many) things Bayesian.

comment by NancyLebovitz · 2012-12-05T19:58:47.688Z · score: 3 (3 votes) · LW · GW

"You can't predict how long you will live. So the best way to search for the (sub)optimal life was to utilize a greedy algorithm. That is, at every decision point, you have to select the best alternative that maximizes your utility at that time."

However, you can estimate how long you will live with fairly good accuracy. If you know you're very likely to live for some decades more, then I think it makes sense to optimize around the estimate rather than for the very small possibility that you'll be dead in the next hour.

comment by NotInventedHere · 2012-12-05T18:32:15.070Z · score: 2 (2 votes) · LW · GW

This is an extremely belated reply, but with regards to

So I was constantly looking for a "rational" justification for continuing to live a good life. I am on the verge of giving up looking, since there seems to be nothing to find, and just living.

The Fun Theory and Metaethics sequences helped me through my personal period of existential angst.

The two most immediately helpful posts I would recommend for someone like you are Joy in The Merely Real and Joy in the Merely Good.

comment by fowlertm · 2012-07-19T00:15:46.502Z · score: 11 (11 votes) · LW · GW

Hello,

My name is Trent Fowler. I started leaning toward scientific and rational thinking while still a child, thanks in part to a variety of aphorisms my father was fond of saying. Things like "think for yourself" and "question your own beliefs" are too general to be very useful in particular circumstances, but were instrumental in fostering in me a skepticism and respect for good argument that has persisted all my life (I'm 23 as of this writing). These tools are what allowed me to abandon the religion I was brought up in as a child, and to eventually begin salvaging the bits of it that are worth salvaging. Like many atheists, when I first dropped religion I dropped every last thing associated with it. I've since grown to appreciate practices like meditation, ritual, and even outright mysticism as techniques which are valuable and pursuable in a secular context.

What I've just described is basically the rationality equivalent of lifting weights twice a week and going for a brisk walk in the mornings. It's great for a beginner, but anyone who sticks with it long enough will start to get a glimpse of what's achievable by systematizing training and ramping up the intensity. World-class martial artists, olympic powerlifters, and ultramarathoners may seem like demi-gods to the weekend warriors, but a huge amount of what they've accomplished is attributable to hard work and dedication (with a dash of luck and genetics, of course).

The Bruce Lees of the mind, however, are more than just role models. They're the people who will look extinction risk square in the face and start figuring out how to actually solve the problems. They're the people who will build transhuman AIs, extinguish death, probe the bedrock of reality, and fling human civilization into deep space. As the dojo is to the apprentice, so is Less Wrong to the aspiring rationalist.

Sadly, when I was gripped rather suddenly by a fascination with math and physics as a child, there was not enough in the way of books, support, and instruction to get the prodigy-fires burning. To this day, deep math and physics remain an interesting but largely inscrutable realm of human knowledge. But I'm still young enough that with hard work and dedication I could be a Bostrom or a Yudkowsky, especially if I manage to scramble onto their shoulders.

So here I am, ready to sharpen the blade of my thinking, that it may more effectively be turned to both pondering metaphysical quandaries and solving problems that threaten our collective future. I am excited by the prospects, and hope I am up to the challenge.

comment by krzhang · 2013-02-19T05:33:23.535Z · score: 10 (10 votes) · LW · GW

I am Yan Zhang, a mathematics grad student specializing in combinatorics at MIT (and soon to work at UC Berkeley after graduation) and co-founder of Vivana.com. I was involved with building the first year of SPARC. There, I met many cool people at CFAR, for which I'm now a curriculum consultant.

I don't know much about LW but have liked some of the things I have read here; AnnaSalamon described me as a "street rationalist" because my own rationality principles are home-grown from a mix of other communities and hobbies. In that sense, I'm happy to step foot into this "mainstream dojo" and learn your language.

Recently Anna suggested I may want to cross-post something I wrote to LW and I've always wanted to get to know the community better, so this is the first step, I suppose. I look forward to learning from all of you.

comment by Qiaochu_Yuan · 2013-02-19T05:54:04.835Z · score: 1 (1 votes) · LW · GW

Welcome! It's good to see you here.

comment by krzhang · 2013-02-19T05:58:03.584Z · score: 0 (0 votes) · LW · GW

Haha hey QC. Remind me sometime to learn the "get ridiculously high points in karma-based communities and learn a lot" metaskill from you... you seem to be off to a good start here too ;)

comment by Qiaochu_Yuan · 2013-02-19T06:00:36.144Z · score: 2 (2 votes) · LW · GW

Step 1 is to spend too much time posting comments. I'm not sure I recommend this to someone whose time is valuable. I would like to see you share your "street rationalist" skills here, though!

comment by hannahelisabeth · 2012-11-10T22:08:51.215Z · score: 10 (10 votes) · LW · GW

Hi,

My name is Hannah. I'm an American living in Oslo, Norway (my husband is Norwegian). I am 24 (soon to be 25) years old. I am currently unemployed, but I have a bachelor's degree in Psychology from Truman State University. My intention is to find a job working at a day care, at least until I have children of my own. When that happens, I intend to be a stay-at-home mother and homeschool my children. Anything beyond that is too far into the future to be worth trying to figure out at this point in my life.

I was referred to LessWrong by some German guy on OkCupid. I don't know his name or who he is or anything about him, really, and I don't know why he messaged me randomly. I suppose something in my profile seemed to indicate that I might like it here or might already be familiar with it, and that sparked his interest. I really can't say. I just got a message asking if I was familiar with LessWrong or Harry Potter and the Methods of Rationality (which I was not), and if so, what I thought of them. So I decided to check them out. I thought the HP fanfiction was excellent, and I've been reading through some of the major series here for the past week or so. At one point I had a comment I wanted to make, so I decided to join in order to be able to post the comment. I figure I may as well be part of the group, since I am interested in continuing reading and discussing here. :-)

As for more about my background in rationality and such, I like to think I've always been oriented towards rationality. Well, when I was younger I was probably less adept at reasoning and certainly less aware of cognitive biases and such, but I've always believed in following the evidence to find the truth. That's something I think my mother helped to instill in me. My acute interest in rationality, however, probably occurred when I was around 18-19 years old. It was at this point that I became an atheist and also when I began Rational Emotive Behavior Therapy.

I had been raised as a Christian, more or less. My mother is very religious, but also very intelligent, and she believes fervently in following the evidence wherever it leads (despite the fact that, in practice, she does not actually do this). The shift in my religious perspective initially occurred around when I first began dating my husband. He was not religious, and I had the idea in my head that it was important that he be religious, in order for us to be properly compatible. But I observed that he was very open-minded and sensible, so I believed that the only requirement for him to become a Christian was for me to formulate a sufficiently compelling argument for why it was the true religion. And if this had been possible, it's likely he would have converted, but alas, this was a task I could not succeed at. It was by examining my own religion and trying to answer his honest questions that I came to realize that I didn't actually know what any good reasons for being a Christian were, and that I had merely assumed there must be good reasons, since my mother and many other intelligent religious people that I knew were convinced of the religion. So I tried to find out what these reasons were, and they came up lacking.

When I found that I couldn't find any obvious reasons that Christianity had to be the right religion, I realized that I didn't have enough information to come to that conclusion. When I reflected on all my religious beliefs, it occurred to me that I didn't even know where most of them came from. So I decided to throw everything out the window and start from scratch. This was somewhat difficult for me emotionally, since I was honestly afraid that I was giving up something important that I might not get back. I mean, what if Christianity were the true religion and I gave it up and never came back? So I prayed to God (whichever god(s) he was, if any) to lead me on a path towards the truth. I figured if I followed evidence and reason, then I would end up at the truth, whatever it was. If that meant losing my religion, then my religion wasn't worth having. I trusted that anything worth believing would come back to me. And that even if I was led astray and ended up believing the wrong thing, God would judge me based on my intent and on my deeds. A god who is good will not punish me for seeking the truth, even if I am unsuccessful in my quest. And a god who is not good is not worth worshipping. I know this idea has been voiced by many others before me, but for me this was an original conclusion at the time, not something I'd heard as a quote from someone else.

Another pertinent influence of rationality on my life occurred during my second year of college. I had decided to see a counselor for problems with anxiety and depression. The therapy that counselor used was Rational Emotive Behavior Therapy, and we often engaged in a lot of meaningful discussions. I found the therapy and that particular approach extremely helpful in managing my emotions and excellent practice in thinking rationally. I think it really helped me become a better thinker in addition to being more emotionally stable.

So it's been sort of a cumulative effect, losing my religion, going to college, going through counseling, etc. As I get older, I expose myself to more and more ideas (mostly through reading, but also through some discussion) and I feel that I get better and better at reasoning, understanding biases, and being more rational. A lot of the things I've read here are things that I had either encountered before or seemed obvious to me already. Although, there is plenty of new stuff too. So I feel that this community will be a good fit for me, and I hope that I will be a positive addition to it.

I have a lot of unorthodox ideas and such that I'd be happy to discuss. My interests are parenting (roughly in line with Unconditional Parenting by Alfie Kohn), schooling/education (I support a Sudbury type model), diet (I'm paleo), relationships (I don't follow anyone here; I've got my own ideas in this area), emotions and emotional regulation (REBT, humanistic approach, and my own experience/ideas) and pretty much anything about or related to psychology (I'm reasonably educated in this area, but I can always learn more!). I'm open to having my ideas challenged and I don't shy away from changing my mind when the evidence points in the opposite direction. I used to have more of a problem with this, in so far as I was concerned about saving face (I didn't want to look bad by publicly admitting I was wrong, even if I privately realized it), but I've since reasoned that changing my mind is actually a better way of saving face. You look a lot stupider clinging to a demonstrably wrong position than simply admitting that you were mistaken and changing your ideas accordingly.

Anyway, I hope that wasn't too long an introduction. I have a tendency to write a lot and invest a lot of time and effort into my writing. I care a lot about effective communication, and I like to think I'm good at expressing myself and explaining things. That seems to be something valued here too, so that's good.

comment by Morendil · 2012-11-10T22:13:40.659Z · score: 1 (1 votes) · LW · GW

Welcome here!

comment by Abd · 2012-10-31T00:25:03.141Z · score: 10 (12 votes) · LW · GW

I'm Abd ul-Rahman Lomax, introducing myself. I have six grandchildren, from five biological children, and I have two adopted girls, age 11 from China, and age 9 from Ethiopia.

I was born in 1944. Abd ul-Rahman is not my birth name; I accepted Islam in 1970. Not being willing to accept pale substitutes, I learned to read the Qur'an in Arabic by reading the Qur'an in Arabic.

Back in my teenage years, I was at Cal Tech for a couple of years, taking Richard P. Feynman's two years of undergraduate physics classes, the ones made into the textbook. I had Linus Pauling for freshman chemistry as well. Both of them helped shape how I think.

I left Cal Tech to pursue a realm other than "science," but was always interested in direct experience rather than becoming stuffed with tradition, though I later came to respect tradition (and memorization) far more than at the outset. I became a leader of a "spiritual community," and a successor to a well-known teacher, Samuel L. Lewis, but was led to pursue many other interests.

I delivered babies (starting with my own) and founded a school of midwifery that trained midwives for licensing in Arizona.

Self-taught, I started an electronics design consulting business, still going with a designer in Brazil.

I became known as one of the many independent inventors of delegable proxy as a method of creating hierarchical communication structure from the bottom up. Social structure, and particularly how to facilitate collective intelligence, has been a long-term interest.

I was a Muslim chaplain at San Quentin State Prison, serving an almost entirely Black community. In case you haven't guessed, I'm not black. I loved it. People are people.

So much I'm not saying yet.... I became interested in wikis early on, but didn't get to Wikipedia until 2005, becoming seriously active in 2007. Eventually, I came across an abusive blacklisting of a web site, a well-known archive of scientific papers on cold fusion. I'd been very aware of the 1989 announcement and some of the ensuing flap, but had assumed, like most people with enough knowledge to know what it was about, that the work had not been replicated.

When I looked, I became interested enough to buy a number of major works in the area (including almost all of the skeptical literature).

Among those who have become familiar with it, cold fusion (a bit of a misnomer; at the least, it was prematurely named) is an ultimately clear example of how pseudoskepticism came to dominate a whole field for over fifteen years. The situation flipped in the peer-reviewed journals beginning about eight years ago, but that's not widely recognized; it is merely obvious if one looks at what has been published in that period.

Showing this is way beyond the scope of this introduction, but I assume it will come up. I'm just asserting what I reasonably conclude, having become familiar with the evidence (and I'm working with the scientists in the field now, in many ways).

As to rational skepticism, I was known to Martin Gardner, who quoted a study of mine on the so-called Miracle of the Nineteen in the Qur'an, the work of Rashad Khalifa, whom I knew personally.

I naively thought, for a couple of days, that a rational-skeptic approach to cold fusion might be welcome on RationalWiki. Definitely not. Again, that's another story. However, I'm not banned there and have sysop privileges (like most users).

On RationalWiki, however, I came across the work of Yudkowsky, and this blog. Wow! In some of the circles in which I've moved, I've been a voice crying in the wilderness, with only a few echoes here and there. Here, I'm reluctant to say anything, so commonly cogent is comment in this community. I know I'm likely to stick my foot in my mouth.

However, that's never stopped me, and learning to recognize the taste of my foot, with the help of my friends, is one way in which I've kept my growth alive. The fastest way to learn is generally to make mistakes.

I'm also likely to comment, eventually, on the practical ontology and present reality of Landmark Education, with which I've become quite familiar, as well as on the myths and facts which widely circulate about Landmark. To start, they do let you go to the bathroom.

Meanwhile, I've caught up with HPMOR, and am starting to read the sequences. Great stuff, folks.

comment by Nisan · 2012-10-31T01:09:16.898Z · score: 5 (5 votes) · LW · GW

Welcome! That's a fascinating biography.

I have been to one introductory Landmark seminar and wrote about the experience here.

comment by kaneleh · 2012-10-24T19:37:35.410Z · score: 10 (10 votes) · LW · GW

Hello. I was brought here by HPMOR, which I finished reading today. Back in 1999 or something I found a site called sysopmind.com, which had interesting reads on AI, Bayes' theorem (which I didn't understand), and the 12 virtues of rationality. I loved it for the beauty that reminded me of Asimov. I kept it in my bookmarks forever. (I knew him before he was famous? ;-))

I like SF (I have read many SF books but most were from before 1990 for some reason) and I'm a computer nerd, among other things. I want to learn everything, but I have a hard time putting in the work. I study to become a psychologist, scheduled to finish in 2013. My favorite area of psychology is social psychology, especially how humans make decisions, how humans are influenced by biases or norms or high status people. I'm married and have a daughter born in 2011.

I like to watch TV shows, but I have high standards. It is SF if it is based in science and rationality; otherwise it's just space drama/space action, and I have no patience for it. I also like psychological drama, but it has to be realistic and believable. Please give recommendations if you like. (edited:) Also, someone could explain in what way Star Trek, Babylon 5, or Battlestar Galactica is really SF, or Buffy is feminist, so I know if they are worth my while.

comment by CCC · 2012-10-25T08:44:22.566Z · score: 1 (1 votes) · LW · GW

Of those, the only one I've seen is Star Trek. They can be a bit handwavey about the science sometimes; I liked it, but if you're looking for hard science then you might not. As far as recommendations go, may I recommend the Chanur series (books, not TV) by one C.J. Cherryh?

comment by Alejandro1 · 2012-10-25T06:45:49.273Z · score: 1 (1 votes) · LW · GW

For realistic psychological drama, I haven't seen any show that beats Mad Men.

comment by shminux · 2012-10-24T20:40:52.771Z · score: 1 (5 votes) · LW · GW

Also, someone could explain why Star Trek, Babylon 5, Battlestar Galactica or Buffy is worth my while.

Not without knowing you well enough. Sherlock, on the other hand, should suit you just fine.

comment by kaneleh · 2012-10-25T06:33:16.475Z · score: 1 (1 votes) · LW · GW

Ah, yes, thank you. I have seen Sherlock and loved it. Too few episodes though! =)

comment by Error · 2012-09-14T16:14:36.319Z · score: 10 (10 votes) · LW · GW

Greetings. I am Error.

I think I originally found the place through a comment link on ESR's blog. I'm a geek, a gamer, a sysadmin, and a hobbyist programmer. I hesitate to identify with the label "rationalist"; much like the traditional meaning of "hacker", it feels like something someone else should say of me, rather than something I should prematurely claim for myself.

I've been working through the Sequences for about a year, off and on. I'm now most of the way through Metaethics. It's been a slow but rewarding journey, and I think the best thing I've taken out of it is the ability to identify bogus thoughts as they happen. (Identifying them is not always the same as correcting them, unfortunately.) Another benefit, not specifically from the sequences but from link-chasing, is the realization that successful mental self-engineering is possible; I think the tipping point for me there was Alicorn's post about polyhacking. The realization inspired me to try and beat the tar out of my akrasia, and I've done fairly well so far.

My current interests center around "updating efficiently." I just turned 30; I burnt my 20s establishing a living instead of learning all the stuff I wanted to learn. I figure I only have so many years left before neural rigor mortis begins to set in, and there's more stuff I want to learn and more skills I want to acquire than time to do it in. So, how does one learn as much truth as possible while wasting as little time as possible on things that are wrong? The difficulty I see is that a layperson to a subject (the C programming language for purposes of this example) can't tell the difference between K&R and Herbert Schildt, and may waste a lot of time on the latter when they should be inhaling the former or something similar. The "Best Textbooks" thread looks like it will be invaluable here.

A related concern is that some subjects in science don't lend themselves to easy verification. How does one construct an accurate model of a thing when, for reasons of cost or time, you can't directly compare your map (or your textbook's map) to the territory? I can read a great deal about, say, quantum mechanics, but without an atom smasher in my backyard it's difficult to check if what I'm reading is correct. That's fine when dealing with something you know is settled science. It's harder when trying to draw accurate conclusions about things that are politically charged (e.g. global warming), or for which evidence in any direction is slim. (e.g. cryonics)

Something else I'm interested in is the Less Wrong local meetups. There's one listed for my area (Atlanta) but it doesn't appear to be active. Finding interesting people is hard when you're excessively introverted. I've tried Mensa meetings, but most of the people there were nearly twice my age and I found it difficult to relate. Dragoncon worked out better (well, almost), but only happens once a year.

A fair number of intro posts seem to include religious leanings or (more frequently) lack thereof, so I'll add mine: I was raised mildly Christian but it began to fade out of my worldview around the time I read the bit about how disobedient children should be stoned to death. In retrospect my parents probably shouldn't have made me read the Bible on days that we skipped church. Churches leave that stuff out. Now I swing back and forth between atheism, misotheism, and discordianism, depending on how I'm feeling on any given day, and I don't take any of those seriously.

Is it still acceptable/advisable to comment in the Sequences, even as old as they are? Judging from the comment histories, some people still watch and respond to them. I doubt I'll muck around too much elsewhere until I've finished them.

comment by NancyLebovitz · 2012-09-14T17:13:23.930Z · score: 1 (1 votes) · LW · GW

Welcome!

It's acceptable and welcome to comment in the Sequences. The Recent Comments feature (link on the right sidebar, with distinct Recent Comments for the Main section and for the Discussion section) means that there's a chance that new comments on old threads will get noticed.

comment by shokwave · 2012-09-14T17:11:09.549Z · score: 1 (1 votes) · LW · GW

Welcome! Commenting on the Sequences isn't against any rules. You stand a chance of getting responses from those who watch the Recent Comments. However, in Discussion you'll see [SEQ RERUN] posts (which revisit old Sequences posts in chronological order) that encourage comments on the rerun, not the original. If you happen to be reading a post that's been recently re-run, you might get a better response in the rerun thread.

comment by cjb230 · 2012-07-21T16:41:13.335Z · score: 10 (10 votes) · LW · GW

Hi! Given how much time I've spent reading this site and its relatives, this post is overdue.

I'm 35, male, British and London-based, with a professional background in IT. I was raised Catholic, but when I was about 12, I had a de-conversion experience while in church. I remember leaving the pew during mass to go to the toilet, then walking back down the aisle during the eucharist, watching the priest moving stuff around the altar. It suddenly struck me as weird that so many people had gathered to watch a man in a funny dress pour stuff from one cup to another. So I identified as atheist or humanist for a long time. I can't remember any incident that made me start to identify as a rationalist, but I've been increasingly interested in evidence, biases and knowledge for over ten years now.

I've been lucky, I think, to have some breadth in my education: I studied Physics & Philosophy as an undergrad, Computer Science as a postgrad, and more recently rounded that off with an MBA. This gives me a handy toolset for approaching new problems, I think. I definitely want to learn more statistics though - it feels like there's a big gap in the arsenal.

There are a few stand-out things I have picked out from LW and OB so far. "Noticing that I am confused", and running toward that feeling rather than away from it, has helped at work. "Dissolving the question" has helped me to clarify some problems, and I'd like to be better at it. The material on how words can mislead has helped me to pay more attention to what people mean in discussion.

Non-rationality stuff: my lust to learn new things runs ahead of my ability to follow through, so I have far too many books! Like many people here, I have akrasia issues. I am interested in what can be done to improve quantity and quality of life, as well as productivity, including fitness and mindfulness meditation. Lastly, I'm taking a long trip to LA, flying on August 1, and I'd like to meet up with the LW community there.

comment by Gaviteros · 2012-07-19T07:03:39.662Z · score: 10 (10 votes) · LW · GW

Hello Less Wrong! (I posted this in the other July 2012 welcome thread as well. :P Though apparently it has too many comments at this point or something to that effect.)

My name is Ryan and I am a 22 year old technical artist in the Video Game industry. I recently graduated with honors from the Visual Effects program at Savannah College of Art and Design. For those who don't know much about the industry I am in, my skill set is somewhere between a software programmer, a 3D artist, and a video editor. I write code to create tools to speed up workflows for the 3D things I or others need to do to make a game, or cinematic.

Now, I found lesswrong.com through the Harry Potter and the Methods of Rationality podcast. Up until that point I had never heard of rationalism as a current state of being... so far I greatly resonate with the goals and lessons that have come up in the podcast, and what I have seen about rationalism. I am excited to learn more.

I wouldn't go so far as to claim the label for myself as of yet, as I don't know enough and I don't particularly like labels for the most part. I also know that I have several biases; I feel like I know the reasons and causes for most, but I have not removed them from my determinative process.

Furthermore I am not an atheist, nor am I a theist. I have chosen to let others figure out and solve the questions of sentient creators through science, and I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything. I just try to leave religion out of most of my determinations.

Anyway! I'm looking forward to reading and discussing more with all of you!

Current soapbox: Educational System of de-emphasizing critical thinking skills.

If you are interested you can check out my artwork and tools at www.ryandowlingsoka.com

comment by Grognor · 2012-07-25T04:58:26.629Z · score: 2 (2 votes) · LW · GW

I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything.

I disagree with this claim. If you are capable of understanding concepts like the Generalized Anti-Zombie Principle, you are more than capable of recognizing that there is no god and that that hypothesis wouldn't even be noticeable for a bounded intelligence unless a bunch of other people had already privileged it thanks to anthropomorphism.

Also, please don't call what we do here, "rationalism". Call it "rationality".

comment by Emile · 2012-07-19T13:18:44.574Z · score: 2 (2 votes) · LW · GW

Welcome to LessWrong!

There are a few of us here in the Game Industry, and a few more that like making games in their free time. I also played around with Houdini, though never produced anything worth showing.

comment by Gaviteros · 2012-07-20T06:35:48.672Z · score: 0 (0 votes) · LW · GW

Thanks for the welcome!

Houdini can be a lot of fun- but without a real goal it is almost too open for anything of value to be easily made. Messing around in Houdini is a time sink without a plan. :) That said, I absolutely love it as a program.

comment by cogwerk · 2013-03-25T22:00:23.692Z · score: 9 (9 votes) · LW · GW

Hi, I'm Edward and have been reading the occasional article on here for a while. I've finally decided to officially join as this year I'm starting to do more work on my knowledge and education (especially maths & science) and I like the thoughtful community I see here. I'm a programmer, but also have a passion for history. Just as I was finishing university, my thinking led me to abandon the family religion (many of my friends are still theists). I was going to keep thinking and exploring ideas but I ended up just living - now I want to begin thinking again.

Regards, Edward

comment by findis · 2012-12-26T06:20:13.853Z · score: 9 (9 votes) · LW · GW

Hi, I'm Liz.

I'm a senior at a college in the US, soon to graduate with a double major in physics and economics, and then (hopefully) pursue a PhD in economics. I like computer science and math too. I'm hoping to do research in economic development, but more relevantly to LW, I'm pretty interested in behavioral economics and in econometrics (statistics). Out of the uncommon beliefs I hold, the one that most affects my life is that since I can greatly help others at a small cost to myself, I should; I donate whatever extra money I have to charity, although it's not much (see givingwhatwecan.org).

I think I started behaving as a rationalist (without that word) when I became an atheist near the end of high school. But to rewind...

I was raised Christian, but Christianity was always more of a miserable duty than a comfort to me. I disliked the music and the long services and the awkward social interactions. I became an atheist for no good reason in the beginning of high school, but being an atheist was terrible. There was no one to forgive me when I screwed up, or pray to when the world was unbearably awful. My lack of faith made my father sad. Then, lying in bed and angsting about free will one night, I had some philosophical revelation, and it seemed that God must exist. I couldn't re-explain the revelation to myself, but I clung to the result and became seriously religious for the next year or so. But objections to the major strands of theism began to creep up on me. I wanted to believe in God, and I wanted to know the truth, and I found out that (surprise) having an ideal set of beliefs isn't compatible with seeking truth. I did lots of reading (mostly old-school philosophy), slowly changed my mind, then came out as an atheist (to close friends only) once the Bible Quiz season was over. (awk.)

At that point I decided to never lie to myself again. Not just to avoid comforting half-truths, but to actively question all beliefs I held, and to act on whatever conclusions I come to. After hard practice, unrelenting honesty towards myself is a habit I can't break, but I'm not sure it's actually a good policy. For example, a few white lies would've helped me move past a situation of extreme guilt last year.

Anyway, more recently, I read HPMOR and I'm now reading Kahneman's Thinking, Fast and Slow. I'm slowly working through the Sequences too. I always appreciate new reading recommendations.


I have some thoughts on Newcomb's Paradox. (Of course I am new to this, probably way off base, etc.) I think two boxes is the right way to go, and it seems that intuition towards one-boxing often comes from the idea that your decision somehow changes the contents of the boxes. (No reverse causality is supposed to be assumed, right?) Say that instead of an infallible superintelligence, the story changes to

"You go to visit your friend Ann, and her mom pulls you into the kitchen, where two boxes are sitting on a table. She tells you that box A has either $1 billion or $0, and box B has $1,000. She says you can take both boxes or just A, and that if she predicted you take box B she didn't put anything in A. She has done this to 100 of Ann's friends and has only been wrong for one of them. She is a great predictor because she has been spying on your philosophy class and reading your essays."

Terribly small sample size, but a friend told me this changes his answer from one box to two. As far as I can tell these changes are aesthetic and make the story clearer without changing the philosophy.


And, a question. Why is Bayes so central to this site? I use Bayesian reasoning regularly, but I learned Bayes' Theorem around the time I started thinking seriously about anything, so I'm not clear on what the alternative is. Why do y'all celebrate Bayes, rather than algebra or well-designed experiments?

Edit: Read farther in Thinking, Fast and Slow; question answered.

comment by John_Maxwell (John_Maxwell_IV) · 2013-01-12T08:48:19.940Z · score: 2 (2 votes) · LW · GW

Welcome to LW.

Also not an expert on Newcomb's Problem, but I'm a one-boxer because I choose to have part of my brain say that I'm a one-boxer, and have that part of my brain influence my behavior if I get into a Newcomb-like situation. Does that make any sense? Basically, I'm choosing to modify my decision algorithm so I no longer maximize expected value because I think having this other algorithm will get me better results.

comment by Desrtopa · 2012-12-26T07:01:23.409Z · score: 0 (2 votes) · LW · GW

"You go to visit your friend Ann, and her mom pulls you into the kitchen, where two boxes are sitting on a table. She tells you that box A has either $1 billion or $0, and box B has $1,000. She says you can take both boxes or just A, and that if she predicted you take box B she didn't put anything in A. She has done this to 100 of Ann's friends and has only been wrong for one of them. She is a great predictor because she has been spying on your philosophy class and reading your essays."

To be properly isomorphic to the Newcomb's problem, the chance of the predictor being wrong should approximate to zero.

If I thought that the chance of my friend's mother being wrong approximated to zero, I would of course choose to one-box. If I expected her to be an imperfect predictor who assumed I would behave as if I were in the real Newcomb's problem with a perfect predictor, then I would choose to two-box.

In Newcomb's Problem, if you choose on the basis of which choice is consistent with a higher expected return, then you would choose to one-box. You know that your choice doesn't cause the box to be filled, but given the knowledge that whether the money is in the box or not is contingent on a perfect predictor's assessment of whether or not you were likely to one-box, you should assign different probabilities to the box containing the money depending on whether you one-box or two-box. Since your own mental disposition is evidence of whether the money is in the box or not, you can behave as if the contents were determined by your choice.

comment by findis · 2012-12-29T20:51:23.025Z · score: 0 (0 votes) · LW · GW

To be properly isomorphic to the Newcomb's problem, the chance of the predictor being wrong should approximate to zero.

If I thought that the chance of my friend's mother being wrong approximated to zero, I would of course choose to one-box. If I expected her to be an imperfect predictor who assumed I would behave as if I were in the real Newcomb's problem with a perfect predictor, then I would choose to two-box.

Hm, I think I still don't understand the one-box perspective, then. Are you saying that if the predictor is wrong with probability p, you would take two-boxes for high p and one box for a sufficiently small p (or just for p=0)? What changes as p shrinks?

Or what if Omega/Ann's mom is a perfect predictor, but for a random 1% of the time decides to fill the boxes as if it made the opposite prediction, just to mess with you? If you one-box for p=0, you should believe that taking one box is correct (and generates $1 million more) in 99% of cases and that two boxes is correct (and generates $1000 more) in 1% of cases. So taking one box should still have a far higher expected value. But the perfect predictor who sometimes pretends to be wrong behaves exactly the same as an imperfect predictor who is wrong 1% of the time.

comment by Desrtopa · 2012-12-29T22:10:18.525Z · score: 0 (0 votes) · LW · GW

You choose the boxes according to the expected value of each box choice. For a 99% accurate predictor, the expected value of one-boxing is $990,000,000 (you get a billion 99% of the time, and nothing 1% of the time,) while the expected value of two-boxing is $10,001,000 (you get a thousand 99% of the time, and one billion and one thousand 1% of the time.)
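(For concreteness, here is a quick sketch of that expected-value calculation in Python. The payoff amounts and the 99% accuracy figure come from the scenario above; the function names are just for illustration.)

```python
# Expected value of one-boxing vs. two-boxing against a predictor
# that is right with probability p. Figures above use p = 0.99,
# a $1 billion opaque box, and a $1,000 visible box.

BIG = 1_000_000_000   # opaque box, filled iff one-boxing was predicted
SMALL = 1_000         # visible box, always contains $1,000

def one_box_ev(p):
    # With probability p the predictor correctly foresaw one-boxing,
    # so the opaque box contains BIG; otherwise it is empty.
    return p * BIG

def two_box_ev(p):
    # You always get SMALL; with probability (1 - p) the predictor
    # was wrong and filled the opaque box as well.
    return SMALL + (1 - p) * BIG

print(f"{one_box_ev(0.99):,.0f}")   # 990,000,000
print(f"{two_box_ev(0.99):,.0f}")   # 10,001,000
```

Note that one-boxing remains the higher-EV choice for any accuracy above about 50.05%, since the crossover is where p * BIG = SMALL + (1 - p) * BIG.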

The difference between this scenario and the one you posited before, where Ann's mom makes her prediction by reading your philosophy essays, is that she's presumably predicting on the basis of how she would expect you to choose if you were playing Omega. If you're playing against an agent who you know will fill the boxes according to how you would choose if you were playing Omega (we'll call it Omega-1,) then you should always two-box (if you would one-box against Omega, both boxes will contain money, so you get the contents of both. If you would two-box against Omega, only one box would contain money, and if you one-box you'll get the empty one.)

An imperfect predictor with random error is a different proposition from an imperfect predictor with nonrandom error.

Of course, if I were dealing with this dilemma in real life, my choice would be heavily influenced by considerations such as how likely it is that Ann's mom really has billions of dollars to give away.

comment by findis · 2013-01-02T00:59:04.235Z · score: 0 (0 votes) · LW · GW

The difference between this scenario and the one you posited before, where Ann's mom makes her prediction by reading your philosophy essays, is that she's presumably predicting on the basis of how she would expect you to choose if you were playing Omega.

Ok, but what if Ann's mom is right 99% of the time about how you would choose when playing her?

I agree that one-boxers make more money, with the numbers you used, but I don't think that those are the appropriate expected values to consider. Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both.

Another analogous situation would be that you walk into an exam, and the professor (who is a perfect or near-perfect predictor) announces that he has written down a list of people whom he has predicted will get fewer than half the questions right. If you are on that list, he will add 100 points to your score at the end. The people who get fewer than half of the questions right get higher scores, but you should still try to get questions right on the test... right? If not, does the answer change if the professor posts the list on the board?

I still think I'm missing something, since a lot of people have thought carefully about this and come to a different conclusion from me, but I'm still not sure what it is. :/

comment by ArisKatsaris · 2013-01-04T04:34:10.449Z · score: 1 (1 votes) · LW · GW

Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both.

You are focusing too much on the "already have been filled", as if the particular time of your particular decision is relevant. But if your decision isn't random (and yours isn't), then any individual decision is dependent on the decision algorithm you follow -- and can be calculated in exactly the same manner, regardless of time. Therefore in a sense your decision has been made BEFORE the filling of the boxes, and can affect their contents.

You may consider it easier to wrap your head around this if you think of the boxes being filled according to what result the decision theory you currently have would return in the situation, instead of what decision you'll make in the future. That helps keep in mind that causality still travels only one direction, but that a good predictor simply knows the decision you'll make before you make it and can act accordingly.

comment by Desrtopa · 2013-01-02T03:06:23.795Z · score: -1 (1 votes) · LW · GW

Ok, but what if Ann's mom is right 99% of the time about how you would choose when playing her?

I would one-box. I gave the relevant numbers on this in my previous comment; one-boxing has an expected value of $990,000,000 to the expected $10,001,000 if you two-box.

I agree that one-boxers make more money, with the numbers you used, but I don't think that those are the appropriate expected values to consider. Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both.

When you're dealing with a problem involving an effective predictor of your own mental processes (it's not necessary for such a predictor to be perfect for this reasoning to become salient, it just makes the problems simpler,) your expectation of what the predictor will do or has already done will be at least partly dependent on what you intend to do yourself. You know that either the opaque box is filled, or it is not, but the probability you assign to the box being filled depends on whether you intend to open it or not.

Let's try a somewhat different scenario. Suppose I have a time machine that allows me to travel back a day in the past. Doing so creates a stable time loop, like the time turners in Harry Potter or HPMoR (on a side note, our current models of relativity suggest that such loops are possible, if very difficult to contrive.) You're angry at me because I've insulted your hypothetical scenario, and are considering hitting me in retaliation. But you happen to know that I retaliate against people who hit me by going back in time and stealing from them, which I always get away with due to having perfect alibis (the police don't believe in my time machine.) You do not know whether I've stolen from you or not, but if I have, it's already happened. You would feel satisfied by hitting me, but it's not worth being stolen from. Do you choose to hit me or not?

Another analogous situation would be that you walk into an exam, and the professor (who is a perfect or near-perfect predictor) announces that he has written down a list of people whom he has predicted will get fewer than half the questions right. If you are on that list, he will add 100 points to your score at the end. The people who get fewer than half of the questions right get higher scores, but you should still try to get questions right on the test... right? If not, does the answer change if the professor posts the list on the board?

If the professor is a perfect predictor, then I would deliberately get most of the problems wrong, thereby all but guaranteeing a score of over 100 points. I would have to be very confident that I would get a score below fifty even if I weren't trying to on purpose before trying to get all the questions right would give me a higher expected score than trying to get most of the questions wrong.

If the professor posts the list on the board, then of course it should affect the answer. If my name isn't on the list, then he's not going to add the 100 points to my test in any case, so my only recourse to maximizing my grade is to try my best on the test. If my name is on the list, then he's already predicted that I'm going to score below 50, so whether he's a perfect predictor or not, I should try to do well so that he's adding 100 points to as high a score as I can manage.

The difference between the scenario where he writes the names on the board and the scenario where he doesn't is that in the former, my expectations of his actions don't vary according to my own, whereas in the latter, they do.

comment by wedrifid · 2013-01-02T08:01:49.541Z · score: 1 (1 votes) · LW · GW

If the professor posts the list on the board, then of course it should affect the answer. If my name isn't on the list, then he's not going to add the 100 points to my test in any case, so my only recourse to maximizing my grade is to try my best on the test. If my name is on the list, then he's already predicted that I'm going to score below 50, so whether he's a perfect predictor or not, I should try to do well so that he's adding 100 points to as high a score as I can manage.

I believe you are making a mistake. Specifically, you are implementing a decision algorithm that ensures that "you lose" is a correct self fulfilling prophecy (in fact you ensure that it is the only valid prediction he could make). I would throw the test (score in the 40s) even when my name is not on the list.

The difference between the scenario where he writes the names on the board and the scenario where he doesn't is that in the former, my expectations of his actions don't vary according to my own, whereas in the latter, they do.

Do you also two box on Transparent Newcomb's?

comment by Desrtopa · 2013-01-02T22:29:34.519Z · score: 0 (0 votes) · LW · GW

I believe you are making a mistake. Specifically, you are implementing a decision algorithm that ensures that "you lose" is a correct self fulfilling prophecy (in fact you ensure that it is the only valid prediction he could make). I would throw the test (score in the 40s) even when my name is not on the list.

If I were in a position to predict that this were the sort of thing the professor might do, then I would precommit to throwing the test should he implement such a procedure. But you could just as easily end up with the perfect predictor professor saying that in the scoring for this test, he will automatically fail anyone he predicts would throw the test in the previously described scenario. I don't think there's any point in time where making such a precommitment would have positive expected value. By the time I know it would have been useful, it's already too late.

Do you also two box on Transparent Newcomb's?

Edit: I think I was mistaken about what problem you were referring to. If I'm understanding the question correctly, yes I would, because until the scenario actually occurs I have no reason to suspect any precommitment I make is likely to bring about more favorable results. For any precommitment I could make, the scenario could always be inverted to punish that precommitment, so I'd just do what has the highest expected utility at the time at which I'm presented with the scenario. It would be different if my probability distribution on what precommitments would be useful weren't totally flat.

comment by Desrtopa · 2013-01-03T00:58:47.296Z · score: 2 (2 votes) · LW · GW

As an aside, I'll note that a lot of the solutions bandied around here to decision theory problems remind me of something from Magic: The Gathering which I took notice of back when I still followed it.

When I watched my friends play, one would frequently respond to another's play with "Before you do that, I-" and use some card or ability to counter their opponent's move. The rules of MTG let you do that sort of thing, but I always thought it was pretty silly, because they did not, in fact, have any idea that it would make sense to make that play until after seeing their opponent's move. Once they see their opponent's play, they get to retroactively decide what to do "before" their opponent can do it.

In real life, we don't have that sort of privilege. If you're in a Counterfactual Mugging scenario, for instance, you might be inclined to say "I ought to be the sort of person who would pay Omega, because if the coin had come up the other way, I would be making a lot of money now, so being that sort of person would have positive expected utility for this scenario." But this is "Before you do that-" type reasoning. You could just as easily have ended up in a situation where Omega comes and tells you "I decided that if you were the sort of person who would not pay up in a Counterfactual Mugging scenario, I would give you a million dollars, but I've predicted that you would, so you get nothing."

When you come up with a solution to an Omega-type problem involving some type of precommitment, it's worth asking "would this precommitment have made sense when I was in a position of not knowing Omega existed, or having any idea what it would do even if it did exist?"

In real life, we sometimes have to make decisions dealing with agents who have some degree of predictive power with respect to our thought processes, but their motivations are generally not as arbitrary as those attributed to Omega in most hypotheticals.

comment by TheOtherDave · 2013-01-03T04:23:21.984Z · score: 0 (0 votes) · LW · GW

Can you give a specific example of a bandied-around solution to a decision-theory problem where predictive power is necessary in order to implement that solution?

I suspect I disagree with you here -- or, rather, I agree with the general principle you've articulated, but I suspect I disagree that it's especially relevant to anything local -- but it's difficult to be sure without specifics.

With respect to the Counterfactual Mugging you reference in passing, for example, it seems enough to say "I ought to be the sort of person who would do whatever gets me positive expected utility"; I don't have to specifically commit to pay or not pay. Isn't it? But perhaps I've misunderstood the solution you're rejecting.

comment by Desrtopa · 2013-01-03T16:19:00.174Z · score: 1 (1 votes) · LW · GW

Well, if your decision theory tells you you ought to be the sort of person who would pay up in a Counterfactual Mugging, because that gets you positive utility, then you could end up with Omega coming and saying "I would have given you a million dollars if your decision theory said not to pay out in a counterfactual mugging, but since you would, you don't get anything."

When you know nothing about Omega, I don't think there's any positive expected utility in choosing to be the sort of person who would have positive expected utility in a Counterfactual Mugging scenario, because you have no reason to suspect it's more likely than the inverted scenario where being that sort of person will get you negative utility. The probability distribution is flat, so the utilities cancel out.

Say Omega comes to you with a Counterfactual Mugging on Day 1. On Day 0, would you want to be the sort of person who pays out in a Counterfactual Mugging? No, because the probabilities of it being useful or harmful cancel out. On Day 1, when given the dilemma, do you want to be the sort of person who pays out in a Counterfactual Mugging? No, because now it only costs you money and you get nothing out of it.

So there's no point in time where deciding "I should be the sort of person who pays out in a Counterfactual Mugging" has positive expected utility.

Reasoning this way means, of course, that you don't get the money in a situation where Omega would only pay you if it predicted you would pay up, but you do get the money in situations where Omega pays out only if you wouldn't pay out. The latter possibility seems less salient from the "before you do that-" standpoint of a person contemplating a Counterfactual Mugging, but there's no reason to assign it a lower probability before the fact. The best you can do is choose according to whatever has the highest expected utility at any given time.

Omega could also come and tell me "I decided that I would steal all your money if you hit the S key on your keyboard between 10:00-11:00 am on a Sunday, and you just did," but I don't let this influence my typing habits. You don't want to alter your decision theories or general behavior in advance of specific events that are no more probable than their inversions.

comment by TheOtherDave · 2013-01-03T17:01:16.617Z · score: 0 (0 votes) · LW · GW

So there's no point in time where deciding "I should be the sort of person who pays out in a Counterfactual Mugging" has positive expected utility.

Sure, I agree.

What I'm suggesting is that "I should be the sort of person who does the thing that has positive expected utility" causes me to pay out in a Counterfactual Mugging, and causes me to not pay out in a Counterfactual Antimugging, without requiring any prophecy. And that as far as I know, this is representative of the locally bandied-around solutions to decision-theory problems.

Is this not true?

"I decided that I would steal all your money if you hit the S key on your keyboard between 10:00-11:00 am on a Sunday, and you just did,"

I agree that this is not something I can sensibly protect against. I'm not actually sure I would call it a decision theory problem at all.

comment by Desrtopa · 2013-01-03T17:23:25.061Z · score: 1 (1 votes) · LW · GW

What I'm suggesting is that "I should be the sort of person who does the thing that has positive expected utility" causes me to pay out in a Counterfactual Mugging, and causes me to not pay out in a Counterfactual Antimugging, without requiring any prophecy. And that as far as I know, this is representative of the locally bandied-around solutions to decision-theory problems.

In the inversion I suggested to the Counterfactual Mugging, your payout is determined on the basis of whether you pay up in the Counterfactual Mugging. In the Counterfactual Mugging, Omega predicts whether you would pay out in the Counterfactual Mugging, and if you would, you get a 50% shot at a million dollars. In the inverted scenario, Omega predicts whether you would pay out in the Counterfactual Mugging scenario, and if you wouldn't, you get a shot at a million dollars.

Being the sort of person who would pay out in a Counterfactual Mugging only brings positive expected utility if you expect the Counterfactual Mugging scenario to be more likely than the inverted Counterfactual Mugging scenario.

The inverted Counterfactual Mugging scenario, like the case where Omega rewards or punishes you based on your keyboard usage, isn't exactly a decision theory problem, in that once it arises, you don't get to make a decision, but it doesn't need to be.

When the question is "should I be the sort of person who pays out in a Counterfactual Mugging?" if the chance of it being helpful is balanced out by an equal chance of it being harmful, then it doesn't matter whether the situations that balance it out require you to make decisions at all, only that the expected utilities balance.

If you take as a premise "Omega simply doesn't do that sort of thing, it only provides decision theory dilemmas where the results are dependent on how you would respond in this particular dilemma," then our probability distribution is no longer flat, and being the sort of person who pays out in a Counterfactual Mugging scenario becomes utility maximizing. But this isn't a premise we can take for granted. Omega is already posited as an entity which can judge your decision algorithms perfectly, and imposes dilemmas which are highly arbitrary.
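Desrtopa's balance argument can be made concrete with a toy calculation. The payoffs below are purely illustrative (a $100 demand and a $10,000 prize at even odds, mirrored in the inverted scenario); only the comparison between dispositions matters:

```python
def expected_utility(disposition, p_mugging):
    """Expected utility of a fixed disposition, given the probability that
    a Counterfactual Mugging (rather than its inversion) is the scenario
    Omega actually runs."""
    # Counterfactual Mugging: fair coin; a "payer" loses $100 on one outcome
    # and wins $10,000 on the other. A "refuser" gets nothing either way.
    eu_mugging = 0.5 * 10_000 - 0.5 * 100 if disposition == "payer" else 0.0
    # Inverted mugging: Omega gives the 50% shot at the prize only to agents
    # who would NOT pay in the Mugging.
    eu_inverted = 0.5 * 10_000 if disposition == "refuser" else 0.0
    return p_mugging * eu_mugging + (1 - p_mugging) * eu_inverted

# On a flat prior over the two scenarios, refusing comes out slightly ahead
# (the payer still risks the $100); paying only wins once the Mugging is
# judged sufficiently more likely than its inversion.
print(expected_utility("payer", 0.5), expected_utility("refuser", 0.5))  # 2475.0 2500.0
```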

comment by wedrifid · 2013-01-03T04:30:18.053Z · score: 0 (2 votes) · LW · GW

Edit: I think I was mistaken about what problem you were referring to. If I'm understanding the question correctly, yes I would, because until the scenario actually occurs I have no reason to suspect any precommitment I make is likely to bring about more favorable results. For any precommitment I could make, the scenario could always be inverted to punish that precommitment, so I'd just do what has the highest expected utility at the time at which I'm presented with the scenario. It would be different if my probability distribution on what precommitments would be useful weren't totally flat.

You don't need a precommitment to make the correct choice. You just make it. That does happen to include one-boxing on Transparent Newcomb's (and conventional Newcomb's, for the same reason). The 'but what if someone punishes me for being the kind of person who makes this choice' is a fully general excuse not to make rational choices. The reason it is an invalid fully general excuse is that every scenario that can be contrived to result in 'bad for you' is one in which your rewards are determined by your behavior in an entirely different game to the one in question.

For example your "inverted Transparent Newcomb's" gives you a bad outcome, but not because of your choice. It isn't anything to do with a decision because you don't get to make one. It is punishing you for your behavior in a completely different game.

comment by Desrtopa · 2013-01-03T16:21:34.322Z · score: 0 (2 votes) · LW · GW

Could you describe the Transparent Newcomb's problem to me so I'm sure we're on the same page?

"What if I face a scenario that punishes me for being the sort of person who makes this choice?" is not a fully general counterargument, it only applies in cases where the expected utilities of the scenarios cancel out.

If you're the sort of person who won't honor promises made under duress, and other people are sufficiently effective judges to recognize this, then you avoid people placing you under duress to extract promises from you. But suppose you're captured by enemies in a war, and they say "We could let you go if you made some promises to help out our cause when you were free, but since we can't trust you to keep them, we're going to keep you locked up and torture you to make your country want to ransom you more."

This doesn't make the expected utilities of "Keep promises made under duress" vs. "Do not keep promises made under duress" cancel out, because you have an abundance of information with respect to how relatively likely these situations are.

comment by wedrifid · 2013-01-03T18:43:58.111Z · score: 0 (0 votes) · LW · GW

Could you describe the Transparent Newcomb's problem to me so I'm sure we're on the same page?

Take a suitable description of Newcomb's problem (you know, with Omega and boxes). Then make the boxes transparent. That is the extent of the difference. I assert that being able to see the money makes no difference to whether one should one box or two box (and also that one should one box).

comment by Desrtopa · 2013-01-03T19:28:36.699Z · score: -1 (1 votes) · LW · GW

Well, if you know in advance that Omega is more likely to do this than it is to impose a dilemma where it will fill both boxes only if you two-box, then I'd agree that this is an appropriate solution.

I think that if in advance you have a flat probability distribution for what sort of Omega scenarios might occur (Omega is just as likely to fill both boxes only if you would two-box in the first scenario as it is to fill both boxes only if you would one-box,) then this solution doesn't make sense.

In the transparent Newcomb's problem, when both boxes are filled, does it benefit you to be the sort of person who would one-box? No, because you get less money that way. If Omega is more likely to impose the transparent Newcomb's problem than its inversion, then prior to Omega foisting the problem on you, it does benefit you to be the sort of person who would one-box (and you can't change what sort of person you are mid-problem.)

If Omega only presents transparent Newcomb's problems of the first sort, where the box containing more money is filled only if the person presented with the boxes would one-box, then situations where a person is presented with two transparent boxes of money and picks both will never arise. People who would one-box in the transparent Newcomb's problem come out ahead.

If Omega is equally likely to present transparent Newcomb's problems of the first sort, or inversions where Omega fills both boxes only for people it predicts would two-box in problems of the first sort, then two-boxers come out ahead, because they're equally likely to get the contents of the box with more money, but always get the box with less money, while the one-boxers never do.

You can always contrive scenarios to reward or punish any particular decision theory. The Transparent Newcomb's Problem rewards agents which one-box in the Transparent Newcomb's Problem over agents which two-box, but unless this sort of problem is more likely to arise than ones which reward agents which two-box in Transparent Newcomb's Problem over ones that one-box, that isn't an argument favoring decision theories which say you should one-box in Transparent Newcomb's.

If you keep a flat probability distribution of what Omega would do to you prior to actually being put into a dilemma, expected-utility-maximizing still favors one-boxing in the opaque version of the dilemma (because based on the information available to you, you have to assign different probabilities to the opaque box containing money depending on whether you one-box or two-box,) but not one-boxing in the transparent version.
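The flat-prior comparison above can be sketched numerically, using the conventional (here purely illustrative) box amounts of $1,000 and $1,000,000:

```python
def payoff(strategy, scenario):
    """Payoff in Transparent Newcomb's ('standard') or its inversion.
    Standard: Omega fills the big box iff it predicts the agent one-boxes.
    Inverted: Omega fills the big box iff it predicts the agent would
    two-box in the standard game."""
    small, big = 1_000, 1_000_000
    big_filled = (scenario == "standard") == (strategy == "one-box")
    if strategy == "one-box":
        return big if big_filled else 0       # one-boxers never take the small box
    return small + (big if big_filled else 0)  # two-boxers always take the small box

# On a flat prior over the two variants, the two-boxer's edge is exactly
# the small box, matching the argument above.
for strategy in ("one-box", "two-box"):
    flat_eu = 0.5 * payoff(strategy, "standard") + 0.5 * payoff(strategy, "inverted")
    print(strategy, flat_eu)  # one-box 500000.0, two-box 501000.0
```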

comment by wedrifid · 2013-01-03T19:55:44.189Z · score: 0 (2 votes) · LW · GW

You can always contrive scenarios to reward or punish any particular decision theory. The Transparent Newcomb's Problem rewards agents which one-box in the Transparent Newcomb's Problem over agents which two-box, but unless this sort of problem is more likely to arise than ones which reward agents which two-box in Transparent Newcomb's Problem over ones that one-box, that isn't an argument favoring decision theories which say you should one-box in Transparent Newcomb's.

No, Transparent Newcomb's, Newcomb's and Prisoner's Dilemma with full mutual knowledge don't care what the decision algorithm is. They reward agents that take one box and mutually cooperate for no other reason than they decide to make the decision that benefits them.

You have presented a fully general argument for making bad choices. It can be used to reject "look both ways before crossing a road" just as well as it can be used to reject "get a million dollars by taking one box". It should be applied to neither.

comment by Desrtopa · 2013-01-03T22:03:35.774Z · score: 0 (2 votes) · LW · GW

It's not a fully general counterargument, it demands that you weigh the probabilities of potential outcomes.

If you look both ways at a crosswalk, you could be hit by a falling object that you would have avoided if you hadn't paused in that location. Does that justify not looking both ways at a crosswalk? No, because the probability of something bad happening to you if you don't look both ways at the crosswalk is higher than if you do.

You can always come up with absurd hypotheticals which would punish the behavior that would normally be rational in a particular situation. This doesn't justify being paralyzed with indecision, because the probabilities of the absurd hypotheticals materializing are minuscule. But the possibilities of absurd hypotheticals will tend to balance out other absurd hypotheticals.

Transparent Newcomb's Problem is a problem that rewards agents which one-box in Transparent Newcomb's Problem, via Omega predicting whether the agent one-boxes in Transparent Newcomb's Problem and filling the boxes accordingly. Inverted Transparent Newcomb's Problem is one that rewards agents that two-box in Transparent Newcomb's Problem via Omega predicting whether the agent two-boxes in Transparent Newcomb's Problem, and filling the boxes accordingly.

If one type of situation is more likely than the other, you adjust your expected utilities accordingly, just as you adjust your expected utility of looking both ways before you cross the street because you're less likely to suffer an accident if you do than if you don't.

comment by wedrifid · 2013-01-04T00:24:36.072Z · score: 0 (0 votes) · LW · GW

Transparent Newcomb's Problem is a problem that rewards agents which one-box in Transparent Newcomb's Problem

Yes.

Inverted Transparent Newcomb's Problem is one that rewards agents that two-box in Transparent Newcomb's Problem via Omega predicting whether the agent two-boxes in Transparent Newcomb's Problem, and filling the boxes accordingly.

That isn't an 'inversion' but instead an entirely different problem in which agents are rewarded for things external to the problem.

comment by Desrtopa · 2013-01-04T03:51:24.017Z · score: 0 (2 votes) · LW · GW

There's no reason an agent you interact with in a decision problem can't respond to how it judges you would react to different decision problems.

Suppose Andy and Sandy are bitter rivals, and each wants the other to be socially isolated. Andy declares that he will only cooperate in Prisoner's Dilemma type problems with people he predicts would cooperate with him, but not Sandy, while Sandy declares that she will only cooperate in Prisoner's Dilemma type problems with people she predicts would cooperate with her, but not Andy. Both are highly reliable predictors of other people's cooperation patterns.

If you end up in a Prisoner's Dilemma type problem with Andy, it benefits you to be the sort of person who would cooperate with Andy, but not Sandy, and vice versa if you end up in a Prisoner's Dilemma type problem with Sandy. If you might end up in a Prisoner's Dilemma type problem with either of them, you have higher expected utility if you pick one in advance to cooperate with, because both would defect against an opportunist willing to cooperate with whichever one they ended up in a Prisoner's Dilemma with first.
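Under standard (hypothetical) Prisoner's Dilemma payoffs — 3 for mutual cooperation, 1 for mutual defection, 0 for cooperating against a defector, 5 for defecting against a cooperator — the Andy/Sandy point can be sketched as:

```python
def my_payoff(would_cooperate_with, opponent):
    """My payoff when meeting `opponent` ('Andy' or 'Sandy'), given the set
    of rivals my disposition would cooperate with. Each rival cooperates
    only with agents who would cooperate with them but NOT with the other."""
    rival = "Sandy" if opponent == "Andy" else "Andy"
    they_cooperate = opponent in would_cooperate_with and rival not in would_cooperate_with
    i_cooperate = opponent in would_cooperate_with
    if i_cooperate:
        return 3 if they_cooperate else 0
    return 5 if they_cooperate else 1  # they never cooperate with a defector here

# With a 50/50 chance of meeting either rival, committing to one beats
# both opportunism (cooperate with whomever you meet) and total defection.
for policy in ({"Andy"}, {"Andy", "Sandy"}, set()):
    eu = 0.5 * my_payoff(policy, "Andy") + 0.5 * my_payoff(policy, "Sandy")
    print(sorted(policy), eu)
```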

That isn't an 'inversion' but instead an entirely different problem in which agents are rewarded for things external to the problem.

If you want to call it that, you may, but I don't see that it makes a difference. If ending up in Transparent Newcomb's Problem is no more likely than ending up in an entirely different problem which punishes agents for one-boxing in Transparent Newcomb's Problem, then I don't see that it's advantageous to one-box in Transparent Newcomb's Problem. You can draw a line between problems determined by factors external to the problem, and problems determined only by factors internal to the problem, but I don't think this is a helpful distinction to apply here. What matters is which problems are more likely to occur and their utility payoffs.

In any case, I would honestly rather not continue this discussion with you, at least if TheOtherDave is still interested in continuing the discussion. I don't have very high expectations of productivity from a discussion with someone who has such low expectations of my own reasoning as to repeatedly and erroneously declare that I'm calling up a fully general counterargument which could just as well be used to argue against looking both ways at a crosswalk. If possible, I would much rather discuss this with someone who's prepared to operate under the presumption that I'm willing and able to be reasonable.

comment by wedrifid · 2013-01-04T05:13:38.700Z · score: 1 (1 votes) · LW · GW

If possible, I would much rather discuss this with someone who's prepared to operate under the presumption that I'm willing and able to be reasonable.

Don't confuse an intuition aid that failed to help you with a personal insult. Apart from making you feel bad, it'll ensure you miss the point. Hopefully Vladimir's explanation will be more successful.

comment by Desrtopa · 2013-01-04T06:40:40.252Z · score: 0 (2 votes) · LW · GW

I didn't take it as a personal insult, I took it as a mistaken interpretation of my own argument which would have been very unlikely to come from someone who expected me to have reasoned through my position competently and was making a serious effort to understand it. So while it was not a personal insult, it was certainly insulting.

I may be failing to understand your position, and rejecting it only due to a misunderstanding, but from where I stand, your assertion makes it appear tremendously unlikely that you understand mine.

If you think that my argument generalizes to justifying any bad decision, including cases like not looking both ways when I cross the street, when I say otherwise, it would help if you would explain why you think it generalizes in this way in spite of the reasons I've given for believing otherwise, rather than simply repeating the assertion without acknowledging them. Otherwise it looks like you're either not making much effort to comprehend my position, or don't care much about explaining yours, and are only interested in contradicting someone you think is wrong.

Edit: I would prefer you not respond to this comment, and in any case I don't intend to respond to a response, because I don't expect this conversation to be productive, and I hate going to bed wondering how I'm going to continue tomorrow what I expect to be a fruitless conversation.

comment by Vladimir_Nesov · 2013-01-04T04:21:51.184Z · score: 1 (1 votes) · LW · GW

(I haven't followed the discussion, so might be missing the point.)

If ending up in Transparent Newcomb's Problem is no more likely than ending up in an entirely different problem which punishes agents for one-boxing in Transparent Newcomb's Problem, then I don't see that it's advantageous to one-box in Transparent Newcomb's Problem.

If you are actually in problem A, it's advantageous to be solving problem A, even if there is another problem B in which you could have much more likely ended up. You are in problem A by stipulation. At the point where you've landed in the hypothetical of solving problem A, discussing problem B is a wrong thing to do, it interferes with trying to understand problem A. The difficulty of telling problem A from problem B is a separate issue that's usually ruled out by hypothesis. We might discuss this issue, but that would be a problem C that shouldn't be confused with problems A and B, where by hypothesis you know that you are dealing with problems A and B. Don't fight the hypothetical.

comment by Desrtopa · 2013-01-04T06:01:30.401Z · score: 0 (2 votes) · LW · GW

In the case of Transparent Newcomb's though, if you're actually in the problem, then you can already see either that both boxes contain money, or that one of them doesn't. If Omega only fills the second box, which contains more money, if you would one-box, then by the time you find yourself in the problem, whether you would one-box or two-box in Transparent Newcomb's has already had its payoff.

If I would two-box in a situation where I see two transparent boxes which both contain money, that ensures that I won't find myself in a situation where Omega lets me pick whether to one-box or two-box, but only fills both boxes if I would one-box. On the other hand, a person who one-boxes in that situation could not find themselves in a situation where they can pick one or both of two filled boxes, where Omega would only fill both boxes if they would two-box in the original scenario.

So it seems to me that if I follow the principle of solving whatever situation I'm in according to maximum expected utility, then unless the Transparent Newcomb's Problem is more probable, I will become the sort of person who can't end up in Transparent Newcomb's problems with a chance to one-box for large amounts of money, but can end up in the inverted situation which rewards two-boxing, for more money. I don't have the choice of being the sort of person who gets rewarded by both scenarios, just as I don't have the choice of being someone who both Andy and Sandy will cooperate with.

I agree that a one-boxer comes out ahead in Transparent Newcomb's, but I don't think it follows that I should one-box in Transparent Newcomb's, because I don't think having a decision theory which results in better payouts in this particular decision theory problem results in higher utility in general. I think that I "should" be a person who one-boxes in Transparent Newcomb's in the same sense that I "should" be someone who doesn't type between 10:00 and 11:00 on a Sunday if I happen to be in a world where Omega has, unbeknownst to anyone, arranged to rob me if I do. In both cases I've lucked into payouts due to a decision process which I couldn't reasonably have expected to improve my utility.

comment by Vladimir_Nesov · 2013-01-04T15:49:36.154Z · score: 1 (1 votes) · LW · GW

I agree that a one-boxer comes out ahead in Transparent Newcomb's, but I don't think it follows that I should one-box in Transparent Newcomb's, because I don't think having a decision theory which results in better payouts in this particular decision theory problem results in higher utility in general.

We are not discussing what to do "in general", or the algorithms of a general "I" that should or shouldn't have the property of behaving a certain way in certain problems, we are discussing what should be done in this particular problem, where we might as well assume that there is no other possible problem, and all utility in the world only comes from this one instance of this problem. The focus is on this problem only, and no role is played by the uncertainty about which problem we are solving, or by the possibility that there might be other problems. If you additionally want to avoid logical impossibility introduced by some of the possible decisions, permit a very low probability that either of the relevant outcomes can occur anyway.

If you allow yourself to consider alternative situations, or other applications of the same decision algorithm, you are solving a different problem, a problem that involves tradeoffs between these situations. You need to be clear on which problem you are considering, whether it's a single isolated problem, as is usual for thought experiments, or a bigger problem. If it's a bigger problem, that needs to be prominently stipulated somewhere, or people will assume that it's otherwise and you'll talk past each other.

It seems as if you currently believe that the correct solution for isolated Transparent Newcomb's is one-boxing, but the correct solution in the context of the possibility of other problems is two-boxing. Is it so? (You seem to understand "I'm in Transparent Newcomb's problem" incorrectly, which further motivates fighting the hypothetical, suggesting that for the general player that has other problems on its plate two-boxing is better, which is not so, but it's a separate issue, so let's settle the problem statement first.)

comment by Desrtopa · 2013-01-04T16:46:58.503Z · score: 0 (0 votes) · LW · GW

It seems as if you currently believe that the correct solution for isolated Transparent Newcomb's is one-boxing, but the correct solution in the context of the possibility of other problems is two-boxing. Is it so?

Yes.

I don't think the question of the most advantageous solution for isolated Transparent Newcomb's is likely to be a very useful one, though.

I don't think it's possible to have a general case decision theory which gets the best possible results for every situation (see the Andy and Sandy example, where getting good results for one prisoner's dilemma necessitates getting bad results from the other, so any decision theory wins in at most one of the two.)

That being the case, I don't think that a goal of winning in Transparent Newcomb's Problem is a very meaningful one for a decision theory. The way I see it, it seems like focusing on coming out ahead in Sandy prisoner's dilemmas, while disregarding the relative likelihoods of ending up in a dilemma with Andy or Sandy, and assuming that if you ended up in an Andy prisoner dilemma you could use the same decision process to come out ahead in that too.

comment by findis · 2013-01-04T05:55:55.695Z · score: 0 (0 votes) · LW · GW

Do you choose to hit me or not?

No, I don't, since you have a time-turner. (To be clear, non-hypothetical-me wouldn't hit non-hypothetical-you either.) I would also one-box if I thought that Omega's predictive power was evidence that it might have a time turner or some other way of affecting the past. I still don't think that's relevant when there's no reverse causality.

Back to Newcomb's problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I'd two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict.

I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes... but doesn't change the boxes.

comment by Desrtopa · 2013-01-04T06:28:28.665Z · score: 1 (1 votes) · LW · GW

Back to Newcomb's problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I'd two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict.

If the agent filling the boxes follows a consistent, predictable pattern you're outside of, you can certainly use that information to do this. In Newcomb's Problem though, Omega follows a consistent, predictable pattern you're inside of. It's logically inconsistent for you to two box and find they both contain money, or pick one box and find it's empty.

I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes... but doesn't change the boxes.

Why is whether your decision actually changes the boxes important to you? If you know that picking one box will result in your receiving a million dollars, and picking two boxes will result in getting a thousand dollars, do you have any concern that overrides making the choice that you expect to make you more money?

A decision process of "at all times, do whatever I expect to have the best results" will, at worst, reduce to exactly the same behavior as "at all times, do whatever I think will have a causal relationship with the best results." In some cases, such as Newcomb's problem, it has better results. What do you think the concern with causality actually does for you?

We don't always agree here on what decision theories get the best results (as you can see by observing the offshoot of this conversation between Wedrifid and myself,) but what we do generally agree on here is that the quality of decision theories is determined by their results. If you argue yourself into a decision theory that doesn't serve you well, you've only managed to shoot yourself in the foot.

comment by findis · 2013-01-04T06:50:44.516Z · score: 0 (0 votes) · LW · GW

Why is whether your decision actually changes the boxes important to you? [....] If you argue yourself into a decision theory that doesn't serve you well, you've only managed to shoot yourself in the foot.

In the absence of my decision affecting the boxes, taking one box and leaving $1000 on the table still looks like shooting myself in the foot. (Of course if I had the ability to precommit to one-box I would -- so, okay, if Omega ever asks me this I will take one box. But if Omega asked me to make a decision after filling the boxes and before I'd made a precommitment... still two boxes.)

I think I'm going to back out of this discussion until I understand decision theory a bit better.

comment by Desrtopa · 2013-01-04T06:56:15.249Z · score: 2 (2 votes) · LW · GW

I think I'm going to back out of this discussion until I understand decision theory a bit better.

Feel free. You can revisit this conversation any time you feel like it. Discussion threads never really die here, there's no community norm against replying to comments long after they're posted.

comment by [deleted] · 2012-08-26T19:25:59.395Z · score: 9 (9 votes) · LW · GW

Hello LW,

Last Thursday, I was asked by User:rocurley if, in his absence, I wanted to organize a hiking event (originally my idea) for this week's DC metro area meetup, during which I discovered I could not make posts, etc. here because I had zero karma. I chose to cancel the meetup on account of weather. I had registered my account previously, but realizing that I might have need to post here in the future, and that I had next to nothing to lose, I have decided to introduce myself finally.

I discovered LW through HPMOR, through Tvtropes, I believe. I've read some LW articles, but not others. Areas of interest include sciences (I have a BS in physics), psychology, personality disorders, some areas of philosophy, reading, and generally learning new things. One of my favorite books (if not /the/ favorite) is Gödel, Escher, Bach, which I read for the first (and certainly not last) time while I was in college, 5+ years ago.

I'm extremely introverted, and I am aware that I have certain anxiety issues; while rationality has not helped with the actual feeling of anxiety, it has allowed me to push through it, in some cases.

comment by Vaniver · 2012-08-26T21:30:43.294Z · score: 1 (1 votes) · LW · GW

Welcome!

I discovered LW through HPMOR, through Tvtropes, I believe. I've read some LW articles, but not others.

Specific! :P Which is the most interesting one you've read so far? We might have recommendations of similar ones that you would like.

I'm extremely introverted, and I am aware that I have certain anxiety issues; while rationality has not helped with the actual feeling of anxiety, it has allowed me to push through it, in some cases.

So, I found my introversion much easier to manage when I started scheduling time by myself to recharge, and scheduling infrequent social events to make sure I didn't get into too much of a cave. It had been easy to get overwhelmed with social events near each other if I didn't have something on my calendar reminding me "you'll want to read a book by yourself for a few hours before you go to another event." That sort of thing might be helpful to consider.

comment by [deleted] · 2012-08-27T01:13:11.902Z · score: 1 (1 votes) · LW · GW

Some of my favorite articles, off the top of my head (and a bit of browsing):

  • A Fable of Science and Politics
  • Explain, Worship, Ignore - I am, as of now, something of a naturalistic pantheist / pandeist; if you've heard Carl Sagan or Neil Degrasse Tyson speak on the wonder that is the existence of the universe, it's something like that. Unlike what is written in the linked article, however, I'm not convinced that the initial singularity, or whatever cause the Big Bang might have, can be explained by science. (Is it even meaningful to ask questions about what is outside the universe?)
  • Belief in Belief
  • Avoiding Your Belief's Real Weak Points
  • The 'Outside the Box' Box - How much of my belief system is actually a result of my own thinking, as opposed to a result of culture, society, etc? Granted, sometimes collective wisdom is better than what one might come up with by oneself, but not always...

So, I found my introversion much easier to manage when I started scheduling time by myself to recharge, and scheduling infrequent social events to make sure I didn't get into too much of a cave. It had been easy to get overwhelmed with social events near each other if I didn't have something on my calendar reminding me "you'll want to read a book by yourself for a few hours before you go to another event." That sort of thing might be helpful to consider.

I have Meetup.com to organize and schedule social events, and of course there's the LW meetups. I get plenty of alone time, so that isn't really a problem for me. (Some minutes of thinking later) The particular issues aren't something I can accurately put into words, but they're something like 'active avoidance of (perceived) excessive attention or expectations, either positive or negative' and 'fear of exposing "personal" info I'd rather not share, and of any negative consequences that might result'. Perhaps not surprisingly, I greatly prefer internet or written "non-personal" communication over verbal communication.

comment by skeptical_lurker · 2012-07-28T18:51:25.616Z · score: 9 (11 votes) · LW · GW

Hello everyone! Like many people, I come to this site via an interest in transhumanism, although it seems unlikely to me that FAI implementing CEV can actually be designed before the singularity (I can explain why, and possibly even what could be done instead, but it suddenly occurred to me that it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma...).

Oddly enough, I am not interested in improving epistemic rationality right now, partially because I am already quite good at it. But more than that, I am trying to switch it off when talking to other people, for the simple reason (and I'm sure this has already been pointed out before) that if you compare three people, one who estimates the probability of an event at 110%, one who estimates it at 90%, and one who compensates for overconfidence bias and estimates it at 65%, the first two will win friends and influence people, while the third will seem indecisive (unless they are talking to other rationalists). I think I am borderline Asperger's (again, like many people here) and optimizing social skills probably takes precedence over most other things.

I am currently doing a PhD in "absurdly simplistic computational modeling of the blatantly obvious" which better damn well have some signaling value. In my spare time, to stop my brain turning to mush, among other things I am writing a story which is sort of rationalist, in that some of the characters keep using science effectively even when the world is going crazy and the laws of physics seem to change dependent upon whether you believe in them. On the other hand, some of the characters are (a) heroes/heroines (b) awesomely successful (c) hippies on acid who do not believe in objective reality (not that I am implying that all hippies/people who use lsd are irrational). Maybe the point of the story is that you need more than just rationality? Or that some people are powerful because of rationality, while others have imagination, and that friendship combines their powers in a My Little Pony-like fashion? Or maybe it's all just an excuse for pretentious philosophy and psychic battles?

comment by robertskmiles · 2012-07-28T20:23:15.913Z · score: 6 (6 votes) · LW · GW

I am not interested in improving epistemic rationality right now, partially because I am already quite good at it.

But remember that it's not just your own rationality that benefits you.

it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma

Presume away. Karma doesn't win arguments, arguments win karma.

comment by skeptical_lurker · 2012-07-29T20:21:02.048Z · score: 0 (0 votes) · LW · GW

But remember that it's not just your own rationality that benefits you.

Are you saying that improving epistemic rationality is important because it benefits others as well as myself? This is true, but there are many other forms of self-improvement that would also have knock-on effects that benefit others.

I have actually read most of the relevant sequences; epistemic rationality really isn't low-hanging fruit for me anymore, although I wish I had known about cognitive biases years ago.

comment by robertskmiles · 2012-07-30T11:18:04.238Z · score: 1 (1 votes) · LW · GW

Are you saying that improving epistemic rationality is important because it benefits others as well as myself?

No, I'm saying that improving the epistemic rationality of others benefits everyone, including yourself. It's not just about improving our own rationality as individuals, it's about trying to improve the rationality of people-in-general - 'raising the sanity waterline'.

comment by skeptical_lurker · 2012-07-31T13:17:06.966Z · score: 1 (1 votes) · LW · GW

Ok, I see what you mean now. Yes, this is often true, but again, I am trying to be less preachy (at least IRL) about rationality - if someone believes in astrology, or faith healing, or reincarnation, then: (a) their beliefs probably bring them comfort, (b) trying to persuade them is often like banging my head against a brick wall, and (c) even the notion that there can be such a thing as a correct fact, independent of subjective mental states, is very threatening to some people, and I don't want to start pointless arguments.

So unless they are acting irrationally in a way which harms other people, or they seem capable of having a sensible discussion, or I am drunk, I tend to leave them be.

comment by wedrifid · 2012-07-29T02:03:03.575Z · score: 2 (2 votes) · LW · GW

Hello everyone! Like many people, I come to this site via an interest in transhumanism, although it seems unlikely to me that FAI implementing CEV can actually be designed before the singularity

Many here would agree with you. (And, for instance, consider a ~10% chance of success better than near certain extinction.)

comment by skeptical_lurker · 2012-07-29T19:24:38.567Z · score: 0 (0 votes) · LW · GW

I agree that a 10% chance of success is better than near zero, and furthermore I agree that expected utility maximization means that putting in a great deal of effort to achieve a positive outcome is wiser than saying "oh well, we're doomed anyway, might as well party hard and make the most of the time we have left". However, the question is whether, if FAI has a low probability of success, other possibilities, e.g. tool AI, are a better option to pursue.

comment by [deleted] · 2012-07-29T02:15:37.137Z · score: 0 (0 votes) · LW · GW

Many here would agree with you. (And, for instance, consider a ~10% chance of success better than near certain extinction.)

Would you say that many people here (and yourself?) believe that the probable end of our species is within the next century or two?

comment by Nornagest · 2012-07-29T03:01:21.106Z · score: 1 (1 votes) · LW · GW

The last survey reported that Less Wrongers on average believe that humanity has about a 68% chance of surviving the century without a disaster killing >90% of the species. (Median 80%, though, which might be a better measure of the community feeling than the mean in this case.) That's a lower bar than actual extinction, but also a shorter timescale, so I expect the answer to your question would be in the same ballpark.
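That mean/median gap is the usual effect of a skewed distribution; a toy illustration with made-up estimates (not the actual survey data):

```python
import statistics

# hypothetical survey answers for P(no catastrophe this century) -- invented for illustration
estimates = [0.85, 0.9, 0.8, 0.8, 0.95, 0.75, 0.05, 0.1, 0.9]

mean = statistics.mean(estimates)      # dragged down by a few near-zero answers
median = statistics.median(estimates)  # the "typical" respondent
print(f"mean={mean:.2f}, median={median:.2f}")  # mean=0.68, median=0.80
```

With a long pessimistic tail, the median better reflects what most respondents actually answered.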

comment by wedrifid · 2012-07-29T03:07:12.528Z · score: 0 (0 votes) · LW · GW

Would you say that many people here (and yourself?) believe that the probable end of our species is within the next century or two?

For myself: Yes! p(extinct within 200 years) > 0.5

comment by John_Maxwell (John_Maxwell_IV) · 2012-07-28T19:03:21.855Z · score: 1 (1 votes) · LW · GW

Welcome!

I can explain why, and possibly even what could be done instead, but it suddenly occurred to me that it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma...

IMO you should definitely do it. Even if LW karma is a good indicator of good ideas, more information rarely hurts, especially on a topic as important as this.

comment by skeptical_lurker · 2012-07-31T13:20:12.755Z · score: 7 (7 votes) · LW · GW

Ok - although maybe I should stick it in its own thread?

I realize much of this has been said before.

Part 1: AGI will come before FAI, because:

Complexity of algorithm design:

Intuitively, FAI seems orders of magnitude more complex than AGI. If I decided to start trying to program an AGI tomorrow, I would have ideas on how to start, and maybe even make a minuscule amount of progress. Ben Goertzel even has a (somewhat optimistic) roadmap for AGI in a decade. Meanwhile, afaik FAI is still stuck at the stage of Löb's theorem.
The fact that EY seems to be focusing on promoting rationality and writing (admittedly awesome) Harry Potter fanfiction seems to indicate that he doesn't currently know how to write FAI (and nor does anyone else), otherwise he would be focusing on that now, and instead is planning for the long term.

Computational complexity: CEV requires modelling (and extrapolating) every human mind on the planet, while avoiding the creation of sentient entities. While modelling might be cheaper than ~10^17 flops per human due to shortcuts, I doubt it's going to come cheap. Randomly sampling a subset of humanity to extrapolate from, at least initially, could make this problem less severe. Furthermore, this can be partially circumvented by saying that the AI follows a specific utility function while bootstrapping to enough computing power to implement CEV, but then you have the problem of allowing it to bootstrap safely. Having to prove friendliness of each step in self-improvement strikes me as something that could also be costly. Finally, I get the impression that people are considering using Solomonoff induction. It's uncomputable, and while I realize that there exist approximations, I would imagine that these would be extremely expensive for calculating anything non-trivial. Is there any reason for using SI for FAI more than AGI, e.g. something to do with provability about the program's actions?
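For scale, a back-of-envelope sketch of the brute-force modelling cost, taking the ~10^17 flop/s per-brain figure at face value (both numbers here are rough assumptions, not established facts):

```python
PER_BRAIN_FLOPS = 1e17   # rough per-human-brain estimate cited above
POPULATION = 7e9         # ballpark 2012 world population

total = PER_BRAIN_FLOPS * POPULATION   # flop/s to model everyone simultaneously
print(f"{total:.0e} flop/s")           # 7e+26 flop/s

# sampling 0.1% of humanity instead shrinks the bill a thousandfold
sampled = total * 0.001
print(f"{sampled:.0e} flop/s")         # 7e+23 flop/s
```

Either way the raw number dwarfs any supercomputer of the era, which is why the sampling and bootstrapping caveats above matter.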

Infeasibility of relinquishment: If you can't convince Ben Goertzel that FAI is needed, even though he is familiar with the arguments and is an advisor to SIAI, you're not going to get anywhere near a universal consensus on the matter. Furthermore, AI is increasingly being used in financial and possibly soon military applications, and so there are strong incentives to speed the development of AI. While these uses are unlikely to be full AGI, they could provide building blocks - I can imagine a plausible situation where an advanced AI that predicts the stock market could easily be modified into a universal predictor.
The most powerful incentive to speed up AI development is the sheer number of people who die every day, and the amount of negentropy lost in the case that the 2nd law of thermodynamics cannot be circumvented. Even if there could be a worldwide ban on non-provably-safe AGI, work would still probably continue in secret by people who thought the benefits of an earlier singularity outweighed the risks, and/or were worried about ideologically opposed groups getting there first.

Financial bootstrapping: If you are OK with running a non-provably-friendly AGI, then even in the early stages when, for example, your AI can write simple code or make reasonably accurate predictions but not speak English or make plans, you can use these abilities to earn money, and buy more hardware/programmers. This seems to be part of the approach Ben is taking.

Coming in Part 2: is there any alternative? (And doing nothing is not an alternative! Even if FAI is unlikely to work, it's better than giving up!)

comment by shminux · 2012-07-31T21:33:38.731Z · score: 1 (1 votes) · LW · GW

Definitely worth its own Discussion post, once you have min karma, which should not take long.

comment by beoShaffer · 2012-07-31T21:52:32.185Z · score: 0 (0 votes) · LW · GW

They already have it.

comment by Swimmer963 · 2012-07-28T22:05:09.773Z · score: 0 (2 votes) · LW · GW

Welcome!

But more than that, I am trying to switch it off when talking to other people, for the simple reason (and I'm sure this has already been pointed out before) that if you compare three people, one who estimates the probability of an event at 110%, one who estimates it at 90%, and one who compensates for overconfidence bias and estimates it at 65%, the first two will win friends and influence people, while the third will seem indecisive.

Made me think of this article. Yes, you may be able, in the short run, to win friends and influence people by tricking yourself into being overconfident. But that belief is only in your head and doesn't affect the universe–thus doesn't affect the probability of Event X happening. Which means that if, realistically, X is 65% likely to happen, then you with your overconfidence, claiming that X is bound to happen, will eventually look like a fool 35% of the time, and will make it hard for yourself to leave a line of retreat.

Conclusion: in the long run, it's very good to be honest with yourself about your predictions of the future, and probably preferable to be honest with others, too, if you want to recruit their support.

comment by skeptical_lurker · 2012-07-29T19:43:10.119Z · score: 3 (3 votes) · LW · GW

Excellent points, and of course it is situation-dependent - if one makes erroneous predictions in archived forms of communication, e.g. these posts, then yes, these predictions can come back to haunt you, but often, especially in non-archived communications, people will remember the correct predictions and forget the false ones. It should go without saying that I do not intend to be overconfident on LW - if I were going to be, then the last thing I would do is announce this intention! In a strange way, I seem to want to hold three different beliefs: 1) an accurate assessment of what will happen, for planning my own actions; 2) a confident, stopping just short of arrogant, belief in my predictions, for impressing non-rationalists; 3) an unshakeable belief in my own invincibility, so that psychosomatic effects keep me healthy.

Unfortunately, this kinda sounds like "I want to have multiple personality disorder".

comment by Strange7 · 2012-08-01T02:22:06.566Z · score: 2 (2 votes) · LW · GW

If you're going to go that route, at least research it first. For example:

http://healthymultiplicity.com/

comment by skeptical_lurker · 2012-08-01T11:34:08.911Z · score: 0 (0 votes) · LW · GW

Thanks for the advice, but I don't actually want to have multiple personality disorder - I was just drawing an analogy.

comment by TheOtherDave · 2012-07-28T23:58:47.506Z · score: 3 (3 votes) · LW · GW

Hm.

So, call -C1 the social cost of reporting a .9 confidence of something that turns out false, and -C2 the social cost of reporting a .65 confidence of something that turns out false. Call C3 the benefit of reporting .9 confidence of something true, and C4 the benefit of .65 confidence.

How confident are you that (.65C3 -.35C1) < (.65C4-.35C2)?
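To make the comparison concrete, a minimal numeric sketch with entirely made-up values for the costs and benefits:

```python
def expected_payoff(p_true, benefit, cost):
    """Expected social payoff of a prediction that's true with probability p_true."""
    return p_true * benefit - (1 - p_true) * cost

# hypothetical values: overconfidence impresses more (C3 > C4)
# but also embarrasses more when wrong (C1 > C2)
C1, C3 = 5.0, 3.0   # reporting ".9 confident": big benefit, big social cost if wrong
C2, C4 = 1.0, 2.0   # reporting ".65 confident": modest both ways

overconfident = expected_payoff(0.65, C3, C1)  # 0.65*3 - 0.35*5 ≈ 0.20
calibrated = expected_payoff(0.65, C4, C2)     # 0.65*2 - 0.35*1 ≈ 0.95
print(overconfident < calibrated)              # True with these particular numbers
```

Whether the inequality actually holds depends entirely on how large you think C1 is relative to C3, which is the empirical question here.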

comment by skeptical_lurker · 2012-07-29T19:46:25.405Z · score: 1 (1 votes) · LW · GW

In certain situations, such as sporting events which do not involve betting, my confidence that (.65C3 -.35C1) < (.65C4-.35C2) is at most 10%. In these situations confidence is valued far more than epistemic rationality.

comment by Swimmer963 · 2012-07-29T03:42:19.119Z · score: 1 (1 votes) · LW · GW

I would say I'm about 75% confident that (.65C3 -.35C1) < (.65C4-.35C2)... But one of the reasons I don't even want to play that game is that I feel I am completely unqualified to estimate probabilities about that, and most other things. I would have no idea how to go about estimating the probability of, for example, the Singularity occurring before 2050...much less how to compensate for biases in my estimate.

I think I also have somewhat of an ick reaction towards the concept of "tricking" people to get what you want, even if in a very subtle form. I just...like...being honest, and it's hard for me to tell if my arguments about honesty being better are rationalizations because I don't want being dishonest to be justifiable.

comment by Mass_Driver · 2012-07-29T05:20:16.961Z · score: 2 (2 votes) · LW · GW

The way to bridge that gap is to only volunteer predictions when you're quite confident, and otherwise stay quiet, change the subject, or murmur a polite assent. You're absolutely right that explicitly declaring a 65% confidence estimate will make you look indecisive -- but people aren't likely to notice that you make predictions less often than other people; they'll be too focused on how, when you do make predictions, you have an uncanny tendency to be correct... and also on how pleasantly modest and demure you are.

comment by TheOtherDave · 2012-07-29T07:43:04.088Z · score: 0 (0 votes) · LW · GW

(nods) That makes sense.

comment by Rukifellth · 2012-07-25T23:54:59.659Z · score: 9 (9 votes) · LW · GW

I got into a community of intelligent, creative free-thinkers by reading fan fiction of all things.

You know the one.

Anyway, my knowledge of what is collectively referred to as Rationality is slim. I read the first 6 pages of The Sequences, felt like I was cheating on a test, and stopped. I'll try to make up for it with some of the most unnecessarily theatrical and hammy writing I can get away with.

I love word play, and over the course of a year I offered (as a way of apology) to owe my friend a quarter for every time I improvised a pun or awful joke mid-conversation, by the end of which I could have bought a dinner for him at Pizza Delight - I didn't. It's on my to-do list to compile all the wises that Carlos Ramon ever cracked on The Magic School Bus and put it on YouTube, because no one else has and it needs to be done, damn it. As you can tell, I sometimes write for its own sake - a literary hedonist, if you will. But all good things must come to an end...

My greatest principle is that a person's course in life is governed by their reaction to their circumstances, and that nothing at all is certain. The nature of the human mind is a process which our current metaphors and models can only approximate, a physical system adjusting itself, and words like "I", "our" and "qualia" can only activate whatever concept we have to answer the question of "what". Because of this, I have a great sympathy towards Eastern spirituality and some Christian mysticism, because they have the spirit of what we're all trying to accomplish here: to answer a question.

Sometimes I end up in the psychological equivalent of a fractal zoom, where philosophy has this impossible-to-divide property of all things linking to others without there being any elementary axioms or parts, probably because of that whole "brain made of neurons" racket. I concluded that emotions are just another form of sense; love, curiosity and understanding being reactions and sensory input much like taste and touch. Happily, any cognitive dissonance or emptiness can be discarded the same way, and the logical contradiction is a property of the purely physical (rather than comforting "conceptual") nature of our very thought, meaning that I'll simultaneously accept the objective truth of this, but reject any emotional significance, as emotional significance is itself deconstructed as a concept.

Of course the empathy gap and the nature of attention span (or at least my attention span) mean that I'm normally not like this unless triggered. To me, regular life is the reaction of our psyche, broken up occasionally by the temporary delusion that a fractal zoom of philosophy can answer my questions. I call this a "delusion" because the concept of a question to be answered is an extraneous layer added by an entity which just wants to avoid suffering.

The human mind: a non-linear physical system which tries to evaluate itself with a linear processing system that's not suited to that sort of thing at all. Sometimes I wonder if who we are is just the sum of five or six different personalities, each with about a fifth or sixth of our functioning, plus a heavy specialization in one type of behaviour, the sum of which is an idea of what is right and wrong with a sense of identity. Given the existence of neural pathways in our spinal column, I wouldn't be surprised. Sometimes I feel like I can feel the shape of our brains based on this, but that's probably just me connecting concepts to high school biology.

I went off the rails a bit there, but looking back, I figure this should be a more honest introduction from me than any structured post. Even so, I doubt I can really convey that kind of leg-twisting logical insanity without the meaning being hollowed by interpretation and pattern recognition.

Ugh, I feel like there wasn't a speck of relate-ability there at all. Well, I'm eighteen years old and male. I followed the My Little Pony fandom out of a combination of boredom, fascination and a love of the bizarre. The show never struck a chord with me at all, really, but the fandom was something else. There was a period of about a month where I read crossover fan fictions, but I couldn't be bothered after that point, because the fandom's growth wound down and the novelty was gone. Even so, Nine Knackered Souls, a Red vs Blue crossover, is the funniest fan fiction I've ever read. Fallout: Equestria is the longest and most "so-okay-it's-average" fan fiction, despite the fact that I was drawn in enough to overlook the Mary Sue aspects and read the whole thing in like four days...

I'm going into Computer Science at Dalhousie University, and CSci being what it is, I'm going to make up my path as I go along. I really don't know enough about robotics, AI or informatics to make the choice between them right now anyway.

comment by Rukifellth · 2012-07-26T00:03:23.982Z · score: 0 (0 votes) · LW · GW

Also, I enjoy playing Superman 64's ring levels.

comment by [deleted] · 2013-03-19T04:25:46.222Z · score: 8 (8 votes) · LW · GW

Background:

21-year-old transgender-neither. I spent 13 years enveloped by Mormon culture and ideology, growing up in a sheltered environment. Then, everything changed when the Fire Nation attacked.

Woops. Off-track.

I want my actions to matter, not from others remembering them but from me being alive to remember them. In simpler terms, I want to live for a long time - maybe forever. Death should be a choice, not an unchanging eventuality.

But I don't know where to start; I feel overwhelmed by all the things I need to learn.

So I've come here. I'm reading the sequences and trying to get a better grasp on thinking rationally, etc., but was hoping to get pointers from the more experienced.

What is needed right now? I want to do what I can to help not only myself, but those whose paths I cross.

~Jenna

comment by Alicorn · 2013-03-19T05:51:05.707Z · score: 4 (4 votes) · LW · GW

transgender-neither

Is this the same thing as "agender"?

Then, everything changed when the Fire Nation attacked.

<3!!

comment by [deleted] · 2013-03-19T20:13:18.782Z · score: 1 (1 votes) · LW · GW

Yes, it's the same. Transgender-neither sounds better to me, though, so I used that term.

But if I find that agender is more accessible I'll switch.

And yep, I'm an Avatar: The Last Airbender junkie. :)

comment by Nisan · 2013-03-19T06:17:56.578Z · score: 2 (2 votes) · LW · GW

Welcome! Have you considered signing up for cryonics?

comment by [deleted] · 2013-03-19T20:08:29.529Z · score: 1 (1 votes) · LW · GW

Aside from the occasional X-files episode and science fiction reading, I don't know much about cryonics.

I considered it as a possibility but dislike that it means I'm 'in suspense' while the world is continuing on without me. I want to be an active participant! :D

comment by shminux · 2013-03-19T20:54:38.140Z · score: 2 (4 votes) · LW · GW

I want to be an active participant!

Certainly, but when you no longer can be, it's nice to have an option of becoming one again some day.

comment by EHeller · 2013-03-20T00:20:15.864Z · score: 3 (9 votes) · LW · GW

"Option" might be too strong a word. It's nice to have the vanishingly small possibility. I think it's important for transhumanists to remind ourselves that cryonics is unlikely to actually work; it's just the only Hail Mary available.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-20T03:12:22.437Z · score: 2 (6 votes) · LW · GW

Far as I can tell, the basic tech in cryonics should basically work. Storage organizations are uncertain and so is the survival of the planet. But if we're told that the basic cryonics tech didn't work, we've learned some new fact of neuroscience unknown to present-day knowledge.

Don't assign vanishingly small probabilities to things just because they sound weird, or it sounds less likely to get funny looks if you can say that it's just a tiny chance. That is not how 'probability' works. Probabilities of basic cryonics tech working are questions of neuroscience, full stop; if you know the basic tech has a tiny probability of working, you must know something about current vitrification solutions or the operation of long-term memory which I do not.

comment by Kawoomba · 2013-03-20T07:57:03.675Z · score: 4 (4 votes) · LW · GW

Probabilities of basic cryonics tech working are questions of neuroscience, full stop

I'd say full speed ahead, Cap'n. Basic cryonics tech working - while being a sine qua non - isn't the ultimate question for people signing up for cryonics. It's just a term in the probability calculation for the actual goal: "Will I be revived (in some form that would be recognizable to my current self as myself)?" (You've mentioned that in the parent comment, but it deserves more than a passing remark.)

And that most decidedly requires a host of complex assumptions, such as "an agent / a group of agents will have an interest in expending resources into reviving a group of frozen old-version homo sapiens, without any enhancements, me among them", "the future agents' goals cannot be served merely by reading my memory engrams, then using them as a database, without granting personhood", "there won't be so many cryo-patients at a future point (once it catches on with better tech) that thawing all of them would be infeasible, or disallowed", not to mention my favorite "I won't be instantly integrated into some hivemind in which I lose all traces of my individuality".

What we're all hoping for, of course, is for a benevolent super-current-human agent - e.g. an FAI - to care enough about us to solve all the technical issues and grant us back our agent-hood. By construction at least in your case the advent of such an FAI would be after your passing (you wouldn't be frozen otherwise). That means that you (of all people) would also need to qualify the most promising scenario "there will be a friendly AI to do it" with "and it will have been successfully implemented by someone other than me".

Also, with current tech, not only would true x-risks preclude you from ever being revived; even non-x-risk catastrophic events (partial civilizational collapse due to Malthusian dynamics etc.) could easily destroy the facility you're held in, or take away anyone's incentive to maintain it. (TW: That's not even taking into account Siam the Star Shredder.)

I'm trying to avoid motivated cognition here, but there are a lot of terms going into the actual calculation, and while that in itself doesn't mean the probability will be vanishingly small, there seem to be a lot more scenarios (and, given human nature, unfortunately more likely ones, contributing more probability mass) in which your goal wouldn't be achieved - or would be achieved in some undesirable fashion - than the "here you go, welcome back to a society you'd like to live in" variety.

That being said, I'll take the small chance over nothing. Hopefully some decent options will be established near my place of residence, soon.

comment by shminux · 2013-03-21T22:18:55.829Z · score: 3 (3 votes) · LW · GW

Probabilities of basic cryonics tech working are questions of neuroscience, full stop

Is this your true objection? What potential discovery in neuroscience would cause you to abandon cryonics and actively look for other ways to preserve your identity beyond the natural human lifespan? (This is a standard question one asks a believer to determine whether the belief in question is rational -- what evidence would make you stop believing?)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T06:18:36.256Z · score: 9 (9 votes) · LW · GW

Anders Sandberg, who does get the concept of sufficiently advanced technology, posts saying, "Shit, turns out LTM seems to depend really heavily on whether protein blah has conformation A and B and the vitrification solution denatures it to C and it's spatially isolated so there's no way we're getting the info back, it's possible something unknown embodies redundant information but this seems really ubiquitous and basic so the default assumption is that everyone vitrified is dead". Although, hm, in this case I'd just be like, "Okay, back to chopping off the head and dropping it in a bucket of liquid nitrogen, don't use that particular vitrification solution". I can't think offhand of a simple discovery which would imply literally giving up on cryonics in the sense of "Just give up, you can't figure out how to freeze people ever." I can certainly think of bad news for particular techniques, though.

comment by shminux · 2013-03-22T15:54:55.905Z · score: 1 (1 votes) · LW · GW

I can't think offhand of a simple discovery which would imply literally giving up on cryonics

OK. More instrumentally, then. What evidence would make you stop paying the cryo insurance premiums with CI as the beneficiary and start looking for alternatives?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T22:25:44.686Z · score: 3 (3 votes) · LW · GW

Anders publishes that, CI announces they intend to go on vitrifying patients anyway, Alcor offers a chop-off-your-head-and-dunk-in-liquid-nitro solution. Not super plausible but it's off the top of my head.

comment by shminux · 2013-03-23T04:27:24.885Z · score: 3 (3 votes) · LW · GW

No pun intended?

comment by Kawoomba · 2013-03-22T16:57:43.212Z · score: -1 (1 votes) · LW · GW

Can you name currently available alternatives to cryonics which accomplish a similar goal?

Apologies, misinterpreted the question.

comment by shminux · 2013-03-22T17:09:16.916Z · score: 3 (3 votes) · LW · GW

Not really, but yours is an uncharitable interpretation of my question, which is to evaluate the utility of spending some $100/mo on cryo vs spending it on something (anything) else, not "I have this dedicated $100/mo lying around which I can only spend toward my personal future revival".

comment by gwern · 2013-03-22T17:08:28.762Z · score: 7 (7 votes) · LW · GW

Personally, I would be very impressed if anyone could demonstrate memory loss in a cryopreserved and then revived organism, like a bunch of C. elegans losing their maze-running memories. They're very simple, robust organisms, it's a large crude memory, the vitrification process ought to work far better on them than a human brain, and if their memories can't survive, that'd be huge evidence against anything sensible coming out of vitrified human brains no matter how much nanotech scanning is done (and needless to say, such scanning or emulation methods can and will be tested on a tiny worm with a small fixed set of neurons long before they can be used on anything approaching a human brain). It says a lot about how poorly funded cryonics research is that no one has done this or something similar as far as I know.

comment by shminux · 2013-03-22T23:24:27.902Z · score: 1 (1 votes) · LW · GW

Hmm, I wonder how much has been done on figuring out the memory storage in this organism. Like, if you knock out a few neurons or maybe synapses, how much does it forget?

comment by gwern · 2013-03-23T02:25:35.805Z · score: 1 (1 votes) · LW · GW

Since it's C. elegans, I assume the answer is 'a ton of work has been done', but I'm too tired right now to go look or read more medical/biological papers.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-23T00:10:57.936Z · score: 0 (0 votes) · LW · GW

I'm not totally sure I'd call this sufficient evidence since functional damage != many-to-one mapping but it would shave some points off the probability for existing tech and be a pointer to look for the exact mode of functional memory loss.

comment by wedrifid · 2013-03-22T01:33:47.553Z · score: 2 (2 votes) · LW · GW

and actively look for other ways to preserve your identity beyond the natural human lifespan?

He's kind of been working on that for a while now.

(I suppose that works either as "subvert the natural human lifespan entirely through creating FAI" or "preserve his identity for time immemorial in the form of 'Harry-Stu' fanfiction" depending on how cynical one is feeling.)

comment by orthonormal · 2013-03-22T05:23:50.810Z · score: 1 (1 votes) · LW · GW

In my case, to name one contingency: if the NEMALOAD Project finds that analysis of relatively large cellular structures doesn't suffice to predict neuronal activity, and concludes that the activity of individual molecules is essential to the process, then I'd become significantly more worried about EHeller's objection and redo the cost-benefit calculation I did before signing up for cryonics. (It came out in favor, using my best-guess probability of success between 1 and 5 percent; but it wouldn't have trumped the cost at, say, 0.1%.)

To name another: if the BPF shows that cryopreservation makes a hash of synaptic connections, I'd explicitly re-do the cost-benefit calculation as well.
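The shape of that cost-benefit calculation can be sketched in a few lines; the dollar figures below are entirely hypothetical, chosen only so the break-even point lands between the probabilities mentioned above:

```python
# all numbers hypothetical -- only the structure of the calculation is the point
cost = 100 * 12 * 40             # e.g. ~$100/mo in premiums over 40 years = $48,000
value_of_revival = 5_000_000     # made-up dollar value placed on successful revival

def worth_it(p_success):
    """Sign up iff the expected value of revival exceeds the lifetime cost."""
    return p_success * value_of_revival > cost

print(worth_it(0.03))    # True: mid-range of a 1-5% success estimate
print(worth_it(0.001))   # False: at 0.1% the cost dominates
```

Under these made-up numbers the decision flips somewhere around a 1% success probability, which is why new evidence about preservation quality would force redoing the calculation.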

comment by EHeller · 2013-03-20T06:59:28.727Z · score: 3 (7 votes) · LW · GW

I actually am signed up for cryonics.

My issue with the basic tech is that liquid nitrogen, while a cheap storage method, is too cold to avoid fracturing. Experience with imaging systems leads me to believe that fractures will interfere with reconstructions of the brain's geometry, and cryoprotectants obviously destroy chemical information.

Now, it seems likely to me that at some point in the future the fracturing problem can be solved, or at least mitigated, by intermediate-temperature storage and careful cooling processes, but that won't fix the bodies frozen today. So while I don't doubt that (barring large, unquantifiable neuroscience-related uncertainty) cryonics may improve to the point where the tech is likely to work (or be supplanted by plastination methods, etc.), it is not there now, and what matters for people frozen today is the state of cryonics today.

Saying there are no fundamental scientific barriers to the tech working is not the same thing as saying the hard work of engineering has been done and the tech currently works.

Edit: I also have a weak prior that the chemical information in the brain is important, but it is weak.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-20T23:08:42.498Z · score: 5 (5 votes) · LW · GW

Experience with imaging systems leads me to believe that fractures will interfere with reconstructions of the brain's geometry, and cryoprotectants obviously destroy chemical information.

Since this is the key point of neuroscience, do you want to expand on it? What experience with imaging leads you to believe that fractures (of incompletely vitrified cells) will implement many-to-one mappings of molecular start states onto molecular end states in a way that overlaps between functionally relevant brain states? What chemical information is obviously destroyed and is it a type that could plausibly play a role in long-term memory?

comment by shminux · 2013-03-21T20:52:59.377Z · score: 2 (2 votes) · LW · GW

"many-to-one mappings of molecular start states onto molecular end states in a way that overlaps between functionally relevant brain states" is probably too restrictive. I would use "possibly functionally different, but subjectively acceptably close brain states".

comment by EHeller · 2013-03-21T08:04:26.801Z · score: 2 (2 votes) · LW · GW

The cryoprotectants are toxic; they will damage proteins (misfolding, etc.) and distort relative concentrations throughout the cell. This information is irretrievable once the damage is done. This is what I referred to when I said obviously destroyed chemical information. It is our hope that such information is unimportant, but my (as I said above, fairly uncertain) prior would be that the synaptic protein structures are probably important. My prior is so weak because I am not an expert on biochemistry or neuroscience.

As to the physical fracture, very detailed imaging would have to be done on either side of the fracture in order to match the sides back up, and this is related to a problem I do have some experience with. I'm familiar with attempts to use synchrotron radiation to image protein structures, which involve a percolation problem: you damage what you are trying to image while you image it. If you have lots of copies of what you want to image, this is a solvable problem, but with only one original you are going to lose information.

Edit: in regard to the first point, kalla724 makes the same point with much more relevant expertise in this thread: http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/ His experience working with synapses leads him to a much stronger estimate that cryoprotectants cause irreversible damage. I may strengthen my prior a bit.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-21T08:41:51.049Z · score: 4 (4 votes) · LW · GW

This information is irretrievable once the damage is done.

How do you know? I'm not asking for some burden of infinite proof where you have to prove that the info can't be stored elsewhere. I am asking whether you know that widely functionally different start states are being mapped onto an overlapping spread of molecularly identical end states, and if so, how. E.g., "denaturing either conformation A or conformation B will both result in denatured conformation C and the A-vs.-B distinction is just a little twist of this spatially isolated thingy here so you wouldn't expect it to be echoed in any exact nearby positions of blah" or something.

comment by EHeller · 2013-03-21T15:56:18.900Z · score: 5 (5 votes) · LW · GW

So what I'm thinking about is something like this: imagine an enzyme, present at two sites on the membrane and regulated by an inhibitor. Now a toxin comes along and breaks the weak bonds to the inhibitor, stripping it off. Information about which site was inhibited is gone.

If the inhibitor has some further chemical involvement with the toxin, or if the toxin pops the enzymes off the membrane altogether, you have more problems. You might not know how many enzymes there were, which sites were occupied, or which were inhibited.

I could also imagine more exotic cases where a toxin induces a folding change in one protein, which allows it to accept a regulator molecule meant for a different protein. Now to figure out our system we'd need to scan at significantly smaller scales to try to discern those regulator molecules. I don't have the expertise to estimate if this is likely.

To reiterate, I am not by any means a neuroscientist (my training is physics and my work is statistics), so it's possible this sort of information just isn't that important, but my suspicion is that it is.

Edited to fix an embarrassing except/accept mistake.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-21T22:08:36.483Z · score: 5 (5 votes) · LW · GW

(Scanning at significantly smaller scales should always be assumed to be fine as long as end states are distinguishable up to thermal noise!)

So what I'm thinking about is something like this: imagine an enzyme, present at two sites on the membrane and regulated by an inhibitor. Now a toxin comes along and breaks the weak bonds to the inhibitor, stripping it off. Information about which site was inhibited is gone.

Okay, I agree that if this takes place at a temperature where molecules are still diffusing at a rapid pace and there's no molecular sign of the broken bond at the bonding site, then it sounds like info could be permanently destroyed in this way. Now why would you think this was likely with vitrification solutions currently used? Is there an intuition here about ranges of chemical interaction so wide that many interactions are likely to occur which break such bonds and at least one such interaction is likely to destroy functionally critical non-duplicated info? If so, should we toss out vitrification and go back to dropping the head in liquid nitrogen because shear damage from ice freezing will produce fewer many-to-one mappings than introducing a foreign chemical into the brain? I express some surprise because if destructive chemical interactions were that common with each new chemical introduced then the problem of having a whole cell not self-destruct should be computationally unsolvable for natural selection, unless the chemicals used in vitrification are unusually bad somehow.

comment by EHeller · 2013-03-21T23:20:29.117Z · score: 2 (2 votes) · LW · GW

(Scanning at significantly smaller scales should always be assumed to be fine as long as end states are distinguishable up to thermal noise!)

This has some problems. Fundamentally, the length scale probed is inversely proportional to the energy required, which means increasing the resolution increases the damage done by scanning. You start getting into issues of 'how much of this can I scan before I've totally destroyed it?', which is a sort of percolation problem (how many amino acids can I randomly knock out of a protein before it collapses or rebonds into a different protein?), so scanning at resolutions with energy equivalents above peptide bond strength is very problematic. Assuming a peptide bond strength of a couple kJ/mol, I get lower-limit length scales of a few microns (this is rough, and I'd appreciate it if someone would double-check).
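
If one reads "energy required" as the energy of a single probing photon, the accessible length scale is λ = hc/E. Here is a quick sketch of that check; both the reading and the bond-energy figures are my own assumptions, offered for double-checking rather than as the commenter's method:

```python
# Rough check: the wavelength of a photon whose per-quantum energy
# matches a given bond energy, lambda = h*c / (E_mol / N_A).

H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
N_A = 6.02214076e23   # Avogadro constant, 1/mol

def photon_wavelength_m(bond_energy_kj_per_mol):
    """Wavelength (m) of a photon carrying the given per-bond energy."""
    energy_per_bond_j = bond_energy_kj_per_mol * 1e3 / N_A
    return H * C / energy_per_bond_j

print(photon_wavelength_m(2))    # "a couple kJ/mol": ~6e-5 m, tens of microns
print(photon_wavelength_m(300))  # covalent peptide bond: ~4e-7 m, sub-micron
```

Under this reading, a couple of kJ/mol corresponds to tens of microns, while a covalent peptide bond (roughly 300 kJ/mol) corresponds to sub-micron scales; either way, the qualitative point stands that finer photon resolution demands more energetic, and therefore more damaging, probes.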

Now why would you think this was likely with vitrification solutions currently used?

The vitrification solutions currently used are known to be toxic, and are used at very high concentrations, so some of this sort of damage will occur. I don't know enough biochemistry to say anything else with any kind of definiteness, but on the previous thread kalla724 seemed to have some domain-specific knowledge and thought the problem would be severe.

If so, should we toss out vitrification and go back to dropping the head in liquid nitrogen because shear damage from ice freezing will produce fewer many-to-one mappings than introducing a foreign chemical into the brain?

No, not at all. The vitrification damage is orders of magnitude less. Destroying a few multi-unit proteins and removing some inhibitors seems much better than totally destroying the cell membrane (which has many of the same "which sites were these guys attached to?" problems).

I express some surprise because if destructive chemical interactions were that common with each new chemical introduced then the problem of having a whole cell not self-destruct should be computationally unsolvable for natural selection

It's my (limited) understanding that the cell membrane exists largely to solve this problem. Also, introducing tiny bits of toxins here and there causes small amounts of damage, but the cell could probably survive; putting the cell in a toxic environment will inevitably kill it. The concentration matters. But here I'm stepping way outside anything I know about.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T00:10:40.951Z · score: 4 (4 votes) · LW · GW

This has some problems. Fundamentally, the length scale probed is inversely proportional to the energy required, which means increasing the resolution increases the damage done by scanning.

We seem to have very different assumptions here. I am assuming you can get up to the molecule and gently wave a tiny molecular probe in its direction, if required. I am not assuming that you are trying to use high-energy photons to photograph it.

You also still seem to be using a lot of functional-damage words like "destroying", which is why I don't trust your or kalla724's intuitions relative to the intuitions of other scientists with domain knowledge of neuroscience who use the language of information theory when assessing cryonic feasibility. If somebody is thinking in terms of functional damage (it doesn't restart when you reboot it, oh my gosh we changed the conformation, look at that damage, it can't play its functional role in the cell anymore!) then their intuitions don't bear very well on the real question of many-to-one mapping.

What does the vitrification solution actually do that's supposed to irreversibly map things, does anyone actually know? The fact that a cell can survive with a membrane at all, considering the many different molecules inside it, implies that most molecules don't functionally damage most other molecules most of the time, never mind performing irreversible mappings on them. But then this is reasoning over molecules that may be of a different type than vitrificants. At the opposite extreme, I'd expect introducing hydrochloric acid into the brain to be quite destructive.

comment by EHeller · 2013-03-22T04:30:22.722Z · score: 2 (2 votes) · LW · GW

We seem to have very different assumptions here. I am assuming you can get up to the molecule and gently wave a tiny molecular probe in its direction, if required. I am not assuming that you are trying to use high-energy photons to photograph it.

How are you imagining this works? I'm aware of chemistry that would allow you to say there are X whatever proteins, Y such-and-such enzymes, etc., but I don't think such chemical processes are good enough for the sort of geometric reconstruction needed. It's not obvious to me that a molecular probe of the type you imagine can exist. What exactly is it measuring, and how is it sensitive to it? Is it some sort of enzyme? Do we thaw the brain and then introduce these probes in solution? Do we somehow pulp the cell and run the constituents through a nanopore-type thing and try to measure charge?

the intuitions of other scientists with domain knowledge of neuroscience who use the language of information theory when assessing cryonic feasibility.

I would love to be convinced I am overly pessimistic, and pointing me in the direction of biochemists/neuroscientists/biophysicists who disagree with me would be welcome. I only know a few biophysicists and they are generally more pessimistic than I am.

What does the vitrification solution actually do that's supposed to irreversibly map things, does anyone actually know?

I know ethylene glycol is cytotoxic, and so interacts with membrane proteins, but I don't know the mechanism.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T05:48:28.327Z · score: 8 (8 votes) · LW · GW

I'll quickly point you at Drexler's Nanosystems and Freitas's Nanomedicine though they're rather long and technical reads. But we are visualizing molecularly specified machines, and 'hell no' to thawing first or pulping the cell. Seriously, this kind of background assumption is why I have to ask a lot of questions instead of just taking this sort of skeptical intuition at face value.

But rather than having to read through either of those sources, I would ask you to just take it on assumption that two molecularly distinct (up to thermal noise) configurations will somehow be distinguishable by sufficiently advanced technology, and describe what your intuitions (and reasons) would be taking that premise at face value. It's not your job to be a physicist or to try to describe the theoretical limits of future technology, except of course that two systems physically identical up to thermal noise can be assumed to be technologically indistinguishable, and since thermal noise is much larger than exact quark positions it will not be possible to read off any subtle neural info by looking at exact quark positions (now that might be permanently impossible), etc. Aside from that, I would encourage you to think in terms of doing cryptography to a vitrified brain rather than medicine. Don't ask whether ethylene glycol is toxic, ask whether it is a secure hard drive erasure mechanism that can obscure the contents of the brain from a powerful and intelligent adversary reading off the exact molecular positions in order to obtain tiny hints.

Checking over the open letter from scientists in support of cryonics to remember who has an explicitly neuroscience background, I am reminded that good old Anders Sandberg is wearing a doctorate in computational neuroscience from Stockholm, so I'll go ahead and name him.

comment by EHeller · 2013-03-22T06:50:56.326Z · score: 3 (3 votes) · LW · GW

Do you have a page number in Nanosystems for a reference to a sensing probe? Also, this is tangential to the main discussion, so I'll take pointers to any references you have and let this drop.

Don't ask whether ethylene glycol is toxic, ask whether it is a secure hard drive erasure mechanism that can obscure the contents of the brain from a powerful and intelligent adversary reading off the exact molecular positions in order to obtain tiny hints.

I was using cytotoxic in the very specific sense of "interacts and destabilizes the cell membrane," which is doing the sort of operations we agreed in principle can be irreversible. Estimates as to how important this sort of information actually is are impossible for me to make, as I lack the background. What I would love to see is someone with some domain specific knowledge explaining why this isn't an issue.

comment by zslastman · 2013-03-23T07:50:38.978Z · score: 0 (0 votes) · LW · GW

Do you have a page number in Nanosystems for a reference to a sensing probe?

Boom. http://www.nature.com/news/diamond-defects-shrink-mri-to-the-nanoscale-1.12343

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T07:18:40.632Z · score: 0 (0 votes) · LW · GW

I was using cytotoxic in the very specific sense of "interacts and destabilizes the cell membrane," which is doing the sort of operations we agreed in principle can be irreversible.

Sorry, but can you again expand on this? What happens?

comment by EHeller · 2013-03-23T03:21:25.561Z · score: 4 (4 votes) · LW · GW

So I cracked open a biochem book to avoid wandering off a speculative pier, as we were moving beyond what I readily knew. A simple loss of information presented itself.

Some proteins can have two states, open and closed, which operate on a hydrophobic/hydrophilic balance. In desiccated cells, or if the proteins denature for some other reason, the open/closed state will be lost.

Adding cryoprotectants will change osmotic pressure and the cell will desiccate, and the open/closed state will be lost.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-23T04:33:40.017Z · score: 2 (2 votes) · LW · GW

Do we know about any such proteins related to LTM? Can we make predictions about what it takes to erase C. elegans maze memory this way?

comment by zslastman · 2013-03-23T07:37:56.409Z · score: 6 (6 votes) · LW · GW

I would strongly predict that such changes erase only information about short-term activity, not long-term memory. Protein conformation in response to electrochemical/osmotic gradients operates on the timescale of individual firings; it's probably too flimsy to encode stable memories. These should be easy for Skynet to recover.

Higher-level patterns of firings might conceivably store information, but experience with anaesthesia, hypothermia, etc. says they do not. Or we've been killing people and replacing them all this time... a possibility which, thanks to this site, I'm prepared to consider.

Oh, and

Do you have a page number in Nanosystems for a reference to a sensing probe?

Bam.

http://www.nature.com/news/diamond-defects-shrink-mri-to-the-nanoscale-1.12343

comment by EHeller · 2013-03-23T05:32:32.799Z · score: 1 (1 votes) · LW · GW

Here we have moved far past my ability to even speculate.

comment by lsparrish · 2013-03-23T16:04:14.253Z · score: 1 (3 votes) · LW · GW

Presumably you can use google and wikipedia to fill in the gaps just like the rest of us.

Wikipedia: Long-term memory

Long-term memory, unlike short-term memory, is dependent upon the construction of new proteins.[30] This occurs within the cellular body, and concerns in particular transmitters, receptors, and new synapse pathways that reinforce the communicative strength between neurons. The production of new proteins devoted to synapse reinforcement is triggered after the release of certain signaling substances (such as calcium within hippocampal neurons) in the cell. In the case of hippocampal cells, this release is dependent upon the expulsion of magnesium (a binding molecule) that is expelled after significant and repetitive synaptic signaling. The temporary expulsion of magnesium frees NMDA receptors to release calcium in the cell, a signal that leads to gene transcription and the construction of reinforcing proteins.[31] For more information, see long-term potentiation (LTP).

One of the newly synthesized proteins in LTP is also critical for maintaining long-term memory. This protein is an autonomously active form of the enzyme protein kinase C (PKC), known as PKMζ. PKMζ maintains the activity-dependent enhancement of synaptic strength, and inhibiting PKMζ erases established long-term memories without affecting short-term memory; once the inhibitor is eliminated, the ability to encode and store new long-term memories is restored.

Also, BDNF is important for the persistence of long-term memories.[32]

What I worry about getting confused by when reading the literature is the distinction between forming memories in the first place and actually encoding for memory.

Another critical distinction is that proteins which are needed to prevent degradation of memories over time (which get lots of research and emphasis in the literature due to their role in preventing degenerative diseases) aren't necessarily the ones directly encoding the memories.

comment by EHeller · 2013-03-23T17:08:14.930Z · score: 3 (3 votes) · LW · GW

So in subjects I know a lot about, I have dealt with many people who pick up strange notions by filling in the gaps from Google and Wikipedia with a weak foundation. The work required to figure out what specific damage desiccation of a cell could do to the specific proteins you mentioned is beyond my knowledge base, so I leave it to someone more knowledgeable than myself (perhaps you?) to step in.

What open/closed states does PKMζ have? What regulates those open/closed states? Are the open/closed states important to its role (it looks like yes, given the notion of the inhibitor)?

comment by lsparrish · 2013-03-25T15:09:43.457Z · score: 0 (0 votes) · LW · GW

Yes, it's important to build a strong foundation before establishing firm opinions. Also, in this particular case note that science appears to have recently changed its mind based on further evidence, which goes to show that you have to be careful when reading Wikipedia. Apparently the protein in question is not so likely to underlie LTM after all, as transgenic mice lacking it still have LTM (exhibiting maze memory, LTP, etc.). The erasure of memory is linked to zeta inhibitory peptide (ZIP), which incidentally works in the transgenic mice as well.

ETA: Apparently PKMzeta can be used to restore faded memories erased with ZIP.

comment by lsparrish · 2013-03-23T04:09:28.078Z · score: 2 (2 votes) · LW · GW

Adding cryoprotectants will change osmotic pressure and the cell will desiccate, and the open/closed state will be lost.

Now you know why I'm so keen on the idea of figuring out a way to get something like trehalose into the cell. Neurons tend to lose water rather than import cryoprotectants because of their myelination. Trehalose protects against desiccation by cushioning proteins from hitting each other. Other kinds of solutes that can get past the membrane could balance out the osmotic pressure just as well (that's kind of the point of penetrating cryoprotectants), but I like trehalose because of its low toxicity.

comment by orthonormal · 2013-03-22T05:07:52.731Z · score: 1 (1 votes) · LW · GW

How are you imagining this works?

Nanotechnology, not chemical analysis. Drexler's Engines of Creation contains a section on the feasibility of repairing molecular damage in this way. Since (if our current understanding holds) nanobots can be functional on a smaller scale than proteins (which are massive chunks held together Lego-style by van der Waals forces), they can be introduced within a cell membrane to probe, report on, and repair damaged proteins.

comment by EHeller · 2013-03-22T06:34:56.888Z · score: 0 (0 votes) · LW · GW

I have not read Engines of Creation, but I have read his thesis, and I was under the impression most of the proposed systems would only work in vacuum chambers, as they would oxidize extremely rapidly in an environment like the body. Has someone worked around this problem, even in theory?

Also, I've seen molecular assembler designs of various types in various speculative papers, but I've never seen a sensing apparatus. Any references?

comment by orthonormal · 2013-03-23T16:44:25.269Z · score: 0 (0 votes) · LW · GW

Has someone worked around this problem, even in theory?

Later in the thread, Eliezer recommended Drexler's followup Nanosystems and Freitas' Nanomedicine, neither of which I've read, but I'd be surprised if the latter didn't address this issue. Sorry, but I in particular don't think this is a worrisome objection; it's on the same level as saying that electronics could never be helpful in the real world because water makes them malfunction. You start by showing that something works under ideal conditions, and then you find a way to waterproof it.

Also, I've seen molecular assembler designs of various types in various speculative papers, but I've never seen a sensing apparatus. Any references?

For the convenience of later readers: someone elsewhere in the thread linked an actual physical experimental example.

comment by EHeller · 2013-03-23T17:42:56.884Z · score: 1 (1 votes) · LW · GW

Freitas' Nanomedicine, neither of which I've read, but I'd be surprised if the latter didn't address this issue.

Not that I have seen, but I'm only partially through it.

For the convenience of later readers: someone elsewhere in the thread linked an actual physical experimental example.

And it's an awesome example from just a few months ago! Pushing NMR from mm resolutions down to nm resolutions is a truly incredible feat!

comment by Strange7 · 2013-03-21T08:52:42.254Z · score: 0 (0 votes) · LW · GW

The end states don't need to be identical, just indistinguishable.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-21T09:16:47.885Z · score: 2 (2 votes) · LW · GW

To presume that states non-identical up to thermal noise are indistinguishable seems to presume either lower technology than the sort of thing I have in mind, or that you know something I don't about how two physical states can be non-identical up to thermal noise and yet indistinguishable.

comment by Nisan · 2013-03-20T14:44:54.107Z · score: 3 (3 votes) · LW · GW

Do you think it's at all likely that the connectome can be recovered after fracturing by "matching up" the structure on either side of the fracture?

comment by shminux · 2013-03-21T20:51:54.963Z · score: 0 (0 votes) · LW · GW

Just to be a cryo advocate here for a moment: if the information of interest is distributed rather than localized, like in a hologram (or any other Fourier-type storage), there is a chance that one can be recovered as a reasonable facsimile of the frozen person, with maybe some hazy memories (corresponding to the lowered resolution of a partial hologram). I'd still rather be revived and have trouble remembering someone's face, or how to drive a car, or how to solve the Schrödinger equation, than not be revived at all. Even some drastic personality changes would probably be acceptable, given the alternative.

comment by EHeller · 2013-03-22T04:02:14.809Z · score: 1 (1 votes) · LW · GW

Oh, sure. Or if the sort of information that gets destroyed relates to what-I-am-currently-thinking, or something similar. If I wake up and don't remember the last X minutes, or hours, big deal. But when we have to postulate certain types of storage for something to work, it should lower our probability estimates.

comment by TheOtherDave · 2013-03-21T21:14:35.499Z · score: 0 (0 votes) · LW · GW

Do you have a sense of how drastic a personality change has to be before there's someone else you'd rather be resurrected instead of drastically-changed-shminux?

comment by shminux · 2013-03-21T21:37:58.017Z · score: 0 (0 votes) · LW · GW

Not really. This would require solving the personal identity problem, which is often purported to have been solved or even dissolved, but isn't.

I'm guessing that there is no actual threshold, but a fuzzy fractal boundary which heavily depends on the person in question. While one may say that if they are unable to remember the faces and names of their children and no longer able to feel the love that they felt for them, it's no longer them, and they do not want this new person to replace them, others would be reasonably OK with that. The same applies to the multitude of other memories, feelings, personality traits, mental and physical skills and whatever else you (generic you) consider essential for your identity.

comment by TheOtherDave · 2013-03-22T02:26:20.902Z · score: 0 (0 votes) · LW · GW

Yeah, I share your sense that there is no actual threshold.

It's also not clear to me that individuals have any sort of specifiable boundary or what is or isn't "them", however fuzzy or fractal, so much as they have the habit of describing themselves in various ways.

comment by Dreaded_Anomaly · 2013-03-20T13:52:13.120Z · score: 2 (2 votes) · LW · GW

Have you seen the comments by kalla724 in this thread?

Edit: There's some further discussion here.

comment by Error · 2013-03-20T03:54:22.990Z · score: 2 (4 votes) · LW · GW

Probabilities of basic cryonics tech working are questions of neuroscience, full stop; if you know the basic tech has a tiny probability of working, you must know something about current vitrification solutions or the operation of long-term memory which I do not.

It seems to me that they're also questions of engineering feasibility. A thing can be provably possible and yet unfeasibly difficult to implement in reality. Consider the difference between, say, adding salt to water and getting it out again. What if the difference in cost and engineering difficulty between vitrifying and successfully de-vitrifying is similar? What if it turns out to be ten orders of magnitude greater?

I think the most likely failure condition for cryonics tech (as opposed to cyronics organizations) isn't going to be that revival turns out to be impossible, but that revival turns out to be so unbelievably hard or expensive that it's never feasible to actually do. If it's physically and information-theoretically allowed to revive a person, but technologically impractical (even with Sufficiently Advanced Science), then its theoretical possibility doesn't help the dead much.

I have the same concern about unbounded life extension, actually; but I find success in that area more probable for some reason.

(personal disclosure: I'm not signed up for cryonics, but I don't give funny looks to people who are. Their screws seem a bit loose but they're threaded in the right direction. That's more than one can say for most of the world.)

comment by Izeinwinter · 2013-03-31T19:01:55.692Z · score: 0 (0 votes) · LW · GW

Getting aging to stop looks positively trivial in comparison: the average lifespan of different animals already varies /way/ too much for there to be any biological law underlying it, so turning senescence off altogether should be possible. I suspect evolution has not already done so because overly long-lived creatures in the wild were on average bad news for their bloodlines, banging their granddaughters and occupying turf with the cunning of the old. Uhm. Now I have an itch to set up a simulation and run it. Just-so stories are not proof. Math is proof.

comment by Error · 2013-03-20T03:04:30.486Z · score: 2 (2 votes) · LW · GW

I think it might be important to remind others of that too, when discussing the subject. Especially for people who are signed up but have a skeptical social circle, "this seems like the least-bad of a set of bad options" may be easier for them to swallow than "I believe I'm going to wake up one day."

comment by ThoughtSpeed · 2013-02-27T06:07:20.381Z · score: 8 (10 votes) · LW · GW

Hi. 18 years old. Typical demographics. 26.5-month lurker and well-read of the Sequences. Highly motivated/ambitious procrastinator/perfectionist with task-completion problems and analysis paralysis that has caused me to put off this comment for a long time. Quite non-optimal to do so, but... must fight that nasty sunk cost of time and stop being intimidated and fearing criticism. Brevity to assure it is completed - small steps on a longer journey. Hopefully writing this is enough of an anchor. Will write more in future time of course.

Finally. It is written. So many choices... so many thoughts, ideas, plans to express... No! It is done! Another time you silly brain! We must choose futures! We will improve, brain, I promise.

I look forward to at last becoming an active member of this community, and LEVELING UP! Tsuyoku naritai!

comment by itaibn0 · 2013-02-23T17:37:32.426Z · score: 8 (8 votes) · LW · GW

My name is Itai Bar-Natan. I have been lurking here for a long time; more recently I started posting some things, but only now do I formally introduce myself.

I am in grade 11, and I began reading Less Wrong in grade 8 (introduced by Scott Aaronson's blog). I am a former math prodigy, and am currently taking one graduate-level course in math. This is the first time I am learning math under the school system (although it is not the first time I attended math classes under the school system). Before that, I would learn from my parents, who are both mathematicians, or (later on) from books and internet articles.

Heedless of Feynman, I believe I understand quantum mechanics.

One weakness I am working to improve on is the inability to write in large quantities.

I have a blog here: http://itaibn.wordpress.com/

I consider Less Wrong a fun time-waster and a community which is relatively sane.

comment by BerryPick6 · 2013-02-23T17:58:20.812Z · score: 3 (3 votes) · LW · GW

Are you, by any chance, related to Dror?

comment by itaibn0 · 2013-02-23T18:12:01.330Z · score: 4 (4 votes) · LW · GW

Yes, I am his son.

comment by BerryPick6 · 2013-02-23T19:25:55.206Z · score: 0 (0 votes) · LW · GW

To my eternal embarrassment, I was, as a youth, quite taken in by "The Bible Code." Very taken in, actually. That ended suddenly when someone directed me to the material written by your father and McKay (I think?). Small world, I guess? :)

comment by wedrifid · 2013-02-23T18:50:14.022Z · score: 2 (2 votes) · LW · GW

Headless of Feynman, I believe I understand quantum mechanics.

Give her to Headless Feyn-man!

comment by itaibn0 · 2013-02-23T18:59:30.547Z · score: 0 (0 votes) · LW · GW

Typo fixed.

comment by olibain · 2013-02-20T20:34:42.592Z · score: 8 (10 votes) · LW · GW

I'm Robby Oliphant. I started a few months ago reading HP:MoR, which led me to the Sequences, which led me here about two weeks ago. So far I have read comments and discussions solely as a spectator. But finally, after developing my understanding and beginning on the path set forth by the sequences, I remain silent no more.

I am fresh out of high school, excited about life, and plan to become a teacher, eventually. My short-term plans involve going out and doing missionary work for my church for the next two years. When I came head-on against the problem of being a rationalist and a missionary for a theology, I took a step back and had a crisis of belief (not the first time), but this time I followed the prescribed method and came to a modified conclusion, though I still find it rational and advantageous to serve my two-year mission.

I find some of this difficult, some of this intuitive, and some of this neither difficult nor intuitive, which is extremely frustrating: how can something appear simple but defy my efforts to work it intuitively? I will continue to work at it because rationality seems to be praiseworthy and useful. I hope to find the best evidence about theology here. I don't mean evidence for or against, just the evidence about the subject.

comment by olibain · 2013-02-21T04:17:23.293Z · score: 3 (5 votes) · LW · GW

Hahaha! I find it heartening that that is your response to me wanting to be a teacher. I am quite aware that the system is broken. My personal way of explaining it is that the school system works for what it was made to work for: avoiding responsibility for a failed product.

  • The parents are not responsible; the school taught their kids.

  • The students are not socially responsible; everything was compulsory, they had no choice to make.

  • Teachers are not to blame; they teach what they are told to teach and have the autonomy of a pre-AI computer intelligence.

  • The administrators are not to blame; They are not the students' parents or teachers.

  • The faceless, nameless committees that set the curriculum are not responsible; they formed, then separated after setting forth the unavoidably terrible standards for all students of an arbitrary age everywhere.

So the product fails but everyone did their best. No nails stick out, no one gets hammered.

I have high dreams of being the educator that takes down public education. If a teacher comes up with a new way of teaching or an important thing to teach, he can go to class the next day and test it. I have a hope of professional teachers; either trusted with the autonomy of being professionals, or actual professionals in their subject, teaching only those that want to learn.

Also, I am thankful for the literature on Mormons from Desrtopa, Ford, and Nisan. I enjoyed the Mormonism organizational post because I have also noticed how well the church runs. It is one reason I stay a Latter-Day Saint in this time of mainstreaming atheism. The church is winning, it is well organized, service and family-oriented, and supports me as I study rationality and education. I can give examples, but I will leave my deeper insights for my future posts; I feel I am well introduced for now.

comment by Bugmaster · 2013-02-21T05:45:13.627Z · score: 1 (1 votes) · LW · GW

The church is winning, it is well organized, service and family-oriented, and supports me as I study rationality and education.

I would be quite interested to see a more detailed post regarding that last part. Of course, I am just some random guy on the Internet, but still :-)

comment by [deleted] · 2013-03-08T05:37:05.168Z · score: 0 (0 votes) · LW · GW

I'd like to know how they [=consequentialist deists stuck in religions with financial obligations] justify tithing so much of their income to an ineffective charity.

comment by whowhowho · 2013-02-21T10:36:22.158Z · score: 0 (0 votes) · LW · GW

The Education system in the US, or the education system everywhere?

comment by MugaSofer · 2013-02-21T10:54:35.514Z · score: -1 (1 votes) · LW · GW

Can't speak for Everywhere, but it's certainly not just the US. Ireland has much the same problem, although I think it's not quite as bad here.

comment by [deleted] · 2013-02-21T17:05:43.874Z · score: 0 (0 votes) · LW · GW

In Italy it's also very bad, but the public opinion does have a culprit in mind (namely, politics).

comment by OrphanWilde · 2013-03-07T20:08:33.277Z · score: -1 (1 votes) · LW · GW

I love Mormonism.

Possibly because I love Thus Spoke Zarathustra, and Mormonism seems to be at least partially inspired by it.

comment by gwern · 2013-03-08T05:06:08.019Z · score: 3 (3 votes) · LW · GW

That seems rather unlikely, inasmuch as the first English translation was in 1896 - by which point Smith had preached, died, the Mormons evacuated to Utah, begun proselytizing overseas and baptism of the dead, set up a successful state, disavowed polygamy, etc.

comment by OrphanWilde · 2013-03-08T14:50:14.350Z · score: 0 (0 votes) · LW · GW

There's also the fact that it wasn't even written until after Joseph Smith had died, translation not even being an issue. (In point of fact, Nietzsche was born the same year that Joseph Smith died.)

Nonetheless! I am convinced a time traveler gave Joseph Smith the book.

comment by Desrtopa · 2013-02-20T22:48:23.510Z · score: 2 (2 votes) · LW · GW

I don't think you'll find much discussion of theology here, since in these parts religion is generally treated as an open and shut case. The archives of Luke Muehlhauser's blog, Common Sense Atheism, are probably a much more abundant resource for rational analysis of theology; it documents his (fairly extensive) research into theological matters stemming from his own crisis of faith, starting before he became an atheist.

Obviously, the name of the site is rather a giveaway as to the ultimate conclusion he drew (I would have named it differently in his place), and the foregone conclusion might be a bit mindkilling, but I think the contents will probably be a fair approximation of the position of most of the community here on theological matters, made more explicit than it generally is on Less Wrong.

comment by Epiphany · 2013-02-21T01:24:33.745Z · score: 1 (5 votes) · LW · GW

I appreciate your altruistic spirit and your goal of gathering objective evidence regarding your religion. I'm glad to see you beginning on the path of improving your rationality! If you haven't encountered the term "effective altruist" yet or have not yet investigated the effective altruist organizations, I very much encourage you to investigate them! As a fellow altruistic rationalist, I can say that they've been inspiring to me and hope they're inspiring to you as well.

I feel it necessary to inform you of something important yet unfortunate about your goal of becoming a teacher. I'm not happy to have to tell you this, but I am quite glad that somebody told you about it at the beginning of your adulthood:

The school system is broken in a serious way. The problem is with the fundamental system, so it's not something teachers can compensate for.

If you wish to investigate alternatives to becoming a standard school teacher, I would highly recommend considering becoming involved with effective altruists. An organization like THINK or 80,000 Hours may be very helpful to you in determining what sorts of effective and altruistic things you might do with your skills. THINK does training for effective altruists and helps them figure out what to do with themselves. 80,000 Hours helps people figure out how to make the most altruistic contribution with careers they already have.

For information regarding religion, I recommend the blog of a former Christian (Luke Muehlhauser) as an addition to your reading list. That is here: Common Sense Atheism. I recommend this in particular because he completed the process you've started - the process of reviewing Christian beliefs - so Luke's writing may be able to save you significant time and provide you with information you may not encounter in other sources. Also, because he began as a Christian, I'm guessing that his reasoning was not unnecessarily harsh toward Christian ideas, as it might have been otherwise. The sampling of his blog that I've read is of good quality. He's a rationalist, so that might be part of why.

comment by Bugmaster · 2013-02-21T02:59:15.451Z · score: 0 (2 votes) · LW · GW

The school system is broken in a serious way. The problem is with the fundamental system, so it's not something teachers can compensate for.

See also Lockhart's Lament (PDF link). That said, in my own case, competent teachers (such as Lockhart appears to be) did indeed make a difference. Though my IQ is much closer to the population average than that of the average LWer, so maybe my anecdotal evidence does not apply (not that it ever does, what with being anecdotal and all).

comment by Epiphany · 2013-02-21T03:18:26.491Z · score: 0 (2 votes) · LW · GW

That said, in my own case, competent teachers (such as Lockhart appears to be) did indeed make a difference.

I can't fathom that you'd say that if you had read Gatto's speech.

I am very interested in the reaction you have to the speech (It's called The Seven Lesson School Teacher, and it's in the beginning of chapter 1).

Would you indulge me?

Also:

Failing to teach reasoning skills in school is a crime against humanity.

comment by Bugmaster · 2013-02-21T05:21:11.078Z · score: 4 (6 votes) · LW · GW

I should also point out that, while Gatto makes some good points, his overall thesis is hopelessly lost in all the hyperbole, melodrama, and outright conspiracy theorizing. He does his own ideas a disservice by presenting them the way he does. For example, I highly doubt that mental illnesses, television broadcasts, and restaurants would all magically disappear (as Gatto claims on pg. 8) if only we could teach our children some critical thinking skills.

comment by Epiphany · 2013-02-22T03:56:19.140Z · score: 0 (2 votes) · LW · GW

Connection between education and sanity

Check out Edward de Bono's CoRT thinking system. His research (I haven't thoroughly reviewed it, just reciting from memory) shows that increasing people's lateral thinking and creativity decreases things like their suicide rate. If you have been taught to see more options, you're less likely to choose to behave desperately and destructively. If you're able to reason things out, you're less likely to feel stuck and need help. If you're able to analyze, you're less likely to believe something batty. Would mental illness completely disappear? I don't think so. Sometimes conditions are mostly due to genes or health issues. But there are connections, definitely, between one's ability to think and one's sanity.

If you don't agree with this, then do you also criticize Eliezer's method of raising the sanity waterline by encouraging people to refine their rationality?

Connection between education and indulging in passive entertainment

As for television, I think he's got a point. When I was 17, I realized that I was spending most of my free time watching someone else's life. I wasn't spending my time making my own life. If the school system makes you dependent like he says (and I believe it does), then you'll be a heck of a lot less likely to take initiative and do something. If your self-confidence depends on other experts' approval, it becomes hard to take a risk and go do your own project. If your creativity and analytical abilities are reduced, so too will be your ability to imagine projects for yourself and guide yourself while doing them. If your love for learning and working is destroyed, why would you want to do self-directed projects in the first place? And if you aren't doing your own projects your own way, that sucks a lot of the life and pleasure out of them. Fortunately for me, a significant amount of my creativity, my analytical abilities, and my passion for learning and working survived school. That gave me the perspective I needed to make the choice between living an idle life of passive entertainment and making my own life. Making my own life is more engaging than passive entertainment because it's tailored to my interests exactly; more fulfilling than accomplishing nothing could ever be; more exciting than fantasy because it is real; and more beneficial and rewarding, both emotionally and practically, because learning and working open up new social and career opportunities.

If the choice you are making is between "watch TV" and "not watch TV" you're probably going to watch it.

But if you have a busy mind full of ideas and thoughts and passions, that's not the choice you're perceiving. You've got the choice between "watch character's lives" and "make my own life awesome and watch that". If you felt strongly that you could make your own life awesome, is there anything that could convince you to watch TV instead?

Gatto doesn't do a good job of giving you perspective so you can understand his point of view here. He doesn't explain how incredible it can feel to have a mind that is on, how engaging it can be to learn something you're interested in, how satisfying it is to do your own project your own way and see it actually work! He doesn't do a good job of helping you imagine how much more motivation you would experience if your creativity and analytical abilities were jacked up way beyond what they are. If your life was packed full of thoughts and ideas and self-confidence, could you spend half your free time in front of a show? If you had the kind of motivation it causes to feel like you're in the process of building an amazing life, would you be able to still your mind and focus on sitcoms?

I wouldn't. I can't. It is as if I am possessed by this supernova sized drive to DO THINGS.

Restaurants and education

I honestly don't know anything about whether these are connected. My best guess is that Gatto loves to cook, but found that not being taught how to cook was a rather large obstacle in the way of enjoying it.

comment by Bugmaster · 2013-02-22T06:35:21.073Z · score: 4 (4 votes) · LW · GW

I mostly agree with the things you say, but these are not the things that Gatto says. Your position is a great deal milder than his.

In a single sentence, he claims that if only we could set up our schools the way he wants them to be set up, then social services would utterly disappear, the number of "psychic invalids" would drop to zero, "commercial entertainment of all sorts" would "vanish", and restaurants would be "drastically down-sized".

This is going beyond hyperbole; this borders on drastic ignorance.

For example, not all mental illnesses are caused by a lack of gumption. Many, such as clinical depression and schizophrenia, are genetic in nature, and will strike their victims regardless of how awesomely rational they are. Others, such as PTSD, are caused by psychological trauma and would fell even the mighty Gatto, should he be unfortunate enough to experience it.

While it's true that most of the "commercial entertainment of all sorts" is junk, some of it is art; we know this because a lot of it has survived since ancient times, despite the proclamations of people who thought just like Gatto (only referring to oil paintings, phonograph records, and plain old-fashioned writing instead of electronic media). As an English teacher, it seems like Gatto should know this.

And what's his beef with restaurants, anyway? That's just... weird.

If you had the kind of motivation it causes to feel like you're in the process of building an amazing life, would you be able to still your mind and focus on sitcoms?

Do you feel the same way about fiction books, out of curiosity?

If you don't agree with this, then do you also criticize Eliezer's method of raising the sanity waterline by encouraging people to refine their rationality?

If Eliezer claimed that raising the sanity waterline is the one magic bullet that would usher us into a new Golden Age, as we reclaim the faded glory of our ancestors, then yes, I would disagree with him too. But, AFAIK, he doesn't claim this -- unlike Gatto.

comment by wedrifid · 2013-02-22T08:25:25.402Z · score: 6 (6 votes) · LW · GW

For example, not all mental illnesses are caused by a lack of gumption. Many, such as clinical depression and schizophrenia, are genetic in nature, and will strike their victims regardless of how awesomely rational they are.

I'm afraid this account has swung to the opposite extreme---to the extent that it is quite possibly further from the truth and more misleading than Gatto's obvious hyperbole.

Schizophrenia is one of the most genetically determined of the well known mental health problems but even it is heavily dependent on life experiences. In particular, long term exposure to stressful environments or social adversity dramatically increases the risk that someone at risk for developing the condition will in fact do so.

As for clinical depression, the implication that being 'genetic in nature' means the environment in which an individual spends decades of growth and development is somehow not important is utterly absurd. Genetics is again relevant in determining how vulnerable the individual is, but the social environment is again critical for determining whether problems will arise.

comment by Bugmaster · 2013-02-22T19:53:05.315Z · score: 1 (1 votes) · LW · GW

That's a good point, I did not mean to imply that these mental illnesses are completely unaffected by environmental factors. In addition, in case of some illnesses such as depression, there are in fact many different causes that can lead to similar symptoms, so the true picture is a lot more complex (and is still not entirely well understood).

However, this is very different from saying something like "schizophrenia is completely environmental", or even "if only people had some basic critical thinking skills, they'd never become depressed", which is how I interpreted Gatto's claims.

For example, even with a relatively low heritability rate, millions of people would still develop schizophrenia every year worldwide -- especially since many of the adverse life experiences that can trigger it are unavoidable. No amount of critical thinking will reduce the number of victims to zero. And that's just one specific disease among many, and we're not even getting into more severe cases such as Down's Syndrome. If Gatto thinks otherwise, then he's being hopelessly naive.

comment by Epiphany · 2013-02-22T18:47:58.310Z · score: 1 (3 votes) · LW · GW

I agree that saying "all these problems will disappear" is not the same as saying that "these problems will reduce". I felt the need to explain why the problems would reduce because I wasn't sure you saw the connections.

Others, such as PTSD, are caused by psychological trauma and would fell even the mighty Gatto, should he be unfortunate enough to experience it.

I have to wonder if having a really well-developed intellect might offer some amount of protection against this. Whether Gatto's intellect is sufficiently well-developed for this is another topic.

And what's his beef with restaurants, anyway? That's just... weird.

I don't know. I love not cooking.

Do you feel the same way about fiction books, out of curiosity?

Actually, yes. When I am fully motivated, I can spend all my evenings doing altruistic work for years, reading absolutely no fiction and watching absolutely no TV shows. That level of motivation is where I'm happiest, so I prefer to live that way.

I do occasionally watch movies during those periods, perhaps once a month, because rest is important (and because movies take less time to watch than a book takes to read, but are higher quality than television, assuming you choose them well).

comment by Bugmaster · 2013-02-22T19:39:37.418Z · score: 2 (2 votes) · LW · GW

I felt the need to explain why the problems would reduce because I wasn't sure you saw the connections.

I see the connections, but I do not believe that some of the problems Gatto wants to fix -- f.ex. the existence of television and restaurants -- are even problems at all. Sure, TV has a lot of terrible content, and some restaurants have terrible food, but that's not the same thing as saying that the very concept of these services is hopelessly broken.

I have to wonder if having a really well-developed intellect might offer some amount of protection against this

It probably would, but not to any great extent. I'm not a psychiatrist or a neurobiologist, though, so I could be wildly off the mark. In general, however, I think that Gatto is falling prey to the Dunning–Kruger effect when he talks about mental illness, economics, and many other things for that matter.

For example, the biggest tool in his school-fixing toolbox is the free market; he believes that if only schools could compete against each other with little to no government regulation, their quality would soar. In practice, such scenarios tend to work out... poorly.

When I am fully motivated, I can spend all my evenings doing altruistic work for years, reading absolutely no fiction and watching absolutely no TV shows.

That's fair, and your preferences are consistent. However, many other people see a great deal of value in fiction; some even choose to use it as a vehicle for transmitting their ideas (f.ex. HPMOR). I do admit that, in terms of raw productivity, I cannot justify spending one's time on reading fiction; if a person wanted to live a maximally efficient life, he would probably avoid any kind of entertainment altogether, fiction literature included. That said, many people find the act of reading fiction literature immensely useful (scientists and engineers included), and the same is true for other forms of entertainment such as music. I am fairly convinced that any person who says "entertainment is a waste of time" is committing a fallacy of false generalization.

comment by Epiphany · 2013-02-23T05:41:53.805Z · score: 0 (2 votes) · LW · GW

I do not believe that some of the problems Gatto wants to fix -- f.ex. the existence of television and restaurants -- are even problems at all.

The existence of television technology isn't, in my opinion, a problem. Nor is the fact that some shows are low quality. Even if all of them were low quality, I wouldn't necessarily see that as a problem - it would still be a way of relaxing. The problem I see with television is that the average person spends 4 hours a day watching it. (Can't remember where I got that study, sorry.) My problem with that is not that they aren't exercising (they'd still have an hour a day which is plenty of exercise, if they want it) or that they aren't being productive (you can only be so productive before you run out of mental stamina anyway, and the 40 hour work week was designed to use the entirety of the average person's stamina) but that they aren't living.

It could be argued that people need to spend hours every day imagining a fantasy. I was told by an elderly person once that before television, people would sit on a hill and daydream. I've also read that imagining doing a task correctly is more effective at making you better at it than practice. If that's true, daydreaming might be a necessity for maximum effectiveness, and television might provide some kind of similar benefit. So it's possible that putting one's brain into fantasy mode for a few hours a day really is that beneficial.

Spending four hours a day in fantasy mode is not possible for me (I'm too motivated to DO something) and I don't seem to need anywhere near that much daydreaming. I would find it very hard to deal with if I had spent that much of my free time in fantasy. I imagine that if asked whether they would have preferred to watch x number of shows or to spend all of that free time getting out there and living, most people would probably choose the latter - and that's sad.

he believes that if only schools could compete against each other with little to no government regulation, their quality would soar. In practice, such scenarios tend to work out... poorly.

I think that people would also have to have read the seven lessons speech for the problems he sees to be solved. Maybe eventually things would evolve to the point where schools would not behave this way anymore without them reading it, because it's probably not the most effective way of teaching, but I don't see that change happening quickly without people pressuring schools to make those specific changes.

However, I'm surprised that you say "In practice, such scenarios tend to work out... poorly." Do you mean that the free market doesn't do much to improve quality, or do you just mean that when people want specific changes and expect the free market to implement them, the free market doesn't tend to implement those specific changes?

I'm also very interested in where you got the information to support the idea, either way.

a vehicle for transmitting their ideas

After reading Ayn Rand's The Fountainhead, my feeling was that even though much of the writing was brilliant and enjoyable, I could have gotten the key ideas much faster if she had only published a few lines from one of the last chapters. I'm having the same reaction to the sequences and HPMOR. I enjoy them and recognize the brilliance in the writing abilities, but I find myself doing things like reading lists of biases over and over in order to improve my familiarity and eventually memorize them. I still want to finish the sequences because they're so important to this culture, but what I have prioritized appears to be getting the most important information in as quickly as possible. So, although entertainment is a way of transmitting ideas, I question how efficient it is, and whether it provides enough other learning benefits to outweigh the cost of wrapping all those ideas in so much text. I could walk all the way to Florida, but flying would be faster. People realize this, so if they want to take vacations, they fly. Why, then, do they use entertainment to learn instead of seeking out the most efficient method?

It makes sense from the writer's point of view. I have said before that I was very glad that Eliezer decided to popularize rationality as much as possible, as I had been thinking that somebody needed to do that for a very long time. His writing is interesting and his style is brilliant and his method has worked to attract almost twelve million hits to his site. I think that's great. But the fact that people probably would not have flocked to the site if he had posted an efficient dissemination of cognitive biases and whatnot is curious. Maybe the way I learn is different.

I am fairly convinced that any person who says "entertainment is a waste of time" is committing a fallacy of false generalization.

I think it depends on whether you use "waste of time" to mean "absolutely no benefit whatsoever" or "nowhere near the most efficient way of getting the benefit".

The statement "entertainment is an inefficient way to get ideas compared with other methods" seems true to me.

comment by wedrifid · 2013-02-23T06:36:23.920Z · score: 3 (3 votes) · LW · GW

I enjoy them and recognize the brilliance in the writing abilities, but I find myself doing things like reading lists of biases over and over in order to improve my familiarity and eventually memorize them. I still want to finish the sequences because they're so important to this culture, but what I have prioritized appears to be getting the most important information in as quickly as possible.

I wonder if the author would agree that that is the most important information. I suspect he would not. (So naturally, if your learning goals are different from the teaching goals of the author, then the material will not be optimized for your intentions.)

comment by Epiphany · 2013-02-23T09:07:01.753Z · score: -1 (1 votes) · LW · GW

It seems to me that the problem is what intention one has when one begins learning, and whether one can deal with accepting the fact that one is biased, not how one goes about learning. Though maybe Eliezer has put in various protections that get people questioning their intentions and sell them on learning with the right one. I would agree that if it did not occur to a person to use their knowledge of biases to look for their own mistakes, learning them could be really bad, but I do not think that learning a list of biases will all by itself turn me into an argument-wielding brain-dead zombie.

If it makes you feel any better to know this, I've been seeking a checklist of errors against which I can test my ideas.

comment by olibain · 2013-03-25T03:46:48.945Z · score: 0 (0 votes) · LW · GW

Whoo! My post got the most recursion. Do I get a reward? If it gets a few more layers it will be more siding than post.

comment by Bugmaster · 2013-02-23T08:59:48.406Z · score: 0 (0 votes) · LW · GW

However, I'm surprised that you say "In practice, such scenarios tend to work out... poorly." Do you mean that the free market doesn't do much to improve quality...

That is one big reason behind my statement, yes. Currently, it looks like many, if not most, people -- in the Southern states, at least -- want their schools to engage in cultural indoctrination as opposed to any kind of rationality training. The voucher programs, which were designed specifically to introduce some free market into the education system, are being used to teach things like Creationism and historical revisionism. Which is not to say that public education in states like Louisiana and Texas is any better, seeing as they are implementing the same kinds of curricula by popular vote.

In fact, most private schools are religious in nature. According to this advocacy site (hardly an unbiased source, I know), around 50% are Catholic. On the plus side, student performance tends to be somewhat better (though not drastically so) in private schools, according to CAPE as well as other sources. However, private schools are also quite a bit more expensive than public schools, with tuition levels somewhere around $10K (and often higher). This means that the students who attend them have much wealthier parents, and this fact alone can account for their higher performance.

This leads me to my second point: I believe that Gatto is mistaken when he yearns for earlier, simpler times, when education was unencumbered by any regulation whatsoever and students were free to learn (or to avoid learning) whatever they wanted. We do not live in such times anymore. Instead, we live in a world that is saturated by technology. Literacy, along with basic numeracy, is no longer a mark of high status but an absolute requirement for daily life. Most well-paying jobs, creative pursuits, and even basic social interactions rely on some form of information technology. Basic education is not a luxury, but an essential service.

Are public schools adequately providing this essential service? No. However, we simply cannot afford to live in a world where access to it is gated by wealth -- which is what would happen if schools were completely privatized. As far as I know, most if not all efforts to privatize essential services have ended in disaster; this includes police, fire departments, and even prisons (in California, at least). Basic health care is a particularly glaring example.

So, in summary, existing private schools are opting for indoctrination rather than critical thinking; and even if they were not, we cannot afford to restrict access to basic education based on personal wealth.

comment by Bugmaster · 2013-02-23T08:05:10.728Z · score: 0 (0 votes) · LW · GW

The problem I see with television is that the average person spends 4 hours a day watching it. ... My problem with that is not that they aren't exercising ... or that they aren't being productive ... but that they aren't living.

What does "living" mean, exactly? I understand that you find your personal creative projects highly enjoyable, and that's great. But you aren't merely saying, "I enjoy X"; you're saying, "enjoying Y instead of X is objectively wrong" (if I understand you correctly).

Why, then, do they use entertainment to learn instead of seeking out the most efficient method?

I address this point below, but I'd also like to point out that some people's goals are different from yours. They consume entertainment because it is enjoyable, or because it facilitates social contact (which they in turn find enjoyable), not because they believe it will make them more efficient (though see below).

So, although entertainment is a way of transmitting ideas, I question how efficient it is, and whether it provides enough other learning benefits to outweigh the cost of wrapping all those ideas in so much text.

Many people -- yourself not among them, admittedly -- find that they are able to internalize new ideas much more thoroughly if these ideas are tied into a narrative. Similarly, other people find it easier to communicate their ideas in the form of narratives; this is why Eliezer writes things like Three Worlds Collide and HPMOR instead of simply writing out the equations. This is also why he employs several tropes from fiction even in his non-fiction writing.

I'm not saying that this is the "right" way to learn, or anything; I am merely describing the situation that, as I believe, exists.

The statement "entertainment is an inefficient way to get ideas compared with other methods" seems true to me.

I am just not convinced that this statement applies to anything like a majority of "person+idea" combinations.

comment by Epiphany · 2013-02-23T09:20:52.053Z · score: 1 (1 votes) · LW · GW

What does "living" mean, exactly?

"Living" the way I used it means "living to the fullest" or, a little more specifically "feeling really engaged in life" or "feeling fulfilled".

I understand that you find your personal creative projects highly enjoyable, and that's great. But you aren't merely saying, "I enjoy X", you're saying, "enjoying Y instead of X is objectively wrong" (if I understand you correctly).

I used "living" to refer to a subjective state. There's nothing objective about it, and IMO, there's nothing objectively right or wrong about having a subjective state that is (even in your own opinion) not as good as the ideal.

I feel like your real challenge here is more similar to Kawoomba's concern. Am I right?

They consume entertainment because it is enjoyable,

Do you find it more enjoyable to passively watch entertainment than to do your own projects? Do you think most people do? If so, might that be because the fun was taken out of learning, or people's creativity was reduced to the point where doing your own project is too challenging, or people's self-confidence was made too dependent on others such that they don't feel comfortable pursuing that fulfilling sense of having done something on their own?

or because it facilitates social contact (which they in turn find enjoyable), not because they believe it will make them more efficient (though see below).

I puzzle at how you classify watching something together as "social contact". To me, being in the same room is not a social life. Watching the same entertainment is not quality time. The social contact I yearn for involves emotional intimacy - contact with the actual person inside, not just a sense of being in the same room watching the same thing. I don't understand how that can be called social contact.

Many people -- yourself not among them, admittedly -- find that they are able to internalize new ideas much more thoroughly if these ideas are tied into a narrative.

I've been thinking about this and I think what might be happening is that I make my own narratives.

Similarly, other people find it easier to communicate their ideas in the form of narratives

This, I can believe about Eliezer. There are places where he could have been more incisive but instead gets wordy to compensate. That's an interesting point.

I am just not convinced that this statement applies to anything like a majority of "person+idea" combinations.

Okay, so to clarify, your position is that entertainment is a more efficient way to learn?

comment by Bugmaster · 2013-02-24T21:59:38.189Z · score: 2 (2 votes) · LW · GW

"Living" the way I used it means "living to the fullest" or, a little more specifically "feeling really engaged in life" or "feeling fulfilled".

I understand that you do not feel fulfilled when watching TV, but other people might. I would agree with your reply on Kawoomba's sub-thread:

Now, if you want to disagree with me on whether they think they are "really living", that might be really interesting. I acknowledge that mind projection fallacy might be causing me to think they want what I want.

For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture. You say:

I puzzle at how you classify watching something together as "social contact". To me, being in the same room is not a social life.

Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.), with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the "emotional intimacy" you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).

For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what "tsuyoku naritai" means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.

Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn't mean that it's good.

Okay, so to clarify, your position is that entertainment is a more efficient way to learn?

No, this phrasing is too strong. I meant what I said before: many people find it easier to internalize new ideas when they are presented as part of a narrative. This does not mean that entertainment is a more efficient way to learn all things for all people, or that it is objectively the best technique for learning things, or anything of the sort.

comment by Desrtopa · 2013-02-28T06:14:32.880Z · score: 2 (2 votes) · LW · GW

Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn't mean that it's good.

Why try to justify entertainment in terms of productivity per time? Is there any reason this makes more sense than, say, justifying productivity in terms of how much entertainment it allows for?

comment by Bugmaster · 2013-02-28T10:07:38.061Z · score: 1 (1 votes) · LW · GW

Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible. This means that spending time on any activities that do not contribute to this goal is irrational. A paperclip maximizer, for example, wouldn't spend any time on watching soap operas or reading romance novels -- unless doing so would lead to more paperclips (which is unlikely).

Of course, one could argue that consumption of passive entertainment does contribute to the average human's goals, since humans are unable to function properly without some downtime. But I don't know if I'd go so far as to claim that this is a feature, and not a bug, just like cancer or aging or whatever else evolution has saddled us with.

comment by RichardKennaway · 2013-02-28T14:38:05.855Z · score: 3 (7 votes) · LW · GW

Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible.

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory. I'd even call it the sort of toxic mindwaste that RationalWiki loves to mock.

Once you've built that optimised world, who gets to slack off and just live in it, and how will they spend their time?

comment by Viliam_Bur · 2013-02-28T20:05:02.824Z · score: 3 (3 votes) · LW · GW

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory.

Why exactly? I mean, my intuition also tells me it's wrong... but my intuition has a few assumptions that disagree with the proposed scenario. Let's make sure the intuition does not react to a strawman.

For example, when in real life people "work like slaves for a future paradise", the paradise often does not happen. Typically, the people have a wrong model of the world. (The wrong model is often provided by their leader, and their work in fact results in building their leader's personal paradise, nothing more.) And even if their model is right, their actions are more optimized for signalling effort than for real efficiency. (Working very hard signals more virtue than thinking and coming up with a smart plan to make a lot of money and pay someone else to do more work than we could.) Even with smart and honest people, there will typically be something they ignored or could not influence, such as someone powerful coming and taking the results of their work, or a conflict starting and destroying their seeds of the paradise. Or simply their internal conflicts, or lack of willpower to finish what they started.

The lesson we should take from this is that even if we have a plan to work like slaves for a future paradise, there is a very high prior probability that we missed something important. Which means that in fact we are not working for a future paradise; we only mistakenly think so. I agree that the prior probability is so high that even the most convincing reasoning and plans are unlikely to outweigh it.

However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don't have to worry about mistakes in your plans, because Omega has either verified their correctness, or is going to provide corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?

In other words, is your objection "in situation X the decision D is wrong", or is it "the situation X is so unlikely that any decision D based on assumption of X will in real life be wrong"?

comment by RichardKennaway · 2013-02-28T22:52:46.861Z · score: 1 (7 votes) · LW · GW

However, for the sake of experiment, imagine that Omega comes and tells you

When Omega enters a discussion, my interest in it leaves.

comment by wedrifid · 2013-03-01T09:44:58.937Z · score: 1 (5 votes) · LW · GW

When Omega enters a discussion, my interest in it leaves.

To the extent that someone is unable to use established tools of thought to focus attention on the important aspects of a problem, their contribution to a conversation is likely to be negative. This is particularly the case in decision theory, where it correlates strongly with pointless fighting of the counterfactual and muddled thinking.

comment by RichardKennaway · 2013-03-08T23:29:43.846Z · score: 1 (5 votes) · LW · GW

Omega has its uses and its misuses. I observe the latter on LW more often than the former. The present example is one such.

And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.

comment by wedrifid · 2013-03-09T01:24:26.110Z · score: 2 (2 votes) · LW · GW

And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.

I intended the general claim as stated. I don't know you well enough for it to be personal. I will continue to support the use of Omega (and simplified decision theory problems in general) as a useful way to think.

For practical purposes pronouncements like this are best interpreted as indications that the speaker has nothing of value to say on the subject, not as indications that the speaker is too sophisticated for such childish considerations.

comment by RichardKennaway · 2013-03-09T14:55:28.010Z · score: -1 (1 votes) · LW · GW

For practical purposes pronouncements like this are best interpreted as indications

For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.

comment by wedrifid · 2013-03-09T16:03:29.199Z · score: 0 (0 votes) · LW · GW

For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.

This is evidently not a behavior you practice.

comment by Peterdjones · 2013-03-09T09:40:22.984Z · score: 0 (0 votes) · LW · GW

It is counterintuitive that you should slave for people you don't know, perhaps because you can't be sure you are serving their needs effectively. Even if that objection is removed by bringing in an omniscient oracle, there still seems to be a problem, because the prospect of one generation slaving to create paradise for another isn't fair. The simple version of utilitarianism being addressed here only sums individual utilities, and is blind to things that can only be defined at the group level, like justice and equality.

comment by [deleted] · 2013-03-01T12:59:39.106Z · score: 0 (0 votes) · LW · GW

However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don't have to worry about mistakes in your plans, because Omega has either verified their correctness, or is going to provide corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?

For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannon ball fall in the same time?

comment by Bugmaster · 2013-03-01T22:11:53.390Z · score: 0 (0 votes) · LW · GW

For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannon ball fall in the same time?

I believe the answer is "yes", but I had to think about that for a moment. I'm not sure how that's relevant to the current discussion, though.

I think your real point might be closer to something like, "thought experiments are useless at best, and should thus be avoided", but I don't want to put words into anyone's mouth.

comment by [deleted] · 2013-03-02T11:57:35.512Z · score: 0 (0 votes) · LW · GW

My point was something like, “of course if you assume away all the things that cause slave labour to be bad then slave labour is no longer bad, but that observation doesn't yield much of an insight about the real world”.

comment by Bugmaster · 2013-03-04T21:13:25.066Z · score: 0 (0 votes) · LW · GW

That makes sense, but I don't think it's what Viliam_Bur was talking about. His point, as far as I could tell, was that the problem with slave labor is the coercion, not the labor itself.

comment by Jack · 2013-03-09T01:45:32.604Z · score: 2 (2 votes) · LW · GW

"Decision theory" doesn't mean the same thing as "value system" and we shouldn't conflate them.

comment by Peterdjones · 2013-03-09T09:51:37.623Z · score: 1 (1 votes) · LW · GW

Yep. A morality that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken morality.

comment by Bugmaster · 2013-02-28T16:48:02.365Z · score: 1 (1 votes) · LW · GW

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise ... is prima facie a broken decision theory.

Why? I mean, I do agree with you personally, but I don't see why such a decision theory is objectively bad. You ask,

Once you've built that optimised world, who gets to slack off and just live in it, and how will they spend their time?

But the answer depends entirely on your goals. These can be as modest as, "the world will be just like it is today, but everyone wears a party hat", or as ambitious as, "the world contains as many paperclips as physically possible". In the latter case, if you asked the paperclip maximizer "who gets to slack off?", it wouldn't find the question relevant in the least. It doesn't matter who gets to do what; all that matters are the paperclips.

You might argue that a paperclip-filled world would be a terrible place, and I agree, but that's just because you and I don't value paperclips as much as Clippy does. Clippy thinks your ideal world is terrible too, because it contains a bunch of useless things like "happy people in party hats", and not nearly enough paperclips.

However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time. Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I'm not sure whether it's a bug or a feature.

comment by RichardKennaway · 2013-02-28T17:52:19.424Z · score: 3 (3 votes) · LW · GW

However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time.

This is proving the conclusion by assuming it.

Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I'm not sure whether it's a bug or a feature.

The words make a perfectly logical pattern, but I find that the picture they make is absurd. The ontology has gone wrong.

Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.

BTW, for what it's worth, I do not watch TV. And now I am imagining a chapter of that book entitled "Never Sleep Alone".

comment by ygert · 2013-02-28T17:58:01.429Z · score: 7 (7 votes) · LW · GW

Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.

Actually, I think that the world described in that SMBC cartoon is far preferable to the standard DC comics world with Superman. I do not think that doing what Superman did there is a memetic immune disorder, but rather a (successful) attempt to make the world a better place.

comment by RichardKennaway · 2013-02-28T18:37:19.405Z · score: 1 (1 votes) · LW · GW

You would, then, not walk away from Omelas?

comment by Desrtopa · 2013-02-28T19:05:28.105Z · score: 10 (12 votes) · LW · GW

I definitely wouldn't. A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.

It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you're deciding from behind a veil of ignorance which society to be a part of, your expected well being is going to be higher in Omelas.

Back when I was eleven or so, I contemplated this, and made a precommitment that if I were ever in a situation where I'm offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately, without giving myself any time to contemplate what I'd be getting myself into. So in that sense, I've effectively volunteered myself to be the tormented child.

I don't disagree with maximally efficient altruism, just with the idea that it's sensible to judge entertainment only as an instrumental value in service of productivity.

comment by drnickbone · 2013-03-01T08:08:22.826Z · score: 1 (3 votes) · LW · GW

It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you're deciding from behind a veil of ignorance which society to be a part of, your expected well being is going to be higher in Omelas.

You're assuming here that the "veil of ignorance" gives you exactly equal chance of being each citizen of Omelas, so that a decision under the veil reduces to average utilitarianism.

However, in Rawls's formulation, you're not supposed to assume that; the veil means you're also entirely ignorant about the mechanism used to incarnate you as one of the citizens, and so must consider all probability distributions over the citizens when choosing your society. In particular, you must assign some weight to a distribution picked by a devil (or mischievous Omega) who will find the person with the very lowest utility in your choice of society and incarnate you as that person. So you wouldn't choose Omelas.

This seems to be why Rawls preferred maximin decision theory under the veil of ignorance rather than expected utility decision theory.
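The contrast between the two decision rules can be sketched in a few lines of code. This is only a toy model, and every utility number in it is invented purely for illustration: under a uniform-chance veil, expected-utility reasoning favors the Omelas-like society, while Rawlsian maximin judges a society by its worst-off member and rejects it.

```python
# Toy model: choosing a society from behind a veil of ignorance.
# All utility numbers are hypothetical, chosen only to illustrate the two rules.
omelas = [100] * 9999 + [-1000]               # many happy citizens, one tormented child
real_city = [50] * 9000 + [0] * 999 + [-500]  # broadly mediocre, less extreme suffering

def expected_utility(society):
    """Uniform chance of being each citizen: reduces to average utilitarianism."""
    return sum(society) / len(society)

def maximin(society):
    """Rawls's rule: judge a society by the utility of its worst-off member."""
    return min(society)

# Expected utility favors Omelas; maximin favors the real city.
print(expected_utility(omelas), expected_utility(real_city))  # 99.89 44.95
print(maximin(omelas), maximin(real_city))                    # -1000 -500
```

The two rules disagree precisely because Omelas concentrates its suffering: averaging washes the tormented child out, while maximin looks at nothing else.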

comment by Desrtopa · 2013-03-01T13:40:03.381Z · score: 4 (4 votes) · LW · GW

In that case, don't use a Rawlsian veil of ignorance, it's not the best mechanism for addressing the decision. A veil where you have an equal chance of your own child being the victim to anyone else's (assuming you're already too old to be the victim) is more the sort of situation anyone actually deciding whether or not to live in Omelas would face.

Of course, I would pick Omelas even under the Rawlsian veil, since as I've said I'm willing to be the one who takes the hit.

comment by drnickbone · 2013-03-01T17:10:33.335Z · score: 0 (0 votes) · LW · GW

Ah, so you are considering the question "If Omelas already exists, should I choose to live there or walk away?" rather than the Rawlsian question "Should we create a society like Omelas in the first place?" The "veil of ignorance" meme nearly always refers to the Rawlsian concept, so I misunderstood you there.

Incidentally, I reread the story and there seems to be no description of how the child was selected in the first place or how he/she is replaced. So it's not clear that your own child does have the same chance of being the victim as anyone else's.

comment by Desrtopa · 2013-03-01T23:14:30.344Z · score: 3 (3 votes) · LW · GW

Well, as I mentioned in another comment some time ago (not in this thread,) I support both not walking away from Omelas, and also creating Omelases unless an even more utility efficient method of creating happy and functional societies is forthcoming.

Our society rests on a lot more suffering than Omelas, not just in an incidental way (such as people within our cities who don't have housing or medical care), but directly, through channels such as economic slavery, where companies rely on workers, mainly abroad, whom they keep locked in debt, and who could not leave to seek employment elsewhere even if they wanted to and other opportunities were forthcoming. I can respect a moral code that would lead people to walk out on Omelas as a form of protest, and that would also lead people to walk out on modern society to live on a self-sufficient seasteading colony; but I reject the notion that Omelas is worse than, or as bad as, our own society in a morally relevant way.

comment by shminux · 2013-02-28T20:22:03.127Z · score: 0 (2 votes) · LW · GW

A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.

I cannot fathom why a comment like that would be upvoted by anyone but an unfeeling robot. This is not even the dust-specks-vs-torture case, given that Omelas is not a very large city.

if I were ever in a situation where I'm offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately

Imagine that it is not you, but your child you must sacrifice. Would you shrug and say "sorry, my precious girl, you must suffer until you die so that your mommy/daddy can live a happy life"? I know what I would do.

comment by Desrtopa · 2013-03-01T00:15:36.641Z · score: 5 (5 votes) · LW · GW

Imagine that it is not you, but your child you must sacrifice. Would you shrug and say "sorry, my precious girl, you must suffer until you die so that your mommy/daddy can live a happy life"?

I hope I would have the strength to say "sorry, my precious girl, you must suffer until you die so that everyone in the city can live a happy life." Doing it just for myself and my own social circle wouldn't be a good tradeoff, but those aren't the terms of the scenario.

Considering how many of our basic commodities rely on sweatshop or otherwise extremely miserable labor, we're already living off the backs of quite a lot of tormented children.

comment by shminux · 2013-03-01T03:54:22.957Z · score: -4 (4 votes) · LW · GW

I hope I would have the strength to say "sorry, my precious girl, you must suffer until you die so that everyone in the city can live a happy life."

And there I thought that the Babyeaters lived only in Eliezer's sci-fi story...

comment by Desrtopa · 2013-03-01T04:09:24.888Z · score: 8 (8 votes) · LW · GW

The Babyeaters' babies outnumber the adults; their situation is analogous, not to the city of Omelas, but to a utopian city built on top of another, even larger, dystopian city, on which it relies for its existence.

I would rather live in a society where people loved and cherished their children, but also valued their society, and were willing to shut up and multiply and take the hit themselves, or to their own loved ones, for the sake of a common good that really is that much greater, and I want to be the sort of person I'd want others in that society to be.

I've never had children, but I have been in love, in a reciprocated relationship of the sort where it feels like it's actually as big a deal as all the love songs have ever made it out to be, and I think that sacrificing someone I loved for the sake of a city like Omelas is something I'd be willing to do in practice, not just in theory (and she never would have expected me to do differently, nor would I of her.) It's definitely not the case that really loving someone, with true depth of feeling, precludes acknowledgment that there are some things worth sacrificing even that bond for.

comment by shminux · 2013-03-01T18:28:40.553Z · score: -1 (3 votes) · LW · GW

I've never had children

I'm guessing that neither have most of those who upvoted you and downvoted me. I literally cannot imagine a worse betrayal than the scenario we've been discussing. I can imagine one kind-of-happy society where something like this would be OK, though.

comment by Qiaochu_Yuan · 2013-03-01T18:42:17.016Z · score: 6 (6 votes) · LW · GW

I cannot fathom why a comment like that would be upvoted by anyone but an unfeeling robot.

Sounds like you need to update your model of people who don't have children. Also, how aggressively do you campaign against things like sweatshop labor in third-world countries, which as Desrtopa correctly points out are a substantially worse real-world analogue? Do children only matter if they're your children?

comment by drethelin · 2013-02-28T20:32:39.717Z · score: 4 (4 votes) · LW · GW

The real problem with Omelas: it totally ignores the fact that there are children suffering, literally as we speak, in every city on the planet. Omelas somehow managed to get it down to one child. How many other children would you sacrifice for your own?

comment by shminux · 2013-02-28T20:50:57.511Z · score: 1 (1 votes) · LW · GW

The real problem with Omelas: it totally ignores the fact that there are children suffering, literally as we speak, in every city on the planet.

Unlike in the fictional Omelas, there is no direct dependence or direct sacrifice. Certainly it is possible to at least temporarily alleviate the suffering of others in this non-hypothetical world by sacrificing some of your fortune, but that's the difference between an active and a passive approach; there is a large gap there.

comment by satt · 2013-03-07T02:42:43.280Z · score: 0 (0 votes) · LW · GW

Related. Nornagest put their finger on this being a conflict between the consequentially compelling (optimizing for general welfare) and the psychologically compelling (not being confronted with knowledge of an individual child suffering torture because of you). I think Nornagest's also right that a fully specified Omelas scenario would almost certainly feel less compelling, which is one reason I'm not much impressed by Le Guin's story.

comment by Bugmaster · 2013-02-28T23:54:42.387Z · score: 2 (2 votes) · LW · GW

Imagine that it is not you, but your child you must sacrifice.

The situation is not analogous, since sacrificing one's child would presumably make most parents miserable for the rest of their days. In Omelas, however, the sacrifice makes people happy, instead.

comment by shminux · 2013-03-01T01:25:06.820Z · score: 0 (0 votes) · LW · GW

And I thought that the Babyeaters only existed in Eliezer's fiction...

comment by Bugmaster · 2013-02-28T20:08:15.365Z · score: 0 (0 votes) · LW · GW

I don't disagree with maximally efficient altruism, just with the idea that it's sensible to judge entertainment only as an instrumental value in service of productivity.

As I said in previous comments, I am genuinely not sure whether entertainment is a good terminal goal to have.

By analogy, I absolutely require sleep in order to be productive at all in any capacity; but if I could swallow a magic pill that removed my need for sleep (with no other side-effects), I'd do so in a heartbeat. Sleep is an instrumental goal for me, not a terminal one. But I don't know if entertainment is like that or not.

Thus, I'm really interested in hearing more about your thoughts on the topic.

comment by Desrtopa · 2013-03-01T00:26:14.894Z · score: 0 (0 votes) · LW · GW

I'm not sure that I would regard entertainment as a terminal goal, but I'm very sure I wouldn't regard productivity as one. As an instrumental goal, it's an intermediary between a lot of things that I care about, but optimizing for productivity seems like about as worthy a goal to me as paperclipping.

comment by Bugmaster · 2013-03-01T00:52:08.501Z · score: 0 (0 votes) · LW · GW

Right, agreed, but "productivity" is just a rough estimate of how quickly you're moving towards your actual goals. If entertainment is not one of them, then either it enhances your productivity in some way, or it reduces it, or it has no effect (which is unlikely, IMO).

Productivity and fun aren't orthogonal; for example, it is entirely possible that if your goal is "experience as much pleasure as possible", then some amount of entertainment would directly contribute to the goal, and would thus be productive. That said, though, I can't claim that such a goal would be a good goal to have in the first place.

comment by Bugmaster · 2013-02-28T19:58:45.949Z · score: 0 (0 votes) · LW · GW

This is proving the conclusion by assuming it.

How so? Imagine that you have two identical paperclip maximizers; for simplicity's sake, let's assume that they are not capable of radical self-modification (though the results would be similar if they were). Each agent is capable of converting raw titanium to paperclips at the same rate. Agent A spends 100% of its time on making paperclips. Agent B spends 80% of its time on paperclips, and 20% of its time on watching TV. If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first?

That is what the saying "he who would be Pope must think of nothing else" looks like in practice.

FeepingCreature addressed this better than I could in this comment. I understand that you find the idea of making paperclips (or political movements, or software, or whatever) all day every day with no breaks abhorrent, and so do I. But then, some people find polyamory abhorrent as well, and then they "polyhack" themselves and grow to enjoy it. Is entertainment your terminal value, or a mental bias? And if it is a terminal value, is it the best terminal value that you could possibly have?

comment by RichardKennaway · 2013-03-01T00:00:10.276Z · score: 1 (1 votes) · LW · GW

WARNING: This comment contains explicit discussion of an information hazard.

Imagine that you have two identical paperclip maximizers

I decline to do so. What imaginary creatures would choose whose choice has been written into their definition is of no significance. (This is also a reply to the comment of FeepingCreature you referenced.) I'm more interested in the practical question of how actual human beings, which this discussion began with, can avoid the pitfall of being taken over by a utility monster they've created in their own heads.

This is a basilisk problem. Unlike Roko's, which depends on exotic decision theory, this one involves nothing more than plain utilitarianism. Unlike the standard Utility Monster scenario, this one involves no imaginary entities or hypothetical situations. You just have to look at the actual world around you through the eyes of utilitarianism. It's a very short road from the innocent-sounding "the greatest good for the greatest number" to this: There are seven billion people on this planet. How can the good you could do them possibly be outweighed by any amount of your own happiness? Just by sitting there reading LessWrong you're killing babies! Having a beer? You're drinking dead babies. Own a car? You're driving on a carpet of dead babies! Murderer! Murderer! Add a dash of transhumanism and you can up the stakes to an obligation to bring about billions of billions of future humans throughout the universe living lives billions of times better than ours.

But even Peter Singer doesn't go that far, continuing to be an academic professor and meeting his utilitarian obligations by preaching utilitarianism and donating twenty percent of his salary to charity.

This is such an obvious failure mode for utilitarianism, a philosophy at least two centuries old, that surely philosophers must have addressed it. But I don't know what their responses are.

Christianity has the same problem, and handles it in practice by testing the vocation of those who come to it seeking to devote their whole life to the service of God, to determine whether they are truly called by God. For it is written that many are called, yet few are chosen. In non-supernatural terms, that means determining whether the applicant is psychologically fitted for the life they feel called to, and if not, deflecting their mania into some more productive route.

comment by TheOtherDave · 2013-03-01T03:30:12.235Z · score: 3 (3 votes) · LW · GW

Consider two humans, H1 and H2, both utilitarians.

H1 looks at the world the way you describe Peter Singer here.
H2 looks at the world "through the eyes of utilitarianism" as you describe it here.

My expectation is that H1 will do more good in their lifetime than H2.
What's your expectation?

comment by [deleted] · 2013-03-09T11:54:47.129Z · score: 0 (0 votes) · LW · GW

And then you have people like H0, who notice that H2 is crazy, decide that this means they shouldn't even try to be altruistic, and accuse H1 of hypocrisy because she's not like H2. (Exhibit A)

comment by RichardKennaway · 2013-03-01T09:57:06.048Z · score: 0 (2 votes) · LW · GW

That is my expectation also. However, persuading H2 of that ("but dead babies!") is likely to be a work of counselling or spiritual guidance rather than reason.

comment by TheOtherDave · 2013-03-01T22:11:52.836Z · score: 2 (2 votes) · LW · GW

Well... so, if we both expect H1 to do more good than H2, it seems that if we were to look at them through the eyes of utilitarianism, we would endorse being H1 over being H2.
But you seem to be saying that H2, looking through the eyes of utilitarianism, endorses being H2 over being H1.
I am therefore deeply confused by your model of what's going on here.

comment by RichardKennaway · 2013-03-08T23:23:51.457Z · score: 0 (2 votes) · LW · GW

Oh yes, H1 is more effective, healthier, saner, more rational, etc. than H2. H2 is experiencing existential panic and cannot relinquish his death-grip on the idea.

comment by TheOtherDave · 2013-03-08T23:42:39.425Z · score: 2 (2 votes) · LW · GW

You confuse me further with every post.

Do you think being a utilitarian makes someone less effective, healthy, sane, rational etc.?
Or do you think H2 has these various traits independent of their being a utilitarian?

comment by whowhowho · 2013-03-09T00:48:43.556Z · score: 1 (1 votes) · LW · GW

There's a lot of different kinds of utilitarian.

comment by RichardKennaway · 2013-03-08T23:50:05.164Z · score: 0 (2 votes) · LW · GW

WARNING: More discussion of a basilisk, with a link to a real-world example.

It's a possible failure mode of utilitarianism. Some people succumb to it (see George Price for an actual example of a similar failure) and some don't.

I don't understand your confusion and this pair of questions just seems misconceived.

comment by TheOtherDave · 2013-03-09T00:59:41.280Z · score: 1 (1 votes) · LW · GW

(shrug) OK.
I certainly agree with you that some utilitarians suffer from the existential panic and inability to relinquish their death-grips on unhealthy ideas, while others don't.
I'm tapping out here.

comment by whowhowho · 2013-03-09T00:47:11.186Z · score: 1 (1 votes) · LW · GW

One could reason that one is better placed to do good effectively when focusing on oneself, one's family, one's community, etc., simply because one understands them better.

comment by [deleted] · 2013-03-09T11:39:26.739Z · score: 0 (0 votes) · LW · GW

(Warning: replying to discussion of a potential information hazard.)

Whfg ol fvggvat gurer ernqvat YrffJebat lbh'er xvyyvat onovrf! Univat n orre? Lbh'er qevaxvat qrnq onovrf.

Gung'f na rknttrengvba (tvira gung ng gung cbvag lbh unqa'g nqqrq zragvbarq genafuhznavfz lrg) -- nf bs abj, vg'f rfgvzngrq gb gnxr zber guna gjb gubhfnaq qbyynef gb fnir bar puvyq'f yvsr jvgu Tvirjryy'f gbc-engrq punevgl. (Be vf ryrpgevpvgl naq orre zhpu zber rkcrafvir jurer lbh'er sebz?)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-01T18:52:18.658Z · score: 0 (4 votes) · LW · GW

Infohazard reference with no warning sign. Edit and reply to this so I can restore.

comment by RichardKennaway · 2013-03-08T23:18:33.872Z · score: 1 (1 votes) · LW · GW

Done. Sorry this took so long, I've been taken mostly offline by a biohazard for the last week.

comment by Bugmaster · 2013-03-01T01:04:25.703Z · score: 0 (0 votes) · LW · GW

What imaginary creatures would choose whose choice has been written into their definition is of no significance.

Are you saying that human choices are not "written into their definition" in some measure?

Also, keep in mind that a goal like "make more paperclips" does leave a lot of room for other choices. The agent could spend its time studying metallurgy, or buying existing paperclip factories, or experimenting with alloys, or attempting to invent nanotechnology, or some combination of these and many more activities. It's not constrained to just a single path.

Just by sitting there reading LessWrong you're killing babies! ... Add a dash of transhumanism and you can up the stakes to an obligation to bringing about billions of billions of future humans throughout the universe living lives billions of times better than ours.

On the one hand, I do agree with you, and I can't wait to see your proposed solution. On the other hand, I'm not sure what this has to do with the topic. I wasn't talking about billions of future humans or anything of the sort, merely about a single (semi-hypothetical) human and his goals; whether entertainment is a terminal or instrumental goal; and whether it is a good goal to have.

Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it? People with extremely low preferences for passive entertainment do exist, after all, so this scenario isn't entirely fantastic (other than for the magic pill part, of course).

comment by whowhowho · 2013-03-09T16:21:41.545Z · score: 0 (0 votes) · LW · GW

Are you saying that human choices are not "written into their definition" in some measure?

What is written into humans by evolution is hardly relevant. The point is that you can't prove anything about humans by drawing a comparison with imaginary creatures that have had something potentially quite different written into them by their creator.

comment by RichardKennaway · 2013-03-08T23:43:56.805Z · score: 0 (0 votes) · LW · GW

Are you saying that human choices are not "written into their definition" in some measure?

I have no idea what that even means.

On the one hand, I do agree with you, and I can't wait to see your proposed solution.

My only solution is "don't do that then". It's a broken thought process, and my interest in it ends with that recognition. Am I a soul doctor? I am not. I seem to be naturally resistant to that failure, but I don't know how to fix anyone who isn't.

Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it?

What desire for passive entertainment? For that matter, what is this "passive entertainment"? I am not getting a clear idea of what we are talking about. At any rate, I can't imagine "entertainment" in the ordinary meaning of that word being a terminal goal.

FWIW, I do not watch television, and have never attended spectator sports.

People with extremely low preferences for passive entertainment do exist, after all

Quite.

comment by Bugmaster · 2013-03-09T02:48:00.931Z · score: 0 (0 votes) · LW · GW

Are you saying that human choices are not "written into their definition" in some measure?

I have no idea what that even means.

To rephrase: do you believe that all choices made by humans are completely under the humans' conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?

You objected to my using Clippy as an analogy to human behaviour, on the grounds that Clippy's choices are "written into its definition". My point is that (a) Clippy is free to make whatever choices it wants, as long as it believes (correctly or erroneously) that such choices would lead to more paperclips; (b) we humans operate in a similar way, only we care about things other than paperclips; and therefore (c) Clippy is a valid analogy.

My only solution is "don't do that then".

Don't do what? Do you have a moral theory which works better than utilitarianism/consequentialism?

What desire for passive entertainment? For that matter, what is this "passive entertainment"?

You don't watch TV or attend sports, but do you read any fiction books? Listen to music? Look at paintings or sculptures (on your own initiative, that is, and not as part of a job)? Enjoy listening to some small subclass of jokes? Watch any movies? Play video games? Stare at a fire at night? I'm just trying to pinpoint your general level of interest in entertainment.

At any rate, I can't imagine "entertainment" in the ordinary meaning of that word being a terminal goal.

Just because you personally can't imagine something doesn't mean it's not true. For example, art and music -- both of which are forms of passive entertainment -- have been a part of human history ever since the caveman days, and continue to flourish today. There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music. On the other hand, there are lots of things hardcoded in our genes that we'd be better off without...

comment by RichardKennaway · 2013-03-09T15:08:31.022Z · score: 0 (2 votes) · LW · GW

To rephrase: do you believe that all choices made by humans are completely under the humans' conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?

The whole language is wrong here.

What does it mean to talk about a choice being "completely under the humans' conscious control"? Obviously, the causal connections wind through and through all manner of things that are outside consciousness as well as inside. When could you ever say that a decision is "completely under conscious control"?

Then you talk as if a decision not "completely under conscious control" must be "written into the genes". Where does that come from?

do you read any fiction books?

Why do you specify fiction? Is fiction "passive entertainment" but non-fiction something else?

There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music.

What is this "us" that is separate from and acted upon by our genes? Mentalistic dualism?

My only solution is "don't do that then".

Don't do what? Do you have a moral theory which works better than utilitarianism/consequentialism?

Don't crash and burn. I have no moral theory and am not impressed by anything on offer from the philosophers.

To sum up, there's a large and complex set of assumptions behind everything you're saying here that I don't think I share, but I can only guess at from glimpsing the shadowy outlines. I doubt further discussion will get anywhere useful.

comment by whowhowho · 2013-03-09T00:53:10.524Z · score: 0 (0 votes) · LW · GW

Are you saying that human choices are not "written into their definition" in some measure?

I think Bugmaster is equating being "written in" in the sense of a stipulation in a thought experiment with being "written in" in the sense of being the outcome of an evolutionary process.

comment by RichardKennaway · 2013-03-09T15:14:17.800Z · score: 0 (2 votes) · LW · GW

If he is, he shouldn't. These are completely different concepts.

comment by whowhowho · 2013-03-09T00:55:24.331Z · score: 0 (0 votes) · LW · GW

If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first ?

That has no relevance to morality. Morality is not winning; it is not efficiently fulfilling an arbitrary utility function.

comment by IlyaShpitser · 2013-02-28T16:55:25.809Z · score: 1 (1 votes) · LW · GW

I mean, I do agree with you personally, but I don't see why such a decision theory is objectively bad.

This decision theory is bad because it fails the "Scientology test."

comment by FeepingCreature · 2013-02-28T17:32:07.050Z · score: 3 (3 votes) · LW · GW

That's hardly objective. The challenge is to formalize that test.

Btw: the problem you're having is not due to any decision theory but due to the goal system. You want there to be entertainment and fun and the like. However, the postulated agent had a primary goal that did not include entertainment and fun. This seems alien to us, but for the mindset of such an agent "eschew entertainment and fun" is the correct and sane behavior.

comment by Bugmaster · 2013-02-28T20:14:26.820Z · score: 0 (0 votes) · LW · GW

Exactly, though see my comment on a sibling thread.

Out of curiosity though, what is the "Scientology test"? Is that some commonly-accepted term from the Less Wrong jargon? Presumably it doesn't involve poorly calibrated galvanic skin response meters... :-/

comment by FeepingCreature · 2013-03-01T19:06:27.983Z · score: 2 (2 votes) · LW · GW

Not the commenter, but I think it's just "it makes you do crazy things, like Scientologists". It's not a standard LW thing.

comment by [deleted] · 2013-03-01T12:54:08.730Z · score: 0 (0 votes) · LW · GW

if your goal is to optimize the world

Optimize it for what?

comment by Bugmaster · 2013-03-01T16:46:57.565Z · score: 1 (1 votes) · LW · GW

That is kind of up to you. That's the problem with terminal goals...

comment by [deleted] · 2013-03-09T13:02:06.864Z · score: 0 (0 votes) · LW · GW

For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture.

Music is only passive entertainment if you just listen to it, not if you sing it, play it, or dance to it.

Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.), with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the "emotional intimacy" you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).

I agree that people spend lots of time talking about these kinds of things, and that the more shared topics of conversation you have with someone the easier it is to socialize with them, but I disagree that there are few non-technical things one can talk about other than what you get from passive entertainment. I seldom watch TV/films/sports, but I have plenty of non-technical things I can talk about with people -- parties we've been to, people we know, places we've visited, our tastes in food and drinks, unusual stuff that happened to us, what we've been doing lately, our plans for the near future, ranting about politics, conspiracy theories, the freakin' weather, whatever -- and I'd consider talking about some of these topics to build more ‘emotional intimacy’ than talking about some Hollywood movie or the Champions League or similar. (Also, I take exception to the apparent implication of the parenthetical at the end of the paragraph -- it is possible to entertain people by talking about STEM topics, if you're sufficiently Feynman-esque about that.)

For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what "tsuyoku naritai" means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.

I have read very little of that kind of fiction, and still I haven't felt excluded by that in the slightest (well, except that one time when the latest HPMOR thread clogged up the top Discussion comments of the week when I hadn't read HPMOR yet, and the occasional Discussion threads about MLP -- but that's a small minority of the time).

comment by Bugmaster · 2013-02-24T22:40:53.001Z · score: 0 (0 votes) · LW · GW

This article, courtesy of the recent Seq Rerun, seems serendipitous:

http://lesswrong.com/lw/yf/moral_truth_in_fiction/

comment by Kawoomba · 2013-02-23T06:13:08.058Z · score: 0 (4 votes) · LW · GW

The problem I see with television is that the average person spends 4 hours a day watching it. (...) Spending four hours a day in fantasy mode is not possible for me (I'm too motivated to DO something) and I don't seem to need anywhere near that much daydreaming.

What's wrong with live and let live (for their notion of 'living')? You can value "DO"ing something (apparently not counting daydreaming) over other activities for yourself; that's your prerogative, but why do you get to say who is and isn't "living"?

comment by Epiphany · 2013-02-23T08:57:07.623Z · score: 2 (2 votes) · LW · GW

That was addressed here:

I imagine that if asked whether they would have preferred to watch x number of shows or to have spent all of that free time getting out there and living, most people would probably choose the latter - and that's sad.

It's not that I want to tell them whether they're "really living", it's that I think they don't think spending so much of their free time on TV is "really living".

Now, if you want to disagree with me on whether they think they are "really living", that might be really interesting. I acknowledge that mind projection fallacy might be causing me to think they want what I want.

comment by taelor · 2013-02-23T11:18:15.098Z · score: 2 (2 votes) · LW · GW

I suspect that many people who enjoy television, if asked, would claim that socializing with friends or other things are somehow better or more pure, but only because TV is a low status medium, and so saying that watching TV isn't "real living" has become somewhat of a cached thought within our culture; I'd suspect you'd have a much harder time finding people who will claim that spending time enjoying art or reading classic literature or other higher status fictional media doesn't count as "real living".

comment by Nornagest · 2013-02-23T09:40:05.544Z · score: 1 (1 votes) · LW · GW

It's not that I want to tell them whether they're "really living", it's that I think they don't think spending so much of their free time on TV is "really living".

I think I might actually expect people to endorse different activities in this context at different levels of abstraction.

That is, if you asked J. Random TV Consumer to rank (say) TV and socialization, or study, or some other venue for self-improvement, I wouldn't be too surprised if they consistently picked the latter. But if you broke down these categories into specific tasks, I'd expect individual shows to rate more highly -- in some cases much more highly -- than implied by the category rating.

I'm not sure what this implies about true preferences.

comment by Epiphany · 2013-02-23T10:17:04.956Z · score: 0 (2 votes) · LW · GW

I think I need an example of this to understand your point here.

comment by Nornagest · 2013-02-23T10:40:04.013Z · score: 1 (1 votes) · LW · GW

Well, for example, I wouldn't be too surprised to find the same person saying both "I'd rather socialize than watch TV" and "I'd rather watch Game of Thrones [or other popular TV show] than call my friend for dinner tonight".

Of course that's just one specialization, and the plausibility of a particular scenario depends on personality and relative appeal.

comment by Bugmaster · 2013-02-21T05:14:16.285Z · score: 4 (6 votes) · LW · GW

I have, in fact, read the Speech before, quite some time ago. My point is that outstanding teachers can make a big positive difference in the students' lives (at least, that was the case for me), largely by deliberately avoiding some or all of the anti-patterns that Gatto lists in his Speech. We were also taught the basics of critical thinking in an English class (of all places), though this could've been a fluke (or, once again, a teacher's personal initiative).

I should also point out that these anti-patterns are not ubiquitous. I was lucky enough to attend a school in another country for a few of my teenage years (a long, long time ago). During a typical week, we'd learn how to solve equations in Math class, apply these skills to exercises in Statistics, stage an experiment and record the results in Physics, then program in the statistics formulae and run them on our experimental results in Informatics (a.k.a. Computer Science). Ideas tend to make more sense when connections between them are revealed.

I haven't seen anything like this in US-ian education, but I wouldn't be surprised to find out that some school somewhere in the US is employing such an approach.

Edited to add:

Failing to teach reasoning skills in school is a crime against humanity.

I share your frustration, but there's no need to overdramatize.

comment by MugaSofer · 2013-02-21T11:11:23.559Z · score: 0 (2 votes) · LW · GW

Offtopic: Does anyone know where you can find that speech in regular HTML format? I definitely read it in that format, but I can't find it again.

Ontopic: While I appreciate (and agree with) the point he's making, overall, he uses a lot of exaggeration and hyperbole, at best. It seems pretty clear that specific teachers can make a difference to individuals, even if they can't enact structural change.

Also:

What do you mean by "crime against humanity"?

comment by Bugmaster · 2013-02-21T20:56:38.543Z · score: 0 (0 votes) · LW · GW

I could've sworn that I saw his entire book in HTML format somewhere, a long time ago, but now I can't find it. Perhaps I only imagined it.

From what I recall, in the later chapters he claims that our current educational system was deliberately designed in meticulous detail by a shadowy conspiracy of statists bent on world (or, at the very least, national) domination. Again, my recollection could be wildly off the mark, but I do seem to remember staring at my screen and thinking, "Really, Gatto? Really?"

comment by Nornagest · 2013-02-22T05:24:57.266Z · score: 1 (1 votes) · LW · GW

I read Dumbing Us Down, which might not be the book you're thinking of -- if memory serves, he's written a few -- but I don't remember him ever quite going with the conspiracy theory angle.

He skirts the edges of it pretty closely, granted. In the context of history of education, his thesis is basically that the American educational system is an offshoot of the Prussian system and that that system was picked because it prioritizes obedience to authority. Even if we take that all at face value, though, it doesn't require a conspiracy -- just a bunch of 19th- and early 20th-century social reformers with a fondness for one of the more authoritarian regimes of the day, openly doing their jobs.

Now, while it's pretty well documented that Horace Mann and some of his intellectual heirs had the Prussian system in mind, I've never seen historical documentation giving exactly those reasons for choosing it. And in any case the systems diverged in the mid-1800s and we'd need to account for subsequent changes before stringing up the present-day American school system on those charges. But at its core it's a pretty plausible hypothesis -- many of the features that after two World Wars make the Prussians look kind of questionable to us were, at the time, being held up as models of national organization, and a lot of that did have to do with regimentation of various kinds.

comment by MugaSofer · 2013-02-21T11:20:42.920Z · score: -1 (1 votes) · LW · GW

For information regarding religion, I recommend the blog of a former Christian (Luke Muehlhauser) as an addition to your reading list. That is here: Common Sense Atheism. I recommend this in particular because he completed the process you've started - the process of reviewing Christian beliefs - so Luke's writing may be able to save you significant time and provide you with information you may not encounter in other sources.

Speaking as a rationalist and a Christian, I've always found that a bit too propaganda-ish for my tastes. And I wouldn't call Luke's journey "completed", exactly. Still, it can be valuable to see what others have thought in similar positions to you, in a shoulders-of-giants sort of way.

I think it would be better to focus on improving your rationality, rather than seeking out tracts that disagree with you. There's nothing wrong with reading such tracts, as long as you're rational enough not to internalize mistakes from it (on either side) but I wouldn't make it your main goal.

comment by shminux · 2013-02-20T21:01:33.607Z · score: 1 (1 votes) · LW · GW

I took a step back and had a crisis of belief, not the first time, but this time I followed the prescribed method and came to a modified conclusion, though I still find it rational and advantageous to serve my 2 year mission.

I would love to hear more details, both about the process and about the conclusion, if you are brave/foolish enough to share.

comment by Bugmaster · 2013-02-20T22:55:09.113Z · score: 0 (0 votes) · LW · GW

I hope to find the best evidence about theology here. I don't mean evidence for or against, just the evidence about the subject.

What does "evidence about X" mean, as opposed to "evidence for X"?

comment by Qiaochu_Yuan · 2013-02-20T23:40:50.037Z · score: 7 (7 votes) · LW · GW

My interpretation is "evidence that was not obtained in the service of a particular bottom line."

comment by Desrtopa · 2013-02-20T23:05:55.607Z · score: 2 (2 votes) · LW · GW

I'd interpret it as "evidence which bears on the question X" as opposed to "Evidence which supports answer Y to question X."

For instance, if you wanted to know whether anthropogenic climate change was occurring, you would want to search for "evidence about anthropogenic climate change" rather than "evidence for anthropogenic climate change."

comment by Bugmaster · 2013-02-21T00:31:39.774Z · score: 0 (0 votes) · LW · GW

Fair enough, that makes sense. I guess I just wasn't used to seeing this verbal construct before.

comment by [deleted] · 2013-02-21T13:07:35.607Z · score: 0 (0 votes) · LW · GW

The former means that the log-likelihood ratio log(P(E|X)/P(E|~X)) is non-negligible in magnitude; the latter means that it is positive.
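That distinction can be sketched numerically. A minimal Python illustration, using made-up probabilities (the likelihoods, threshold, and function names here are all hypothetical, chosen only to show the sign-versus-magnitude difference):

```python
import math

def log_likelihood_ratio(p_e_given_x: float, p_e_given_not_x: float) -> float:
    """Return log2(P(E|X) / P(E|~X)), the weight of evidence E carries about X, in bits."""
    return math.log2(p_e_given_x / p_e_given_not_x)

# A hypothetical observation that occurs 90% of the time if X holds,
# but only 40% of the time if X does not hold:
llr_supporting = log_likelihood_ratio(0.9, 0.4)   # positive: evidence *for* X

# A different observation that is much likelier under ~X:
llr_opposing = log_likelihood_ratio(0.1, 0.6)     # negative: evidence *against* X

def is_evidence_about(llr: float, threshold: float = 0.1) -> bool:
    """'Evidence about X': the magnitude of the ratio is non-negligible, either sign."""
    return abs(llr) > threshold

def is_evidence_for(llr: float) -> bool:
    """'Evidence for X': the ratio is specifically positive."""
    return llr > 0
```

Both observations count as "evidence about X" (non-negligible magnitude), but only the first is "evidence for X" (positive sign); searching only for the latter is how a bottom line gets written in advance.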

comment by Ford · 2013-02-20T21:20:13.176Z · score: 0 (0 votes) · LW · GW

You may find this story (a scientist dealing with evidence that conflicts with his religion) interesting.

http://www.exmormonscholarstestify.org/simon-southerton.html

comment by Nisan · 2013-02-20T21:10:13.508Z · score: 0 (2 votes) · LW · GW

Sam Bhagwat has served a mission and has posted here about how to emulate the Latter-day Saints' approach to community-building.

comment by [deleted] · 2013-01-19T02:38:40.934Z · score: 8 (8 votes) · LW · GW

Hello. I've read Sequence articles and discussions on this website for a while now. Been hesitant to join before because I like to keep my identity small, but recently realized that being able to talk to others about topics on this site will make me more effective at reaching my goals.

Armchairs are very comfortable and I'm having some mental difficulty putting the effort into the practice of achieving set goals. It's very hard to actually do stuff and easy to just read about interesting topics without engaging.

I'm interested more in meta-ethics than in physics, more in decision theory than practical AI. My first comments will likely be in the sequences or in discussion comments of a few specific natures.

This should be fun, I look forward to talking with you. Ask me any questions that arouse your curiosity.

The browsing experience with Kibitzing off is strange but not unpleasant. How long did it take for you to get accustomed to it?

comment by Briony · 2013-01-04T00:51:25.494Z · score: 8 (8 votes) · LW · GW

Hi, my name is Briony Keir, I'm from the UK. I stumbled on this site after getting into an argument with someone on the internet and wondering why they ended up failing to refute my arguments and instead resorted to insults. I've had a read-around before posting and it's great to see an environment where rational thought is promoted and valued; I have a form of autism called Asperger Syndrome which, among other things, allows me to rely on rationality and logic more than other people seem to be able to - I too often get told I'm 'too analytical' and I 'shouldn't poke holes in other people's beliefs' when, the way I see it, any belief is there to be challenged and, indeed, having one's beliefs challenged can only make them stronger (or serve as an indicator that one should find a more sensible viewpoint). I'm really looking forward to reading what people have to say; my environment (both educational and domestic) has so far served more to enforce a 'we know better than you do so stop talking back' rule rather than one which allows for disagreement and resolution on a logical basis, and so this has led to me feeling both frustrated and unchallenged intellectually for quite some time. I hope I prove worthy of debate over the coming weeks and months :)

comment by kodos96 · 2013-01-04T04:56:16.544Z · score: 1 (1 votes) · LW · GW

I have a form of autism called Asperger Syndrome

This is not at all unusual here at LessWrong... I can't seem to find a link, but I seem to recall that a fairly large portion of LessWrong-ers (at least relative to the general population) have Aspergers (or at least are somewhat Asperger-ish), myself included.

I'm not entirely sure though that I agree with the statement that Aspergers is "a form of autism"... I realize that that has been the general consensus for a while now, but I've read some articles (again, can't find a link at the moment, sorry) suggesting that Aspergers is not actually related to Autism at all... personally, my feeling on the matter is that "Aspergers" isn't an actual "disease" per se, but rather just a cluster of personality traits that happen to be considered socially unacceptable by modern mainstream culture, and have therefore been arbitrarily designated as a "disease".

In any case, welcome to LessWrong - I look forward to your contributions in the future!

comment by anansi133 · 2013-01-04T06:01:16.203Z · score: 2 (2 votes) · LW · GW

I'm not entirely sure though that I agree with the statement that Aspergers is "a form of autism"

If anything, I'd be tempted to say that autism is a more pronounced degree of Asperger's. I certainly catch myself in the spectrum that includes ADD as well.

The whole idea of neurodiversity is kind of exciting, actually. If there can be more than one way to appropriately interact with society, everyone gets richer.

comment by kodos96 · 2013-01-04T06:15:20.781Z · score: 0 (0 votes) · LW · GW

If anything, I'd be tempted to say that autism is a more pronounced degree of Asperger's

That seems to me to be basically equivalent to saying that aspergers is a lesser form of autism. Again, sorry I can't find the links at the moment, but I recall reading several articles suggesting that the two might actually not be related at all, neurologically.

The whole idea of neurodiversity is kind of exciting, actually. If there can be more than one way to appropriately interact with society, everyone gets richer.

I agree. Unfortunately, modern culture and institutions (like the public education system for one notable example) don't seem to be set up based on this premise.

comment by NoisyEmpire · 2013-01-03T16:42:47.278Z · score: 8 (8 votes) · LW · GW

I’m Taylor Smith. I’ve been lurking since early 2011. I recently finished a bachelor’s in philosophy but got sort of fed up with it near the end. Discovering the article on belief in belief is what first hooked me on LessWrong, as I’d already had to independently invent this idea to explain a lot of the silly things people around me seemed to be espousing without it actually affecting their behavior. I then devoured the Sequences. Finding LessWrong was like finding all the students and teachers I had hoped to have in the course of a philosophy degree, all in one place. It was like a light switching on. And it made me realize how little I’d actually learned thus far. I’m so grateful for this place.

Now I’m an artist – a writer and a musician.

A frequently-confirmed observation of mine is that art – be it a great sci-fi novel, a protest song, an anti-war film – works as a hack to help to change people’s minds who are resistant or unaccustomed to pure rational argument. This is true especially of ethical issues; works which go for the emotional gut-punch somehow make people change their minds. (I think there are a lot of overlapping reasons for this phenomenon, but one certainly is that a well-told story or convincing song provides an opportunity for empathy. It can also help people envision the real consequences of a mind-change in an environment of relative emotional safety.) This, even though of course the mere fact that someone who holds position X made a good piece of art about X doesn’t actually offer much real evidence for the truth of X. Thus, a perilous power. The negative word for the extreme end of this phenomenon is “propaganda.” Conversely, when folks end up agreeing with whatever a work of art brought them to believe, they praise it as “insightful” or some such. You can sort of understand why Plato was worried about having poets – those irrational, un-philosophic things – in his ideal city, swaying his people’s emotions and beliefs.

If I’m going to help save the world, though, I think I do it best through a) giving money to the efficient altruists and the smart people and b) trying to spread true ideas by being a really successful and popular creator.

But that means I have to be pretty damn certain what the true ideas are first, or I’m just spouting pretty, and pretty useless, nonsense.

So thank you, LessWrongers, for all caring about truth together.

comment by John_Maxwell (John_Maxwell_IV) · 2013-01-12T08:52:52.926Z · score: 0 (0 votes) · LW · GW

I think art that spreads the "politics is the mind-killer" meme (which actually seems to be fairly novel outside LW: 1 2) could be a good use of art. Some existential risks, like nuclear weapons, seem likely to be controlled by world governments. The other day it occurred to me that world leaders are people too and are likely susceptible to the same biases as typical folk. If world leaders were less "Go us!" and more "Go humanity!", that could be Really Good.

Welcome to LW, by the way!

comment by junk_science · 2012-12-20T20:27:24.830Z · score: 8 (8 votes) · LW · GW

Hello everyone,

I found Less Wrong through "Harry Potter and the Methods of Rationality" like many others. I started reading more of Eliezer Yudkowsky's work a few months ago and was completely floored. I now recommend his writing to other people at the slightest provocation, which is new for me. Like others, I'm a bit scared by how thoroughly I agree with almost everything he says, and I make a conscious effort not to agree with things just because he's said them. I decided to go ahead and join in hopes that it would motivate me to start doing more active thinking of my own.

comment by Benjamin_Martens · 2012-11-30T09:57:56.623Z · score: 8 (8 votes) · LW · GW

Hello all, My name is Benjamin Martens, a 19-year-old student from Newcastle, Australia. Michael Anissimov, director of Humanity+, added me to the Less Wrong Facebook group. I don’t know his reasons for adding me, but regardless I am glad that he did.

My interest in rational thinking, and in conscious thinking in general, stems, first, from the consequences of my apostasy from Christianity, which is my family's faith; second, from my combative approach to my major depression, which I have (mostly) successfully beaten into submission through an analysis of some of the possible states of the mind and of the world— Less Wrong and the study of cognitive biases will, I hope, further aid me in revealing my depressive worldview as groundless; or, if not as groundless, then at least as something which is not by nature aberrant and which is, to some degree, justified; third, and in connection to my vegan lifestyle, I aim to understand the psychology which might lead a person to cause another being to suffer; and last, and in connection to all aforementioned, it is my hope that an understanding of cognitive biases will allow not merely myself to edge nearer to the true state of things, but also, through me, for others to do so; I want Less Wrong to school me in some underhand PR techniques of psychological manipulation or modification which will help me teach others about scepticism, about the errors of learned helplessness and about ways out of the self-reinforcing and self-justifying loops of the pessimistic worldview, and allow me to ably coax others towards cruelty-free ways of living. So, that's me. Hello, Less Wrong.

comment by OneLonePrediction · 2012-11-16T08:01:23.597Z · score: 8 (8 votes) · LW · GW

I'm here to make one public prediction that I want to be as widely-read as possible. I'm here to predict publicly that the apparent increase in autism prevalence is over. It's important to predict it because it distinguishes between the position that autism is increasing unstoppably for no known reason (or because of vaccines) and the position that autism has not increased in prevalence, but diagnosis has increased in accuracy and a greater percentage of people with autism spectrum disorders are being diagnosed. It's important that this be as widely-read as possible as soon as possible because the next time prevalence estimates come out, I will be shown right or wrong. I want my theory and prediction out there now so that I can show that I predicted a surprising result before it happened. While many people are too irrational to be surprised when they see this result even though they have predicted the opposite, I hope that rationalists will come to believe my position when it is proven right. I hope that everyone disinterested will come to believe this. The reason why I hope this is because I want them to be more likely to listen to me when I make statements about human rights as they apply to people with autism spectrum disorders. It is important that society change its attitudes toward such individuals.

Please help me by upvoting me to two karma so I can post in the discussion section.

comment by AdeleneDawner · 2012-11-16T08:25:40.344Z · score: 4 (4 votes) · LW · GW

I'm not sure you're right that we won't see any increase in autism prevalence - there are still some groups (girls, racial minorities, poor people) that are "underserved" when it comes to diagnosis, so we could see an increase if that changes, even if your underlying theory is correct. Still upvoted, tho.

comment by OneLonePrediction · 2012-11-16T18:09:48.507Z · score: 0 (0 votes) · LW · GW

Thank you. Yes, this is possible, but the increase in those groups would end up exactly matching the decrease in adult rates from learning coping skills so well as to be undiagnosable and that seems unlikely to me. Why shouldn't one be vastly more or less?

Anyway, I'm going to make the article now. If you want to continue this, we can do it there.

comment by nancyhua · 2012-10-21T02:48:09.756Z · score: 8 (8 votes) · LW · GW

I'm Nancy Hua. I was MIT 2007 and worked in NYC and Chicago in automated trading for 5 years after graduating with BS's in Math with CS (18C) and in Writing (21W).

Currently I am working on a startup in the technology space. We have funding and I am considering hiring someone.

I started reading Eliezer's posts on Overcoming Bias. In 2011, I met Eliezer, Robin Hanson, and a bunch of the NYC Lesswrongers. After years of passive consumption, very recently I started posting on lesswrong after meeting some lesswrongers at the 2012 Singularity Summit and events leading up to it, and after reading HPMOR and wanting to talk about it. I tried getting my normal friends to read it but found that making new friends who have already read it is more efficient.

Many of the writings regarding overcoming our biases and asking more questions appeal to me because I see many places where we could make better decisions. It's amazing how far we've come without being all that intelligent or deliberate, but I wonder how much more slack we have before our bad decisions prevent us from reaching the stars. I want to make more optimal decisions in my own life because I need every edge I can get to achieve some of my goals! Plus I believe understanding and accepting reality is important to our success, as individuals and as a species.

comment by johnsonmx · 2012-09-08T20:23:02.835Z · score: 8 (8 votes) · LW · GW

I'm Mike Johnson. I'd estimate I come across a reference to LW from trustworthy sources every couple of weeks, and after working my way through the sequences it feels like the good outweighs the bad and it's worth investing time into.

My background is in philosophy, evolution, and neural nets for market prediction; I presently write, consult, and am in an early-stage tech startup. Perhaps my high-water mark in community exposure has been a critique of the word Transhumanist at Accelerating Future. In the following years, my experience has been more mixed, but I appreciate the topics and tools being developed even if the community seems a tad insular. If I had to wear some established thinkers on my sleeve I'd choose Paul Graham, Lawrence Lessig, Steve Sailer, Gregory Cochran, Roy Baumeister, and Peter Thiel. (I originally had a comment here about having an irrational attraction toward humility, but on second thought, that might rule out Gregory "If I have seen farther than others, it's because I'm knee-deep in dwarves" Cochran… Hmm.)

Cards-on-the-table, it's my impression that

(1) Lesswrong and SIAI are doing cool things that aren't being done anywhere else (this is not faint praise);

(2) The basic problem of FAI as stated by SIAI is genuine;

(3) SIAI is a lightning rod for trolls and cranks, which is really detrimental to the organization (the metaphor of autoimmune disease comes to mind) and seems partly its own fault;

(4) Much of the work being done by SIAI and LW will turn out to be a dead-end. Granted, this is true everywhere, but in particular I'm worried that axiomatic approaches to verifiable friendliness will prove brittle and inapplicable (I do not currently have an alternative);

(5) SIAI has an insufficient appreciation for realpolitik;

(6) SIAI and LW seem to have a certain distaste for research on biologically-inspired AGI, due in parts to safety concerns, an organizational lack of expertise in the area, and (in my view) ontological/metaphysical preference. I believe this distaste is overly limiting and also leads to incorrect conclusions.

Many of these impressions may be wrong. I aim to explore the site, learn, change my mind if I'm wrong, and hopefully contribute. I appreciate the opportunity, and I hope my unvarnished thoughts here haven't soured my welcome. Hello!

comment by TheOtherDave · 2012-09-08T21:22:24.889Z · score: 4 (4 votes) · LW · GW

FWIW, I find your unvarnished thoughts, and the cogency with which you articulate them, refreshing. (The thoughts aren't especially novel, but the cogency is.)

In particular, I'm interested in your thoughts on what benefits a greater focus on biologically inspired AGI might provide that a distaste for it would limit LW from concluding/achieving.

comment by johnsonmx · 2012-09-09T07:14:29.730Z · score: 0 (0 votes) · LW · GW

Thank you.

I'd frame why I think biology matters in FAI research in terms of research applicability and toolbox dividends.

On the first reason--- applicability--- I think more research focus on biologically-inspired AGI would make a great deal of sense because the first AGI might be a biologically-inspired black box, and axiom-based FAI approaches may not particularly apply to such. I realize I'm (probably annoyingly) retreading old ground here with regard to which method will/should win the AGI race, but SIAI's assumptions seem to run counter to the assumptions of the greater community of AGI researchers, and it's not obvious to me the focus on math and axiology isn't a simple case of SIAI's personnel backgrounds being stacked that way. 'If all you have is a hammer,' etc. (I should reiterate that I don't have any alternatives to offer here and am grateful for all FAI research.)

The second reason I think biology matters in FAI research--- toolbox dividends--- might take a little bit more unpacking. (Forgive me some imprecision, this is a complex topic.)

I think it's probable that anything complex enough to deserve the term AGI would have something akin to qualia/emotions, unless it was specifically designed not to. (Corollary: we don't know enough about what Chalmers calls "psychophysical laws" to design something that lacks qualia/emotions.) I think it's quite possible that an AGI's emotions, if we did not control for their effects, could produce complex feedback which would influence its behavior in unplanned ways (though perfectly consistent with / determined by its programming/circuitry). I'm not arguing for a ghost in the machine, just that the assumptions which allow us to ignore what an AGI 'feels' when modeling its behavior may prove to be leaky abstractions in the face of the complexity of real AGI.

Axiological approaches to FAI don't seem to concern themselves with psychophysical laws (modeling what an AGI 'feels'), whereas such modeling seems a core tool for biological approaches to FAI. I find myself thinking being able to model what an AGI 'feels' will be critically important for FAI research, even if it's axiom/math-based, because we'll be operating at levels of complexity where the abstractions we use to ignore this stuff can't help but leak. (There are other toolbox-based arguments for bringing biology into FAI research which are a lot simpler than this one, but this is on the top of my list.)

comment by TheOtherDave · 2012-09-09T16:51:18.649Z · score: 2 (2 votes) · LW · GW

(nods)

Regarding your first point... as I understand it, SI (it no longer refers to itself as SIAI, incidentally) rejects as too dangerous to pursue any approach (biologically inspired or otherwise) that leads to a black-box AGI, because a black-box AGI will not constrain its subsequent behavior in ways that preserve the things we value except by unlikely chance. The idea is that we can get safety only by designing safety considerations into the system from the ground up; if we give up control of that design, we give up the ability to design a safe system.

Regarding your second point... there isn't any assumption that AGIs won't feel stuff, or that its feelings can be ignored. (Nor even that they are mere "feelings" rather than genuine feelings.) Granted, Yudkowsky talks here about going out of his way to ensure something like that, but he treats this as an additional design constraint that adequate engineering knowledge will enable us to implement, not as some kind of natural default or simplifying assumption. (Also, I haven't seen any indication that this essay has particularly informed SI's subsequent research. Those more closely -- which is to say, at all -- affiliated with SI might choose to correct me here.) And there certainly isn't an expectation that its behavior will be predictable at any kind of granular level.

What there is is the expectation that a FAI will be designed such that its unpredictable behaviors (including feelings, if it has feelings) will never act against its values, and such that its values won't change over time.

So, maybe you're right that explicitly modeling what an AGI feels (again, no scare-quotes needed or desired) is critically important to the process of AGI design. Or maybe not. If it turns out to be, I expect that SI is as willing to approach design that way as any other. (Which should not be taken as an expression of confidence in their actual ability to design an AGI, Friendly or otherwise.)

Personally, I find it unlikely that such explicit modeling will be useful, let alone necessary. I expect that AGI feelings will be a natural consequence of more fundamental aspects of the AGI's design interacting with its environment, and that explicitly modeling those feelings will be no more necessary than explicitly modeling how it solves a math problem. A sufficiently powerful AGI will develop strategies for solving math problems, and will develop feelings, unless specifically designed not to. I expect that both its problem-solving strategies and its feelings will surprise us.

But I could be wrong.

comment by johnsonmx · 2012-09-09T18:51:03.733Z · score: 0 (0 votes) · LW · GW

I definitely agree with your first paragraph (and thanks for the tip on SIAI vs SI). The only caveat is if evolved/brain-based/black-box AGI is several orders of magnitude easier to create than an AGI with a more modular architecture where SI's safety research can apply, that's a big problem.

On the second point, what you say makes sense. Particularly, AGI feelings haven't been completely ignored at LW; if they prove important, SI doesn't have anything against incorporating them into safety research; and AGI feelings may not be material to AGI behavior anyway.

However, I still do think that an ability to tell what feelings an AGI is experiencing-- or more generally, being able to look at any physical process and being able to derive what emotions/qualia are associated with it-- will be critical. I call this a "qualia translation function".

Leaving aside the ethical imperatives to create such a function (which I do find significant-- the suffering of not-quite-good-enough-to-be-sane AGI prototypes will probably be massive as we move forward, and it behooves us to know when we're causing pain), I'm quite concerned about leaky reward signal abstractions.

I imagine a hugely-complex AGI executing some hugely-complex decision process. The decision code has been checked by Very Smart People and it looks solid. However, it just so happens that whenever it creates a cat it (internally, privately) feels the equivalent of an orgasm. Will that influence/leak into its behavior? Not if it's coded perfectly. However, if something of its complexity was created by humans, I think the chance of it being coded perfectly is Vanishingly small. We might end up with more cats than we bargained for. Our models of the safety and stability dynamic of an AGI should probably take its emotions/qualia into account. So I think all FAI programmes really would benefit from such a "qualia translation function".

comment by TheOtherDave · 2012-09-09T20:11:50.819Z · score: 2 (2 votes) · LW · GW

I agree that, in order for me to behave ethically with respect to the AGI, I need to know whether the AGI is experiencing various morally relevant states, such as pain or fear or joy or what-have-you. And, as you say, this is also true about other physical systems besides AGIs; if monkeys or dolphins or dogs or mice or bacteria or thermostats have morally relevant states, then in order to behave ethically it's important to know that as well. (It may also be relevant for non-physical systems.)

I'm a little wary of referring to those morally relevant states as "qualia" because that term gets used by so many different people in so many different ways, but I suppose labels don't matter much... we can call them that for this discussion if you wish, as long as we stay clear about what the label refers to.

Leaving that aside... so, OK. We have a complex AGI with a variety of internal structures that affect its behavior in various ways. One of those structures is such that creating a cat gives the AGI an orgasm, which it finds rewarding. It wants orgasms, and therefore it wants to create cats. Which we didn't expect.

So, OK. If the AGI is designed such that it creates more cats in this situation than it ought to (regardless of our expectations), that's a problem. 100% agreed.

But it's the same problem whether the root cause lies within the AGI's emotions, or its reasoning, or its qualia, or its ability to predict the results of creating cats, or its perceptions, or any other aspect of its cognition.

You seem to be arguing that it's a special problem if the failure is due to emotions or qualia or feelings?

I'm not sure why.

I can imagine believing that if I were overgeneralizing from my personal experience. When it comes to my own psyche, my emotions and feelings are a lot more mysterious than my surface-level reasoning, so it's easy for me to infer some kind of intrinsic mysteriousness to emotions and feelings that reasoning lacks. But I reject that overgeneralization. Emotions are just another cognitive process. If reliably engineering cognitive processes is something we can learn to do, then we can reliably engineer emotions. If it isn't something we can learn to do, then we can't reliably engineer emotions... but we can't reliably engineer AGI in general either. I don't think there's anything especially mysterious about emotions, relative to the mysteriousness of cognitive processes in general.

So, if your reasons for believing that are similar to the ones I'm speculating here, I simply disagree. If you have other reasons, I'm interested in what they are.

comment by johnsonmx · 2012-09-09T20:37:36.291Z · score: 0 (0 votes) · LW · GW

I don't think an AGI failing to behave in the anticipated manner due to its qualia* (orgasms during cat creation, in this case) is a special or mysterious problem, one that must be treated differently than errors in its reasoning, prediction ability, perception, or any aspect of its cognition. On second thought, I do think it's different: it actually seems less important than errors in any of those systems. (And if an AGI is Provably Safe, it's safe-- we need only worry about its qualia from an ethical perspective.) My original comment here is (I believe) fairly mild: I do think the issue of qualia will involve a practical class of problems for FAI, and knowing how to frame and address them could benefit from more cross-pollination from more biology-focused theorists such as Chalmers and Tononi. And somewhat more boldly, a "qualia translation function" would be of use to all FAI projects.

*I share your qualms about the word, but there really are few alternatives with less baggage, unfortunately.

comment by TheOtherDave · 2012-09-09T23:22:56.206Z · score: 1 (1 votes) · LW · GW

Ah, I see. Yeah, agreed that what we are calling qualia here (not to be confused with its usage elsewhere) underlie a class of practical problems. And what you're calling a qualia translation function (which is related to what EY called a non-person predicate elsewhere, though finer-grained) is potentially useful for a number of reasons.

comment by Kawoomba · 2012-09-09T10:13:41.622Z · score: 2 (2 votes) · LW · GW

because we'll be operating at levels of complexity where the abstractions we use to ignore this stuff can't help but leak.

If that were the case (and it may very well be), there goes provably friendly AI, for to guarantee a property under all circumstances, it must be upheld from the bottom layer upwards.

comment by johnsonmx · 2012-09-09T21:26:09.465Z · score: 0 (0 votes) · LW · GW

I think it's possible that any leaky abstraction used in designing FAI might doom the enterprise. But if that's not true, we can use this "qualia translation function" to make leaky abstractions in a FAI context a tiny bit safer(?).

E.g., if we're designing an AGI with a reward signal, my intuition is we should either (1) align our reward signal with actual pleasurable qualia (so if our abstractions leak it matters less, since the AGI is drawn to maximize what we want it to maximize anyway); (2) implement the AGI in an architecture/substrate which produces as little emotional qualia as possible, so there's little incentive for behavior to drift.

My thoughts here are terribly laden with assumptions and could be complete crap. Just thinking out loud.

comment by hairyfigment · 2012-09-09T18:38:52.487Z · score: 0 (0 votes) · LW · GW

more research focus on biologically-inspired AGI

As a layman I don't have a clear picture of how to start doing that. How would it differ from this? Looks like you can find the paper in question here (WARNING: out-of-date 2002 content).

comment by johnsonmx · 2012-09-09T20:40:37.989Z · score: 0 (0 votes) · LW · GW

I'd say nobody does! But a little less glibly, I personally think the most productive strategy in biologically-inspired AGI would be to focus on tools that help quantify the unquantified. There are substantial side-benefits to such a focus on tools: what you make can be of shorter-term practical significance, and you can test your assumptions.

Chalmers and Tononi have done some interesting work, and Tononi's work has also had real-world uses. I don't see Tononi's work as immediately applicable to FAI research but I think it'll evolve into something that will apply.

It's my hope that the (hypothetical, but clearly possible) "qualia translation function" I mention above could be a tool that FAI researchers could use and benefit from regardless of their particular architecture.

comment by AllanGering · 2012-07-19T04:03:44.504Z · score: 8 (8 votes) · LW · GW

Poll: how old are you?

Newcomers only, please.

How polls work: the comments to this post are the possible answers. Upvote the one that describes your age. Then downvote the "Karma sink" comment (if you don't see it, it is the collapsed one), so that I don't get undeserved karma. Do not make comments to this post, as it would make the poll options hard to find; use the "Discussion" comment instead.

comment by AllanGering · 2012-07-19T04:04:57.615Z · score: 18 (22 votes) · LW · GW

24-29

comment by AllanGering · 2012-07-19T04:04:48.939Z · score: 14 (14 votes) · LW · GW

18-23

comment by AllanGering · 2012-07-19T04:05:16.815Z · score: 5 (9 votes) · LW · GW

30-44

comment by AllanGering · 2012-07-19T04:04:34.926Z · score: 3 (3 votes) · LW · GW

<18

comment by AllanGering · 2012-07-19T04:05:23.924Z · score: 1 (1 votes) · LW · GW

45 or older

comment by AllanGering · 2012-07-19T04:04:13.708Z · score: 0 (0 votes) · LW · GW

Discussion

comment by VNKKET · 2012-07-20T02:18:24.240Z · score: 2 (2 votes) · LW · GW

Upvoted for explaining how polls work.

comment by AllanGering · 2012-07-19T04:04:04.386Z · score: -32 (36 votes) · LW · GW

Karma sink

comment by notsonewuser · 2013-03-15T15:49:36.712Z · score: 7 (7 votes) · LW · GW

Hi. I discovered LessWrong recently, but not that recently. I enjoy Yudkowsky's writings and the discussions here. I hope to contribute something useful to LessWrong, someday, but as of right now my insights are a few levels below those of others in this community. I plan on regularly visiting the LessWrong Study Hall.

Also, is it "LessWrong" or "Less Wrong"?

comment by Kawoomba · 2013-03-15T22:01:21.754Z · score: 4 (4 votes) · LW · GW

Also, is it "LessWrong" or "Less Wrong"?

You'll fit in great.

comment by TheOtherDave · 2013-03-15T18:19:03.459Z · score: 2 (2 votes) · LW · GW

I endorse "Less Wrong" as a standalone phrase but "LessWrong" as an affixed phrase (e.g., "LessWrongian").

comment by [deleted] · 2013-03-15T17:41:47.381Z · score: 1 (1 votes) · LW · GW

Also, is it "LessWrong" or "Less Wrong"?

Good question... :-)

comment by [deleted] · 2013-03-15T19:08:05.530Z · score: 1 (1 votes) · LW · GW

The front page and the About page consistently use the one with the space... except in the logo. Therefore I'm going to conclude that the change in typeface colour in the logo counts as a space and the ‘official’ name is the spaced one.

comment by notsonewuser · 2013-03-15T21:25:10.432Z · score: 1 (1 votes) · LW · GW

I went through the same reasoning pattern as you right before reading this comment. So I think I'll stick with "Less Wrong", for the time being.

comment by beoShaffer · 2013-03-15T18:21:54.454Z · score: 0 (0 votes) · LW · GW

Either is acceptable, though I'd say "Less Wrong" is slightly better.

comment by pinyaka · 2013-02-19T00:22:31.053Z · score: 7 (7 votes) · LW · GW

I am Pinyaka. I've been lurking a bit around this site for several months. I don't remember how I found it (probably a linked comment from Reddit), but stuck around for the main sequences. I've worked my way through two of them thanks to the epub compilations and am currently struggling to figure out how to prioritize and better put into practice the things that I learn from the site and related readings.

I hope to have some positive social interactions with the people here. I find that I become fairly unhappy without some kind of regular socialization in a largish group, but it's difficult to find groups whose core values are similar to mine. In fact, after leaving a quasi-religious group last year it occurred to me that I've always just fallen in with whatever group was most convenient and not too immediately repellant. This marks the first time I've tried to think about what I value and then seek out a group of like minded individuals.

I also hope to find a consistent stream of ideas for improving myself that are backed by reason and science. I recognize that removing (or at least learning to account for) my own biases will help me build a more accurate picture of the universe I live in and how I function within that framework. Along with that, I hope to develop the ability to formulate and pursue goals to maximize my enjoyment of life (I've been reading a bunch of lukeprog's anti-akrasia posts recently, so following through on goals is on my mind currently).

I am excited to be here.

comment by beoShaffer · 2013-02-19T02:32:40.087Z · score: 2 (2 votes) · LW · GW

Hi Pinyaka!

I find that I become fairly unhappy without some kind of regular socialization in a largish group, but it's difficult to find groups whose core values are similar to mine. In fact, after leaving a quasi-religious group last year it occurred to me that I've always just fallen in with whatever group was most convenient and not too immediately repellant.

Semi-seriously, have you considered moving?

comment by pinyaka · 2013-02-20T13:52:27.518Z · score: 0 (0 votes) · LW · GW

I'm sort of averse to moving at the moment, since I'm in the middle of getting my doctorate, but I'll likely have to move once I finish that. Do you have specific suggestions? I have always picked where I live based on employment availability and how much I like the city from preliminary visits.

comment by beoShaffer · 2013-02-20T17:30:24.247Z · score: 0 (0 votes) · LW · GW

I have always picked where I live based on employment availability and how much I like the city from preliminary visits.

In that case it's going to strongly depend on your field, and if you're going into academia specifically you likely won't have much of a choice. That said, NY and the Bay Area are both good places for finding rationality support.

comment by Nisan · 2013-02-19T00:53:21.318Z · score: 2 (2 votes) · LW · GW

Welcome! You might enjoy it if you show up to a meetup as well.

comment by pinyaka · 2013-02-19T01:17:13.875Z · score: 0 (0 votes) · LW · GW

Thank you. I haven't seen one in Iowa yet, but I do keep an eye out for them.

comment by John_Maxwell (John_Maxwell_IV) · 2013-02-19T00:45:26.288Z · score: 0 (0 votes) · LW · GW

Welcome!

comment by shaih · 2013-02-17T21:03:29.994Z · score: 7 (7 votes) · LW · GW

I'm Shai Horowitz. I'm currently a dual physics and mathematics major at Rutgers University. I first learned of the concepts of "Bayesian" and "rationality" through HPMOR, and from there I took it upon myself to read the Overcoming Bias posts, an extremely long endeavor which I have almost but not yet finished. Through conversations with others in my dorm at Rutgers I have realized just how much this reading has changed my thought process: it allowed me to hone in on my own thoughts that I could see were still biased and go about fixing them. By the same reasoning it became apparent to me that it would be largely beneficial to become an active part of the LessWrong community, to sharpen my own skills as a rationalist while helping others along the way.

I embrace rationality for the very specific reason that I wish to be a physicist, and I realize that in trying to do so I could (as Eliezer puts it) "shoot off my own foot" while doing things that conventional science allows. In the process of learning this I stalled out for months at a time and even became depressed for a while, as I was stabbing my weakest points with the metaphorical knife. I can look back and laugh now at the fact that a college student was making incredibly bad decisions to get over the pain of fully embracing the second law of thermodynamics and its implications, which seems to me a sign of progress. I don't think I will soon have to face a fact as daunting as that one, and knowing that I was able to accept even that law, I should be able to accept other truths much more easily.

That said, even though hard science is my primary purpose for learning rationality, I am a bit of a self-proclaimed polymath and have recently spent time learning more about psychology and cognition than simply the cognitive biases I need to be wary of. I just finished the book "Influence: Science and Practice", which I've heard Eliezer mention multiple times, and very recently (as in this week) my interests have turned to pushing standard ethical theories to their limits, so as to truly understand how to make the world a better place and to unravel the black box that is the word "better" itself.

I conclude with: I would love to talk with anyone, experienced or new to rationality, about pretty much any topic, and would very much like it if someone would message me. Furthermore, if anyone reading this goes to Rutgers University or is around the area, a meetup over coffee or something similar would make my day.

comment by Qiaochu_Yuan · 2013-02-17T22:19:36.037Z · score: 3 (3 votes) · LW · GW

Welcome! I am really curious what you mean by

making incredibly bad decisions to get over the pain of fully embracing the second law of thermodynamics and its implications

comment by shaih · 2013-02-17T22:29:19.369Z · score: 0 (2 votes) · LW · GW

My thoughts on its implications run along these lines: even if cryonics works, or the human race finds some other way of indefinitely increasing the human life span, the second law of thermodynamics would eventually force this prolonged life to be unsustainable. That, combined with adjusting my probability estimates of an afterlife, made me face the unthinkable fact that there will be a day on which I cease to exist regardless of what I do, and I am helpless to stop it. While I was getting over the shock of this I would have sleepless nights, which turned into days when I was too tired to be coherent, which turned into missed classes, which turned into falling grades. In summation, I allowed a truth which would not come to pass for an unthinkable amount of time to change how I acted in the present in a way it did not warrant (being depressed or happy, or any action now, would not change that future).

comment by BenGilbert · 2013-02-04T12:57:20.356Z · score: 7 (7 votes) · LW · GW

Hello,

I'm Ben. I'm here mainly because I'm interested in effective altruism. I think that tracing through the consequences of one's actions is a complex task and I'm interested in setting out some ideas here in the hope that people can improve my reasoning. For example, I've a post on whether ethical investment is effective, which I'd like to put up once I've got a couple of points of karma.

I studied philosophy and theology, and worked for a while in finance. Now, I'm trying to work out how to increase the positive impact I have, which obviously demands answers about both what 'positive impact' means, and what the consequences are of the choices I make. I think these are far from simple to work out; I hope just to establish a few points with which I'm satisfied enough. I think that exposing ideas and arguments to thoughtful people who might want to criticise or expand them could help me a lot. And this seems a good place for doing that!

comment by capctr · 2012-11-08T06:39:39.264Z · score: 7 (7 votes) · LW · GW

I am a 43-year-old man who loves to read, and stumbling across HPMOR was an eye-opener for me; it resonated profoundly within. My wife is not only the Queen of Critical Thinking and logic, she is also the breadwinner. Me? I raise the children (three girls), take care of the house, and function as a housewife/gourmet chef/personal trainer/massage therapist for my wife, on top of being my daughters' personal servant. This is largely due to my wife's towering intellect, her overwhelming competence, my struggles with ADHD, and the fact that she makes huge amounts of money. Me, I just age almost supernaturally slowly (at 43, I still pass for thirty, possibly due to an obsession with fitness), am above-average handsome, passingly charming, have a good singing voice, and am incapable of winning a logical argument, as the more stressed I get, the faster my IQ shrinks. I am taken about as seriously by my wife as Harry probably was by his father as a four-year-old. I am looking to change that. I am hoping that if I learn enough about Less Wrong, I just might learn how to put all the books I compulsively read to good use, and maybe learn how to... change.

comment by MileyCyrus · 2012-11-08T07:13:23.317Z · score: 1 (1 votes) · LW · GW

I'm actually incredibly interested in your story, if you don't mind. What is it like dating a woman who is smarter than you are? What do you think attracted her to you? (I would love to pair-bond with a genius woman, but most of them only want to pair-bond with other geniuses.)

comment by Alicorn · 2012-11-08T07:07:40.675Z · score: 0 (2 votes) · LW · GW

housewife

"House spouse" works as a gender neutral term, and it rhymes!

comment by MugaSofer · 2012-11-08T11:04:39.474Z · score: 1 (1 votes) · LW · GW

it rhymes!

This is not a good thing.

comment by [deleted] · 2012-11-08T05:13:48.651Z · score: 7 (7 votes) · LW · GW

Hello rationalists-in-training of the internet. My name is Joseph Gnehm, I am 15 and I live in Montreal. Discovering LessWrong had a profound effect on me, shedding light on the way I study thought processes and helping me with a more rational approach.

comment by Baruta07 · 2012-11-06T18:01:18.342Z · score: 7 (7 votes) · LW · GW

I am Alexander Baruta, a high-school student currently in 11th grade (taking grade 12 math and biology). I originally found the site through Eliezer's blog. I am (technically) part of the school's robotics team (someone has to stop them from creating unworkable plans), undergoing Microsoft IT certification, and going through all of the psychology courses in as little time as possible (I'm currently enrolled in a self-directed learning school) so I can get to the stuff I don't already know. My mind is fact-oriented (I can remember the weirdest things with perfect clarity after only hearing them once), but I have trouble combining that recall with my English classes, and I have trouble remembering names. I am informally studying formal logic, programming, game theory, and probability theory (don't you hate it when the curriculum changes?). (I also have an unusual fondness for brackets, if you couldn't tell by now.)

I also feel that any discussion of me that fails to mention my love of SF/fantasy should be shot dead. I caught onto reading at a very, very early age; by the time I was in 5th grade I was reading at a 12th-grade comprehension level and tackling Asimov, Niven, Pohl, Piers Anthony, Stephen R. Donaldson, Roger Zelazny, and most other good authors.

comment by Kawoomba · 2012-11-06T18:11:08.936Z · score: 5 (5 votes) · LW · GW

(I also have a unusual fondness for brackets, if you couldn't tell by now)

Lisp ith a theriouth condition, once you go full Lisp, you'll never (((((((((((((... come back)?n).

comment by Baruta07 · 2012-11-06T20:49:45.378Z · score: 0 (0 votes) · LW · GW

I was laughing so hard when I saw this.

comment by beoShaffer · 2012-11-08T06:50:58.991Z · score: 0 (0 votes) · LW · GW

How do you feel about Heinlein?

comment by Baruta07 · 2012-11-09T16:06:05.154Z · score: 1 (1 votes) · LW · GW

He's a decent author, but I am having trouble finding anything of significance by him in Calgary.

comment by beoShaffer · 2012-11-09T16:56:27.954Z · score: 0 (0 votes) · LW · GW

Too bad.

comment by LadyStardust · 2012-11-06T01:02:42.284Z · score: 7 (7 votes) · LW · GW

Hey there! I'm a 19-year old Canadian girl with a love for science, science fiction, cartoons, RPGs, Wayne Rowley, learning, reading, music, humour, and a few thousand other things.

Like many, I found this site via HPMOR. As a long-time fan of both science and Harry Potter, I was hooked from chapter one. It's hard to apply scientific analysis to a fictional universe while still keeping a sense of humour, and HPMOR executes this brilliantly. My only complaint (all apologies to Mr. Yudkowsky, though I doubt he'll ever read this) is that Harry comes off as rather Sue-ish. I wanted more, so I came here and found yet more excellent writings. The story about the Pebblesorters is my personal favourite.

I'm mad about music. Queen, Rush, Black Sabbath, and Bowie are some of my favourites. I have a Telecaster, which I use mostly to play blues. God, I love the blues. But I digress...

Though I'm merely a high school graduate looking for a part-time job, I'm really passionate about biology. I'm the kind of person who reads about sodium-potassium pumps not because it's on the upcoming quiz, but because it indulges my curiosity about how humans and other lifeforms work. (Don't get me started on speculative xenobiology!)

I've lurked on this site for about 7 months now, and I really hope that I'll be accepted here in spite of my laconic, idiosyncratic, comma-ridden ramblings. Thank you.

comment by tilde · 2012-10-07T20:27:41.222Z · score: 7 (7 votes) · LW · GW

I'm a 20-year-old physics student from Finland whose hobbies include tabletop roleplaying games and the Natalie Reed/Zinnia Jones-style intersection of rationality and social justice.

I've been sporadically lurking on LessWrong for the last 2-3 years and have read most of the sequences. My primary goal is to contribute useful research to either SI or FHI, or, failing that, a significant part of my income. I've contacted the X-risks Reduction Career Network as well.

I consider this an achievable goal, as my general intelligence is extremely high: I won a national-level mathematics competition seven years ago despite receiving effectively no training in a small backwards town. With dedication and training I believe I could reach the level of the greats.

However, my biggest challenge currently is Getting Things Done; apart from fun distractions, committing significant effort to anything is nigh impossible. This could well be caused by clinical depression (without the mood effects), and I'm currently on venlafaxine in an attempt to improve my capability to actually do something useful, but so far (about 3 months in) it hasn't had the desired effect. Assistance/advice would be appreciated.

comment by blueowl · 2012-10-05T21:39:25.747Z · score: 7 (7 votes) · LW · GW

Hi everyone! Another longtime lurker here. I found LW through Yvain's blog (Emily and Control FTW!). I'm not really into cryonics or FAI, but the sequences are awesome, and I enjoy the occasional instrumental rationality post. I decided to become slightly more active here, and this thread seemed like a good place to start, even if a bit old.

comment by Jess_Whittlestone · 2012-10-05T10:38:12.852Z · score: 7 (7 votes) · LW · GW

Hi, I'm Jess. I've just graduated from Oxford with a masters degree in Mathematics and Philosophy. I'm trying to decide what to do next with my life, and graduate study in cognitive science is currently top of my list. What I'm really interested in is the application of research in human rationality, decision making and its limitations to wider issues in society, public policy etc.

I'm taking some time to challenge my intuition that I want to go into research, though, as I'm slightly concerned that I'm taking the most obvious option not knowing what else to do. My methods for doing this at the moment are a) trying to think about reasons it might not be the best option (a "consider the opposite" type approach) and b) initiating conversations with as many people as possible doing things that interest me, and getting some work experience in different areas this year, to broaden my limited perspective. Any better/additional suggestions are more than welcome!

I'm about to start an internship with 80000 hours, doing a project on the role of cognitive bias in career choice. The aim is to collect together the existing research on biases and mitigation techniques and apply it in a practical and accessible way, identifying the biases that most commonly affect career choice and providing useful strategies for avoiding them. I was wondering if anyone here has a summary of the existing literature on cognitive bias mitigation, or any recommendations of particularly useful/important research? Equally if anyone has spent much time thinking about this, I'd love to hear about it.

comment by beoShaffer · 2012-10-05T21:55:33.091Z · score: 1 (1 votes) · LW · GW

I don't have a full summary on-hand, but if you just want to jumpstart your own search you might want to read Lukeprogs article on efficient scholarship and look into the keyword "debiasing".

comment by therufs · 2012-09-29T02:50:26.101Z · score: 7 (7 votes) · LW · GW

I saw this site on evand's computer one day, so of course then had to look it up for myself. In my free time, I pester him with LW-y questions.

By way of background, I graduated from a trying-to-be-progressive-but-sort-of-hung-up-on-orthodoxy quasi-Protestant seminary in spring 2010. Primary discernible effects of this schooling (i.e., I would assign these a high probability of relevance on LW) include:

  • deeply suspicious of pretty much everything

  • a predisposition to enter a Hulk-smash rage at the faintest whiff of systematic injustice or oppression

  • high value on beauty, imagination*, and inclusivity

* Part of my motivation to involve myself in rationalism is a hope that I can learn ways to imagine better (more usefully, maybe.)

I like learning more about how brains work (/don't work). Also about communities. Also about things like why people say and do what they say and do, both in terms of conditioning/unconscious motivation and conscious decision. And and and. I will start keeping track on a wiki page perhaps.

I cherish ambitions of being able to contribute to a discussion one day! (If anyone has any ideas/relevant information about getting over not wanting to look stupid, please do share ...)

Hi!

comment by Epiphany · 2012-09-29T05:00:06.133Z · score: 1 (1 votes) · LW · GW

(If anyone has any ideas/relevant information about getting over not wanting to look stupid, please do share ...)

Don't worry, you can't possibly look worse than I did.

Part of my motivation to involve myself in rationalism is a hope that I can learn ways to imagine better (more usefully, maybe.)

I wanted to be around people who can point out my flaws and argue with me effectively and tell me things I didn't know. I wanted to be held to higher standards, to actually have to work hard to earn respect. I'm not getting that in other areas of my life. Here, I get it. (: I am so grateful that I found this. People will challenge you and make you work, and find your flaws, but that's a blessing. Embrace it.

comment by [deleted] · 2012-09-29T03:15:00.150Z · score: 1 (1 votes) · LW · GW

Welcome! You sound like just our type. Glad to have you with us.

If anyone has any ideas/relevant information about getting over not wanting to look stupid, please do share ...

Lurk, read the archives, brazenly post things you are quite sure of. Remember that downvotes don't mean we hate you. I dunno. I only get the fear after I post so it's not a problem for me.

comment by robertoalamino · 2012-08-23T18:51:02.339Z · score: 7 (7 votes) · LW · GW

Hi.

My name is Roberto and I'm a Brazilian physicist working in the UK. Even working in academia obviously does not guarantee an environment where rational/unbiased/critical discussions can happen. Science in universities is not always produced by thinking critically about a subject, as many papers are purely technical in nature. Also, free thinking is as regulated in academia as it is everywhere else, in many respects.

That said, I have been reading and browsing Less Wrong for some time and think that such discussions can indeed happen here. In addition, given recent developments all around the world and how people react to them, I felt the urge to discuss them in a way which is not censored, especially by the other people in the discussion. It promises to be relaxing anyway.

I'm sure I'm gonna have a nice time.

comment by Risto_Saarelma · 2012-08-24T04:02:52.310Z · score: 0 (0 votes) · LW · GW

My name is Roberto and I'm a Brazilian physicist working in the UK.

Do you get to hear about the Richard Feynman story often when you introduce yourself as a Brazilian physicist?

comment by robertoalamino · 2012-08-24T09:22:27.503Z · score: 5 (5 votes) · LW · GW

It's actually the first time I've read it. I would be very happy to say that the situation improved over there, but that might not be true in general. Unfortunately, the way I see it, it's the complete opposite: the situation became worse everywhere else. Apparently, science education all around the world is becoming more distant from what Feynman would have liked. Someone once told me that "Science is not about knowledge anymore, it's about production." Feynman's description of his experience seems to be all about that. I refuse to believe it, but as the world embraces this philosophy, science education becomes less and less related to really thinking about any subject.

comment by Risto_Saarelma · 2012-08-24T12:41:03.411Z · score: 1 (1 votes) · LW · GW

At least nowadays, unlike in 1950s Brazil, Feynman's stuff is a Google search away for just about any undergraduate student. Now they just need to somehow figure out they might want to search for him...

comment by [deleted] · 2012-08-24T00:46:59.687Z · score: 0 (0 votes) · LW · GW

Science production in universities not always are carried out by thinking critically about a subject as many papers can be purely technical in their nature.

I've found that theoretical physicists usually give me the vibe EY describes here, but experimental physicists usually don't.

comment by robertoalamino · 2012-08-24T09:28:34.409Z · score: 1 (1 votes) · LW · GW

That's more a question of taste, and there is nothing wrong with that. I also prefer theoretical physics, although I must admit that it's very exciting to be in a lab, as long as it is not me collecting the data or fixing the equipment.

My point in the sentence you quoted is that you can perfectly well carry out some "tasks" without thinking too deeply about them, even in physics, be it theoretical, experimental, or computational. That is something I think is really missing across the whole spectrum of education, not only in science and not only in universities.

comment by [deleted] · 2012-08-03T13:30:14.943Z · score: 7 (7 votes) · LW · GW

Hi everyone,

I'm Leisha. I originally came across this site quite a while ago when I read the Explain/Worship/Ignore analogy here. I was looking for insight into my own cognitive processes; to skip the unimportant details, I ended up reading a whole lot about the concept of infinity once I realized that contemplating the idea gave me the same feeling of Worship that religion used to. It still does, to some extent, but at least I'm better-informed and can Explain the sheer scale of what I'm thinking of a little better.

I didn't return here until yesterday, when I was researching the concept of rational thought (by way of cognitive processing, Ayn Rand, and Vulcans!) For background, I'm a Myers-Briggs F-type (INFJ) who has come to realize that while emotion has its value, it's certainly not to be relied upon for making sound judgements. What I'm looking to do, essentially, is to repair the faulty processes within my own mind. I've spent a lot of time reaching invalid conclusions because the premises I have been working from were wrong; the original input I was given (before I was of an age to think critically) was incorrect. I'm tracing back the origin of a lot of the aliefs I have, only to find that they're based on values I no longer hold to be important. My value-sets need tweaking.

Unlike with a computer, though, with a mind you can't just delete what you need to and start over. Those detrimental thought-processes need to be overwritten with something that works better. That's why I'm here, essentially, as a complement to my inner work. I'm here to read about a more rational way of thinking, to try out ideas, to compare and to analyze. I intend to work through the Sequences, a little at a time.

I expect to read much more than I comment. If I assess myself honestly and fairly, then I'm not an unintelligent person, but I am (particularly by comparison with the subset represented at this website!) uneducated, and so a great deal of the math and science will likely be beyond my comprehension at this point. However, I thought I'd post here to introduce myself anyway, and to say what a valuable resource this site looks to be. I look forward to reading more.

Other trivia: I'm female, which I know puts me in the minority here. I enjoy science fiction and am working on some original pieces of my own. I'm interested in psychology, anthropology and the "weirder" parts of physics. I like to think about the very large and very small ends of the scale, and contemplate the big questions about who we are, how we got here and where we're going. I'm a libertarian and a feminist, and I drink tea.

comment by [deleted] · 2012-08-03T14:33:29.635Z · score: 1 (1 votes) · LW · GW

Hmm... Explain/worship/ignore is one of the first articles I remember reading too.

I wish you the warmest welcome.

Make sure to at least read the Core Sequences (Map and Territory, Mysterious Answers to Mysterious Questions, Reductionism), as there is a tendency in discussions on this site to be harsh toward debaters who have not familiarized themselves with the basics.

comment by [deleted] · 2012-08-04T06:01:57.404Z · score: 0 (0 votes) · LW · GW

It's a good article!

Thank you for the kind welcome and for the advice. I don't intend to jump into discussion without having done the relevant reading (and acquired at least a small understanding of community norms) so hopefully I'll avoid too many mistakes. I'm working through Mysterious Answers to Mysterious Questions now, and what strikes me is how much of it I knew, in a sense, already, but never could have put forward in such a coherent and cohesive way.

So far, what I've read confirms my worldview. Being wary of confirmation bias and other such fun things, I'll be curious to see how I react when I read an article here that challenges it, as I'm near-certain will happen in due course. (And even typing that makes me wonder what exactly I mean by I there in each case, but that's off-topic for this thread)

comment by EmuSam · 2012-07-19T04:49:37.550Z · score: 7 (7 votes) · LW · GW

Hello.

I was raised by a rationalist economist. At some point I got the idea that I wanted to be a statistical outlier, and also that irrationality was the outlier. After starting to pay attention to current events and polls, I'm now pretty sure that the second premise is incorrect.

I still have many thought patterns from that period that I find difficult to overcome. I try to counter them in the more important decisions by assigning WAG numerical values and working through equations to find a weighted output. I read more non-fiction than fiction now, and I am working with a mental health professional to overcome some of those patterns. I suppose I consider myself to have a good rationalist grounding while being used to completely ignoring it in my everyday life.

I found Less Wrong through FreethoughtBlogs and "Harry Potter and the Methods of Rationality." I added it to my feed reader and have been forcing my economist to help me work through some of the more science-of-choice oriented posts.
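A minimal sketch of the weighted-output approach I described above, in case it's useful to anyone: assign WAG scores and importance weights to each factor, then compare the weighted sums. All factor names, options, scores, and weights here are invented purely for illustration.

```python
# Rank options by a weighted sum of wild-ass-guess (WAG) scores.
# Every name and number below is hypothetical.

def weighted_score(scores, weights):
    """Combine per-factor WAG scores (0-10) with importance weights."""
    return sum(scores[factor] * weight for factor, weight in weights.items())

# How much each factor matters (kept summing to 1 for easy comparison).
weights = {"salary": 0.5, "commute": 0.2, "interest": 0.3}

# WAG scores for each option on each factor.
options = {
    "job_a": {"salary": 6, "commute": 3, "interest": 9},
    "job_b": {"salary": 8, "commute": 8, "interest": 5},
}

ranked = sorted(options, key=lambda name: weighted_score(options[name], weights),
                reverse=True)
print(ranked)  # options listed from highest weighted output to lowest
```

Crude, of course; the point is only that writing the guesses down as numbers forces the trade-offs into the open.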

comment by [deleted] · 2012-07-19T11:53:10.156Z · score: 1 (1 votes) · LW · GW

WAG

???

The only expansion of that I can find with Google (Wifes And Girlfriends [of footballers]) doesn't seem too relevant.

comment by Morendil · 2012-07-19T12:13:26.331Z · score: 7 (7 votes) · LW · GW

Wild Ass Guess.

comment by DaFranker · 2012-07-19T14:05:53.577Z · score: 3 (3 votes) · LW · GW

Was that just meta, or did you already know it? In what fields would the saying be more common, out of curiosity?

comment by evand · 2012-07-19T17:24:51.023Z · score: 5 (5 votes) · LW · GW

It's reasonably common among engineers, in my experience. Along with SWAG -- scientific wild-assed guess -- intended to denote something that has minimal support: an estimate that is the output of combining WAGs and actual data, for example.

comment by Davidmanheim · 2012-07-19T23:44:24.614Z · score: 2 (2 votes) · LW · GW

He may not have known it, but it's used. I worked in Catastrophe Risk modeling, and it was a term that applied to what our clients and competitors did; not ourselves, we had rigorous methodologies that were not discussed because they were "trade secrets," or as I came to understand, what is referred to below as SWAG.

I have heard engineers use it as well.

comment by Viliam_Bur · 2012-07-18T18:32:01.878Z · score: 7 (7 votes) · LW · GW

Please add a few words about "Open Thread". Something like -- If you want to write just a simple question or one paragraph of text, don't create a new article; just add it as a comment to the latest discussion article called "Open Thread".

comment by AllanGering · 2012-07-19T03:44:17.293Z · score: 0 (0 votes) · LW · GW

Along the same lines, it may be worth revising the following:

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation;

comment by MikeDobbs · 2013-03-25T13:17:04.242Z · score: 6 (6 votes) · LW · GW

Hello LW community. I'm a HS math teacher most interested in Geometry and Number Theory. I have long been attracted to mathematics and philosophy because they both embody the search for truth that has driven me all my life. I believe reason and logic are profoundly important both as useful tools in this search, and for their apparently unique development within our species.

Humans aren't particularly fast, or strong, or resistant to damage as compared with many other creatures on the planet, but we seem to be the only ones with a reasonably well developed faculty for reasoning and questioning. This leads me to believe that developing these skills is a clear imperative for all human beings, and I have worked hard all my life to use rational thinking, discourse and debate to better understand the world around me and the decisions that I make every day.

This is what drove me towards teaching as a career, as I see my profession as providing me with the opportunity to help young people better understand the importance of reason and logic, as well as help them to develop their ability to utilise them.

I'm excited to finally become a member of this community which seems to share in many of the values I hold dear, and look forward to many intriguing and thought provoking discussions here on LW!

comment by Squark · 2013-03-01T22:02:42.728Z · score: 6 (6 votes) · LW · GW

Hello everyone. My name is Vadim Kosoy, and you can find some LW-relevant stuff about me in my Google+ stream: http://plus.google.com/107405523347298524518/about

I am a lifelong geek, with knowledge / interest in math, physics, chemistry, molecular biology, computer science, software engineering, algorithm engineering and history. Some areas in which I'm comparatively more knowledgeable: quantum field theory, differential geometry, algebraic geometry, and algorithm engineering (especially computer vision).

In my day job I'm a technical + product manager of a small software group at Mantis Vision (http://www.mantis-vision.com/), a company developing 3D video cameras. My previous job was at VisionMap (http://www.visionmap.com/), which develops airborne photography / mapping systems, where I led a team of software and algorithm engineers.

I had known about Eliezer Yudkowsky and his friendly AI thesis (which I don't fully accept) for some time, but discovered this community only relatively recently. This community interests me for several reasons. One is that many discussions touch on transhumanism / the technological singularity / artificial intelligence, topics I find very interesting and important. Another is that consequentialism is a popular moral philosophy here, and I (relatively recently) started to identify as a strong consequentialist. Yet another is that it seems to be a community where rational people discuss things rationally (or at least try to), something society everywhere lacks as direly as the idea seems trivial. This is in stark contrast to the usual mode of discourse about social / political issues, which is extremely shallow and plagued by excessive emotionality and dogmatism. I truly believe such a community can become a driver of social change in good directions, something with incredible impact.

Recently I became very interested in the subject of understanding general intelligence mathematically, in particular through the methods of computer science. I've written some comments here about my own variant of the Orseau-Ring framework, something I wished to expand into a full article but didn't have the karma for. Maybe I'll post it on LW Discussion.

My personal philosophy: as I said, I'm a consequentialist. I define my utility function not on the basis of hedonism or anything close to it, but on the basis of long-term scientific / technological / autoevolutionary (transhumanist) progress. I don't believe in the innate value of H. sapiens but rather in the innate value of intelligent beings (in particular, the more intelligence, the more value). I can imagine scenarios in which a strong AI destroys humanity that are, from my point of view, strongly positive: this is my disagreement with the friendly AI thesis. However, I'm not sure whether any strong AI scenario will be positive, so I agree it is a concern. I also consider myself a deist rather than an atheist. Thus I believe in God, but the meaning I ascribe to the word "God" is very different from the meaning most religious people ascribe to it (I choose to still use the word "God" since there are a few things in common). For me God is the (unknowable) reason for the miraculous beauty of the universe, perceived by us as the beauty of mathematics and science and the amazing plethora of interesting natural phenomena. God doesn't punish or reward good or bad behavior, doesn't perform divine intervention (in the sense of occasional violations of natural law) and doesn't write or dictate scriptures and prophecies (except by inspiring scientists to make mathematical and scientific discoveries). I consider the human brain to be a machine, with no magic "soul" behind the scenes. However, I believe in immortality in a stranger metaphysical sense, which is probably too long to detail here.

I'm 29.9 years old, married with a child (a boy, 2.8 years old). I have lived in Israel since the age of 7, but I was born in the USSR. Ethnically I'm an Ashkenazi Jew. I enjoy science fiction, good cinema (though no time to see any since my son was born :) ) and many sorts of music (rock is probably my favorite). Glad to be here!

comment by lukeprog · 2013-04-20T08:08:54.623Z · score: 1 (1 votes) · LW · GW

Welcome! You should probably join the MAGIC list. Orseau and others hang out there, and Orseau will probably comment on your two posts if you ask for feedback on that list. Also, if you ever visit California then you should visit MIRI and do some math with us.

comment by Kawoomba · 2013-03-01T22:24:41.791Z · score: 1 (1 votes) · LW · GW

Welcome! We're all 29.9 years old, here. I look forward to your comments, hopefully you'll find the time for that post on your Orseau-Ring variant.

Regarding your redefinition of God, allow me just a small comment: calling an unknowable reason "God" -- without believing in that reason's personhood, volition, or mind -- invites a lot of unneeded baggage and historical connotations that muddle the discussion, and your self-identification, because what you apparently mean by the term is so different from the usual definitions of "God" that you could just as well call yourself a spiritual atheist (or something related).

comment by Bugmaster · 2013-03-01T22:44:25.563Z · score: 6 (6 votes) · LW · GW

Welcome! We're all 29.9 years old, here.

Speak for yourself, youngster! Why, back in my day, we didn't have these "internets" you whippersnappers are always going on about, what with the cats and the memes and the facetubes and the whatnot. We had to make our own networks, by hand, out of floppies and acoustic modems, and we liked it. Why, there's nothing like an invigorating morning hike with a box of 640K floppies (formatted to 800K) in your backpack, uphill in the snow both ways. Builds character, it does. Mumble mumble mumble get off my lawn!

comment by Squark · 2013-03-05T21:22:28.894Z · score: 0 (0 votes) · LW · GW

Maybe from a consequentialist point of view, it's best to use the word "God" when arguing my philosophy with theists and some other word when arguing it with atheists :) I'm thinking of "The Source". However, there is a closely related construct which has a sort of personhood. I named it "The Asymptote": I think that the universe (in the broadest possible sense of the word) contains a sequence of intelligences of unbounded increasing power, and "The Asymptote" is a formal limit of this sequence. Loosely speaking, "The Asymptote" is just any intelligence vastly more powerful than our own. This idea comes from the observation that the known history of the universe can be regarded as a process of forming more and more elaborate forms of existence (cosmological structure formation -> geological structure formation -> biological evolution -> sentient life -> evolution of civilization), and therefore my guess is that there is something about "The Source" which guarantees an indefinite process of this kind. Some sort of fundamental Law of Evolution which should be complementary, in a way, to the Second Law of Thermodynamics.

comment by CCC · 2013-04-20T12:57:45.459Z · score: 0 (0 votes) · LW · GW

This idea comes from the observation that the known history of the universe can be regarded as a process of forming more and more elaborate forms of existence (cosmological structure formation -> geological structure formation -> biological evolution -> sentient life -> evolution of civilization)

I disagree that they are necessarily more elaborate. I don't think we (as humanity) fully appreciate the complexity of cosmological structures yet (and I don't think we will until we get out there and take a closer look at them; we can only see coarse features from several lightyears away). And civilisation seems less elaborate than sentience, to me.

comment by Squark · 2013-04-20T13:06:49.846Z · score: 1 (1 votes) · LW · GW

Well, civilization is a superstructure of sentience and is more elaborate in that sense (i.e. sentience + civilization is more elaborate than "wild" sentience).

comment by CCC · 2013-04-20T18:13:10.905Z · score: 1 (1 votes) · LW · GW

I take your point. However, I can turn it about and point out that cosmological structures (a category that includes the planet Earth) must by the same token be more elaborate than geological structures.

comment by Squark · 2013-04-20T18:26:31.323Z · score: 0 (0 votes) · LW · GW

Sure. Perhaps I chose careless wording, but when I said "cosmological structure formation -> geological structure formation" my intent was the process whereby a universe initially filled with homogeneous gas develops inhomogeneities, which condense to form galaxies, stars and planets, which in turn undergo further processes (galaxy collisions, supernova explosions, collisions within stellar systems, geologic / atmospheric processes within planets) that produce more and more complex structure over time.

comment by CCC · 2013-04-20T18:47:54.992Z · score: 0 (0 votes) · LW · GW

I see.

Doesn't that whole chain require the entropy of the universe to be decreasing? Or am I missing something?

comment by Squark · 2013-04-20T19:33:54.371Z · score: 0 (0 votes) · LW · GW

You mean that this process has the appearance of decreasing entropy? In truth it doesn't. For example, gravitational collapse (the basic mechanism of galaxy and star formation) decreases entropy by reducing the spatial spread of matter but increases it by heating matter up. Thus we end up with a net entropy gain. On a cosmic scale, I think the process exploits a sort of temperature difference between gravity and matter, namely that initially the temperature of matter was much higher than the Unruh temperature associated with the cosmological constant. Thus even though the initial state had little structure, it was very far from equilibrium and therefore very low in entropy compared to the final equilibrium it will reach.

comment by CCC · 2013-04-23T12:08:55.939Z · score: 0 (0 votes) · LW · GW

Huh. I don't think that I know enough physics to argue this point any further.

comment by Bugmaster · 2013-03-05T23:19:12.161Z · score: 0 (2 votes) · LW · GW

I think that the universe (in the broadest possible sense of the word) contains a sequence of intelligences of unbounded increasing power...

I strongly doubt the existence of any truly unbounded entity. Even a self-modifying transhuman AI would eventually run out of atoms to convert into computronium, and out of energy to power itself. Even if our Universe were infinite, the AI would be limited by the speed of light.

...and "The Asymptote" is a formal limit of this sequence.

Wait, so is it bounded or isn't it? I'm not sure what you mean.

cosmological structure formation -> geological structure formation -> biological evolution -> sentient life -> evolution of civilization

There are plenty of planets where biological evolution has not happened, and most likely never will -- take Mercury, for example, or Pluto (yes, yes, I know it's not technically a planet). As far as we can tell, most if not all exoplanets we have detected so far are lifeless. What leads you to believe that biological evolution is inevitable?

comment by Squark · 2013-03-07T19:44:16.442Z · score: 1 (1 votes) · LW · GW

I strongly doubt the existence of any truly unbounded entity. Even a self-modifying transhuman AI would eventually run out of atoms to convert into computronium, and out of energy to power itself. Even if our Universe was infinite, the AI would be limited by the speed of light.

In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing, but any computation can be performed eventually. Of course, you might argue that our universe is asymptotically de Sitter. This is true, but it is also probably metastable and can collapse into a universe with other properties. In http://arxiv.org/abs/1105.3796 the authors present the following line of reasoning: there must be a way to perform an infinite sequence of measurements, since otherwise the probabilities of quantum mechanics would be meaningless. In a similar vein, I speculate it must be possible to perform an infinite number of computations (or even all possible computations). The authors then go on to explore cosmological explanations of how that might be feasible.

Wait, so is it bounded or isn't it? I'm not sure what you mean.

The sequence is unbounded in the sense that any possible intelligence is eventually superseded. The Asymptote is something akin to infinity. The Asymptote is "like an intelligence but not quite" in the same way infinity is "like a number but not quite".

There are plenty of planets where biological evolution has not happened, and most likely never will -- take Mercury, for example, or Pluto (yes, yes, I know it's not technically a planet). As far as we can tell, most if not all exoplanets we have detected so far are lifeless. What leads you to believe that biological evolution is inevitable?

Good point. Indeed, it seems that life formation is a rare event. So I'm not sure whether there really is a "Law of Evolution" or we're just seeing the anthropic principle at work. It would be interesting to understand how to distinguish these scenarios.

comment by wedrifid · 2013-03-07T20:02:09.923Z · score: 1 (1 votes) · LW · GW

In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.

Does this hold in a universe that is also expanding (like ours)? Such a scenario makes the 'infinite' property largely moot, given that any point within it has an 'observable universe' that is not infinite. That would seem to rule out computations of anything more complicated than what can be represented within the Hubble volume.

comment by Squark · 2013-03-07T20:29:36.507Z · score: 0 (0 votes) · LW · GW

Yes, this was exactly my point regarding the universe being asymptotically de Sitter. The problem is that the universe is not merely expanding, it's expanding with acceleration. But there are possible solutions to this like escaping to an asymptotic region with a non-positive cosmological constant via false vacuum collapse.

comment by Bugmaster · 2013-03-07T22:02:17.741Z · score: 0 (0 votes) · LW · GW

In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.

wedrifid already replied better than I could; but I'd still like to add that "eventually" is a long time. For example, if the problem you are computing is NP-complete, then you won't be able to grow your hardware quickly enough to make any practical difference. In addition, if our universe is not eternal (which it most likely is not), then it makes no sense to talk about an "infinite series of computations".

The sequence is unbounded in the sense that any possible intelligence is eventually superseded. The Asymptote is something akin to infinity. The Asymptote is "like an intelligence but not quite" in the same way infinity is "like a number but not quite"

Sorry, but I literally have no idea what this means. I don't think that infinity is "like a number but not quite" at all, so the analogy doesn't work for me.

It would be interesting to understand how to distinguish these scenarios

Well, so far, we have observed one instance of "evolution", and thousands of instances of "no evolution". I'd say the evidence is against the "Law of Evolution" so far...

comment by Squark · 2013-03-12T20:12:11.721Z · score: 1 (1 votes) · LW · GW

In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.

wedrifid already replied better than I could; but I'd still like to add that "eventually" is a long time. For example, if the algorithm that you are computing is NP-complete, then you won't be able to grow your hardware quickly enough to make any practical difference. In addition, if our universe is not eternal (which it most likely is not), then it makes no sense to talk about an "infinite series of computations".

For algorithms with exponential complexity, you will have to wait an exponentially long time, yes. But eternity is enough time for everything. I think the universe is eternal. Even an asymptotically de Sitter region is eternal (but useless, since it reaches thermodynamic equilibrium); however, the universe contains other asymptotic regions. See http://arxiv.org/abs/1105.3796

Sorry, but I literally have no idea what this means. I don't think that infinity is "like a number but not quite" at all, so the analogy doesn't work for me.

A more formal definition is given in my comment at http://lesswrong.com/lw/do9/welcome_to_less_wrong_july_2012/8kt7 . Less formally, infinity is "like a number but not quite" because many predicates into which a number can be meaningfully plugged in also work for infinity. For example:

infinity > 5
infinity + 7 = infinity
infinity + infinity = infinity
infinity * 2 = infinity

However not all such expressions make sense:

infinity - infinity = ?
infinity * 0 = ?

Formally, adding infinity to the field of real numbers doesn't yield a field (or even a ring).
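As an aside, IEEE-754 floating-point numbers (JavaScript's `Infinity`, for instance) encode exactly this extended arithmetic: the well-defined expressions above evaluate as stated, while the indeterminate forms come out as NaN ("not a number") rather than any numeric value. A quick sketch:

```javascript
// The well-defined expressions from the list above:
console.log(Infinity > 5);                      // true
console.log(Infinity + 7 === Infinity);         // true
console.log(Infinity + Infinity === Infinity);  // true
console.log(Infinity * 2 === Infinity);         // true

// The expressions that don't make sense yield NaN instead of a number:
console.log(Number.isNaN(Infinity - Infinity)); // true
console.log(Number.isNaN(Infinity * 0));        // true

// NaN even compares unequal to itself, underscoring that no
// numeric value fits these expressions:
console.log(NaN === NaN);                       // false
```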

Well, so far, we have observed one instance of "evolution", and thousands of instances of "no evolution". I'd say the evidence is against the "Law of Evolution" so far...

There is clearly at least one Great Filter somewhere between the creation of life (probably there is one exactly there) and the appearance of a civilization with moderately super-modern technology: this follows from the Fermi paradox. However, it feels as though there is a small number of such Great Filters, with nearly inevitable evolution between them. The real question is the expected number of instances of passing these Filters within the volume of a cosmological horizon. If this number is greater than 1, then the universe is more pro-evolution than would be anticipated from the anthropic principle alone. The Fermi paradox puts an upper bound on this number, but I think the bound is much greater than 1.

comment by shminux · 2013-03-05T23:05:14.805Z · score: 0 (0 votes) · LW · GW

"The Asymptote" is a formal limit of this sequence.

Why postulate that such a limit exists?

comment by Squark · 2013-03-07T19:57:31.758Z · score: -1 (1 votes) · LW · GW

To really explain what I mean by the Asymptote, I need to explain another construct, which I call "the Hypermind" (Kawoomba's comment motivated me to invest in the terminology :) ).

What is identity? What makes the you of today the same person as the you of yesterday? My conviction is that the essential relationship between the two is that the "you of today" shares the memories of "you of yesterday" and fully understands them. In a similar manner, if a hypothetical superintelligence Omega were to learn all of your memories and understand them (you) on the same level you understand yourself, Omega should be deemed a continuation of you, i.e. it has assimilated your identity into its own. Thus in the space of "moments of consciousness" in the universe we have a partial order where A < B means "B is a continuation of A" i.e. "B shares A's memories and understands them". The Hypermind hypothesis is that for any A and B in this space there is a C s.t. C > A and C > B. This seems to me a likely hypothesis if you take into account that the Omega in the example above doesn't have to exist in your physical vicinity but may exist anywhere in the (multi/)universe and have a simulation of you running on its laptop.

The Asymptote is then a formal limit of the Hypermind. That is, the semantics of "The Asymptote has property P" is "For any A there is B > A s.t. for any C > B, C has property P". It is then an interesting problem to find non-trivial properties of the Asymptote. In particular, I suspect (without strong evidence yet) that the opposite of the Orthogonality Thesis is true, namely that the Asymptote has a well-defined preference / utility function.

comment by shminux · 2013-03-07T20:33:39.161Z · score: 2 (2 votes) · LW · GW

This seems like a rather simplistic view, see counter-examples below.

My conviction is

"conviction" might not be a great term, maybe what you mean is a careful conclusion based on something.

that the essential relationship between the two is that the "you of today" shares the memories of "you of yesterday"

except that we forget most of them, and that our memories of the same event change in time, and often are completely fictional.

and fully understands them.

Not sure what you mean by understanding here, feel free to define it better. For example, we often "understand" our memories differently at different times in our lives.

Thus in the space of "moments of consciousness" in the universe we have a partial order where A < B means "B is a continuation of A" i.e. "B shares A's memories and understands them"

So, if you forgot what you had for breakfast the other day, you today are no longer a continuation of you from yesterday?

"The Asymptote has property P" is "For any A there is B > A s.t. for any C > B, C has property P"

That's a rather non-standard definition. If anything, it's closer to monotonicity than to accumulation. If you mean a limit point, then you ought to define what you mean by a neighborhood.

To sum up, your notion of Asymptote needs a lot more fleshing out before it starts making sense.

comment by Squark · 2013-03-08T21:46:32.247Z · score: -1 (1 votes) · LW · GW

the essential relationship between the two is that the "you of today" shares the memories of "you of yesterday"

except that we forget most of them, and that our memories of the same event change in time, and often are completely fictional.

Good point. The description I gave so far is just a first approximation. In truth, memory is far from perfect. However, if we weight memories by their potential impact on our thinking and decision making, then I think we would find that most memories are preserved, at least on short time scales. So, from my point of view, the "you of today" is only a partial continuation of the "you of yesterday". However, this doesn't essentially change the construction of the Hypermind. It is possible to refine the hypothesis by stating that for every two "pieces of knowledge" a and b, there exists a "moment of consciousness" C s.t. C contains both a and b.

"The Asymptote has property P" is "For any A there is B > A s.t. for any C > B, C has property P"

That's a rather non-standard definition. If anything, it's close to monotonicity than to accumulation. If you mean the limit point, then you ought to define what you mean by a neighborhood.

Actually, I overcomplicated the definition. The definition should read "Exists A s.t. for any B > A, B has property P". The neighbourhoods are sets of the form {B | B > A}. This form of the definition implies the previous form, using the assumption that for any A and B there is a C > A, B.
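To make the definition concrete, here is a small toy model (an illustration I constructed, not part of the original argument): "moments of consciousness" are finite sets of memories, and "B continues A" means B contains all of A's memories. Both the directedness assumption and the "limit property" semantics can then be checked directly:

```javascript
// Toy model: moments are subsets of a small memory pool, ordered by
// inclusion ("B is a continuation of A" = B contains A's memories).
const memories = ["a", "b", "c"];

// Enumerate all subsets of `memories` via bitmasks 0..7.
const moments = [];
for (let mask = 0; mask < (1 << memories.length); mask++) {
  moments.push(new Set(memories.filter((_, i) => mask & (1 << i))));
}

// b continues a iff every memory of a is also in b.
const continues = (b, a) => [...a].every((m) => b.has(m));

// Directedness: any two moments have a common continuation
// (here their union, or simply the full set, always works).
const directed = moments.every((a) =>
  moments.every((b) =>
    moments.some((c) => continues(c, a) && continues(c, b))));

// "The Asymptote has property P" = there exists a moment A such that
// every continuation B of A has P.
const asymptoteHas = (p) =>
  moments.some((a) => moments.filter((b) => continues(b, a)).every(p));

console.log(directed);                         // true
console.log(asymptoteHas((m) => m.has("a")));  // true: every continuation of {a} remembers "a"
console.log(asymptoteHas((m) => !m.has("a"))); // false: the full set blocks it
```

The last two lines show the asymmetry the definition is after: a memory, once acquired, holds "in the limit", while its absence never does.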

comment by shminux · 2013-03-12T20:07:53.891Z · score: 0 (0 votes) · LW · GW

The definition should read "Exists A s.t. for any B > A, B has property P"

Hmm, it seems like your definition of Asymptote is nearly that of a limit ordinal.

comment by shev · 2013-02-02T05:52:18.792Z · score: 6 (6 votes) · LW · GW

Hi, I'm Alex.

Every once in a while I come to Less Wrong because I want to read more interesting things and have more interesting discussions on the Internet. In the past I've found it a lot easier to spend time on Reddit (having removed all the drivel) and to dredge through Quora for actually insightful content (seriously, do they have any sort of organization system for finding reading material?). Less Wrong's discussions have seemed slightly inaccessible, so maybe posting an introduction like I'm supposed to will set in motion my figuring out how this community works.

I'm interested in a lot of things here, but especially physics and mathematics. I would use the word "metaphysics" but it's been appropriated for a lot of things that aren't actually meta-physics as I mean it. Maybe I want "meta-mathematics"? Anyway, I'm really keen on the theory behind physical laws and on attempts at reformulating math and physics into more lucid and intuitive systems. Some of my recent reading material (I won't say research, but... maybe I should say research) has been on geometric algebra, re-axiomatizations of set theory, foundations and interpretations of quantum mechanics, reformulations of relativity, the interpretation of quantum field theory, things like that. I have a permanent distaste for spinors and for all the math we don't try to justify with intuition when teaching physics, so I've spent a lot of the last few years studying those.

I was really intrigued by the articles (blog posts?) from a few months ago on what proofs actually mean, and on causality; that's when I started reading the site. I've spent the better part of the last year sifting through all kinds of math ideas related to reinterpretations or 'fundamental' insights, so I hope hanging around here can expose me to some more.

Oh, and I've spent a good amount of time on the Internet refuting crackpots who think they solved physics, so I, um, promise I'm not one.

I'm a programmer by trade and have a good interest in revolutionary (or just convenient) software projects and disruptive ideas and really naive, idealist world-changing ideas, which is fun.

I have read some of the sequences and such, but -- I guess I'm a rationalist at heart already, maybe because I've studied lots of logic and such, and a lot of the basic stuff seemed pretty apparent to me. I was already up to speed on Bayes and quantum mechanics, for example, and never considered anything other than atheism. And I already optimize and try to look at life in terms of expected payoffs and other very rational things like that. But it's possible I've missed a lot of the material here -- I find navigating the site pretty unintuitive.

I'm based in Seattle and I hope to go to the meetups if they... ever happen again. I mostly just like talking to smart people; I find it makes my brain work better - as if there's some sort of 'conversation mode' which hypercharges my creativity.

Oh, and I have a blog: http://ajkjk.com/blog/. I'm slightly terrified of linking it; it's the first time I've shown it to anyone but friends. It only has 6 posts so far. I've written a lot more but deleted/hid them until they're cleaned up.

comment by [deleted] · 2013-02-02T06:57:45.748Z · score: 3 (3 votes) · LW · GW

I have read some of the sequences and such, but -- I guess I'm a rationalist at heart already, maybe because I've studied lots of logic and such, and a lot of the basic stuff seemed pretty apparent to me. I was already up to speed on Bayes and quantum mechanics, for example, and never considered anything other than atheism. And I already optimize and try to look at life in terms of expected payoffs and other very rational things like that. But it's possible I've missed a lot of the material here -- I find navigating the site pretty unintuitive.

Be very careful thinking you are done. I was in pretty much exactly the same position as you about a year ago. ("yep, I'm pretty rational. Lol @ god; I wonder what it's like to have delusional beliefs"). After a year and a half here, having read pretty much everything in the sequences and most of the other archives, running a meetup, etc, I now know that I suck at rationality. You will find that you are nowhere near the limits, or even the middle, of possible human rationality.

Further, I now know what it's like to have delusional beliefs that are so ingrained you don't even recognize them as beliefs, because I had some big ones. I probably have more. They're not easy to spot from the inside.

On the subject of atheism... I used to be an atheist, too. The rabbit hole you've fallen into here is deep.

The Seattle guys are pretty cool, from those I've met. Go hang out with them.

comment by Kawoomba · 2013-02-02T07:32:11.595Z · score: 5 (5 votes) · LW · GW

On the subject of atheism... I used to be an atheist, too. The rabbit hole you've fallen into here is deep.

Don't be mysterious, Morpheus, please elaborate.

comment by shev · 2013-02-02T07:21:03.650Z · score: 1 (1 votes) · LW · GW

Okay, sure. Rather, I mean: I feel like I'm past the introductory material. Like I'm coming in as a sophomore, say. But -- I could be totally wrong! We'll see.

I've definitely got counter-rational behaviors ingrained; I'm constantly fighting my brain.

And, if we're pedantic about things pretty similar to atheism, I might not be an atheist. I'm not up to speed on all the terms. What do you call:

I don't 'believe' anything; I have degrees of credence in how accurate information might be, though I talk as though I believe the best model I have. Physics provides a model of the universe which I accept to a high degree, and I think it's very likely accurate as an abstraction (the finer points are up for debate). I make and accept no claims about things that can't be covered by that model, such as extra-universal entities or the reason we exist at all. I consider the elegance of a model as working to its merit as well as its accuracy, so invoking supernatural or arbitrary forces where there's an alternative makes an explanation very implausible to me. I see no reason to invoke anything other than physics anywhere between the "big bang" step and my perception of the present, so my currently preferred explanation excludes anything supernatural in any form.

I was calling that atheism.

comment by [deleted] · 2013-02-02T16:21:43.325Z · score: 0 (0 votes) · LW · GW

I was calling that atheism.

In that sense, then, I'm an atheist.

My test was whether my god-related beliefs would get me flamed on r/atheism. I don't think my beliefs would pass the ideological Turing test for atheism.

I used to think the god hypothesis was not just wrong, but incoherent. How could there be a being above and outside physics? How could god break the laws of physics? Of course now I take the simulation argument much more seriously, and even superintelligences within the universe can probably do pretty neat things.

I still think non-reductionism is incoherent; "a level above ours" makes sense, "supernatural" does not.

This isn't really a major update, though. I'm just not going to refer to myself as an atheist any more, because my beliefs permit a lot more.

comment by shminux · 2013-02-02T08:47:37.347Z · score: 0 (2 votes) · LW · GW

Seems like agnosticism to me, or atheism in a broader sense. Narrow atheism is the belief in zero gods.

comment by shminux · 2013-02-02T06:11:05.493Z · score: 3 (3 votes) · LW · GW

From your blog:

Recently it occurred to me that a large part of being addicted to Reddit isn't actually the content but the fact that the links turn purple when you click on them. And my brain is slightly obsessed with turning all the blue purple, all the time.

This is amazing, yet seems so obvious in retrospect. So many of us have turned into blue-minimizing robots without realizing it. Hopefully breaking the reward feedback loop with your extension would force people to try to examine their true reasons for clicking.

comment by shev · 2013-02-02T07:26:15.202Z · score: 1 (1 votes) · LW · GW

I was pretty pleased with myself for discovering that. It - sorta works. I still find myself going to Reddit, but so far it's still "feeling" less addictive (which is really hard to quantify or describe). Now I'm finding myself just clicking to websites more looking for something, rather than specifically clicking links. I've been sleeping badly lately, though, and I find that my brain is a lot more vulnerable to my Internet addiction when I haven't slept well - so it's not a good comparison to my norm.

Incidentally, if anyone wanted me to, I could certainly make the extension work on other browsers. It's the simplest thing ever; it just injects 7 clauses of CSS into Reddit pages. I thought about making it mess with other websites I use (Hacker News, mostly), but I decided they weren't as much of a problem and it was better to keep it single-purpose for now.
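The extension's actual stylesheet isn't shown, but the trick it describes can be sketched in a few lines of CSS. The selectors and colors below are illustrative guesses, not Reddit's real markup or the extension's actual seven clauses:

```css
/* Injected into reddit.com pages: make visited links render exactly like
   unvisited ones, so there is no "turn the blue purple" reward to chase.
   Selectors and color values are illustrative, not Reddit's real markup. */
a:visited { color: #0000ee; }        /* default browser link blue */
a.title:visited { color: #0000ee; }  /* story titles in listings */
a.comments:visited { color: #888; }  /* comment-count links */
```

Note that browsers deliberately restrict which properties `:visited` styles may change (essentially color-related ones, for privacy reasons), which is exactly enough for this purpose.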

comment by itaibn0 · 2013-02-21T13:15:25.657Z · score: 1 (1 votes) · LW · GW

re-axiomizing set theory

Now I'm tempted to spread a meme. Have you heard of Martin-Löf type theory? In my opinion, it's a much better foundation for mathematics than ZFC.

comment by Nisan · 2013-02-21T16:04:36.236Z · score: 0 (0 votes) · LW · GW

Welcome. There are e-reader and PDF versions of the Sequences that may be easier to navigate.

comment by Rixie · 2013-01-20T00:02:35.146Z · score: 6 (8 votes) · LW · GW

Hi! I was wondering where to start on this website. I started reading the sequence "How to actually change your mind", but there's a lot of lingo and stuff I still don't understand. Is there a sequence here that's like, Rationality for Beginners, or something? Thanks.

comment by Kindly · 2013-01-20T06:04:03.608Z · score: 2 (2 votes) · LW · GW

Probably the best thing you can do, for yourself and for others, is to post comments on the posts you've read, asking questions where you don't understand something. The sequences ought to be as easy to understand as possible, but the reality may not always approach the ideal.

But if the jargon is the problem, the LW wiki has a dictionary.

comment by beoShaffer · 2013-01-20T03:59:44.566Z · score: 1 (1 votes) · LW · GW

I found the order presented in the wiki's guide to the sequences to be quite helpful.

comment by Dorikka · 2013-01-30T03:22:01.598Z · score: 0 (0 votes) · LW · GW

This may be a decent starting post.

comment by TimS · 2013-01-20T03:43:29.042Z · score: 0 (0 votes) · LW · GW

Welcome. As intro pieces, I really like Making Beliefs Pay Rent and Belief in Belief. The rest of the Mysterious Answers sequence consists of attempts to illuminate or elaborate on the points made in those two essays.

I was less impressed with "A Human's Guide to Words," but that might be because my legal training forced me to think about those issues long before I ever wandered here. As a brief heuristic: if the use-mention distinction seems really insightful to you, try it out. If you've already thought about similar issues, you could pass on it.

I think the other Sequences are far less interestingly novel, but some of that is my (rudimentary but still above average for here) background in philosophy. And some of it is that I don't care about some of the topics that are central to the discussion in this community.

As always with advice like this, take what I say with a substantial grain of salt. Feel free to look at our wiki page on the Sequences to see all of what's out there.

comment by anansi133 · 2013-01-03T18:21:57.642Z · score: 6 (8 votes) · LW · GW

Hello, newbie here. I'm intrigued by the premise of this forum.

About me: I think a lot- mostly by myself. That's trained me in some really lazy habits that I am looking to change now.

In the last few weeks, I noticed what I think are some elemental breakdowns in human politics. When things go bad between people, I think it can be attributed to one of three causes: immaturity, addiction, or insanity. I would love to discuss this further, hoping someone's interested.

I wasn't going to mention theism, but it's here in the main post, and suddenly I'm interested: I trend toward the atheistic. I'm really unimpressed with my grandmother's deity, and "supernatural" doesn't seem a useful or interesting category of phenomena. But I like being agnostic more than atheist, just on a few tiny little wiggle-words that seem powerfully interesting to me, and I notice that other people seem to find survival value in it. So that's probably something I will want to talk about.

Many of my more intellectual friends and neighbors can seem like bullies a lot of the time. So I like the word "rationality" in the title of this place, much more than I like "science" or "logic". When I see the war of the Darwin fish on people's bumpers, I remember that the Romans still get a lot of credit for their accomplishments even though math and science as we know it barely existed. Obsession with mere logic seems to put an awful lot of weight on some unexamined premises; people don't talk in formal logic any more than they do math in Roman numerals.

I'm not against vaccination, but I am a caregiver to a profoundly autistic child. It's frustrating to try to have any sort of conversation about autism without it devolving into a vaccination tirade.

I don't think of myself as a 9/11 "truther", and yet I still have many questions about those events and the response that trouble me. Some of these questions are getting answered now that the 10 year anniversary has seen the release of more information. As with the Kennedy assassination, I don't think the full story will ever be widely known. I'm cynical enough that I doubt that it matters.

SETI fascinates me. Bigfoot, the Loch Ness Monster, UFOs - not so much. Whitley Strieber is actually kind of interesting, when I can muster up the required grains of salt.

Anyway, it feels a bit like I'm crawling out from under a rock, not sure what the weather is really like out here. I want to outgrow the pleasures of cleverness, hoping for some happiness in wisdom.

comment by simplicio · 2013-01-03T18:46:10.506Z · score: 1 (1 votes) · LW · GW

About me: I think a lot- mostly by myself. That's trained me in some really lazy habits that I am looking to change now.

Yes, I know the feeling. Welcome out of the echo chamber!

I like being agnostic more than atheist, just on a few tiny little wiggle-words that seem powerfully interesting to me...

Do you mean that it's literally the words you find interesting? Which ones?

comment by anansi133 · 2013-01-03T19:15:36.962Z · score: 0 (0 votes) · LW · GW

That's not actually what I meant, but the challenge seems interesting. lemme see...

Reciprocity? (I'm looking for a word to describe what happens when Islam holds Jesus up as a prophet worth listening to, but Christians afford no such courtesy to Muhammad.)

Faith (Firefly's Book asks Mal, "when I ask you to have faith, why do you think I'm talking about God?")

Ethics vs Morals (few people I know seem to recognize a difference, let alone agree on it)

Moral Class (If we were to encounter a powerful extraterrestrial, how would we know they weren't God? How would they understand the question if we asked them?)

I guess the words weren't so small after all...

comment by MusicMapsReality · 2012-12-24T22:36:43.914Z · score: 6 (6 votes) · LW · GW

Hello, I'm Ben Kidwell. I'm a middle-aged classical pianist and lifelong student of science, philosophy, and rational thought. I've been reading posts here for years and I'm excited to join the discussion. I'm somewhat skeptical of some things that are part of the conventional wisdom around here, but even when I think the proposed answers are wrong - the questions are right. The topics that are discussed here are the topics that I find interesting and significant.

I am only formally and professionally trained in music, but I have tried to self-study physics, math, computer science, and philosophy in a focused way. I confess that I do have one serious weakness as a rationalist, which is that I can read and understand a lot of math symbology, but I can't actually DO math past the level of simple calculus with a few exceptions. (Some computer programming work with algorithms has helped with a few things.) It's frustrating because higher math is The Key that unlocks a lot of deep understanding of the universe.

I have a particular interest in entropy, information theory, cosmology, and their relation to the human experience of temporality. I think the discovery that information-theoretic entropy and thermodynamic entropy are equivalent and the quantum formalism encodes this duality is a crucial insight which should be a foundational cornerstone of philosophy and our understanding of the world. The sequence about quantum theory and decoherence is one of my favorites and I think there is a lot more to be done to adjust our philosophy and use of language when it comes to what kind of quantum reality we are living in.

comment by [deleted] · 2012-12-20T20:33:10.980Z · score: 6 (6 votes) · LW · GW

Hey everyone, I'm sean nolan. I found less wrong from tvtropes.org, but I made sure to lurk sufficiently long before joining. I've been finding a lot of interesting stuff on lesswrong (most of which was posted by eliezer), some of which I've applied to real life (such as how procrastination vs doing something is the equivalent of defect vs cooperate in a prisoners' dilemma against your future self). I'm 99.5% certain I'm a rationalist, the other 0.5% being doubt cast upon me by noticing I've somehow attained negative karma.

comment by mjankovic · 2012-11-21T22:32:22.726Z · score: 6 (6 votes) · LW · GW

Hello, I'm a physics student from Croatia, though I attended a combined physics and computer science program (study programs here are very specific) for a couple of years at a previous university that I left, and my high school specialization is in economics. I am currently working towards my bachelor's degree in physics.

I have no idea how I learned of this site, though it was probably through some transhumanist channels (there's a lot of half-forgotten bits and pieces of information floating in my mind, so I can't be sure). Lately I've started reading the core sequences, mostly on my cell phone while traveling (it avoids tab explosions). So far I've encountered, in a more expanded form, a lot of what I'd already considered or concluded for myself.

comment by JaySwartz · 2012-11-19T23:12:52.305Z · score: 6 (6 votes) · LW · GW

Hello,

I am Jay Swartz, no relation to Aaron. I arrived here via the Singularity Institute and interactions with Louie Helm and Malo Bourgon. Look me up on Quora to read some of my posts and get some insight into my approach to the world. I live near Boulder, Colorado, and have recently started a MeetUp, The Singularity Salon, so look me up if you're ever in the area.

I have an extensive background in high tech, roughly split between Software Development/IT and Marketing. In both disciplines I have spent innumerable hours researching human behavior and thought processes in order to gain insights into how to create user interfaces and how to describe technology in concise ways to help people to evaluate the merits of the technology. I've spent time at Apple, Sun, Seagate, Mensa, Osborne and a few start-ups applying my ever-deepening understanding of the human condition.

Over the years, I have watched synthetic intelligence (I much prefer the more precise SI over AI) grow in fits and starts. I am increasing my focus in this area because I believe we are on the cusp of general SI (GSI). There is a good possibility that within my lifetime I will witness the convergence of technology that leads to the appearance of GSI. This will in part be facilitated by advances in medicine that will extend my lifespan well past 100 years.

I am currently building my first SI web crawler, which will assemble a corpus to be mined by some SciPy applications I have on my list of things to do. These efforts will provide me with technical insights on the SI challenge. There is even the possibility, however slight, that they can be matured to make a contribution to the creation of SI.

Finally, I am working on a potential paper for the Singularity Institute. I just posted a first outline/draft, Predicting Machine Super Intelligence, but do not yet know the details of how anyone finds it or how I see any responses. Having been on more than a few sites similar to this, I know I will be able to quickly sort things out.

I am looking forward to reading and exchanging ideas here. I will strive to contribute as much as I receive.

Jay

comment by gwern · 2012-11-19T23:31:16.555Z · score: 1 (1 votes) · LW · GW

Finally, I am working on a potential paper for the Singularity Institute. I just posted a first outline/draft, Predicting Machine Super Intelligence, but do not yet know the details of how anyone finds it or how I see any responses. Having been on more than a few sites similar to this, I know I will be able to quickly sort things out.

I don't see anything. I assume you mean you put it in the LW edit box and then saved it as a draft? Drafts are private.

comment by StonesOnCanvas · 2012-11-15T20:50:48.733Z · score: 6 (6 votes) · LW · GW

Hi, I'm Bojidar (also known as Bobby). I was introduced to LW by Luke Muehlhauser's blog "Common Sense Atheism" and I've been reading LW ever since he first started writing about it. I am a 25-year-old laboratory technician (and soon-to-be PhD student) at a major cancer research hospital in Buffalo, NY. I've been reading LW for a while, and recently I've really been wishing that Buffalo had a LW group (I've been considering starting one, but I'm a bit concerned that I don't have much experience running groups, nor have I been very active in the online community). A bit about myself: I enjoy reading about rationality, psychology, biology, philosophy, and methods of self-help (or self-optimization). In my spare time I like doing artistic things (oil painting, figure drawing, and making really cool Halloween costumes), gardening, traveling, playing video games (casual MMO gamer & RPG fan), and watching sci-fi and fantasy movies/TV programs. Also, I work out 5 times per week (which, thanks to some awesome self-help advice, has been a whole lot easier to stick with - thanks Luke!). I hope to learn how to play the piano well (I currently just freestyle on occasion or attempt to learn songs I like by watching YouTube Synthesia videos, but I would really like to learn how to read sheet music).

As far as my background in rationality goes, I would have to say that I didn't really grow up in a particularly rational environment. I grew up Christian, but religion wasn't a huge influence on my upbringing. On the other hand, my family (particularly my mom) is really into alternative medicine. I wish I could say it is just a general belief in "healthy eating" coupled with the naturalistic fallacy, but sadly it is not. She is a homeopathic "doctor" (thankfully non-practicing!) and can easily be convinced of even the most biologically implausible remedies (on rare occasions even scaring me by taking or suggesting potentially dangerous treatments). I really fear the possible outcome of these beliefs; given the option between effective chemotherapy and magical sugar pills, she probably won't choose the option that saves her life. (After several failed attempts to improve her rationality and change her mind, I have long abandoned any attempts in hopes of preserving my relationship with my family.)

That being said, for a large portion of my life, I believed many of the same things my parents taught me to believe. Then I went to college as a premed student and was exposed to a lot of new information, which over time made me start to reject those beliefs. Growing up, I was considered to be pretty rational by other people around me (not always in a good way; often it was negatively attached to the claim of being "left-brained" or "not being in touch with my intuitive self"). In retrospect, I was only marginally saner than other people around me, perhaps just sane enough to change my mind given the chance.

P.S. I have not taken any formal logic classes and on occasion might need some terms or symbols clarified (although my boyfriend has, and frequent discussions with him have helped me pick up some of this nomenclature).

comment by avantguard · 2012-11-06T19:42:10.149Z · score: 6 (6 votes) · LW · GW

I'm Rachel Haywire and I love to hate culture. I've been in "the community" for almost 2 years but just registered an account today. I need to read more of the required texts here before saying much but wanted to pop my head out from lurking. I've been having some great conversations on Twitter with a lot of the regulars here.

I organize the annual transhumanist/alt-culture event Extreme Futurist Festival (http://extremefuturistfest.info) and should have my new website up soon. I like to write, argue, and write about arguing. I've also done silly things such as producing industrial music and modeling.

You probably know me as that really loud girl at parties with the tattoos and crazy hair. I'm actually not trying to get attention. I'm just an autist. I am here so I can become a more rational person. I love philosophy and debate but my thinking is not always... correct?

comment by RobertPearson · 2012-11-06T01:26:59.487Z · score: 6 (6 votes) · LW · GW

Hi! I am Robert Pearson: Political professional of the éminence grise variety. Catholic rationalist of the Aquinas variety. Avid chess player, pistol shooter. Admirer of the writings of Ayn Rand and Robert Heinlein. Liberal Arts BA from a small state university campus. I read Overcoming Bias occasionally some years ago, but heard of LessWrong from Leah Libresco.

My real avocation is learning how to be a smarter, better, more efficient, happier human being. Browsing the site for a while convinced me it was a good means to those ends.

I write a column on Thursdays for Grandmaster Nigel Davies' The Chess Improver.

comment by mattwise · 2012-08-02T00:48:57.599Z · score: 6 (6 votes) · LW · GW

Hi,

I was introduced to LW by a friend of mine but I will admit I dismissed it fairly quickly as internet philosophy. I came out to a meetup on a recent trip to visit him and I really enjoyed the caliber of people I met there. It has given me reason to come back and be impressed by this community.

I studied Math and a little bit of Philosophy in undergrad. I'm here mostly to learn, and hopefully to meet some interesting people. I enjoy a good discussion and I especially enjoy having someone change my mind but I lose interest quickly when I realize that the other party has too much ego involved to even consider changing his or her mind.

I look forward to learning from you all!

Matt

comment by Haladdin · 2012-07-25T06:49:20.998Z · score: 6 (6 votes) · LW · GW

Hi, LessWrong,

I used to entertain myself by reading psychology and philosophy articles on Wikipedia and following the subsequent links. When I was really interested in a topic, though, I used Google to find websites that would provide me more information on said topics. Around late 2010, I found that some of my search results led to this very website. Less Wrong proved to be a little too dense for me to enjoy; I needed to fully utilize my cognitive capabilities to even begin to comprehend some of the articles posted here.

Since I was looking for entertainment, I decided to ignore all links to LW for quite a while, but LW results came up in my queries more and more frequently over time. I finally decided to read some of the posts, and some of the articles (determinism, cryonics, and death-related ones) described conclusions I'd derived independently. It was quite shocking, as I thought of myself as a rather unique thinker. Thinking more about this, I came to a conclusion: instead of having a "eureka" moment every couple of months and arriving at the same conclusions people reached centuries ago, I decided to optimize my time - compressing the learning/awakening period by reading the sequences instead of attempting to figure out everything myself.

Funnily enough, now that I've given myself the goal of reading them, I detest reading the same articles that I enjoyed before. I'm sure that the explanation and the solution to this conundrum can be found on this website as well.

Lastly, a note to ciphergoth - I do not identify myself as a rationalist, as the second sentence of this post implies. I found out that labeling myself limits my words, my actions, and, more importantly, my thoughts, so I refuse to label myself by my political ideologies, gender, nationality, etc. I even go by a few different names so I can become more detached from my name itself, as I've found people to be irrationally attached to names even though a name is nothing but an identifying label. I will use rationalist techniques and tools, and I may even grow to adopt your ideologies, but I will not identify myself as a rationalist. At least not until the benefits of applying labels to myself become more concrete.

Nice to meet all of you.

comment by tmosley · 2012-07-21T02:01:12.549Z · score: 6 (6 votes) · LW · GW

So I recently found LessWrong after seeing a link to the Harry Potter fanfiction, and I have been enthralled with the concept of rationalism since. The concepts are not foreign to me, as I am a chemist by training, but the systematization and focus on psychology keep me interested. I am working my way through the sequences now.

As for my biography, I am a 29 year old laboratory manager trained as a chemist. My lab develops and tests antimicrobial materials and drugs based on selenium's oxygen radical producing catalysis. It is rewarding work if you can get it, which you can't, because our group is the only one doing it ;)

Besides my primary field of work, I am generally interested in science, technology, economics, and history.

I am looking at retirement from the 9-5 life in the next year or so, and am interested in learning the methods of rationality, which I feel would allow me to excel in other endeavors in the future. I already find myself linking to articles from here to explain and predict human behavior.

The sheer amount of content here is overwhelming. I don't think I have ever seen a website with a comment section so worth reading. I fear that I could spend the remainder of my life reading and never have the time to DO anything.

In the realm of politics, I would be considered an anarcho-capitalist, though I value any and all positions between there and where the USA's politics currently lie. I am an atheist to the extent that I don't believe in an anthropomorphic god, though reading the "An Alien God" sequence (not quite sure how to post links here yet) certainly made me realize that certain pervasive and extremely powerful processes do exist, so I am reexamining some of my long-held assumptions in that arena.

I spend quite a lot of my time in the online "Fight Club" that is Zerohedge's comment section, so apologies in advance if I come off as sharp in some of my remarks. I prefer appeals to logic and reason as a rule, but sometimes I resort to pathos and personal attack, especially when I feel that I am being personally attacked. This impulse has been greatly curbed by what I have read here, however, and I find that I am able to pierce through inflammatory arguments much more coolly, which I count as a positive result for all involved.

In any event, I generally try not to comment when I feel ill-informed on a subject, but when I think I have something to contribute, I will. I am really enjoying the site so far.

Now, back to reading. So much to read, so little time.

comment by bdbaruah · 2013-01-23T15:05:51.178Z · score: 5 (5 votes) · LW · GW

Aaron's blog brought me here. Sad that he's no longer with us.

I have been thinking for a long time about overcoming biases and putting that into practice in my life. I work as an orthopaedic surgeon in the daytime, and all I see around me is an infinite amount of bias. I can't take it on unless I can understand the biases and apply that understanding to my life processes!

comment by [deleted] · 2012-11-17T05:00:21.207Z · score: 5 (5 votes) · LW · GW

I'm Rev. PhD in mathematics, disabled shut-in crank. I spend a lot of time arguing with LW people on Twitter.

comment by drethelin · 2012-11-17T06:35:19.824Z · score: 1 (1 votes) · LW · GW

Noooooo don't get sucked in

comment by [deleted] · 2012-11-17T06:55:48.020Z · score: 0 (0 votes) · LW · GW

I think it is unlikely.

comment by Rixie · 2012-11-14T02:01:15.119Z · score: 5 (7 votes) · LW · GW

Hi, I'm Rixie, and I read this fan fic called Harry Potter and the Methods of Rationality, by lesswrong, so I decided to check out Lesswrong.com. It is totally different from what I thought it would be, but it's interesting and I like it. And right now I'm reading the post below mine, and wow, my comment sounds all shallow now . . .

comment by Strange7 · 2012-11-14T03:27:08.695Z · score: 1 (1 votes) · LW · GW

What did you think it would be like?

comment by Rixie · 2012-11-29T01:30:40.883Z · score: -1 (1 votes) · LW · GW

I thought it would be more like hpmor.com, but for the author.

Little did I know . . .

comment by [deleted] · 2012-11-14T02:38:55.983Z · score: 1 (1 votes) · LW · GW

Hi Rixie! Don't worry! Lots of people came to LessWrong after reading HPMoR (myself included). I know it can be intimidating here at first, but it's well worth the effort, I think.

You might also be interested in Three Worlds Collide. It's another fiction by the same guy who wrote HPMoR and a bunch of the Sequence posts here.

If you have any questions about anything, feel free to PM me!

comment by Rixie · 2012-11-14T02:11:14.686Z · score: 0 (2 votes) · LW · GW

And, question: What does 0 children mean? It's on the comments which were down-voted a lot and not shown.

comment by Slackson · 2012-11-14T03:29:49.540Z · score: 0 (0 votes) · LW · GW

It means it has 0 replies. The way the comments work is that the one above is the "parent" and the ones below are "children". Sometimes you see people using terminology such as "grandparent" and "great-grandparent" to refer to posts further up the thread.

comment by Nornagest · 2012-11-14T02:30:07.541Z · score: 0 (0 votes) · LW · GW

Means no one replied to the comment. Normally this is implicit in the number of comments nested under it, but since those aren't shown when comments are downvoted below the threshold, the site provides the number of child comments as a convenience.

comment by Nisan · 2012-11-14T02:29:04.667Z · score: 0 (0 votes) · LW · GW

If the downvoted comment had, e.g. 5 total replies to it, it would say "5 children".

comment by CharlieDavies · 2012-11-08T23:56:49.486Z · score: 5 (5 votes) · LW · GW

Hi, Charlie here.

I'm a middle-aged high-school dropout, married with several kids. Also a self-taught computer programmer working in industry for many years.

I have been reading Eliezer's posts since before the split from Overcoming Bias, but until recently only lurked the internet -- I'm shy.

I broke cover recently by joining a barbell forum to solve some technical problems with my low-bar back squat, then stayed to argue about random stuff. Few on the barbell forum argue well -- it's unsatisfying. Setting my sights higher, I now join this forum.

I'll probably start by trying some of the self-improvement schemes and reporting results. Any recommendations re: where to start?

comment by CharlieDavies · 2012-11-09T03:19:58.911Z · score: 1 (1 votes) · LW · GW

Never mind, I found the Group rationality diary which is exactly the right aggregation point for self-improvement schemes.

comment by CAE_Jones · 2012-11-06T14:37:10.660Z · score: 5 (5 votes) · LW · GW

Apologies in advance for the novella. And any spelling errors that I don't catch (I'm typing in notepad, among other excuses).
It's always very nice when I come across something that reminds me that there are not only people in the world who can actually think rationally, but that many of them are way better at it than me.
I don't like mentioning this so early in any introduction, but my vision is terrible to the point of uselessness; I mostly just avoid calling myself "blind" because it internally feels like that would be giving up on the tiny power left in my right eye. I mention it now just because it will probably be relevant by the end of my rambling. (Feel free to skip to the last paragraph if you'd rather avoid all the backstory.)
I'm from northeast Arkansas. My parents were never really religious (I kinda internalized the ambient mythos of "God=good and fluffy cloud heaven, Satan=bad and fire and brimstone hell" just because it seemed to be the accepted way of things among all of my other relatives. Turns out my dad identified himself as a Buddhist after one of our many trips to Disneyworld. ... they.... really like Disney. They have a dog named Disney.). They did emphasize the importance of education and individualism and all of those ideals from the late eighties and nineties that turned out to be counterproductive (though I'm having trouble finding the cracked.com articles that point this out in the most academically sound manner imaginable. (note: the previous statement was sarcastic)). So I tried to learn as much as I could in the general direction of science. Being that this was all done at public schools, and that a whole 0 of the more advanced science books I wanted were available in braille, this didn't get me very far.
I did my last two years of highschool at the Arkansas School of Mathematics and Science (which added "and the arts" when I got there, though before they'd actually added an art program), and somehow graduated without actually doing much science (I did a study of the effects of atmosphere on dreams for the year-and-a-half science project that everyone had to do, but forewent trying to organize an experiment and just wrote a terrible research paper). Then I got to college, and everything went to hell. I'd somehow managed to sneak around learning things like vectors, dot/cross products, and actual lab reports in highschool, and the experiments we did in gen physics never felt like experiments so much as demonstrations ("Behold: gravity still works!"). This is about where it became extremely clear to me that I simply could no longer make myself do things by force of will alone (and it became doubly clear that no one else seemed capable of understanding that I wasn't just "blowing off" everything). It took several semesters after that for me to realize that I had seriously missed out on some basic life things and that I actually needed friends (and that I needed to seriously reevaluate what qualified as friendship). They finally made me pick a new major, seeing as I'd kinda kept away from physics after the first semester ended in disaster. So I took the quickest way out, that being French, and now I'm still living with my parents, have about a dozen essays on Franco-african literature to write, and am about $30,000 in debt (that's only counting the loans in my name; my parents took the rest of the financial burden in their names).
So I mostly try to focus on creative endeavors, such as fiction and video games. Except the lack-of-vision thing makes that harder (I've been focusing on developing audio games for the past couple years, but it's virtually impossible to actually live off the tiny audio games market. Oh, but I could write pages on my observations there, and I rather want to, as I'm sure many of you could make some meaningful observations/analyses on some of those trends.).
... Well crap, I just wrote a few pages without actually getting to anything useful. I have serious need of better rationality skills than I'm currently applying: independence, dealing with emotional/cognitive weirdness, finding ways to actually travel outside of my house (public transportation might as well not exist anywhere but the capital in Arkansas, and good sidewalks are hard to find), social issues, productivity issues, finding ways to get in physical activity, being unemployed with an apparent hiring bias against disabilities, financial ability, etc. The total money that I have to work with is less than $400, so I can't exactly sign up for cryonics or hire a driver to take me places. And this wall-o-text demonstrates my horrible disorganization rather well, I fear. (Hm, is there not a way to preview a comment before one posts it?)

comment by Neurosteel · 2012-11-06T11:27:22.970Z · score: 5 (5 votes) · LW · GW

After having read all of the Sequences, I suppose it's time I actually registered. I did the most recent (Nov 2012) survey. I'm doing my PhD in the genetics of epilepsy (so a neurogenetics background is implied). I'm really interested in branching out into the field of biases and heuristics, especially from a functional imaging and genetics perspective (my training includes EEG, MRI/fMRI, surgical tissue analysis, and all the usual molecular stuff/microarrays).

Experience with grant writing makes me lean more toward starting my own biotech or research firm and going from there, but academics is an acceptable backup plan.

comment by wesley · 2012-11-06T02:49:58.281Z · score: 5 (5 votes) · LW · GW

Hi, my name is Wes(ley), and I'm a lurkaholic.

First, I'd like to thank this community. I think it is responsible in a large way for my transformation (perceived transformation, of course) from a cynical high schooler who was really only motivated enough to use his natural (not hard-won) above-average reasoning skills to troll his peers, into a college kid currently making large positive lifestyle changes and dreaming of making significant positive changes in the world.

I think I have observed significant changes in my thinking patterns since reading the sequences, learning about Bayes, and watching discussions unfold on LessWrong over the last two years or so.

Three examples (and there are many more) of this are:

  1. Noticing more quickly, and more often, when a dispute is about terms and not substance.

  2. Identifying situations in which I or others are trying to "guess the teacher's password" (this has really helped me identify gaps in understanding)

  3. Increased internal dialogue concerning bias (in myself and in others; at first I mostly noticed myself being strongly subject to confirmation bias, and I suspect realizing this has at least a little bias-reducing effect)

Unfortunately, I don't think I have come even close to being able to apply these skills in a place where they would be highly beneficial to others, like a decision making position. That is okay, my belief is that this is something that will come with age, and career advancement.

One of my goals for the next year is to start a LessWrongish student organization at my college campus (Auburn University), which is a traditionally very conservative place. This is partially out of a wholly selfish desire to engage in more stimulating discussions (instead of just spectating, this is also why I am delurking), and partially out of a part selfish desire to create a community at school that fosters instrumental rationality. I think that by posting this goal here, it is at least slightly more likely I will go through with it.

Some of the things I like to do include: race small sailboats, read, play video games, try new foods, explore, learn, smile at people I don't know, play rough with my family's dogs, drive with high acceleration (not necessarily high speeds), travel, talk with people I don't know and will likely never meet again, find a state of flow in work, read comments on CNN political articles (it's a comedy thing), learn about native animal and plant species, catch critters, listen to big band music, find humor in unusual places, laugh at myself, fantasize about getting superpowers, and lab benchwork.

Some of the things I don't like to do include: get to know new people (I like knowing people though), spend time on social networking sites (I don't have a Facebook or Twitter), have text conversations, dress formally (ties? why do we need to cling to those?), "jump through hoops" (e.g. make sure to attend 5 events for this class, suck up to professor x for a good recc, make sure to put x on your resume), engage in politics, talk to people who say things like "it's all relative man," or "I choose to not let my world be bound by logic", clean, binge drink (okay, actually, I don't like being hung over, or the thought of poisoning myself), die to lag, perceive assignment of undue credit.

Currently I am taking a semester off from studying cell and molecular biology, and volunteering as a research student in a solid tumor immunology lab. I think long-term I would like to get involved with research on the molecular basis of aging, or applied research related to life extension.

comment by blacktrance · 2012-11-05T21:39:16.412Z · score: 5 (5 votes) · LW · GW

Long-time lurker, first-time poster. I'm 21, male, and a college student majoring in economics and minoring in CS. I first heard of Eliezer Yudkowsky when a couple of my friends discovered Harry Potter and the Methods of Rationality two years ago. I started reading it and enjoyed it immensely at first, but as the plot eclipsed what I'd call the "cool tricks", I became less interested and dropped it. More recently, a different friend linked me to Intellectual Hipsters. After reading it, I read several sequences and was hooked.

My journey to rationality was started by my parents (both of whom are atheists with degrees in STEM fields). I was provided with numerous science books as a child, and I was taught the basics of the scientific method, as well as encouraged to think analytically in general. They also introduced me to science fiction. I grew up in a heavily religious part of the US, so I frequently had to defend my beliefs. Then I discovered what people call "arguing on the Internet", which I found I enjoy. That caused me to refine and develop my beliefs.

My current beliefs. I'm a quasi-Objectivist (in the Ayn Rand sense), though politically I'm a classical liberal (pragmatic libertarian). I'm not particularly interested in AI or cryonics (though I support transhumanism). I'm a compatibilist (free will and determinism are not mutually exclusive). I think technological and scientific progress will continue to reduce limitations on humans, and that's a good thing.

comment by Cinnia · 2012-11-05T21:24:03.143Z · score: 5 (5 votes) · LW · GW

Hi, I’m Cinnia, the name I go by on the net these days. I found my way here by way of both HPMOR and Luminosity about 8 months ago, but never registered an account until the survey.

Like Alan, I’m also in my final year of secondary school, though I’m on the other side of the pond. I love science and math and plan to have a career in neuroscience and/or psychiatry after I graduate. This year I finally decided to branch out my interests a bit and joined the local robotics club (a part of FIRST, if anyone’s curious), and it’s possibly the best extracurricular I’ve ever tried.

I’ve noticed that there aren’t many virtual communities that manage to hold my interest for long, due to a number of different reasons, but I’ve been lurking around LessWrong for about 8 months now and find it incredibly enlightening. I am (very) slowly working my way through the Sequences and some of the top articles here, but have finished Eliezer’s “Three Worlds Collide” and Alicorn’s original posts on Luminosity.

I’m still very much in the process of learning and trying to understand many of the concepts LessWrong explores, so I’m not sure how often I’ll be contributing. However, I do have some understanding of Riso and Hudson’s Enneagram and Spiral Dynamics, so I suppose there’s some groundwork that I can build from in the future.

Anyway, I like LessWrong’s mission and am happy to have finally joined the community.

Edited to clarify: Spiral Dynamics is an entirely separate psychological theory from the Enneagram, in case it wasn't clear.

comment by Bugmaster · 2012-11-05T22:21:35.855Z · score: 0 (0 votes) · LW · GW

What are "Riso and Hudson’s Enneagram and Spiral Dynamics", out of curiosity ? I Googled the terms, but didn't see anything that I could immediately relate to Less Wrong, hence my curiosity.

comment by Cinnia · 2012-11-05T22:47:47.111Z · score: 1 (1 votes) · LW · GW

My apologies for not making it clearer. The Enneagram and Spiral Dynamics are two entirely separate subjects, though both related to psychology. At least one other user here knows about the Enneagram (Mercurial, I think), though I'm not sure if anyone knows about the Spiral. The Enneagram is a model for human personality types and the Spiral is a theory of evolutionary psychology.

Personally, the way I've learned the Enneagram is from this book, with help from another person who is far more knowledgeable than I am. That same person helped me to understand the Spiral and didn't teach me with books, so I'm afraid I can't refer you to any particular resources, though I assure you there's plenty out there. Don Beck, who wrote a book on it in the late nineties, is the name that usually comes up whenever people talk about it, though.

comment by Bugmaster · 2012-11-05T22:49:21.837Z · score: 0 (0 votes) · LW · GW

Thanks for the info !

comment by Alicorn · 2012-11-05T22:09:24.065Z · score: 0 (0 votes) · LW · GW

Welcome! I like it when people come here by way of my stuff :)

comment by Cinnia · 2012-11-06T14:06:08.491Z · score: 0 (0 votes) · LW · GW

Thanks! Reading Luminosity and Radiance helped me move on from most of the disgust and anger I harbored toward the original series, and after reading the other posts on luminosity, I'm starting to observe and monitor my thoughts and actions more often.

comment by alanog · 2012-11-04T15:20:13.157Z · score: 5 (5 votes) · LW · GW

Hi, I'm Alan, a student in my final year of secondary school in London, England. For some reason I'm finding it hard to remember how and when I stumbled upon Less Wrong. It was probably in March or April this year, and I think it was because Julia Galef mentioned it at some point, though I may be misremembering.

Anyway, I've now read large chunks of the Sequences (though I can never remember which bits exactly) and HPMOR, and enjoy reading all the discussion that goes on here. I've never registered as a user before as I've never felt the burning need to comment on anything, but thought I should take the survey as I seemed part of its intended audience, so maybe I'll find things to say now.

I only study maths and science subjects in school, and am planning to study for a science degree when I head off to university next year. However, I tend to hang out more with the philosophically inclined people in school, and have had much fun introducing and debating Newcomb's problem, prisoners' dilemmas, torture vs dust specks, transhumanism and the like with them.

LessWrong is definitely one of those things I regret not finding out about earlier. It's my favourite website now, although I should probably stop using it as a place to procrastinate so much.

comment by lucb1e · 2012-10-17T15:31:31.702Z · score: 5 (5 votes) · LW · GW

Hello everyone, I'm Luc, better known on the web as lucb1e. (I prefer not to advertise my last name for privacy reasons.) I'm currently a 19 year old student, doing application development in Eindhoven, The Netherlands.

Like Aaron Swartz, I meant to post in discussion but don't have enough karma. I've been reading articles from time to time for years now, so I think I have an okay idea what fits on this site.

I think I ended up on LessWrong originally via Eliezer's NPC story. After reading that I looked around on the site, read about the AIBox experiment (which I later conducted myself), and eventually found LessWrong. This was probably about three or four years ago. During this time I've read some articles, sometimes being linked here and sometimes coming here by myself. I'm a bit hesitant to participate in the community because it seems quite out of my league; everybody knows a ton about rationality whereas I've only read some bits and pieces. I think I have an okay idea of what is appropriate to post, though, and also especially where I should not try to post :)

comment by [deleted] · 2012-10-07T00:16:42.073Z · score: 5 (5 votes) · LW · GW

Well, I haven't really figured out what you all need to know about me, but I suppose there must be something relevant. Let's start with why I'm here.

I can remember being introduced to Less Wrong in two ways, though I don't know in what order. One was through HPMoR, and the other through a post about Newcomb's problem. Neither of those really brought me here in a direct way, though. I guess I am here based on the cumulative sum of recommendations and mentions of LW made by people in my social circle, combined with a desire for new reading material that falls between SF/fantasy novels and statistics textbooks in the concentration it demands. So, since I want stuff to read, preferably lots of it, I am starting with the Sequences.

I think the next-most-relevant information here is what fields I am knowledgeable (or not) about. My single area of greatest expertise is pure mathematics; I dropped out of grad school most of the way (I was told by people who should know) to a PhD with a thesis in algebraic topology, and am now a math tutor at the high school and college levels. I have a big gap in my useful math knowledge around statistics, though, which I am now working to fill. Hence the textbooks. I also know more than the average person about archaic household chores like canning and sewing.

comment by [deleted] · 2012-09-30T23:11:29.897Z · score: 5 (9 votes) · LW · GW

I'm new on Less Wrong and I want to solve P vs. NP.

comment by shminux · 2012-10-01T03:36:12.492Z · score: 5 (5 votes) · LW · GW

One of my main goals right now is to solve P vs. NP.

Consider partitioning into smaller steps. For example, getting a PhD in math or theoretical comp sci is a must before you can hope to tackle something like that. Well, actually before you can even evaluate whether you really want to. While you seem to be on your way there, you clearly under-appreciate how deep this problem is. Maybe consider asking for a chat with someone like Scott Aaronson.

comment by [deleted] · 2012-10-01T14:24:12.753Z · score: 1 (1 votes) · LW · GW

You clearly under-appreciate how deep this problem is.

Yes, I do.

comment by shminux · 2012-10-01T14:59:23.150Z · score: 5 (5 votes) · LW · GW

After that, will it be a difficult, but possible, problem?

Do the math yourself to calculate your odds. Only one of the 7 Millennium Prize Problems has been solved so far, and that by a person widely considered a math genius since his high-school days at one of the best math-oriented schools in Russia and possibly the world at the time. And he was lucky that most of the scaffolding for the Poincaré conjecture happened to be in place already.

So, your odds are pretty bad, and if you don't set a smaller sub-goal, you will likely end up burned out and disappointed. Or worse, come up with a broken proof and bitterly defend it against others "who don't understand the math as well as you do" till your dying days. It's been known to happen.

Sorry to rain on your parade.

comment by TimS · 2012-10-01T14:56:19.575Z · score: 3 (3 votes) · LW · GW

My sense is that you are underestimating the number of extremely smart mathematicians who have been attacking P vs. NP. And further, you are not yet in a position to accurately estimate your chances.

For example, PhDs in math OR comp. sci. != PhDs in math AND comp. sci. The latter is more impressive because it is much, much harder.

If you find theoretical math interesting, by all means pursue it as far as you can - but I wouldn't advise a person to attend law school unless they wanted to be a lawyer. And I wouldn't advise you to enroll in a graduate mathematics program if you wouldn't be happy in that career unless you worked on P vs. NP.

comment by [deleted] · 2012-10-01T18:52:52.135Z · score: 1 (1 votes) · LW · GW

You are underestimating the number of extremely smart mathematicians who have been attacking the problem. And further, you are not yet in a position to accurately estimate your chances.

I was definitely engaging in motivated cognition.

comment by TimS · 2012-10-01T19:13:33.332Z · score: 0 (0 votes) · LW · GW

How many?

If your father has a PhD in Comp.Sci., he's more likely to know than a lawyer like myself.

That said, the Wikipedia article has 38 footnotes (~3/4 appear to be research papers) and 7 further readings. I estimate that at least 10x as many papers could have been cited. Conservatively, that's 300 papers. With multiple authors, that's at least 500 mathematicians who have written something relevant to P vs. NP.

Adjust downward because relevant != proof, adjust upward because the estimate was deliberately conservative - but how much to move in each direction is not clear.
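The back-of-envelope count above can be sketched in a few lines (a Fermi estimate reusing the comment's own assumptions: 38 footnotes, ~3/4 research papers, a 10x citation multiplier; the average author count per paper is a hypothetical figure, not from the comment):

```python
# Fermi estimate, not data: every number here is an assumption.
cited_papers = 38 * 3 // 4      # footnotes that look like research papers (~28)
papers = cited_papers * 10      # assume 10x as many papers could have been cited
authors_per_paper = 1.75        # hypothetical average for co-authored work
mathematicians = papers * authors_per_paper

print(papers, round(mathematicians))
```

With these inputs the sketch lands near the comment's "conservatively 300 papers" and "at least 500 mathematicians"; the point is the order of magnitude, not the exact figures.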

comment by [deleted] · 2012-10-01T21:04:32.599Z · score: 0 (0 votes) · LW · GW

The Millennium Prize would be a nice way to simultaneously fund my cryopreservation and increase my prestige. Will I get it? No.

comment by shminux · 2012-10-01T21:52:03.586Z · score: 7 (7 votes) · LW · GW

The Millennium Prize would be a nice way to simultaneously fund my cryopreservation and increase my prestige. I clearly need a backup plan, though, and I don't have one. Will someone with a BS in mathematics and computer science be able to find a good job? Where should I look?

Sorry to put it bluntly, but this sounds incredibly naive. One cannot plan on winning the Millennium Prize any more than one can plan on winning a lottery. So, it's not an instrumentally useful approach to funding your cryo. The latter only requires a modest monthly income, something that you will in all likelihood have regardless of your job description.

As for the jobs for CS graduates, there are tons and tons of those in the industry. For example, the computer security job market is very hot and requires the best and the brightest (on both sides of the fence).

comment by TimS · 2012-10-02T13:31:58.306Z · score: 1 (1 votes) · LW · GW

In addition to what shminux said (and which I fully endorse), I think you sell your father short. He doesn't just teach, he does research. Even if he's stopped doing that because he has tenure, he still helps peer-review papers. Even if he's at a community college and does no research or peer-review, he still probably knows what was cutting edge 10 to 15 years ago (which is much more than you or I).

Regarding actual career advice, I think there are three relevant skills:

  • Math skill
  • Writing skill
  • Social skill

Having all three at good levels is much better than having only one at excellent levels. Developing them requires lots of practice - but that's true of all skills.

At college, I recommend taking as much statistics as you can tolerate. Also, take enough history that you can identify something specific taught to you as fact in high school that was actually false or insufficiently nuanced - but not something that you currently think is false.

In terms of picking majors, it's probably too early to tell - if you pick a school with a strong science program, you'll figure out the rest later. Pick courses by balancing your interest with your perception of how useful the course will be (keeping in mind that most courses are useless in real life). Topic is much less important than quality of the professor. In fact, forming good relationships with specific professors is more valuable than just about any "facts" you get from particular classes - you'll have to figure out who is a good potential mentor, but a good mentor can answer the very important questions you are asking much more effectively than a bunch of random strangers on the Internet.

Good luck.

comment by Mitchell_Porter · 2012-10-01T02:16:44.981Z · score: 2 (2 votes) · LW · GW

Mulmuley's geometric complexity theory is still where I would start. It's based on continuum mathematics, but extending it to boolean objects is the ultimate goal. A statement of P!=NP in GCT language can be seen as Conjecture 7.10 here. (Also downloadable from Mulmuley's homepage, see "Geometric complexity theory I".)

comment by EvelynM · 2012-10-01T03:13:57.549Z · score: 0 (2 votes) · LW · GW

Welcome!

A fresh perspective on hard problems is always valuable.

Getting the skills to be able to solve hard problems is even more valuable.

comment by beoShaffer · 2012-10-01T02:31:25.110Z · score: 0 (0 votes) · LW · GW

Hi, Jimmy. Welcome to Less Wrong. Unfortunately I don't have much advice on P vs. NP. On doing the impossible is kinda related, but not too close.

comment by [deleted] · 2013-03-03T15:28:21.144Z · score: 0 (0 votes) · LW · GW

Do you mean this guy? That's not me. I'm the anonymous one.

comment by aotell · 2012-09-20T13:37:38.994Z · score: 5 (5 votes) · LW · GW

Hi everyone!

I'm a theoretical physicist from Germany. My work is mostly about the foundations of quantum theory, but also information theory and non-commutative geometry. Currently I'm working as head of research in a private company.

As a physicist I have been confronted with all sorts of (semi-) esoteric views about quantum theory and its interpretation, and my own lack of a better understanding got me started to explore the fundamental questions related to understanding quantum theory on a rational basis. I believe that all mainstream interpretations have issues and that the real answer is a rigorous theory of quantum measurement. On my blog at http://aquantumoftheory.wordpress.com I argue that quantum theory does not have to be interpreted and I propose a rational alternative to interpretation. This is also the main reason I came here, to discuss my results with other rationalists to see if they are indeed satisfying. So your feedback is very welcome!

Other interests of mine include cognitive psychology, music (both active and passive), cooking and photography. Science in general and the philosophy of science, at least the more rational parts, are also interests of mine.

comment by NancyLebovitz · 2012-09-20T14:36:10.707Z · score: 1 (1 votes) · LW · GW

Welcome to Less Wrong!

I'm interested in your idea that quantum theory doesn't have to be interpreted.

comment by aotell · 2012-09-20T14:57:46.314Z · score: 0 (0 votes) · LW · GW

Thanks Nancy!

Have you checked out the posts at my blog? I don't know about your background, but maybe you will find them helpful. If you would like to have a more accessible break down then I can write something here too. In any case, thank you for your interest, highly appreciated!

comment by Mitchell_Porter · 2012-09-20T15:45:34.582Z · score: 0 (0 votes) · LW · GW

From your blog and your paper, your idea seems to be that the quantum state of the universe is a superposition, but only one branch at a time is ever real, and the selection of which branch will become real at a branching is nondeterministic. Well, Bohmian mechanics gets criticised for having ghost wavepackets in its pilot wave - why are they less real than the wavepackets which happen to be guiding the classical system - and you must be vulnerable to the same criticism. Why aren't the non-dominant branches (page 11) just as real as the dominant branch?

comment by aotell · 2012-09-20T16:06:38.011Z · score: 0 (0 votes) · LW · GW

Thank you for your feedback Mitchell,

I'm afraid you have not understood the paper correctly. First, whether a system is in a superposition depends on the basis you use to expand it; it's not a physical property but one of description. The mechanism of branching is actually derived, and it doesn't come from superpositions but from eigenstates of the tensor factor space description that an observer is unable to reconstruct. The branching is also perfectly deterministic. I think your best option to understand how the dominance of one branch and the non-reality of the others emerges from the internal observation of unitary evolution is to work through my blog posts. I try to explain precisely where everything comes from and why it has to follow. The blog is also more comprehensible than the paper, which I will have to revise at some point. So please see if you can make more sense of it from my blog, and let me know whether you can understand what I'm trying to say there. Unfortunately the precise argument is too long to present here in all detail.

comment by aotell · 2012-09-20T18:04:18.257Z · score: 0 (0 votes) · LW · GW

I think it will be helpful if I briefly describe what my approach to understanding quantum theory is, so that you can put my statements in the correct context. I assume a minimal set of postulates, namely that the universe has a quantum state and that this state evolves unitarily, generated by the strictly local interactions. The usual state space is assumed. Specifically, there is no measurement postulate or any other postulates about probability measures or anything like that. Then I go on to define an observer as a mechanism within the quantum universe that is realized locally and gathers information about the universe by interacting with it. With this setup I am able to show that an observer is unable to reconstruct the (objective) density operator of a subsystem that he is part of himself. Instead he is limited to finding the eigenvector belonging to the greatest eigenvalue of this density operator. It is then shown that the measurement postulate follows as the observer's description of the universe, specifically for certain processes that evolve the density operator in a way that changes the order of the eigensubspaces sorted by their corresponding eigenvalues. That is really all. There are no extra assumptions whatsoever. So if the derivation is correct then the measurement postulate is already contained in the unitary structure (and the light cone structure) of quantum theory.
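For readers unfamiliar with the jargon, the one easily computable step in that setup is "the eigenvector belonging to the greatest eigenvalue of a density operator." Here is a minimal numerical illustration of just that operation (a toy 2x2 state made up for this sketch; it is not aotell's derivation, only the linear algebra it refers to):

```python
import numpy as np

# Toy density operator: a mixed state, 80% |0><0| plus 20% |+><+|.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.8 * np.outer([1.0, 0.0], [1.0, 0.0]) + 0.2 * np.outer(plus, plus)

# eigh returns eigenvalues in ascending order for Hermitian matrices,
# so the last column is the eigenvector of the greatest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(rho)
dominant = eigvecs[:, -1]

print(eigvals[-1], dominant)
```

In the claim above, this dominant eigenvector is all an internal observer can reconstruct about the subsystem's state; the smaller-eigenvalue branches are the "non-dominant" ones discussed in the parent comments.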

comment by Mitchell_Porter · 2012-09-20T21:33:58.238Z · score: 2 (2 votes) · LW · GW

As you would know, the arxiv sees several papers every month claiming to have finally ex