Welcome to Less Wrong! (July 2012)

post by Paul Crowley (ciphergoth) · 2012-07-18T17:24:51.381Z · LW · GW · Legacy · 850 comments

Contents

  A few notes about the site mechanics
  A few notes about the community
  A list of some posts that are pretty awesome

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

(This is the fourth incarnation of the welcome thread, the first three of which now have too many comments. The text is by orthonormal from an original by MBlume.)

A few notes about the site mechanics

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).
You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.
However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning - not just that they disagree with you! If you've any questions about karma or voting, please feel free to ask here.
Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.
It's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.
Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.
EXTRA FEATURES:
There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.
LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.
If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)
If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma - honestly, you don't know what you don't know about the community norms here.)
If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page.
There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address. 
Normal_Anomaly 
Randaly 
shokwave 
Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site.

850 comments

Comments sorted by top scores.

comment by OnTheOtherHandle · 2012-07-19T07:01:10.246Z · LW(p) · GW(p)

Hello!

  • Age: Years since 1995
  • Gender: Female
  • Occupation: Student

I actually started an account two years ago, but after a few comments I decided I wasn't emotionally or intellectually ready for active membership. I was confused and hurt for various reasons that weren't Less Wrong's fault, and I backed away to avoid saying something I might regret. I didn't want to put undue pressure on myself to respond to topics I didn't fully understand. Now, after many thousands of hours reading and thinking about neurology, evolutionary psychology, and math, I'm more confident that I won't just be swept up in the half-understood arguments of people much smarter than I am. :)

Like almost everyone here, I started with atheism. I was raised Hindu, and my home has the sort of vague religiosity that is arguably the most common form in the modern world. For the most part, I figured out atheism on my own, when I was around 11 or 12. It was emotionally painful and socially costly, but I'm stronger for the experience. I started reading various mediocre atheist blogs, but I got bored after a couple of years and wanted to do something more than shoot blind fish in tiny barrels. I wanted to build something up, not just tear something down (no matter how much it really should be torn down).

The actual direct link to Less Wrong came from TV Tropes. I suspect it's one of the best gateway drugs because TV Tropes, while not explicitly atheist or rationalist, does more to communicate the positive ideals and emotional memes of LW-style rationality than most of the atheosphere does. For the first time, I got the sense that "our" way of thinking could be so much more powerful than simply bashing religion and astrology.

One important truth beyond atheism that I have slowly come to accept is inborn IQ differentials, between individuals and groups of individuals. I had to face the fact that P(male | IQ 2 standard deviations above the mean) was significantly higher than 50%. I had to deal with the fact that historical oppression probably wasn't the end-all be-all explanation for why women on average hadn't done as much inventing and discovering and brilliant thinking as men. I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

I further learned that my brain was modular, and the bits of me that I choose to call "I" don't constitute everything. My own brain could sabotage the values and ideals that "I" hold so dearly. For a long time I struggled with the idea that everything I believed in and loved was fake, because I couldn't force my body to actually act accordingly. Did I value human life? Why wasn't I doing everything I possibly could to save lives, all the time? Did I value freedom and autonomy and gender equality? Why could I not help sometimes being attracted to domineering jerks?

It took me a while to accept that the newly-evolved, conscious, abstractly-reasoning, self-reflecting "I" simply did not have the firepower to bully ancient and powerful urges into submission. It took me a while to accept that my values were not lies simply because my monkey brain sometimes contradicted them. The "I" in my brain does not have as much power as she would like; that does not mean she doesn't exist.

Other, non-rationality related information: I love writing, and for a long time I convinced myself that therefore I would love being a novelist. Now, I recognize that I would much rather compose a non-fiction or reflective essay, although ideas for fiction stories still flood in and I rarely do much about them due to laziness and/or fear. I fell in love with Avatar: The Last Airbender for its great storytelling and its combination of intelligence and idealism. I adore Pixar and many Disney movies for the sweetness and heart. I like somewhat traditional-sounding music with easily discernible lyrics that tells a story; I can't get into anything that involves screaming or deliberate disharmony. Show-tunes are great. :)

I don't want to lose the hope/idealism/inner happiness that makes me able to un-ironically enjoy Disney and Pixar and Avatar; I consciously cultivate it and am lucky to have it. If this disposition will be "destroyed by the truth"...well, I have a choice to make then.

Replies from: Swimmer963, shminux, GLaDOS, Xachariah, iceman, hankx7787, Solvent, Jayson_Virissimo, MBlume, RobertLumley
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-07-19T09:03:36.889Z · LW(p) · GW(p)

Welcome to Less Wrong, and I for one am glad to have you here (again)! You sound like someone who thinks very interesting thoughts.

I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

I can't say that this is something that has ever really bothered me. Your IQ is what it is. Whether or not there's an overall gender-based trend in one direction or another isn't going to change anything for you, although it might change how people see you. (If anything, I found that I got more attention as a "girl who was good at/interested in science"...which was irritating and made me want to rebel and go into a "traditionally female" field just because I could.)

Basically, if you want to accomplish greatness, it's about you as an individual. Unless you care about the greatness of others, and feel more pride or solidarity with females than with males who accomplish greatness (which I don't), the statistical tendency doesn't matter.

I don't want to lose the hope/idealism/inner happiness that makes me able to un-ironically enjoy Disney and Pixar and Avatar; I consciously cultivate it and am lucky to have it. If this disposition will be "destroyed by the truth"...well, I have a choice to make then.

I think that more than idealism, what I wouldn't want to lose is a sense of humour. Idealism, in the sense of "believing that the world is good deep down/people will do the best they can/etc", can be broken by enough bad stuff happening. A sense of humour is a lot harder to break.

Replies from: OnTheOtherHandle, Jayson_Virissimo, Rubix
comment by OnTheOtherHandle · 2012-07-19T16:58:33.495Z · LW(p) · GW(p)

I know that it's not particularly rational to feel more affiliation with women than men, but I do. It's one of the things my monkey brain does that I decided to just acknowledge rather than constantly fight. It's helped me have a certain kind of peace about average IQ differentials. The pain I described in the parent has mellowed. Still, I have to face the fact that if I want to major in, say, applied math, chances are I might be lonely or below-average or both. I wish I had the inner confidence to care about self-improvement more than competition, but as yet I don't.

ETA: I characterize "idealism" as a hope for the future more than a belief about the present.

Replies from: Viliam_Bur, ViEtArmis
comment by Viliam_Bur · 2012-07-19T21:11:20.483Z · LW(p) · GW(p)

Still, I have to face the fact that if I want to major in, say, applied math, chances are I might be lonely or below-average or both.

As long as you know your own skills, there is no need to use your gender as a predictor. We use the worse information only in the absence of better information, because the worse information can still be better than nothing. We don't need to predict the information we already have.

When we already know that e.g. "this woman has IQ 150", or "this woman has won a mathematical olympiad" there is no need to mix general male and female IQ or math curves into the equation. (That's only what you do when you see a random woman and you have no other information.)

If there are hundred green balls in the basket and one red ball, it makes sense to predict that a randomly picked ball will be almost surely green. But once you have randomly picked a ball and it happened to be red... then it no longer makes sense to worry that this specific ball might still be green somehow. It's not; end of story.

If you had no experience with math yet, then I'd say that based on your gender, your chances to be a math genius are small. But that's not the situation; you already had some math experience. So make your guesses based on that experience. Your gender is already included in the probability of you having that specific experience. Don't count it twice!
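
To make "don't count it twice" concrete, here is a toy Monte Carlo sketch in Python. Every number in it is invented for illustration (equal means, a slightly higher male variance); the point is the structure of the update, not the figures:

```python
# Toy simulation: once you condition on direct evidence (a test score),
# the gender prior is already "baked in" and adds little on top.
import random

random.seed(0)
N = 1_000_000
people = []
for _ in range(N):
    male = random.random() < 0.5
    # Made-up distribution: equal means, slightly higher male variance.
    iq = random.gauss(100, 16 if male else 14)
    people.append((male, iq))

def frac_above(group, cutoff):
    """Fraction of `group` whose simulated IQ exceeds `cutoff`."""
    return sum(iq > cutoff for _, iq in group) / max(len(group), 1)

women = [p for p in people if not p[0]]
men = [p for p in people if p[0]]

# Knowing only gender, "145+" is rare for everyone, rarer for women:
print(frac_above(women, 145), frac_above(men, 145))

# Now condition on direct evidence: this person already scored above 135.
women_135 = [p for p in women if p[1] > 135]
men_135 = [p for p in men if p[1] > 135]
print(frac_above(women_135, 145), frac_above(men_135, 145))
# The gap shrinks sharply: the score itself carries most of the
# information, so stacking the gender prior on top counts it twice.
```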

Replies from: Bugmaster, Rubix
comment by Bugmaster · 2012-07-26T22:49:54.479Z · LW(p) · GW(p)

If you had no experience with math yet, then I'd say that based on your gender, your chances to be a math genius are small.

To be perfectly accurate, any person's chances of being a math genius are going to be small anyway, regardless of that person's gender. There are very few geniuses in the world.

comment by ViEtArmis · 2012-07-19T17:37:55.164Z · LW(p) · GW(p)

It is particularly not rational to ignore the effect of your unconscious on your relationships. That fight is a losing battle (right now), so if having happy relationships is a goal, pursuing it requires paying attention to those urges.

There is almost no average IQ differential, since men pad out the bottom as well. Greater genetic variation in men leads to stupidity as often as to brilliance.

Really, this gender disparity only matters at far extremes. Men may pad out the top and bottom 1% (or something like that) in IQ, but applied mathematicians aren't all top 1% (or even 10%, in my experience). It is easy to mistake finally being around people who think like you do (as in high IQ) for being less intelligent than them, but this is a trick!

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-19T19:17:45.573Z · LW(p) · GW(p)

There is almost no average IQ differential, since men pad out the bottom as well.

Sorry, you're right, I did know that. (And it's exasperating to see highly intelligent men make the rookie mistake of saying "women are stupid" or "most women are stupid" because they happen to be high-IQ. There's an obvious selection bias - intelligent men probably have intelligent male friends but only average female acquaintances - because they seek out the women for sex, not conversation.)

I was thinking about "IQ differentials" in the very broad sense, as in "it sucks that anyone is screwed over before they even start." I also suffer from selection bias, because I seek out people in general for intelligence, so I see the men to the right of the bell curve, while I just sort of abstractly "know" there are more men than women to the left, too.

Replies from: philh
comment by philh · 2012-07-19T22:22:00.302Z · LW(p) · GW(p)

And it's exasperating to see highly intelligent men make the rookie mistake of saying "women are stupid" or "most women are stupid" because they happen to be high-IQ. There's an obvious selection bias - intelligent men probably have intelligent male friends but only average female acquaintances - because they seek out the women for sex, not conversation.

Another possible explanation comes to mind: people with high IQs consider the "stupid" borderline to be significantly above 100 IQ. Then if they associate equally with men and women, the women will more often be stupid; and if they associate preferentially with clever people, there will be fewer women.

(This doesn't contradict selection bias. Both effects could be at play.)

Replies from: ViEtArmis, OnTheOtherHandle
comment by ViEtArmis · 2012-07-20T14:37:58.135Z · LW(p) · GW(p)

You'd have to raise the bar really far before any actual gender-based differences showed up. It seems far more likely that the cause is a cultural bias against intellectualism in women (women will under-report IQ by 5ish points and men over-report by a similar margin, women are poorly represented in "smart" jobs, etc.). That makes women present themselves as less intelligent and makes everyone perceive them as less intelligent.

Replies from: juliawise, Desrtopa
comment by juliawise · 2012-07-20T15:46:57.574Z · LW(p) · GW(p)

Does anyone know of a good graph that shows this? I've seen several (none citing sources) that draw the crossover in quite different places. So I'm not sure what the gender ratio is at, say, IQ 130.

Replies from: Vaniver, ViEtArmis
comment by Vaniver · 2012-07-20T16:26:33.114Z · LW(p) · GW(p)

La Griffe Du Lion has good work on this, but it's limited to math ability, where the male mean is higher than the female mean and the male variance is higher than the female variance.

The formulas from the first link work for whatever mean and variance you want to use, and so can be updated with more applicable IQ figures, and you can see how an additional 10-point 'reporting gap' affects things.
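
For anyone who'd rather play with the idea than dig through the original articles, here is a minimal sketch of that tail-ratio calculation. The means and SDs below are placeholder assumptions, not figures taken from the links:

```python
# Ratio of men to women above a cutoff, for normal distributions
# with whatever means and SDs you choose to plug in.
import math

def upper_tail(x, mean, sd):
    """P(X > x) for a normal distribution with the given mean and SD."""
    return 0.5 * math.erfc((x - mean) / (sd * math.sqrt(2)))

def male_female_ratio(cutoff, male=(100.0, 15.5), female=(100.0, 14.5)):
    return upper_tail(cutoff, *male) / upper_tail(cutoff, *female)

for cutoff in (115, 130, 145):
    print(cutoff, round(male_female_ratio(cutoff), 2))
# Even with equal means, a modest variance gap makes the ratio grow
# as the cutoff moves further into the tail.

# A 10-point self-reporting gap can be modelled by shifting the means:
print(round(male_female_ratio(130, male=(105.0, 15.5), female=(95.0, 14.5)), 2))
```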

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-21T01:38:51.923Z · LW(p) · GW(p)

Unfortunately, intelligence in areas other than math seems to be an "I know it when I see it" kind of thing. It's much harder to design a good test for some of the "softer" disciplines, like "interpersonal intelligence" or even language skills, and it's much easier to pick a fight with results you don't like.

It could be that because intelligence tests are biased toward easy measurement, they focus too much on math, so they under-predict women's actual performance at most jobs not directly related to abstract math skills.

comment by ViEtArmis · 2012-07-20T17:02:11.156Z · LW(p) · GW(p)

Of course, if you use IQ testing, it is specifically calibrated to remove/minimize gender bias (as are the SAT and ACT), and intelligence testing is horribly fraught with infighting and moving targets.

I can't find any research that doesn't at least mention that social factors likely poison any experimental result. It doesn't help any that "intelligence" is poorly defined and thus difficult to quantify.

Considering that men are more susceptible to critical genetic failure, maybe the mean is higher for men on some tests because the low outliers had defects that made them impossible to test (such as being stillborn)?

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-21T01:40:38.306Z · LW(p) · GW(p)

The SAT doesn't seem to be calibrated to equalize average scores for math, at least. As late as 2006, there was still a significant gender gap.

Replies from: ViEtArmis
comment by ViEtArmis · 2012-07-22T02:33:50.027Z · LW(p) · GW(p)

Apparently, the correction was in the form of altering essay and story questions to de-emphasize sports and business and ask more about arts and humanities. This hasn't been terribly effective. The gap is smaller in the verbal sections, but it's still there. Given that the entire purpose of the test is to predict college grades directly and women do better in college than men, explanations and theories abound.

comment by Desrtopa · 2012-07-22T14:23:41.881Z · LW(p) · GW(p)

Not a rigorously conducted study, but this (third poll) suggests a rather greater tendency to at least overestimate if not willfully over-report IQ, with both men and women overestimating, but men overestimating more.

comment by OnTheOtherHandle · 2012-07-21T01:53:41.629Z · LW(p) · GW(p)

You're right; my explanation was drawn from many PUA-types who had said similar things, but this effect is perfectly possible in non-sexual contexts, too.

There's actually little use in using words like "stupid", anyway. What's the context? How intelligent does this individual need to be to do what they want to do? Calling people "stupid" says "reaching for an easy insult," not "making an objective/instrumentally useful observation."

Sure, there will be some who say they'll use the words they want to use and rail against "censorship", but connotation and denotation are not so separate. That's why I didn't find the various "let's say controversial, unspeakable things because we're brave nonconformists!" threads on this site to be all that helpful. Some comments certainly were both brave and insightful, but I felt on the whole a little bit of insight was brought at the price of a whole lot of useless nastiness.

comment by Jayson_Virissimo · 2012-07-19T09:39:43.598Z · LW(p) · GW(p)

Idealism, in the sense of "believing that the world is good deep down/people will do the best they can/etc", can be broken by enough bad stuff happening. A sense of humour is a lot harder to break.

Arguably, if it was "broken" this way it would be a mistake (specifically, of generalizing from too small a sample size). I have a job where I am constantly confronted with suffering and death, but at the end of the day, I can still laugh just like everyone else, because I know my experience is a biased sample and that there is still lots of good going on in the world.

comment by Rubix · 2012-07-30T17:45:26.902Z · LW(p) · GW(p)

I like this post more than I like most things; you've helped me, for one, with a significant amount of distress.

comment by shminux · 2012-07-19T17:12:46.634Z · LW(p) · GW(p)

I had to face the fact that mere biology may have systematically biased my half of the population against greatness. And it hurt. I had to fight the urge to redefine intelligence and/or greatness to assuage the pain.

Consciously keeping your identity small and thus not identifying with everyone who happens to have the same internal plumbing might be helpful there.

Replies from: OnTheOtherHandle, ViEtArmis
comment by OnTheOtherHandle · 2012-07-19T19:14:04.002Z · LW(p) · GW(p)

PG is awesome, but his ideas do basically fall into the category of "easier said than done." This doesn't mean "not worth doing," of course, but practical techniques would be way more helpful. It's easier to replace one group with another (arguably better?) group than to hold yourself above groupthink in general.

Replies from: shminux
comment by shminux · 2012-07-19T19:43:33.208Z · LW(p) · GW(p)

easier said than done

My approach is to notice when I want to say/write "we", as opposed to "I", and examine why. That's why I don't personally identify as a "LWer" (only as a neutral and factual "forum regular"), despite the potential for warm fuzzies resulting from such an identification.

There is an occasional worthy reason to identify with a specific group, but gender/country/language/race/occupation/sports team are probably not good criteria for such a group.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-19T21:15:26.535Z · LW(p) · GW(p)

Thank you! I'll look for that.

Replies from: shminux
comment by shminux · 2012-07-20T00:04:34.788Z · LW(p) · GW(p)

Here is a typical LW comment that raises the "excessive group identification" red flag for me.

comment by ViEtArmis · 2012-07-19T20:55:53.284Z · LW(p) · GW(p)

I always think of that in the context of conflict resolution, and refer to it as "telling someone that what they did was idiotic, not that they are an idiot." Self-identifying is powerful, and people are pretty bad at it because of a confluence of biases.

comment by GLaDOS · 2012-07-19T13:05:34.236Z · LW(p) · GW(p)

Great to see you here and great to hear you took the time to read up on the relevant material before jumping in. I'm confident you will find that many people who comment quite a bit don't have such prudence, so don't be surprised if you outmatch a long-time commenter. (^_^)

For the first time, I got the sense that "our" way of thinking could be so much more powerful than simply bashing religion and astrology.

Yesss! This is exactly how I felt when I found this community.

comment by Xachariah · 2012-07-20T00:53:10.259Z · LW(p) · GW(p)

I fell in love with Avatar: The Last Airbender for its great storytelling and its combination of intelligence and idealism.

I don't want to lose the hope/idealism/inner happiness that makes me able to un-ironically enjoy Disney and Pixar and Avatar

I'm not sure about Disney, but you should still be able to enjoy Avatar. Avatar (TLA and Korra) is in many ways a deconstruction of magical worlds. They take the basic premise of kung-fu magic and then let it propagate to its logical conclusions. The TLA war was enabled by rapid industrialization, when one nation realized it could harness its thermodynamics-breaking bending for energy. The premise of S1 Korra is exploring social inequality in the presence of randomly distributed magical powers.

In these ways, Avatar is less Harry Potter and more HPMoR.

Replies from: Alicorn, OnTheOtherHandle
comment by Alicorn · 2012-07-20T01:06:56.163Z · LW(p) · GW(p)

randomly distributed magical powers

They run strongly in families (although it's not clear exactly how, since neither of Katara's parents appears to have been a waterbender). It's not really random.

Replies from: Xachariah
comment by Xachariah · 2012-07-20T03:28:32.793Z · LW(p) · GW(p)

You are correct. I wouldn't consider it much different from personality. It's part heritable, part environment and upbringing, and part randomness.

Now you've got me wondering if philosophers in the Avatar universe have debates on whether your element/bending is nature vs nurture.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-20T18:47:07.109Z · LW(p) · GW(p)

Now I want an ATLA fanfic infused with Star Trek-style pensive philosophizing. :D

I would argue that it has even more potential than HP for a rationalist makeover. Aang stays in the iceberg and Sokka saves the planet?

comment by OnTheOtherHandle · 2012-07-20T17:54:45.326Z · LW(p) · GW(p)

Honestly, I was disappointed with the ending of Season 1 Korra: (rot13)

Nnat zntvpnyyl tvirf Xbeen ure oraqvat onpx nsgre Nzba gbbx vg njnl, naq gurer ner ab creznarag pbafrdhraprf gb nalguvat.

I'm not necessarily idealistic enough to be happy with a world that has no consequences or really difficult choices; I'm just not cynical enough to find misanthropy and defeatism cool. That's why children's entertainment appeals to me - while it can be overly sugary-sweet, adult entertainment often seems to be both narrow and shallow, and at the same time cynical. Outside of science fiction, there doesn't seem to be much adult entertainment that's about things I care about - saving the world, doing something big and important and good.

ETA: What Zach Weiner makes fun of here - that's what I'm sick of. Not just misanthropy and undiscriminating cynicism, but glorifying it as the height of intelligence. LessWrong seemed very pleasantly different in that sense.

Replies from: Bugmaster, Nornagest, Xachariah
comment by Bugmaster · 2012-07-26T22:17:17.748Z · LW(p) · GW(p)

I agree; I found the ending very disappointing, as well.

The authors throw one of the characters into a very powerful personal conflict, making it impossible for the character to deny the need for a total accounting and re-evaluation of the character's entire life and identity. The authors resolve this personal conflict about 30 seconds later with a Deus Ex Machina. Bleh.

comment by Nornagest · 2012-07-20T18:32:39.841Z · LW(p) · GW(p)

Are you sure that's rot13? It's generating gibberish in two different decoders for me, although I'm pretty sure I know what you're talking about anyway.

ETA: Yeah, looks like a shift of three characters right.

ETA AGAIN: Fixed now, thanks.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-21T01:21:26.104Z · LW(p) · GW(p)

Sorry, I dumped it into Braingle and forgot to change the setting.
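
For the curious: rot13 is just a Caesar shift of thirteen, which is why applying it twice undoes it, while a shift of three needs a shift of minus three to undo. A quick illustrative sketch in Python; the message below is a stand-in, not the actual spoiler:

```python
# rot13 vs. an arbitrary Caesar shift, e.g. the shift-of-three mixup above.
import codecs

def caesar(text, shift):
    """Shift letters by `shift` places around the alphabet, keeping case."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

msg = "spoiler text goes here"
print(codecs.encode(msg, 'rot13'))  # rot13: encode again to decode
print(caesar(msg, 3))               # shift of 3: decode with caesar(text, -3)
```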

comment by Xachariah · 2012-07-20T22:32:27.143Z · LW(p) · GW(p)

Nnat zntvpnyyl tvirf Xbeen ure oraqvat onpx nsgre Nzba gbbx vg njnl, naq gurer ner ab creznarag pbafrdhraprf gb nalguvat.

V gubhtug vg jnf irel rssrpgvir. Gubhtu irvyrq fb xvqf jba'g pngpu vg, univat gur qnevat gb fubj n znva punenpgre pbagrzcyngvat naq nyzbfg nggrzcgvat fhvpvqr jnf n terng jnl gb pybfr gur nep. Gurer'f nyernql rabhtu 'npgvba' pbafrdhraprf qhr gb gur eribyhgvba, fb vg'f avpr onynapvat bhg univat gur irel raq or gur erfhygvat punatrf gb Xbeen'f punenpgre. Jura fur erwrpgf fhvpvqr nf na bcgvba, fur ernyvmrf gung fur ubyqf vagevafvp inyhr nf n uhzna orvat engure guna nf na Ningne. Cyhf nf bar bs gur ener srznyr yrnqf va puvyqera'f gryrivfvba, gur qenzngvp pyvznk bs gur fgbel orvat gur qr-bowrpgvsvpngvba bs gur srznyr yrnq vf uhtr. Nyfb gur nagv-fhvpvqr zrffntr orvat gung onq thlf pbzzvg zheqre/fhvpvqr naq gur tbbq thlf qba'g vf tbbq gb svavfu jvgu. V'z irel fngvfsvrq jvgu gurz raqvat vg gung jnl.

Znal fubjf raq jvgu jvgu ovt onq orvat orngra. Fubjf gung cergraq gb or zngher unir cebgntbavfgf qvr ng gur raq. Ohg Xbeen'f raqvat vf bar bs gur bayl gung fgevxrf zr nf npghnyyl zngher, orpnhfr vg'f qverpgyl n zbeny/cuvybfbcuvpny ceboyrz ng gur raq.

Replies from: OnTheOtherHandle, Desrtopa
comment by OnTheOtherHandle · 2012-07-21T01:30:17.267Z · LW(p) · GW(p)

Gung'f na vagrerfgvat jnl gb chg vg, naq V guvax V'z unccvre jvgu gur raqvat orpnhfr bs gung. Ubjrire, V jnf rkcrpgvat Frnfba Gjb gb or Xbeen'f wbhearl gbjneq erpbirel (rvgure culfvpny be zragny be obgu) nsgre Nzba gbbx njnl ure oraqvat. Vg'f abg gung V qba'g jnag ure gb or jubyr naq unccl; vg'f whfg gung vg frrzrq gbb rnfl. V gubhtug Nzba/Abngnx naq Gneybpx'f fgbel nep jnf zhpu zber cbjreshy. Va snpg, gurve zheqre/fhvpvqr frrzrq gb unir fb zhpu svanyvgl gung V svtherq vg zhfg or gur raq bs gur rcvfbqr hagvy V ernyvmrq gurer jrer fvk zvahgrf yrsg.

Va bgure jbeqf, vg'f terng gung gur fgbel yraqf vgfrys gb gur vagrecergngvba gung vg jnf nobhg vagevafvp jbegu nf n uhzna orvat qvfgvapg sebz bar'f cbjref, ohg gurl unq n jubyr frnfba yrsg gb npghnyyl rkcyvpvgyl rkcyber gung. Nnat'f wbhearl jnf nobhg yrneavat gb fgbc ehaavat njnl naq npprcg gur snpg gung ur vf va snpg gur Ningne, naq ur pna'g whfg or nal bgure xvq naq sbetrg nobhg uvf cbjre naq erfcbafvovyvgl. Xbeen'f wbhearl jnf gb or nobhg npprcgvat gung whfg orpnhfr fur vf gur Ningne, naq fur ybirf vg naq qrevirf zrnavat sebz vg, qbrfa'g zrna fur'f abguvat zber guna n ebyr gb shysvyy. Vg sryg phg fubeg. Nnat tnir vg gb Xbeen; fur qvqa'g svaq vg sbe urefrys.

comment by Desrtopa · 2012-07-21T01:51:46.610Z · LW(p) · GW(p)

V funerq BaGurBgureUnaqyr'f qvfnccbvagzrag jvgu gur raqvat, naq V jnfa'g irel vzcerffrq jvgu Xbeen'f rzbgvbany erfbyhgvba ng gur raq. Fur uvgf n anqve bs qrcerffvba, frrzvatyl pbagrzcyngrf fhvpvqr, naq gura... rirelguvat fhqqrayl erfbyirf vgfrys. Fur trgf ure oraqvat onpx, jvgubhg nal rssbeg be cynaavat, naq jvgu ab zber fvtavsvpnag punenpgre qrirybczrag guna univat orra erqhprq gb qrfcrengvba. Gur Ovt Onq vf xvyyrq ol fbzrbar ryfr juvyr gur cebgntbavfgf' nggragvba vf ryfrjurer, naq Xbeen tnvaf gur novyvgl gb haqb nyy gur qnzntr ur pnhfrq va gur svefg cynpr. Gur fbpvrgny vffhrf sebz juvpu ur ohvyg uvf onfr bs fhccbeg jrer yrsg hanqqerffrq, ohg jvgubhg n pyrne nirahr gb erfbyir gurz nf n pbagvahngvba bs gur qenzngvp pbasyvpg.

Vs Xbeen unq orra qevira gb qrfcrengvba, naq nf n erfhyg, frnepurq uneqre sbe fbyhgvbaf naq sbhaq bar, V jbhyq unir sbhaq gung n ybg zber fngvfslvat. Gung'f bar bs gur ernfbaf V engr gur raqvat bs Ningne: Gur Ynfg Nveoraqre uvture guna gung bs gur svefg frnfba bs Xbeen. Vg znl unir orra vanqrdhngryl sberfunqbjrq naq orra fbzrguvat bs n Qrhf Rk Znpuvan, ohg ng yrnfg Nnat qrnyg jvgu n fvghngvba jurer ur jnf snprq jvgu bayl hanpprcgnoyr pubvprf ol frrxvat bgure nygreangvirf, svaqvat, naq vzcyrzragvat bar. Ohg Xbeen'f ceboyrzf jrer fbyirq, abg ol frrxvat fbyhgvbaf, ohg ol pbzvat va gbhpu jvgu ure fcvevghny fvqr ol ernpuvat ure rzbgvbany ybj cbvag.

Jung Fcvevg!Nnat fnvq unf erny jbeyq gehgu gb vg. Crbcyr qb graq gb or zber fcvevghny va gurve ybjrfg naq zbfg qrfcrengr pvephzfgnaprf. Ohg engure guna orvat fbzrguvat gb ynhq, V guvax guvf ercerfragf n sbez bs tvivat hc, jurer crbcyr ghea gb gur fhcreangheny sbe fbynpr be ubcr orpnhfr gurl qba'g oryvrir gurl pna fbyir gurve ceboyrzf gurzfryirf. Fb nf erfbyhgvbaf bs punenpgre nepf tb, V gubhtug gung jnf n cerggl onq bar.

Nyy va nyy V jnf n sna bs gur frevrf, ohg gur raqvat haqrefubg zl rkcrpgngvbaf.

comment by iceman · 2012-07-19T22:46:18.553Z · LW(p) · GW(p)

I adore Pixar and many Disney movies for the sweetness and heart.

Have you seen the new My Little Pony show? It's really good. It's sweet without being twee.

comment by hankx7787 · 2012-07-19T10:55:05.324Z · LW(p) · GW(p)

I further learned that my brain was modular, and the bits of me that I choose to call "I" don't constitute everything. My own brain could sabotage the values and ideals that "I" hold so dearly. For a long time I struggled with the idea that everything I believed in and loved was fake, because I couldn't force my body to actually act accordingly. Did I value human life? Why wasn't I doing everything I possibly could to save lives, all the time? Did I value freedom and autonomy and gender equality? Why could I not help sometimes being attracted to domineering jerks?

It took me a while to accept that the newly-evolved, conscious, abstractly-reasoning, self-reflecting "I" simply did not have the firepower to bully ancient and powerful urges into submission. It took me a while to accept that my values were not lies simply because my monkey brain sometimes contradicted them. The "I" in my brain does not have as much power as she would like; that does not mean she doesn't exist.

I've been through this kind of thing before, and Less Wrong did nothing for me in this respect (although Less Wrong is awesome for many other reasons). Reading Ayn Rand on the other hand made all the difference in the world in this respect, and changed my life.

Replies from: OnTheOtherHandle, ViEtArmis
comment by OnTheOtherHandle · 2012-07-19T17:01:22.536Z · LW(p) · GW(p)

I haven't read Ayn Rand, but those who do seem to talk almost exclusively about the politics, and I just can't work up the energy to get too excited about something I have such little chance of affecting. Would you mind telling me where/how Ayn Rand discussed evolutionary psychology or modular minds? I'm curious now. :)

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-19T17:32:22.965Z · LW(p) · GW(p)

She doesn't, is the short answer.

She does discuss, however, the integration of personal values into one's philosophical system. I was struggling with a possibly similar issue; I had previously regarded rationalism as an end in itself. Emotions were just baggage that had to be overcome in order to achieve a truly enlightened state. If this sounds familiar to you, her works may help.

The short version: You're a human being. An ethical system that demands you be anything else is fatally flawed; there is no universal ethical system: what is ethical for a rabbit is not ethical for a wolf. It's necessary for you to live, not as a rabbit, not as a rock, not as a utility or paperclip maximizer, but as a human being. Pain, for example, isn't to be denied - for to do so is as sensible as denying a rock - but experienced as a part of your existence. (That you shouldn't deny pain is not the same as that you should seek it; it is simply a statement that it's a part of what you are.)

Objectivism, the philosophy she founded, is named for the claim that ethics are objective; not subjective, which is to say, whatever you want them to be; not universal, which is to say, a single ethical system in the whole universe that applies equally to rocks, rabbits, mice, and people; but objective, which is to say, existing as a definable property for a given subject, given certain preconditions (ethical axioms; she chose "Life" as her ethical axiom).

Replies from: OnTheOtherHandle, hankx7787
comment by OnTheOtherHandle · 2012-07-19T19:45:38.060Z · LW(p) · GW(p)

I don't know that I would call that "objective." I mean, the laws of physics are objective because they're the same for rabbits and rocks and humans alike.

I honestly don't trust myself to go much more meta than my own moral intuitions. I just try not to harm people without their permission or deceive/manipulate them. Yes, this can and will break down in extreme hypothetical scenarios, but I don't want to insist on an ironclad philosophical system that would cause me to jump to any conclusions on, say, Torture vs. Dust Specks just yet. I suspect that my abstract reasoning will just be nuts.

My understanding of morality is basically that we're humans, and humans need each other, so we worked out ways to help one another out. Our minds were shaped by the same evolutionary processes, so we can agree for the most part. We've always seemed to treat those in our in-group the same way; it's just that those we included in the in-group changed. Slowly, women were added, and people of different races/religions, etc.

Replies from: hankx7787, thomblake
comment by hankx7787 · 2012-07-19T20:25:32.328Z · LW(p) · GW(p)

See this comment regarding this common confusion about 'objective'...

comment by thomblake · 2012-07-19T20:33:08.249Z · LW(p) · GW(p)

I don't know that I would call that "objective."

It's a sticky business, and different ethicists will frame the words in different ways. On one view, objective includes "It's true even if you disagree" and subjective includes "You can make up whatever you want". On another, objective includes "It's the same for everybody" and subjective includes "It's different for different people". The first distinction better matches the usual meaning of 'objective', and the second distinction better matches the usual meaning of 'subjective', so I think the terms were just poorly chosen as different sides of a distinction.

Because of this, my intuition these days is to say that ethics is both subjective and objective, or "subjectively objective" as Eliezer has said about probability. Though I'd like it if we switched to using "subject-sensitive" rather than "subjective", as is now commonly used in Epistemology.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-19T20:53:45.909Z · LW(p) · GW(p)

So, this isn't the first time I've seen this distinction made here, and I have to admit I don't get it.

Suppose I'm studying ballistics, and I'm trying to come up with some rules that describe how projectiles travel, and I discover that the trajectory of a projectile depends on its mass.

I suppose I could conclude that ballistics is "subjectively objective" or "subject-sensitive," since after all the trajectory is different for different projectiles. But this is not at all a normal way of speaking or thinking about ballistics. What we normally say is that ballistics is "objective" and it just so happens that the proper formulation of objective ballistics takes projectile mass as a parameter. Trajectory is, in part, a function of mass.

When we say that ethics is "subject-sensitive" -- that is, that what I ought to do depends on various properties of me -- are we saying it's different from the ballistics example? Or is this just a way of saying that we haven't yet worked out how to parametrize our ethics to take into account differences among individuals?

Similarly, while we acknowledge that the same projectile will follow a different trajectory in different environments, and that different projectiles of the same mass will follow different trajectories in different environments, we nevertheless say that ballistics is "universal", because the equations that predict a trajectory can take additional properties of the environment and the projectile as parameters. Trajectory is, in part, a function of environment.

When we say that ethics is not universal, are we saying it's different from the ballistics example? Or is this just a way of saying that we haven't yet worked out how to parametrize our ethics to take into account differences among environments?

Replies from: drethelin, hankx7787
comment by drethelin · 2012-07-22T08:44:59.219Z · LW(p) · GW(p)

I think it's an artifact of how we think about ethics. It doesn't FEEL like a bullet should fly the same exact way as an arrow or as a rock, but when you consult your moral intuitions, they seem like they should obviously apply to everyone. Maybe it's because we learn about throwing things and motion through endlessly iterated trial and error, but learn about morality from simple commands from our parents/teachers, that we think about them in different ways.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-22T16:50:54.642Z · LW(p) · GW(p)

So, I'm not quite sure I understood you, but you seem to be explaining how someone might come to believe that ethics are universal/objective in the sense of right action not depending on the actor or the situation at all, even at relatively low levels of specification like "eat more vegetables" or whatever.

Did I get that right?

If so... sure, I can see where someone whose moral intuitions primarily derive from obeying the commands of others might end up with ethics that work like that.

comment by hankx7787 · 2012-07-20T01:46:19.925Z · LW(p) · GW(p)

"the proper formulation of objective ballistics takes projectile mass as a parameter"

I think the best analogy here is to say something like: the proper formulation of decision theory takes terminal values as a parameter. Decision theory defines a "universal" optimum (that is, universal "for all minds"... presumably anyway), but each person is individually running a decision theory process as a function of their own terminal values - there is no "universal" terminal value; for example, if I could build an AI then I could theoretically put in any utility function I wanted. Ethics is "universal" in the sense of optimal decision theory, but "person-dependent" in the sense of plugging in one's own particular terminal values - though terminal values and ethics are not necessarily "mind-dependent", as explained here.
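
A toy sketch of that separation in Python: the "decision theory" below is a deliberately trivial stand-in for the real thing, and the utility functions are invented. The point is only that the optimizing procedure is shared while the values are a plug-in parameter:

```python
# One shared "decision theory", many possible terminal values.
def decide(options, utility):
    """Pick the option the supplied utility function scores highest."""
    return max(options, key=utility)

options = ["read", "exercise", "procrastinate"]

# Two agents with different terminal values, same decision procedure:
bookish = {"read": 3, "exercise": 2, "procrastinate": 0}
slacker = {"read": 1, "exercise": 0, "procrastinate": 5}

print(decide(options, bookish.get))  # -> "read"
print(decide(options, slacker.get))  # -> "procrastinate"
```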

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-20T17:11:09.817Z · LW(p) · GW(p)

I would certainly agree that there is no terminal value shared by all minds (come to that, I'm not convinced there are any terminal values shared by all of any given mind).

Also, I would agree that when figuring out how I should best apply a value-neutral decision theory to my environment I have to "plug in" some subset of information about my own values and about my environment.

I would also say that a sufficiently powerful value-neutral decision theory instructs me on how to optimize any environment towards any value, given sufficiently comprehensive data about the environment and the value. Which seems like another way of saying that decision theory is objective and universal, in the same sense that ballistics is.

How that relates to statements about ethics being universal, objective, person-dependent, and/or mind-dependent is not clear to me, though, even after following your link.

comment by hankx7787 · 2012-07-19T19:39:06.577Z · LW(p) · GW(p)

Surprisingly, this isn't a bad short explanation of her ethics.

I've been reading a lot of Aristotle lately (I highly recommend Aristotle by Randall, for anyone who is into that kind of thing), and Rand mostly just brought Aristotle's philosophy into the 20th century - of course, note that it's now the 21st century, so she is a little dated at this point. For example, various people offered to pay in full for cryonics for Rand when she was close to death, but for unknown reasons she declined, very sadly (if you're looking for someone to take her philosophy into the 21st century, you will need to talk to, well... ahem... me).

It's important to mention that politics is only one dimension of her philosophy and of her writing (although, naturally, it's the subject that all the pundits and mind-killed partisans obsess over) - and really it is the least important, since it is the most derivative of all of her other more fundamental philosophical ideas on metaphysics, epistemology, man's nature, and ethics.

Replies from: OrphanWilde
comment by OrphanWilde · 2012-07-19T19:57:28.244Z · LW(p) · GW(p)

I'll willingly confess to not being interested in Aristotle in the least. Philosophy coursework cured me of interest in Greek philosophy. Give me another twenty years and I might recover from that.

Have you read TVTropes' assessment of Objectivism? It's actually the best summary I've ever read, as far as the core of the philosophy goes.

Replies from: hankx7787
comment by hankx7787 · 2012-07-19T20:16:11.201Z · LW(p) · GW(p)

No I haven't! That was quite good, thanks.

By the way, I fully share your (and Eliezer's) sentiment in regard to academic philosophy. I took a "philosophy of mind" course in college, thinking that would be extremely interesting, and I ended up dropping the class in short order. It was only after a long study of Rand that I ever became interested in philosophy again, once I realized I had a sane basis on which to proceed.

comment by ViEtArmis · 2012-07-19T20:28:44.261Z · LW(p) · GW(p)

Specifically, her non-fiction work (if you find that sort of thing palatable) provides a lot more concrete discussion of her philosophy.

Unfortunately, Ayn Rand is a little too... abrasive... for many people who don't agree entirely with her. She has a lot of resonant points that get rejected because of all the other stuff she presents along with them.

comment by Solvent · 2012-07-28T01:20:30.725Z · LW(p) · GW(p)

I wonder why it is that so many people get here from TV Tropes.

Also, you're not the only one to give up on their first LW account.

Replies from: shokwave, army1987
comment by shokwave · 2012-07-28T18:50:07.052Z · LW(p) · GW(p)

I wonder why it is that so many people get here from TV Tropes.

Possibly: TV Tropes approaches fiction the way LessWrong approaches reality.

Replies from: Solvent
comment by Solvent · 2012-07-29T01:18:07.144Z · LW(p) · GW(p)

How do you mean?

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-29T03:32:32.955Z · LW(p) · GW(p)

At a guess, I would say: looking for recurring patterns in fiction, and extrapolating principles/tropes. It's a very bottom-up approach to literature, taking special note of subversions, inversions, aversions, etc., as opposed to the more top-down academic study of literature that loves to wax poetic about "universal truths" while ignoring large swaths of stories (such as Sci Fi and Fantasy) that don't fit into its grand model. Quite frankly, from my perspective, it seems academic critics tend to force a lot of stories into their preferred mold, falling prey to True Art tropes.

comment by A1987dM (army1987) · 2012-07-29T12:22:02.324Z · LW(p) · GW(p)

I wonder why it is that so many people get here from TV Tropes.

Because it uses as many examples from HP:MoR as it possibly could?

comment by Jayson_Virissimo · 2012-07-19T08:14:50.151Z · LW(p) · GW(p)

Welcome to Less Wrong! I would say something about a rabbit hole but it would be pointless, since you already seem to be descending at quite a high speed.

comment by MBlume · 2012-07-19T23:15:45.845Z · LW(p) · GW(p)

We seem to have a lot of Airbender fans here at LW -- Alicorn was the one who started me watching it, and I know SarahC and rubix are fans.

Welcome =)

comment by RobertLumley · 2012-07-19T21:45:50.529Z · LW(p) · GW(p)

I adore Pixar and many Disney movies for the sweetness and heart.

Did you see Brave? I thought it was great.

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-21T01:32:38.214Z · LW(p) · GW(p)

I did. :) I was so happy to see a mother-daughter movie with no romantic angle (other than the happily married king and queen).

Replies from: RobertLumley
comment by RobertLumley · 2012-07-21T01:57:11.528Z · LW(p) · GW(p)

I thought she was going to have to end up married at the end and I was so. angry. Brave ranked up there with Mulan in terms of kids' movies that I think actually teach kids good lessons, which is a pretty high honor in my book.

Replies from: Desrtopa
comment by Desrtopa · 2012-07-21T02:50:54.658Z · LW(p) · GW(p)

Personally, for their first female protagonist, I felt like Pixar could have done a lot better than a Rebellious Princess. It's a cliché, and I would have liked to see them exercise more creativity, but besides that, I think the instructive value is dubious. Yes, it's awfully burdensome to have one's life direction dictated to an excessive degree by external circumstances and expectations. But on the other hand, Rebellious Princesses, including Merida, tend to rail against the unfairness of their circumstances without stopping to consider that they live in societies where practically everyone has their lives dictated by external circumstances, and there's no easy transition to a social model that allows otherwise.

Merida wants to live a life where she's free to pursue her love of archery and riding, and to marry when and whom she wants? Well, she'd be screwed if she were a peasant: all the necessary house and field work wouldn't leave her the time, her family wouldn't own a horse unless it was a ploughhorse (which she wouldn't be able to take out for pleasure riding), and she'd be married off at an early age out of economic rather than political necessity. And she'd be similarly out of luck if her parents were merchants, or craftsmen, or practically anyone else. Like most Rebellious Princesses, she has modern expectations of entitlement in a society where those expectations don't make sense.

It sucks to be told you can't do something you love because of societal preconceptions; "You shouldn't try to be a mathematician, you're a girl," "You're a black ghetto kid, what are you doing aiming to be a businessman?" etc. But Rebellious Princesses are in a situation more analogous to "You might not want to have to go to school, and would rather spend your time partying with friends and maybe make a living drawing pictures of cartoons you like, but there's no social structure to support you if you try to do that."

By the end of the movie, Merida and her mother birepbzr gurve cevqr naq zhghny zvfhaqrefgnaqvat, naq Zrevqn'f zbgure yrneaf gb frr gur vffhr sebz ure Zrevqn'f cbvag bs ivrj naq abg sbepr ure vagb n fhqqra zneevntr sbe cbyvgvpny rkcrqvrapl, juvyr Zrevqn yrneaf... gung fur ybirf ure zbz rabhtu gb abg jnag ure gb or ghearq vagb n orne? Fhccbfvat gur bgure gevorf jrera'g cercnerq gb pnyy bss gur zneevntr, naq fur jnf fghpx pubbfvat orgjrra n cebonoyl haunccl zneevntr naq crnpr, be ab zneevntr naq jne, jbhyq fur unir pubfra nal qvssreragyl guna fur qvq ng gur fgneg bs gur zbivr?

This probably all sounds like I disapproved of the movie a lot more than I really did, but I definitely wouldn't rank it alongside Mulan in terms of positive social message. Mulan wanted to bring her family honor and keep her father safe, so she went and performed a service for her society which demanded great perseverance and courage, and which her society neither expected nor encouraged her to perform. Merida wasn't happy with the expectations and duties her society placed on her, so she tried to duck out of them, nearly caused a disaster, and ultimately got what she wanted without having to make a hard choice between personal satisfaction and doing her part for her society.

Replies from: Bugmaster, Vaniver, OnTheOtherHandle, RobertLumley
comment by Bugmaster · 2012-07-26T23:18:23.890Z · LW(p) · GW(p)

I thought that Brave was actually a somewhat subversive movie -- perhaps inadvertently so. The movie is structured and presented in a way that makes it look like the standard Rebellious Princess story, with the standard feminist message. The protagonist appears to be a girl who overcomes the Patriarchy by transgressing gender norms, etc. etc. This is true to a certain extent, but it's not the main focus of the movie.

Instead, the movie is, at its core, a very personal story of a child's relationship with her parent, the conflict between love and pride, and the difference between having good intentions and being able to implement them into practice. By the end of the movie, both Merida and her mother undergo a significant amount of character development. Their relationship changes not because the social order was reformed, or because gender norms were defeated -- but because they have both grown as individuals.

Thus, Brave ends up being a more complex (and IMO more interesting) movie than the standard "Rebellious Princess" cliché would allow. In Brave, there are no clear villains; neither Merida nor her mother is wholly in the right, or wholly in the wrong. Contrast this with something like Disney's Tangled (the Rapunzel movie), where the mother is basically a glorified plot device, as opposed to a full-fledged character.

Replies from: wedrifid
comment by wedrifid · 2012-07-27T00:28:07.331Z · LW(p) · GW(p)

In Brave, there are no clear villains; neither Merida nor her mother are wholly in the right, or wholly in the wrong.

How boring. Were there at least some monsters to fight, or an overtly evil usurper to slay? What on earth remains as motivation to watch this movie?

Replies from: Alicorn, Desrtopa
comment by Alicorn · 2012-07-27T00:53:33.805Z · LW(p) · GW(p)

The antagonist is the rapey cultural artifact of forced marriage. Vg vf fynva.

Replies from: wedrifid, Bugmaster, wedrifid
comment by wedrifid · 2012-07-27T02:37:07.201Z · LW(p) · GW(p)

The antagonist is the rapey cultural artifact of forced marriage.

There should be a word for forcing other people to have sex (with each other, not yourself). The connotations of calling a forced arranged marriage 'rapey' should be offensive to the victims. It is grossly unfair to imply that the wife is a 'rapist' just because her husband's father forced his son to marry her for his family's political gain. (Or vice-versa.)

Replies from: Alicorn
comment by Alicorn · 2012-07-27T08:05:21.219Z · LW(p) · GW(p)

I wasn't specifying who was being rapey. Just that the entire setup was rapey.

Replies from: wedrifid
comment by wedrifid · 2012-07-27T08:07:34.616Z · LW(p) · GW(p)

I wasn't specifying who was being rapey. Just that the entire setup was rapey.

That was clear and my reply applies.

(The person to whom the term applies is the person who forces the marriage. Rape(y/ist) would also apply if that person were also a participant in the marriage.)

comment by Bugmaster · 2012-07-27T02:05:18.882Z · LW(p) · GW(p)

As per my post above, I'd argue that the "rapey cultural artifact of forced marriage" is less of a primary antagonist, and more of a bumbling comic relief character.

comment by wedrifid · 2012-07-27T02:01:20.674Z · LW(p) · GW(p)

The antagonist is the rapey cultural artifact of forced marriage. Vg vf fynva.

Cute rot13. I never would have predicted that in a Pixar animation!

comment by Desrtopa · 2012-07-27T02:59:32.203Z · LW(p) · GW(p)

There is an evil monster to fight, of a more literal sort, but it would be a bit of a stretch to call it the primary antagonist.

comment by Vaniver · 2012-07-21T06:05:25.801Z · LW(p) · GW(p)

Upvoted. My thoughts on Brave are over here, but basically Merida is actually a really dark character, and it's sort of sickening that she gets away with everything she does.

Interesting enough to repeat is my suggestion for a better setting:

Consider another movie they could have made, Paisley, about a Scottish girl on the cusp of womanhood who gets a job in one of the first textile mills and is able to support herself and live independently through hard work. This story has the supreme virtue of having actually happened: arranged marriage was not done away with because a preteen girl complained that she wasn't ready; it was done away with because people got richer and could afford something better.

Of course, it's difficult to make a movie glorifying sweatshop labor, whereas princesses are distant enough to be a tame example.

comment by OnTheOtherHandle · 2012-07-21T03:39:08.989Z · LW(p) · GW(p)

I understand your critique, and I mostly agree with it. I actually would have been even happier if Merida had bitten the bullet and married the winner - but for different reasons. She would have married because she loved her mother and her kingdom, and understood that peace must come at a cost - it would still very much count as a movie with no romantic angle. She would have been like Princess Yue in Avatar, a character I had serious respect for. When Yue was willing to marry Hahn for duty, and then was willing to fnpevsvpr ure yvsr gb orpbzr gur zbba, that was the first time I said to myself, "Wow, these guys really do break convention."

It would have been a lot braver of Merida to accept the dictates of her society (but for the right reasons), or to find a more substantial compromise than just convincing the other lords to yrg rirelbar zneel sbe ybir. But I still think it was a sweet movie.

Replies from: Desrtopa
comment by Desrtopa · 2012-07-21T05:07:09.884Z · LW(p) · GW(p)

I agree that it was a sweet movie, and overall I enjoyed watching it. The above critique is a lot harsher than my overall impression. But when I heard that Pixar was making their first movie with a female lead, I expected a lot out of them and thought they were going to try for something really exceptional in both character and message, and it ended up undershooting my expectations on those counts.

I can sympathize with the extent to which simply having competent important female characters with relatable goals is a huge step forward for a lot of works. Ironically, I don't think I really grasped how frustrating the lack of them must be until I started encountering works which are supposed to be some sort of wish fulfillment for guys. There are numerous anime and manga, particularly harem series, which are full of female characters graced with various flavors of awesomeness, without any significant male protagonists other than the lead who's a total loser, and I find it infuriating when the closest thing I have to a proxy in the story is such a lousy and overshadowed character. It wasn't until I started encountering works like those that it hit me how painful it must be to be hard pressed to find stories that aren't like that on some level.

Replies from: OnTheOtherHandle, Nornagest
comment by OnTheOtherHandle · 2012-07-22T01:00:20.607Z · LW(p) · GW(p)

One thing that disappointed me about this whole story was that it was the one and only Pixar movie set in the past. Pixar has always been about sci-fi, not fantasy, and its works have been set in contemporary America (with Magic Realism), alternate universes, or the future. Did "female protagonist" pattern-match so strongly with "rebellious medieval princess" that even Pixar didn't do anything really unusual with it?

Even though I was happy Merida wasn't rebelling because of love, it seems like they stuck with the standard old-fashioned feminist story of resisting an arranged marriage, when they could have avoided all of that in a work set in the present or the future, where a woman would have more scope to really be brave.

All in all, it seems like their father-son movie was a lot stronger than their mother-daughter movie.

comment by Nornagest · 2012-07-21T05:26:00.564Z · LW(p) · GW(p)

I don't think "This Loser Is You" is the right trope for that. Actually, I don't think TV Tropes has the right trope for that; as best I can tell, harem protagonists are the way they are not because they're supposed to stand for the audience in a representative sort of way but because they're designed as a receptacle for the audience to pour their various insecurities into. They can display negative traits, because that's assumed to make them more sympathetic to viewers that share them. But they can't display negative traits strong enough to be grounds for actual condemnation, or to define their characters unambiguously; you'll never see Homer Simpson as a harem lead. And they can't show positive traits except for a vague agreeableness and whatever supernatural powers the plot requires, because that breaks the pathos. Yes, Tenchi Muyo, that's you I'm looking at.

More succinctly, we're all familiar with sex objects, right? Harem anime protagonists are sympathy objects.

Replies from: Desrtopa
comment by Desrtopa · 2012-07-21T05:42:49.452Z · LW(p) · GW(p)

I agree that This Loser Is You isn't quite the right trope. There's a more recent launch, Loser Protagonist, which doesn't quite describe it either, but which uses the same name I did when, ages ago, I tried to put the trope I thought accurately described it through YKTTW.

If I understand what you mean by "sympathy objects," I think we have the same idea in mind. I tend to think of them as Lowest Common Denominator Protagonists, because they lack any sort of virtue or achievement that would alienate them from the most insecure or insipid audience members.

comment by RobertLumley · 2012-07-21T03:19:01.854Z · LW(p) · GW(p)

That's a very fair critique. A few things though:

First, you might want to put that in ROT13 or add a [SPOILER](http://lh5.ggpht.com/_VZewGVtB3pE/S5C8VF3AgJI/AAAAAAAAAYk/5LJdTCRCb8k/eliezer_yudkowskyjpg_small.jpg) tag or something.

Zrevqn yrneaf... gung fur ybirf ure zbz rabhtu gb abg jnag ure gb or ghearq vagb n orne?

Merida learned to value her relationship with her mother, which I think a lot of kids need to hear going into adolescence. When you put it this way it doesn't seem nearly as trite as your phrasing makes it sound.

Merida wants to live a life where she's free to pursue her love of archery and riding, and get married when and to whom she wants? Well she'd be screwed if she were a peasant etc.

Well yeah, but the answer to "society sucks and how can I fix it" isn't "oh it sucks for everyone and even more for others, I'll just sit down and shut up". (Not that you argue it is.)

From TV Tropes:

If she's not the hero, quite often she's the hero's love interest. This will sometimes invoke Marry for Love not only as another way for her to rebel, but to also get out of an Arranged Marriage

This is exactly why I thought Brave was good - it moved away from this trope. It wasn't "I don't love this person, I love this other person!", it was "I don't have to love/marry someone to be a competent and awesome person". She was the hero of her own story, and didn't need anyone else to complete her. That doesn't have to be true for everyone, but the counterpoint needs to be more present in society.

And I said it ranked up there, not that it passed Mulan. :) It gets that honor by being literally one of only two movies I can think of with a positive message in this respect. Although I will concede that I'm not familiar with very many kids' movies.

Replies from: Desrtopa
comment by Desrtopa · 2012-07-21T04:09:10.600Z · LW(p) · GW(p)

I edited my comment to rot13 the ending spoilers; I left in the stuff that's more or less advertised as the premise of the movie. You might want to edit your reply so that it doesn't quote the unenciphered text.
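
(For anyone unfamiliar with the convention: rot13 just rotates each letter thirteen places and leaves everything else alone, so applying it twice restores the original. A minimal sketch using Python's standard codecs module; the example sentence is mine, not a line from the movie.)

```python
import codecs

spoiler = "Merida and her mother reconcile."  # illustrative sentence, not a quote
encoded = codecs.encode(spoiler, "rot13")     # rotate each letter 13 places
print(encoded)                                # Zrevqn naq ure zbgure erpbapvyr.
print(codecs.encode(encoded, "rot13"))        # rot13 is its own inverse
```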

Merida learned to value her relationship with her mother, which I think a lot of kids need to hear going into adolescence. When you put it this way it doesn't seem nearly as trite as your phrasing makes it sound.

I think that's a valuable lesson, but I felt like Brave's presentation of it suffered for the fact that Merida and her mother really only reconcile after Merida essentially gets her way about everything. Teenagers who feel aggrieved in their relationships with their parents and think that they're subject to pointless unfairness are likely to come away with the lesson "I could get along so much better with my parents if they'd stop being pointlessly unfair to me!" rather than "Maybe I should be more open to the idea that my parents have legitimate reasons for not being accommodating of all my wishes, and be prepared to cut them some slack."

A more well-rounded version of the movie's approximate message might have been something like "Some burdensome social expectations and life restrictions have good reasons behind them and others don't; learn to distinguish between them so you can focus your effort on challenging the right ones." But instead, it came off more like "Kids, you should love and appreciate your parents, at least once you work past their inclination to arbitrarily oppress you."

Replies from: OnTheOtherHandle
comment by OnTheOtherHandle · 2012-07-22T01:22:15.511Z · LW(p) · GW(p)

Now that I think about it, very few movies or TV shows actually teach that lesson. There are plenty of works of fiction that portray the whiney teenager in a negative light, and there are plenty that portray the unreasonable parent in a negative light, but nothing seems to change. It all plays out with the boring inevitability of a Greek tragedy.

comment by aaronsw · 2012-08-04T09:56:50.973Z · LW(p) · GW(p)

I'm Aaron Swartz. I used to work in software (including as a cofounder of Reddit, whose software powers this site) and now I work in politics. I'm interested in maximizing positive impact, so I follow GiveWell carefully. I've always enjoyed the rationality improvement stuff here, but I tend to find the lukeprog-style self-improvement stuff much more valuable. I've been following Eliezer's writing since before even the OvercomingBias days, I believe, but have recently started following LW much more carefully after a couple of friends mentioned it to me in close succession.

I found myself wanting to post but don't have any karma, so I thought I'd start by introducing myself.

I've been thinking on-and-off about starting a LessWrong spinoff around the self-improvement stuff (current name proposal: LessWeak). Is anyone else interested in that sort of thing? It'd be a bit like the Akrasia Tactics Review, but applied to more topics.

Replies from: Jayson_Virissimo, ata, Jonathan_Graehl, Emile, the_sober_grudge
comment by Jayson_Virissimo · 2012-08-05T08:50:08.978Z · LW(p) · GW(p)

I've been thinking on-and-off about starting a LessWrong spinoff around the self-improvement stuff (current name proposal: LessWeak). Is anyone else interested in that sort of thing? It'd be a bit like the Akrasia Tactics Review, but applied to more topics.

Instead of a spinoff, maybe Discussion should be split into more sections (one being primarily about instrumental rationality/self-help).

Replies from: kilobug
comment by kilobug · 2012-08-24T08:01:04.448Z · LW(p) · GW(p)

Topic-related discussion sections seem like a good idea to me. Some here may be interested in rationality/cognitive bias but not in AI, or not in space exploration, or not in cryonics, ...

This would also make it possible to lift "bans" like "no politics", if the topic stays in a dedicated section and doesn't "pollute" things for those not interested in it.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-08-24T08:57:53.049Z · LW(p) · GW(p)

I endorse this idea.

comment by ata · 2012-08-04T23:21:09.172Z · LW(p) · GW(p)

Yay, it is you!

(I've followed your blog and your various other deeds on-and-off since 2002-2003ish and have always been a fan; good to have you here.)

comment by Jonathan_Graehl · 2012-08-05T06:54:28.527Z · LW(p) · GW(p)

LessWeak - good idea. On the name: cute, but I imagine it getting old. Still, it's not as embarrassing as something unironically Courage Wolf, like 'LiveStrong'.

comment by Emile · 2012-08-04T13:23:27.496Z · LW(p) · GW(p)

Welcome to LessWrong!

Apparently I used to comment on your blog back in 2004 - my, how time flies!

comment by the_sober_grudge · 2013-02-23T11:45:58.357Z · LW(p) · GW(p)

Reboot in peace, friend.

comment by Dahlen · 2012-07-18T21:10:02.047Z · LW(p) · GW(p)

'Twas about time that I decided to officially join. I discovered Less Wrong in the autumn of 2010, and until now I've felt reluctant to actually contribute -- most people here have far more illustrious backgrounds. But I figured that there are sufficiently few ways in which I could show myself as a total ignoramus in an intro post, right?

I don't consider my gender, age and nationality to be a relevant part of my identity, so instead I'll start by saying I'm INTP. Extreme I (to the point of schizoid personality disorder), extreme T. Usually I have this big internal conflict going on between the part of me that wishes to appear as a wholly rational genius and the other part, which has read enough psychology and LW (you guys definitely deserve credit for this) to know I'm bullshitting myself big time.

My educational background so far is modest, a fact for which procrastination is the main culprit. I'm currently working on catching up with high school level math... so far I've only reviewed trigonometry, so I'm afraid I won't be able to participate in more technical discussions around here. Aside from a few Khan Academy videos, I'm still ignorant about probability; I did try to solve that cancer probability problem though, and when put like that into a word problem, I used Bayes' theorem intuitively. (Funny thing is, I still don't understand the magic behind it, even if I can apply it.) I know no programming beyond really elementary C++ algorithms; I have a pretty good grasp of high school physics, minus relativity and QM. I am seeking to do everything in my power to correct these shortcomings, and when/if I achieve results, I'll be happy to post my findings about motivation & procrastination on LW, if anyone is interested.
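
For concreteness, here's a sketch of that cancer problem with the standard illustrative numbers (assumed from memory; the exact figures in the original post may differ):

```python
# Assumed illustrative numbers for the classic mammography problem.
prior = 0.01            # P(cancer) among women screened
sensitivity = 0.80      # P(positive test | cancer)
false_positive = 0.096  # P(positive test | no cancer)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive  # Bayes' theorem
print(round(posterior, 3))  # 0.078 -- a positive test still leaves cancer unlikely
```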

What I have in common with the rest of this community is a love for rational, intelligent and productive discussions. I'm hugely disappointed with the overwhelming majority of internet and RL debates. Many times I've found myself trying to be the voice of reason and pointing out flaws in people's reasoning, even when I agreed with the core idea, only to have them tell me that I'm being too analytical and that I should... what... close off my mind and stop noticing mistakes, right? So I come here seeking discussions with people who would listen to reason and facilitate intellectually fruitful debates.

I'm very eager to help spread the knowledge about cognitive biases and educate people in the art of good reasoning.

I'm also interested (although not necessarily well-versed, as mentioned above) in most topics people here are interested in -- everything concerning mathematics and science, as well as philosophy and the mind (which are, by comparison, my two strongest points).

There are quite a few ways in which I don't fit the typical LW mold, though, and I'm mentioning this so that I find out whether any of these are going to be problematic in our interaction.

  • For one, I'm not particularly interested in AI and transhumanism. Not opposed, just indifferent. The only related topic which interests me is life extension research. Some people might try to change my mind about this from the get-go, as I've seen done with other newbies; I know you probably have some very good arguments for your position, but hopefully nobody's going to mind one less potential AI enthusiast. My interests are spread thin enough as they are.
  • I seem to be significantly more left-leaning than the majority of folks here. I'm decidedly not dogmatic about it, though, and on occasion I speak out against heavily ideological discourse even when it has a central message that I agree with.
  • Kind of clueless and mathematically illiterate at this moment.

This has to be getting rather long, so I'll stop here, hoping that I've said everything that I believed to be relevant to an intro post.

Replies from: Swimmer963, Davidmanheim
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-07-20T02:03:12.714Z · LW(p) · GW(p)

Welcome!

Many times I've found myself trying to be the voice of reason and pointing out flaws in people's reasoning, even when I agreed with the core idea, only to have them tell me that I'm being too analytical and that I should... what... close off my mind and stop noticing mistakes, right?

That's interesting... I don't think I've ever had someone respond to my pointing out flaws in this way. I've had people argue back plenty of times, but never tell me that we shouldn't be arguing about it. Can you give some examples of topics where this has happened? I would be curious what kind of topics engender this reaction in people.

Replies from: juliawise, Dahlen, Davidmanheim
comment by juliawise · 2012-07-20T16:00:12.951Z · LW(p) · GW(p)

I've seen this happen where one person enjoys debate/arguing and another does not. To one person it's an interesting discussion, and to the other it feels like a personal attack. Or, more commonly, I've seen onlookers get upset watching such a discussion, even if they don't personally feel targeted. Specifically, I'm remembering three men loudly debating about physics while several of their wives left the room in protest because it felt too argumentative to them.

Body language and voice dynamics can affect this a lot, I think - some people get loud and frowny when they're excited/thinking hard, and others may misread that as angry.

Replies from: Nornagest
comment by Nornagest · 2012-07-20T18:27:15.063Z · LW(p) · GW(p)

I ended up having to include a disclaimer in the FAQ for an older project of mine, saying that the senior staff tends to get very intense when discussing the project and that this doesn't indicate drama on our part but is actually friendly behavior. That was a text channel, though, so body language and voice dynamics wouldn't have had anything to do with it. I think a lot of people just read any intense discussion as hostile, and quality of argument doesn't really enter into it -- probably because they're used to an arguments-as-soldiers perspective.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-20T18:49:51.253Z · LW(p) · GW(p)

We used to say of two friends of mine that "They don't so much toss ideas back and forth as hurl sharp jagged ideas directly at one another's heads."

Replies from: gwern
comment by gwern · 2012-07-21T02:28:12.788Z · LW(p) · GW(p)

"Wise words are like arrows flung at your forehead. What do you do? Why, you duck of course."

--Steven Erikson, House of Chains (2002)

comment by Dahlen · 2012-07-21T14:37:59.182Z · LW(p) · GW(p)

Oh, it's not a topic-specific behavior. Whenever I go too far down a chain of reasoning ("too far" meaning as few as three causal relationships), people start complaining that I'm giving too much thought to it, and imply they are unable to follow the arguments. I'm just not surrounded by a lot of people that like long and intricate discussions.

(Funnily, both my parents are the type that get tired listening to complex reasoning, and I turned out the complete opposite.)

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-07-22T03:02:34.155Z · LW(p) · GW(p)

I'm just not surrounded by a lot of people that like long and intricate discussions.

That is...intensely frustrating. I've had people tell me that "well, I find all the points you're trying to make really complicated, and it's easier for me to just have faith in God" or that kind of thing, but I've never actually been rebuked for applying an analytical mindset to discussions. Props on having acquired those habits anyway, in spite of what sounds like an unfruitful starting environment!

Replies from: Dahlen
comment by Dahlen · 2012-07-22T18:58:46.496Z · LW(p) · GW(p)

Thanks! Anyway, there's the internet to compensate for that. The wide range of online forums built around ideas of varied intellectual depth means you even get to choose your difficulty level...

comment by Davidmanheim · 2012-07-20T14:06:32.762Z · LW(p) · GW(p)

This happens frequently in places where reasoning is suspect, or not valued. Kids in poor areas with few scholastic or academic opportunities find more validation in pursuits that are non-academic, and they tend to deride logic. It's parodied well by Colbert, but it's not uncommon.

I just avoid those people, and now know few of them. Most of the crowd here, I suspect, is in a similar position.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-07-20T18:13:00.049Z · LW(p) · GW(p)

I just avoid those people, and now know few of them. Most of the crowd here, I suspect, is in a similar position.

I may be in a similar position of never having known anyone who was like this. Also, I'm very conflict-averse myself (though I like discussing things), so any discussion I start is less likely to have any component of raised voices or emotional involvement that could make it sound like an argument.

comment by Davidmanheim · 2012-07-20T01:46:57.738Z · LW(p) · GW(p)

The best way to get good at some particular type of math, or programming, or skill, in my experience, is to put yourself in a position where you need to do it for something. Find a job that requires you to do a bit of programming, or pick a task that requires it. Spend time on it, and you'll learn a bit. Then go back, realize you missed some basics, and pick them up. Oh, and read a ton.

You're interested in a lot of things, and trying to catch up with what you feel you should know, which is wonderful. What do you do with your time? Are you working? College?

Replies from: Dahlen
comment by Dahlen · 2012-07-21T16:00:13.901Z · LW(p) · GW(p)

I prefer the practice-based approach too, but from my position theoretical approaches are cheaper and much more available, if slower and rather tedious. In school they taught us that the only way to get better in an area is to do extra homework, and frankly my methods haven't improved much since. My usual way is to take an exercise book and solve everything in it, if that counts as practice; other than that, I only have the internet and a very limited budget.

You're interested in a lot of things, and trying to catch up with what you feel you should know, which is wonderful. What do you do with your time? Are you working? College?

Senior year in high school. Right now I have 49 vacation days left, after which school will start, studying will get replaced with busywork and my learning rates will have no choice but to fall dramatically. So now I'm trying to maximize studying time while I still can... It's all kind of backwards, isn't it?

Replies from: Davidmanheim
comment by Davidmanheim · 2012-07-22T13:39:47.032Z · LW(p) · GW(p)

Where you go to college and the amount of any scholarships you get are a bigger deal for your long term personal growth than any of the specific subjects you will learn right now.

In the spirit of long-term decision making, figure out where you want to go to college, or what your options are, and spend the summer maximizing the odds of getting into your first-choice schools. I cannot imagine that it won't be a better investment of your time than any one subject you are studying (unless you are preparing for the SAT or some such test). So I guess you should spend the summer on Khan Academy, and on learning and practicing vocabulary, to get better at taking the tests that will get you into a great college, where your opportunities to learn are greatly expanded.

Replies from: Dahlen
comment by Dahlen · 2012-07-22T18:49:42.702Z · LW(p) · GW(p)

I'm afraid all of this is not really applicable to me... My country isn't Western enough for such a wide range of opportunities. Here, institutes of higher education range from almost acceptable (state universities) to degree factories (basically all private colleges). Studying abroad in a Western country costs, per semester, somewhere between half and thrice my parents' yearly income. On top of everything, my grades would have to be impeccable and my performance worthy of national recognition for a foreign college to want me as a student badly enough to step over the money issue and cover my whole tuition. (They're not, not by a long shot.)

Thanks for the support, in any case...

comment by iceman · 2012-07-19T23:05:25.740Z · LW(p) · GW(p)

I've commented infrequently, but never did one of these "Welcome!" posts.

Way back in the Overcoming Bias days, my roommate raved constantly about the blog and Eliezer Yudkowsky in particular. I pattern matched his behaviour to being in a cult, and moved on with my life. About two years later (?), a common friend of ours recommended Harry Potter and the Methods of Rationality, which I then read, which brought me to Less Wrong, reading the Sequences, etc. About a year later, I signed up for cryonics with Alcor, and I now give more than my former roommate to the Singularity Institute. (He is very amused by this.)

I spend quite a bit of time working on my semi-rationalist fanfic, My Little Pony: Friendship is Optimal, which I'll hopefully release on a timeframe of a few months. (I previously targeted releasing this damn thing in April, but... planning fallacy. I've whittled my issue list down to three action items, though, and it's been through its first bout of prereading.)

Replies from: Alicorn, maia
comment by Alicorn · 2012-07-19T23:19:00.773Z · LW(p) · GW(p)

My Little Pony: Friendship is Optimal

Want.

comment by maia · 2012-07-26T00:39:06.085Z · LW(p) · GW(p)

Could I convince you to perhaps post on the weekly rationality diaries about progress, or otherwise commit yourself, or otherwise increase the probability that you'll put this fic up soon? :D

comment by AliceKingsley · 2012-07-19T17:57:17.521Z · LW(p) · GW(p)

Hi! I got here from reading Harry Potter and the Methods of Rationality, which I think I found on TV Tropes. Once I ran out of story to catch up on, I figured I'd start investigating the source material.

I've read a couple of sequences, but I'll hold off on commenting much until I've gotten through more material. (Especially since the quality of discussions in the comment sections is so high.) Thanks for an awesome site!

comment by wdmacaskill · 2012-11-09T17:57:42.979Z · LW(p) · GW(p)

Hi All,

I'm Will Crouch. Apart from one other, this is my first comment on LW. However, I know and respect many people within the LW community.

I'm a DPhil student in moral philosophy at Oxford, though I'm currently visiting Princeton. I work on moral uncertainty: on whether one can apply expected utility theory in cases where one is uncertain about what is of value, or about what one ought to do. It's difficult to do so, but I argue that you can.

I got to know people in the LW community because I co-founded two organisations, Giving What We Can and 80,000 Hours, dedicated to the idea of effective altruism: that is, using one's marginal resources in whatever way the evidence supports as doing the most good. A lot of LW members support the aims of these organisations.

I wouldn't call myself a 'rationalist' without knowing a lot more about what that means. I do think that Bayesian epistemology is the best we've got, and that rational preferences should conform to the von Neumann-Morgenstern axioms (though I'm uncertain - there are quite a lot of difficulties for that view). I think that total hedonistic utilitarianism is the most plausible moral theory, but I'm extremely uncertain in that conclusion, partly on the basis that most moral philosophers and other people in the world disagree with me. I think that the more important question is what credence distribution one ought to have across moral theories, and how one ought to act given that credence distribution, rather than what moral theory one 'adheres' to (whatever that means).

Replies from: MixedNuts, Nisan, beoShaffer
comment by MixedNuts · 2012-11-09T18:30:02.717Z · LW(p) · GW(p)

Pretense that this comment has a purpose other than squeeing at you like a 12-year-old fangirl: what arguments make you prefer total utilitarianism to average?

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-09T19:42:13.120Z · LW(p) · GW(p)

Haha! I don't think I'm worthy of squeeing, but thank you all the same.

In terms of the philosophy, I think that average utilitarianism is hopeless as a theory of population ethics. Consider the following case:

Population A: 1 person exists, with a life full of horrific suffering. Her utility is -100.

Population B: 100 billion people exist, each with lives full of horrific suffering. Each of their utility levels is -99.9

Average utilitarianism says that Population B is better than Population A. That definitely seems wrong to me: bringing into existence people whose lives aren't worth living just can't be a good thing.
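
To make the arithmetic explicit (a quick sketch of the comparison above, nothing more):

```python
def average_utility(groups):
    """groups: list of (utility, number_of_people) pairs."""
    total = sum(u * n for u, n in groups)
    return total / sum(n for _, n in groups)

pop_a = [(-100.0, 1)]               # one person with utility -100
pop_b = [(-99.9, 100_000_000_000)]  # 100 billion people, each at -99.9

print(average_utility(pop_a))  # -100.0
print(average_utility(pop_b))  # -99.9, so average utilitarianism ranks B above A
```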

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-09T23:32:45.219Z · LW(p) · GW(p)

That's not obvious to me. IMO, the reason why in the real world “bringing into existence people whose lives aren't worth living just can't be a good thing” is that they consume resources that other people could use instead; but if in the hypothetical you fix the utility of each person by hand, that doesn't apply to the hypothetical.

I haven't thought about these things that much, but my current position is that average utilitarianism is not actually absurd -- the absurd results of thought experiments are due to the fact that those thought experiments ignore the fact that people interact with each other.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-11-10T17:14:57.172Z · LW(p) · GW(p)

I don't understand your comment. Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more. If you don't think that such a world would be better, then you must agree that average utilitarianism is false.

Here's another, even more obviously decisive, counterexample to average utilitariainsm. Consider a world A in which people experience nothing but agonizing pain. Consider next a different world B which contains all the people in A, plus arbitrarily many more people all experiencing pain only slightly less intense. Since the average pain in B is less than the average pain in A, average utilitarianism implies that B is better than A. This is clearly absurd, since B differs from A only in containing a surplus of agony.

Replies from: army1987, MugaSofer
comment by A1987dM (army1987) · 2012-11-10T19:32:46.512Z · LW(p) · GW(p)

Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more. If you don't think that such a world would be better, then you must agree that average utilitarianism is false.

I do think that the former is better (to the extent that I can trust my intuitions in a case that different from those in their training set).

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-11T00:01:15.677Z · LW(p) · GW(p)

Interesting. The deeper reason why I reject average utilitarianism is that it makes the value of lives non-separable.

"Separability" of value just means being able to evaluate something without having to look at anything else. I think that, whether or not it's a good thing to bring a new person into existence depends only on facts about that person (assuming they don't have any causal effects on other people): the amount of their happiness or suffering. So, in deciding whether to bring a new person into existence, it shouldn't be relevant what happened in the distant past. But average utilitarianism makes it relevant: because long-dead people affect the average wellbeing, and therefore affect whether it's good or bad to bring that person into existence.

But, let's return to the intuitive case above, and make it a little stronger.

Now suppose:

Population A: 1 person suffering a lot (utility -10)

Population B: That same person, suffering an arbitrarily large amount (utility -n, for any arbitrarily large n), and a very large number, m, of people suffering -9.9.

Average utilitarianism entails that, for any n, there is some m such that Population B is better than Population A. I.e. Average utilitarianism is willing to add horrendous suffering to someone's already horrific life, in order to bring into existence many other people with horrific lives.
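
To check that claim numerically (the threshold here is my own algebra, not in the text above: B's average, (-n - 9.9m)/(m + 1), exceeds A's -10 exactly when m > 10n - 100):

```python
def average_b(n, m):
    # one person at utility -n, plus m people each at -9.9
    return (-n - 9.9 * m) / (m + 1)

n = 1000              # arbitrarily horrific suffering for the original person
m = 10 * n - 100 + 1  # just past the threshold m > 10n - 100
print(average_b(n, m) > -10)  # True: average utilitarianism prefers population B
```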

Do you still get the intuition in favour of average here?

Replies from: TorqueDrifter, army1987, drnickbone
comment by TorqueDrifter · 2012-11-13T03:36:48.607Z · LW(p) · GW(p)

Suppose your moral intuitions cause you to evaluate worlds based on your prospects as a potential human - as in, in pop A you will get utility -10, in pop B you get an expected (1/m)(-n) + ((m-1)/m)(-9.9). These intuitions could correspond to a straightforward "maximize expected utility of 'being someone in this world'", or something like "suppose all consciousness is experienced by a single entity from multiple perspectives, completing all lives and then cycling back again from the beginning; maximize this being's utility". Such perspectives would give the "non-intuitive" result in these sorts of thought experiments.
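
For concreteness, a sketch of that calculation (n and m chosen arbitrarily):

```python
n, m = 1000.0, 1_000_000.0

eu_a = -10.0                                    # pop A: you are the lone sufferer
eu_b = (1 / m) * (-n) + ((m - 1) / m) * (-9.9)  # pop B: expected utility of a random life
print(eu_b > eu_a)  # True once m is large relative to n
```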

Replies from: TorqueDrifter
comment by TorqueDrifter · 2012-11-14T05:46:04.870Z · LW(p) · GW(p)

Hm, a downvote. Is my reasoning faulty? Or is someone objecting to my second example of a metaphysical stance that would motivate this type of calculation?

Replies from: MugaSofer
comment by MugaSofer · 2012-11-14T09:47:04.819Z · LW(p) · GW(p)

Perhaps people simply objected to the implied selfish motivations.

Replies from: TorqueDrifter
comment by TorqueDrifter · 2012-11-14T17:23:05.230Z · LW(p) · GW(p)

Perhaps! Though I certainly didn't intend to imply that this was a selfish calculation - one could totally believe that the best altruistic strategy is to maximize the expected utility of being a person.

comment by A1987dM (army1987) · 2012-11-11T00:32:55.235Z · LW(p) · GW(p)

assuming they don't have any causal effects on other people

Once you make such an unrealistic assumption, the conclusions won't necessarily be realistic. (If you assume water has no viscosity, you can conclude that it exerts no drag on stuff moving in it.) In particular, ISTM that as long as my basic physiological needs are met, my utility almost exclusively depends on interacting with other people, playing with toys invented by other people, reading stuff written by other people, listening to music by other people, etc.

comment by drnickbone · 2012-11-14T08:16:30.696Z · LW(p) · GW(p)

When discussing such questions, we need to be careful to distinguish the following:

  1. Is a world containing population B better than a world containing population A?
  2. If a world with population A already existed, would it be moral to turn it into a world with population B?
  3. If Omega offered me a choice between a world with population A and a world with population B, and I had to choose one of them, knowing that I'd live somewhere in the world, but not who I'd be, would I choose population B?

I am inclined to give different answers to these questions. Similarly for Parfit's repugnant conclusion; the exact phrasing of the question could lead to different answers.

Another issue is background populations, which turn out to matter enormously for average utilitarianism. Suppose the world already contains a very large number of people with average utility 10 (off in distant galaxies, say), and call this population C. Then the combination B+C has lower average utility than A+C, and gets a clear negative answer on all the questions, thus matching your intuition.

I suspect that this is the situation we're actually in: a large, maybe infinite, population elsewhere that we can't do anything about, and whose average utility is unknown. In that case, it is unclear whether average utilitarianism tells us to increase or decrease the Earth's population, and we can't make a judgement one way or another.
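
A quick numerical check of the background-population point (all figures assumed purely for illustration):

```python
def average_utility(groups):
    # groups: list of (utility, number_of_people) pairs
    total = sum(u * n for u, n in groups)
    return total / sum(n for _, n in groups)

c = (10.0, 10**12)                           # huge distant population C at utility 10
a_plus_c = [(-10.0, 1), c]                   # population A plus background C
b_plus_c = [(-1000.0, 1), (-9.9, 10**6), c]  # population B (n=1000, m=10^6) plus C

print(average_utility(a_plus_c) > average_utility(b_plus_c))  # True: with C, A+C wins
```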

comment by MugaSofer · 2012-11-10T19:47:50.931Z · LW(p) · GW(p)

Average utilitarianism implies that a world in which lots and lots of people suffer a lot is better than a world in which a single individual suffers just a little bit more.

While I am not an average utilitarian (I think), a world containing only one person suffering horribly does seem kinda worse.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-11-10T20:20:42.529Z · LW(p) · GW(p)

Both worlds contain people "suffering horribly".

Replies from: MugaSofer
comment by MugaSofer · 2012-11-10T20:32:41.053Z · LW(p) · GW(p)

One world contains people suffering horribly. The other contains a person suffering horribly. And no-one else.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-11-10T21:59:10.135Z · LW(p) · GW(p)

So, the difference is that in one world there are many people, rather than one person, suffering horribly. How on Earth can this difference make the former world better than the latter?!

Replies from: MugaSofer
comment by MugaSofer · 2012-11-10T22:05:02.136Z · LW(p) · GW(p)

Because it doesn't contain anyone else. There's only one human left and they're "suffering horribly".

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-11-10T22:12:42.281Z · LW(p) · GW(p)

Suppose I publicly endorse a moral theory which implies that the more headaches someone has, the better the world becomes. Suppose someone asks me to explain my rationale for claiming that a world that contains more headaches is better. Suppose I reply by saying, "Because in this world, more people suffer headaches."

What would you conclude about my sanity?

Replies from: MugaSofer
comment by MugaSofer · 2012-11-10T22:22:04.859Z · LW(p) · GW(p)

Most people value humanity's continued existence.

comment by Nisan · 2012-11-09T18:34:35.932Z · LW(p) · GW(p)

I'm glad you're here! Do you have any comments on Nick Bostrom and Toby Ord's idea for a "parliamentary model" of moral uncertainty?

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-09T19:38:52.394Z · LW(p) · GW(p)

Thanks! Yes, I'm good friends with Nick and Toby. My view on their model is as follows. Sometimes intertheoretic value comparisons are possible: that is, we can make sense of the idea that the difference in value (or wrongness) between two options A and B on one moral theory is greater, lesser, or equal to the difference in value (or wrongness) between two options C and D on another moral theory. So, for example, you might think that killing one person in order to save a slightly less happy person is much more wrong according to a rights-based moral view than it is according to utilitarianism (even though it's wrong according to both theories). If we can make such comparisons, then we don't need the parliamentary model: we can just use expected utility theory.

Sometimes, though, it seems that such comparisons aren't possible. E.g. I add one person whose life isn't worth living to the population. Is that more wrong according to total utilitarianism or average utilitarianism? I have no idea. When such comparisons aren't possible, then I think that something like the parliamentary model is the right way to go. But, as it stands, the parliamentary model is more of a suggestion than a concrete proposal. In terms of the best specific formulation, I think that you should normalise incomparable theories at the variance of their respective utility functions, and then just maximise expected value. Owen Cotton-Barratt convinced me of that!
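
A rough sketch of what I mean by normalising at the variance (illustrative code with made-up numbers, not a worked-out formalism):

```python
import statistics

def normalize(utilities):
    # rescale one theory's utilities over the option set to unit variance;
    # assumes the theory isn't indifferent between all options (sd > 0)
    sd = statistics.pstdev(utilities)
    return [u / sd for u in utilities]

def best_option(options, theories):
    # theories: list of (credence, utilities) pairs, one utility per option
    normalized = [(c, normalize(us)) for c, us in theories]
    scores = [sum(c * us[i] for c, us in normalized) for i in range(len(options))]
    return max(zip(scores, options))[1]

options = ["add a person whose life isn't worth living", "don't"]
theories = [(0.5, [-10.0, 0.0]),  # total utilitarianism (made-up numbers)
            (0.5, [-1.0, 0.0])]   # average utilitarianism (made-up numbers)
print(best_option(options, theories))  # -> "don't"
```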

Sorry if that was a bit of a complex response to a simple question!

comment by beoShaffer · 2012-11-09T18:57:27.399Z · LW(p) · GW(p)

Hi Will,

I wouldn't call myself a 'rationalist' without knowing a lot more about what that means.

I think most LWers would agree that "anyone who tries to practice rationality as defined on Less Wrong" is a passable description of what we mean by 'rationalist'.

Replies from: wdmacaskill
comment by wdmacaskill · 2012-11-09T19:48:26.164Z · LW(p) · GW(p)

Thanks for that. I guess that means I'm not a rationalist! I try my best to practice (1). But I only contingently practice (2). Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering. I.e. I do what I do not because it's what I happen to value, but because I think it's objectively valuable (and if you value something else, like promoting suffering, then I think you're mistaken!) That is, I'm a moral realist. Whereas the definition given in Eliezer's post suggests that being a rationalist presupposes moral anti-realism. When I talk with other LW-ers, this often seems to be a point of disagreement, so I hope I'm not just being pedantic!

Replies from: thomblake, Kindly
comment by thomblake · 2012-11-09T20:01:30.211Z · LW(p) · GW(p)

Whereas the definition given in Eliezer's post suggests that being a rationalist presupposes moral anti-realism

Not at all. (Eliezer is a sort of moral realist). It would be weird if you said "I'm a moral realist, but I don't value things that I know are objectively valuable".

It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not. Just like math lets you crunch numbers, whether they're real statistics or made up. But believing you shouldn't make up statistics doesn't therefore mean you don't do math.

Replies from: Pablo_Stafforini, somervta
comment by Pablo (Pablo_Stafforini) · 2012-11-10T17:17:46.746Z · LW(p) · GW(p)

Could you provide a link to a blog post or essay where Eliezer endorses moral realism? Thanks!

Replies from: thomblake
comment by thomblake · 2012-11-12T14:17:54.147Z · LW(p) · GW(p)

Sorting Pebbles Into Correct Heaps notes that 'right' is the same sort of thing as 'prime' - it refers to a particular abstraction that is independent of anyone's say-so.

Though Eliezer is also a sort of moral subjectivist; if we were built differently, we would be using the word 'right' to refer to a different abstraction.

Really, this is just shoehorning Eliezer's views into philosophical debates that he isn't involved in.

comment by somervta · 2012-11-10T04:48:55.497Z · LW(p) · GW(p)

"It doesn't really matter whether you're a moral realist or not - instrumental rationality is about achieving your goals, whether they're good goals or not."

It seems to me that moral realism is an epistemic claim - it is a statement about how the world is - or could be - and that is definitely a matter that impinges on rationality.

comment by Kindly · 2012-11-09T20:13:44.734Z · LW(p) · GW(p)

Even if I didn't care one jot about increasing happiness and decreasing suffering in the world, then I think I still ought to increase happiness and decrease suffering.

This seems to be similar to Eliezer's beliefs. Relevant quote from Harry Potter and the Methods of Rationality:

"No," Professor Quirrell said. His fingers rubbed the bridge of his nose. "I don't think that's quite what I was trying to say. Mr. Potter, in the end people all do what they want to do. Sometimes people give names like 'right' to things they want to do, but how could we possibly act on anything but our own desires?"

"Well, obviously," Harry said. "I couldn't act on moral considerations if they lacked the power to move me. But that doesn't mean my wanting to hurt those Slytherins has the power to move me more than moral considerations!"

Replies from: somervta
comment by somervta · 2012-11-10T04:39:07.952Z · LW(p) · GW(p)

I don't think that's what Harry is saying there. Your quote from HPMOR seems to me to be more about the recognition that moral considerations are only one aspect of a decision-making process (in humans, anyway), and that just because that is true doesn't mean that moral considerations won't have an effect.

comment by dac69 · 2012-07-18T23:00:22.503Z · LW(p) · GW(p)

Hello, everyone!

I'd been religious (Christian) my whole life, but was always plagued with the question, "How would I know this is the correct religion, if I'd grown up with a different cultural norm?" I concluded, after many years of passive reflection, that, no, I probably wouldn't have become Christian at all, given that there are so many good people who do not. From there, I discovered that I was severely biased toward Christianity, and in an attempt to overcome that bias, I became atheist before I realized it.

I know that last part is a common idiom that's usually hyperbole, but I really did become atheist well before I consciously knew I was. I remember reading HPMOR, looking up lesswrong.com, reading the post on "Belief in Belief", and realizing that I was doing exactly that: explaining an unsupported theory by patching the holes, instead of reevaluating and updating, given the evidence.

It's been more than religion, too, but that's the area where I really felt it first. Next projects are to apply the principles to my social and professional life.

Replies from: jacoblyles
comment by jacoblyles · 2012-07-18T23:43:10.562Z · LW(p) · GW(p)

Welcome!

The least attractive thing about the rationalist lifestyle is nihilism. It's there, it's real, and it's hard to handle. Eliezer's solution is to be happy, and the nihilism will leave you alone. But if you have a hard life, you need a way to spontaneously generate joy. That's why so many people turn to religion as a comfort when they are in bad situations.

The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism. I'm looking into Tai Chi as a replacement for going to church. But that's still eastern mumbo-jumbo as opposed to western mumbo-jumbo. Stoicism might be the most rational joy machine I can find.

Let me know if you ever un-convert.

Replies from: Oscar_Cunningham, Nornagest, moocow1452
comment by Oscar_Cunningham · 2012-07-19T10:42:22.574Z · LW(p) · GW(p)

The problem that I find is that all ways to spontaneously generate joy have some degree of mysticism.

What? What about all the usual happiness-inducing things? Listening to music that you like; playing games; watching your favourite TV show; being with friends? Maybe you've ruled these out as not being spontaneous? But going to church isn't less effort than a lot of things on that list.

comment by Nornagest · 2012-07-19T00:43:18.685Z · LW(p) · GW(p)

I suspect that a tendency towards mysticism just sort of spontaneously accretes onto anything sufficiently esoteric; you can see this happening over the last few decades with quantum mechanics, and to a lesser degree with results like Gödel's incompleteness theorems. Martial arts is another good place to see this in action: most of those legendary death touch techniques you hear about, for example, originated in strikes that damaged vulnerable nerve clusters or lymph nodes, leading to abscesses and eventually a good chance of death without antibiotics. All very explicable. But layer the field's native traditional-Chinese-medicine metaphor over that and run it through several generations of easily impressed students, partial information, and novelists without any particular incentive to be realistic, and suddenly you've got the Five-Point Palm Exploding Heart Technique.

So I don't think the mumbo-jumbo is likely to be strictly necessary to most eudaemonic approaches, Eastern or Western. I expect it'd be difficult to extract from a lot of them, though.

Replies from: Oligopsony
comment by Oligopsony · 2012-07-19T00:47:44.524Z · LW(p) · GW(p)

So I don't think the mumbo-jumbo is likely to be strictly necessary to most eudaemonic approaches, Eastern or Western. I expect it'd be difficult to extract from a lot of them, though.

It would be difficult to do it on your own, but it's not very hard to find e.g. guides to meditation that have been bowdlerized of all the mysterious magical stuff.

comment by moocow1452 · 2012-08-17T21:25:15.788Z · LW(p) · GW(p)

Maybe it's incomprehensibility itself that makes some people happy? If you don't understand it, you don't feel responsible, and ignorance being bliss, all that weird stuff there is not your problem, and that's the end of it as far as your monkey bits are concerned.

comment by Despard · 2012-07-20T01:13:23.958Z · LW(p) · GW(p)

Hello everyone,

Thought it was about time to do one of these since I've made a couple of comments!

My name's Carl. I've been interested in science and why people believe the strange things they believe for many years. I was raised Catholic but came to the conclusion around the age of ten that it was all a bit silly really, and as yet I have found no evidence that would cause me to update away from that.

I studied physics as an undergrad and switched to experimental psychology for my PhD, being more interested at that point in how people work than how the universe does. I started to study motor control and after my PhD and a couple of postdocs I know way more about how humans move their arms than any sane person probably should. I've worked in behavioural, clinical and computational realms, giving me a wide array of tools to use when analysing problems.

My current postdoc is coming to an end and a couple of months ago I was undergoing somewhat of a crisis. What was I doing, almost 31 and with no plan for my life? I realised that motor control had started to bore me but I had no real idea what to do about it. Stay in science, or abandon it and get a real job? That hurts after almost a decade of high-level research. And then I discovered, on Facebook, a link to HPMOR. And then I read it all, in about a week. And then I found LW, and a job application for curriculum design for a new rationality institute, and I wrote an email, and then flew to San Francisco to participate in the June minicamp...

And now I'm in the midst of writing some fellowship applications to come to Berkeley and study rationality - specifically how the brain is Bayesian in some ways but not in others, and how that can inform the teaching of rationality. (Or something. It's still in the planning stages!) I'm also volunteering for CFAR at the moment by helping to find useful papers on rationality and cognitive science, though that's on somewhat of a back burner since these fellowships are due very soon. Next month, in fact.

I've started a new blog: it's called 'Joy in the Merely Real', and at the moment I'm exploring a few ideas about the Twelve Virtues of Rationality and what I think about them. You can find it at:

themerelyreal.blogspot.com

Looking forward to doing more with this community in the coming months and years. :)

comment by [deleted] · 2013-02-01T18:16:34.535Z · LW(p) · GW(p)

Greetings LWers,

I'm an aspiring Friendliness theorist, currently based at the Australian National University -- home to Marcus Hutter, Rachael Briggs and David Chalmers, amongst others -- where I study formal epistemology through the Ph.B. (Hons) program.

I wasn't always in such a stimulating environment -- indeed I grew up in what can only be deemed intellectual deprivation, from which I narrowly escaped -- and, as a result of my disregard for authority and disdain for traditional classroom learning, I am largely self-taught. Unlike most autodidacts, though, I never was a voracious reader; on the contrary, I barely opened books at all, instead preferring to think things over in my head. This has left me an ignorant person -- something I'm constantly striving to improve on -- but has also protected me from many diseased ideas and even allowed me to better appreciate certain notions by having to rediscover them myself. (Case in point: throughout my adolescence I took great satisfaction in analysing my mental mechanisms and correcting for what I now know to be biases, yet I never came across the relevant literature, essentially missing out on a wealth of knowledge.)

For a long time I've aspired to join a cultural movement modelled on the principles of the Enlightenment and, to my eyes, LW, MIRI, CFAR, FHI and CSER are exactly the kind of community that can impact society through the use of reason. Alas, I was long unaware of their existence and when I first heard about the 'Singularity' I immediately dismissed it as the science fiction it sounds like, but thankfully this is no longer the case and I can now start making my modest contributions to reducing existential risk.

Lastly, I've never had my IQ measured properly -- passing the Mensa admission test places me at least two SDs above the norm, but that's hardly impressive by LW standards -- and, as much as I value such an indicator, I'm too emotionally invested in my intelligence to dare undergo psychometric testing. (for what it's worth, as a child my development was precocious -- e.g. the development of my motor skills was superior to that of the subjects taking part in this well-known longitudinal study)

I've opened up a lot to you, LWers; I hope my only regret will be not having discovered you earlier...

Replies from: Eliezer_Yudkowsky, Kawoomba, shminux
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-01T23:56:02.065Z · LW(p) · GW(p)

Nice! What part of FAI interests you?

Replies from: None
comment by [deleted] · 2013-02-02T09:50:21.857Z · LW(p) · GW(p)

Too soon to say, as I discovered FAI a mere two months ago -- this, incidentally, could mean that it's a fleeting passion -- but CEV has definitely caught my attention, while the concept of a reflective decision theory I find really fascinating. The latter is something I've been curious about for quite some time, as plenty of moral precepts seem to break down once an agent -- even a mere homo sapiens -- reaches certain levels of self-awareness and, thus, is able to alter their decision mechanisms.

comment by Kawoomba · 2013-02-01T18:45:01.527Z · LW(p) · GW(p)

Lastly, I've never had my IQ measured properly -- passing the Mensa admission test places me at least two SDs above the norm

Isn't that a proper IQ test? At least it is where I live. Funny how we like to talk about things we're good at. The real test is "time from passing test to time you leave to save the yearly fee."

I'm an aspiring Friendliness theorist, currently based at the Australian National University -- home to Marcus Hutter, Rachael Briggs and David Chalmers, amongst others -- where I study formal epistemology through the Ph.B. (Hons) program.

That's awesome. Don't miss Marcus' lectures -- such a sharp mind. Also, a MIDI of the Imperial March is (or used to be?) playing on his home page.

Replies from: None
comment by [deleted] · 2013-02-01T19:54:24.143Z · LW(p) · GW(p)

Isn't that a proper IQ test? At least it is where I live.

Yes and no; it's some version of the Cattell, but it's not administered individually, has a lowish ceiling and they don't reveal your exact result.

The real test is "time from passing test to time you leave to save the yearly fee."

For the record, you needn't join in order to take their heavily subsidised admission test.

Replies from: Kawoomba
comment by Kawoomba · 2013-02-01T19:58:28.298Z · LW(p) · GW(p)

(...) has a lowish ceiling and they don't reveal your exact result.

Is your info Aussie-specific? (EDIT: We're not quite antipodes, but not far off, either.) They did reveal the result when I took it; the ceiling was 145, and it was administered in a group setting.

For the record, you needn't join in order to take their heavily subsidised admission test.

'Twas free even, in my case -- some kind of promotional offer.

Replies from: None
comment by [deleted] · 2013-02-01T20:50:54.959Z · LW(p) · GW(p)

Is your info Aussie-specific? (EDIT: We're not quite antipodes, but not far off, either.) They did reveal the result when I took it; the ceiling was 145, and it was administered in a group setting.

Yep, I had Australia in mind, though it's by no means the only country where it works that way. Also, various national Mensa chapters have stopped releasing scores -- something to do with egalitarianism, go figure... -- and pardon my imprecise language: by lowish I meant around 145 SD15. (didn't mean it in a patronising manner; it's just that plenty of tests have a ceiling of 160 SD15, and some, e.g. Stanford-Binet Form L-M, are employed even above that cutoff)

Replies from: Kawoomba
comment by Kawoomba · 2013-02-01T20:54:15.746Z · LW(p) · GW(p)

I do wonder whether someone who'd score, say, 155 on a test with a 160 ceiling would simply score 145 on a test with a 145 ceiling. You project an aura of knowledgeability on the subject, so I'll just go ahead and ask you. Consider yourself asked.

Replies from: None
comment by [deleted] · 2013-02-01T21:09:03.667Z · LW(p) · GW(p)

I'm afraid I'm not sufficiently knowledgeable to answer that and I have no intention of becoming one of those self-proclaimed internet experts! (plus the rest of the internet, outside of LW, already does a good enough job at spreading misinformation)

comment by shminux · 2013-02-01T18:53:28.993Z · LW(p) · GW(p)

I'm an aspiring Friendliness theorist

"machine/emergent intelligence theorist" would not box you in as much. Friendliness is only one model, you know, no matter how convincing it may sound.

Replies from: None
comment by [deleted] · 2013-02-01T18:55:03.953Z · LW(p) · GW(p)

"machine intelligence researcher" is also much more employable -- which isn't saying much.

Replies from: None
comment by [deleted] · 2013-02-01T19:50:24.744Z · LW(p) · GW(p)

One can signal differently to make oneself more palatable to different audiences and, indeed, "machine/emergent intelligence theorist" is less confining, while "machine intelligence researcher" is more suitable for academia or industry; here at LW, however, I needn't conceal my specific interests, which happen to be in AI safety and friendliness.

comment by [deleted] · 2012-07-19T22:45:01.168Z · LW(p) · GW(p)

Hello everyone! I've been a lurker on here for a while, but this is my first post. I've held out on posting anything because I've never felt like I knew enough to actually contribute to the conversation. Some things about me:

I'm 22, female, and a recent college graduate with a degree in computer science. I'm currently employed as a software engineer at a health insurance company, though I'm looking into getting into research someday. I mainly enjoy science, video games, and drawing.

I found this site through a link on the Skeptics Stack Exchange; the post that brought me over here was about cryonics. I've been reading the site for about six months now and have found it extremely helpful. It has also been depressing, though, because I've since realized that many of the "problems" in the world are caused by the ineptitude of the species and aren't easily fixed. I've had some problems with existential nihilism since then, and if anyone has any advice on the matter, I'd love to hear it.

My journey to rationality probably started with atheism and a real understanding of the scientific method and human psychology. I grew up Mormon, which has since given me some interesting perspectives on groupthink and the general problem of humanity. Leaving Mormonism is what prompted me to try to understand why and how so many people could be so systematically insane.

In some ways, I've also found this very isolating because I now have a hard time relating to a lot of people. Just sitting back and watching the ways people destroy themselves and others is very frustrating. It's made worse by my knowledge that I must also be doing this to myself, albeit on a smaller level.

Anyway, it's been a pleasure meeting you all, and I will try to comment more! I really enjoy this site, and everyone on it seems to make very good comments.

Replies from: fiddlemath
comment by fiddlemath · 2012-07-29T14:11:25.651Z · LW(p) · GW(p)

It has also been depressing, though, because I've since realized that many of the "problems" in the world are caused by the ineptitude of the species and aren't easily fixed. I've had some problems with existential nihilism since then, and if anyone has any advice on the matter, I'd love to hear it.

You describe "problems with existential nihilism." Are these bouts of disturbed, energy-sucking worry about the sheer uselessness of your actions, each lasting between a few hours and a few days? Moreover, did you have similar bouts of worry about other important seeming questions before getting into LW?

Replies from: None
comment by [deleted] · 2012-08-14T20:48:12.629Z · LW(p) · GW(p)

Yes, that is how I would describe it. It normally comes and goes, with the longest period lasting a few weeks. I'm not entirely sure if it's a byproduct of recent life events or if I am suffering from regular depression, but it's something I've had on and off for a few years. LW hasn't specifically made it worse, but it hasn't made it better either.

Replies from: fiddlemath
comment by fiddlemath · 2012-08-15T15:07:47.950Z · LW(p) · GW(p)

In that case, it sounds very, very similar to what I've learned to deal with -- especially as you describe feeling isolated from the people around you. I started to write a long, long comment, and then realized that I'd probably seen this stuff written down better, somewhere. This matches my experience precisely.

For me, the most important realization was that the feeling of nihilism presents itself as a philosophical position, but is never caused or dispelled by philosophy. You can ruminate forever and find no reason to value anything; philosophical nihilism is fully internally consistent. Or, you can get exercise, and spend some time with friends, and feel better due not to philosophy, but to physiology. (I know this is glib, and that getting exercise when you just don't care about anything isn't exactly easy. The link above discusses this.)

The post above, and Alicorn's sequence on luminosity -- effective self-awareness -- probably lay out the right steps to take, if you'd like to most effectively avoid these crappy moods.

Moreover, if you'd like to chat more, over Skype some time, or via PM, or whatever, I'd be happy to. I'm pretty busy, so there may be high latency, but it sounds like you're dealing with things that are very similar to my own experience, and I've partly learned how to handle this stuff over the past few years.

comment by wsean · 2012-07-18T19:22:02.368Z · LW(p) · GW(p)

Hi! Long-time lurker, first-time... joiner?

I was inspired to finally register by this post being at the top of Main. Not sure yet how much I'll actually post, but the passive barrier of, you know, not actually being registered is gone, so we'll see.

Anyway. I'm a dude, live in the Bay Area, work in finance though I secretly think I'm actually a writer. I studied cog sci in college, and that angle is what I tend to find most interesting on Less Wrong.

I originally came across LW via HPMoR back in 2010. Since then, I've read the Sequences, been to a few meetups, and attended the June minicamp (which, P.S., was awesome).

I'm still struggling a bit with actually applying rationality tools in my life, but it's great to have that toolbox ready and waiting. Sometimes... I hear it calling out to me. "Sean! This is an obvious place to apply Bayes! Seaaaaaaan!"

Replies from: Nisan
comment by Nisan · 2012-07-18T20:01:00.579Z · LW(p) · GW(p)

Welcome!

comment by Davidmanheim · 2012-07-20T00:11:04.807Z · LW(p) · GW(p)

Hi all,

Not quite recently joined -- when I first joined, I read some, then got busy and didn't participate after that.

Age: Not yet 30.
Former occupation: Catastrophe risk modeling.
New occupation: Graduate student, Public Policy, RAND Corporation.

Theist Status: Orthodox Jew, happy with the fact that there are those who correctly claim that I cannot prove that god exists, and very aware of the confirmation bias and lack of skepticism in most religious circles. It's one reason I'm here, actually. And I'll be glad to discuss it in the future, elsewhere.

I was initially guided here, about a year ago, by a link to The Best Textbooks on Every Subject. I was a bit busy working at the time, building biased mathematical models of reality. (Don't worry, they weren't MY biases; they were those of the senior people and those of the insurance industry. And they were normalized to historical experience, so as long as history is a good predictor of the future...) So I decided that I wanted to do something different: possibly something with more positive externalities, less short-term thinking about how the world could be more profitable for my employer, and more long-term thinking about how it could be better for everyone.

Skip forward: I'm going to graduate school for Policy Analysis at RAND, and they asked us to read Thinking, Fast and Slow by Kahneman -- and I'm a big fan of his. While reading and thinking about it, I wanted to reference something I read on here, but couldn't remember the name of the site. I ended up Googling my way to a link to HP:MOR, which I read in about a day (yesterday, actually), and a link back here. So now LW is in my RSS reader, and I'm here to improve myself and my mind, and become a bit less wrong.

comment by SamLL · 2013-02-09T02:02:19.955Z · LW(p) · GW(p)

Hello and goodbye.

I'm a 30 year old software engineer with a "traditional rationalist" science background, a lot of prior exposure to Singularitarian ideas like Kurzweil's, with a big network of other scientist friends since I'm a Caltech alum. It would be fair to describe me as a cryocrastinator. I was already an atheist and utilitarian. I found the Sequences through Harry Potter and the Methods of Rationality.

I thought it would be polite, and perhaps helpful to Less Wrong, to explain why I, despite being pretty squarely in the target demographic, have decided to avoid joining the community and would recommend the same to my friends, or to others when I hear it discussed elsewhere on the net.

I read through the entire Sequences and was informed and entertained; I think there are definitely things I took from it that will be valuable ("taboo" this word; the concept of trying to update your probability estimates instead of waiting for absolute proof; etc.)

However, there were serious sexist attitudes that hit me like a bucket of cold water to the face - assertions that understanding anyone of the other gender is like trying to understand an alien, for example.

Coming here to Less Wrong, I posted a little bit about that, but I was immediately struck in the "sequence rerun" by people talking about what a great utopia the gender-segregated "Failed Utopia 4-2" would be.

Looking around the site even further, I find that it is over 90% male as of the last survey, and just a lot of gender essentialist, women-are-objects-not-people-like-us crap getting plenty of upvotes.

I'm not really willing to put up with that and still less am I enthused about identifying myself as part of a community where that's so widespread.

So, despite what I think could be a lot of interesting stuff going on, I think this will be my last comment and I would recommend against joining Less Wrong to my friends. I think it has fallen very squarely into the "nothing more than sexism, the especially virulent type espoused by male techies who sincerely believe that they are too smart to be sexists" cognitive failure mode.

If you're interested in one problem that is causing at least one rationalist to bounce off your site (and, I think the odds are not unreasonable that where one person writes a long heartfelt post, multiple others just click away), here you go. If not, go ahead and downvote this into oblivion.

Perhaps I'll see you folks in some years if this problem here gets solved, or some more years after that when we're all unfrozen and immortal and so forth.

Sincerely,

Sam

Replies from: Qiaochu_Yuan, Eliezer_Yudkowsky, army1987, Kawoomba, earthwormchuck163
comment by Qiaochu_Yuan · 2013-02-09T02:18:03.432Z · LW(p) · GW(p)

Thanks for writing this. It's true that LW has a record of being bad at talking about gender issues; this is a problem that has been recognized and commented on in the past. The standard response seems to have been to avoid gender issues whenever possible, which is unfortunate but maybe better than the alternative. But I would still like to comment on some of the specific things you brought up:

assertions that understanding anyone of the other gender is like trying to understand an alien, for example.

I think I know the post you're referring to, I didn't read this as sexist, and I don't think that indicates a male-techy failure mode on my part about sexism. Some men are just really, really bad at understanding women (and maybe commit the typical mind fallacy when they try to understand men, and maybe just don't know anyone who doesn't fall into one of those categories), and I don't think they should be penalized for being honest about this.

gender essentialist

I haven't seen too much of this. Edit: Found some more.

women-are-objects-not-people-like-us crap

Where? Edit: Found some of this too.

I think it has fallen very squarely into the "nothing more than sexism, the especially virulent type espoused by male techies who sincerely believe that they are too smart to be sexists" cognitive failure mode.

This is a somewhat dangerous weapon to wield. It is very easy to classify any attempt to counter this argument as falling into the failure mode you describe; please don't use this as a fully general counterargument.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-02-09T16:57:27.135Z · LW(p) · GW(p)

Try to keep in mind selection effects. The post was titled Failed Utopia - people who agreed with this may have posted less than those who disagreed.

I confess to being somewhat surprised by this reaction. Posts and comments about gender probably constitute around 0.1% of all discussion on LessWrong.

Replies from: Wei_Dai, Kawoomba, Risto_Saarelma
comment by Wei Dai (Wei_Dai) · 2013-02-10T10:01:41.020Z · LW(p) · GW(p)

Whenever I see a high-quality comment made by a deleted account (see for example this thread where the two main participants are both deleted accounts), I want to look over their comment history to see if I can figure out what sequence of events alienated them and drove them away from LW, but unfortunately the site doesn't allow that. Here SamLL provided one data point, for which I think we should be thankful, but keep in mind that many more people have left without leaving visible evidence of the reason.

Also, aside from the specific reasons for each person leaving, I think there is a more general problem: why do perfectly reasonable people see a need to not just leave LW, but to actively disidentify or disaffiliate with LW, either through an explicit statement (SamLL's "still less am I enthused about identifying myself as part of a community where that's so widespread"), or by deleting their account? Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Replies from: Gastogh, prase, Kawoomba, Eugine_Nier, Kindly
comment by Gastogh · 2013-02-10T12:10:08.330Z · LW(p) · GW(p)

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Some possibilities:

  1. There have been deliberate efforts at community-building, as evidenced by all the meetup-threads and one whole sequence, which may suggest that one is supposed to identify with the locals. Even relatively innocuous things like introduction and census threads can contribute to this if one chooses to take a less than charitable view of them, since they focus on LW itself instead of any "interesting idea" external to LW.

  2. Labeling and occasionally hostile rhetoric: Google gives dozens of hits for terms like "lesswrongian" and "LWian", and there have been recurring dismissive attitudes regarding The Others and their intelligence and general ability. This includes all snide digs at "Frequentists", casual remarks to the effect of how people who don't follow certain precepts are "insane", etc.

  3. The demographic homogeneity probably doesn't help.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2013-02-11T03:17:36.243Z · LW(p) · GW(p)

I agree with these, and I wonder how we can counteract these effects. For example I've often used "LWer" as shorthand for "LW participant". Would it be better to write out the latter in full? Should we more explicitly invite newcomers to think of LW in instrumental/consequentialist terms, and not in terms of identity and affiliation? For example, we could explain that "joining the LW community" ought to be interpreted as "making use of LW facilities and contributing to LW discussions and projects" rather than "adopting 'LW member' as part of one's social identity and endorsing some identifying set of ideas", and maybe link to some articles like Paul Graham's Keep Your Identity Small.

Replies from: None, IlyaShpitser, shminux
comment by [deleted] · 2013-02-11T03:51:21.306Z · LW(p) · GW(p)

"Here at LW, we like to keep our identity small."

Replies from: shminux
comment by shminux · 2013-02-11T04:50:57.237Z · LW(p) · GW(p)

Nice one.

comment by IlyaShpitser · 2013-02-21T09:28:29.256Z · LW(p) · GW(p)

Should we more explicitly invite newcomers to think of LW in instrumental/consequentialist terms, and not in terms of identity and affiliation?

I think so. The other thing about the "snide digs" the grandparent mentions is that they are not just bad for our image; they are also wrong (as in incorrect). I think the LW "hit rate" on specific enough technical matters is not all that good, to be honest.

comment by shminux · 2013-02-11T04:50:14.685Z · LW(p) · GW(p)

One of the times the issue of overidentifying with LW came up here, about a year ago, I mentioned that my self-description is "LW regular [forum participant]". It means that I post regularly, but does not mean that I derive any sense of identity from it. "LWer" certainly sounds more like "this is my community", so I stay away from using it except toward people who explicitly self-identify as such. I also tend to discount quite a bit of what someone here posts, once I notice them using the pronoun "we" when describing the community, unless I know for sure that they are not caught up in the sense of belonging to a group of cool "rationalists".

Replies from: satt
comment by satt · 2013-02-11T07:13:38.464Z · LW(p) · GW(p)

I think the "LWer" appellation is just plain accurate (but then I've used the term myself). Any blog with a regular group of posters & commenters constitutes a community, so LW is a community. Posting here regularly makes us members of this community by default, and being coy about that fact would make me feel odd, given that we've strewn evidence of it all over the site. But I suspect I'm coming at this issue from a bit of an odd angle.

comment by prase · 2013-02-11T01:22:11.369Z · LW(p) · GW(p)

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

It may be because a lot of LW regulars visibly think of it in terms of identity. LW is described by most participants as a community rather than a discussion forum, and there has been a lot of explicit effort to strengthen the communitarian aspect.

comment by Kawoomba · 2013-02-10T10:32:24.821Z · LW(p) · GW(p)

why do perfectly reasonable people see a need to not just leave LW, but to actively disidentify or disaffiliate with LW

As a hypothesis, they may be ambivalent about discontinuing their hobby ("Two souls, alas! are dwelling in my breast...") and prefer to burn their bridges to avoid further ambivalence and decision pressures. Many prefer a course of action being locked in, as opposed to continually being tempted by the alternative.

comment by Eugine_Nier · 2013-02-11T05:47:25.618Z · LW(p) · GW(p)

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

Some people come from a background where they're taught to think of everything in terms of identity.

comment by Kindly · 2013-02-10T16:36:53.381Z · LW(p) · GW(p)

Why are we causing them to think of LW in terms of identity in the first place, instead of, say, a place to learn about and discuss some interesting ideas?

LW is a hub for several abnormal ideas. An implication that you're affiliated with LW is an implication that you take these ideas seriously, which no reasonable person would do.

comment by Kawoomba · 2013-02-09T17:24:03.709Z · LW(p) · GW(p)

Your comment's first sentence answers your second paragraph.

comment by Risto_Saarelma · 2013-02-10T06:55:58.871Z · LW(p) · GW(p)

I guess you get considered fully unclean even if you're only observed breaking a taboo a few times.

comment by A1987dM (army1987) · 2013-02-09T16:35:46.315Z · LW(p) · GW(p)

Coming here to Less Wrong, I posted a little bit about that, but I was immediately struck in the "sequence rerun" by people talking about what a great utopia the gender-segregated "Failed Utopia 4-2" would be.

Did you use a Rawlsian veil of ignorance when judging it? From a totally selfish point of view, I would very, very, very much rather be myself in this world than myself in that scenario (given that, among plenty of other things, I dislike most people of my gender), but think of, say, starving African children or people with disabilities. I don't know much about what it feels like to be in such dire straits, so I'm not confident that I'd rather be a randomly chosen person in Failed Utopia 4-2 than a randomly chosen person in the actual world, but the idea doesn't sound obviously absurd to me.

Replies from: Kawoomba
comment by Kawoomba · 2013-02-09T17:21:27.603Z · LW(p) · GW(p)

I dislike most people of my gender

Is that ... like ... allowed?

edit: I agree with you and object to all the conditioning against contradicting "sacred" values (sexism = ugh, bad).

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-09T17:43:50.518Z · LW(p) · GW(p)

By whom? (Of course, that's not literally true, since the overwhelming majority of the 3.5 billion male humans alive are people I've never met or heard of, and so have little reason to dislike, but...)

comment by Kawoomba · 2013-02-09T06:18:19.162Z · LW(p) · GW(p)

Since I cannot imagine anything but a few cherry-picked examples that could have led to your impression, let me use some of my own (the number of cases is low):

The extremely positive reception of Alicorn's "Living Luminously" sequence (karma +50 for the main post alone) and Anja's great, technical posts (karma +13, +34, +29) all indicate that good content is not filtered along gender lines, which it would be if there were some pervasive bias.

Even asserting that understanding anyone of the other gender is "like trying to understand an alien" does not imply any sort of male superiority complex. If what you object to as sexism is just pointing out that there are differences based on both culture and genetics, well, you got me there. Quite obviously there are; I assume you don't live in a hermaphrodite community. Why is it bad when/if that comes up? Forbidden knowledge?

If you're interested in one problem that is causing at least one rationalist to bounce off your site (...)

Are you sure that's the rationalist thing to do? Gender imbalance and a few misplaced or easily misinterpreted remarks need not be representative of a community, just as a predominantly male CS program at Caltech and frat jokes need not be representative of college culture.

Replies from: jooyous
comment by jooyous · 2013-02-09T07:06:02.052Z · LW(p) · GW(p)

Gender imbalances and the occasional frat joke didn't cause you to leave Caltech.

It's possible that user is sensitive to gender issues precisely because it's comparatively difficult and not entirely rationalist to leave a community like Caltech.

It's generally the stance of gender-sensitive humans that no one should have to listen to the occasional frat joke if they don't want to. I agree with everything else in your post; that final "can't you take a frat joke?" strikes me as defensive and unnecessary.

Replies from: Kawoomba
comment by Kawoomba · 2013-02-09T07:32:12.442Z · LW(p) · GW(p)

You're right, it was too carelessly formulated.

Replies from: jooyous
comment by jooyous · 2013-02-09T07:39:58.537Z · LW(p) · GW(p)

Will you fix it? =) Is there an established protocol for fixing these sorts of things?

Replies from: Manfred
comment by Manfred · 2013-02-10T19:32:50.911Z · LW(p) · GW(p)

The edit button? :P

Replies from: Kawoomba
comment by Kawoomba · 2013-02-10T19:42:51.401Z · LW(p) · GW(p)

Is that a protocol, strictly speaking? "Pressing the edit button" would be a protocol with only one action (not sufficient).

Maybe there will be a policy post on this soon.

Replies from: Manfred
comment by Manfred · 2013-02-10T20:00:35.063Z · LW(p) · GW(p)

You're right; strictly speaking, the protocol would be TCP/IP. :)

(There is no mandatory or even authoritative social protocol for this situation. The typical behavior is editing and then adding an "EDIT:" with a brief explanation of the edit, but just editing with no explanation is also fine, particularly if nobody's replied yet, or if the edit is explained in child comments.)

Replies from: Kawoomba
comment by Kawoomba · 2013-02-10T20:06:25.818Z · LW(p) · GW(p)

just editing with no explanation is also fine, particularly if nobody's replied yet

Well, earlier today I clarified (euphemism for edited) a comment shortly after it was made, then found a reply that cited the old, unclarified version. You know what that looks like, once the tribe finds out? OhgodImdone.

In a hushed voice: I just found out that EY can edit his comments without an asterisk appearing.

comment by earthwormchuck163 · 2013-02-09T04:22:48.412Z · LW(p) · GW(p)

Why not stay around and try to help fix the problem?

Replies from: Nornagest, wedrifid
comment by Nornagest · 2013-02-09T05:30:11.659Z · LW(p) · GW(p)

Ordinarily I'd leave this for SamLL to respond to, but I'd say the chances of getting a response in this context are fairly low, so hopefully it won't be too presumptuous for me to speculate.

First of all, we as a community suck at handling gender issues without bias. The reasons for this could span several top-level posts and in any case I'm not sure of all the details; but I think a big one is the unusually blurry lines between research and activism in that field and consequent lack of a good outside view to fall back on. I don't think we're methodologically incapable of overcoming that, but I do think that any serious attempt at doing so would essentially convert this site into a gender blog.

To make matters worse, for one inclined to view issues through the lens of gender politics, Failed Utopia 4-2 is close to the worst starting point this site has to offer. Never mind the explicitly negative framing, or its place within the fun theory sequence: we have here a story that literally places men on Mars on gender-essentialist grounds, and doesn't even mention nonstandard genders or sexual preferences. No, that's not meant to be taken all that seriously or to inform people's real behavior. Doesn't matter. We're talking enormously poor associations here.

From there, the damage has basically been done. If you take that as a starting point and look around the site with gender in mind -- perhaps not even consciously trying to vet things in those terms, but having framed things in that way -- you aren't going to go anywhere good with it. Facts like the predominately male gender mix (which I'd be inclined to explain in terms of background demographics; computer science is the dominant intellectual framework here and that field's even more gender-skewed) or the evopsych reasoning we use occasionally start to look increasingly sinister, and every related data point's going to build on an already dismal impression. These data points are in fact pretty sparse -- we don't talk much about gender here, for what I see as good reasons -- but they're fairly salient if you're looking for them. And there aren't many pointing in the other direction.

I don't agree with the conclusion. But I can see where it's coming from, and once it's been accepted sticking around to fight a presumptively hopeless battle wouldn't be a very smart move. Now, can we prevent impressions like this from being formed without losing sight of our primary goals or engaging in types of moderation that aren't going to happen with our current leadership and culture? That I'm not sure of.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-09T16:48:08.111Z · LW(p) · GW(p)

we as a community suck at handling gender issues without bias.

As far as I can tell, we as a species suck at handling gender issues without bias; the closest things to exceptions that I recall seeing are some (not all) articles (but usually not the comments) on the Good Men Project, and the discussions on Yvain's “The $Nth Meditation on $Thing” blog post series.

Replies from: Nornagest, shminux
comment by Nornagest · 2013-02-09T18:55:39.539Z · LW(p) · GW(p)

Yeah, I was fairly impressed with Yvain's posts on the subject; if we did want to devote some serious effort to tackling this issue, I can think of far worse starting points.

comment by shminux · 2013-02-11T04:55:19.714Z · LW(p) · GW(p)

we as a species suck at handling gender issues without bias

s/gender//

Though I think that this particular forum sucks less at handling at least some issues.

comment by wedrifid · 2013-02-09T06:30:18.261Z · LW(p) · GW(p)

Why not stay around and try to help fix the problem?

Fixing the problem needs fewer people with a highly polarizing agenda, not more.

comment by ViEtArmis · 2012-07-19T16:41:25.003Z · LW(p) · GW(p)

Hello! I'm David.

I'm 26 (at the time of writing), male, and an IT professional. I have three (soon to be four) children, three (but not four) of whom have a different dad.

My immediate links here were the Singularity Institute and Harry Potter and the Methods of Rationality, which drove me here when I realized the connection (I came to those things entirely separately!). When I came across this site, I had read through the Wikipedia list of biases several times over the course of years, come to many conscious conclusions about the fragility of my own cognition, and had innumerable arguments with friends and family that changed minds, but I never really considered that there would be a large community of people who got together on those grounds.

I'm going to do the short version of my origin story here, since writing it all out seems both daunting and pretentious. I was raised rich and lucky by an entrepreneur/university professor/doctor father and a mother who always had to be learning something or go crazy (she did some of both). I dropped out of a physics major in college and got my degree in gunsmithing instead, but only after I worked a few years. Along the way, I've moved around politically and morally, but I'm worried that the settling of my moral and political beliefs is a symptom of my brain settling rather than the product of all my rationalizations.

There are a few reasons that I haven't commented on here yet (mostly because I despise any sort of hard work), and this is an attempt to break some of those inhibitions and maybe even get to know some people well enough (i.e. at all) to actively desire discourse.

Ok, David Fun Facts time:

  • I know enough Norwegian, Chinese, Latin, Lojban, and Spanish to do...something useful maybe?

  • I almost never think of what I'm saying before I say it (as in black-box), and I let it continue because it works.

  • Corollary: I curse a lot when I'm comfortable with people.

  • Corollary: My voice is low and loud, so it carries quite far.

  • I play a lot of video games, board games, and thought experiment games.

comment by RobertChange · 2013-01-17T21:35:24.051Z · LW(p) · GW(p)

Hi LWers,

I am Robert and I am going to change the world. Maybe just a little bit, but that’s ok, since it’s fun to do and there’s nothing else I need to do right now. (Yay for mini-retirements!)

I find some of the articles here on LW very useful, especially those on heuristics and biases, as well as the material on self-improvement, although I find it quite scattered among loads of far too theoretical stuff. Does it seem odd that I have learned many more useful tricks and gained more insight from reading HPMOR than from reading 30 to 50 high-rated and “foundational” articles on this site? I am sincerely sad that even the leading rationalists on LW seem to struggle to get actual benefits out of their special skills and special knowledge (Yvain: Rationality is not that great; Eliezer: Why aren't "rationalists" surrounded by a visible aura of formidability?), and I would like to help them change that.

My interest is mainly in contributing more structured, useful content, and in banding together with fellow LWers to practice and apply our rationalist skills. As a stretch goal, I think that we could pick someone really evil as our enemy and take them down, just to show our superiority. Let me stress that I am not kidding here. If rationality really counts for something (other than being good entertainment for sciency types and sci-fi lovers), then we should be able to find the right leverage points and play out a great plot which just leaves everyone gasping “shit!” And then we'll have changed the world, because people will start taking rationality seriously.

Let me send out a warm “thank you” to you all for welcoming me in your rationalist circles!

Replies from: John_Maxwell_IV, OrphanWilde, shminux, RobertChange, Kawoomba
comment by John_Maxwell (John_Maxwell_IV) · 2013-01-18T06:23:58.912Z · LW(p) · GW(p)

Welcome!

Why aren't "rationalists" surrounded by a visible aura of formidability?

Because they don't project high status with their body language?

Re: Taking out someone evil. Let's be rational about this. Do we want to get press? Will taking them out even be worthwhile? What sort of benefits from testing ideas against reality can we expect?

I think humans who study rationality might be better than other humans at avoiding certain basic mistakes. But that doesn't mean that the study of rationality (as it currently exists) amounts to a "success spray" that you can spray on any goal to make it more achievable.

Also, if the recent survey is to be believed, the average IQ at Less Wrong is very high. So if LW does accomplish something, it could very well be due to being smart rather than having read a bunch about rationality. (Sometimes I wonder if I like LW mainly because it seems to have so many smart people.)

Replies from: Peterdjones, MugaSofer
comment by Peterdjones · 2013-01-18T13:51:49.840Z · LW(p) · GW(p)

But that doesn't mean that the study of rationality (as it currently exists) amounts to a "success spray" that you can spray on any goal to make it more achievable.

Some LessWrongians believe it is.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-01-19T07:02:42.451Z · LW(p) · GW(p)

That comment doesn't rule out selection effects, e.g. the IQ thing I mentioned.

Replies from: Peterdjones
comment by Peterdjones · 2013-01-19T23:49:51.104Z · LW(p) · GW(p)

IQ without study will not make you a super philosopher or super anything else.

comment by MugaSofer · 2013-01-18T09:33:00.717Z · LW(p) · GW(p)

Don't be too pessimistic toward the newcomer, John. We're not completely useless. It doesn't grant any new abilities as such, admittedly, but if you're interested in making the right decision, then rationality is quite useful; to the extent that choosing correctly can help you, this is the place to be. Of course, how much the right choices can help you varies a bit, but it's hard to know how much you could achieve if you're biased, isn't it?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-01-19T07:00:22.399Z · LW(p) · GW(p)

It doesn't grant any new abilities as such, admittedly, but if you're interested in making the right decision, then rationality is quite useful; to the extent that choosing correctly can help you, this is the place to be.

Hm. My correction on that would be: to the extent that your native decision-making mechanisms are broken and can be fixed by reading blog posts on Less Wrong, this is the place to be. In other words, how useful the study of rationality is depends on how important, and how easily fixed, the bugs in human brains that Less Wrong targets are.

Many people are interested in techniques for becoming more successful and getting more out of life. Techniques range from reading The Secret to doing mindfulness meditation to reading Less Wrong. I don't see any a priori reason to believe that the ROI from reading Less Wrong is substantially higher than from other methods. (Though, come to think of it, self-improvement guru Sebastian Marshall gives LW a rave review. So in practice LW might work pretty well; but I don't think that is the sort of thing you can derive from first principles, it's really something that you determine through empirical investigation.)

comment by OrphanWilde · 2013-01-17T22:43:49.443Z · LW(p) · GW(p)

I'm evil by some people's standards. You'll have to get a little bit more specific about what you think constitutes evil.

From what I've seen, real evil tends to be petty. Most grand atrocities are committed by people who are simply incorrect about what the right thing to do is.

comment by shminux · 2013-01-17T23:18:01.335Z · LW(p) · GW(p)

If rationality really counts for something (other than being good entertainment for sciency types and sci-fi lovers), then we should be able to find the right leverage points and play out a great plot which just leaves everyone gasping “shit!”

You may follow HJPEV in calling world domination "world optimization", but running on some highly unreliable wetware means that grand projects tend to become evil despite best intentions, due to snowballing unforeseen ramifications. In other words, your approach seems to be lacking wisdom.

Replies from: jpaulson
comment by Jonathan Paulson (jpaulson) · 2013-01-18T02:09:32.545Z · LW(p) · GW(p)

You seem to be making a fully general argument against action.

Replies from: shminux
comment by shminux · 2013-01-18T03:20:03.917Z · LW(p) · GW(p)

Against any sweeping action without carefully considering and trying out incremental steps.

comment by RobertChange · 2013-01-19T17:10:08.219Z · LW(p) · GW(p)

Thanks to all for the warm welcome and the many curious questions about my ambition! And special thanks to MugaSofer, Peterdjones, and jpaulson for your argumentative support. I am very busy writing right now, and I hope that my posts will answer most of the initial questions, so I'll use the space here instead to write a little more about myself.

I grew up a true Ravenclaw, but after grad school I discovered that Hufflepuff's modesty and cheerful industry also have their benefits when it comes to my own happiness. HPMOR made me discover my inner Slytherin, because I realized that Ravenclaw knowledge and Hufflepuff goodness do not suffice to bring about great achievements. The word “ambition” in the first line of the comment is therefore meant in Professor Quirrell's sense. I also have a deep respect for the principles of Gryffindor's group (recently exemplified by names like A. Swartz and J. Assange, which have caught much mainstream attention), but I can't find anything of that spirit in myself. If I have ever appeared to be a hero, it was because I accidentally knew something that was of help to someone.

@shminux: I love incremental steps and try to incorporate them into any of my planning and acting! My mini-retirement is actually such a step that, if successful, I’d like to repeat and expand.

@John_Maxwell_IV: Yay for empirical testing of rationality!

@OrphanWilde: “Don't be frightened, don't be sad, We'll only hurt you if you're bad.” Or to put it into more utilitarian terms: If you are in the way of my ambition, for instance if I would have to hurt your feelings to accomplish any of my goals for the greater good, I would not hesitate to do what has to be done. All I want is to help people to be happy and to achieve their goals, whatever they are. And you’ll probably all understand that I might give a slight preference to helping people whose goals align with mine. ;-)

May you all be happy and healthy, may you be free from stress and anxiety, and may you achieve your goals, whatever they are.

comment by Kawoomba · 2013-01-17T22:23:09.222Z · LW(p) · GW(p)

Let me stress that I am not kidding here.

Anything more specific you have in mind?

comment by Error · 2012-09-14T16:14:36.319Z · LW(p) · GW(p)

Greetings. I am Error.

I think I originally found the place through a comment link on ESR's blog. I'm a geek, a gamer, a sysadmin, and a hobbyist programmer. I hesitate to identify with the label "rationalist"; much like the traditional meaning of "hacker", it feels like something someone else should say of me, rather than something I should prematurely claim for myself.

I've been working through the Sequences for about a year, off and on. I'm now most of the way through Metaethics. It's been a slow but rewarding journey, and I think the best thing I've taken from it is the ability to identify bogus thoughts as they happen. (Identifying them is not always the same as correcting them, unfortunately.) Another benefit, not specifically from the sequences but from link-chasing, is the realization that successful mental self-engineering is possible; I think the tipping point for me there was Alicorn's post about polyhacking. The realization inspired me to try to beat the tar out of my akrasia, and I've done fairly well so far.

My current interests center around "updating efficiently." I just turned 30; I burnt my 20s establishing a living instead of learning all the stuff I wanted to learn. I figure I only have so many years left before neural rigor mortis begins to set in, and there's more stuff I want to learn and more skills I want to acquire than time to do it in. So, how does one learn as much truth as possible while wasting as little time as possible on things that are wrong? The difficulty I see is that a layperson in a subject (the C programming language, for purposes of this example) can't tell the difference between K&R and Herbert Schildt, and may waste a lot of time on the latter when they should be inhaling the former or something similar. The "Best Textbooks" thread looks like it will be invaluable here.

A related concern is that some subjects in science don't lend themselves to easy verification. How does one construct an accurate model of a thing when, for reasons of cost or time, one can't directly compare one's map (or one's textbook's map) to the territory? I can read a great deal about, say, quantum mechanics, but without an atom smasher in my backyard it's difficult to check whether what I'm reading is correct. That's fine when dealing with something you know is settled science. It's harder when trying to draw accurate conclusions about things that are politically charged (e.g. global warming), or for which evidence in any direction is slim (e.g. cryonics).

Something else I'm interested in is the Less Wrong local meetups. There's one listed for my area (Atlanta) but it doesn't appear to be active. Finding interesting people is hard when you're excessively introverted. I've tried Mensa meetings, but most of the people there were nearly twice my age and I found it difficult to relate. Dragoncon worked out better (well, almost), but only happens once a year.

A fair number of intro posts seem to include religious leanings or (more frequently) lack thereof, so I'll add mine: I was raised mildly Christian but it began to fade out of my worldview around the time I read the bit about how disobedient children should be stoned to death. In retrospect my parents probably shouldn't have made me read the Bible on days that we skipped church. Churches leave that stuff out. Now I swing back and forth between atheism, misotheism, and discordianism, depending on how I'm feeling on any given day, and I don't take any of those seriously.

Is it still acceptable/advisable to comment in the Sequences, even as old as they are? Judging from the comment histories, some people still watch and answer in them. I doubt I'll muck around too much elsewhere until I've finished them.

Replies from: NancyLebovitz, shokwave
comment by NancyLebovitz · 2012-09-14T17:13:23.930Z · LW(p) · GW(p)

Welcome!

It's acceptable and welcome to comment in the Sequences. The Recent Comments feature (linked on the right sidebar, with distinct Recent Comments for the Main section and for the Discussion section) means that there's a chance that new comments on old threads will get noticed.

comment by shokwave · 2012-09-14T17:11:09.549Z · LW(p) · GW(p)

Welcome! Commenting on the Sequences isn't against any rules. You stand a chance of getting responses from those who watch the Recent Comments. However, in Discussion you'll see [SEQ RERUN] posts (which bring up old posts in the Sequences in chronological order) that encourage comments on the rerun, not the original. If you happen to be reading a post that's been recently re-run, you might get a better response in the rerun thread.

comment by CoffeeStain · 2013-02-08T11:15:39.505Z · LW(p) · GW(p)

Hey everyone,

As I continue to work through the sequences, I've decided to go ahead and join the forums here. A lot of the rationality material isn't conceptually new to me, although much of the language is very much so, and thus far I've found it to be exceptionally helpful to my thinking.

I'm a 24-year-old video game developer, having worked on graphics for a particular big-name franchise for a couple of years now. It's quite the interesting job, and is definitely one of the realms in which I find the heady, abstract rationality tools extremely helpful. Rationality is what it is, and that seems to be acknowledged here -- a fact I'm quite grateful for.

When I'm not discussing the down-to-earth topics here, people may find I have a sometimes anxiety-ridden attachment to certain religious ideas. Religious discussion has been extremely normal for me throughout my life, so while the discussion doesn't make me uncomfortable, my inability to come to answers that I'm happy with does, and it has caused me a bit of turmoil outside of discussion. Obviously there is much to say about this, and much people may like to say to me, but I'd like to first get through all the sequences, get my questions about them answered, pay attention a bit to the discussions here, and go from there. I have no grand hopes of finally putting these beliefs to rest, but I will go to some lengths to see whether that is something I should do. To pick either outcome in advance seems to me to suppose I already have a Way to rationality, if I understand the point correctly. I would invite any and all discussion on the topic, and I appreciate the little "welcome to Theists" in the main post here. :)

See you all around.

Replies from: Vaniver
comment by Vaniver · 2013-02-20T19:23:50.984Z · LW(p) · GW(p)

Welcome! Glad to see you here. :D

comment by maia · 2012-07-19T17:35:41.535Z · LW(p) · GW(p)

I've been commenting for a few months now, but never introduced myself in the prior Welcome threads. Here goes: Student, electrical engineering / physics (might switch to math this fall), female, DC area.

I encountered LW when I was first linked to Methods a couple years ago, but found the Sequences annoying and unilluminating (after having taken basic psych and stats courses). After meeting a couple of LWers in real life, including my now-boyfriend Roger (LessWrong is almost certainly a significant part of the reason we are dating, incidentally), I was motivated to go back and take a look, and found some things I'd missed: mostly, reductionism and the implications of having an Occam prior. This was surprising to me; after being brought up as an anti-religious nut, then becoming a meta-contrarian in order to rebel against my parents, I thought I had it all figured out, and was surprised to discover that I still had attachments to mysticism and agnosticism that didn't really make any sense.

My biggest instrumental rationality challenge these days seems to be figuring out what I really want out of life. Also, dealing with an out-of-control status obsession.

To cover some typical LW clusters: I am not signed up for cryonics, and am not entirely convinced it is worth it. And I am interested in studying AI, but mostly because I think it is interesting and not out of Singularity-related concern. (I get the feeling that people who don't share the prominent belief patterns about AI/cryonics hereabouts think they are much more of a minority than they actually are.)

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-19T17:47:48.718Z · LW(p) · GW(p)

I'm not quite sure what you're referring to by "the prominent belief patterns," but neither low confidence that signing up for cryonics results in life extension, nor low confidence that AI research increases existential risk, are especially uncommon here. That said, high confidence in those things is far more common here than elsewhere.

Replies from: maia
comment by maia · 2012-07-19T19:04:24.730Z · LW(p) · GW(p)

That is more or less what I am trying to say. It's just that I've noticed several people on Welcome threads saying things like, "Unlike many LessWrongers, I don't think cryonics is a good idea / am not concerned about AI risk."

comment by cogwerk · 2013-03-25T22:00:23.692Z · LW(p) · GW(p)

Hi, I'm Edward, and I have been reading the occasional article on here for a while. I've finally decided to officially join, as this year I'm starting to do more work on my knowledge and education (especially maths & science), and I like the thoughtful community I see here. I'm a programmer, but also have a passion for history. Just as I was finishing university, my thinking led me to abandon the family religion (many of my friends are still theists). I was going to keep thinking and exploring ideas, but I ended up just living -- now I want to begin thinking again.

Regards, Edward

comment by Abd · 2012-10-31T00:25:03.141Z · LW(p) · GW(p)

I'm Abd ul-Rahman Lomax, introducing myself. I have six grandchildren, from five biological children, and I have two adopted girls, one age 11 from China and one age 9 from Ethiopia.

I was born in 1944; Abd ul-Rahman is not my birth name -- I accepted Islam in 1970. Not being willing to accept pale substitutes, I learned to read the Qur'an in Arabic by reading the Qur'an in Arabic.

Going back to my teenage years: I was at Caltech for a couple of years, sitting in Richard P. Feynman's two years of undergraduate physics lectures, the ones made into the textbook. I had Linus Pauling for freshman chemistry as well. Both of them helped shape how I think.

I left Caltech to pursue a realm other than "science," but was always interested in direct experience rather than becoming stuffed with tradition, though I later came to respect tradition (and memorization) far more than at the outset. I became a leader of a "spiritual community" and a successor to a well-known teacher, Samuel L. Lewis, but was led to pursue many other interests.

I delivered babies (starting with my own) and founded a school of midwifery that trained midwives for licensing in Arizona.

Self-taught, I started an electronics design consulting business, still going with a designer in Brazil.

I became known as one of the many independent inventors of delegable proxy as a method of creating hierarchical communication structure from the bottom up. Social structure, and particularly how to facilitate collective intelligence, has been a long-term interest.

I was a Muslim chaplain at San Quentin State Prison, serving an almost entirely Black community. In case you haven't guessed, I'm not black. I loved it. People are people.

So much I'm not saying yet.... I became interested in wikis early on, but didn't get to Wikipedia until 2005, becoming seriously active in 2007. Eventually, I came across an abusive blacklisting of a web site, a well-known archive of scientific papers on cold fusion. I'd been very aware of the 1989 announcement and some of the ensuing flap, but had assumed, like most people with enough knowledge to know what it was about, that the work had not been replicated.

When I looked, I became interested enough to buy a number of major works in the area (including almost all of the skeptical literature).

Among those who have become familiar with it, cold fusion (a bit of a misnomer; at the least it was prematurely named) is an ultimately clear example of how pseudoskepticism came to dominate a whole field for over fifteen years. The situation flipped in the peer-reviewed journals beginning about eight years ago, but that's not widely recognized; it is merely obvious if one looks at what has been published in that period.

Showing this is way beyond the scope of this introduction, but I assume it will come up. I'm just asserting what I reasonably conclude, having become familiar with the evidence (and I'm working with the scientists in the field now, in many ways).

As to rational skepticism, I was known to Martin Gardner, who quoted a study of mine on the so-called Miracle of the Nineteen in the Qur'an, the work of Rashad Khalifa, whom I knew personally.

I naively thought, for a couple of days, that a rational-skeptic approach to cold fusion might be welcome on RationalWiki. Definitely not. Again, that's another story. However, I'm not banned there and have sysop privileges (like most users).

On RationalWiki, however, I came across the work of Yudkowsky, and this blog. Wow! In some of the circles in which I've moved, I've been a voice crying in the wilderness, with only a few echoes here and there. Here, I'm reluctant to say anything, so commonly cogent are the comments in this community. I know I'm likely to stick my foot in my mouth.

However, that's never stopped me, and learning to recognize the taste of my foot, with the help of my friends, is one way in which I've kept my growth alive. The fastest way to learn is generally to make mistakes.

I'm also likely to comment, eventually, on the practical ontology and present reality of Landmark Education, with which I've become quite familiar, as well as on the myths and facts which widely circulate about Landmark. To start, they do let you go to the bathroom.

Meanwhile, I've caught up with HPMOR, and am starting to read the sequences. Great stuff, folks.

Replies from: Nisan
comment by Nisan · 2012-10-31T01:09:16.898Z · LW(p) · GW(p)

Welcome! That's a fascinating biography.

I have been to one introductory Landmark seminar and wrote about the experience here.

comment by kaneleh · 2012-10-24T19:37:35.410Z · LW(p) · GW(p)

Hello. I was brought here by HPMOR, which I finished reading today. Back in 1999 or so I found a site called sysopmind.com, which had interesting reads on AI, Bayes' theorem (which I didn't understand), and the 12 Virtues of Rationality. I loved it for the beauty that reminded me of Asimov. I kept it in my bookmarks forever. (I knew him before he was famous? ;-))

I like SF (I have read many SF books, but most were from before 1990 for some reason) and I'm a computer nerd, among other things. I want to learn everything, but I have a hard time putting in the work. I'm studying to become a psychologist, scheduled to finish in 2013. My favorite area of psychology is social psychology, especially how humans make decisions and how they are influenced by biases, norms, or high-status people. I'm married and have a daughter born in 2011.

I like to watch TV shows, but I have high standards. A show is only SF if it is based in science and rationality; otherwise it's just space drama or space action, and I have no patience for it. I also like psychological drama, but it has to be realistic and believable. Please give recommendations if you like. (Edited:) Also, someone could explain in what way Star Trek, Babylon 5, or Battlestar Galactica is really SF, or in what way Buffy is feminist, so I know whether they are worth my while.

Replies from: CCC, Alejandro1, shminux
comment by CCC · 2012-10-25T08:44:22.566Z · LW(p) · GW(p)

Of those, the only one I've seen is Star Trek. They can be a bit handwavey about the science sometimes; I liked it, but if you're looking for hard science then you might not. As far as recommendations go, may I recommend the Chanur series (books, not TV) by one C.J. Cherryh?

comment by Alejandro1 · 2012-10-25T06:45:49.273Z · LW(p) · GW(p)

For realistic psychological drama, I haven't seen any show that beats Mad Men.

comment by shminux · 2012-10-24T20:40:52.771Z · LW(p) · GW(p)

Also, someone could explain why Star Trek, Babylon 5, Battlestar Galactica or Buffy is worth my while.

Not without knowing you well enough. Sherlock, on the other hand, should suit you just fine.

Replies from: kaneleh
comment by kaneleh · 2012-10-25T06:33:16.475Z · LW(p) · GW(p)

Ah, yes, thank you. I have seen Sherlock and loved it. Too few episodes though! =)

comment by candyfromastranger · 2012-07-26T04:13:11.719Z · LW(p) · GW(p)

I highly doubt that I'll be posting articles or even joining discussions anytime soon; right now, I'm just getting started on reading the Sequences and exploring other parts of the site, and I don't feel prepared yet to get involved in discussions. However, I'll probably comment on things now and then, so because of that (and, honestly, just because I'm a very social person), I figured I might as well post an introduction here.

I appreciate the way discussions are described as ending here, because I've noticed in other places that "tapping out" is seen as running away. The main trait that gives me problems in my quest for rationality is that I'm an inherently competitive person who gets more caught up in the idea of "winning" than in improving my thinking. I'm working on this, but if I do get involved in discussions, the fact that they're seen less as competitions here than elsewhere should be helpful to me.

Anyway, I guess I'll introduce myself. I'm Alexandra, and I'm a seventeen-year-old high school student in the United States (I applied to the camp in August, but I never received any news about it, so I assume that I wasn't accepted). Like many people here, I found out about this website through Harry Potter and the Methods of Rationality, but I've been interested in improving my rational thinking since I was young. I grew up in a secular and intellectual home, so seeing the world and myself realistically has always been a major goal of mine, and I've always naturally tried to apply logical thinking and the scientific method to my problems, but I've never really formally studied rationality (though I did take statistics last year).

I'm pretty smart, but as a high school student (especially one who, due to various bad experiences with the school system, only really found motivation and purpose in schoolwork less than a year ago), I don't have much technical knowledge, which I hope to change. I'm more experienced in aggressive self-awareness than in more technical ideas (think the contrast between Bella from Luminosity and Harry in HPMOR). I'm not really interested in a future in rationality work (and, while I'm interested in transhumanism, I don't really see myself being pulled in that direction for a career); I just want to improve my own thinking in order to better use my mind as a tool to achieve my goals.

While I might come across that way on here, I actually don't act very intellectual in my usual social interactions (especially compared to my younger brother, who's very openly and almost aggressively rational). I usually keep my rationality to myself except in certain situations, using it internally to figure out the best way to approach things, but I usually come across as much more flippant and frivolous than I actually am (especially since I'm very much an extrovert). I'm too misanthropic to expect rationality from others, so I prefer to use my inner logical side to figure out how to interact with people on their respective levels in a way that works best for me. I can understand the desire to appear as rational and intelligent as you truly are; I'm just a very utilitarian person and have found that placing less emphasis on that side of myself works best for me.

I'm used to most people that I debate with being irrational and easily upset. It never used to bother me, because I consider my intelligence to be a mental tool of mine rather than a personality trait, and because my naturally competitive personality meant that I still enjoyed debates that fell into petty conflict, but recently (maybe because I'm maturing, maybe because I'm busier these days), I've found myself getting bored with that sort of thing. So I'm definitely interested in intellectual discussions on here, though I might not involve myself in them until I'm better prepared.

One thing that I've noticed about myself is that, in discussions, I tend to insist on responding to every single point made by others rather than just selecting some to focus on (before I realized that's what people were doing, it used to bother me that others wouldn't respond to every individual point I made). I'm not sure whether that's something shared by other members of this website or just a personal quirk.

This is getting rambly because I'm a long-winded person, but I'll add a bit more (mostly non-rationality-related) information. I'm not a theist or a spiritual person, but atheism seems obvious enough to me that I don't see much point in discussing it anymore (unless the more "New Age"-y members of my family get a little too pushy with me). I'm interested in physics, math, foreign languages, literature, singing, exploring urban areas, climbing things, transhumanism (especially life-extension, because I want to live forever) and throwing parties. I have a strong appreciation for the arts, but I don't personally do anything artistic (other than singing, which is just a hobby), and I'm easily entertained by the small pleasures in life (good food, pretty views, attractive people of either gender, and fluffy blankets). I really like cats and books and the nighttime, and I'm more interested in clothes and makeup than might be expected from an eccentric, science-loving rationalist with quite a few geeky interests, but people are complex. I tend to be a bit surreal when I'm not purposefully trying to be serious.

Replies from: Bugmaster, hannahelisabeth
comment by Bugmaster · 2012-07-26T06:06:29.749Z · LW(p) · GW(p)

I applied to the camp in August, but I never received any news about it, so I assume that I wasn't accepted

I'm not affiliated with SIAI or the summer camps in any way, but IMO this sounds like a breakdown somewhere in the organization's communication protocols. If I were you, I wouldn't just assume that I wasn't accepted, I would ask for an explanation.

Replies from: candyfromastranger
comment by candyfromastranger · 2012-07-26T06:23:13.673Z · LW(p) · GW(p)

I'll contact them, then. I wasn't expecting to be accepted, but on the off chance that I was, it's hopefully not too late.

comment by hannahelisabeth · 2012-11-10T22:47:52.404Z · LW(p) · GW(p)

I like your description of yourself. You remind me a bit of myself, actually. I think I'd enjoy conversing with you. Though I have nothing on my mind at the moment that I feel like discussing.

Hm, I kind of feel like my comment ought to have a bit more content than "you seem interesting" but that's really all I've got.

comment by Gaviteros · 2012-07-19T07:03:39.662Z · LW(p) · GW(p)

Hello, Less Wrong! (I posted this in the other July 2012 welcome thread as well. :P Though apparently it has too many comments at this point, or something to that effect.)

My name is Ryan and I am a 22-year-old technical artist in the video game industry. I recently graduated with honors from the Visual Effects program at Savannah College of Art and Design. For those who don't know much about the industry I am in, my skill set is somewhere between a software programmer, a 3D artist, and a video editor. I write code to create tools that speed up the 3D workflows I or others need to make a game or a cinematic.

Now, I found lesswrong.com through the Harry Potter and the Methods of Rationality podcast. Up until that point I had never heard of Rationalism as a current state of being... so far I greatly resonate with the goals and lessons that have come up in the podcast, and with what I have seen about rationalism. I am excited to learn more.

I wouldn't go so far as to claim the label for myself as of yet, as I don't know enough and I don't particularly like labels for the most part. I also know that I have several biases; I feel like I know the reasons and causes for most of them, but I have not removed them from my determinative process.

Furthermore I am not an atheist, nor am I a theist. I have chosen to let others figure out and solve the questions of sentient creators through science, and I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything. I just try to leave religion out of most of my determinations.

Anyway! I'm looking forward to reading and discussing more with all of you!

Current soapbox: the educational system's de-emphasis of critical thinking skills.

If you are interested you can check out my artwork and tools at www.ryandowlingsoka.com

Replies from: Grognor, Emile
comment by Grognor · 2012-07-25T04:58:26.629Z · LW(p) · GW(p)

I am no more qualified to disprove a religious belief than I would be to perform surgery... on anything.

I disagree with this claim. If you are capable of understanding concepts like the Generalized Anti-Zombie Principle, you are more than capable of recognizing that there is no god and that that hypothesis wouldn't even be noticeable for a bounded intelligence unless a bunch of other people had already privileged it thanks to anthropomorphism.

Also, please don't call what we do here, "rationalism". Call it "rationality".

comment by Emile · 2012-07-19T13:18:44.574Z · LW(p) · GW(p)

Welcome to LessWrong!

There are a few of us here in the Game Industry, and a few more that like making games in their free time. I also played around with Houdini, though never produced anything worth showing.

Replies from: Gaviteros
comment by Gaviteros · 2012-07-20T06:35:48.672Z · LW(p) · GW(p)

Thanks for the welcome!

Houdini can be a lot of fun, but without a real goal it is almost too open for anything of value to be easily made. Messing around in Houdini without a plan is a time sink. :) That said, I absolutely love it as a program.

comment by fowlertm · 2012-07-19T00:15:46.502Z · LW(p) · GW(p)

Hello,

My name is Trent Fowler. I started leaning toward scientific and rational thinking while still a child, thanks in part to a variety of aphorisms my father was fond of saying. Things like "think for yourself" and "question your own beliefs" are too general to be very useful in particular circumstances, but were instrumental in fostering in me a skepticism and respect for good argument that has persisted all my life (I'm 23 as of this writing). These tools are what allowed me to abandon the religion I was brought up in as a child, and to eventually begin salvaging the bits of it that are worth salvaging. Like many atheists, when I first dropped religion I dropped every last thing associated with it. I've since grown to appreciate practices like meditation, ritual, and even outright mysticism as techniques which are valuable and pursuable in a secular context.

What I've just described is basically the rationality equivalent of lifting weights twice a week and going for a brisk walk in the mornings. It's great for a beginner, but anyone who sticks with it long enough will start to get a glimpse of what's achievable by systematizing training and ramping up the intensity. World-class martial artists, Olympic powerlifters, and ultramarathoners may seem like demi-gods to the weekend warriors, but a huge amount of what they've accomplished is attributable to hard work and dedication (with a dash of luck and genetics, of course).

The Bruce Lees of the mind, however, are more than just role models. They're the people who will look extinction risk square in the face and start figuring out how to actually solve the problems. They're the people who will build transhuman AIs, extinguish death, probe the bedrock of reality, and fling human civilization into deep space. As the dojo is to the apprentice, so is Less Wrong to the aspiring rationalist.

Sadly, when I was gripped rather suddenly by a fascination with math and physics as a child, there was not enough in the way of books, support, and instruction to get the prodigy-fires burning. To this day, deep math and physics remain an interesting but largely inscrutable realm of human knowledge. But I'm still young enough that with hard work and dedication I could be a Bostrom or a Yudkowsky, especially if I manage to scramble onto their shoulders.

So here I am, ready to sharpen the blade of my thinking, that it may more effectively be turned to both pondering metaphysical quandaries and solving problems that threaten our collective future. I am excited by the prospects, and hope I am up to the challenge.

comment by krzhang · 2013-02-19T05:33:23.535Z · LW(p) · GW(p)

I am Yan Zhang, a mathematics grad student specializing in combinatorics at MIT (and soon to work at UC Berkeley after graduation) and co-founder of Vivana.com. I was involved with building the first year of SPARC. There, I met many cool people at CFAR, for which I'm now a curriculum consultant.

I don't know much about LW but have liked some of the things I have read here; AnnaSalamon described me as a "street rationalist" because my own rationality principles are home-grown from a mix of other communities and hobbies. In that sense, I'm happy to set foot in this "mainstream dojo" and learn your language.

Recently Anna suggested I may want to cross-post something I wrote to LW and I've always wanted to get to know the community better, so this is the first step, I suppose. I look forward to learning from all of you.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-19T05:54:04.835Z · LW(p) · GW(p)

Welcome! It's good to see you here.

Replies from: krzhang
comment by krzhang · 2013-02-19T05:58:03.584Z · LW(p) · GW(p)

Haha hey QC. Remind me sometime to learn the "get ridiculously high points in karma-based communities and learn a lot" metaskill from you... you seem to be off to a good start here too ;)

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-19T06:00:36.144Z · LW(p) · GW(p)

Step 1 is to spend too much time posting comments. I'm not sure I recommend this to someone whose time is valuable. I would like to see you share your "street rationalist" skills here, though!

comment by hannahelisabeth · 2012-11-10T22:08:51.215Z · LW(p) · GW(p)

Hi,

My name is Hannah. I'm an American living in Oslo, Norway (my husband is Norwegian). I am 24 (soon to be 25) years old. I am currently unemployed, but I have a bachelor's degree in Psychology from Truman State University. My intention is to find a job working at a day care, at least until I have children of my own. When that happens, I intend to be a stay-at-home mother and homeschool my children. Anything beyond that is too far into the future to be worth trying to figure out at this point in my life.

I was referred to LessWrong by some German guy on OkCupid. I don't know his name or who he is or anything about him, really, and I don't know why he messaged me randomly. I suppose something in my profile seemed to indicate that I might like it here or might already be familiar with it, and that sparked his interest. I really can't say. I just got a message asking if I was familiar with LessWrong or Harry Potter and the Methods of Rationality (which I was not), and if so, what I thought of them. So I decided to check them out. I thought the HP fanfiction was excellent, and I've been reading through some of the major series here for the past week or so. At one point I had a comment I wanted to make, so I decided to join in order to be able to post the comment. I figure I may as well be part of the group, since I am interested in continuing reading and discussing here. :-)

As for more about my background in rationality and such, I like to think I've always been oriented towards rationality. Well, when I was younger I was probably less adept at reasoning and certainly less aware of cognitive biases and such, but I've always believed in following the evidence to find the truth. That's something I think my mother helped to instill in me. My acute interest in rationality, however, probably began when I was around 18-19 years old. It was at this point that I became an atheist and also when I began Rational Emotive Behavior Therapy.

I had been raised as a Christian, more or less. My mother is very religious, but also very intelligent, and she believes fervently in following the evidence wherever it leads (despite the fact that, in practice, she does not actually do this). The shift in my religious perspective initially occurred around when I first began dating my husband. He was not religious, and I had the idea in my head that it was important that he be religious, in order for us to be properly compatible. But I observed that he was very open-minded and sensible, so I believed that the only requirement for him to become a Christian was for me to formulate a sufficiently compelling argument for why it was the true religion. And if this had been possible, it's likely he would have converted, but alas, this was a task I could not succeed at. It was by examining my own religion and trying to answer his honest questions that I came to realize that I didn't actually know what any good reasons for being a Christian were, and that I had merely assumed there must be good reasons, since my mother and many other intelligent religious people that I knew were convinced of the religion. So I tried to find out what these reasons were, and they came up lacking.

When I found that I couldn't find any obvious reasons that Christianity had to be the right religion, I realized that I didn't have enough information to come to that conclusion. When I reflected on all my religious beliefs, it occurred to me that I didn't even know where most of them came from. So I decided to throw everything out the window and start from scratch. This was somewhat difficult for me emotionally, since I was honestly afraid that I was giving up something important that I might not get back. I mean, what if Christianity were the true religion and I gave it up and never came back? So I prayed to God (whichever god(s) he was, if any) to lead me on a path towards the truth. I figured if I followed evidence and reason, then I would end up at the truth, whatever it was. If that meant losing my religion, then my religion wasn't worth having. I trusted that anything worth believing would come back to me. And that even if I was led astray and ended up believing the wrong thing, God would judge me based on my intent and on my deeds. A god who is good will not punish me for seeking the truth, even if I am unsuccessful in my quest. And a god who is not good is not worth worshipping. I know this idea has been voiced by many others before me, but for me this was an original conclusion at the time, not something I'd heard as a quote from someone else.

Another pertinent influence of rationality on my life occurred during my second year of college. I had decided to see a counselor for problems with anxiety and depression. The therapy that counselor used was Rational Emotive Behavior Therapy, and we often engaged in a lot of meaningful discussions. I found the therapy and that particular approach extremely helpful in managing my emotions and excellent practice in thinking rationally. I think it really helped me become a better thinker in addition to being more emotionally stable.

So it's been sort of a cumulative effect, losing my religion, going to college, going through counseling, etc. As I get older, I expose myself to more and more ideas (mostly through reading, but also through some discussion) and I feel that I get better and better at reasoning, understanding biases, and being more rational. A lot of the things I've read here are things that I had either encountered before or seemed obvious to me already. Although, there is plenty of new stuff too. So I feel that this community will be a good fit for me, and I hope that I will be a positive addition to it.

I have a lot of unorthodox ideas and such that I'd be happy to discuss. My interests are parenting (roughly in line with Unconditional Parenting by Alfie Kohn), schooling/education (I support a Sudbury-type model), diet (I'm paleo), relationships (I don't follow anyone here; I've got my own ideas in this area), emotions and emotional regulation (REBT, humanistic approach, and my own experience/ideas), and pretty much anything about or related to psychology (I'm reasonably educated in this area, but I can always learn more!). I'm open to having my ideas challenged, and I don't shy away from changing my mind when the evidence points in the opposite direction. I used to have more of a problem with this, insofar as I was concerned about saving face (I didn't want to look bad by publicly admitting I was wrong, even if I privately realized it), but I've since reasoned that changing my mind is actually a better way of saving face. You look a lot stupider clinging to a demonstrably wrong position than simply admitting that you were mistaken and changing your ideas accordingly.

Anyway, I hope that wasn't too long an introduction. I have a tendency to write a lot and invest a lot of time and effort into my writing. I care a lot about effective communication, and I like to think I'm good at expressing myself and explaining things. That seems to be something valued here too, so that's good.

Replies from: Morendil
comment by Morendil · 2012-11-10T22:13:40.659Z · LW(p) · GW(p)

Welcome here!

comment by [deleted] · 2012-08-26T19:25:59.395Z · LW(p) · GW(p)

Hello LW,

Last Thursday, I was asked by User:rocurley whether, in his absence, I wanted to organize a hiking event (originally my idea) for this week's DC metro area meetup, which is when I discovered I could not make posts, etc. here because I had zero karma. I chose to cancel the meetup on account of weather. I had registered my account previously, but realizing that I might need to post here in the future, and that I had next to nothing to lose, I have decided to finally introduce myself.

I discovered LW through HPMOR, through Tvtropes, I believe. I've read some LW articles, but not others. Areas of interest include sciences (I have a BS in physics), psychology, personality disorders, some areas of philosophy, reading, and generally learning new things. One of my favorite books (if not /the/ favorite) is Gödel, Escher, Bach, which I read for the first (and certainly not last) time while I was in college, 5+ years ago.

I'm extremely introverted, and I am aware that I have certain anxiety issues; while rationality has not helped with the actual feeling of anxiety, it has allowed me to push through it, in some cases.

Replies from: Vaniver
comment by Vaniver · 2012-08-26T21:30:43.294Z · LW(p) · GW(p)

Welcome!

I discovered LW through HPMOR, through Tvtropes, I believe. I've read some LW articles, but not others.

Specific! :P Which is the most interesting one you've read so far? We might have recommendations of similar ones that you would like.

I'm extremely introverted, and I am aware that I have certain anxiety issues; while rationality has not helped with the actual feeling of anxiety, it has allowed me to push through it, in some cases.

So, I found my introversion much easier to manage when I started scheduling time by myself to recharge, and scheduling infrequent social events to make sure I didn't get into too much of a cave. It had been easy to get overwhelmed with social events near each other if I didn't have something on my calendar reminding me "you'll want to read a book by yourself for a few hours before you go to another event." That sort of thing might be helpful to consider.

Replies from: None
comment by [deleted] · 2012-08-27T01:13:11.902Z · LW(p) · GW(p)

Some of my favorite articles, off the top of my head (and a bit of browsing):

  • A Fable of Science and Politics
  • Explain, Worship, Ignore - I am, as of now, something of a naturalistic pantheist / pandeist; if you've heard Carl Sagan or Neil deGrasse Tyson speak on the wonder that is the existence of the universe, it's something like that. Unlike what is written in the linked article, however, I'm not convinced that the initial singularity, or whatever cause the Big Bang might have, can be explained by science. (Is it even meaningful to ask questions about what is outside the universe?)
  • Belief in Belief
  • Avoiding Your Belief's Real Weak Points
  • The 'Outside the Box' Box - How much of my belief system is actually a result of my own thinking, as opposed to a result of culture, society, etc? Granted, sometimes collective wisdom is better than what one might come up with by oneself, but not always...

So, I found my introversion much easier to manage when I started scheduling time by myself to recharge, and scheduling infrequent social events to make sure I didn't get into too much of a cave. It had been easy to get overwhelmed with social events near each other if I didn't have something on my calendar reminding me "you'll want to read a book by yourself for a few hours before you go to another event." That sort of thing might be helpful to consider.

I have Meetup.com to organize and schedule social events, and of course there's the LW meetups. I get plenty of alone time, so that isn't really a problem for me. (Some minutes of thinking later) The particular issues aren't something I can accurately put into words, but they're something like 'active avoidance of (perceived) excessive attention or expectations, either positive or negative' and 'fear of exposing "personal" info I'd rather not share, and of any negative consequences that might result'. Perhaps not surprisingly, I greatly prefer internet or written "non-personal" communication over verbal communication.

comment by Rukifellth · 2012-07-25T23:54:59.659Z · LW(p) · GW(p)

I got into a community of intelligent, creative free-thinkers by reading fan fiction of all things.

You know the one.

Anyway, my knowledge of what is collectively referred to as Rationality is slim. I read the first 6 pages of The Sequences, felt like I was cheating on a test, and stopped. I'll try to make up for it with some of the most unnecessarily theatrical and hammy writing I can get away with.

I love wordplay, and over the course of a year I offered (as a way of apology) to owe my friend a quarter for every time I improvised a pun or awful joke mid-conversation; by the end of it I could have bought him dinner at Pizza Delight. I didn't. It's on my to-do list to compile all the wisecracks Carlos Ramon ever made on The Magic School Bus and put them on YouTube, because no one else has and it needs to be done, damn it. As you can tell, I sometimes write for its own sake; call me a literary hedonist if you will. But all good things must come to an end...

My greatest principle is that a person's course in life is governed by their reaction to their circumstances, and that nothing at all is certain. The nature of the human mind is a process our current metaphors and models can only approximate: a physical system adjusting itself, for which words like "I", "our" and "qualia" can only activate whatever concept we happen to have for answering the question "What?". Because of this, I have great sympathy towards Eastern spirituality and some Christian mysticism, because they have the spirit of what we're all trying to accomplish here: to answer a question.

Sometimes I end up in the psychological equivalent of a fractal zoom, where philosophy has this impossible-to-divide property of all things linking to others without there being any elementary axioms or parts, probably because of that whole "brain made of neurons" racket. I concluded that emotions are just another form of sense: love, curiosity and understanding are reactions and sensory input much like taste and touch. Happily, any cognitive dissonance or emptiness can be discarded the same way, the logical contradiction being a property of the purely physical (rather than comfortingly "conceptual") nature of our very thought, meaning that I'll simultaneously accept the objective truth of this but reject any emotional significance, as emotional significance is itself deconstructed as a concept.

Of course, the empathy gap and the nature of attention span (or at least my attention span) mean that I'm normally not like this unless triggered. To me, regular life is the reaction of our psyche, broken up occasionally by the temporary delusion that a fractal zoom of philosophy can answer my questions. I call this a "delusion" because the concept of a question to be answered is an extraneous layer added by an entity which just wants to avoid suffering.

The human mind: a non-linear physical system which tries to evaluate itself with a linear processing system that's not suited to that sort of thing at all. Sometimes I wonder if who we are is just the sum of five or six different personalities, each with about a fifth or sixth of our functioning plus a heavy specialization in one type of behaviour, the sum of which is an idea of what is right and wrong with a sense of identity. Given the existence of neural pathways in our spinal column, I wouldn't be surprised. Sometimes I feel like I can feel the shape of our brains based on this, but that's probably just me connecting concepts to high school biology.

I went off the rails a bit there, but looking back, I figure this is a more honest introduction from me than any structured post would be. Even so, I doubt I can really convey that kind of leg-twisting logical insanity without the meaning being hollowed out by interpretation and pattern recognition.

Ugh, I feel like there wasn't a speck of relatability there at all. Well, I'm eighteen years old and male. I followed the My Little Pony fandom out of a combination of boredom, fascination, and a love of the bizarre. The show never struck a chord with me at all, really, but the fandom was something else. There was a period of about a month where I read crossover fanfictions, but I couldn't be bothered after that point, because the fandom's growth wound down and the novelty was gone. Even so, Nine Knackered Souls, a Red vs Blue crossover, is the funniest fanfiction I've ever read. Fallout: Equestria is the longest and most "so-okay-it's-average" fanfiction, though I was drawn in enough to overlook the Mary Sue aspects and read the whole thing in like four days.

I'm going into Computer Science at Dalhousie University, and CSci being what it is, I'm going to make up my path as I go along. I really don't know enough about robotics, AI or informatics to make the choice between them right now anyway.

Replies from: Rukifellth
comment by Rukifellth · 2012-07-26T00:03:23.982Z · LW(p) · GW(p)

Also, I enjoy playing Superman 64's ring levels.

comment by kirpi · 2012-07-21T08:18:09.132Z · LW(p) · GW(p)

Hello. I am from Istanbul, Turkey (a Turkish citizen, born and raised). I came across Less Wrong on a popular Turkish website called EkşiSözlük. Since then, this has been the place I check to see what's new when there's nothing worth reading on Google Reader and I have time. (Such long posts you have!)

I am 31 years old, and I have a BSc in Computer Science and an MSc in Computational Sciences (research on bioinformatics). But then, like most people in my country do, I've landed in a job where I can't utilize any of that knowledge: information security. :)

Why did I complain about my job? Here is why:

I've long been looking for "the best way to have lived a life." What I mean by this is that I have to be able to say, at the moment of death, "I lived my life the best way I could, and I can die blissfully." This may come off a bit cliché, but bear in mind that I'm relatively new to this rationality thing.

While I was learning Computer Science for the first time, I saw there was a great opportunity in relating computational sciences to social sciences so as to understand the inner workings of human beings. I realised this when the Law & Ethics instructor asked us to write an essay on what would be "the best way to live your life", just as I was learning about greedy algorithms. Granted, there would be many gaps in my arguments, but my case was like this: "You can't predict how long you will live. So the best way to search for the (sub)optimal life was to utilize a greedy algorithm. That is, at every decision point, you have to select the best alternative that maximizes your utility at that time." You soon come to learn that this is easier said than done (no long-term goals, no relationships, etc.). And greedy algorithms may generate a sub-optimal solution rather than the optimal one, because at some point you chose the wrong path, not having expected to live this long.
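For concreteness, here is a minimal Python sketch of that gap; the stage names and payoffs are invented for illustration, and the point is only that the locally best choice can foreclose the best complete path:

```python
# Toy "life" with two decision points; all utilities are made up.
stage1 = {"safe_job": 3, "grad_school": 1}                # immediate utility
stage2 = {"safe_job":    {"coast": 4, "switch": 5},       # what each first
          "grad_school": {"research": 9, "industry": 8}}  # choice makes reachable

def greedy_life():
    # At every decision point, take whatever maximizes utility right now.
    first = max(stage1, key=stage1.get)
    second = max(stage2[first], key=stage2[first].get)
    return stage1[first] + stage2[first][second]

def optimal_life():
    # Evaluate every complete path instead.
    return max(stage1[f] + u for f in stage1 for u in stage2[f].values())

print(greedy_life(), optimal_life())  # 8 10: the greedy life comes up short
```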

I currently suspect that Bayesian (or Laplacian, maybe?) methods have the best chance of increasing the probability that I live a good life. I've written all over the place, but there is one last thing I want to add.

I do not believe in an afterlife, or a soul for that matter. This happened very recently, relative to most of you. So I was constantly looking for a "rational" justification for continuing to live a good life. I am on the verge of giving up looking, since there seems to be nothing to find, and just living. Which is a little sad, actually, since I still have the feeling that I could probably do something great with my life. But then, constant questioning also seems to lead to a sub-optimal life (maybe with an even lower utility than the greedy algorithm). I guess what I am trying to say is that I am on the verge of becoming a hedonist.

I'd love to learn your ideas or reading recommendations on how best to live a life. I'd also love to organize meetups of rationalists in Turkey.

P.S. If you haven't seen it yet, there's a book called "The Theory That Would Not Die", which is an excellent source on many (and I mean it when I say many) things Bayesian.

Replies from: NancyLebovitz, NotInventedHere
comment by NancyLebovitz · 2012-12-05T19:58:47.688Z · LW(p) · GW(p)

"You can't predict how long you will live. So the best way to search for the (sub)optimal life was to utilize a greedy algorithm. That is, at every decision point, you have to select the best alternative that maximizes your utility at that time."

However, you can estimate how long you will live with fairly good accuracy. If you know you're very likely to live for some decades more, then I think it makes sense to optimize around the estimate rather than for the very small possibility that you'll be dead in the next hour.

comment by NotInventedHere · 2012-12-05T18:32:15.070Z · LW(p) · GW(p)

This is an extremely belated reply, but with regards to

So I was constantly looking for a "rational" justification for continuing to live a good life. I am on the verge of giving up looking, since there seems to be nothing to find, and just living.

The Fun Theory and Metaethics sequences helped me through my personal period of existential angst.

The two most immediately helpful posts I would recommend for someone like you are Joy in the Merely Real and Joy in the Merely Good.

comment by [deleted] · 2013-01-19T02:38:40.934Z · LW(p) · GW(p)

Hello. I've been reading Sequence articles and discussion on this website for a while now. I'd been hesitant to join because I like to keep my identity small, but I recently realized that being able to talk to others about the topics on this site will make me more effective at reaching my goals.

Armchairs are very comfortable, and I'm having some mental difficulty putting effort into actually practicing and achieving the goals I set. It's very hard to actually do stuff, and easy to just read about interesting topics without engaging.

I'm interested more in meta-ethics than in physics, more in decision theory than in practical AI. My first comments will likely be on the Sequences or on discussion posts of a few specific kinds.

This should be fun, I look forward to talking with you. Ask me any questions that arouse your curiosity.

The browsing experience with Kibitzing off is strange but not unpleasant. How long did it take for you to get accustomed to it?

comment by findis · 2012-12-26T06:20:13.853Z · LW(p) · GW(p)

Hi, I'm Liz.

I'm a senior at a college in the US, soon to graduate with a double major in physics and economics, and then (hopefully) pursue a PhD in economics. I like computer science and math too. I'm hoping to do research in economic development, but more relevantly to LW, I'm pretty interested in behavioral economics and in econometrics (statistics). Out of the uncommon beliefs I hold, the one that most affects my life is that since I can greatly help others at a small cost to myself, I should; I donate whatever extra money I have to charity, although it's not much. (see givingwhatwecan.org)

I think I started behaving as a rationalist (without that word) when I became an atheist near the end of high school. But to rewind...

I was raised Christian, but Christianity was always more of a miserable duty than a comfort to me. I disliked the music and the long services and the awkward social interactions. I became an atheist for no good reason in the beginning of high school, but being an atheist was terrible. There was no one to forgive me when I screwed up, or pray to when the world was unbearably awful. My lack of faith made my father sad. Then, lying in bed and angsting about free will one night, I had some philosophical revelation, and it seemed that God must exist. I couldn't re-explain the revelation to myself, but I clung to the result and became seriously religious for the next year or so. But objections to the major strands of theism began to creep up on me. I wanted to believe in God, and I wanted to know the truth, and I found out that (surprise) having an ideal set of beliefs isn't compatible with seeking truth. I did lots of reading (mostly old-school philosophy), slowly changed my mind, then came out as an atheist (to close friends only) once the Bible Quiz season was over. (awk.)

At that point I decided to never lie to myself again. Not just to avoid comforting half-truths, but to actively question all beliefs I held, and to act on whatever conclusions I come to. After hard practice, unrelenting honesty towards myself is a habit I can't break, but I'm not sure it's actually a good policy. For example, a few white lies would've helped me move past a situation of extreme guilt last year.

Anyway, more recently, I read HPMOR and I'm now reading Kahneman's Thinking, Fast and Slow. I'm slowly working through the Sequences too. I always appreciate new reading recommendations.


I have some thoughts on Newcomb's Paradox. (Of course I am new to this, probably way off base, etc.) I think two boxes is the right way to go, and it seems that intuition towards one-boxing often comes from the idea that your decision somehow changes the contents of the boxes. (No reverse causality is supposed to be assumed, right?) Say that instead of an infallible superintelligence, the story changes to

"You go to visit your friend Ann, and her mom pulls you into the kitchen, where two boxes are sitting on a table. She tells you that box A has either $1 billion or $0, and box B has $1,000. She says you can take both boxes or just A, and that if she predicted you take box B she didn't put anything in A. She has done this to 100 of Anne's friends and has only been wrong for one of them. She is a great predictor because she has been spying on your philosophy class and reading your essays."

Terribly small sample size, but a friend told me this changes his answer from one box to two. As far as I can tell these changes are aesthetic and make the story clearer without changing the philosophy.


And, a question. Why is Bayes so central to this site? I use Bayesian reasoning regularly, but I learned Bayes' Theorem around the time I started thinking seriously about anything, so I'm not clear on what the alternative is. Why do y'all celebrate Bayes, rather than algebra or well-designed experiments?

Edit: Read farther in Thinking, Fast and Slow; question answered.

Replies from: John_Maxwell_IV, Desrtopa
comment by John_Maxwell (John_Maxwell_IV) · 2013-01-12T08:48:19.940Z · LW(p) · GW(p)

Welcome to LW.

Also not an expert on Newcomb's Problem, but I'm a one-boxer because I choose to have part of my brain say that I'm a one-boxer, and have that part of my brain influence my behavior if I get into a Newcomb-like situation. Does that make any sense? Basically, I'm choosing to modify my decision algorithm so I no longer maximize expected value, because I think having this other algorithm will get me better results.

comment by Desrtopa · 2012-12-26T07:01:23.409Z · LW(p) · GW(p)

"You go to visit your friend Ann, and her mom pulls you into the kitchen, where two boxes are sitting on a table. She tells you that box A has either $1 billion or $0, and box B has $1,000. She says you can take both boxes or just A, and that if she predicted you take box B she didn't put anything in A. She has done this to 100 of Anne's friends and has only been wrong for one of them. She is a great predictor because she has been spying on your philosophy class and reading your essays."

To be properly isomorphic to the Newcomb's problem, the chance of the predictor being wrong should approximate to zero.

If I thought that the chance of my friend's mother being wrong approximated to zero, I would of course choose to one-box. If I expected her to be an imperfect predictor who assumed I would behave as if I were in the real Newcomb's problem with a perfect predictor, then I would choose to two-box.

In Newcomb's Problem, if you choose on the basis of which choice is consistent with a higher expected return, then you would choose to one-box. You know that your choice doesn't cause the box to be filled, but given the knowledge that whether the money is in the box or not is contingent on a perfect predictor's assessment of whether or not you were likely to one-box, you should assign different probabilities to the box containing the money depending on whether you one-box or two-box. Since your own mental disposition is evidence of whether the money is in the box or not, you can behave as if the contents were determined by your choice.

Replies from: findis
comment by findis · 2012-12-29T20:51:23.025Z · LW(p) · GW(p)

To be properly isomorphic to the Newcomb's problem, the chance of the predictor being wrong should approximate to zero.

If I thought that the chance of my friend's mother being wrong approximated to zero, I would of course choose to one-box. If I expected her to be an imperfect predictor who assumed I would behave as if I were in the real Newcomb's problem with a perfect predictor, then I would choose to two-box.

Hm, I think I still don't understand the one-box perspective, then. Are you saying that if the predictor is wrong with probability p, you would take two boxes for high p and one box for a sufficiently small p (or just for p=0)? What changes as p shrinks?

Or what if Omega/Ann's mom is a perfect predictor, but for a random 1% of the time decides to fill the boxes as if it made the opposite prediction, just to mess with you? If you one-box for p=0, you should believe that taking one box is correct (and generates $1 million more) in 99% of cases and that two boxes is correct (and generates $1000 more) in 1% of cases. So taking one box should still have a far higher expected value. But the perfect predictor who sometimes pretends to be wrong behaves exactly the same as an imperfect predictor who is wrong 1% of the time.

Replies from: Desrtopa
comment by Desrtopa · 2012-12-29T22:10:18.525Z · LW(p) · GW(p)

You choose the boxes according to the expected value of each box choice. For a 99% accurate predictor, the expected value of one-boxing is $990,000,000 (you get a billion 99% of the time and nothing 1% of the time), while the expected value of two-boxing is $10,001,000 (you get a thousand 99% of the time, and one billion and one thousand 1% of the time).
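In code, the comparison looks like this (a minimal sketch; the $1 billion and $1,000 payoffs are the ones from the scenario above, and the only free parameter is the predictor's accuracy p):

```python
BIG, SMALL = 1_000_000_000, 1_000  # box A prize, box B prize

def ev_one_box(p):
    return p * BIG                # box A is full whenever the predictor is right

def ev_two_box(p):
    return SMALL + (1 - p) * BIG  # box A is full only when the predictor is wrong

for p in (0.99, 0.9, 0.5000005):
    print(p, ev_one_box(p), ev_two_box(p))
# p = 0.99 reproduces the 990,000,000 vs 10,001,000 figures above; the two
# strategies break even near p = 0.5000005, so one-boxing has the higher
# expected value for any accuracy meaningfully above chance.
```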

The difference between this scenario and the one you posited before, where Ann's mom makes her prediction by reading your philosophy essays, is that she's presumably predicting on the basis of how she would expect you to choose if you were playing Omega. If you're playing against an agent who you know will fill the boxes according to how you would choose if you were playing Omega (we'll call it Omega-1), then you should always two-box (if you would one-box against Omega, both boxes will contain money, so you get the contents of both; if you would two-box against Omega, only one box would contain money, and if you one-box you'll get the empty one).

An imperfect predictor with random error is a different proposition from an imperfect predictor with nonrandom error.

Of course, if I were dealing with this dilemma in real life, my choice would be heavily influenced by considerations such as how likely it is that Ann's mom really has billions of dollars to give away.

Replies from: findis
comment by findis · 2013-01-02T00:59:04.235Z · LW(p) · GW(p)

The difference between this scenario and the one you posited before, where Ann's mom makes her prediction by reading your philosophy essays, is that she's presumably predicting on the basis of how she would expect you to choose if you were playing Omega.

Ok, but what if Ann's mom is right 99% of the time about how you would choose when playing her?

I agree that one-boxers make more money, with the numbers you used, but I don't think that those are the appropriate expected values to consider. Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both.

Another analogous situation would be that you walk into an exam, and the professor (who is a perfect or near-perfect predictor) announces that he has written down a list of people who he has predicted will get fewer than half the questions right. If you are on that list, he will add 100 points to your score at the end. The people who get fewer than half of the questions right get higher scores, but you should still try to get questions right on the test... right? If not, does the answer change if the professor posts the list on the board?

I still think I'm missing something, since a lot of people have thought carefully about this and come to a different conclusion from me, but I'm still not sure what it is. :/

Replies from: ArisKatsaris, Desrtopa
comment by ArisKatsaris · 2013-01-04T04:34:10.449Z · LW(p) · GW(p)

Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both.

You are focusing too much on the "already have been filled", as if the particular time of your particular decision were relevant. But if your decision isn't random (and yours isn't), then any individual decision is dependent on the decision algorithm you follow -- and can be calculated in exactly the same manner, regardless of time. Therefore in a sense your decision has been made BEFORE the filling of the boxes, and can affect their contents.

You may consider it easier to wrap your head around this if you think of the boxes being filled according to what result the decision theory you currently have would return in the situation, instead of what decision you'll make in the future. That helps keep in mind that causality still travels only one direction, but that a good predictor simply knows the decision you'll make before you make it and can act accordingly.
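Concretely, here is a toy simulation of that framing (my own sketch, with the payoffs from the scenario upthread):

```python
def one_boxer(): return "one"
def two_boxer(): return "both"

def play(decide):
    # The "predictor" literally runs your decision procedure in advance...
    box_a = 1_000_000_000 if decide() == "one" else 0
    box_b = 1_000
    # ...and later you choose, using the very same procedure.
    return box_a if decide() == "one" else box_a + box_b

print(play(one_boxer), play(two_boxer))  # 1000000000 1000
```

Causality only runs forward here; the one-boxer does better simply because the same algorithm is consulted twice.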

comment by Desrtopa · 2013-01-02T03:06:23.795Z · LW(p) · GW(p)

Ok, but what if Ann's mom is right 99% of the time about how you would choose when playing her?

I would one-box. I gave the relevant numbers on this in my previous comment; one-boxing has an expected value of $990,000,000, versus an expected $10,001,000 if you two-box.

I agree that one-boxers make more money, with the numbers you used, but I don't think that those are the appropriate expected values to consider. Conditional on the fact that the boxes have already been filled, two-boxing has a $1000 higher expected value. If I know only one box is filled, I should take both. If I know both boxes are filled, I should take both. If I know I'm in one of those situations but not sure of which it is, I should still take both.

When you're dealing with a problem involving an effective predictor of your own mental processes (it's not necessary for such a predictor to be perfect for this reasoning to become salient; it just makes the problems simpler), your expectation of what the predictor will do or has already done will be at least partly dependent on what you intend to do yourself. You know that either the opaque box is filled, or it is not, but the probability you assign to the box being filled depends on whether you intend to open it or not.

Let's try a somewhat different scenario. Suppose I have a time machine that allows me to travel back a day in the past. Doing so creates a stable time loop, like the time turners in Harry Potter or HPMoR (on a side note, our current models of relativity suggest that such loops are possible, if very difficult to contrive.) You're angry at me because I've insulted your hypothetical scenario, and are considering hitting me in retaliation. But you happen to know that I retaliate against people who hit me by going back in time and stealing from them, which I always get away with due to having perfect alibis (the police don't believe in my time machine.) You do not know whether I've stolen from you or not, but if I have, it's already happened. You would feel satisfied by hitting me, but it's not worth being stolen from. Do you choose to hit me or not?

Another analogous situation would be that you walk into an exam, and the professor (who is a perfect or near-perfect predictor) announces that he has written down a list of people who he has predicted will get fewer than half the questions right. If you are on that list, he will add 100 points to your score at the end. The people who get fewer than half of the questions right get higher scores, but you should still try to get questions right on the test... right? If not, does the answer change if the professor posts the list on the board?

If the professor is a perfect predictor, then I would deliberately get most of the problems wrong, thereby all but guaranteeing a score of over 100 points. I would have to be very confident that I would get a score below fifty even if I weren't trying to on purpose before trying to get all the questions right would give me a higher expected score than trying to get most of the questions wrong.

If the professor posts the list on the board, then of course it should affect the answer. If my name isn't on the list, then he's not going to add the 100 points to my test in any case, so my only recourse to maximizing my grade is to try my best on the test. If my name is on the list, then he's already predicted that I'm going to score below 50, so whether he's a perfect predictor or not, I should try to do well so that he's adding 100 points to as high a score as I can manage.

The difference between the scenario where he writes the names on the board and the scenario where he doesn't is that in the former, my expectations of his actions don't vary according to my own, whereas in the latter, they do.

Replies from: wedrifid, findis
comment by wedrifid · 2013-01-02T08:01:49.541Z · LW(p) · GW(p)

If the professor posts the list on the board, then of course it should affect the answer. If my name isn't on the list, then he's not going to add the 100 points to my test in any case, so my only recourse to maximizing my grade is to try my best on the test. If my name is on the list, then he's already predicted that I'm going to score below 50, so whether he's a perfect predictor or not, I should try to do well so that he's adding 100 points to as high a score as I can manage.

I believe you are making a mistake. Specifically, you are implementing a decision algorithm that ensures that "you lose" is a correct self-fulfilling prophecy (in fact you ensure that it is the only valid prediction he could make). I would throw the test (score in the 40s) even when my name is not on the list.

The difference between the scenario where he writes the names on the board and the scenario where he doesn't is that in the former, my expectations of his actions don't vary according to my own, whereas in the latter, they do.

Do you also two-box on Transparent Newcomb's?

Replies from: Desrtopa
comment by Desrtopa · 2013-01-02T22:29:34.519Z · LW(p) · GW(p)

I believe you are making a mistake. Specifically, you are implementing a decision algorithm that ensures that "you lose" is a correct self-fulfilling prophecy (in fact you ensure that it is the only valid prediction he could make). I would throw the test (score in the 40s) even when my name is not on the list.

If I were in a position to predict that this were the sort of thing the professor might do, then I would precommit to throwing the test should he implement such a procedure. But you could just as easily end up with the perfect predictor professor saying that in the scoring for this test, he will automatically fail anyone he predicts would throw the test in the previously described scenario. I don't think there's any point in time where making such a precommitment would have positive expected value. By the time I know it would have been useful, it's already too late.

Do you also two-box on Transparent Newcomb's?

Edit: I think I was mistaken about what problem you were referring to. If I'm understanding the question correctly, yes I would, because until the scenario actually occurs I have no reason to suspect any precommitment I make is likely to bring about more favorable results. For any precommitment I could make, the scenario could always be inverted to punish that precommitment, so I'd just do what has the highest expected utility at the time at which I'm presented with the scenario. It would be different if my probability distribution on what precommitments would be useful weren't totally flat.

Replies from: Desrtopa, wedrifid
comment by Desrtopa · 2013-01-03T00:58:47.296Z · LW(p) · GW(p)

As an aside, I'll note that a lot of the solutions bandied around here to decision theory problems remind me of something from Magic: The Gathering which I took notice of back when I still followed it.

When I watched my friends play, one would frequently respond to another's play with "Before you do that, I-" and use some card or ability to counter their opponent's move. The rules of MTG let you do that sort of thing, but I always thought it was pretty silly, because they did not, in fact, have any idea that it would make sense to make that play until after seeing their opponent's move. Once they see their opponent's play, they get to retroactively decide what to do "before" their opponent can do it.

In real life, we don't have that sort of privilege. If you're in a Counterfactual Mugging scenario, for instance, you might be inclined to say "I ought to be the sort of person who would pay Omega, because if the coin had come up the other way, I would be making a lot of money now, so being that sort of person would have positive expected utility for this scenario." But this is "Before you do that-" type reasoning. You could just as easily have ended up in a situation where Omega comes and tells you "I decided that if you were the sort of person who would not pay up in a Counterfactual Mugging scenario, I would give you a million dollars, but I've predicted that you would, so you get nothing."

When you come up with a solution to an Omega-type problem involving some type of precommitment, it's worth asking "would this precommitment have made sense when I was in a position of not knowing Omega existed, or having any idea what it would do even if it did exist?"

In real life, we sometimes have to make decisions dealing with agents who have some degree of predictive power with respect to our thought processes, but their motivations are generally not as arbitrary as those attributed to Omega in most hypotheticals.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-03T04:23:21.984Z · LW(p) · GW(p)

Can you give a specific example of a bandied-around solution to a decision-theory problem where predictive power is necessary in order to implement that solution?

I suspect I disagree with you here -- or, rather, I agree with the general principle you've articulated, but I suspect I disagree that it's especially relevant to anything local -- but it's difficult to be sure without specifics.

With respect to the Counterfactual Mugging you reference in passing, for example, it seems enough to say "I ought to be the sort of person who would do whatever gets me positive expected utility"; I don't have to specifically commit to pay or not pay. Isn't it? But perhaps I've misunderstood the solution you're rejecting.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-03T16:19:00.174Z · LW(p) · GW(p)

Well, if your decision theory tells you you ought to be the sort of person who would pay up in a Counterfactual Mugging, because that gets you positive utility, then you could end up with Omega coming and saying "I would have given you a million dollars if your decision theory said not to pay out in a counterfactual mugging, but since you would, you don't get anything."

When you know nothing about Omega, I don't think there's any positive expected utility in choosing to be the sort of person who would have positive expected utility in a Counterfactual Mugging scenario, because you have no reason to suspect it's more likely than the inverted scenario where being that sort of person will get you negative utility. The probability distribution is flat, so the utilities cancel out.

Say Omega comes to you with a Counterfactual Mugging on Day 1. On Day 0, would you want to be the sort of person who pays out in a Counterfactual Mugging? No, because the probabilities of it being useful or harmful cancel out. On Day 1, when given the dilemma, do you want to be the sort of person who pays out in a Counterfactual Mugging? No, because now it only costs you money and you get nothing out of it.

So there's no point in time where deciding "I should be the sort of person who pays out in a Counterfactual Mugging" has positive expected utility.
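
To see the cancellation numerically, here is a minimal expected-utility sketch, assuming the standard Counterfactual Mugging payoffs (pay $100 on heads, receive $1,000,000 on tails iff you would pay) and a mirror-image anti-mugging that hands the same lottery to non-payers instead; the 50/50 split between the two scenarios is the "flat prior" assumption, and all figures are illustrative.

```python
# Minimal sketch of the flat-prior cancellation argument. The payoffs and
# the mirror-image "anti-mugging" are illustrative assumptions.

def expected_utility(disposition: str, p_mugging: float) -> float:
    lottery = 0.5 * 1_000_000 - 0.5 * 100  # value of being the type Omega rewards
    p_anti = 1.0 - p_mugging
    if disposition == "payer":             # rewarded only in the mugging
        return p_mugging * lottery
    return p_anti * lottery                # non-payer: rewarded only in the anti-mugging

print(expected_utility("payer", 0.5), expected_utility("non-payer", 0.5))  # equal: cancels
print(expected_utility("payer", 0.9), expected_utility("non-payer", 0.9))  # payer ahead if mugging likelier
```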

Reasoning this way means, of course, that you don't get the money in a situation where Omega would only pay you if it predicted you would pay up, but you do get the money in situations where Omega pays out only if you wouldn't pay out. The latter possibility seems less salient from the "before you do that-" standpoint of a person contemplating a Counterfactual Mugging, but there's no reason to assign it a lower probability before the fact. The best you can do is choose according to whatever has the highest expected utility at any given time.

Omega could also come and tell me "I decided that I would steal all your money if you hit the S key on your keyboard between 10:00-11:00 am on a Sunday, and you just did," but I don't let this influence my typing habits. You don't want to alter your decision theories or general behavior in advance of specific events that are no more probable than their inversions.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-01-03T17:01:16.617Z · LW(p) · GW(p)

So there's no point in time where deciding "I should be the sort of person who pays out in a Counterfactual Mugging" has positive expected utility.

Sure, I agree.

What I'm suggesting is that "I should be the sort of person who does the thing that has positive expected utility" causes me to pay out in a Counterfactual Mugging, and causes me to not pay out in a Counterfactual Antimugging, without requiring any prophecy. And that as far as I know, this is representative of the locally bandied-around solutions to decision-theory problems.

Is this not true?

"I decided that I would steal all your money if you hit the S key on your keyboard between 10:00-11:00 am on a Sunday, and you just did,"

I agree that this is not something I can sensibly protect against. I'm not actually sure I would call it a decision theory problem at all.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-03T17:23:25.061Z · LW(p) · GW(p)

What I'm suggesting is that "I should be the sort of person who does the thing that has positive expected utility" causes me to pay out in a Counterfactual Mugging, and causes me to not pay out in a Counterfactual Antimugging, without requiring any prophecy. And that as far as I know, this is representative of the locally bandied-around solutions to decision-theory problems.

In the inversion I suggested, your payout is determined by whether you would pay up in the Counterfactual Mugging. In the original Counterfactual Mugging, Omega predicts whether you would pay, and if you would, you get a 50% shot at a million dollars. In the inverted scenario, Omega predicts whether you would pay in the Counterfactual Mugging, and if you wouldn't, you get a shot at a million dollars.

Being the sort of person who would pay out in a Counterfactual Mugging only brings positive expected utility if you expect the Counterfactual Mugging scenario to be more likely than the inverted Counterfactual Mugging scenario.

The inverted Counterfactual Mugging scenario, like the case where Omega rewards or punishes you based on your keyboard usage, isn't exactly a decision theory problem, in that once it arises, you don't get to make a decision, but it doesn't need to be.

When the question is "should I be the sort of person who pays out in a Counterfactual Mugging?" if the chance of it being helpful is balanced out by an equal chance of it being harmful, then it doesn't matter whether the situations that balance it out require you to make decisions at all, only that the expected utilities balance.

If you take as a premise "Omega simply doesn't do that sort of thing, it only provides decision theory dilemmas where the results are dependent on how you would respond in this particular dilemma," then our probability distribution is no longer flat, and being the sort of person who pays out in a Counterfactual Mugging scenario becomes utility maximizing. But this isn't a premise we can take for granted. Omega is already posited as an entity which can judge your decision algorithms perfectly, and imposes dilemmas which are highly arbitrary.

comment by wedrifid · 2013-01-03T04:30:18.053Z · LW(p) · GW(p)

Edit: I think I was mistaken about what problem you were referring to. If I'm understanding the question correctly, yes I would, because until the scenario actually occurs I have no reason to suspect any precommitment I make is likely to bring about more favorable results. For any precommitment I could make, the scenario could always be inverted to punish that precommitment, so I'd just do what has the highest expected utility at the time at which I'm presented with the scenario. It would be different if my probability distribution on what precommitments would be useful weren't totally flat.

You don't need a precommitment to make the correct choice. You just make it. That does happen to include one boxing on Transparent Newcomb's (and conventional Newcomb's, for the same reason). The 'but what if someone punishes me for being the kind of person who makes this choice' is a fully general excuse to not make rational choices. The reason it is invalid is that every scenario that can be contrived to result in 'bad for you' is one in which your rewards are determined by your behavior in an entirely different game to the one in question.

For example your "inverted Transparent Newcomb's" gives you a bad outcome, but not because of your choice. It isn't anything to do with a decision because you don't get to make one. It is punishing you for your behavior in a completely different game.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-03T16:21:34.322Z · LW(p) · GW(p)

Could you describe the Transparent Newcomb's problem to me so I'm sure we're on the same page?

"What if I face a scenario that punishes me for being the sort of person who makes this choice?" is not a fully general counterargument, it only applies in cases where the expected utilities of the scenarios cancel out.

If you're the sort of person who won't honor promises made under duress, and other people are sufficiently effective judges to recognize this, then you avoid people placing you under duress to extract promises from you. But supposing you're captured by enemies in a war, and they say "We could let you go if you made some promises to help out our cause when you were free, but since we can't trust you to keep them, we're going to keep you locked up and torture you to make your country want to ransom you more."

This doesn't make the expected utilities of "Keep promises made under duress" vs. "Do not keep promises made under duress" cancel out, because you have an abundance of information with respect to how relatively likely these situations are.

Replies from: wedrifid
comment by wedrifid · 2013-01-03T18:43:58.111Z · LW(p) · GW(p)

Could you describe the Transparent Newcomb's problem to me so I'm sure we're on the same page?

Take a suitable description of Newcomb's problem (you know, with Omega and boxes). Then make the boxes transparent. That is the extent of the difference. I assert that being able to see the money makes no difference to whether one should one box or two box (and also that one should one box).

Replies from: Desrtopa
comment by Desrtopa · 2013-01-03T19:28:36.699Z · LW(p) · GW(p)

Well, if you know in advance that Omega is more likely to do this than it is to impose a dilemma where it will fill both boxes only if you two-box, then I'd agree that this is an appropriate solution.

I think that if in advance you have a flat probability distribution for what sort of Omega scenarios might occur (Omega is just as likely to fill both boxes only if you would two-box in the first scenario as it is to fill both boxes only if you would one-box,) then this solution doesn't make sense.

In the transparent Newcomb's problem, when both boxes are filled, does it benefit you to be the sort of person who would one-box? No, because you get less money that way. If Omega is more likely to impose the transparent Newcomb's problem than its inversion, then prior to Omega foisting the problem on you, it does benefit you to be the sort of person who would one-box (and you can't change what sort of person you are mid-problem.)

If Omega only presents transparent Newcomb's problems of the first sort, where the box containing more money is filled only if the person presented with the boxes would one-box, then situations where a person is presented with two transparent boxes of money and picks both will never arise. People who would one-box in the transparent Newcomb's problem come out ahead.

If Omega is equally likely to present transparent Newcomb's problems of the first sort, or inversions where Omega fills both boxes only for people it predicts would two-box in problems of the first sort, then two-boxers come out ahead, because they're equally likely to get the contents of the box with more money, but always get the box with less money, while the one-boxers never do.
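
A minimal payoff sketch of that comparison, assuming conventional Newcomb amounts ($1,000 in the small box, $1,000,000 in the big one) and taking the inversion to fill the big box iff Omega predicts you would two-box in the standard transparent problem; `p_standard` is the assumed probability of facing the standard version rather than the inversion.

```python
SMALL, BIG = 1_000, 1_000_000  # assumed conventional Newcomb amounts

def expected(disposition: str, p_standard: float) -> float:
    # Standard transparent Newcomb's: big box filled iff you'd one-box there.
    standard = BIG if disposition == "one-boxer" else SMALL
    # Inversion: big box filled iff you'd two-box in the standard problem.
    inverted = 0 if disposition == "one-boxer" else SMALL + BIG
    return p_standard * standard + (1 - p_standard) * inverted

print(expected("one-boxer", 0.5), expected("two-boxer", 0.5))  # 500000.0 vs 501000.0
print(expected("one-boxer", 1.0), expected("two-boxer", 1.0))  # 1000000.0 vs 1000.0
```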

You can always contrive scenarios to reward or punish any particular decision theory. The Transparent Newcomb's Problem rewards agents which one-box in the Transparent Newcomb's Problem over agents which two-box, but unless this sort of problem is more likely to arise than ones which reward agents which two-box in Transparent Newcomb's Problem over ones that one-box, that isn't an argument favoring decision theories which say you should one-box in Transparent Newcomb's.

If you keep a flat probability distribution of what Omega would do to you prior to actually being put into a dilemma, expected-utility-maximizing still favors one-boxing in the opaque version of the dilemma (because based on the information available to you, you have to assign different probabilities to the opaque box containing money depending on whether you one-box or two-box,) but not one-boxing in the transparent version.

Replies from: wedrifid
comment by wedrifid · 2013-01-03T19:55:44.189Z · LW(p) · GW(p)

You can always contrive scenarios to reward or punish any particular decision theory. The Transparent Newcomb's Problem rewards agents which one-box in the Transparent Newcomb's Problem over agents which two-box, but unless this sort of problem is more likely to arise than ones which reward agents which two-box in Transparent Newcomb's Problem over ones that one-box, that isn't an argument favoring decision theories which say you should one-box in Transparent Newcomb's.

No, Transparent Newcomb's, Newcomb's and Prisoner's Dilemma with full mutual knowledge don't care what the decision algorithm is. They reward agents that take one box and mutually cooperate for no other reason than they decide to make the decision that benefits them.

You have presented a fully general argument for making bad choices. It can be used to reject "look both ways before crossing a road" just as well as it can be used to reject "get a million dollars by taking one box". It should be applied to neither.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-03T22:03:35.774Z · LW(p) · GW(p)

It's not a fully general counterargument; it demands that you weigh the probabilities of potential outcomes.

If you look both ways at a crosswalk, you could be hit by a falling object that you would have avoided if you hadn't paused in that location. Does that justify not looking both ways at a crosswalk? No, because the probability of something bad happening to you if you don't look both ways at the crosswalk is higher than if you do.

You can always come up with absurd hypotheticals which would punish the behavior that would normally be rational in a particular situation. This doesn't justify being paralyzed with indecision; the probabilities of the absurd hypotheticals materializing are minuscule. But the possibilities of absurd hypotheticals will tend to balance out other absurd hypotheticals.

Transparent Newcomb's Problem is a problem that rewards agents which one-box in Transparent Newcomb's Problem, via Omega predicting whether the agent one-boxes in Transparent Newcomb's Problem and filling the boxes accordingly. Inverted Transparent Newcomb's Problem is one that rewards agents that two-box in Transparent Newcomb's Problem via Omega predicting whether the agent two-boxes in Transparent Newcomb's Problem, and filling the boxes accordingly.

If one type of situation is more likely than the other, you adjust your expected utilities accordingly, just as you adjust your expected utility of looking both ways before you cross the street because you're less likely to suffer an accident if you do than if you don't.

Replies from: wedrifid
comment by wedrifid · 2013-01-04T00:24:36.072Z · LW(p) · GW(p)

Transparent Newcomb's Problem is a problem that rewards agents which one-box in Transparent Newcomb's Problem

Yes.

Inverted Transparent Newcomb's Problem is one that rewards agents that two-box in Transparent Newcomb's Problem via Omega predicting whether the agent two-boxes in Transparent Newcomb's Problem, and filling the boxes accordingly.

That isn't an 'inversion' but instead an entirely different problem in which agents are rewarded for things external to the problem.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-04T03:51:24.017Z · LW(p) · GW(p)

There's no reason an agent you interact with in a decision problem can't respond to how it judges you would react to different decision problems.

Suppose Andy and Sandy are bitter rivals, and each wants the other to be socially isolated. Andy declares that he will only cooperate in Prisoner's Dilemma type problems with people he predicts would cooperate with him, but not Sandy, while Sandy declares that she will only cooperate in Prisoner's Dilemma type problems with people she predicts would cooperate with her, but not Andy. Both are highly reliable predictors of other people's cooperation patterns.

If you end up in a Prisoner's Dilemma type problem with Andy, it benefits you to be the sort of person who would cooperate with Andy, but not Sandy, and vice versa if you end up in a Prisoner's Dilemma type problem with Sandy. If you might end up in a Prisoner's Dilemma type problem with either of them, you have higher expected utility if you pick one in advance to cooperate with, because both would defect against an opportunist willing to cooperate with whichever one they ended up in a Prisoner's Dilemma with first.
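
A minimal sketch of that comparison, with assumed standard Prisoner's Dilemma payoffs (mutual cooperation 3, mutual defection 1, sucker 0, temptation 5) and an assumed 50/50 chance of meeting either rival. The "opportunist" is predicted by both rivals to be willing to cooperate with the other as well, so both defect against them; we assume the opportunist then defects back rather than take the sucker payoff.

```python
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}  # assumed PD payoffs

def my_payoff(disposition: str, rival: str) -> int:
    loyal = disposition == f"{rival}-loyal"  # rivals cooperate only with their own loyalists
    rival_move = "C" if loyal else "D"
    my_move = "C" if loyal else "D"          # defect whenever not loyal to this rival
    return PAYOFF[(my_move, rival_move)]

for disposition in ("andy-loyal", "sandy-loyal", "opportunist"):
    eu = 0.5 * my_payoff(disposition, "andy") + 0.5 * my_payoff(disposition, "sandy")
    print(disposition, eu)  # loyalists: 2.0, opportunist: 1.0
```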

That isn't an 'inversion' but instead an entirely different problem in which agents are rewarded for things external to the problem.

If you want to call it that, you may, but I don't see that it makes a difference. If ending up in Transparent Newcomb's Problem is no more likely than ending up in an entirely different problem which punishes agents for one-boxing in Transparent Newcomb's Problem, then I don't see that it's advantageous to one-box in Transparent Newcomb's Problem. You can draw a line between problems determined by factors external to the problem, and problems determined only by factors internal to the problem, but I don't think this is a helpful distinction to apply here. What matters is which problems are more likely to occur and their utility payoffs.

In any case, I would honestly rather not continue this discussion with you, at least if TheOtherDave is still interested in continuing the discussion. I don't have very high expectations of productivity from a discussion with someone who has such low expectations of my own reasoning as to repeatedly and erroneously declare that I'm calling up a fully general counterargument which could just as well be used to argue against looking both ways at a crosswalk. If possible, I would much rather discuss this with someone who's prepared to operate under the presumption that I'm willing and able to be reasonable.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2013-01-04T04:21:51.184Z · LW(p) · GW(p)

(I haven't followed the discussion, so might be missing the point.)

If ending up in Transparent Newcomb's Problem is no more likely than ending up in an entirely different problem which punishes agents for one-boxing in Transparent Newcomb's Problem, then I don't see that it's advantageous to one-box in Transparent Newcomb's Problem.

If you are actually in problem A, it's advantageous to be solving problem A, even if there is another problem B in which you could have much more likely ended up. You are in problem A by stipulation. At the point where you've landed in the hypothetical of solving problem A, discussing problem B is a wrong thing to do, it interferes with trying to understand problem A. The difficulty of telling problem A from problem B is a separate issue that's usually ruled out by hypothesis. We might discuss this issue, but that would be a problem C that shouldn't be confused with problems A and B, where by hypothesis you know that you are dealing with problems A and B. Don't fight the hypothetical.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-04T06:01:30.401Z · LW(p) · GW(p)

In the case of Transparent Newcomb's though, if you're actually in the problem, then you can already see either that both boxes contain money, or that one of them doesn't. If Omega only fills the second box, which contains more money, if you would one-box, then by the time you find yourself in the problem, whether you would one-box or two-box in Transparent Newcomb's has already had its payoff.

If I would two-box in a situation where I see two transparent boxes which both contain money, that ensures that I won't find myself in a situation where Omega lets me pick whether to one-box or two-box, but only fills both boxes if I would one-box. On the other hand, a person who one-boxes in that situation could not find themselves in a situation where they can pick one or both of two filled boxes, where Omega would only fill both boxes if they would two-box in the original scenario.

So it seems to me that if I follow the principle of solving whatever situation I'm in according to maximum expected utility, then unless the Transparent Newcomb's Problem is more probable, I will become the sort of person who can't end up in Transparent Newcomb's problems with a chance to one-box for large amounts of money, but can end up in the inverted situation which rewards two-boxing, for more money. I don't have the choice of being the sort of person who gets rewarded by both scenarios, just as I don't have the choice of being someone who both Andy and Sandy will cooperate with.

I agree that a one-boxer comes out ahead in Transparent Newcomb's, but I don't think it follows that I should one-box in Transparent Newcomb's, because I don't think having a decision theory which results in better payouts in this particular decision theory problem results in higher utility in general. I think that I "should" be a person who one-boxes in Transparent Newcomb's in the same sense that I "should" be someone who doesn't type between 10:00-11:00 on a Sunday if I happen to be in a world where Omega has, unbeknownst to anyone, arranged to rob me if I do. In both cases I've lucked into payouts due to a decision process which I couldn't reasonably have expected to improve my utility.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2013-01-04T15:49:36.154Z · LW(p) · GW(p)

I agree that a one-boxer comes out ahead in Transparent Newcomb's, but I don't think it follows that I should one-box in Transparent Newcomb's, because I don't think having a decision theory which results in better payouts in this particular decision theory problem results in higher utility in general.

We are not discussing what to do "in general", or the algorithms of a general "I" that should or shouldn't have the property of behaving a certain way in certain problems, we are discussing what should be done in this particular problem, where we might as well assume that there is no other possible problem, and all utility in the world only comes from this one instance of this problem. The focus is on this problem only, and no role is played by the uncertainty about which problem we are solving, or by the possibility that there might be other problems. If you additionally want to avoid logical impossibility introduced by some of the possible decisions, permit a very low probability that either of the relevant outcomes can occur anyway.

If you allow yourself to consider alternative situations, or other applications of the same decision algorithm, you are solving a different problem, a problem that involves tradeoffs between these situations. You need to be clear on which problem you are considering, whether it's a single isolated problem, as is usual for thought experiments, or a bigger problem. If it's a bigger problem, that needs to be prominently stipulated somewhere, or people will assume that it's otherwise and you'll talk past each other.

It seems as if you currently believe that the correct solution for isolated Transparent Newcomb's is one-boxing, but the correct solution in the context of the possibility of other problems is two-boxing. Is it so? (You seem to understand "I'm in Transparent Newcomb's problem" incorrectly, which further motivates fighting the hypothetical, suggesting that for the general player that has other problems on its plate two-boxing is better, which is not so, but it's a separate issue, so let's settle the problem statement first.)

Replies from: Desrtopa
comment by Desrtopa · 2013-01-04T16:46:58.503Z · LW(p) · GW(p)

It seems as if you currently believe that the correct solution for isolated Transparent Newcomb's is one-boxing, but the correct solution in the context of the possibility of other problems is two-boxing. Is it so?

Yes.

I don't think the question of the most advantageous solution for isolated Transparent Newcomb's is likely to be a very useful one, though.

I don't think it's possible to have a general case decision theory which gets the best possible results for every situation (see the Andy and Sandy example, where getting good results for one prisoner's dilemma necessitates getting bad results from the other, so any decision theory wins in at most one of the two.)

That being the case, I don't think that a goal of winning in Transparent Newcomb's Problem is a very meaningful one for a decision theory. The way I see it, it seems like focusing on coming out ahead in Sandy prisoner's dilemmas while disregarding the relative likelihoods of ending up in a dilemma with Andy or Sandy, and assuming that if you ended up in an Andy prisoner's dilemma you could use the same decision process to come out ahead in that too.

comment by wedrifid · 2013-01-04T05:13:38.700Z · LW(p) · GW(p)

If possible, I would much rather discuss this with someone who's prepared to operate under the presumption that I'm willing and able to be reasonable.

Don't confuse an intuition aid that failed to help you with a personal insult. Apart from making you feel bad it'll ensure you miss the point. Hopefully Vladimir's explanation will be more successful.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-04T06:40:40.252Z · LW(p) · GW(p)

I didn't take it as a personal insult, I took it as a mistaken interpretation of my own argument which would have been very unlikely to come from someone who expected me to have reasoned through my position competently and was making a serious effort to understand it. So while it was not a personal insult, it was certainly insulting.

I may be failing to understand your position, and rejecting it only due to a misunderstanding, but from where I stand, your assertion makes it appear tremendously unlikely that you understand mine.

If you think that my argument generalizes to justifying any bad decision, including cases like not looking both ways when I cross the street, when I say otherwise, it would help if you would explain why you think it generalizes in this way in spite of the reasons I've given for believing otherwise, rather than simply repeating the assertion without acknowledging them; otherwise it looks like you're either not making much effort to comprehend my position, or don't care much about explaining yours, and are only interested in contradicting someone you think is wrong.

Edit: I would prefer you not respond to this comment, and in any case I don't intend to respond to a response, because I don't expect this conversation to be productive, and I hate going to bed wondering how I'm going to continue tomorrow what I expect to be a fruitless conversation.

comment by findis · 2013-01-04T05:55:55.695Z · LW(p) · GW(p)

Do you choose to hit me or not?

No, I don't, since you have a time-turner. (To be clear, non-hypothetical-me wouldn't hit non-hypothetical-you either.) I would also one-box if I thought that Omega's predictive power was evidence that it might have a time turner or some other way of affecting the past. I still don't think that's relevant when there's no reverse causality.

Back to Newcomb's problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I'd two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict.

I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes... but doesn't change the boxes.
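
To make the dominance argument explicit, here is a minimal sketch, assuming the usual Newcomb amounts ($1,000 visible, $1,000,000 in the box filled by hair colour). Since the contents are fixed by a static attribute rather than by a prediction of the actual choice, two-boxing adds $1,000 whatever your hair colour.

```python
SMALL, BIG = 1_000, 1_000_000  # assumed conventional amounts

def payoff(hair: str, choice: str) -> int:
    big_box = BIG if hair == "brown" else 0  # contents fixed by hair colour alone
    if choice == "two-box":
        return big_box + SMALL
    return big_box  # one-box: take only the big box

for hair in ("brown", "other"):
    for choice in ("one-box", "two-box"):
        print(hair, choice, payoff(hair, choice))
# two-boxing dominates: it adds $1,000 in every row
```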

Replies from: Desrtopa
comment by Desrtopa · 2013-01-04T06:28:28.665Z · LW(p) · GW(p)

Back to Newcomb's problem: Say that brown-haired people almost always one-box, and people with other hair colors almost always two-box. Omega predicts on the basis of hair color: both boxes are filled iff you have brown hair. I'd two-box, even though I have brown hair. It would be logically inconsistent for me to find that one of the boxes is empty, since everyone with brown hair has both boxes filled. But this could be true of any attribute Omega uses to predict.

If the agent filling the boxes follows a consistent, predictable pattern you're outside of, you can certainly use that information to do this. In Newcomb's Problem though, Omega follows a consistent, predictable pattern you're inside of. It's logically inconsistent for you to two-box and find they both contain money, or pick one box and find it's empty.

I agree that changing my decision conveys information about what is in the boxes and changes my guess of what is in the boxes... but doesn't change the boxes.

Why is whether your decision actually changes the boxes important to you? If you know that picking one box will result in your receiving a million dollars, and picking two boxes will result in getting a thousand dollars, do you have any concern that overrides making the choice that you expect to make you more money?

A decision process of "at all times, do whatever I expect to have the best results" will, at worst, reduce to exactly the same behavior as "at all times, do whatever I think will have a causal relationship with the best results." In some cases, such as Newcomb's problem, it has better results. What do you think the concern with causality actually does for you?

We don't always agree here on what decision theories get the best results (as you can see by observing the offshoot of this conversation between Wedrifid and myself,) but what we do generally agree on here is that the quality of decision theories is determined by their results. If you argue yourself into a decision theory that doesn't serve you well, you've only managed to shoot yourself in the foot.

Replies from: findis
comment by findis · 2013-01-04T06:50:44.516Z · LW(p) · GW(p)

Why is whether your decision actually changes the boxes important to you? [....] If you argue yourself into a decision theory that doesn't serve you well, you've only managed to shoot yourself in the foot.

In the absence of my decision affecting the boxes, taking one box and leaving $1000 on the table still looks like shooting myself in the foot. (Of course if I had the ability to precommit to one-box I would -- so, okay, if Omega ever asks me this I will take one box. But if Omega asked me to make a decision after filling the boxes and before I'd made a precommitment... still two boxes.)

I think I'm going to back out of this discussion until I understand decision theory a bit better.

Replies from: Desrtopa
comment by Desrtopa · 2013-01-04T06:56:15.249Z · LW(p) · GW(p)

I think I'm going to back out of this discussion until I understand decision theory a bit better.

Feel free. You can revisit this conversation any time you feel like it. Discussion threads never really die here; there's no community norm against replying to comments long after they're posted.

comment by johnsonmx · 2012-09-08T20:23:02.835Z · LW(p) · GW(p)

I'm Mike Johnson. I'd estimate I come across a reference to LW from trustworthy sources every couple of weeks, and after working my way through the sequences it feels like the good outweighs the bad and it's worth investing time into.

My background is in philosophy, evolution, and neural nets for market prediction; I presently write, consult, and am in an early-stage tech startup. Perhaps my high-water mark in community exposure has been a critique of the word Transhumanist at Accelerating Future. In the following years, my experience has been more mixed, but I appreciate the topics and tools being developed even if the community seems a tad insular. If I had to wear some established thinkers on my sleeve I'd choose Paul Graham, Lawrence Lessig, Steve Sailer, Gregory Cochran, Roy Baumeister, and Peter Thiel. (I originally had a comment here about having an irrational attraction toward humility, but on second thought, that might rule out Gregory "If I have seen farther than others, it's because I'm knee-deep in dwarves" Cochran… Hmm.)

Cards-on-the-table, it's my impression that

(1) Lesswrong and SIAI are doing cool things that aren't being done anywhere else (this is not faint praise);

(2) The basic problem of FAI as stated by SIAI is genuine;

(3) SIAI is a lightning rod for trolls and cranks, which is really detrimental to the organization (the metaphor of autoimmune disease comes to mind) and seems partly its own fault;

(4) Much of the work being done by SIAI and LW will turn out to be a dead-end. Granted, this is true everywhere, but in particular I'm worried that axiomatic approaches to verifiable friendliness will prove brittle and inapplicable (I do not currently have an alternative);

(5) SIAI has an insufficient appreciation for realpolitik;

(6) SIAI and LW seem to have a certain distaste for research on biologically-inspired AGI, due in parts to safety concerns, an organizational lack of expertise in the area, and (in my view) ontological/metaphysical preference. I believe this distaste is overly limiting and also leads to incorrect conclusions.

Many of these impressions may be wrong. I aim to explore the site, learn, change my mind if I'm wrong, and hopefully contribute. I appreciate the opportunity, and I hope my unvarnished thoughts here haven't soured my welcome. Hello!

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-08T21:22:24.889Z · LW(p) · GW(p)

FWIW, I find your unvarnished thoughts, and the cogency with which you articulate them, refreshing. (The thoughts aren't especially novel, but the cogency is.)

In particular, I'm interested in your thoughts on what benefits a greater focus on biologically inspired AGI might provide that a distaste for it would limit LW from concluding/achieving.

Replies from: johnsonmx
comment by johnsonmx · 2012-09-09T07:14:29.730Z · LW(p) · GW(p)

Thank you.

I'd frame why I think biology matters in FAI research in terms of research applicability and toolbox dividends.

On the first reason--- applicability--- I think more research focus on biologically-inspired AGI would make a great deal of sense because the first AGI might be a biologically-inspired black box, and axiom-based FAI approaches may not particularly apply to such a system. I realize I'm (probably annoyingly) retreading old ground here with regard to which method will/should win the AGI race, but SIAI's assumptions seem to run counter to the assumptions of the greater community of AGI researchers, and it's not obvious to me the focus on math and axiology isn't a simple case of SIAI's personnel backgrounds being stacked that way. 'If all you have is a hammer,' etc. (I should reiterate that I don't have any alternatives to offer here and am grateful for all FAI research.)

The second reason I think biology matters in FAI research--- toolbox dividends--- might take a little bit more unpacking. (Forgive me some imprecision, this is a complex topic.)

I think it's probable that anything complex enough to deserve the term AGI would have something akin to qualia/emotions, unless it was specifically designed not to. (Corollary: we don't know enough about what Chalmers calls "psychophysical laws" to design something that lacks qualia/emotions.) I think it's quite possible that an AGI's emotions, if we did not control for their effects, could produce complex feedback which would influence its behavior in unplanned ways (though perfectly consistent with / determined by its programming/circuitry). I'm not arguing for a ghost in the machine, just that the assumptions which allow us to ignore what an AGI 'feels' when modeling its behavior may prove to be leaky abstractions in the face of the complexity of real AGI.

Axiological approaches to FAI don't seem to concern themselves with psychophysical laws (modeling what an AGI 'feels'), whereas such modeling seems a core tool for biological approaches to FAI. I find myself thinking being able to model what an AGI 'feels' will be critically important for FAI research, even if it's axiom/math-based, because we'll be operating at levels of complexity where the abstractions we use to ignore this stuff can't help but leak. (There are other toolbox-based arguments for bringing biology into FAI research which are a lot simpler than this one, but this is on the top of my list.)

Replies from: TheOtherDave, Kawoomba, hairyfigment
comment by TheOtherDave · 2012-09-09T16:51:18.649Z · LW(p) · GW(p)

(nods)

Regarding your first point... as I understand it, SI (it no longer refers to itself as SIAI, incidentally) rejects as too dangerous to pursue any approach (biologically inspired or otherwise) that leads to a black-box AGI, because a black-box AGI will not constrain its subsequent behavior in ways that preserve the things we value except by unlikely chance. The idea is that we can get safety only by designing safety considerations into the system from the ground up; if we give up control of that design, we give up the ability to design a safe system.

Regarding your second point... there isn't any assumption that AGIs won't feel stuff, or that its feelings can be ignored. (Nor even that they are mere "feelings" rather than genuine feelings.) Granted, Yudkowsky talks here about going out of his way to ensure something like that, but he treats this as an additional design constraint that adequate engineering knowledge will enable us to implement, not as some kind of natural default or simplifying assumption. (Also, I haven't seen any indication that this essay has particularly informed SI's subsequent research. Those more closely -- which is to say, at all -- affiliated with SI might choose to correct me here.) And there certainly isn't an expectation that its behavior will be predictable at any kind of granular level.

What there is is the expectation that a FAI will be designed such that its unpredictable behaviors (including feelings, if it has feelings) will never act against its values, and such that its values won't change over time.

So, maybe you're right that explicitly modeling what an AGI feels (again, no scare-quotes needed or desired) is critically important to the process of AGI design. Or maybe not. If it turns out to be, I expect that SI is as willing to approach design that way as any other. (Which should not be taken as an expression of confidence in their actual ability to design an AGI, Friendly or otherwise.)

Personally, I find it unlikely that such explicit modeling will be useful, let alone necessary. I expect that AGI feelings will be a natural consequence of more fundamental aspects of the AGI's design interacting with its environment, and that explicitly modeling those feelings will be no more necessary than explicitly modeling how it solves a math problem. A sufficiently powerful AGI will develop strategies for solving math problems, and will develop feelings, unless specifically designed not to. I expect that both its problem-solving strategies and its feelings will surprise us.

But I could be wrong.

Replies from: johnsonmx
comment by johnsonmx · 2012-09-09T18:51:03.733Z · LW(p) · GW(p)

I definitely agree with your first paragraph (and thanks for the tip on SIAI vs SI). The only caveat is if evolved/brain-based/black-box AGI is several orders of magnitude easier to create than an AGI with a more modular architecture where SI's safety research can apply, that's a big problem.

On the second point, what you say makes sense. Particularly, AGI feelings haven't been completely ignored at LW; if they prove important, SI doesn't have anything against incorporating them into safety research; and AGI feelings may not be material to AGI behavior anyway.

However, I still do think that an ability to tell what feelings an AGI is experiencing-- or more generally, to look at any physical process and derive what emotions/qualia are associated with it-- will be critical. I call this a "qualia translation function".

Leaving aside the ethical imperatives to create such a function (which I do find significant-- the suffering of not-quite-good-enough-to-be-sane AGI prototypes will probably be massive as we move forward, and it behooves us to know when we're causing pain), I'm quite concerned about leaky reward signal abstractions.

I imagine a hugely-complex AGI executing some hugely-complex decision process. The decision code has been checked by Very Smart People and it looks solid. However, it just so happens that whenever it creates a cat it (internally, privately) feels the equivalent of an orgasm. Will that influence/leak into its behavior? Not if it's coded perfectly. However, if something of its complexity was created by humans, I think the chance of it being coded perfectly is Vanishingly small. We might end up with more cats than we bargained for. Our models of the safety and stability dynamic of an AGI should probably take its emotions/qualia into account. So I think all FAI programmes really would benefit from such a "qualia translation function".

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-09T20:11:50.819Z · LW(p) · GW(p)

I agree that, in order for me to behave ethically with respect to the AGI, I need to know whether the AGI is experiencing various morally relevant states, such as pain or fear or joy or what-have-you. And, as you say, this is also true about other physical systems besides AGIs; if monkeys or dolphins or dogs or mice or bacteria or thermostats have morally relevant states, then in order to behave ethically it's important to know that as well. (It may also be relevant for non-physical systems.)

I'm a little wary of referring to those morally relevant states as "qualia" because that term gets used by so many different people in so many different ways, but I suppose labels don't matter much... we can call them that for this discussion if you wish, as long as we stay clear about what the label refers to.

Leaving that aside... so, OK. We have a complex AGI with a variety of internal structures that affect its behavior in various ways. One of those structures is such that creating a cat gives the AGI an orgasm, which it finds rewarding. It wants orgasms, and therefore it wants to create cats. Which we didn't expect.

So, OK. If the AGI is designed such that it creates more cats in this situation than it ought to (regardless of our expectations), that's a problem. 100% agreed.

But it's the same problem whether the root cause lies within the AGI's emotions, or its reasoning, or its qualia, or its ability to predict the results of creating cats, or its perceptions, or any other aspect of its cognition.

You seem to be arguing that it's a special problem if the failure is due to emotions or qualia or feelings?

I'm not sure why.

I can imagine believing that if I were overgeneralizing from my personal experience. When it comes to my own psyche, my emotions and feelings are a lot more mysterious than my surface-level reasoning, so it's easy for me to infer some kind of intrinsic mysteriousness to emotions and feelings that reasoning lacks. But I reject that overgeneralization. Emotions are just another cognitive process. If reliably engineering cognitive processes is something we can learn to do, then we can reliably engineer emotions. If it isn't something we can learn to do, then we can't reliably engineer emotions... but we can't reliably engineer AGI in general either. I don't think there's anything especially mysterious about emotions, relative to the mysteriousness of cognitive processes in general.

So, if your reasons for believing that are similar to the ones I'm speculating here, I simply disagree. If you have other reasons, I'm interested in what they are.

Replies from: johnsonmx
comment by johnsonmx · 2012-09-09T20:37:36.291Z · LW(p) · GW(p)

I don't think an AGI failing to behave in the anticipated manner due to its qualia* (orgasms during cat creation, in this case) is a special or mysterious problem, one that must be treated differently than errors in its reasoning, prediction ability, perception, or any aspect of its cognition. On second thought, I do think it's different: it actually seems less important than errors in any of those systems. (And if an AGI is Provably Safe, it's safe-- we need only worry about its qualia from an ethical perspective.) My original comment here is (I believe) fairly mild: I do think the issue of qualia will involve a practical class of problems for FAI, and knowing how to frame and address them could benefit from more cross-pollination from more biology-focused theorists such as Chalmers and Tononi. And somewhat more boldly, a "qualia translation function" would be of use to all FAI projects.

*I share your qualms about the word, but there really are few alternatives with less baggage, unfortunately.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-09T23:22:56.206Z · LW(p) · GW(p)

Ah, I see. Yeah, agreed that what we are calling qualia here (not to be confused with its usage elsewhere) underlie a class of practical problems. And what you're calling a qualia translation function (which is related to what EY called a non-person predicate elsewhere, though finer-grained) is potentially useful for a number of reasons.

comment by Kawoomba · 2012-09-09T10:13:41.622Z · LW(p) · GW(p)

because we'll be operating at levels of complexity where the abstractions we use to ignore this stuff can't help but leak.

If that were the case (and it may very well be), there goes provably friendly AI, for to guarantee a property under all circumstances, it must be upheld from the bottom layer upwards.

Replies from: johnsonmx
comment by johnsonmx · 2012-09-09T21:26:09.465Z · LW(p) · GW(p)

I think it's possible that any leaky abstraction used in designing FAI might doom the enterprise. But if that's not true, we can use this "qualia translation function" to make leaky abstractions in an FAI context a tiny bit safer(?).

E.g., if we're designing an AGI with a reward signal, my intuition is we should either (1) align our reward signal with actual pleasurable qualia (so if our abstractions leak it matters less, since the AGI is drawn to maximize what we want it to maximize anyway), or (2) implement the AGI in an architecture/substrate which produces as little emotional qualia as possible, so there's little incentive for behavior to drift.

My thoughts here are terribly laden with assumptions and could be complete crap. Just thinking out loud.

comment by hairyfigment · 2012-09-09T18:38:52.487Z · LW(p) · GW(p)

more research focus on biologically-inspired AGI

As a layman I don't have a clear picture of how to start doing that. How would it differ from this? Looks like you can find the paper in question here (WARNING: out-of-date 2002 content).

Replies from: johnsonmx
comment by johnsonmx · 2012-09-09T20:40:37.989Z · LW(p) · GW(p)

I'd say nobody does! But a little less glibly, I personally think the most productive strategy in biologically-inspired AGI would be to focus on tools that help quantify the unquantified. There are substantial side-benefits to such a focus on tools: what you make can be of shorter-term practical significance, and you can test your assumptions.

Chalmers and Tononi have done some interesting work, and Tononi's work has also had real-world uses. I don't see Tononi's work as immediately applicable to FAI research but I think it'll evolve into something that will apply.

It's my hope that the (hypothetical, but clearly possible) "qualia translation function" I mention above could be a tool that FAI researchers could use and benefit from regardless of their particular architecture.

comment by skeptical_lurker · 2012-07-28T18:51:25.616Z · LW(p) · GW(p)

Hello everyone! Like many people, I come to this site via an interest in transhumanism, although it seems unlikely to me that FAI implementing CEV can actually be designed before the singularity (I can explain why, and possibly even what could be done instead, but it suddenly occurred to me that it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma...).

Oddly enough, I am not interested in improving epistemic rationality right now, partially because I am already quite good at it. But more than that, I am trying to switch it off when talking to other people, for the simple reason (and I'm sure this has already been pointed out before) that if you compare three people, one who estimates the probability of an event at 110%, one who estimates it at 90%, and one who compensates for overconfidence bias and estimates it at 65%, the first two will win friends and influence people, while the third will seem indecisive (unless they are talking to other rationalists). I think I am borderline asperger's (again, like many people here) and optimizing social skills probably takes precedence over most other things.

I am currently doing a PhD in "absurdly simplistic computational modeling of the blatantly obvious" which better damn well have some signaling value. In my spare time, to stop my brain turning to mush, among other things I am writing a story which is sort of rationalist, in that some of the characters keep using science effectively even when the world is going crazy and the laws of physics seem to change depending on whether you believe in them. On the other hand, some of the characters are (a) heroes/heroines (b) awesomely successful (c) hippies on acid who do not believe in objective reality (not that I am implying that all hippies/people who use lsd are irrational). Maybe the point of the story is that you need more than just rationality? Or that some people are powerful because of rationality, while others have imagination, and that friendship combines their powers in a My Little Pony-like fashion? Or maybe it's all just an excuse for pretentious philosophy and psychic battles?

Replies from: robert-miles, wedrifid, John_Maxwell_IV, Swimmer963
comment by Robert Miles (robert-miles) · 2012-07-28T20:23:15.913Z · LW(p) · GW(p)

I am not interested in improving epistemic rationality right now, partially because I am already quite good at it.

But remember that it's not just your own rationality that benefits you.

it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma

Presume away. Karma doesn't win arguments, arguments win karma.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2012-07-29T20:21:02.048Z · LW(p) · GW(p)

But remember that it's not just your own rationality that benefits you.

Are you saying that improving epistemic rationality is important because it benefits others as well as myself? This is true, but there are many other forms of self-improvement that would also have knock-on effects that benefit others.

I have actually read most of the relevant sequences, epistemic rationality really isn't low-hanging fruit anymore for me, although I wish I had known about cognitive biases years ago.

Replies from: robert-miles
comment by Robert Miles (robert-miles) · 2012-07-30T11:18:04.238Z · LW(p) · GW(p)

Are you saying that improving epistemic rationality is important because it benefits others as well as myself?

No, I'm saying that improving the epistemic rationality of others benefits everyone, including yourself. It's not just about improving our own rationality as individuals, it's about trying to improve the rationality of people-in-general - 'raising the sanity waterline'.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2012-07-31T13:17:06.966Z · LW(p) · GW(p)

Ok, I see what you mean now. Yes, this is often true, but again, I am trying to be less preachy (at least IRL) about rationality - if someone believes in astrology, or faith healing, or reincarnation then: (a) their beliefs probably bring them comfort; (b) trying to persuade them is often like banging my head against a brick wall; and (c) even the notion that there can be such a thing as a correct fact, independent of subjective mental states, is very threatening to some people and I don't want to start pointless arguments.

So unless they are acting irrationally in a way which harms other people, or they seem capable of having a sensible discussion, or I am drunk, I tend to leave them be.

comment by wedrifid · 2012-07-29T02:03:03.575Z · LW(p) · GW(p)

Hello everyone! Like many people, I come to this site via an interest in transhumanism, although it seems unlikely to me that FAI implementing CEV can actually be designed before the singularity

Many here would agree with you. (And, for instance, consider a ~10% chance of success better than near certain extinction.)

Replies from: skeptical_lurker, None
comment by skeptical_lurker · 2012-07-29T19:24:38.567Z · LW(p) · GW(p)

I agree that 10% chance of success is better than near zero, and furthermore I agree that expected utility maximization means that putting in a great deal of effort to achieve a positive outcome is wiser than saying "oh well, we're doomed anyway, might as well party hard and make the most of the time we have left". However, the question is whether, if FAI has a low probability of success, other possibilities, e.g. tool AI, are a better option to pursue.

comment by [deleted] · 2012-07-29T02:15:37.137Z · LW(p) · GW(p)

Many here would agree with you. (And, for instance, consider a ~10% chance of success better than near certain extinction.)

Would you say that many people here (and yourself?) believe that the probable end of our species is within the next century or two?

Replies from: Nornagest, wedrifid
comment by Nornagest · 2012-07-29T03:01:21.106Z · LW(p) · GW(p)

The last survey reported that Less Wrongers on average believe that humanity has about a 68% chance of surviving the century without a disaster killing >90% of the species. (Median 80%, though, which might be a better measure of the community feeling than the mean in this case.) That's a lower bar than actual extinction, but also a shorter timescale, so I expect the answer to your question would be in the same ballpark.

comment by wedrifid · 2012-07-29T03:07:12.528Z · LW(p) · GW(p)

Would you say that many people here (and yourself?) believe that the probable end of our species is within the next century or two?

For myself: Yes! p(extinct within 200 years) > 0.5

comment by John_Maxwell (John_Maxwell_IV) · 2012-07-28T19:03:21.855Z · LW(p) · GW(p)

Welcome!

I can explain why, and possibly even what could be done instead, but it suddenly occurred to me that it seems presumptuous of me to criticize a theory put forward by very smart people when I only have 1 karma...

IMO you should definitely do it. Even if LW karma is a good indicator of good ideas, more information rarely hurts, especially on a topic as important as this.

Replies from: skeptical_lurker
comment by skeptical_lurker · 2012-07-31T13:20:12.755Z · LW(p) · GW(p)

Ok - although maybe I should stick it in its own thread?

I realize much of this has been said before.

Part 1: AGI will come before FAI, because:

Complexity of algorithm design:

Intuitively, FAI seems orders of magnitude more complex than AGI. If I decided to start trying to program an AGI tomorrow, I would have ideas on how to start, and maybe even make a minuscule amount of progress. Ben Goertzel even has a (somewhat optimistic) roadmap for AGI in a decade. Meanwhile, afaik FAI is still stuck at the stage of Löb's theorem.
The fact that EY seems to be focusing on promoting rationality and writing (admittedly awesome) Harry Potter fanfiction seems to indicate that he doesn't currently know how to write FAI (and nor does anyone else), otherwise he would be focusing on that now; instead he is planning for the long term.

Computational complexity: CEV requires modelling (and extrapolating) every human mind on the planet, while avoiding the creation of sentient entities. While modelling might be cheaper than ~10^17 flops per human due to shortcuts, I doubt it's going to come cheap. Randomly sampling a subset of humanity to extrapolate from, at least initially, could make this problem less severe. Furthermore, this can be partially circumvented by saying that the AI follows a specific utility function while bootstrapping to enough computing power to implement CEV, but then you have the problem of allowing it to bootstrap safely. Having to prove friendliness of each step in self-improvement strikes me as something that could also be costly. Finally, I get the impression that people are considering using Solomonoff induction. It's uncomputable, and while I realize that there exist approximations, I would imagine that these would be extremely expensive when calculating anything non-trivial. Is there any reason for using SI for FAI more than AGI, e.g. something to do with provability about the program's actions?
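
(A back-of-the-envelope check of that cost, treating the ~10^17 figure as flop/s per emulated brain; the population and supercomputer numbers below are rough assumptions, not anyone's defended estimates.)

```python
# Order-of-magnitude cost of modelling every living human at once.
humans = 7e9            # approximate 2012 world population (assumed)
flops_per_brain = 1e17  # assumed flop/s per emulated human mind
supercomputer = 2e16    # flop/s, roughly a top-ranked 2012 machine (assumed)

total = humans * flops_per_brain
print(f"{total:.0e} flop/s")                    # ~7e+26
print(f"{total / supercomputer:.0e} machines")  # ~4e+10 top supercomputers
```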

Infeasibility of relinquishment: If you can't convince Ben Goertzel that FAI is needed, even though he is familiar with the arguments and is an advisor to SIAI, you're not going to get anywhere near a universal consensus on the matter. Furthermore, AI is increasingly being used in financial and possibly soon military applications, and so there are strong incentives to speed the development of AI. While these uses are unlikely to be full AGI, they could provide building blocks – I can imagine a plausible situation where an advanced AI that predicts the stock exchange could easily be modified to be a universal predictor.
The most powerful incentive to speed up AI development is the sheer number of people who die every day, and the amount of negentropy lost in the case that the 2nd law of thermodynamics cannot be circumvented. Even if there could be a worldwide ban on non-provably safe AGI, work would still probably continue in secret by people who thought the benefits of an earlier singularity outweighed the risks, and/or were worried about ideologically opposed groups getting there first.

Financial bootstrapping: If you are ok with running a non-provably friendly AGI, then even in the early stages when, for example, your AI can write simple code or make reasonably accurate predictions but not speak English or make plans, you can use these abilities to earn money and buy more hardware/programmers. This seems to be part of the approach Ben is taking.

Coming in Part II: is there any alternative? (And doing nothing is not an alternative! Even if FAI is unlikely to work, it's better than giving up!)

Replies from: shminux
comment by shminux · 2012-07-31T21:33:38.731Z · LW(p) · GW(p)

Definitely worth its own Discussion post, once you have min karma, which should not take long.

Replies from: beoShaffer
comment by beoShaffer · 2012-07-31T21:52:32.185Z · LW(p) · GW(p)

They already have it.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-07-28T22:05:09.773Z · LW(p) · GW(p)

Welcome!

But more than that, I am trying to switch it off when talking to other people, for the simple reason (and I'm sure this has already been pointed out before) that if you compare three people, one who estimates the probability of an event at 110%, one who estimates it at 90%, and one who compensates for overconfidence bias and estimates it at 65%, the first two will win friends and influence people, while the third will seem indecisive.

Made me think of this article. Yes, you may be able, in the short run, to win friends and influence people by tricking yourself into being overconfident. But that belief is only in your head and doesn't affect the universe, and thus doesn't affect the probability of Event X happening. Which means that if, realistically, X is 65% likely to happen, then you, with your overconfidence, claiming that X is bound to happen, will end up looking like a fool 35% of the time, and will make it hard for yourself to leave a line of retreat.

Conclusion: in the long run, it's very good to be honest with yourself about your predictions of the future, and probably preferable to be honest with others, too, if you want to recruit their support.
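
(One way to make "looking like a fool 35% of the time" concrete is a proper scoring rule. A minimal sketch, assuming the event really does happen 65% of the time; in expectation, the calibrated report beats the overconfident one.)

```python
# Expected Brier score (lower is better) of reporting confidence p
# for an event that actually occurs with frequency q.
def expected_brier(p, q=0.65):
    return q * (1 - p) ** 2 + (1 - q) * p ** 2

for p in (0.65, 0.90):
    print(f"report {p:.0%}: expected Brier = {expected_brier(p):.3f}")
# report 65%: ~0.23
# report 90%: ~0.29
```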

Replies from: skeptical_lurker, TheOtherDave
comment by skeptical_lurker · 2012-07-29T19:43:10.119Z · LW(p) · GW(p)

Excellent points, and of course it is situation-dependent - if one makes erroneous predictions in archived forms of communication, e.g. these posts, then yes, these predictions can come back to haunt you; but often, especially in non-archived communications, people will remember the correct predictions and forget the false ones. It should go without saying that I do not intend to be overconfident on LW - if I were going to be, the last thing I would do is announce the intention! In a strange way, I seem to want to hold three different beliefs:

1) An accurate assessment of what will happen, for planning my own actions.

2) A confident, stopping just short of arrogant, belief in my predictions, for impressing non-rationalists.

3) An unshakeable belief in my own invincibility, so that psychosomatic effects keep me healthy.

Unfortunately, this kinda sounds like "I want to have multiple personality disorder".

Replies from: Strange7
comment by Strange7 · 2012-08-01T02:22:06.566Z · LW(p) · GW(p)

If you're going to go that route, at least research it first. For example:

http://healthymultiplicity.com/

Replies from: skeptical_lurker
comment by skeptical_lurker · 2012-08-01T11:34:08.911Z · LW(p) · GW(p)

Thanks for the advice, but I don't actually want to have multiple personality disorder - I was just drawing an analogy.

comment by TheOtherDave · 2012-07-28T23:58:47.506Z · LW(p) · GW(p)

Hm.

So, call -C1 the social cost of reporting a .9 confidence of something that turns out false, and -C2 the social cost of reporting a .65 confidence of something that turns out false. Call C3 the benefit of reporting .9 confidence of something true, and C4 the benefit of .65 confidence.

How confident are you that (.65C3 - .35C1) < (.65C4 - .35C2)?
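
(To put numbers on it: a sketch with invented payoff values. The C's below are placeholders for whatever one's social circle actually rewards and punishes; pick different ones and the inequality flips, which is of course the whole question.)

```python
q = 0.65            # the event's true probability
C1, C2 = 5.0, 1.0   # assumed social cost of a wrong call at 90% vs at 65%
C3, C4 = 3.0, 1.5   # assumed social benefit of a right call at 90% vs at 65%

ev_overconfident = q * C3 - (1 - q) * C1  # report 90% confidence
ev_calibrated    = q * C4 - (1 - q) * C2  # report 65% confidence
print(ev_overconfident, ev_calibrated)    # ~0.2 vs ~0.625 with these numbers
```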

Replies from: skeptical_lurker, Swimmer963
comment by skeptical_lurker · 2012-07-29T19:46:25.405Z · LW(p) · GW(p)

In certain situations, such as sporting events which do not involve betting, my confidence that (.65C3 - .35C1) < (.65C4 - .35C2) is at most 10%. In these situations confidence is valued far more than epistemic rationality.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-07-29T03:42:19.119Z · LW(p) · GW(p)

I would say I'm about 75% confident that (.65C3 -.35C1) < (.65C4-.35C2)... But one of the reasons I don't even want to play that game is that I feel I am completely unqualified to estimate probabilities about that, and most other things. I would have no idea how to go about estimating the probability of, for example, the Singularity occurring before 2050...much less how to compensate for biases in my estimate.

I think I also have somewhat of an ick reaction towards the concept of "tricking" people to get what you want, even if in a very subtle form. I just...like...being honest, and it's hard for me to tell if my arguments about honesty being better are rationalizations because I don't want being dishonest to be justifiable.

Replies from: Mass_Driver, TheOtherDave
comment by Mass_Driver · 2012-07-29T05:20:16.961Z · LW(p) · GW(p)

The way to bridge that gap is to only volunteer predictions when you're quite confident, and otherwise stay quiet, change the subject, or murmur a polite assent. You're absolutely right that explicitly declaring a 65% confidence estimate will make you look indecisive -- but people aren't likely to notice that you make predictions less often than other people -- they'll be too focused on how, when you do make predictions, you have an uncanny tendency to be correct... and also on how you're pleasantly modest and demure.

comment by TheOtherDave · 2012-07-29T07:43:04.088Z · LW(p) · GW(p)

(nods) That makes sense.

comment by cjb230 · 2012-07-21T16:41:13.335Z · LW(p) · GW(p)

Hi! Given how much time I've spent reading this site and its relatives, this post is overdue.

I'm 35, male, British and London-based, with a professional background in IT. I was raised Catholic, but when I was about 12, I had a de-conversion experience while in church. I remember leaving the pew during mass to go to the toilet, then walking back down the aisle during the eucharist, watching the priest moving stuff around the altar. It suddenly struck me as weird that so many people had gathered to watch a man in a funny dress pour stuff from one cup to another. So I identified as atheist or humanist for a long time. I can't remember any incident that made me start to identify as a rationalist, but I've been increasingly interested in evidence, biases and knowledge for over ten years now.

I've been lucky, I think, to have some breadth in my education: I studied Physics & Philosophy as an undergrad, Computer Science as a postgrad, and more recently rounded that off with an MBA. This gives me a handy toolset for approaching new problems, I think. I definitely want to learn more statistics though - it feels like there's a big gap in the arsenal.

There are a few stand-out things I have picked out from LW and OB so far. "Noticing that I am confused", and running toward that feeling rather than away from it, has helped at work. "Dissolving the question" has helped me to clarify some problems, and I'd like to be better at it. The material on how words can mislead has helped me to pay more attention to what people mean in discussion.

Non-rationality stuff: my lust to learn new things runs ahead of my ability to follow through, so I have far too many books! Like many people here, I have akrasia issues. I am interested in what can be done to improve quantity and quality of life, as well as productivity, including fitness and mindfulness meditation. Lastly, I'm taking a long trip to LA, flying on August 1, and I'd like to meet up with the LW community there.

comment by [deleted] · 2013-03-19T04:25:46.222Z · LW(p) · GW(p)

Background:

21-year-old transgender-neither. I spent 13 years enveloped by Mormon culture and ideology, growing up in a sheltered environment. Then, everything changed when the Fire Nation attacked.

Woops. Off-track.

I want my actions to matter, not from others remembering them but from me being alive to remember them. In simpler terms, I want to live for a long time - maybe forever. Death should be a choice, not an unchangeable eventuality.

But I don't know where to start; I feel overwhelmed by all the things I need to learn.

So I've come here. I'm reading the sequences and trying to get a better grasp on thinking rationally, etc., but was hoping to get pointers from the more experienced.

What is needed right now? I want to do what I can to help not only myself, but those whose paths I cross.

~Jenna

Replies from: Alicorn, Nisan
comment by Alicorn · 2013-03-19T05:51:05.707Z · LW(p) · GW(p)

transgender-neither

Is this the same thing as "agender"?

Then, everything changed when the Fire nation attacked.

<3!!

Replies from: None
comment by [deleted] · 2013-03-19T20:13:18.782Z · LW(p) · GW(p)

Yes, it's the same. Transgender-neither sounds better to me, though, so I used that term.

But if I find that agender is more accessible I'll switch.

And yep, I'm an Avatar: The Last Airbender junkie. :)

comment by Nisan · 2013-03-19T06:17:56.578Z · LW(p) · GW(p)

Welcome! Have you considered signing up for cryonics?

Replies from: None
comment by [deleted] · 2013-03-19T20:08:29.529Z · LW(p) · GW(p)

Aside from the occasional X-files episode and science fiction reading, I don't know much about cryonics.

I considered it as a possibility but dislike that it means I'm 'in suspense' while the world is continuing on without me. I want to be an active participant! :D

Replies from: shminux
comment by shminux · 2013-03-19T20:54:38.140Z · LW(p) · GW(p)

I want to be an active participant!

Certainly, but when you no longer can be, it's nice to have an option of becoming one again some day.

Replies from: EHeller
comment by EHeller · 2013-03-20T00:20:15.864Z · LW(p) · GW(p)

Option might be too strong a word. It's nice to have the vanishingly small possibility. I think it's important for transhumanists to remind ourselves that cryonics is unlikely to actually work; it's just the only hail-mary available.

Replies from: Error, Eliezer_Yudkowsky
comment by Error · 2013-03-20T03:04:30.486Z · LW(p) · GW(p)

I think it might be important to remind others of that too, when discussing the subject. Especially for people who are signed up but have a skeptical social circle, "this seems like the least-bad of a set of bad options" may be easier for them to swallow than "I believe I'm going to wake up one day."

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-20T03:12:22.437Z · LW(p) · GW(p)

Far as I can tell, the basic tech in cryonics should basically work. Storage organizations are uncertain and so is the survival of the planet. But if we're told that the basic cryonics tech didn't work, we've learned some new fact of neuroscience unknown to present-day knowledge.

Don't assign vanishingly small probabilities to things just because they sound weird, or it sounds less likely to get funny looks if you can say that it's just a tiny chance. That is not how 'probability' works. Probabilities of basic cryonics tech working are questions of neuroscience, full stop; if you know the basic tech has a tiny probability of working, you must know something about current vitrification solutions or the operation of long-term memory which I do not.

Replies from: Kawoomba, EHeller, shminux, Dreaded_Anomaly, Error
comment by Kawoomba · 2013-03-20T07:57:03.675Z · LW(p) · GW(p)

Probabilities of basic cryonics tech working are questions of neuroscience, full stop

I'd say full speed ahead, Cap'n. Basic cryonics tech working - while being a sine qua non - isn't the ultimate question for people signing up for cryonics. It's just a term in the probability calculation for the actual goal: "Will I be revived (in some form that would be recognizable to my current self as myself)?" (You've mentioned that in the parent comment, but it deserves more than a passing remark.)

And that most decidedly requires a host of complex assumptions, such as "an agent / a group of agents will have an interest in expending resources into reviving a group of frozen old-version homo sapiens, without any enhancements, me among them", "the future agents' goals cannot be served merely by reading my memory engrams, then using them as a database, without granting personhood", "there won't be so many cryo-patients at a future point (once it catches on with better tech) that thawing all of them would be infeasible, or disallowed", not to mention my favorite "I won't be instantly integrated into some hivemind in which I lose all traces of my individuality".

What we're all hoping for, of course, is for a benevolent super-current-human agent - e.g. an FAI - to care enough about us to solve all the technical issues and grant us back our agent-hood. By construction at least in your case the advent of such an FAI would be after your passing (you wouldn't be frozen otherwise). That means that you (of all people) would also need to qualify the most promising scenario "there will be a friendly AI to do it" with "and it will have been successfully implemented by someone other than me".

Also, with current tech not only would true x-risks preclude you from ever being revived, even non x-risk catastrophic events (partial civilizatory collapse due to Malthusian dynamics etc.) could easily destroy the facility you're held in, or take away anyone's incentive to maintain it. (TW: That's not even taking into account Siam the Star Shredder.)

I'm trying to avoid motivated cognition here, but there are a lot of terms going into the actual calculation, and while that in itself doesn't mean the probability will be vanishingly small, there seem to be a lot more (and, given human nature, unfortunately likely / contributing more probability mass) scenarios in which your goal wouldn't be achieved - or would be achieved in some undesirable fashion - than the "here you go, welcome back to a society you'd like to live in" variety.
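
(The conjunctive structure is easy to see in a sketch; every stage probability below is a placeholder rather than an estimate anyone has defended. The point is only that even generous terms multiply down fast.)

```python
from math import prod

# Revival requires every stage to go right (placeholder probabilities).
stages = {
    "basic tech preserves the information":  0.8,
    "organization/facility survives intact": 0.6,
    "someone bothers to revive you":         0.7,
    "the result is a you you'd endorse":     0.7,
}
print(f"{prod(stages.values()):.2f}")  # ~0.24 even with these generous terms
```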

That being said, I'll take the small chance over nothing. Hopefully some decent options will be established near my place of residence, soon.

comment by EHeller · 2013-03-20T06:59:28.727Z · LW(p) · GW(p)

I actually am signed up for cryonics.

My issue with the basic tech is that liquid nitrogen, while a cheap storage method, is too cold to avoid fracturing. Experience with imaging systems leads me to believe that fractures will interfere with reconstructions of the brain's geometry, and cryoprotectants obviously destroy chemical information.

Now, it seems likely to me that at some point in the future the fracturing problem can be solved, or at least mitigated, by intermediate-temperature storage and careful cooling processes, but that won't fix the bodies frozen today. So while I don't doubt that (barring large, unquantifiable neuroscience-related uncertainty) cryonics may improve to the point where the tech is likely to work (or be supplanted by plastination methods, etc.), it is not there now, and what matters for people frozen today is the state of cryonics today.

Saying there are no fundamental scientific barriers to the tech working is not the same thing as saying the hard work of engineering has been done and the tech currently works.

Edit: I also have a weak prior that the chemical information in the brain is important, but it is weak.

Replies from: Eliezer_Yudkowsky, Nisan, shminux
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-20T23:08:42.498Z · LW(p) · GW(p)

Experience with imaging systems leads me to believe that fractures will interfere with reconstructions of the brain's geometry, and cryoprotectants obviously destroy chemical information.

Since this is the key point of neuroscience, do you want to expand on it? What experience with imaging leads you to believe that fractures (of incompletely vitrified cells) will implement many-to-one mappings of molecular start states onto molecular end states in a way that overlaps between functionally relevant brain states? What chemical information is obviously destroyed and is it a type that could plausibly play a role in long-term memory?

Replies from: shminux, EHeller
comment by shminux · 2013-03-21T20:52:59.377Z · LW(p) · GW(p)

"many-to-one mappings of molecular start states onto molecular end states in a way that overlaps between functionally relevant brain states" is probably too restrictive. I would use "possibly functionally different, but subjectively acceptably close brain states".

comment by EHeller · 2013-03-21T08:04:26.801Z · LW(p) · GW(p)

The cryoprotectants are toxic; they will damage proteins (misfolds, etc.) and distort relative concentrations throughout the cell. This information is irretrievable once the damage is done. This is what I referred to when I said obviously destroyed chemical information. It is our hope that such information is unimportant, but my (as I said above, fairly uncertain) prior would be that the synaptic protein structures are probably important. My prior is so weak because I am not an expert on biochemistry or neuroscience.

As to the physical fracture, very detailed imaging would have to be done on either side of the fracture in order to match the sides back up, and this is related to a problem I do have some experience with. I'm familiar with attempts to use synchrotron radiation to image protein structures, which have a percolation problem: you are damaging what you are trying to image while you image it. If you have lots of copies of what you want to image, this is a solvable problem, but with only one original you are going to lose information.

Edit: in regards to the first point, kalla724 makes the same point with much more relevant expertise in this thread http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryogenics/ His experience working with synapses leads him to a much stronger estimate that cryoprotectants cause irreversible damage. I may strengthen my prior a bit.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-21T08:41:51.049Z · LW(p) · GW(p)

This information is irretrievable once the damage is done.

How do you know? I'm not asking for some burden of infinite proof where you have to prove that the info can't be stored elsewhere. I am asking whether you know that widely functionally different start states are being mapped onto an overlapping spread of molecularly identical end states, and if so, how. E.g., "denaturing either conformation A or conformation B will both result in denatured conformation C and the A-vs.-B distinction is just a little twist of this spatially isolated thingy here so you wouldn't expect it to be echoed in any exact nearby positions of blah" or something.

Replies from: EHeller, Strange7
comment by EHeller · 2013-03-21T15:56:18.900Z · LW(p) · GW(p)

So what I'm thinking about is something like this: imagine an enzyme, present at two sites on the membrane and regulated by an inhibitor. Now a toxin comes along and breaks the weak bonds to the inhibitor, stripping them off. Information about which site was inhibited is gone.

If the inhibitor has some further chemical involvement with the toxin, or if the toxin pops the enzymes off the membrane altogether, you have more problems. You might not know how many enzymes were inhibited, which sites were occupied, or which were inhibited.

I could also imagine more exotic cases where a toxin induces a folding change in one protein, which allows it to accept a regulator molecule meant for a different protein. Now to figure out our system we'd need to scan at significantly smaller scales to try to discern those regulator molecules. I don't have the expertise to estimate if this is likely.

To reiterate, I am not by any means a neuroscientist (my training is physics and my work is statistics), so it's possible this sort of information just isn't that important, but my suspicion is that it is.

Edited to fix an embarrassing except/accept mistake.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-21T22:08:36.483Z · LW(p) · GW(p)

(Scanning at significantly smaller scales should always be assumed to be fine as long as end states are distinguishable up to thermal noise!)

So what I'm thinking about is something like this: imagine an enzyme, present at two sites on the membrane and regulated by an inhibitor. Now a toxin comes along and breaks the weak bonds to the inhibitor, stripping them off. Information about which site was inhibited is gone.

Okay, I agree that if this takes place at a temperature where molecules are still diffusing at a rapid pace and there's no molecular sign of the broken bond at the bonding site, then it sounds like info could be permanently destroyed in this way. Now why would you think this was likely with vitrification solutions currently used? Is there an intuition here about ranges of chemical interaction so wide that many interactions are likely to occur which break such bonds and at least one such interaction is likely to destroy functionally critical non-duplicated info? If so, should we toss out vitrification and go back to dropping the head in liquid nitrogen because shear damage from ice freezing will produce fewer many-to-one mappings than introducing a foreign chemical into the brain? I express some surprise because if destructive chemical interactions were that common with each new chemical introduced then the problem of having a whole cell not self-destruct should be computationally unsolvable for natural selection, unless the chemicals used in vitrification are unusually bad somehow.

Replies from: EHeller
comment by EHeller · 2013-03-21T23:20:29.117Z · LW(p) · GW(p)

(Scanning at significantly smaller scales should always be assumed to be fine as long as end states are distinguishable up to thermal noise!)

This has some problems: fundamentally, the length scale probed is inversely proportional to the energy required, which means increasing the resolution increases the damage done by scanning. You start getting into issues of 'how much of this can I scan before I've totally destroyed it?', which is a sort of percolation problem (how many amino acids can I randomly knock out of a protein before it collapses or rebonds into a different protein?), so scanning at resolutions with energy equivalent above peptide bonds is very problematic. Assuming a peptide bond strength of a couple of kJ/mol, I get lower-limit length scales of a few microns (this is rough, and I'd appreciate it if someone would double-check).
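
(Double-checking that arithmetic with the photon relation λ = hc/E, taking "a couple kJ/mol" at face value, gives tens of microns rather than a few; and a covalent peptide bond is usually quoted at hundreds of kJ/mol, which shrinks the scale to hundreds of nanometres. The answer is quite sensitive to which bond energy one thinks matters.)

```python
# Wavelength whose photon energy equals a given molar bond energy.
h  = 6.626e-34   # Planck constant, J*s
c  = 2.998e8     # speed of light, m/s
NA = 6.022e23    # Avogadro's number, 1/mol

def length_scale(kj_per_mol):
    energy_per_bond = kj_per_mol * 1e3 / NA  # joules per bond
    return h * c / energy_per_bond           # metres

print(f"{length_scale(2):.1e} m")    # ~6.0e-05 m for 2 kJ/mol
print(f"{length_scale(300):.1e} m")  # ~4.0e-07 m for a ~300 kJ/mol covalent bond
```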

Now why would you think this was likely with vitrification solutions currently used?

The vitrification solutions currently used are known to be toxic, and are used at very high concentrations, so some of this sort of damage will occur. I don't know enough biochemistry to say anything else with any kind of definiteness, but on the previous thread kalla724 seemed to have some domain-specific knowledge and thought the problem would be severe.

If so, should we toss out vitrification and go back to dropping the head in liquid nitrogen because shear damage from ice freezing will produce fewer many-to-one mappings than introducing a foreign chemical into the brain?

No, not at all. The vitrification damage is orders of magnitude less. Destroying a few multi-unit proteins and removing some inhibitors seems much better than totally destroying the cell-membrane (which has many of the same "which sites were these guys attached to?" problems).

I express some surprise because if destructive chemical interactions were that common with each new chemical introduced then the problem of having a whole cell not self-destruct should be computationally unsolvable for natural selection

It's my (limited) understanding that the cell membrane exists largely to solve this problem. Also, introducing tiny bits of toxins here and there causes small amounts of damage, but the cell could probably survive; putting the cell in a toxic environment will inevitably kill it. The concentration matters. But here I'm stepping way outside anything I know about.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T00:10:40.951Z · LW(p) · GW(p)

This has some problems: fundamentally, the length scale probed is inversely proportional to the energy required, which means increasing the resolution increases the damage done by scanning.

We seem to have very different assumptions here. I am assuming you can get up to the molecule and gently wave a tiny molecular probe in its direction, if required. I am not assuming that you are trying to use high-energy photons to photograph it.

You also still seem to be using a lot of functional-damage words like "destroying", which is why I don't trust your or kalla724's intuitions relative to the intuitions of other scientists with domain knowledge of neuroscience who use the language of information theory when assessing cryonic feasibility. If somebody is thinking in terms of functional damage (it doesn't restart when you reboot it; oh my gosh, we changed the conformation, look at that damage, it can't play its functional role in the cell anymore!) then their intuitions don't bear very well on the real question of many-to-one mapping.

What does the vitrification solution actually do that's supposed to irreversibly map things; does anyone actually know? The fact that a cell can survive with a membrane at all, considering the many different molecules inside it, implies that most molecules don't functionally damage most other molecules most of the time, never mind performing irreversible mappings on them. But then this is reasoning over molecules that may be of a different type than vitrificants. At the opposite extreme, I'd expect introducing hydrochloric acid into the brain to be quite destructive.

Replies from: EHeller
comment by EHeller · 2013-03-22T04:30:22.722Z · LW(p) · GW(p)

We seem to have very different assumptions here. I am assuming you can get up to the molecule and gently wave a tiny molecular probe in its direction, if required. I am not assuming that you are trying to use high-energy photons to photograph it.

How are you imagining this works? I'm aware of chemistry that would allow you to say there are X whatever proteins, and Y such-and-such enzymes, etc., but I don't think such chemical processes are good enough for the sort of geometric reconstruction needed. It's not obvious to me that a molecular probe of the type you imagine can exist. What exactly is it measuring, and how is it sensitive to it? Is it some sort of enzyme? Do we thaw the brain and then introduce these probes in solution? Do we somehow pulp the cell and run the constituents through a nanopore-type thing and try to measure charge?

the intuitions of other scientists with domain knowledge of neuroscience who use the language of information theory when assessing cryonic feasibility.

I would love to be convinced I am overly pessimistic, and pointing me in the direction of biochemists/neuroscientists/biophysicists who disagree with me would be welcome. I only know a few biophysicists and they are generally more pessimistic than I am.

What does the vitrification solution actually do that's supposed to irreversibly map things, does anyone actually know?

I know ethylene glycol is cytotoxic, and so interacts with membrane proteins, but I don't know the mechanism.

Replies from: Eliezer_Yudkowsky, orthonormal
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T05:48:28.327Z · LW(p) · GW(p)

I'll quickly point you at Drexler's Nanosystems and Freitas's Nanomedicine, though they're rather long and technical reads. But we are visualizing molecularly specified machines, and 'hell no' to thawing first or pulping the cell. Seriously, this kind of background assumption is why I have to ask a lot of questions instead of just taking this sort of skeptical intuition at face value.

But rather than having to read through either of those sources, I would ask you to just take on the assumption that two molecularly distinct (up to thermal noise) configurations will somehow be distinguishable by sufficiently advanced technology, and describe what your intuitions (and reasons) would be taking that premise at face value. It's not your job to be a physicist or to try to describe the theoretical limits of future technology, except of course that two systems physically identical up to thermal noise can be assumed to be technologically indistinguishable, and since thermal noise is much larger than exact quark positions it will not be possible to read off any subtle neural info by looking at exact quark positions (now that might be permanently impossible), etc. Aside from that I would encourage you to think in terms of doing cryptography to a vitrified brain rather than medicine. Don't ask whether ethylene glycol is toxic, ask whether it is a secure hard drive erasure mechanism that can obscure the contents of the brain from a powerful and intelligent adversary reading off the exact molecular positions in order to obtain tiny hints.

Checking over the open letter from scientists in support of cryonics to remember who has an explicitly neuroscience background, I am reminded that good old Anders Sandberg is wearing a doctorate in computational neuroscience from Stockholm, so I'll go ahead and name him.

Replies from: EHeller
comment by EHeller · 2013-03-22T06:50:56.326Z · LW(p) · GW(p)

Do you have a page number in Nanosystems for a reference to a sensing probe? Also, this is tangential to the main discussion, so I'll take pointers to any reference you have and let this drop.

Don't ask whether ethylene glycol is toxic, ask whether it is a secure hard drive erasure mechanism that can obscure the contents of the brain from a powerful and intelligent adversary reading off the exact molecular positions in order to obtain tiny hints.

I was using cytotoxic in the very specific sense of "interacts and destabilizes the cell membrane," which is doing the sort of operations we agreed in principle can be irreversible. Estimates as to how important this sort of information actually is are impossible for me to make, as I lack the background. What I would love to see is someone with some domain specific knowledge explaining why this isn't an issue.

Replies from: zslastman, Eliezer_Yudkowsky
comment by zslastman · 2013-03-23T07:50:38.978Z · LW(p) · GW(p)

Do you have a page number in Nanosystems for a reference to a sensing probe?

Boom. http://www.nature.com/news/diamond-defects-shrink-mri-to-the-nanoscale-1.12343

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T07:18:40.632Z · LW(p) · GW(p)

I was using cytotoxic in the very specific sense of "interacts and destabilizes the cell membrane," which is doing the sort of operations we agreed in principle can be irreversible.

Sorry, but can you again expand on this? What happens?

Replies from: EHeller
comment by EHeller · 2013-03-23T03:21:25.561Z · LW(p) · GW(p)

So I cracked open a biochem book to avoid wandering off a speculative pier, as we were moving beyond what I readily knew. A simple loss of information presented itself.

Some proteins can have two states, open and closed, which operate on a hydrophobic/hydrophilic balance. In desiccated cells, or if the proteins denature for some other reason, the open/closed state will be lost.

Adding cryoprotectants will change osmotic pressure and the cell will desiccate, and the open/closed state will be lost.

Replies from: Eliezer_Yudkowsky, lsparrish
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-23T04:33:40.017Z · LW(p) · GW(p)

Do we know about any such proteins related to LTM? Can we make predictions about what it takes to erase C. elegans maze memory this way?

Replies from: zslastman, EHeller
comment by zslastman · 2013-03-23T07:37:56.409Z · LW(p) · GW(p)

Would strongly predict that such changes erase only information about short-term activity, not long-term memory. Protein conformation in response to electrochemical/osmotic gradients operates on the timescale of individual firings; it's probably too flimsy to encode stable memories. These should be easy for Skynet to recover.

Higher-level patterns of firings might conceivably store information, but experience with anaesthesia, hypothermia, etc. says they do not. Or we've been killing people and replacing them all this time... a possibility which, thanks to this site, I'm prepared to consider.

Oh, and

Do you have a page number in Nanosystems for a reference to a sensing probe?

Bam.

http://www.nature.com/news/diamond-defects-shrink-mri-to-the-nanoscale-1.12343

comment by EHeller · 2013-03-23T05:32:32.799Z · LW(p) · GW(p)

Here we have moved far past my ability to even speculate.

Replies from: lsparrish
comment by lsparrish · 2013-03-23T16:04:14.253Z · LW(p) · GW(p)

Presumably you can use Google and Wikipedia to fill in the gaps just like the rest of us.

Wikipedia: Long-term memory

Long-term memory, unlike short-term memory, is dependent upon the construction of new proteins.[30] This occurs within the cellular body, and concerns in particular transmitters, receptors, and new synapse pathways that reinforce the communicative strength between neurons. The production of new proteins devoted to synapse reinforcement is triggered after the release of certain signaling substances (such as calcium within hippocampal neurons) in the cell. In the case of hippocampal cells, this release is dependent upon the expulsion of magnesium (a binding molecule) that is expelled after significant and repetitive synaptic signaling. The temporary expulsion of magnesium frees NMDA receptors to release calcium in the cell, a signal that leads to gene transcription and the construction of reinforcing proteins.[31] For more information, see long-term potentiation (LTP).

One of the newly synthesized proteins in LTP is also critical for maintaining long-term memory. This protein is an autonomously active form of the enzyme protein kinase C (PKC), known as PKMζ. PKMζ maintains the activity-dependent enhancement of synaptic strength, and inhibiting PKMζ erases established long-term memories without affecting short-term memory; once the inhibitor is eliminated, the ability to encode and store new long-term memories is restored.

Also, BDNF is important for the persistence of long-term memories.[32]

What I worry about getting confused by when reading the literature is the distinction between forming memories in the first place and actually encoding for memory.

Another critical distinction is that proteins that are needed to prevent degradation of memories over time (which get lots of research and emphasis in the literature due to their role in preventing degenerative diseases) aren't necessarily the ones directly encoding for the memories.

Replies from: EHeller
comment by EHeller · 2013-03-23T17:08:14.930Z · LW(p) · GW(p)

So in subjects I know a lot about, I have dealt with many people who pick up strange notions by filling in the gaps from Google and Wikipedia with a weak foundation. The work required to figure out what specific damage desiccation of a cell could do to the specific proteins you mentioned is beyond my knowledge base, so I leave it to someone more knowledgeable than myself (perhaps you?) to step in.

What open/closed states does PKMζ have? What regulates those open/closed states? Are the open/closed states important to its role (it looks like yes, given the notion of the inhibitor)?

Replies from: lsparrish
comment by lsparrish · 2013-03-25T15:09:43.457Z · LW(p) · GW(p)

Yes, it's important to build a strong foundation before establishing firm opinions. Also, in this particular case, note that science appears to have recently changed its mind based on further evidence, which goes to show that you have to be careful when reading Wikipedia. Apparently the protein in question is not so likely to underlie LTM after all, as transgenic mice lacking it still have LTM (exhibiting maze memory, LTP, etc.). The erasure of memory is linked to zeta inhibitory peptide (ZIP), which incidentally happens in the transgenic mice as well.

ETA: Apparently PKMzeta can be used to restore faded memories erased with ZIP.

comment by lsparrish · 2013-03-23T04:09:28.078Z · LW(p) · GW(p)

Adding cryoprotectants will change osmotic pressure and the cell will desiccate, and the open/closed state will be lost.

Now you know why I'm so keen on the idea of figuring out a way to get something like trehalose into the cell. Neurons tend to lose water rather than import cryoprotectants because of their myelination. Trehalose protects against desiccation by cushioning proteins from hitting each other. Other kinds of solutes that can get past the membrane could balance out the osmotic pressure (that's kind of the point of penetrating cryoprotectants) just as well, but I like trehalose because of its low toxicity.

comment by orthonormal · 2013-03-22T05:07:52.731Z · LW(p) · GW(p)

How are you imagining this works?

Nanotechnology, not chemical analysis. Drexler's Engines of Creation contains a section on the feasibility of repairing molecular damage in this way. Since (if our current understanding holds) nanobots can be functional on a smaller scale than proteins (which are massive chunks held together Lego-style by van der Waals forces), they can be introduced within a cell membrane to probe, report on, and repair damaged proteins.

Replies from: EHeller
comment by EHeller · 2013-03-22T06:34:56.888Z · LW(p) · GW(p)

I have not read Engines of Creation, but I have read his thesis, and I was under the impression most of the proposed systems would only work in vacuum chambers, as they would oxidize extremely rapidly in an environment like the body. Has someone worked around this problem, even in theory?

Also, I've seen molecular assembler designs of various types in various speculative papers, but I've never seen a sensing apparatus. Any references?

Replies from: orthonormal
comment by orthonormal · 2013-03-23T16:44:25.269Z · LW(p) · GW(p)

Has someone worked around this problem, even in theory?

Later in the thread, Eliezer recommended Drexler's follow-up Nanosystems and Freitas' Nanomedicine, neither of which I've read, but I'd be surprised if the latter didn't address this issue. Sorry, but I in particular don't think this is a worrisome objection; it's on the same level as saying that electronics could never be helpful in the real world because water makes them malfunction. You start by showing that something works under ideal conditions, and then you find a way to waterproof it.

Also, I've seen molecular assembler designs of various types in various speculative papers, but I've never seen a sensing apparatus. Any references?

For the convenience of later readers: someone elsewhere in the thread linked an actual physical experimental example.

Replies from: EHeller
comment by EHeller · 2013-03-23T17:42:56.884Z · LW(p) · GW(p)

Freitas' Nanomedicine, neither of which I've read, but I'd be surprised if the latter didn't address this issue.

Not that I have seen, but I'm only partially through it.

For the convenience of later readers: someone elsewhere in the thread linked an actual physical experimental example.

And it's an awesome example from just a few months ago! Pushing NMR from mm resolutions down to nm resolutions is a truly incredible feat!

comment by Strange7 · 2013-03-21T08:52:42.254Z · LW(p) · GW(p)

The end states don't need to be identical, just indistinguishable.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-21T09:16:47.885Z · LW(p) · GW(p)

To presume that states non-identical up to thermal noise are indistinguishable seems to presume either lower technology than the sort of thing I have in mind, or that you know something I don't about how two physical states can be non-identical up to thermal noise and yet indistinguishable.

comment by Nisan · 2013-03-20T14:44:54.107Z · LW(p) · GW(p)

Do you think it's at all likely that the connectome can be recovered after fracturing by "matching up" the structure on either side of the fracture?

comment by shminux · 2013-03-21T20:51:54.963Z · LW(p) · GW(p)

Just to be a cryo advocate here for a moment: if the information of interest is distributed rather than localized, like in a hologram (or any other Fourier-type storage), there is a chance that one can be recovered as a reasonable facsimile of the frozen person, with maybe some hazy memories (corresponding to the lowered resolution of a partial hologram). I'd still rather be revived having trouble remembering someone's face or how to drive a car, or how to solve the Schrödinger equation, than not be revived at all. Even some drastic personality changes would probably be acceptable, given the alternative.
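
(The hologram intuition is easy to demo: delete a chunk of a signal's spectrum and the reconstruction comes back hazy everywhere, rather than missing whole pieces. A toy sketch, assuming numpy is available; the "memory" is just an invented spike train.)

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros(256)
signal[rng.choice(256, size=8, replace=False)] = 1.0  # a toy "memory"

spectrum = np.fft.fft(signal)
spectrum[64:192] = 0                  # damage: lose half the "hologram"
blurry = np.fft.ifft(spectrum).real   # reconstruct from what's left

corr = np.corrcoef(signal, blurry)[0, 1]
print(f"correlation with original: {corr:.2f}")  # ~0.7: hazy, not gone
```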

Replies from: EHeller, TheOtherDave
comment by EHeller · 2013-03-22T04:02:14.809Z · LW(p) · GW(p)

Oh, sure. Or if the sort of information that gets destroyed relates to what-I-am-currently-thinking, or something similar. If I wake up and don't remember the last X minutes, or hours, big deal. But when we have to postulate certain types of storage for something to work, it should lower our probability estimates.

comment by TheOtherDave · 2013-03-21T21:14:35.499Z · LW(p) · GW(p)

Do you have a sense of how drastic a personality change has to be before there's someone else you'd rather be resurrected instead of drastically-changed-shminux?

Replies from: shminux
comment by shminux · 2013-03-21T21:37:58.017Z · LW(p) · GW(p)

Not really. This would require solving the personal identity problem, which is often purported to have been solved or even dissolved, but isn't.

I'm guessing that there is no actual threshold, but a fuzzy fractal boundary which depends heavily on the person in question. While one person may say that if they are unable to remember the faces and names of their children, and no longer able to feel the love that they felt for them, then it's no longer them, and they do not want this new person to replace them, others would be reasonably OK with that. The same applies to the multitude of other memories, feelings, personality traits, mental and physical skills, and whatever else you (generic you) consider essential for your identity.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-22T02:26:20.902Z · LW(p) · GW(p)

Yeah, I share your sense that there is no actual threshold.

It's also not clear to me that individuals have any sort of specifiable boundary for what is or isn't "them", however fuzzy or fractal, so much as they have the habit of describing themselves in various ways.

comment by shminux · 2013-03-21T22:18:55.829Z · LW(p) · GW(p)

Probabilities of basic cryonics tech working are questions of neuroscience, full stop

Is this your true objection? What potential discovery in neuroscience would cause you to abandon cryonics and actively look for other ways to preserve your identity beyond the natural human lifespan? (This is a standard question one asks a believer to determine whether the belief in question is rational -- what evidence would make you stop believing?)

Replies from: Eliezer_Yudkowsky, gwern, wedrifid, orthonormal
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T06:18:36.256Z · LW(p) · GW(p)

Anders Sandberg, who does get the concept of sufficiently advanced technology, posts saying, "Shit, turns out LTM seems to depend really heavily on whether protein blah has conformation A or B, and the vitrification solution denatures it to C, and it's spatially isolated, so there's no way we're getting the info back; it's possible something unknown embodies redundant information, but this seems really ubiquitous and basic, so the default assumption is that everyone vitrified is dead". Although, hm, in this case I'd just be like, "Okay, back to chopping off the head and dropping it in a bucket of liquid nitrogen, don't use that particular vitrification solution". I can't think offhand of a simple discovery which would imply literally giving up on cryonics in the sense of "Just give up, you can't figure out how to freeze people, ever." I can certainly think of bad news for particular techniques, though.

Replies from: shminux
comment by shminux · 2013-03-22T15:54:55.905Z · LW(p) · GW(p)

I can't think offhand of a simple discovery which would imply literally giving up on cryonics

OK. More instrumentally, then. What evidence would make you stop paying the cryo insurance premiums with CI as the beneficiary and start looking for alternatives?

Replies from: Eliezer_Yudkowsky, Kawoomba
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-22T22:25:44.686Z · LW(p) · GW(p)

Anders publishes that, CI announces they intend to go on vitrifying patients anyway, Alcor offers a chop-off-your-head-and-dunk-in-liquid-nitro solution. Not super plausible but it's off the top of my head.

Replies from: shminux
comment by shminux · 2013-03-23T04:27:24.885Z · LW(p) · GW(p)

No pun intended?

comment by Kawoomba · 2013-03-22T16:57:43.212Z · LW(p) · GW(p)

Can you name currently available alternatives to cryonics which accomplish a similar goal?

Apologies, misinterpreted the question.

Replies from: shminux
comment by shminux · 2013-03-22T17:09:16.916Z · LW(p) · GW(p)

Not really, but yours is an uncharitable interpretation of my question, which is to evaluate the utility of spending some $100/mo on cryo vs spending it on something (anything) else, not "I have this dedicated $100/mo lying around which I can only spend toward my personal future revival".

comment by gwern · 2013-03-22T17:08:28.762Z · LW(p) · GW(p)

Personally, I would be very impressed if anyone could demonstrate memory loss in a cryopreserved and then revived organism, like a bunch of C. elegans losing their maze-running memories. They're very simple, robust organisms, it's a large crude memory, the vitrification process ought to work far better on them than a human brain, and if their memories can't survive, that'd be huge evidence against anything sensible coming out of vitrified human brains no matter how much nanotech scanning is done (and needless to say, such scanning or emulation methods can and will be tested on a tiny worm with a small fixed set of neurons long before they can be used on anything approaching a human brain). It says a lot about how poorly funded cryonics research is that no one has done this or something similar as far as I know.

Replies from: shminux, Eliezer_Yudkowsky
comment by shminux · 2013-03-22T23:24:27.902Z · LW(p) · GW(p)

Hmm, I wonder how much has been done on figuring out the memory storage in this organism. Like, if you knock out a few neurons or maybe synapses, how much does it forget?

Replies from: gwern
comment by gwern · 2013-03-23T02:25:35.805Z · LW(p) · GW(p)

Since it's C. elegans, I assume the answer is 'a ton of work has been done', but I'm too tired right now to go look or read more medical/biological papers.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-23T00:10:57.936Z · LW(p) · GW(p)

I'm not totally sure I'd call this sufficient evidence, since functional damage != many-to-one mapping, but it would shave some points off the probability for existing tech and be a pointer to look for the exact mode of functional memory loss.

comment by wedrifid · 2013-03-22T01:33:47.553Z · LW(p) · GW(p)

and actively look for other ways to preserve your identity beyond the natural human lifespan?

He's kind of been working on that for a while now.

(I suppose that works either as "subvert the natural human lifespan entirely through creating FAI" or "preserve his identity for time immemorial in the form of 'Harry-Stu' fanfiction" depending on how cynical one is feeling.)

comment by orthonormal · 2013-03-22T05:23:50.810Z · LW(p) · GW(p)

In my case, to name one contingency: if the NEMALOAD Project finds that analysis of relatively large cellular structures doesn't suffice to predict neuronal activity, and concludes that the activity of individual molecules is essential to the process, then I'd become significantly more worried about EHeller's objection and redo the cost-benefit calculation I did before signing up for cryonics. (It came out in favor, using my best-guess probability of success between 1 and 5 percent; but it wouldn't have trumped the cost at, say, 0.1%.)

To name another: if the BPF shows that cryopreservation makes a hash of synaptic connections, I'd explicitly re-do the cost-benefit calculation as well.
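
(For concreteness, the shape of that calculation. Every number below is invented purely to show where the sign flips, not anyone's actual figures.)

```python
# Sign up iff p(success) * value_of_revival exceeds the cost.
def net_benefit(p_success, value=1e7, cost=50_000):
    """Expected value of signing up, in made-up 'utility dollars'."""
    return p_success * value - cost

for p in (0.001, 0.01, 0.05):
    print(f"p = {p:>5}: net = {net_benefit(p):>+10,.0f}")
# p = 0.001: net =    -40,000  (doesn't trump the cost)
# p =  0.01: net =    +50,000
# p =  0.05: net =   +450,000
```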

comment by Dreaded_Anomaly · 2013-03-20T13:52:13.120Z · LW(p) · GW(p)

Have you seen the comments by kalla724 in this thread?

Edit: There's some further discussion here.

comment by Error · 2013-03-20T03:54:22.990Z · LW(p) · GW(p)

Probabilities of basic cryonics tech working are questions of neuroscience, full stop; if you know the basic tech has a tiny probability of working, you must know something about current vitrification solutions or the operation of long-term memory which I do not.

It seems to me that they're also questions of engineering feasibility. A thing can be provably possible and yet unfeasibly difficult to implement in reality. Consider the difference between, say, adding salt to water and getting it out again. What if the difference in cost and engineering difficulty between vitrifying and successfully de-vitrifying is similar? What if it turns out to be ten orders of magnitude greater?

I think the most likely failure condition for cryonics tech (as opposed to cryonics organizations) isn't going to be that revival turns out to be impossible, but that revival turns out to be so unbelievably hard or expensive that it's never feasible to actually do. If it's physically and information-theoretically allowed to revive a person, but technologically impractical (even with Sufficiently Advanced Science), then its theoretical possibility doesn't help the dead much.

I have the same concern about unbounded life extension, actually; but I find success in that area more probable for some reason.

(personal disclosure: I'm not signed up for cryonics, but I don't give funny looks to people who are. Their screws seem a bit loose but they're threaded in the right direction. That's more than one can say for most of the world.)

Replies from: Izeinwinter
comment by Izeinwinter · 2013-03-31T19:01:55.692Z · LW(p) · GW(p)

Getting aging to stop looks positively trivial in comparison - the average lifespan of different animals already varies /way/ too much for there to be any biological law underlying it. So turning senescence off altogether should be possible. I suspect evolution has not already done so because overly long-lived creatures in the wild were on average bad news for their bloodlines - banging their granddaughters and occupying turf with the cunning of the old. Uhm. Now I have an itch to set up a simulation and run it... Just-so stories are not proof. Math is proof.

comment by itaibn0 · 2013-02-23T17:37:32.426Z · LW(p) · GW(p)

My name is Itai Bar-Natan. I have been lurking here for a long time; more recently I started posting some things, but only now do I formally introduce myself.

I am in grade 11, and I began reading Less Wrong in grade 8 (introduced by Scott Aaronson's blog). I am a former math prodigy, and am currently taking one graduate-level course in math. This is the first time I am learning math under the school system (although not the first time I have attended math classes under the school system). Before that, I would learn from my parents, who are both mathematicians, or (later on) from books and internet articles.

Heedless of Feynman, I believe I understand quantum mechanics.

One weakness I am working to improve on is the inability to write in large quantities.

I have a blog here: http://itaibn.wordpress.com/

I consider Less Wrong a fun time-waster and a community which is relatively sane.

Replies from: BerryPick6, wedrifid
comment by BerryPick6 · 2013-02-23T17:58:20.812Z · LW(p) · GW(p)

Are you, by any chance, related to Dror?

Replies from: itaibn0
comment by itaibn0 · 2013-02-23T18:12:01.330Z · LW(p) · GW(p)

Yes, I am his son.

Replies from: BerryPick6
comment by BerryPick6 · 2013-02-23T19:25:55.206Z · LW(p) · GW(p)

To my eternal embarrassment, I was, as a youth, quite taken in by "The Bible Code." Very taken in, actually. That ended suddenly when someone directed me to the material written by your father and McKay (I think?). Small world, I guess? :)

comment by wedrifid · 2013-02-23T18:50:14.022Z · LW(p) · GW(p)

Headless of Feynman, I believe I understand quantum mechanics.

Give her to Headless Feyn-man!

Replies from: itaibn0
comment by itaibn0 · 2013-02-23T18:59:30.547Z · LW(p) · GW(p)

Typo fixed.

comment by olibain · 2013-02-20T20:34:42.592Z · LW(p) · GW(p)

I'm Robby Oliphant. I started a few months ago reading HP:MoR, which led me to the Sequences, which led me here about two weeks ago. So far I have read comments and discussions solely as a spectator. But finally, after developing my understanding and beginning on the path set forth by the sequences, I remain silent no more.

I am fresh out of high school, excited about life, and plan to become a teacher, eventually. My short-term plans involve going out and doing missionary work for my church for the next two years. When I came head-on against the problem of being a rationalist and a missionary for a theology, I took a step back and had a crisis of belief (not the first time), but this time I followed the prescribed method and came to a modified conclusion, though I still find it rational and advantageous to serve my two-year mission.

I find some of this difficult, some of it intuitive, and some of it neither difficult nor intuitive, which is extremely frustrating: how can something appear simple but defy my efforts to work it intuitively? I will continue to work at it because rationality seems to be praiseworthy and useful. I hope to find the best evidence about theology here. I don't mean evidence for or against, just the evidence about the subject.

Replies from: olibain, Desrtopa, shminux, Epiphany, Bugmaster, Ford, Nisan
comment by olibain · 2013-02-21T04:17:23.293Z · LW(p) · GW(p)

Hahaha! I find it heartening that that is your response to me wanting to be a teacher. I am quite aware that the system is broken. My personal way of explaining it: the school system works for what it was made to work for - avoiding responsibility for a failed product.

  • The parents are not responsible; the school taught their kids.

  • The students are not socially responsible; everything was compulsory, so they had no choice to make.

  • Teachers are not to blame; they teach what they are told to teach and have the autonomy of a pre-AI computer intelligence.

  • The administrators are not to blame; they are not the students' parents or teachers.

  • The faceless, nameless committees that set the curriculum are not responsible; they formed and then disbanded after setting forth the unavoidably terrible standards for all students of an arbitrary age everywhere.

So the product fails, but everyone did their best. No nails stick out, no one gets hammered.

I have high dreams of being the educator who takes down public education. If a teacher comes up with a new way of teaching or an important thing to teach, he should be able to go to class the next day and test it. I have a hope of professional teachers: either trusted with the autonomy of being professionals, or actual professionals in their subject, teaching only those who want to learn.

I am also thankful for the literature on Mormons from Desrtopa, Ford, and Nisan. I enjoyed the Mormonism organizational post because I have also noticed how well the church runs. It is one reason I stay a Latter-Day Saint in this time of atheism going mainstream. The church is winning, it is well organized, service and family-oriented, and supports me as I study rationality and education. I can give examples, but I will leave my deeper insights for my future posts; I feel I am well introduced for now.

Replies from: Bugmaster, whowhowho, OrphanWilde
comment by Bugmaster · 2013-02-21T05:45:13.627Z · LW(p) · GW(p)

The church is winning, it is well organized, service and family-oriented, and supports me as I study rationality and education.

I would be quite interested to see a more detailed post regarding that last part. Of course, I am just some random guy on the Internet, but still :-)

Replies from: None
comment by [deleted] · 2013-03-08T05:37:05.168Z · LW(p) · GW(p)

I'd like to know how they [=consequentialist deists stuck in religions with financial obligations] justify tithing so much of their income to an ineffective charity.

comment by whowhowho · 2013-02-21T10:36:22.158Z · LW(p) · GW(p)

The education system in the US, or the education system everywhere?

Replies from: MugaSofer
comment by MugaSofer · 2013-02-21T10:54:35.514Z · LW(p) · GW(p)

Can't speak for Everywhere, but it's certainly not just the US. Ireland has much the same problem, although I think it's not quite as bad here.

Replies from: army1987
comment by A1987dM (army1987) · 2013-02-21T17:05:43.874Z · LW(p) · GW(p)

In Italy it's also very bad, but public opinion does have a culprit in mind (namely, politics).

comment by OrphanWilde · 2013-03-07T20:08:33.277Z · LW(p) · GW(p)

I love Mormonism.

Possibly because I love Thus Spoke Zarathustra, and Mormonism seems to be at least partially inspired by it.

Replies from: gwern
comment by gwern · 2013-03-08T05:06:08.019Z · LW(p) · GW(p)

That seems rather unlikely, inasmuch as the first English translation was in 1896 - by which point Smith had preached, died, the Mormons evacuated to Utah, begun proselytizing overseas and baptism of the dead, set up a successful state, disavowed polygamy, etc.

Replies from: OrphanWilde
comment by OrphanWilde · 2013-03-08T14:50:14.350Z · LW(p) · GW(p)

There's also the fact that it wasn't even written until after Joseph Smith had died, translation not even being an issue. (In point of fact, Nietzsche was born the same year that Joseph Smith died.)

Nonetheless! I am convinced a time traveler gave Joseph Smith the book.

comment by Desrtopa · 2013-02-20T22:48:23.510Z · LW(p) · GW(p)

I don't think you'll find much discussion of theology here, since in these parts religion is generally treated as an open-and-shut case. The archives of Luke Muehlhauser's blog, Common Sense Atheism, are probably a much more abundant resource for rational analysis of theology; it documents his (fairly extensive) research into theological matters stemming from his own crisis of faith, starting before he became an atheist.

Obviously, the name of the site is rather a giveaway as to the ultimate conclusion that he drew (I would have named it differently in his place), and the foregone conclusion might be a bit mindkilling, but I think the contents will probably be a fair approximation of the position of most of the community here on theological matters, made more explicit than they generally are on Less Wrong.

comment by shminux · 2013-02-20T21:01:33.607Z · LW(p) · GW(p)

I took a step back and had a crisis of belief, not the first time, but this time I followed the prescribed method and came to a modified conclusion, though I still find it rational and advantageous to serve my 2 year mission.

I would love to hear more details, both about the process and about the conclusion, if you are brave/foolish enough to share.

comment by Epiphany · 2013-02-21T01:24:33.745Z · LW(p) · GW(p)

I appreciate your altruistic spirit and your goal of gathering objective evidence regarding your religion. I'm glad to see you beginning on the path of improving your rationality! If you haven't encountered the term "effective altruist" yet or have not yet investigated the effective altruist organizations, I very much encourage you to investigate them! As a fellow altruistic rationalist, I can say that they've been inspiring to me and hope they're inspiring to you as well.

I feel it necessary to inform you of something important yet unfortunate about your goal of becoming a teacher. I'm not happy to have to tell you this, but I am quite glad that somebody told you about it at the beginning of your adulthood:

The school system is broken in a serious way. The problem is with the fundamental system, so it's not something teachers can compensate for.

If you wish to investigate alternatives to becoming a standard school teacher, I would highly recommend considering becoming involved with effective altruists. An organization like THINK or 80,000 Hours may be very helpful to you in determining what sorts of effective and altruistic things you might do with your skills. THINK does training for effective altruists and helps them figure out what to do with themselves. 80,000 Hours helps people figure out how to make the most altruistic contribution with careers they already have.

For information regarding religion, I recommend the blog of a former Christian (Luke Muehlhauser) as an addition to your reading list. That is here: Common Sense Atheism. I recommend this in particular because he completed the process you've started - the process of reviewing Christian beliefs - so Luke's writing may be able to save you significant time and provide you with information you may not encounter in other sources. Also, because he began as a Christian, I'm guessing that his reasoning was not unnecessarily harsh toward Christian ideas, as it might have been otherwise. The sampling of his blog that I've read is of good quality. He's a rationalist, so that might be part of why.

Replies from: Bugmaster, MugaSofer
comment by Bugmaster · 2013-02-21T02:59:15.451Z · LW(p) · GW(p)

The school system is broken in a serious way. The problem is with the fundamental system, so it's not something teachers can compensate for.

See also Lockhart's Lament (PDF link). That said, in my own case, competent teachers (such as Lockhart appears to be) did indeed make a difference. Though my IQ is much closer to the population average than that of the average LWer, so maybe my anecdotal evidence does not apply (not that it ever does, what with being anecdotal and all).

Replies from: Epiphany
comment by Epiphany · 2013-02-21T03:18:26.491Z · LW(p) · GW(p)

That said, in my own case, competent teachers (such as Lockhart appears to be) did indeed make a difference.

I can't fathom that you'd say that if you had read Gatto's speech.

I am very interested in the reaction you have to the speech (it's called "The Seven-Lesson Schoolteacher," and it's at the beginning of chapter 1).

Would you indulge me?

Also:

Failing to teach reasoning skills in school is a crime against humanity.

Replies from: Bugmaster, Bugmaster, MugaSofer
comment by Bugmaster · 2013-02-21T05:14:16.285Z · LW(p) · GW(p)

I have, in fact, read the Speech before, quite some time ago. My point is that outstanding teachers can make a big positive difference in the students' lives (at least, that was the case for me), largely by deliberately avoiding some or all of the anti-patterns that Gatto lists in his Speech. We were also taught the basics of critical thinking in an English class (of all places), though this could've been a fluke (or, once again, a teacher's personal initiative).

I should also point out that these anti-patterns are not ubiquitous. I was lucky enough to attend a school in another country for a few of my teenage years (a long, long time ago). During a typical week, we'd learn how to solve equations in Math class, apply these skills to exercises in Statistics, stage an experiment and record the results in Physics, then program in the statistics formulae and run them on our experimental results in Informatics (a.k.a. Computer Science). Ideas tend to make more sense when connections between them are revealed.

I haven't seen anything like this in US-ian education, but I wouldn't be surprised to find out that some school somewhere in the US is employing such an approach.

Edited to add:

Failing to teach reasoning skills in school is a crime against humanity.

I share your frustration, but there's no need to overdramatize.

comment by Bugmaster · 2013-02-21T05:21:11.078Z · LW(p) · GW(p)

I should also point out that, while Gatto makes some good points, his overall thesis is hopelessly lost in all the hyperbole, melodrama, and outright conspiracy theorizing. He does his own ideas a disservice by presenting them the way he does. For example, I highly doubt that mental illnesses, television broadcasts, and restaurants would all magically disappear (as Gatto claims on pg. 8) if only we could teach our children some critical thinking skills.

Replies from: Epiphany
comment by Epiphany · 2013-02-22T03:56:19.140Z · LW(p) · GW(p)

Connection between education and sanity

Check out Edward de Bono's CoRT thinking system. His research (I haven't thoroughly reviewed it, just reciting from memory) shows that increasing people's lateral thinking / creativity decreases things like their suicide rate. If you have been taught to see more options, you're less likely to choose to behave desperately and destructively. If you're able to reason things out, you're less likely to feel stuck and need help. If you're able to analyze, you're less likely to believe something batty. Would mental illness completely disappear? I don't think so. Sometimes conditions are mostly due to genes or health issues. But there are connections, definitely, between one's ability to think and one's sanity.

If you don't agree with this, then do you also criticize Eliezer's method of raising the sanity waterline by encouraging people to refine their rationality?

Connection between education and indulging in passive entertainment

As for television, I think he's got a point. When I was 17, I realized that I was spending most of my free time watching someone else's life. I wasn't spending my time making my own life.

If the school system makes you dependent like he says (and I believe it does), then you'll be a heck of a lot less likely to take initiative and do something. If your self-confidence depends on other experts' approval, it becomes hard to take a risk and go do your own project. If your creativity and analytical abilities are reduced, so too will be your ability to imagine projects for yourself to do and guide yourself while doing them. If your love for learning and working is destroyed, why would you want to do self-directed projects in the first place? And if you aren't doing your own projects your own way, that sucks a lot of the life and pleasure out of them.

Fortunately for me, a significant amount of my creativity, my analytical abilities, and my passion for learning and working survived school. That gave me the perspective I needed to make the choice between living an idle life of passive entertainment and making my own life. Making my own life is more engaging than passive entertainment because it's tailored to my interests exactly, more fulfilling than accomplishing nothing could ever be, more exciting than fantasy can be because it is real, and more beneficial and rewarding in both emotional and practical ways, because learning and working open up new social and career opportunities.

If the choice you are making is between "watch TV" and "not watch TV", you're probably going to watch it.

But if you have a busy mind full of ideas and thoughts and passions, that's not the choice you're perceiving. You've got the choice between "watch characters' lives" and "make my own life awesome and watch that". If you felt strongly that you could make your own life awesome, is there anything that could convince you to watch TV instead?

Gatto doesn't do a good job of giving you perspective so you can understand his point of view here. He doesn't explain how incredible it can feel to have a mind that is on, how engaging it can be to learn something you're interested in, how satisfying it is to do your own damn project your own damn way and see it actually work! He doesn't do a good job of helping you imagine how much more motivation you would experience if your creativity and analytical abilities were jacked up way beyond what they are. If your life was packed full of thoughts and ideas and self-confidence, could you spend half your free time in front of a show? If you had the kind of motivation that comes from feeling like you're in the process of building an amazing life, would you be able to still your mind and focus on sitcoms?

I wouldn't. I can't. It is as if I am possessed by this supernova-sized drive to DO THINGS.

Restaurants and education

I honestly don't know anything about whether these are connected. My best guess is that Gatto loves to cook, and found not being taught how to cook to be a rather large obstacle in the way of enjoying it.

Replies from: Bugmaster
comment by Bugmaster · 2013-02-22T06:35:21.073Z · LW(p) · GW(p)

I mostly agree with the things you say, but these are not the things that Gatto says. Your position is a great deal milder than his.

In a single sentence, he claims that if only we could set up our schools the way he wants them to be set up, then social services would utterly disappear, the number of "psychic invalids" would drop to zero, "commercial entertainment of all sorts" would "vanish", and restaurants would be "drastically down-sized".

This is going beyond hyperbole; this borders on drastic ignorance.

For example, not all mental illnesses are caused by a lack of gumption. Many, such as clinical depression and schizophrenia, are genetic in nature, and will strike their victims regardless of how awesomely rational they are. Others, such as PTSD, are caused by psychological trauma and would fell even the mighty Gatto, should he be unfortunate enough to experience it.

While it's true that most of the "commercial entertainment of all sorts" is junk, some of it is art; we know this because a lot of it has survived since ancient times, despite the proclamations of people who thought just like Gatto (only referring to oil paintings, phonograph records, and plain old-fashioned writing instead of electronic media). As an English teacher, it seems like Gatto should know this.

And what's his beef with restaurants, anyway? That's just... weird.

If you had the kind of motivation it causes to feel like you're in the process of building an amazing life, would you be able to still your mind and focus on sitcoms?

Do you feel the same way about fiction books, out of curiosity?

If you don't agree with this, then do you also criticize Eliezer's method of raising the sanity waterline by encouraging people to refine their rationality?

If Eliezer claimed that raising the sanity waterline is the one magic bullet that would usher us into a new Golden Age, as we reclaim the faded glory of our ancestors, then yes, I would disagree with him too. But, AFAIK, he doesn't claim this -- unlike Gatto.

Replies from: wedrifid, Epiphany
comment by wedrifid · 2013-02-22T08:25:25.402Z · LW(p) · GW(p)

For example, not all mental illnesses are caused by a lack of gumption. Many, such as clinical depression and schizophrenia, are genetic in nature, and will strike their victims regardless of how awesomely rational they are.

I'm afraid this account has swung to the opposite extreme---to the extent that it is quite possibly further from the truth and more misleading than Gatto's obvious hyperbole.

Schizophrenia is one of the most genetically determined of the well known mental health problems but even it is heavily dependent on life experiences. In particular, long term exposure to stressful environments or social adversity dramatically increases the risk that someone at risk for developing the condition will in fact do so.

As for clinical depression, the implication that being 'genetic in nature' means that the environment in which an individual spends decades of growth and development is somehow not important is utterly absurd. Genetics is again relevant in determining how vulnerable the individual is, but the social environment is again critical for determining whether problems will arise.

Replies from: Bugmaster
comment by Bugmaster · 2013-02-22T19:53:05.315Z · LW(p) · GW(p)

That's a good point; I did not mean to imply that these mental illnesses are completely unaffected by environmental factors. In addition, in the case of some illnesses such as depression, there are in fact many different causes that can lead to similar symptoms, so the true picture is a lot more complex (and is still not entirely well understood).

However, this is very different from saying something like "schizophrenia is completely environmental", or even "if only people had some basic critical thinking skills, they'd never become depressed", which is how I interpreted Gatto's claims.

For example, even with a relatively low heritability rate, millions of people would still contract schizophrenia every year worldwide -- especially since many of the adverse life experiences that can trigger it are unavoidable. No amount of critical thinking will reduce the number of victims to zero. And that's just one specific disease among many, and we're not even getting into more severe cases such as Down's syndrome. If Gatto thinks otherwise, then he's being hopelessly naive.

comment by Epiphany · 2013-02-22T18:47:58.310Z · LW(p) · GW(p)

I agree that saying "all these problems will disappear" is not the same as saying that "these problems will reduce". I felt the need to explain why the problems would reduce because I wasn't sure you saw the connections.

Others, such as PTSD, are caused by psychological trauma and would fell even the mighty Gatto, should he be unfortunate enough to experience it.

I have to wonder if having a really well-developed intellect might offer some amount of protection against this. Whether Gatto's intellect is sufficiently well-developed for this is another topic.

And what's his beef with restaurants, anyway? That's just... weird.

I don't know. I love not cooking.

Do you feel the same way about fiction books, out of curiosity?

Actually, yes. When I am fully motivated, I can spend all my evenings doing altruistic work for years, reading absolutely no fiction and watching absolutely no TV shows. That level of motivation is where I'm happiest, so I prefer to live that way.

I do occasionally watch movies during those periods, perhaps once a month, because rest is important (and because movies take less time to watch than a book takes to read, but are higher quality than television, assuming you choose them well).

Replies from: Bugmaster
comment by Bugmaster · 2013-02-22T19:39:37.418Z · LW(p) · GW(p)

I felt the need to explain why the problems would reduce because I wasn't sure you saw the connections.

I see the connections, but I do not believe that some of the problems Gatto wants to fix -- f.ex. the existence of television and restaurants -- are even problems at all. Sure, TV has a lot of terrible content, and some restaurants have terrible food, but that's not the same thing as saying that the very concept of these services is hopelessly broken.

I have to wonder if having a really well-developed intellect might offer some amount of protection against this

It probably would, but not to any great extent. I'm not a psychiatrist or a neurobiologist, though, so I could be wildly off the mark. In general, however, I think that Gatto is falling prey to the Dunning–Kruger effect when he talks about mental illness, economics, and many other things for that matter.

For example, the biggest tool in his school-fixing toolbox is the free market; he believes that if only schools could compete against each other with little to no government regulation, their quality would soar. In practice, such scenarios tend to work out... poorly.

When I am fully motivated, I can spend all my evenings doing altruistic work for years, reading absolutely no fiction and watching absolutely no TV shows.

That's fair, and your preferences are consistent. However, many other people see a great deal of value in fiction; some even choose to use it as a vehicle for transmitting their ideas (f.ex. HPMOR). I do admit that, in terms of raw productivity, I cannot justify spending one's time on reading fiction; if a person wanted to live a maximally efficient life, he would probably avoid any kind of entertainment altogether, fiction literature included. That said, many people find the act of reading fiction literature immensely useful (scientists and engineers included), and the same is true for other forms of entertainment such as music. I am fairly convinced that any person who says "entertainment is a waste of time" is committing a fallacy of false generalization.

Replies from: Epiphany
comment by Epiphany · 2013-02-23T05:41:53.805Z · LW(p) · GW(p)

I do not believe that some of the problems Gatto wants to fix -- f.ex. the existence of television and restaurants -- are even problems at all.

The existence of television technology isn't, in my opinion, a problem. Nor is the fact that some shows are low quality. Even if all of them were low quality, I wouldn't necessarily see that as a problem - it would still be a way of relaxing. The problem I see with television is that the average person spends 4 hours a day watching it. (Can't remember where I got that study, sorry.) My problem with that is not that they aren't exercising (they'd still have an hour a day which is plenty of exercise, if they want it) or that they aren't being productive (you can only be so productive before you run out of mental stamina anyway, and the 40 hour work week was designed to use the entirety of the average person's stamina) but that they aren't living.

It could be argued that people need to spend hours every day imagining a fantasy. I was told by an elderly person once that before television, people would sit on a hill and daydream. I've also read that imagining doing a task correctly is more effective at making you better at it than practice. If that's true, daydreaming might be a necessity for maximum effectiveness, and television might provide some kind of similar benefit. So it's possible that putting one's brain into fantasy mode for a few hours a day really is that beneficial.

Spending four hours a day in fantasy mode is not possible for me (I'm too motivated to DO something) and I don't seem to need anywhere near that much daydreaming. I would find it very hard to deal with if I had spent that much of my free time in fantasy. I imagine that if asked whether they would have preferred to watch x number of shows, or spent all of that free time on getting out there and living, most people would probably choose the latter - and that's sad.

he believes that if only schools could compete against each other with little to no government regulation, their quality would soar. In practice, such scenarios tend to work out... poorly.

I think that people would also have to have read the seven lessons speech for the problems he sees to be solved. Maybe eventually things would evolve to the point where schools would not behave this way anymore without them reading it, because it's probably not the most effective way of teaching, but I don't see that change happening quickly without people pressuring schools to make those specific changes.

However, I'm surprised that you say "In practice, such scenarios tend to work out... poorly." Do you mean that the free market doesn't do much to improve quality, or do you just mean that when people want specific changes and expect the free market to implement them, the free market doesn't tend to implement those specific changes?

I'm also very interested in where you got the information to support the idea, either way.

a vehicle for transmitting their ideas

After reading Ayn Rand's The Fountainhead, my feeling was that even though much of the writing was brilliant and enjoyable, I could have gotten the key ideas much faster if she had only published a few lines from one of the last chapters. I'm having the same reaction to the sequences and HPMOR. I enjoy them and recognize the brilliance in the writing, but I find myself doing things like reading lists of biases over and over in order to improve my familiarity and eventually memorize them. I still want to finish the sequences because they're so important to this culture, but what I have prioritized appears to be getting the most important information in as quickly as possible. So, although entertainment is a way of transmitting ideas, I question how efficient it is, and whether it provides enough other learning benefits to outweigh the cost of wrapping all those ideas in so much text. I could walk all the way to Florida, but flying would be faster. People realize this, so if they want to take vacations, they fly. Why, then, do they use entertainment to learn instead of seeking out the most efficient method?

It makes sense from the writer's point of view. I have said before that I was very glad that Eliezer decided to popularize rationality as much as possible, as I had been thinking that somebody needed to do that for a very long time. His writing is interesting and his style is brilliant and his method has worked to attract almost twelve million hits to his site. I think that's great. But the fact that people probably would not have flocked to the site if he had posted an efficient dissemination of cognitive biases and whatnot is curious. Maybe the way I learn is different.

I am fairly convinced that any person who says "entertainment is a waste of time" is committing a fallacy of false generalization.

I think it depends on whether you use "waste of time" to mean "absolutely no benefit whatsoever" or "nowhere near the most efficient way of getting the benefit".

The statement "entertainment is an inefficient way to get ideas compared with other methods" seems true to me.

Replies from: wedrifid, olibain, Bugmaster, Bugmaster, Kawoomba
comment by wedrifid · 2013-02-23T06:36:23.920Z · LW(p) · GW(p)

I enjoy them and recognize the brilliance in the writing, but I find myself doing things like reading lists of biases over and over in order to improve my familiarity and eventually memorize them. I still want to finish the sequences because they're so important to this culture, but what I have prioritized appears to be getting the most important information in as quickly as possible.

I wonder if the author would agree that that is the most important information. I suspect he would not. (So naturally, if your learning goals are different from the teaching goals of the author, then their material will not be optimized for your intentions.)

Replies from: Epiphany
comment by Epiphany · 2013-02-23T09:07:01.753Z · LW(p) · GW(p)

It seems to me that the problem is what intention one has when one begins learning, and whether one can deal with accepting the fact that they're biased, not how one goes about learning about biases. Though maybe Eliezer has put in various protections that get people questioning their intention and sell them on learning with the right intention. I would agree that if it did not occur to a person to use their knowledge of biases to look for their own mistakes, learning them could be really bad, but I do not think that learning a list of biases will all by itself turn me into an argument-wielding brain-dead zombie.

If it makes you feel any better to know this, I've been seeking a checklist of errors against which I can test my ideas.

comment by olibain · 2013-03-25T03:46:48.945Z · LW(p) · GW(p)

Whoo! My post got the most recursion. Do I get a reward? If I get a few more layers, it will be more siding than post.

comment by Bugmaster · 2013-02-23T08:59:48.406Z · LW(p) · GW(p)

However, I'm surprised that you say "In practice, such scenarios tend to work out... poorly." Do you mean that the free market doesn't do much to improve quality...

That is one big reason behind my statement, yes. Currently, it looks like many, if not most, people -- in the Southern states, at least -- want their schools to engage in cultural indoctrination as opposed to any kind of rationality training. The voucher programs, which were designed specifically to introduce some free market into the education system, are being used to teach things like Creationism and historical revisionism. Which is not to say that public education in states like Louisiana and Texas is any better, seeing as they are implementing the same kinds of curricula by popular vote.

In fact, most private schools are religious in nature. According to this advocacy site (hardly an unbiased source, I know), around 50% are Catholic. On the plus side, student performance tends to be somewhat better (though not drastically so) in private schools, according to CAPE as well as other sources. However, private schools are also quite a bit more expensive than public schools, with tuition levels somewhere around $10K (and often higher). This means that the students who attend them have much wealthier parents, and this fact alone can account for their higher performance.

This leads me to my second point: I believe that Gatto is mistaken when he yearns for earlier, simpler times, when education was unencumbered by any regulation whatsoever and students were free to learn (or to avoid learning) whatever they wanted. We do not live in such times anymore. Instead, we live in a world that is saturated by technology. Literacy, along with basic numeracy, is no longer a mark of high status, but an absolute requirement for daily life. Most well-paying jobs, creative pursuits, and even basic social interactions all rely on some form of information technology. Basic education is not a luxury, but an essential service.

Are public schools adequately providing this essential service? No. However, we simply cannot afford to live in a world where access to it is gated by wealth -- which is what would happen if schools were completely privatized. As far as I know, most if not all efforts to privatize essential services have ended in disaster; this includes police, fire departments, and even prisons (in California, at least). Basic health care is a particularly glaring example.

So, in summary, existing private schools are emphasizing indoctrination rather than critical thinking; and even if they were not, we cannot afford to restrict access to basic education based on personal wealth.

comment by Bugmaster · 2013-02-23T08:05:10.728Z · LW(p) · GW(p)

The problem I see with television is that the average person spends 4 hours a day watching it. ... My problem with that is not that they aren't exercising ... or that they aren't being productive ... but that they aren't living.

What does "living" mean, exactly ? I understand that you find your personal creative projects highly enjoyable, and that's great. But you aren't merely saying, "I enjoy X", you're saying, "enjoying Y instead of X is objectively wrong" (if I understand you correctly).

Why, then, do they use entertainment to learn instead of seeking out the most efficient method?

I address this point below, but I'd like to also point out that some people's goals are different from yours. They consume entertainment because it is enjoyable, or because it facilitates social contact (which they in turn find enjoyable), not because they believe it will make them more efficient (though see below).

So, although entertainment is a way of transmitting ideas, I question how efficient it is, and whether it provides enough other learning benefits to outweigh the cost of wrapping all those ideas in so much text.

Many people -- yourself not among them, admittedly -- find that they are able to internalize new ideas much more thoroughly if these ideas are tied into a narrative. Similarly, other people find it easier to communicate their ideas in the form of narratives; this is why Eliezer writes things like Three Worlds Collide and HPMOR instead of simply writing out the equations. This is also why he employs several tropes from fiction even in his non-fiction writing.

I'm not saying that this is the "right" way to learn, or anything; I am merely describing the situation that, as I believe, exists.

The statement "entertainment is an inefficient way to get ideas compared with other methods" seems true to me.

I am just not convinced that this statement applies to anything like a majority of "person+idea" combinations.

Replies from: Epiphany
comment by Epiphany · 2013-02-23T09:20:52.053Z · LW(p) · GW(p)

What does "living" mean, exactly ?

"Living" the way I used it means "living to the fullest" or, a little more specifically "feeling really engaged in life" or "feeling fulfilled".

I understand that you find your personal creative projects highly enjoyable, and that's great. But you aren't merely saying, "I enjoy X", you're saying, "enjoying Y instead of X is objectively wrong" (if I understand you correctly).

I used "living" to refer to a subjective state. There's nothing objective about it, and IMO, there's nothing objectively right or wrong about having a subjective state that is (even in your own opinion) not as good as the ideal.

I feel like your real challenge here is more similar to Kawoomba's concern. Am I right?

They consume entertainment because it is enjoyable,

Do you find it more enjoyable to passively watch entertainment than to do your own projects? Do you think most people do? If so, might that be because the fun was taken out of learning, or people's creativity was reduced to the point where doing your own project is too challenging, or people's self-confidence was made too dependent on others such that they don't feel comfortable pursuing that fulfilling sense of having done something on their own?

or because it facilitates social contact (which they in turn find enjoyable), not because they believe it will make them more efficient (though see below).

I puzzle at how you classify watching something together as "social contact". To me, being in the same room is not a social life. Watching the same entertainment is not quality time. The social contact I yearn for involves emotional intimacy - contact with the actual person inside, not just a sense of being in the same room watching the same thing. I don't understand how that can be called social contact.

Many people -- yourself not among them, admittedly -- find that they are able to internalize new ideas much more thoroughly if these ideas are tied into a narrative.

I've been thinking about this and I think what might be happening is that I make my own narratives.

Similarly, other people find it easier to communicate their ideas in the form of narratives

This, I can believe about Eliezer. There are places where he could have been more incisive but instead gets wordy to compensate. That's an interesting point.

I am just not convinced that this statement applies to anything like a majority of "person+idea" combinations.

Okay, so to clarify, your position is that entertainment is a more efficient way to learn?

Replies from: Bugmaster, Bugmaster
comment by Bugmaster · 2013-02-24T21:59:38.189Z · LW(p) · GW(p)

"Living" the way I used it means "living to the fullest" or, a little more specifically "feeling really engaged in life" or "feeling fulfilled".

I understand that you do not feel fulfilled when watching TV, but other people might. I would agree with your reply on Kawoomba's sub-thread:

Now, if you want to disagree with me on whether they think they are "really living", that might be really interesting. I acknowledge that mind projection fallacy might be causing me to think they want what I want.

For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture. You say:

I puzzle at how you classify watching something together as "social contact". To me, being in the same room is not a social life.

Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.), with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the "emotional intimacy" you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).

For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what "tsuyoku naritai" means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.

Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn't mean that it's good.

Okay, so to clarify, your position is that entertainment is a more efficient way to learn?

No, this phrasing is too strong. I meant what I said before: many people find it easier to internalize new ideas when they are presented as part of a narrative. This does not mean that entertainment is a more efficient way to learn all things for all people, or that it is objectively the best technique for learning things, or anything of the sort.

Replies from: Desrtopa, army1987
comment by Desrtopa · 2013-02-28T06:14:32.880Z · LW(p) · GW(p)

Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn't mean that it's good.

Why try to justify entertainment in terms of productivity per time? Is there any reason this makes more sense than, say, justifying productivity in terms of how much entertainment it allows for?

Replies from: Bugmaster
comment by Bugmaster · 2013-02-28T10:07:38.061Z · LW(p) · GW(p)

Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible. This means that spending time on any activities that do not contribute to this goal is irrational. A paperclip maximizer, for example, wouldn't spend any time on watching soap operas or reading romance novels -- unless doing so would lead to more paperclips (which is unlikely).

Of course, one could argue that consumption of passive entertainment does contribute to the average human's goals, since humans are unable to function properly without some downtime. But I don't know if I'd go so far as to claim that this is a feature, and not a bug, just like cancer or aging or whatever else evolution has saddled us with.

Replies from: Richard_Kennaway, army1987
comment by Richard_Kennaway · 2013-02-28T14:38:05.855Z · LW(p) · GW(p)

Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible.

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory. I'd even call it the sort of toxic mindwaste that RationalWiki loves to mock.

Once you've built that optimised world, who gets to slack off and just live in it, and how will they spend their time?

Replies from: Viliam_Bur, Jack, Bugmaster
comment by Viliam_Bur · 2013-02-28T20:05:02.824Z · LW(p) · GW(p)

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory.

Why exactly? I mean, my intuition also tells me it's wrong... but my intuition has a few assumptions that disagree with the proposed scenario. Let's make sure the intuition does not react to a strawman.

For example, when in real life people "work like slaves for a future paradise", the paradise often does not happen. Typically, the people have a wrong model of the world. (The wrong model is often provided by their leader, and their work in fact results in building their leader's personal paradise, nothing more.) And even if their model is right, their actions are more optimized for signalling effort than for real efficiency. (Working very hard signals more virtue than thinking and coming up with a smart plan to make a lot of money and pay someone else to do more work than we could.) Even with smart and honest people, there will typically be something they ignored or could not influence, such as someone powerful coming and taking the results of their work, or a conflict starting and destroying their seeds of the paradise. Or simply their internal conflicts, or lack of willpower to finish what they started.

The lesson we should take from this is that even if we have a plan to work like slaves for a future paradise, there is a very high prior probability that we missed something important. Which means that in fact we are not working for a future paradise; we only mistakenly think so. I agree that the prior probability is so high that even the most convincing reasoning and plans are unlikely to outweigh it.

However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don't have to worry about mistakes in your plans, because either Omega verified their correctness, or it is going to provide you corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?

In other words, is your objection "in situation X the decision D is wrong", or is it "the situation X is so unlikely that any decision D based on assumption of X will in real life be wrong"?

Replies from: Richard_Kennaway, Peterdjones, army1987
comment by Richard_Kennaway · 2013-02-28T22:52:46.861Z · LW(p) · GW(p)

However, for the sake of experiment, imagine that Omega comes and tells you

When Omega enters a discussion, my interest in it leaves.

Replies from: wedrifid
comment by wedrifid · 2013-03-01T09:44:58.937Z · LW(p) · GW(p)

When Omega enters a discussion, my interest in it leaves.

To the extent that someone is unable to use established tools of thought to focus attention on the important aspects of the problem, their contribution to a conversation is likely to be negative. This is particularly the case when it comes to decision theory, where it correlates strongly with pointless fighting of the counterfactual and muddled thinking.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-03-08T23:29:43.846Z · LW(p) · GW(p)

Omega has its uses and its misuses. I observe the latter on LW more often than the former. The present example is one such.

And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.

Replies from: wedrifid
comment by wedrifid · 2013-03-09T01:24:26.110Z · LW(p) · GW(p)

And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.

I intended the general claim as stated. I don't know you well enough for it to be personal. I will continue to support the use of Omega (and simplified decision theory problems in general) as a useful way to think.

For practical purposes pronouncements like this are best interpreted as indications that the speaker has nothing of value to say on the subject, not as indications that the speaker is too sophisticated for such childish considerations.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-03-09T14:55:28.010Z · LW(p) · GW(p)

For practical purposes pronouncements like this are best interpreted as indications

For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.

Replies from: wedrifid
comment by wedrifid · 2013-03-09T16:03:29.199Z · LW(p) · GW(p)

For practical purposes pronouncements like this are best interpreted as saying exactly what they say. You are, of course, free to make up whatever self-serving story you like around it.

This is evidently not a behavior you practice.

comment by Peterdjones · 2013-03-09T09:40:22.984Z · LW(p) · GW(p)

It is counterintuitive that you should slave for people you don't know, perhaps because you can't be sure you are serving their needs effectively. Even if that objection is removed by bringing in an omniscient oracle, there still seems to be a problem, because the prospect of one generation slaving to create paradise for another isn't fair. The simple version of utilitarianism being addressed here only sums individual utilities, and is blind to things that can only be defined at the group level, like justice and equality.

comment by A1987dM (army1987) · 2013-03-01T12:59:39.106Z · LW(p) · GW(p)

However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don't have to worry about mistakes in your plans, because either Omega verified their correctness, or it is going to provide you corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?

For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannonball take the same time to fall?

Replies from: Bugmaster
comment by Bugmaster · 2013-03-01T22:11:53.390Z · LW(p) · GW(p)

For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannonball take the same time to fall?

I believe the answer is "yes", but I had to think about that for a moment. I'm not sure how that's relevant to the current discussion, though.

I think your real point might be closer to something like, "thought experiments are useless at best, and should thus be avoided", but I don't want to put words into anyone's mouth.

Replies from: army1987
comment by A1987dM (army1987) · 2013-03-02T11:57:35.512Z · LW(p) · GW(p)

My point was something like, “of course if you assume away all the things that cause slave labour to be bad then slave labour is no longer bad, but that observation doesn't yield much of an insight about the real world”.

Replies from: Bugmaster
comment by Bugmaster · 2013-03-04T21:13:25.066Z · LW(p) · GW(p)

That makes sense, but I don't think it's what Viliam_Bur was talking about. His point, as far as I could tell, was that the problem with slave labor is the coercion, not the labor itself.

comment by Jack · 2013-03-09T01:45:32.604Z · LW(p) · GW(p)

"Decision theory" doesn't mean the same thing as "value system" and we shouldn't conflate them.

Replies from: Peterdjones
comment by Peterdjones · 2013-03-09T09:51:37.623Z · LW(p) · GW(p)

Yep. A morality that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken morality.

comment by Bugmaster · 2013-02-28T16:48:02.365Z · LW(p) · GW(p)

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise ... is prima facie a broken decision theory.

Why? I mean, I do agree with you personally, but I don't see why such a decision theory is objectively bad. You ask,

Once you've built that optimised world, who gets to slack off and just live in it, and how will they spend their time?

But the answer depends entirely on your goals. These can be as relatively modest as, "the world will be just like it is today, but everyone wears a party hat". Or they could be as ambitious as, "the world contains as many paperclips as physically possible". In the latter case, if you asked the paperclip maximizer "who gets to slack off?", it wouldn't find the question relevant in the least. It doesn't matter who gets to do what; all that matters are the paperclips.

You might argue that a paperclip-filled world would be a terrible place, and I agree, but that's just because you and I don't value paperclips as much as Clippy does. Clippy thinks your ideal world is terrible too, because it contains a bunch of useless things like "happy people in party hats", and not nearly enough paperclips.

However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time. Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I'm not sure whether it's a bug or a feature.

Replies from: Richard_Kennaway, IlyaShpitser
comment by Richard_Kennaway · 2013-02-28T17:52:19.424Z · LW(p) · GW(p)

However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time.

This is proving the conclusion by assuming it.

Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I'm not sure whether it's a bug or a feature.

The words make a perfectly logical pattern, but I find that the picture they make is absurd. The ontology has gone wrong.

Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.

BTW, for what it's worth, I do not watch TV. And now I am imagining a chapter of that book entitled "Never Sleep Alone".

Replies from: ygert, Bugmaster
comment by ygert · 2013-02-28T17:58:01.429Z · LW(p) · GW(p)

Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.

Actually, I think that the world described in that SMBC cartoon is far preferable to the standard DC comics world with Superman. I do not think that doing what Superman did there is a memetic immune disorder, but rather a (successful) attempt to make the world a better place.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-02-28T18:37:19.405Z · LW(p) · GW(p)

You would, then, not walk away from Omelas?

Replies from: Desrtopa
comment by Desrtopa · 2013-02-28T19:05:28.105Z · LW(p) · GW(p)

I definitely wouldn't. A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.

It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you're deciding from behind a veil of ignorance which society to be a part of, your expected well-being is going to be higher in Omelas.

Back when I was eleven or so, I contemplated this, and made a precommitment that if I were ever in a situation where I'm offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately, without giving myself any time to contemplate what I'd be getting myself into; so in that sense I've effectively volunteered myself to be the tormented child.

I don't disagree with maximally efficient altruism, just with the idea that it's sensible to judge entertainment only as an instrumental value in service of productivity.

Replies from: drnickbone, shminux, Bugmaster
comment by drnickbone · 2013-03-01T08:08:22.826Z · LW(p) · GW(p)

It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you're deciding from behind a veil of ignorance which society to be a part of, your expected well-being is going to be higher in Omelas.

You're assuming here that the "veil of ignorance" gives you an exactly equal chance of being each citizen of Omelas, so that a decision under the veil reduces to average utilitarianism.

However, in Rawls's formulation, you're not supposed to assume that; the veil means you're also entirely ignorant about the mechanism used to incarnate you as one of the citizens, and so must consider all probability distributions over the citizens when choosing your society. In particular, you must assign some weight to a distribution picked by a devil (or mischievous Omega) who will find the person with the very lowest utility in your choice of society and incarnate you as that person. So you wouldn't choose Omelas.

This seems to be why Rawls preferred maximin decision theory under the veil of ignorance rather than expected utility decision theory.
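
To make the contrast concrete, here is a minimal sketch of how the two decision rules can disagree about an Omelas-style tradeoff. The utility numbers are invented purely for illustration; only their ordering matters.

```python
# A minimal sketch, with made-up utility numbers, of how the two decision
# rules under the veil of ignorance can disagree about an Omelas-style deal.

omelas = [-100] + [90] * 999   # one tormented child, everyone else flourishing
real_city = [20] * 1000        # an even-handed but mediocre alternative

def expected_utility(society):
    # Uniform veil: an equal chance of being each citizen.
    return sum(society) / len(society)

def maximin(society):
    # Rawls's rule: judge a society by its worst-off member.
    return min(society)

for rule in (expected_utility, maximin):
    choice = max((omelas, real_city), key=rule)
    name = "Omelas" if choice is omelas else "the real city"
    print(f"{rule.__name__} chooses {name}")
# expected_utility chooses Omelas; maximin chooses the real city.
```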

Replies from: Desrtopa
comment by Desrtopa · 2013-03-01T13:40:03.381Z · LW(p) · GW(p)

In that case, don't use a Rawlsian veil of ignorance; it's not the best mechanism for addressing the decision. A veil where your own child has an equal chance of being the victim as anyone else's (assuming you're already too old to be the victim) is more the sort of situation anyone actually deciding whether or not to live in Omelas would face.

Of course, I would pick Omelas even under the Rawlsian veil, since as I've said I'm willing to be the one who takes the hit.

Replies from: drnickbone
comment by drnickbone · 2013-03-01T17:10:33.335Z · LW(p) · GW(p)

Ah, so you are considering the question "If Omelas already exists, should I choose to live there or walk away?" rather than the Rawlsian question "Should we create a society like Omelas in the first place?" The "veil of ignorance" meme nearly always refers to the Rawlsian concept, so I misunderstood you there.

Incidentally, I reread the story and there seems to be no description of how the child was selected in the first place or how he/she is replaced. So it's not clear that your own child does have the same chance of being the victim as anyone else's.

Replies from: Desrtopa
comment by Desrtopa · 2013-03-01T23:14:30.344Z · LW(p) · GW(p)

Well, as I mentioned in another comment some time ago (not in this thread,) I support both not walking away from Omelas and creating Omelases, unless an even more utility-efficient method of creating happy and functional societies is forthcoming.

Our society rests on a lot more suffering than Omelas, not just in an incidental way (such as people within our cities who lack housing or medical care), but directly, through channels such as economic slavery, where companies rely on workers, mainly abroad, whom they keep locked in debt and who could not leave to seek employment elsewhere even if they wanted to and other opportunities were forthcoming. I can respect a moral code that would lead people to walk out on Omelas as a form of protest, if it would also lead them to walk out on modern society to live on a self-sufficient seasteading colony; but I reject the notion that Omelas is worse than, or as bad as, our own society, in a morally relevant way.

comment by shminux · 2013-02-28T20:22:03.127Z · LW(p) · GW(p)

A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.

I cannot fathom why a comment like that would be upvoted by anyone but an unfeeling robot. This is not even the dust-specks-vs-torture case, given that Omelas is not a very large city.

if I were ever in a situation where I'm offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately

Imagine that it is not you, but your child you must sacrifice. Would you shrug and say "sorry, my precious girl, you must suffer until you die so that your mommy/daddy can live a happy life"? I know what I would do.

Replies from: Desrtopa, drethelin, Bugmaster
comment by Desrtopa · 2013-03-01T00:15:36.641Z · LW(p) · GW(p)

Imagine that it is not you, but your child you must sacrifice. Would you shrug and say "sorry, my precious girl, you must suffer until you die so that your mommy/daddy can live a happy life"?

I hope I would have the strength to say "sorry, my precious girl, you must suffer until you die so that everyone in the city can live a happy life." Doing it just for myself and my own social circle wouldn't be a good tradeoff, but those aren't the terms of the scenario.

Considering how many of our basic commodities rely on sweatshop or otherwise extremely miserable labor, we're already living off the backs of quite a lot of tormented children.

Replies from: shminux
comment by shminux · 2013-03-01T03:54:22.957Z · LW(p) · GW(p)

I hope I would have the strength to say "sorry, my precious girl, you must suffer until you die so that everyone in the city can live a happy life."

And there I thought that Babyeaters lived only in Eliezer's sci-fi story...

Replies from: Desrtopa
comment by Desrtopa · 2013-03-01T04:09:24.888Z · LW(p) · GW(p)

The Babyeaters' babies outnumber the adults; their situation is analogous, not to the city of Omelas, but to a utopian city built on top of another, even larger, dystopian city, on which it relies for its existence.

I would rather live in a society where people loved and cherished their children, but also valued their society, and were willing to shut up and multiply and take the hit themselves, or let it fall on their own loved ones, for the sake of a common good that really is that much greater; and I want to be the sort of person I'd want others in that society to be.

I've never had children, but I have been in love, in a reciprocated relationship of the sort where it feels like it's actually as big a deal as all the love songs have ever made it out to be, and I think that sacrificing someone I loved for the sake of a city like Omelas is something I'd be willing to do in practice, not just in theory (and she never would have expected me to do differently, nor would I of her.) It's definitely not the case that really loving someone, with true depth of feeling, precludes acknowledgment that there are some things worth sacrificing even that bond for.

Replies from: shminux
comment by shminux · 2013-03-01T18:28:40.553Z · LW(p) · GW(p)

I've never had children

I'm guessing that neither have most of those who upvoted you and downvoted me. I literally cannot imagine a worse betrayal than the scenario we've been discussing. I can imagine one kind-of-happy society where something like this would be OK, though.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-03-01T18:42:17.016Z · LW(p) · GW(p)

I cannot fathom why a comment like that would be upvoted by anyone but an unfeeling robot.

Sounds like you need to update your model of people who don't have children. Also, how aggressively do you campaign against things like sweatshop labor in third-world countries, which as Desrtopa correctly points out are a substantially worse real-world analogue? Do children only matter if they're your children?

comment by drethelin · 2013-02-28T20:32:39.717Z · LW(p) · GW(p)

The real problem with Omelas: it totally ignores the fact that there are children suffering, literally as we speak, in every city on the planet. Omelas somehow managed to get it down to one child. How many other children would you sacrifice for your own?

Replies from: shminux
comment by shminux · 2013-02-28T20:50:57.511Z · LW(p) · GW(p)

The real problem with Omelas: it totally ignores the fact that there are children suffering, literally as we speak, in every city on the planet.

Unlike in the fictional Omelas, there is no direct dependence or direct sacrifice. Certainly it is possible to at least temporarily alleviate the suffering of others in this non-hypothetical world by sacrificing some of your fortune, but that's the difference between an active and a passive approach; there is a large gap there.

Replies from: satt
comment by satt · 2013-03-07T02:42:43.280Z · LW(p) · GW(p)

Related. Nornagest put their finger on this being a conflict between the consequentially compelling (optimizing for general welfare) and the psychologically compelling (not being confronted with knowledge of an individual child suffering torture because of you). I think Nornagest's also right that a fully specified Omelas scenario would almost certainly feel less compelling, which is one reason I'm not much impressed by Le Guin's story.

comment by Bugmaster · 2013-02-28T23:54:42.387Z · LW(p) · GW(p)

Imagine that it is not you, but your child you must sacrifice.

The situation is not analogous, since sacrificing one's child would presumably make most parents miserable for the rest of their days. In Omelas, however, the sacrifice makes people happy, instead.

Replies from: shminux
comment by shminux · 2013-03-01T01:25:06.820Z · LW(p) · GW(p)

And I thought that the Babyeaters only existed in Eliezer's fiction...

comment by Bugmaster · 2013-02-28T20:08:15.365Z · LW(p) · GW(p)

I don't disagree with maximally efficient altruism, just with the idea that it's sensible to judge entertainment only as an instrumental value in service of productivity.

As I said in previous comments, I am genuinely not sure whether entertainment is a good terminal goal to have.

By analogy, I absolutely require sleep in order to be productive at all in any capacity; but if I could swallow a magic pill that removed my need for sleep (with no other side-effects), I'd do so in a heartbeat. Sleep is an instrumental goal for me, not a terminal one. But I don't know if entertainment is like that or not.

Thus, I'm really interested in hearing more about your thoughts on the topic.

Replies from: Desrtopa
comment by Desrtopa · 2013-03-01T00:26:14.894Z · LW(p) · GW(p)

I'm not sure that I would regard entertainment as a terminal goal, but I'm very sure I wouldn't regard productivity as one. As an instrumental goal, it's an intermediary between a lot of things that I care about, but optimizing for productivity seems like about as worthy a goal to me as paperclipping.

Replies from: Bugmaster
comment by Bugmaster · 2013-03-01T00:52:08.501Z · LW(p) · GW(p)

Right, agreed, but "productivity" is just a rough estimate of how quickly you're moving towards your actual goals. If entertainment is not one of them, then either it enhances your productivity in some way, or it reduces it, or it has no effect (which is unlikely, IMO).

Productivity and fun aren't orthogonal; for example, it is entirely possible that if your goal is "experience as much pleasure as possible", then some amount of entertainment would directly contribute to the goal, and would thus be productive. That said, though, I can't claim that such a goal would be a good goal to have in the first place.

comment by Bugmaster · 2013-02-28T19:58:45.949Z · LW(p) · GW(p)

This is proving the conclusion by assuming it.

How so? Imagine that you have two identical paperclip maximizers; for simplicity's sake, let's assume that they are not capable of radical self-modification (though the results would be similar if they were). Each agent is capable of converting raw titanium to paperclips at the same rate. Agent A spends 100% of its time on making paperclips. Agent B spends 80% of its time on paperclips, and 20% of its time on watching TV. If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first?
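
For what it's worth, the arithmetic behind that rhetorical question can be made explicit with a toy sketch (the quantities and the conversion rate below are made up; only the time fractions matter):

```python
# A toy version of the titanium comparison; the numbers are arbitrary, and
# the only point is that completion time scales as 1 / (fraction of time spent).

titanium = 1000.0   # units of raw titanium each agent receives
rate = 10.0         # units converted per hour of actual paperclipping

def hours_to_finish(fraction_spent_clipping):
    return titanium / (rate * fraction_spent_clipping)

print(hours_to_finish(1.0))   # Agent A: 100.0 hours
print(hours_to_finish(0.8))   # Agent B: 125.0 hours
```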

That is what the saying "he who would be Pope must think of nothing else" looks like in practice.

FeepingCreature addressed this better than I could in this comment. I understand that you find the idea of making paperclips (or political movements, or software, or whatever) all day every day with no breaks abhorrent, and so do I. But then, some people find polyamory abhorrent as well, and then they "polyhack" themselves and grow to enjoy it. Is entertainment your terminal value, or a mental bias? And if it is a terminal value, is it the best terminal value that you could possibly have?

Replies from: Richard_Kennaway, whowhowho
comment by Richard_Kennaway · 2013-03-01T00:00:10.276Z · LW(p) · GW(p)

WARNING: This comment contains explicit discussion of an information hazard.

Imagine that you have two identical paperclip maximizers

I decline to do so. What imaginary creatures whose choices have been written into their definition would choose is of no significance. (This is also a reply to the comment of FeepingCreature you referenced.) I'm more interested in the practical question of how actual human beings, which this discussion began with, can avoid the pitfall of being taken over by a utility monster they've created in their own heads.

This is a basilisk problem. Unlike Roko's, which depends on exotic decision theory, this one involves nothing more than plain utilitarianism. Unlike the standard Utility Monster scenario, this one involves no imaginary entities or hypothetical situations. You just have to look at the actual world around you through the eyes of utilitarianism. It's a very short road from the innocent-sounding "the greatest good for the greatest number" to this: There are seven billion people on this planet. How can the good you could do them possibly be outweighed by any amount of your own happiness? Just by sitting there reading LessWrong you're killing babies! Having a beer? You're drinking dead babies. Own a car? You're driving on a carpet of dead babies! Murderer! Murderer! Add a dash of transhumanism and you can up the stakes to an obligation to bring about billions of billions of future humans throughout the universe living lives billions of times better than ours.

But even Peter Singer doesn't go that far, continuing to be an academic professor and paying his utilitarian obligations by preaching utilitarianism and donating twenty percent of his salary to charity.

This is such an obvious failure mode for utilitarianism, a philosophy at least two centuries old, that surely philosophers must have addressed it. But I don't know what their responses are.

Christianity has the same problem, and handles it in practice by testing the vocation of those who come to it seeking to devote their whole life to the service of God, to determine whether they are truly called by God. For it is written that many are called, yet few are chosen. In non-supernatural terms, that means determining whether the applicant is psychologically fitted for the life they feel called to, and if not, deflecting their mania into some more productive route.

Replies from: TheOtherDave, army1987, Eliezer_Yudkowsky, Bugmaster
comment by TheOtherDave · 2013-03-01T03:30:12.235Z · LW(p) · GW(p)

Consider two humans, H1 and H2, both utilitarians.

H1 looks at the world the way you describe Peter Singer here.
H2 looks at the world "through the eyes of utilitarianism" as you describe it here.

My expectation is that H1 will do more good in their lifetime than H2.
What's your expectation?

Replies from: army1987, Richard_Kennaway
comment by A1987dM (army1987) · 2013-03-09T11:54:47.129Z · LW(p) · GW(p)

And then you have people like H0, who notices that H2 is crazy, decides that this means they shouldn't even try to be altruistic, and accuses H1 of hypocrisy because she's not like H2. (Exhibit A)

comment by Richard_Kennaway · 2013-03-01T09:57:06.048Z · LW(p) · GW(p)

That is my expectation also. However, persuading H2 of that ("but dead babies!") is likely to be a work of counselling or spiritual guidance rather than reason.

Replies from: TheOtherDave, whowhowho
comment by TheOtherDave · 2013-03-01T22:11:52.836Z · LW(p) · GW(p)

Well... so, if we both expect H1 to do more good than H2, it seems that if we were to look at them through the eyes of utilitarianism, we would endorse being H1 over being H2.
But you seem to be saying that H2, looking through the eyes of utilitarianism, endorses being H2 over being H1.
I am therefore deeply confused by your model of what's going on here.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-03-08T23:23:51.457Z · LW(p) · GW(p)

Oh yes, H1 is more effective, healthier, saner, more rational, etc. than H2. H2 is experiencing existential panic and cannot relinquish his death-grip on the idea.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-08T23:42:39.425Z · LW(p) · GW(p)

You confuse me further with every post.

Do you think being a utilitarian makes someone less effective, healthy, sane, rational, etc.?
Or do you think H2 has these various traits independent of their being a utilitarian?

Replies from: whowhowho, Richard_Kennaway
comment by whowhowho · 2013-03-09T00:48:43.556Z · LW(p) · GW(p)

There's a lot of different kinds of utilitarian.

comment by Richard_Kennaway · 2013-03-08T23:50:05.164Z · LW(p) · GW(p)

WARNING: More discussion of a basilisk, with a link to a real-world example.

It's a possible failure mode of utilitarianism. Some people succumb to it (see George Price for an actual example of a similar failure) and some don't.

I don't understand your confusion and this pair of questions just seems misconceived.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-03-09T00:59:41.280Z · LW(p) · GW(p)

(shrug) OK.
I certainly agree with you that some utilitarians suffer from the existential panic and inability to relinquish their death-grips on unhealthy ideas, while others don't.
I'm tapping out here.

comment by whowhowho · 2013-03-09T00:47:11.186Z · LW(p) · GW(p)

One could reason that one is better placed to do good effectively when focussing on oneself, one's family, one's community, etc., simply because one understands them better.

comment by A1987dM (army1987) · 2013-03-09T11:39:26.739Z · LW(p) · GW(p)

(Warning: replying to discussion of a potential information hazard.)

Whfg ol fvggvat gurer ernqvat YrffJebat lbh'er xvyyvat onovrf! Univat n orre? Lbh'er qevaxvat qrnq onovrf.

Gung'f na rknttrengvba (tvira gung ng gung cbvag lbh unqa'g zragvbarq genafuhznavfz lrg) -- nf bs abj, vg'f rfgvzngrq gb gnxr zber guna gjb gubhfnaq qbyynef gb fnir bar puvyq'f yvsr jvgu Tvirjryy'f gbc-engrq punevgl. (Be vf ryrpgevpvgl naq orre zhpu zber rkcrafvir jurer lbh'er sebz?)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-03-01T18:52:18.658Z · LW(p) · GW(p)

Infohazard reference with no warning sign. Edit and reply to this so I can restore.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-03-08T23:18:33.872Z · LW(p) · GW(p)

Done. Sorry this took so long, I've been taken mostly offline by a biohazard for the last week.

comment by Bugmaster · 2013-03-01T01:04:25.703Z · LW(p) · GW(p)

What imaginary creatures whose choices have been written into their definition would choose is of no significance.

Are you saying that human choices are not "written into their definition" in some measure?

Also, keep in mind that a goal like "make more paperclips" does leave a lot of room for other choices. The agent could spend its time studying metallurgy, or buying existing paperclip factories, or experimenting with alloys, or attempting to invent nanotechnology, or some combination of these and many more activities. It's not constrained to just a single path.

Just by sitting there reading LessWrong you're killing babies! ... Add a dash of transhumanism and you can up the stakes to an obligation to bring about billions of billions of future humans throughout the universe living lives billions of times better than ours.

On the one hand, I do agree with you, and I can't wait to see your proposed solution. On the other hand, I'm not sure what this has to do with the topic. I wasn't talking about billions of future humans or anything of the sort, merely about a single (semi-hypothetical) human and his goals; whether entertainment is a terminal or instrumental goal; and whether it is a good goal to have.

Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it? People with extremely low preferences for passive entertainment do exist, after all, so this scenario isn't entirely fantastic (other than for the magic pill part, of course).

Replies from: whowhowho, Richard_Kennaway
comment by whowhowho · 2013-03-09T16:21:41.545Z · LW(p) · GW(p)

Are you saying that human choices are not "written into their definition" in some measure?

What is written into humans by evolution is hardly relevant. The point is that you can't prove anything about humans by drawing a comparison with imaginary creatures that have had something potentially quite different written into them by their creator.

comment by Richard_Kennaway · 2013-03-08T23:43:56.805Z · LW(p) · GW(p)

Are you saying that human choices are not "written into their definition" in some measure?

I have no idea what that even means.

On the one hand, I do agree with you, and I can't wait to see your proposed solution.

My only solution is "don't do that then". It's a broken thought process, and my interest in it ends with that recognition. Am I a soul doctor? I am not. I seem to be naturally resistant to that failure, but I don't know how to fix anyone who isn't.

Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it?

What desire for passive entertainment? For that matter, what is this "passive entertainment"? I am not getting a clear idea of what we are talking about. At any rate, I can't imagine "entertainment" in the ordinary meaning of that word being a terminal goal.

FWIW, I do not watch television, and have never attended spectator sports.

People with extremely low preferences for passive entertainment do exist, after all

Quite.

Replies from: Bugmaster, whowhowho
comment by Bugmaster · 2013-03-09T02:48:00.931Z · LW(p) · GW(p)

Are you saying that human choices are not "written into their definition" in some measure?

I have no idea what that even means.

To rephrase: do you believe that all choices made by humans are completely under the humans' conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?

You objected to my using Clippy as an analogy to human behaviour, on the grounds that Clippy's choices are "written into its definition". My point is that (a) Clippy is free to make whatever choices it wants, as long as it believes (correctly or erroneously) that such choices would lead to more paperclips; (b) we humans operate in a similar way, only we care about things other than paperclips; and therefore (c) Clippy is a valid analogy.

My only solution is "don't do that then".

Don't do what? Do you have a moral theory which works better than utilitarianism/consequentialism?

What desire for passive entertainment? For that matter, what is this "passive entertainment"?

You don't watch TV or attend sports, but do you read any fiction books? Listen to music? Look at paintings or sculptures (on your own initiative, that is, and not as part of a job)? Enjoy listening to some small subclass of jokes? Watch any movies? Play video games? Stare at a fire at night? I'm just trying to pinpoint your general level of interest in entertainment.

At any rate, I can't imagine "entertainment" in the ordinary meaning of that word being a terminal goal.

Just because you personally can't imagine something doesn't mean it's not true. For example, art and music -- both of which are forms of passive entertainment -- have been a part of human history ever since the caveman days, and they continue to flourish today. There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music. On the other hand, there are lots of things hardcoded in our genes that we'd be better off without...

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-03-09T15:08:31.022Z · LW(p) · GW(p)

To rephrase: do you believe that all choices made by humans are completely under the humans' conscious control? If not, what proportion of our choices is under our control, and what proportion is written into our genes and is thus difficult, if not impossible, to change (given our present level of technology)?

The whole language is wrong here.

What does it mean to talk about a choice being "completely under the humans' conscious control"? Obviously, the causal connections wind through and through all manner of things that are outside consciousness as well as inside. When could you ever say that a decision is "completely under conscious control"?

Then you talk as if a decision not "completely under conscious control" must be "written into the genes". Where does that come from?

do you read any fiction books?

Why do you specify fiction? Is fiction "passive entertainment" but non-fiction something else?

There may be something hardcoded in our genes (maybe not yours personally, but on average) that makes us enjoy art and music.

What is this "us" that is separate from and acted upon by our genes? Mentalistic dualism?

My only solution is "don't do that then".

Don't do what? Do you have a moral theory which works better than utilitarianism/consequentialism?

Don't crash and burn. I have no moral theory and am not impressed by anything on offer from the philosophers.

To sum up, there's a large and complex set of assumptions behind everything you're saying here that I don't think I share, but I can only guess at from glimpsing the shadowy outlines. I doubt further discussion will get anywhere useful.

comment by whowhowho · 2013-03-09T00:53:10.524Z · LW(p) · GW(p)

Are you saying that human choices are not "written into their definition" in some measure?

I think Bugmaster is equating being "written in" in the sense of a stipulation in a thought experiment with being "written in" in the sense of being the outcome of an evolutionary process.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-03-09T15:14:17.800Z · LW(p) · GW(p)

If he is, he shouldn't. These are completely different concepts.

comment by whowhowho · 2013-03-09T00:55:24.331Z · LW(p) · GW(p)

If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first?

That has no relevance to morality. Morality is not winning; it is not efficiently fulfilling an arbitrary utility function.

comment by IlyaShpitser · 2013-02-28T16:55:25.809Z · LW(p) · GW(p)

I mean, I do agree with you personally, but I don't see why such a decision theory is objectively bad.

This decision theory is bad because it fails the "Scientology test."

Replies from: FeepingCreature
comment by FeepingCreature · 2013-02-28T17:32:07.050Z · LW(p) · GW(p)

That's hardly objective. The challenge is to formalize that test.

Btw: the problem you're having is not due to any decision theory but due to the goal system. You want there to be entertainment and fun and the like. However, the postulated agent had a primary goal that did not include entertainment and fun. This seems alien to us, but for the mindset of such an agent "eschew entertainment and fun" is the correct and sane behavior.

Replies from: Bugmaster
comment by Bugmaster · 2013-02-28T20:14:26.820Z · LW(p) · GW(p)

Exactly, though see my comment on a sibling thread.

Out of curiosity though, what is the "Scientology test"? Is that some commonly-accepted term in Less Wrong jargon? Presumably it doesn't involve poorly calibrated galvanic skin response meters... :-/

Replies from: FeepingCreature
comment by FeepingCreature · 2013-03-01T19:06:27.983Z · LW(p) · GW(p)

Not the commenter, but I think it's just "it makes you do crazy things, like scientologists". It's not a standard LW thing.

comment by A1987dM (army1987) · 2013-03-01T12:54:08.730Z · LW(p) · GW(p)

if your goal is to optimize the world

Optimize it for what?

Replies from: Bugmaster
comment by Bugmaster · 2013-03-01T16:46:57.565Z · LW(p) · GW(p)

That is kind of up to you. That's the problem with terminal goals...

comment by A1987dM (army1987) · 2013-03-09T13:02:06.864Z · LW(p) · GW(p)

For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture.

Music is only passive entertainment if you just listen to it, not if you sing it, play it, or dance to it.

Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.) with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the "emotional intimacy" you speak of. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).

I agree that people spend lots of time talking about these kinds of things, and that the more shared topics of conversation you have with someone the easier it is to socialize with them, but I disagree that there are few non-technical things one can talk about other than what you get from passive entertainment. I seldom watch TV/films/sports, but I have plenty of non-technical things I can talk about with people -- parties we've been to, people we know, places we've visited, our tastes in food and drinks, unusual stuff that happened to us, what we've been doing lately, our plans for the near future, ranting about politics, conspiracy theories, the freakin' weather, whatever -- and I'd consider talking about some of these topics to build more ‘emotional intimacy’ than talking about some Hollywood movie or the Champions League or similar. (Also, I take exception to the apparent implication of the parenthetical at the end of the paragraph -- it is possible to entertain people by talking about STEM topics, if you're sufficiently Feynman-esque about that.)

For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what "tsuyoku naritai" means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.

I have read very little of that kind of fiction, and still I haven't felt excluded by that in the slightest (well, except that one time when the latest HPMOR thread clogged up the top Discussion comments of the week when I hadn't read HPMOR yet, and the occasional Discussion threads about MLP -- but that's a small minority of the time).

comment by Bugmaster · 2013-02-24T22:40:53.001Z · LW(p) · GW(p)

This article, courtesy of the recent Seq Rerun, seems serendipitous:

http://lesswrong.com/lw/yf/moral_truth_in_fiction/

comment by Kawoomba · 2013-02-23T06:13:08.058Z · LW(p) · GW(p)

The problem I see with television is that the average person spends 4 hours a day watching it. (...) Spending four hours a day in fantasy mode is not possible for me (I'm too motivated to DO something) and I don't seem to need anywhere near that much daydreaming.

What's wrong with live and let live (for their notion of 'living')? You can value "DO"ing something (apparently not counting daydreaming) over other activities for yourself, that's your prerogative, but why do you get to say who is and isn't "living"?

Replies from: Epiphany
comment by Epiphany · 2013-02-23T08:57:07.623Z · LW(p) · GW(p)

That was addressed here:

I imagine that if asked whether they would have preferred to watch x number of shows, or spent all of that free time on getting out there and living, most people would probably choose the latter - and that's sad.

It's not that I want to tell them whether they're "really living", it's that I think they don't think spending so much of their free time on TV is "really living".

Now, if you want to disagree with me on whether they think they are "really living", that might be really interesting. I acknowledge that mind projection fallacy might be causing me to think they want what I want.

Replies from: taelor, Nornagest
comment by taelor · 2013-02-23T11:18:15.098Z · LW(p) · GW(p)

I suspect that many people who enjoy television, if asked, would claim that socializing with friends or other things are somehow better or more pure, but only because TV is a low-status medium, and so saying that watching TV isn't "real living" has become something of a cached thought within our culture; I suspect you'd have a much harder time finding people who will claim that spending time enjoying art or reading classic literature or other higher-status fictional media doesn't count as "real living".

comment by Nornagest · 2013-02-23T09:40:05.544Z · LW(p) · GW(p)

It's not that I want to tell them whether they're "really living", it's that I think they don't think spending so much of their free time on TV is "really living".

I think I might actually expect people to endorse different activities in this context at different levels of abstraction.

That is, if you asked J. Random TV Consumer to rank (say) TV and socialization, or study, or some other venue for self-improvement, I wouldn't be too surprised if they consistently picked the latter. But if you broke down these categories into specific tasks, I'd expect individual shows to rate more highly -- in some cases much more highly -- than implied by the category rating.

I'm not sure what this implies about true preferences.

Replies from: Epiphany
comment by Epiphany · 2013-02-23T10:17:04.956Z · LW(p) · GW(p)

I think I need an example of this to understand your point here.

Replies from: Nornagest
comment by Nornagest · 2013-02-23T10:40:04.013Z · LW(p) · GW(p)

Well, for example, I wouldn't be too surprised to find the same person saying both "I'd rather socialize than watch TV" and "I'd rather watch Game of Thrones [or other popular TV show] than call my friend for dinner tonight".

Of course that's just one specialization, and the plausibility of a particular scenario depends on personality and relative appeal.

comment by MugaSofer · 2013-02-21T11:11:23.559Z · LW(p) · GW(p)

Offtopic: Does anyone know where you can find that speech in regular HTML format? I definitely read it in that format, but I can't find it again.

Ontopic: While I appreciate (and agree with) the point he's making, overall, he uses a lot of exaggeration and hyperbole, at best. It seems pretty clear that specific teachers can make a difference to individuals, even if they can't enact structural change.

Also:

What do you mean by "crime against humanity"?

Replies from: Bugmaster
comment by Bugmaster · 2013-02-21T20:56:38.543Z · LW(p) · GW(p)

I could've sworn that I saw his entire book in HTML format somewhere, a long time ago, but now I can't find it. Perhaps I only imagined it.

From what I recall, in the later chapters he claims that our current educational system was deliberately designed in meticulous detail by a shadowy conspiracy of statists bent on world (or, at the very least, national) domination. Again, my recollection could be wildly off the mark, but I do seem to remember staring at my screen and thinking, "Really, Gatto? Really?"

Replies from: Nornagest
comment by Nornagest · 2013-02-22T05:24:57.266Z · LW(p) · GW(p)

I read Dumbing Us Down, which might not be the book you're thinking of -- if memory serves, he's written a few -- but I don't remember him ever quite going with the conspiracy theory angle.

He skirts the edges of it pretty closely, granted. In the context of history of education, his thesis is basically that the American educational system is an offshoot of the Prussian system and that that system was picked because it prioritizes obedience to authority. Even if we take that all at face value, though, it doesn't require a conspiracy -- just a bunch of 19th- and early 20th-century social reformers with a fondness for one of the more authoritarian regimes of the day, openly doing their jobs.

Now, while it's pretty well documented that Horace Mann and some of his intellectual heirs had the Prussian system in mind, I've never seen historical documentation giving exactly those reasons for choosing it. And in any case the systems diverged in the mid-1800s and we'd need to account for subsequent changes before stringing up the present-day American school system on those charges. But at its core it's a pretty plausible hypothesis -- many of the features that after two World Wars make the Prussians look kind of questionable to us were, at the time, being held up as models of national organization, and a lot of that did have to do with regimentation of various kinds.

comment by MugaSofer · 2013-02-21T11:20:42.920Z · LW(p) · GW(p)

For information regarding religion, I recommend the blog of a former Christian (Luke Muehlhauser) as an addition to your reading list. That is here: Common Sense Atheism. I recommend this in particular because he completed the process you've started - the process of reviewing Christian beliefs - so Luke's writing may be able to save you significant time and provide you with information you may not encounter in other sources.

Speaking as a rationalist and a Christian, I've always found that a bit too propaganda-ish for my tastes. And I wouldn't call Luke's journey "completed", exactly. Still, it can be valuable to see what others have thought in similar positions to you, in a shoulders-of-giants sort of way.

I think it would be better to focus on improving your rationality, rather than seeking out tracts that disagree with you. There's nothing wrong with reading such tracts, as long as you're rational enough not to internalize mistakes from it (on either side) but I wouldn't make it your main goal.

comment by Bugmaster · 2013-02-20T22:55:09.113Z · LW(p) · GW(p)

I hope to find the best evidence about theology here. I don't mean evidence for or against, just the evidence about the subject.

What does "evidence about X" mean, as opposed to "evidence for X"?

Replies from: Qiaochu_Yuan, Desrtopa, army1987
comment by Qiaochu_Yuan · 2013-02-20T23:40:50.037Z · LW(p) · GW(p)

My interpretation is "evidence that was not obtained in the service of a particular bottom line."

comment by Desrtopa · 2013-02-20T23:05:55.607Z · LW(p) · GW(p)

I'd interpret it as "evidence which bears on the question X" as opposed to "evidence which supports answer Y to question X."

For instance, if you wanted to know whether anthropogenic climate change was occurring, you would want to search for "evidence about anthropogenic climate change" rather than "evidence for anthropogenic climate change."

Replies from: Bugmaster
comment by Bugmaster · 2013-02-21T00:31:39.774Z · LW(p) · GW(p)

Fair enough, that makes sense. I guess I just wasn't used to seeing this verbal construct before.

comment by A1987dM (army1987) · 2013-02-21T13:07:35.607Z · LW(p) · GW(p)

The former means that log(P(E|X)/P(E|~X)) is non-negligible in absolute value; the latter means that it is positive.
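
Spelled out as a minimal sketch (the probabilities here are invented, purely for illustration):

```python
import math

def log_likelihood_ratio(p_e_given_x, p_e_given_not_x):
    # log(P(E|X) / P(E|~X)): positive favours X, negative favours ~X.
    return math.log(p_e_given_x / p_e_given_not_x)

# Invented probabilities, for illustration only:
print(log_likelihood_ratio(0.9, 0.3))   # ~ +1.10: evidence for X
print(log_likelihood_ratio(0.1, 0.4))   # ~ -1.39: evidence about X, but against it

# "Evidence about X" only requires the magnitude to be non-negligible;
# "evidence for X" additionally requires the sign to be positive.
```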

comment by Ford · 2013-02-20T21:20:13.176Z · LW(p) · GW(p)

You may find this story (a scientist dealing with evidence that conflicts with his religion) interesting.

http://www.exmormonscholarstestify.org/simon-southerton.html

comment by OneLonePrediction · 2012-11-16T08:01:23.597Z · LW(p) · GW(p)

I'm here to make one public prediction that I want to be as widely read as possible. I'm here to predict publicly that the apparent increase in autism prevalence is over. It's important to predict it because it distinguishes between the position that autism is increasing unstoppably for no known reason (or because of vaccines) and the position that autism has not increased in prevalence, but diagnosis has increased in accuracy and a greater percentage of people with autism spectrum disorders are being diagnosed. It's important that this be as widely read as possible as soon as possible because the next time prevalence estimates come out, I will be shown right or wrong. I want my theory and prediction out there now so that I can show that I predicted a surprising result before it happened. While many people are too irrational to be surprised when they see this result even though they have predicted the opposite, I hope that rationalists will come to believe my position when it is proven right. I hope that everyone disinterested will come to believe this. The reason why I hope this is that I want them to be more likely to listen to me when I make statements about human rights as they apply to people with autism spectrum disorders. It is important that society change its attitudes toward such individuals.

Please help me by upvoting me to two karma so I can post in the discussion section.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2012-11-16T08:25:40.344Z · LW(p) · GW(p)

I'm not sure you're right that we won't see any increase in autism prevalence - there are still some groups (girls, racial minorities, poor people) that are "underserved" when it comes to diagnosis, so we could see an increase if that changes, even if your underlying theory is correct. Still upvoted, tho.

Replies from: OneLonePrediction
comment by OneLonePrediction · 2012-11-16T18:09:48.507Z · LW(p) · GW(p)

Thank you. Yes, this is possible, but then the increase in those groups would have to end up exactly matching the decrease in adult rates (from people learning coping skills so well as to become undiagnosable), and that seems unlikely to me. Why shouldn't one be vastly more or less?

Anyway, I'm going to make the article now. If you want to continue this, we can do it there.

comment by therufs · 2012-09-29T02:50:26.101Z · LW(p) · GW(p)

I saw this site on evand's computer one day, so of course then had to look it up for myself. In my free time, I pester him with LW-y questions.

By way of background, I graduated from a trying-to-be-progressive-but-sort-of-hung-up-on-orthodoxy quasi-Protestant seminary in spring 2010. Primary discernible effects of this schooling (i.e., I would assign these a high probability of relevance on LW) include:

  • deeply suspicious of pretty much everything

  • a predisposition to enter a Hulk-smash rage at the faintest whiff of systematic injustice or oppression

  • high value on beauty, imagination*, and inclusivity

* Part of my motivation to involve myself in rationalism is a hope that I can learn ways to imagine better (more usefully, maybe).

I like learning more about how brains work (/don't work). Also about communities. Also about things like why people say and do what they say and do, both in terms of conditioning/unconscious motivation and conscious decision. And and and. I will start keeping track on a wiki page perhaps.

I cherish ambitions of being able to contribute to a discussion one day! (If anyone has any ideas/relevant information about getting over not wanting to look stupid, please do share ...)

Hi!

Replies from: None, Epiphany
comment by [deleted] · 2012-09-29T03:15:00.150Z · LW(p) · GW(p)

Welcome! You sound like just our type. Glad to have you with us.

If anyone has any ideas/relevant information about getting over not wanting to look stupid, please do share ...

Lurk, read the archives, brazenly post things you are quite sure of. Remember that downvotes don't mean we hate you. I dunno. I only get the fear after I post so it's not a problem for me.

comment by Epiphany · 2012-09-29T05:00:06.133Z · LW(p) · GW(p)

(If anyone has any ideas/relevant information about getting over not wanting to look stupid, please do share ...)

Don't worry, you can't possibly look worse than I did.

Part of my motivation to involve myself in rationalism is a hope that I can learn ways to imagine better (more usefully, maybe.)

I wanted to be around people who can point out my flaws and argue with me effectively and tell me things I didn't know. I wanted to be held to higher standards, to actually have to work hard to earn respect. I'm not getting that in other areas of my life. Here, I get it. (: I am so grateful that I found this. People will challenge you and make you work, and find your flaws, but that's a blessing. Embrace it.

comment by EmuSam · 2012-07-19T04:49:37.550Z · LW(p) · GW(p)

Hello.

I was raised by a rationalist economist. At some point I got the idea that I wanted to be a statistical outlier, and also that irrationality was the outlier. After starting to pay attention to current events and polls, I'm now pretty sure that the second premise is incorrect.

I still have many thought patterns from that period that I find difficult to overcome. I try to counter them in the more important decisions by assigning WAG numerical values and working through equations to find a weighted output. I read more non-fiction than fiction now, and I am working with a mental health professional to overcome some of those patterns. I suppose I consider myself to have a good rationalist grounding while being used to completely ignoring it in my everyday life.

I found Less Wrong through FreethoughtBlogs and "Harry Potter and the Methods of Rationality." I added it to my feed reader and have been forcing my economist to help me work through some of the more science-of-choice oriented posts.

Replies from: army1987
comment by A1987dM (army1987) · 2012-07-19T11:53:10.156Z · LW(p) · GW(p)

WAG

???

The only expansion of that I can find with Google (Wifes And Girlfriends [of footballers]) doesn't seem too relevant.

Replies from: Morendil
comment by Morendil · 2012-07-19T12:13:26.331Z · LW(p) · GW(p)

Wild Ass Guess.

Replies from: DaFranker
comment by DaFranker · 2012-07-19T14:05:53.577Z · LW(p) · GW(p)

Was that just meta, or did you already know it? In what fields would the saying be more common, out of curiosity?

Replies from: evand, Davidmanheim
comment by evand · 2012-07-19T17:24:51.023Z · LW(p) · GW(p)

It's reasonably common among engineers, in my experience. Along with SWAG -- scientific wild-assed guess, intended to denote something that has minimal support: an estimate that is the output of combining WAGs and actual data, for example.

comment by Davidmanheim · 2012-07-19T23:44:24.614Z · LW(p) · GW(p)

He may not have known it, but it's used. I worked in catastrophe risk modeling, and it was a term that applied to what our clients and competitors did, not to what we did; we had rigorous methodologies that were not discussed because they were "trade secrets" or, as I came to understand, what is referred to above as SWAG.

I have heard engineers use it as well.

comment by ThoughtSpeed · 2013-02-27T06:07:20.381Z · LW(p) · GW(p)

Hi. 18 years old. Typical demographics. 26.5-month lurker and well-read of the Sequences. Highly motivated/ambitious procrastinator/perfectionist with task-completion problems and analysis paralysis that has caused me to put off this comment for a long time. Quite non-optimal to do so, but... must fight that nasty sunk cost of time and stop being intimidated and fearing criticism. Brevity to assure it is completed - small steps on a longer journey. Hopefully writing this is enough of an anchor. Will write more in future time of course.

Finally. It is written. So many choices... so many thoughts, ideas, plans to express... No! It is done! Another time you silly brain! We must choose futures! We will improve, brain, I promise.

I look forward to at last becoming an active member of this community, and LEVELING UP! Tsuyoku naritai!

comment by NoisyEmpire · 2013-01-03T16:42:47.278Z · LW(p) · GW(p)

I’m Taylor Smith. I’ve been lurking since early 2011. I recently finished a bachelor’s in philosophy but got sort of fed up with it near the end. Discovering the article on belief in belief is what first hooked me on LessWrong, as I’d already had to independently invent this idea to explain a lot of the silly things people around me seemed to be espousing without it actually affecting their behavior. I then devoured the Sequences. Finding LessWrong was like finding all the students and teachers I had hoped to have in the course of a philosophy degree, all in one place. It was like a light switching on. And it made me realize how little I’d actually learned thus far. I’m so grateful for this place.

Now I’m an artist – a writer and a musician.

A frequently-confirmed observation of mine is that art – be it a great sci-fi novel, a protest song, an anti-war film – works as a hack to help to change people’s minds who are resistant or unaccustomed to pure rational argument. This is true especially of ethical issues; works which go for the emotional gut-punch somehow make people change their minds. (I think there are a lot of overlapping reasons for this phenomenon, but one certainly is that a well-told story or convincing song provides an opportunity for empathy. It can also help people envision the real consequences of a mind-change in an environment of relative emotional safety.) This, even though of course the mere fact that someone who holds position X made a good piece of art about X doesn’t actually offer much real evidence for the truth of X. Thus, a perilous power. The negative word for the extreme end of this phenomenon is “propaganda.” Conversely, when folks end up agreeing with whatever a work of art brought them to believe, they praise it as “insightful” or some such. You can sort of understand why Plato was worried about having poets – those irrational, un-philosophic things – in his ideal city, swaying his people’s emotions and beliefs.

If I’m going to help save the world, though, I think I do it best through a) giving money to the efficient altruists and the smart people and b) trying to spread true ideas by being a really successful and popular creator.

But that means I have to be pretty damn certain what the true ideas are first, or I’m just spouting pretty, and pretty useless, nonsense.

So thank you, LessWrongers, for all caring about truth together.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2013-01-12T08:52:52.926Z · LW(p) · GW(p)

I think art that spreads the "politics is the mind-killer" meme (which actually seems to be fairly novel outside LW: 1 2) could be a good use of art. Some existential risks, like nuclear weapons, seem likely to be controlled by world governments. The other day it occurred to me that world leaders are people too and are likely susceptible to the same biases as typical folk. If world leaders were less "Go us!" and more "Go humanity!", that could be Really Good.

Welcome to LW, by the way!

comment by Benjamin_Martens · 2012-11-30T09:57:56.623Z · LW(p) · GW(p)

Hello all. My name is Benjamin Martens, a 19-year-old student from Newcastle, Australia. Michael Anissimov, director of Humanity+, added me to the Less Wrong Facebook group. I don’t know his reasons for adding me, but regardless I am glad that he did.

My interest in rational thinking, and in conscious thinking in general, stems, first, from the consequences of my apostasy from Christianity, which is my family’s faith; second, from my combative approach to my major depression, which I have (mostly) successfully beaten into submission through an analysis of some of the possible states of the mind and of the world— Less Wrong and the study of cognitive biases will, I hope, further aid me in revealing my depressive worldview as groundless; or, if not as groundless, then at least as something which is not by nature aberrant and which is, to some degree, justified; third, and in connection to my vegan lifestyle, I aim to understand the psychology which might lead a person to cause another being to suffer; and last, and in connection to all aforementioned, it is my hope that an understanding of cognitive biases will allow not merely myself to edge nearer to the true state of things, but also, through me, for others to do so; I want Less Wrong to school me in some underhanded PR techniques of psychological manipulation or modification which will help me teach others about scepticism, about the errors of learned helplessness and about ways out of the self-reinforcing and self-justifying loops of the pessimistic worldview, and allow me to ably coax others towards cruelty-free ways of living. So, that's me. Hello, Less Wrong.

comment by [deleted] · 2012-10-21T02:48:09.756Z · LW(p) · GW(p)

I'm Nancy Hua. I was MIT 2007 and worked in NYC and Chicago in automated trading for 5 years after graduating with BS's in Math with CS (18C) and in Writing (21W).

Currently I am working on a startup in the technology space. We have funding and I am considering hiring someone.

I started reading Eliezer's posts on Overcoming Bias. In 2011, I met Eliezer, Robin Hanson, and a bunch of the NYC Lesswrongers. After years of passive consumption, very recently I started posting on lesswrong after meeting some lesswrongers at the 2012 Singularity Summit and events leading up to it, and after reading HPMOR and wanting to talk about it. I tried getting my normal friends to read it but found that making new friends who have already read it is more efficient.

Many of the writings regarding overcoming our biases and asking more questions appeal to me because I see many places where we could make better decisions. It's amazing how far we've come without being all that intelligent or deliberate, but I wonder how much more slack we have before our bad decisions prevent us from reaching the stars. I want to make more optimal decisions in my own life because I need every edge I can get to achieve some of my goals! Plus I believe understanding and accepting reality is important to our success, as individuals and as a species.

comment by AllanGering · 2012-07-19T04:03:44.504Z · LW(p) · GW(p)

Poll: how old are you?

Newcomers only, please.

How polls work: the comments to this post are the possible answers. Upvote the one that describes your age. Then downvote the "Karma sink" comment (if you don't see it, it is the collapsed one), so that I don't get undeserved karma. Do not make comments to this post, as it would make the poll options hard to find; use the "Discussion" comment instead.

Replies from: AllanGering, AllanGering, AllanGering, AllanGering, AllanGering, AllanGering, AllanGering
comment by AllanGering · 2012-07-19T04:04:57.615Z · LW(p) · GW(p)

24-29

comment by AllanGering · 2012-07-19T04:04:48.939Z · LW(p) · GW(p)

18-23

comment by AllanGering · 2012-07-19T04:05:16.815Z · LW(p) · GW(p)

30-44

comment by AllanGering · 2012-07-19T04:04:34.926Z · LW(p) · GW(p)

<18

comment by AllanGering · 2012-07-19T04:05:23.924Z · LW(p) · GW(p)

45 or older

comment by AllanGering · 2012-07-19T04:04:13.708Z · LW(p) · GW(p)

Discussion

Replies from: VNKKET
comment by VNKKET · 2012-07-20T02:18:24.240Z · LW(p) · GW(p)

Upvoted for explaining how polls work.

comment by AllanGering · 2012-07-19T04:04:04.386Z · LW(p) · GW(p)

Karma sink

comment by [deleted] · 2013-03-15T15:49:36.712Z · LW(p) · GW(p)

Hi. I discovered LessWrong recently, but not that recently. I enjoy Yudkowsky's writings and the discussions here. I hope to contribute something useful to LessWrong, someday, but as of right now my insights are a few levels below those of others in this community. I plan on regularly visiting the LessWrong Study Hall.

Also, is it "LessWrong" or "Less Wrong"?

Replies from: Kawoomba, TheOtherDave, army1987, beoShaffer
comment by Kawoomba · 2013-03-15T22:01:21.754Z · LW(p) · GW(p)

Also, is it "LessWrong" or "Less Wrong"?

You'll fit in great.

comment by TheOtherDave · 2013-03-15T18:19:03.459Z · LW(p) · GW(p)

I endorse "Less Wrong" as a standalone phrase but "LessWrong" as an affixed phrase (e.g., "LessWrongian").

comment by A1987dM (army1987) · 2013-03-15T17:41:47.381Z · LW(p) · GW(p)

Also, is it "LessWrong" or "Less Wrong"?

Good question... :-)

Replies from: army1987
comment by A1987dM (army1987) · 2013-03-15T19:08:05.530Z · LW(p) · GW(p)

The front page and the About page consistently use the one with the space... except in the logo. Therefore I'm going to conclude that the change in typeface colour in the logo counts as a space and the ‘official’ name is the spaced one.

Replies from: None
comment by [deleted] · 2013-03-15T21:25:10.432Z · LW(p) · GW(p)

I went through the same reasoning pattern as you right before reading this comment. So I think I'll stick with "Less Wrong", for the time being.

comment by beoShaffer · 2013-03-15T18:21:54.454Z · LW(p) · GW(p)

Either is acceptable, though I'd say "Less Wrong" is slightly better.

comment by pinyaka · 2013-02-19T00:22:31.053Z · LW(p) · GW(p)

I am Pinyaka. I've been lurking a bit around this site for several months. I don't remember how I found it (probably a linked comment from Reddit), but stuck around for the main sequences. I've worked my way through two of them thanks to the epub compilations and am currently struggling to figure out how to prioritize and better put into practice the things that I learn from the site and related readings.

I hope to have some positive social interactions with the people here. I find that I become fairly unhappy without some kind of regular socialization in a largish group, but it's difficult to find groups whose core values are similar to mine. In fact, after leaving a quasi-religious group last year it occurred to me that I've always just fallen in with whatever group was most convenient and not too immediately repellant. This marks the first time I've tried to think about what I value and then seek out a group of like-minded individuals.

I also hope to find a consistent stream of ideas for improving myself that are backed by reason and science. I recognize that removing (or at least learning to account for) my own biases will help me build a more accurate picture of the universe that I live in and how I function within that framework. Along with that, I hope to develop the ability to formulate and pursue goals to maximize my enjoyment of life (I've been reading a bunch of lukeprog's anti-akrasia posts recently, so following through on goals is on my mind currently).

I am excited to be here.

Replies from: beoShaffer, Nisan, John_Maxwell_IV
comment by beoShaffer · 2013-02-19T02:32:40.087Z · LW(p) · GW(p)

Hi Pinyaka!

I find that I become fairly unhappy without some kind of regular socialization in a largish group, but it's difficult to find groups whose core values are similar to mine. In fact, after leaving a quasi-religious group last year it occurred to me that I've always just fallen in with whatever group was most convenient and not too immediately repellant.

Semi-seriously, have you considered moving?

Replies from: pinyaka
comment by pinyaka · 2013-02-20T13:52:27.518Z · LW(p) · GW(p)

I'm sort of averse to moving at the moment, since I'm in the middle of getting my doctorate, but I'll likely have to move once I finish that. Do you have specific suggestions? I have always picked where I live based on employment availability and how much I like the city from preliminary visits.

Replies from: beoShaffer
comment by beoShaffer · 2013-02-20T17:30:24.247Z · LW(p) · GW(p)

I have always picked where I live based on employment availability and how much I like the city from preliminary visits.

In that case it's going to depend strongly on your field, and if you're going into academia specifically you likely won't have much of a choice. That said, NY and the Bay Area are both good places for finding rationality support.

comment by Nisan · 2013-02-19T00:53:21.318Z · LW(p) · GW(p)

Welcome! You might enjoy it if you show up to a meetup as well.

Replies from: pinyaka
comment by pinyaka · 2013-02-19T01:17:13.875Z · LW(p) · GW(p)

Thank you. I haven't seen one in Iowa yet, but I do keep an eye out for them.

comment by shaih · 2013-02-17T21:03:29.994Z · LW(p) · GW(p)

I'm Shai Horowitz. I'm currently a dual physics and mathematics major at Rutgers University. I first learned of the concepts of "Bayesianism" and "rationality" through HPMOR, and from there I took it upon myself to read the Overcoming Bias posts, an extremely long endeavor which I have almost but not yet completed. Through conversations with others in my dorm at Rutgers I have realized just how much this learning has done to my thought process; it allowed me to home in on my own thoughts that I could see were still biased and go about fixing them. By the same reasoning it became apparent to me that it would be largely beneficial to become an active part of the LessWrong community, to sharpen my own skills as a rationalist while helping others along the way.

I embrace rationality for the very specific reason that I wish to be a physicist, and I realize that in trying to do so I could (as Eliezer puts it) "shoot off my own foot" while doing things that conventional science allows. In the process of learning this I did stall out for months at a time and even became depressed for a while as I was stabbing my weakest points with the metaphorical knife. I do look back and laugh now at the fact that a college student was making incredibly bad decisions to get over the pain of fully embracing the second law of thermodynamics and its implications, which to me seems to be a sign of my progress moving forward. I don't think I will soon have to face a fact as daunting as that one, and with the knowledge that I was able to accept even that law, I will be able to accept other truths much more easily.

That being said, even though hard science is my primary purpose for learning rationality, I am a bit of a self-proclaimed polymath and have spent recent times learning more about psychology and cognition than simply the cognitive biases I need to be wary of in myself. I just finished the book "Influence: Science and Practice", which I've heard Eliezer mention multiple times, and very recently (as in this week) my interests have turned to pushing standard ethical theories to their limits, so as to truly understand how to make the world a better place and to unravel the black box that is itself the word "better".

I conclude with this: I would love to talk with anyone, experienced or new to rationality, about pretty much any topic, and would very much like it if someone would message me. Furthermore, if anyone reading this goes to Rutgers University or is around the area, a meetup over coffee or something similar would make my day.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-17T22:19:36.037Z · LW(p) · GW(p)

Welcome! I am really curious what you mean by

making incredibly bad decisions to get over the pain of fully embracing the second law of thermodynamics and its implications

Replies from: shaih
comment by shaih · 2013-02-17T22:29:19.369Z · LW(p) · GW(p)

My thoughts on its implications are along these lines: even if cryonics works, or the human race finds some other way of indefinitely increasing the length of the human life span, the second law of thermodynamics would eventually force this prolonged life to be unsustainable. That, combined with adjusting my probability estimates of an afterlife, made me face the unthinkable fact that there will be a day on which I cease to exist regardless of what I do, and that I am helpless to stop it. While I was getting over the shock of this I would have sleepless nights, which turned into days when I was too tired to be coherent, which turned into missed classes, which turned into missed grades. In summation, I allowed a truth which would not come to pass for an unthinkable amount of time to change how I acted in the present in a way it did not warrant (being depressed, or happy, or any action now would not change that future).

comment by Rixie · 2013-01-20T00:02:35.146Z · LW(p) · GW(p)

Hi! I was wondering where to start on this website. I started reading the sequence "How to actually change your mind", but there's a lot of lingo and stuff I still don't understand. Is there a sequence here that's like, Rationality for Beginners, or something? Thanks.

Replies from: Kindly, beoShaffer, Dorikka, TimS
comment by Kindly · 2013-01-20T06:04:03.608Z · LW(p) · GW(p)

Probably the best thing you can do, for yourself and for others, is to post comments on the posts you've read, asking questions where you don't understand something. The sequences ought to be as easy to understand as possible, but the reality may not always approach the ideal.

But if the jargon is the problem, the LW wiki has a dictionary.

comment by beoShaffer · 2013-01-20T03:59:44.566Z · LW(p) · GW(p)

I found the order presented in the wiki's guide to the sequences to be quite helpful.

comment by Dorikka · 2013-01-30T03:22:01.598Z · LW(p) · GW(p)

This may be a decent starting post.

comment by TimS · 2013-01-20T03:43:29.042Z · LW(p) · GW(p)

Welcome. As intro pieces, I really like Making Beliefs Pay Rent and Belief in Belief. The rest of the Mysterious Answers sequence consists of attempts to illuminate or elaborate on the points made in those two essays.

I was less impressed with "A Human's Guide to Words," but that might be because my legal training forced me to think about those issues long before I ever wandered here. As a brief heuristic, if the use-mention distinction seems really insightful to you, try it out. If you've already thought about similar issues, you could pass on it.

I think the other Sequences are far less interestingly novel, but some of that is my (rudimentary but still above average for here) background in philosophy. And some of it is that I don't care about some of the topics that are central to the discussion in this community.

As always with advice like this, take what I say with a substantial grain of salt. Feel free to look at our wiki page on the Sequences to see all of what's out there.

comment by Briony · 2013-01-04T00:51:25.494Z · LW(p) · GW(p)

Hi, my name is Briony Keir, I'm from the UK. I stumbled on this site after getting into an argument with someone on the internet and wondering why they ended up failing to refute my arguments and instead resorted to insults. I've had a read around before posting, and it's great to see an environment where rational thought is promoted and valued. I have a form of autism called Asperger Syndrome which, among many things, allows me to rely on rationality and logic more than other people seem to be able to - I too often get told I'm 'too analytical' and that I 'shouldn't poke holes in other people's beliefs' when, the way I see it, any belief is there to be challenged and, indeed, having one's beliefs challenged can only make them stronger (or serve as an indicator that one should find a more sensible viewpoint). I'm really looking forward to reading what people have to say; my environment (both educational and domestic) has so far served more to enforce a 'we know better than you do, so stop talking back' rule than one which allows for disagreement and resolution on a logical basis, and this has led to me feeling both frustrated and unchallenged intellectually for quite some time. I hope I prove worthy of debate over the coming weeks and months :)

Replies from: kodos96
comment by kodos96 · 2013-01-04T04:56:16.544Z · LW(p) · GW(p)

I have a form of autism called Asperger Syndrome

This is not at all unusual here at LessWrong... I can't seem to find a link, but I seem to recall that a fairly large portion of LessWrong-ers (at least relative to the general population) have Aspergers (or at least are somewhat Asperger-ish), myself included.

I'm not entirely sure though that I agree with the statement that Aspergers is "a form of autism"... I realize that that has been the general consensus for a while now, but I've read some articles (again, can't find a link at the moment, sorry) suggesting that Aspergers is not actually related to Autism at all... personally, my feeling on the matter is that "Aspergers" isn't an actual "disease" per se, but rather just a cluster of personality traits that happen to be considered socially unacceptable by modern mainstream culture, and have therefore been arbitrarily designated as a "disease".

In any case, welcome to LessWrong - I look forward to your contributions in the future!

Replies from: anansi133
comment by anansi133 · 2013-01-04T06:01:16.203Z · LW(p) · GW(p)

I'm not entirely sure though that I agree with the statement that Aspergers is "a form of autism"

If anything, I'd be tempted to say that autism is a more pronounced degree of asperger's. I certainly catch myself in the spectrum that includes ADD as well.

The whole idea of neurodiversity is kind of exciting, actually. If there can be more than one way to appropriately interact with society, everyone gets richer.

Replies from: kodos96
comment by kodos96 · 2013-01-04T06:15:20.781Z · LW(p) · GW(p)

If anything, I'd be tempted to say that autism is a more pronounced degree of asperger's

That seems to me to be basically equivalent to saying that aspergers is a lesser form of autism. Again, sorry I can't find the links at the moment, but I recall reading several articles suggesting that the two might actually not be related at all, neurologically.

The whole idea of neurodiversity is kind of exciting, actually. If there can be more than one way to appropriately interact with society, everyone gets richer.

I agree. Unfortunately, modern culture and institutions (like the public education system for one notable example) don't seem to be set up based on this premise.

comment by junk_science · 2012-12-20T20:27:24.830Z · LW(p) · GW(p)

Hello everyone,

I found Less Wrong through "Harry Potter and the Methods of Rationality" like many others. I started reading more of Eliezer Yudkowsky's work a few months ago and was completely floored. I now recommend his writing to other people at the slightest provocation, which is new for me. Like others, I'm a bit scared by how thoroughly I agree with almost everything he says, and I make a conscious effort not to agree with things just because he's said them. I decided to go ahead and join in hopes that it would motivate me to start doing more active thinking of my own.

comment by [deleted] · 2012-11-08T05:13:48.651Z · LW(p) · GW(p)

Hello rationalists-in-training of the internet. My name is Joseph Gnehm, I am 15 and I live in Montreal. Discovering LessWrong had a profound effect on me, shedding light on the way I study thought processes and helping me with a more rational approach.

comment by tilde · 2012-10-07T20:27:41.222Z · LW(p) · GW(p)

I'm a 20-year-old physics student from Finland whose hobbies include tabletop roleplaying games and the Natalie Reed / Zinnia Jones-style intersection of rationality and social justice.

I've been sporadically lurking on LessWrong for the last 2-3 years and have read most of the sequences. My primary goal is to contribute useful research to either SI or FHI, or failing that, a significant part of my income. I've contacted the X-risks Reduction Career Network as well.

I consider this an achievable goal, as my general intelligence is extremely high and I won a national-level mathematics competition seven years ago despite receiving effectively no training in a small backwards town. With dedication and training I believe I could reach the level of the greats.

However, my biggest challenge currently is Getting Things Done; apart from fun distractions, committing any significant effort to something is nigh impossible. This could probably be caused by clinical depression (without the mood effects), and I'm currently on venlafaxine in an attempt to improve my capability to actually do something useful, but so far (about 3 months) it hasn't had the desired effect. Assistance/advice would be appreciated.

comment by blueowl · 2012-10-05T21:39:25.747Z · LW(p) · GW(p)

Hi everyone! Another longtime lurker here. I found LW through Yvain's blog (Emily and Control FTW!). I'm not really into cryonics or FAI, but the sequences are awesome, and I enjoy the occasional instrumental rationality post. I decided to become slightly more active here, and this thread seemed like a good place to start, even if a bit old.

comment by robertoalamino · 2012-08-23T18:51:02.339Z · LW(p) · GW(p)

Hi.

My name is Roberto and I'm a Brazilian physicist working in the UK. Even working in an academic environment obviously does not guarantee an environment where rational/unbiased/critical discussions can happen. Science production in universities is not always carried out by thinking critically about a subject, as many papers can be purely technical in nature. Also, free thinking is as regulated in academia as it is everywhere else, in many respects.

That said, I have been reading and browsing Less Wrong for some time and think that this can indeed be done here. In addition, given recent developments all around the world in many areas, and how people react to them, I felt the urge to discuss them in a way which is not censored, especially by the other people in the discussion. It promises to be relaxing anyway.

I'm sure I'm gonna have a nice time.

Replies from: Risto_Saarelma, army1987
comment by Risto_Saarelma · 2012-08-24T04:02:52.310Z · LW(p) · GW(p)

My name is Roberto and I'm a Brazilian physicist working in the UK.

Do you get to hear about the Richard Feynman story often when you introduce yourself as a Brazilian physicist?

Replies from: robertoalamino
comment by robertoalamino · 2012-08-24T09:22:27.503Z · LW(p) · GW(p)

It's actually the first time I've read it. I would be happy to say that the situation has improved over there, but that might not be true in general. Unfortunately, the way I see it, it's completely the opposite: the situation has become worse everywhere else. Apparently, science education all around the world is growing more distant from what Feynman would have liked. Someone once told me that "Science is not about knowledge anymore, it's about production." Feynman's description of his experience seems to be all about that. I refuse to believe it, but as the world embraces this philosophy, science education becomes less and less related to really thinking about any subject.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-08-24T12:41:03.411Z · LW(p) · GW(p)

At least nowadays, unlike in 1950s Brazil, Feynman's stuff is a Google search away for just about any undergraduate student. Now they just need to somehow figure out they might want to search for him...

comment by A1987dM (army1987) · 2012-08-24T00:46:59.687Z · LW(p) · GW(p)

Science production in universities is not always carried out by thinking critically about a subject, as many papers can be purely technical in nature.

I've found that theoretical physicists usually give me the vibe EY describes here, but experimental physicists usually don't.

Replies from: robertoalamino
comment by robertoalamino · 2012-08-24T09:28:34.409Z · LW(p) · GW(p)

That's more a question of taste, and there is nothing wrong with that. I also prefer theoretical physics, although I must admit that it's very exciting to be in a lab, as long as it is not me collecting the data or fixing the equipment.

My point in the sentence you quoted is that you can perfectly well carry on with some "tasks" without thinking too deeply about them, even in physics, be it theoretical, experimental or computational. That is something I think is really missing across the whole spectrum of education, not only in science and not only in universities.

comment by Viliam_Bur · 2012-07-18T18:32:01.878Z · LW(p) · GW(p)

Please add a few words about the "Open Thread". Something like: if you want to post just a simple question or one paragraph of text, don't create a new article; just add it as a comment to the latest discussion article called "Open Thread".

Replies from: AllanGering
comment by AllanGering · 2012-07-19T03:44:17.293Z · LW(p) · GW(p)

In the same line of thought, it may be worth revising the following.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation;

comment by Squark · 2013-03-01T22:02:42.728Z · LW(p) · GW(p)

Hello everyone. My name is Vadim Kosoy, and you can find some LW-relevant stuff about me in my Google+ stream: http://plus.google.com/107405523347298524518/about

I am a lifelong geek, with knowledge of and interest in math, physics, chemistry, molecular biology, computer science, software engineering, algorithm engineering and history. Some areas in which I'm comparatively more knowledgeable: quantum field theory, differential geometry, algebraic geometry, and algorithm engineering (especially computer vision).

In my day job I'm a technical and product manager of a small software group at Mantis Vision (http://www.mantis-vision.com/), a company developing 3D video cameras. My previous job was at VisionMap (http://www.visionmap.com/), which develops airborne photography/mapping systems, where I led a team of software and algorithm engineers.

I had known about Eliezer Yudkowsky and his friendly AI thesis (which I don't fully accept) for some time, but discovered this community only relatively recently. For me this community is interesting for several reasons. One reason is that many discussions are related to the topics of transhumanism / technological singularity / artificial intelligence, which I find very interesting and important. Another is that consequentialism is a popular moral philosophy here, and I (relatively recently) started to identify as a strong consequentialist. Yet another is that it seems to be a community where rational people discuss things rationally (or at least try to), something societies all over the world lack as direly as the idea seems trivial. This is in stark contrast to the usual mode of discourse about social / political issues, which is extremely shallow and plagued by excessive emotionality and dogmatism. I truly believe such a community can become a driver of social change in good directions, something with incredible impact.

Recently I became very interested in the subject of understanding general intelligence mathematically, in particular by the methods of computer science. I've written some comments here about my own variant of the Orseau-Ring framework, something I wished to expand into a full article but didn't have the karma for. Maybe I'll post it on LW Discussion.

My personal philosophy: As I said, I'm a consequentialist. I define my utility function not on the basis of hedonism or anything close to hedonism, but on the basis of long-term scientific / technological / autoevolutionary (transhumanist) progress. I don't believe in the innate value of H. sapiens but rather in the innate value of intelligent beings (in particular, the more intelligence, the more value). I can imagine scenarios in which a strong AI destroys humanity which are, from my P.O.V., strongly positive: this is my disagreement with the friendly AI thesis. However, I'm not sure whether any strong AI scenario will be positive, so I agree it is a concern.

I also consider myself a deist rather than an atheist. Thus I believe in God, but the meaning I ascribe to the word "God" is very different from the meaning most religious people ascribe to it (I choose to still use the word "God" since there are a few things in common). For me, God is the (unknowable) reason for the miraculous beauty of the universe, perceived by us as the beauty of mathematics and science and the amazing plethora of interesting natural phenomena. God doesn't punish/reward good/bad behavior, doesn't perform divine intervention (in the sense of occasional violations of natural law) and doesn't write/dictate scriptures and prophecies (except by inspiring scientists to make mathematical and scientific discoveries). I consider the human brain to be a machine, with no magic "soul" behind the scenes. However, I believe in immortality in a stranger metaphysical sense, which is probably too long to detail here.

I'm 29.9 years old, married with a child (a boy, 2.8 years old). I have lived in Israel since the age of 7, but I was born in the USSR. Ethnically I'm an Ashkenazi Jew. I enjoy science fiction, good cinema (but no time to see any since my son was born :) ) and many sorts of music (rock is probably my favorite). Glad to be here!

Replies from: lukeprog, Kawoomba
comment by lukeprog · 2013-04-20T08:08:54.623Z · LW(p) · GW(p)

Welcome! You should probably join the MAGIC list. Orseau and others hang out there, and Orseau will probably comment on your two posts if you ask for feedback on that list. Also, if you ever visit California then you should visit MIRI and do some math with us.

comment by Kawoomba · 2013-03-01T22:24:41.791Z · LW(p) · GW(p)

Welcome! We're all 29.9 years old here. I look forward to your comments; hopefully you'll find the time for that post on your Orseau-Ring variant.

Regarding your redefinition of god, allow me just a small comment: calling an unknowable reason "god" - without believing in such a reason's personhood, or volition, or its having a mind - invites a lot of unneeded baggage and historical connotations that muddle the discussion, and your self-identification, because what you apparently mean by that term is so different from the usual definitions of "god" that you could just as well call yourself a spiritual atheist (or something related).

Replies from: Bugmaster, Squark
comment by Bugmaster · 2013-03-01T22:44:25.563Z · LW(p) · GW(p)

Welcome! We're all 29.9 years old, here.

Speak for yourself, youngster! Why, back in my day, we didn't have these "internets" you whippersnappers are always going on about, what with the cats and the memes and the facetubes and the whatnot. We had to make our own networks, by hand, out of floppies and acoustic modems, and we liked it. Why, there's nothing like an invigorating morning hike with a box of 640K floppies (formatted to 800K) in your backpack, uphill in the snow both ways. Builds character, it does. Mumble mumble mumble get off my lawn!

comment by Squark · 2013-03-05T21:22:28.894Z · LW(p) · GW(p)

Maybe, from a consequentialist point of view, it's best to use the word "God" when arguing my philosophy with theists and some other word when arguing my philosophy with atheists :) I'm thinking of "The Source". However, there is a closely related construct which has a sort of personhood. I named it "The Asymptote": I think that the universe (in the broadest possible sense of the word) contains a sequence of intelligences of unbounded increasing power, and "The Asymptote" is a formal limit of this sequence. Loosely speaking, "The Asymptote" is just any intelligence vastly more powerful than our own. This idea comes from the observation that the known history of the universe can be regarded as a process of forming more and more elaborate forms of existence (cosmological structure formation -> geological structure formation -> biological evolution -> sentient life -> evolution of civilization), and therefore my guess is that there is something about "The Source" which guarantees an indefinite process of this kind. Some sort of fundamental Law of Evolution, which should be complementary, in a way, to the Second Law of Thermodynamics.

Replies from: CCC, Bugmaster, shminux
comment by CCC · 2013-04-20T12:57:45.459Z · LW(p) · GW(p)

This idea comes from the observation that the known history of the universe can be regarded as a process of forming more and more elaborate forms of existence (cosmological structure formation -> geological structure formation -> biological evolution -> sentient life -> evolution of civilization)

I disagree that they are necessarily more elaborate. I don't think we (as humanity) fully appreciate the complexity of cosmological structures yet (and I don't think we will until we get out there and take a closer look at them; we can only see coarse features from several light-years away). And civilisation seems less elaborate than sentience, to me.

Replies from: Squark
comment by Squark · 2013-04-20T13:06:49.846Z · LW(p) · GW(p)

Well, civilization is a superstructure of sentience and is more elaborate in this sense (i.e. sentience + civilization is more elaborate than "wild" sentience).

Replies from: CCC
comment by CCC · 2013-04-20T18:13:10.905Z · LW(p) · GW(p)

I take your point. However, I can turn it about and point out that cosmological structures (a category that includes the planet Earth) must by the same token be more elaborate than geological structures.

Replies from: Squark
comment by Squark · 2013-04-20T18:26:31.323Z · LW(p) · GW(p)

Sure. Perhaps I chose careless wording, but when I said "cosmological structure formation -> geological structure formation" my intent was the process whereby a universe initially filled with homogeneous gas develops inhomogeneities, which condense to form galaxies, stars and planets, which in turn undergo further processes (galaxy collisions, supernova explosions, collisions within stellar systems, geologic / atmospheric processes within planets) that produce more and more complex structure over time.

Replies from: CCC
comment by CCC · 2013-04-20T18:47:54.992Z · LW(p) · GW(p)

I see.

Doesn't that whole chain require the entropy of the universe to decrease? Or am I missing something?

Replies from: Squark
comment by Squark · 2013-04-20T19:33:54.371Z · LW(p) · GW(p)

You mean that this process has the appearance of decreasing entropy? In truth, it doesn't. For example, gravitational collapse (the basic mechanism of galaxy and star formation) decreases entropy by reducing the spatial spread of matter, but increases entropy by heating matter up. Thus we end up with a total entropy gain. On the cosmic scale, I think the process is exploiting a sort of temperature difference between gravity and matter, namely that initially the temperature of matter was much higher than the Unruh temperature associated with the cosmological constant. Thus even though the initial state had little structure, it was very far from equilibrium and therefore very low entropy compared to the final equilibrium it will reach.

Replies from: CCC
comment by CCC · 2013-04-23T12:08:55.939Z · LW(p) · GW(p)

Huh. I don't think that I know enough physics to argue this point any further.

comment by Bugmaster · 2013-03-05T23:19:12.161Z · LW(p) · GW(p)

I think that the universe (in the broadest possible sense of the word) contains a sequence of intelligences of unbounded increasing power...

I strongly doubt the existence of any truly unbounded entity. Even a self-modifying transhuman AI would eventually run out of atoms to convert into computronium, and out of energy to power itself. Even if our Universe was infinite, the AI would be limited by the speed of light.

...and "The Asymptote" is a formal limit of this sequence.

Wait, so is it bounded or isn't it? I'm not sure what you mean.

cosmological structure formation -> geological structure formation -> biological evolution -> sentient life -> evolution of civilization

There are plenty of planets where biological evolution has not happened, and most likely never will -- take Mercury, for example, or Pluto (yes, yes, I know it's not technically a planet). As far as we can tell, most if not all exoplanets we have detected so far are lifeless. What leads you to believe that biological evolution is inevitable?

Replies from: Squark
comment by Squark · 2013-03-07T19:44:16.442Z · LW(p) · GW(p)

I strongly doubt the existence of any truly unbounded entity. Even a self-modifying transhuman AI would eventually run out of atoms to convert into computronium, and out of energy to power itself. Even if our Universe was infinite, the AI would be limited by the speed of light.

In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually. Of course, you might argue that our universe is asymptotically de Sitter. This is true, but it is also probably metastable and can collapse into a universe with other properties. In http://arxiv.org/abs/1105.3796 the authors present the following line of reasoning: there must be a way to perform an infinite sequence of measurements, since otherwise the probabilities of quantum mechanics would be meaningless. In a similar vein, I speculate it must be possible to perform an infinite number of computations (or even all possible computations). The authors then go on to explore cosmological explanations of how that might be feasible.

Wait, so is it bounded or isn't it? I'm not sure what you mean.

The sequence is unbounded in the sense that any possible intelligence is eventually superseded. The Asymptote is something akin to infinity: it is "like an intelligence but not quite" in the same way infinity is "like a number but not quite".

There are plenty of planets where biological evolution has not happened, and most likely never will -- take Mercury, for example, or Pluto (yes, yes, I know it's not technically a planet). As far as we can tell, most if not all exoplanets we have detected so far are lifeless. What leads you to believe that biological evolution is inevitable?

Good point. Indeed, it seems that life formation is a rare event. So I'm not sure whether there really is a "Law of Evolution" or we're just seeing the anthropic principle at work. It would be interesting to understand how to distinguish these scenarios.

Replies from: wedrifid, Bugmaster
comment by wedrifid · 2013-03-07T20:02:09.923Z · LW(p) · GW(p)

In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.

Does this hold in a universe that is also expanding (like ours)? Such a scenario makes the 'infinite' property largely moot given that any point within has an 'observable universe' that is not infinite. That would seem to rule out computations of anything more complicated than what can be represented within the Hubble volume.

Replies from: Squark
comment by Squark · 2013-03-07T20:29:36.507Z · LW(p) · GW(p)

Yes, this was exactly my point regarding the universe being asymptotically de Sitter. The problem is that the universe is not merely expanding, it's expanding with acceleration. But there are possible solutions to this like escaping to an asymptotic region with a non-positive cosmological constant via false vacuum collapse.

comment by Bugmaster · 2013-03-07T22:02:17.741Z · LW(p) · GW(p)

In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.

wedrifid already replied better than I could; but I'd still like to add that "eventually" is a long time. For example, if the problem you are computing is NP-complete, then you won't be able to grow your hardware quickly enough to make any practical difference. In addition, if our universe is not eternal (which it most likely is not), then it makes no sense to talk about an "infinite series of computations".

The sequence is unbounded in the sense that any possible intelligence is eventually superseded. The Asymptote is something akin to infinity. The Asymptote is "like an intelligence but not quite" in the same way infinity is "like a number but not quite"

Sorry, but I literally have no idea what this means. I don't think that infinity is "like a number but not quite" at all, so the analogy doesn't work for me.

It would be interesting to understand how to distinguish these scenarios

Well, so far, we have observed one instance of "evolution", and thousands of instances of "no evolution". I'd say the evidence is against the "Law of Evolution" so far...

Replies from: Squark
comment by Squark · 2013-03-12T20:12:11.721Z · LW(p) · GW(p)

In an infinite universe, the speed-of-light limit is not a problem. Surely it limits the speed of computing but any computation can be performed eventually.

wedrifid already replied better than I could; but I'd still like to add that "eventually" is a long time. For example, if the problem you are computing is NP-complete, then you won't be able to grow your hardware quickly enough to make any practical difference. In addition, if our universe is not eternal (which it most likely is not), then it makes no sense to talk about an "infinite series of computations".

For algorithms with exponential complexity, you will have to wait exponential time, yes. But eternity is enough time for everything. I think the universe is eternal. Even an asymptotically de Sitter region is eternal (but useless, since it reaches thermodynamic equilibrium); however, the universe contains other asymptotic regions. See http://arxiv.org/abs/1105.3796

Sorry, but I literally have no idea what this means. I don't think that infinity is "like a number but not quite" at all, so the analogy doesn't work for me.

A more formal definition is given in my comment http://lesswrong.com/lw/do9/welcome_to_less_wrong_july_2012/8kt7 . Less formally, infinity is "like a number but not quite" because many predicates into which a number can be meaningfully plugged also work for infinity. For example:

infinity > 5
infinity + 7 = infinity
infinity + infinity = infinity
infinity * 2 = infinity

However not all such expressions make sense:

infinity - infinity = ?
infinity * 0 = ?

Formally, adding infinity to the field of real numbers doesn't yield a field (or even a ring).

Well, so far, we have observed one instance of "evolution", and thousands of instances of "no evolution". I'd say the evidence is against the "Law of Evolution" so far...

There is clearly at least one Great Filter somewhere between the creation of life (probably there is one exactly there) and the appearance of a civilization with moderately supermodern technology: this follows from Fermi's paradox. However, it feels as though there is a small number of such Great Filters, with nearly inevitable evolution between them. The real question is the expected number of instances of passing these Filters within the volume of a cosmological horizon. If this number is greater than 1, then the universe is more pro-evolution than anticipated from the anthropic principle alone. Fermi's paradox puts an upper bound on this number, but I think this bound is much greater than 1.

comment by shminux · 2013-03-05T23:05:14.805Z · LW(p) · GW(p)

"The Asymptote" is a formal limit of this sequence.

Why postulate that such a limit exists?

Replies from: Squark
comment by Squark · 2013-03-07T19:57:31.758Z · LW(p) · GW(p)

To really explain what I mean by the Asymptote, I need to explain another construct which I call "the Hypermind" (Kawoomba's comment motivated me to invest in the terminology :) ).

What is identity? What makes the you of today the same person as the you of yesterday? My conviction is that the essential relationship between the two is that the "you of today" shares the memories of "you of yesterday" and fully understands them. In a similar manner, if a hypothetical superintelligence Omega were to learn all of your memories and understand them (you) on the same level you understand yourself, Omega should be deemed a continuation of you, i.e. it has assimilated your identity into its own. Thus in the space of "moments of consciousness" in the universe we have a partial order where A < B means "B is a continuation of A" i.e. "B shares A's memories and understands them". The Hypermind hypothesis is that for any A and B in this space there is a C s.t. C > A and C > B. This seems to me a likely hypothesis if you take into account that the Omega in the example above doesn't have to exist in your physical vicinity but may exist anywhere in the universe (or multiverse) and have a simulation of you running on its laptop.

The Asymptote is then a formal limit of the Hypermind. That is, the semantics of "The Asymptote has property P" is "For any A there is B > A s.t. for any C > B, C has property P". It is an interesting problem to find non-trivial properties of the Asymptote. In particular, I suspect (without strong evidence yet) that the opposite of the Orthogonality Thesis is true, namely that the Asymptote has a well-defined preference / utility function.
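
A compact restatement in symbols (a sketch only; the name "Asym" and this notation are introduced here for illustration, not taken from the comment):

    % Directedness of the Hypermind: any two moments of consciousness
    % have a common continuation.
    \forall A, B \;\exists C :\; C > A \,\wedge\, C > B

    % Limit semantics: "The Asymptote has property P" means P eventually
    % holds above every moment of consciousness.
    \mathrm{Asym}(P) \;:\Longleftrightarrow\; \forall A \;\exists B > A \;\forall C > B :\; P(C)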

Replies from: shminux
comment by shminux · 2013-03-07T20:33:39.161Z · LW(p) · GW(p)

This seems like a rather simplistic view; see counter-examples below.

My conviction is

"conviction" might not be a great term, maybe what you mean is a careful conclusion based on something.

that the essential relationship between the two is that the "you of today" shares the memories of "you of yesterday"

except that we forget most of them, and that our memories of the same event change in time, and often are completely fictional.

and fully understands them.

Not sure what you mean by understanding here; feel free to define it better. For example, we often "understand" our memories differently at different times in our lives.

Thus in the space of "moments of consciousness" in the universe we have a partial order where A < B means "B is a continuation of A" i.e. "B shares A's memories and understands them"

So, if you forgot what you had for breakfast the other day, you today are no longer a continuation of you from yesterday?

"The Asymptote has property P" is "For any A there is B > A s.t. for any C > B, C has property P"

That's a rather non-standard definition. If anything, it's closer to monotonicity than to accumulation. If you mean the limit point, then you ought to define what you mean by a neighborhood.

To sum up, your notion of Asymptote needs a lot more fleshing out before it starts making sense.

Replies from: Squark
comment by Squark · 2013-03-08T21:46:32.247Z · LW(p) · GW(p)

the essential relationship between the two is that the "you of today" shares the memories of "you of yesterday"

except that we forget most of them, and that our memories of the same event change in time, and often are completely fictional.

Good point. The description I gave so far is just a first approximation. In truth, memory is far from ideal. However, if we assign weight to memories by their potential impact on our thinking and decision making, then I think we would find that most of the memories are preserved, at least on short time scales. So, from my point of view, the "you of today" is only a partial continuation of the "you of yesterday". However, it doesn't essentially change the construction of the Hypermind. It is possible to refine the hypothesis by stating that for every two "pieces of knowledge" a and b, there exists a "moment of consciousness" C s.t. C contains a and b.

"The Asymptote has property P" is "For any A there is B > A s.t. for any C > B, C has property P"

That's a rather non-standard definition. If anything, it's closer to monotonicity than to accumulation. If you mean the limit point, then you ought to define what you mean by a neighborhood.

Actually, I overcomplicated the definition. The definition should read "Exists A s.t. for any B > A, B has property P". The neighbourhoods are sets of the form {B | B > A}. This form of the definition implies the previous form, using the assumption that for any A, B there is C > A, B.
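
In the same illustrative notation as before, a sketch of the corrected definition and its neighbourhood basis:

    % Simplified limit semantics: P holds everywhere above some A.
    \mathrm{Asym}(P) \;:\Longleftrightarrow\; \exists A \;\forall B > A :\; P(B)

    % Neighbourhoods: the up-sets determined by each moment A.
    U_A = \{\, B \mid B > A \,\}

Given directedness (for any A, B there is C > A, B), this simpler form implies the earlier three-quantifier one.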

Replies from: shminux
comment by shminux · 2013-03-12T20:07:53.891Z · LW(p) · GW(p)

The definition should read "Exists A s.t. for any B > A, B has property P"

Hmm, it seems like your definition of Asymptote is nearly that of a limit ordinal.

comment by BenGilbert · 2013-02-04T12:57:20.356Z · LW(p) · GW(p)

Hello,

I'm Ben. I'm here mainly because I'm interested in effective altruism. I think that tracing through the consequences of one's actions is a complex task and I'm interested in setting out some ideas here in the hope that people can improve my reasoning. For example, I've a post on whether ethical investment is effective, which I'd like to put up once I've got a couple of points of karma.

I studied philosophy and theology, and worked for a while in finance. Now, I'm trying to work out how to increase the positive impact I have, which obviously demands answers about both what 'positive impact' means, and what the consequences are of the choices I make. I think these are far from simple to work out; I hope just to establish a few points with which I'm satisfied enough. I think that exposing ideas and arguments to thoughtful people who might want to criticise or expand them could help me a lot. And this seems a good place for doing that!

comment by shev · 2013-02-02T05:52:18.792Z · LW(p) · GW(p)

Hi, I'm Alex.

Every once in a while I come to LessWrong because I want to read more interesting things and have more interesting discussions on the Internet. In the past I've found it a lot easier to spend time on Reddit (having removed all the drivel) and dredging through Quora for actually insightful content (seriously, do they have any sort of actual organization system for me to find reading material?). LessWrong's discussions have seemed slightly inaccessible, so maybe posting an introduction like I'm supposed to will set in motion my figuring out how this community works.

I'm interested in a lot of things here, but especially physics and mathematics. I would use the word "metaphysics" but it's been appropriated for a lot of things that aren't actually meta-physics like I mean. Maybe I want "meta-mathematics"? Anyway, I'm really keen on the theory behind physical laws and on attempts at reformulating math and physics into more lucid and intuitive systems. Some of my reading material (I won't say research, but ... maybe I should say research) recently has been on geometric algebra, re-axiomizing set theory, foundations and interpretations of quantum mechanics, reformulations of relativity, quantum field theory's interpretation, things like that. I have a permanent distaste for spinors and all the math we don't try to justify with intuition when teaching physics, so I've spent a lot of my last few years studying those.

I was really intrigued by the articles/blog posts? on what proofs actually mean and causality a few months ago; that's when I started reading the site. I've spent the better part of the last year sifting through all kinds of math ideas related to reinterpretations or 'fundamental' insights, so I hope hanging around here can expose me to some more.

Oh, and I've spent a good amount of time on the Internet refuting crackpots who think they solved physics, so I, um, promise I'm not one.

I'm a programmer by trade and have a good interest in revolutionary (or just convenient) software projects and disruptive ideas and really naive, idealist world-changing ideas, which is fun.

I have read some of the sequences and such but - I guess I'm a rationalist at heart already, maybe because I've studied lots of logic and such, but a lot of the basic stuff seemed pretty apparent to me. I was already up to speed on Bayes and quantum mechanics, for example, and never considered anything other than atheism. And I already optimize and try to look at life in terms of expected payoffs and other very rational things like that. But, it's possible I've missed a lot of the material here - I find navigating the site to be pretty unintuitive.

I'm based in Seattle and I hope to go to the meetups if they... ever happen again. I mostly just like talking to smart people; I find it makes my brain work better - as if there's some sort of 'conversation mode' which hypercharges my creativity.

Oh, and I have a blog: http://ajkjk.com/blog/. I'm slightly terrified of linking it; it's the first time I've shown it to anyone but friends. It only has 6 posts so far. I've written a lot more but deleted/hid them until they're cleaned up.

Replies from: None, shminux, itaibn0, Nisan
comment by [deleted] · 2013-02-02T06:57:45.748Z · LW(p) · GW(p)

I have read some of the sequences and such but - I guess I'm a rationalist at heart already, maybe because I've studied lots of logic and such, but a lot of the basic stuff seemed pretty apparent to me. I was already up to speed on Bayes and quantum mechanics, for example, and never considered anything other than atheism. And I already optimize and try to look at life in terms of expected payoffs and other very rational things like that. But, it's possible I've missed a lot of the material here - I find navigating the site to be pretty unintuitive.

Be very careful thinking you are done. I was in pretty much exactly the same position as you about a year ago. ("yep, I'm pretty rational. Lol @ god; I wonder what it's like to have delusional beliefs"). After a year and a half here, having read pretty much everything in the sequences and most of the other archives, running a meetup, etc, I now know that I suck at rationality. You will find that you are nowhere near the limits, or even the middle, of possible human rationality.

Further, I now know what it's like to have delusional beliefs that are so ingrained you don't even recognize them as beliefs, because I had some big ones. I probably have more. They're not easy to spot from the inside.

On the subject of atheism... I used to be an atheist, too. The rabbit hole you've fallen into here is deep.

The Seattle guys are pretty cool, from those I've met. Go hang out with them.

Replies from: Kawoomba, shev
comment by Kawoomba · 2013-02-02T07:32:11.595Z · LW(p) · GW(p)

On the subject of atheism... I used to be an atheist, too. The rabbit hole you've fallen into here is deep.

Don't be mysterious, Morpheus, please elaborate.

comment by shev · 2013-02-02T07:21:03.650Z · LW(p) · GW(p)

Okay, sure. Rather I mean: I feel like I'm past the introductory material. Like I'm coming in as a sophomore, say. But - I could be totally wrong! We'll see.

I've definitely got counter-rational behaviors ingrained; I'm constantly fighting my brain.

And, if we're pedantic about things pretty similar to atheism, I might not be an atheist. I'm not up to speed on all the terms. What do you call:

I don't 'believe' anything, I have degrees of thinking information might be accurate but I talk as though I believe the best model I have; physics provides a model of the universe which I accept to a high degree and I think it's very likely accurate as an abstraction (the finer points are up for debate); I make and accept no claims about things that can't be covered by that model such as extra-universal entities or the reason we exist at all; I consider the elegance of a model as working to its merit as well as its accuracy so invoking supernatural or arbitrary forces where there's an alternative makes an explanation very implausible to me; I see no reason to invoke anything other than physics anywhere between the "big bang" step and my perception of the present so my currently preferred explanation excludes anything supernatural in any form.

I was calling that atheism.

Replies from: None, shminux
comment by [deleted] · 2013-02-02T16:21:43.325Z · LW(p) · GW(p)

I was calling that atheism.

In that sense, then, I'm an atheist.

My test was whether my gods-related beliefs would get me flamed on r/atheism. I don't think my beliefs would pass the ideological Turing test for atheism.

I used to think the god hypothesis was not just wrong, but incoherent. How could there be a being above and outside physics? How could god break the laws of physics? Of course now I take the simulation argument much more seriously, and even superintelligences within the universe can probably do pretty neat things.

I still think non-reductionism is incoherent; "a level above ours" makes sense, "supernatural" does not.

This isn't really a major update, though. I'm just not going to refer to myself as an atheist any more, because my beliefs permit a lot more.

comment by shminux · 2013-02-02T08:47:37.347Z · LW(p) · GW(p)

Seems like agnosticism to me, or atheism in a broader sense. Narrow atheism is the belief in zero gods.

comment by shminux · 2013-02-02T06:11:05.493Z · LW(p) · GW(p)

From your blog:

Recently it occurred to me that a large part of being addicted to Reddit isn't actually the content but the fact that the links turn purple when you click on them. And my brain is slightly obsessed with turning all the blue purple, all the time.

This is amazing, yet seems so obvious in retrospect. So many of us have turned into blue-minimizing robots without realizing it. Hopefully breaking the reward feedback loop with your extension would force people to try to examine their true reasons for clicking.

Replies from: shev
comment by shev · 2013-02-02T07:26:15.202Z · LW(p) · GW(p)

I was pretty pleased with myself for discovering that. It - sorta works. I still find myself going to Reddit, but so far it's still "feeling" less addictive (which is really hard to quantify or describe). Now I find myself just clicking over to websites looking for something, rather than specifically clicking links. I've been sleeping badly lately, though, and I find that my brain is a lot more vulnerable to my Internet addiction when I haven't slept well - so it's not a good comparison to my norm.

Incidentally, if anyone wanted me to, I could certainly make the extension work on other browsers. It's the simplest thing ever; it just injects 7 clauses of CSS into Reddit pages. I thought about making it mess with other websites I use (Hacker News, mostly), but I decided they weren't as much of a problem and it was better to keep it single-purpose for now.
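
A minimal sketch of the idea (this is not shev's actual extension; the selector and colour value here are assumptions): a WebExtension content script that injects CSS so that visited links on Reddit keep the unvisited colour.

    // content-script.ts (hypothetical) -- assumed to be registered for
    // reddit.com pages via the extension manifest's "content_scripts" entry.
    // It overrides the :visited colour so that clicked links look identical
    // to unclicked ones, removing the "turn all the blue purple" reward cue.
    const css = `
      a:visited { color: #336699 !important; } /* assumed default link colour */
    `;

    const style = document.createElement("style");
    style.textContent = css;
    document.head.appendChild(style);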

comment by itaibn0 · 2013-02-21T13:15:25.657Z · LW(p) · GW(p)

re-axiomizing set theory

Now I'm tempted to spread a meme. Have you heard of Martin-Löf type theory? In my opinion, it's a much better foundation of mathematics than ZFC.

comment by Nisan · 2013-02-21T16:04:36.236Z · LW(p) · GW(p)

Welcome. There are some e-reader format and pdf versions of the Sequences that may be easier to navigate.

comment by anansi133 · 2013-01-03T18:21:57.642Z · LW(p) · GW(p)

Hello, newbie here. I'm intrigued by the premise of this forum.

About me: I think a lot - mostly by myself. That's trained me in some really lazy habits that I am looking to change now.

In the last few weeks, I noticed what I think are some elemental breakdowns in human politics. When things go bad between people, I think it can be attributed to one of three causes: immaturity, addiction, or insanity. I would love to discuss this further, hoping someone's interested.

I wasn't going to mention theism, but it's here in the main post, and suddenly I'm interested: I trend toward the atheistic - I'm really unimpressed with my grandmother's deity, and "supernatural" doesn't seem a useful or interesting category of phenomena. But I like being agnostic more than atheist, just on a few tiny little wiggle-words that seem powerfully interesting to me, and I notice that other people seem to find survival value in it. So that's probably something I will want to talk about.

Many of my more intellectual friends and neighbors can seem like bullies a lot of the time. So I like the word "rationality" in the title of this place, much more than I like "science" or "logic". When I see the war of the Darwin fish on people's bumpers, I remember that the Romans still get a lot of credit for their accomplishments even though math and science as we know them barely existed. Obsession with mere logic seems to put an awful lot of weight on some unexamined premises - and people don't talk in formal logic any more than they do math in Roman numerals.

I'm not against vaccination, but I am a caregiver to a profoundly autistic child. It's frustrating to try to have any sort of conversation about autism without it devolving into a vaccination tirade.

I don't think of myself as a 9/11 "truther", and yet I still have many questions about those events and the response that trouble me. Some of these questions are getting answered now that the ten-year anniversary has seen the release of more information. As with the Kennedy assassination, I don't think the full story will ever be widely known. I'm cynical enough that I doubt that it matters.

SETI fascinates me. Bigfoot, the Loch Ness Monster, UFOs - not so much. Whitley Strieber is actually kind of interesting, when I can muster up the required grains of salt.

Anyway, it feels a bit like I'm crawling out from under a rock, not sure what the weather is really like out here. I want to outgrow the pleasures of cleverness, hoping for some happiness in wisdom.

Replies from: simplicio
comment by simplicio · 2013-01-03T18:46:10.506Z · LW(p) · GW(p)

About me: I think a lot - mostly by myself. That's trained me in some really lazy habits that I am looking to change now.

Yes, I know the feeling. Welcome out of the echo chamber!

I like being agnostic more than atheist, just on a few tiny little wiggle-words that seem powerfully interesting to me...

Do you mean that it's literally the words you find interesting? Which ones?

Replies from: anansi133
comment by anansi133 · 2013-01-03T19:15:36.962Z · LW(p) · GW(p)

That's not actually what I meant, but the challenge seems interesting. lemme see...

Reciprocity? (I'm looking for a word to describe what happens when Islam holds Jesus up as a prophet worth listening to, but Christians afford no such courtesy to Muhammad.)

Faith (Firefly's Book asks Mal, "when I ask you to have faith, why do you think I'm talking about God?")

Ethics vs Morals (few people I know seem to recognize a difference, let alone agree on it)

Moral Class (If we were to encounter a powerful extraterrestrial, how would we know they weren't God? How would they understand the question if we asked them?)

I guess the words weren't so small after all...

comment by MusicMapsReality · 2012-12-24T22:36:43.914Z · LW(p) · GW(p)

Hello, I'm Ben Kidwell. I'm a middle-aged classical pianist and lifelong student of science, philosophy, and rational thought. I've been reading posts here for years and I'm excited to join the discussion. I'm somewhat skeptical of some things that are part of the conventional wisdom around here, but even when I think the proposed answers are wrong - the questions are right. The topics that are discussed here are the topics that I find interesting and significant.

I am only formally and professionally trained in music, but I have tried to self-study physics, math, computer science, and philosophy in a focused way. I confess that I do have one serious weakness as a rationalist, which is that I can read and understand a lot of math symbology, but I can't actually DO math past the level of simple calculus with a few exceptions. (Some computer programming work with algorithms has helped with a few things.) It's frustrating because higher math is The Key that unlocks a lot of deep understanding of the universe.

I have a particular interest in entropy, information theory, cosmology, and their relation to the human experience of temporality. I think the discovery that information-theoretic entropy and thermodynamic entropy are equivalent, and that the quantum formalism encodes this duality, is a crucial insight which should be a foundational cornerstone of philosophy and our understanding of the world. The sequence about quantum theory and decoherence is one of my favorites, and I think there is a lot more to be done to adjust our philosophy and use of language when it comes to what kind of quantum reality we are living in.
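(As a rough sketch of the equivalence in question: for a probability distribution p_i over microstates, the Gibbs entropy of statistical mechanics and the Shannon entropy of information theory differ only by Boltzmann's constant and the base of the logarithm.)

```latex
% Gibbs (thermodynamic) entropy over microstate probabilities p_i
S = -k_B \sum_i p_i \ln p_i
% Shannon (information-theoretic) entropy of the same distribution
H = -\sum_i p_i \log_2 p_i
% hence S = (k_B \ln 2)\, H, i.e. the two differ only by a choice of units
```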

comment by [deleted] · 2012-11-17T05:00:21.207Z · LW(p) · GW(p)

I'm Rev. PhD in mathematics, disabled shut-in crank. I spend a lot of time arguing with LW people on Twitter.

Replies from: drethelin
comment by drethelin · 2012-11-17T06:35:19.824Z · LW(p) · GW(p)

Noooooo don't get sucked in

Replies from: None
comment by [deleted] · 2012-11-17T06:55:48.020Z · LW(p) · GW(p)

I think it is unlikely.

comment by capctr · 2012-11-08T06:39:39.264Z · LW(p) · GW(p)

I am a 43-year-old man who loves to read, and stumbling across HPMOR was an eye-opener for me; it resonated profoundly within. My wife is not only the Queen of Critical Thinking and logic, she is also the breadwinner. Me? I raise the children (three girls), take care of the house, and function as a housewife/gourmet chef/personal trainer/massage therapist for my wife, on top of being my daughters' personal servant. This is largely due to my wife's towering intellect, overwhelming competence, my struggles with ADHD, and the fact that she makes huge amounts of money. Me, I just age almost supernaturally slowly (at 43, I still pass for thirty, possibly due to an obsession with fitness), am above-average handsome, passingly charming, have a good singing voice, and am incapable of winning a logical argument, as the more stressed I grow, the faster my IQ shrinks. I am taken about as seriously by my wife as Harry probably was by his father as a four-year-old. I am looking to change that. I am hoping that if I learn enough about Less Wrong, I just might learn how to put all the books I compulsively read to good use, and maybe learn how to... change.

Replies from: MileyCyrus, Alicorn
comment by MileyCyrus · 2012-11-08T07:13:23.317Z · LW(p) · GW(p)

I'm actually incredibly interested in your story, if you don't mind. What is it like dating a woman who is smarter than you are? What do you think attracted her to you? (I would love to pair-bond with a genius woman, but most of them only want to pair-bond with other geniuses.)

comment by Alicorn · 2012-11-08T07:07:40.675Z · LW(p) · GW(p)

housewife

"House spouse" works as a gender neutral term, and it rhymes!

Replies from: MugaSofer
comment by MugaSofer · 2012-11-08T11:04:39.474Z · LW(p) · GW(p)

it rhymes!

This is not a good thing.

comment by Baruta07 · 2012-11-06T18:01:18.342Z · LW(p) · GW(p)

I am Alexander Baruta, a high-school student currently in the 11th grade (taking grade 12 math and biology). I originally found the site through Eliezer's blog. I am (technically) part of the school's robotics team (someone has to stop them from creating unworkable plans), undergoing Microsoft IT certification, and going through all of the psychology courses in as little time as possible (I'm currently enrolled in a self-directed learning school) so I can get to the stuff I don't already know. My mind is fact-oriented (I can remember the weirdest things with perfect clarity after only hearing them once), but I have trouble combining that recall with my English classes, and I have trouble remembering names. I am informally studying formal logic, programming, game theory, and probability theory (don't you hate it when the curriculum changes?). (I also have an unusual fondness for brackets, if you couldn't tell by now)

I also feel that any discussion about me that fails to mention my love of SF/fantasy should be shot dead. I caught on to reading at a very, very early age, and by the time I was in 5th grade I was reading at a 12th-grade comprehension level, tackling Asimov, Niven, Pohl, Piers Anthony, Stephen R. Donaldson, Roger Zelazny, and most other good authors.

Replies from: Kawoomba, beoShaffer
comment by Kawoomba · 2012-11-06T18:11:08.936Z · LW(p) · GW(p)

(I also have an unusual fondness for brackets, if you couldn't tell by now)

Lisp ith a theriouth condition, once you go full Lisp, you'll never (((((((((((((... come back)?n).

Replies from: Baruta07
comment by Baruta07 · 2012-11-06T20:49:45.378Z · LW(p) · GW(p)

I was laughing so hard when I saw this.

comment by beoShaffer · 2012-11-08T06:50:58.991Z · LW(p) · GW(p)

How do you feel about Heinlein?

Replies from: Baruta07
comment by Baruta07 · 2012-11-09T16:06:05.154Z · LW(p) · GW(p)

He's a decent author, but I am having trouble finding anything of significance by him in Calgary.

Replies from: beoShaffer
comment by beoShaffer · 2012-11-09T16:56:27.954Z · LW(p) · GW(p)

Too bad.

comment by mattwise · 2012-08-02T00:48:57.599Z · LW(p) · GW(p)

Hi,

I was introduced to LW by a friend of mine, but I will admit I dismissed it fairly quickly as internet philosophy. I came out to a meetup on a recent trip to visit him, and I really enjoyed the caliber of people I met there. It has given me reason to come back and be impressed by this community.

I studied math and a little bit of philosophy in undergrad. I'm here mostly to learn, and hopefully to meet some interesting people. I enjoy a good discussion, and I especially enjoy having someone change my mind, but I lose interest quickly when I realize that the other party has too much ego involved to even consider changing his or her mind.

I look forward to learning from you all!

Matt

comment by MikeDobbs · 2013-03-25T13:17:04.242Z · LW(p) · GW(p)

Hello LW community. I'm an HS math teacher most interested in Geometry and Number Theory. I have long been attracted to mathematics and philosophy because they both embody the search for truth that has driven me all my life. I believe reason and logic are profoundly important, both as useful tools in this search and for their apparently unique development within our species.

Humans aren't particularly fast, or strong, or resistant to damage as compared with many other creatures on the planet, but we seem to be the only ones with a reasonably well developed faculty for reasoning and questioning. This leads me to believe that developing these skills is a clear imperative for all human beings, and I have worked hard all my life to use rational thinking, discourse and debate to better understand the world around me and the decisions that I make every day.

This is what drove me towards teaching as a career, as I see my profession as providing me with the opportunity to help young people better understand the importance of reason and logic, as well as help them to develop their ability to utilise them.

I'm excited to finally become a member of this community which seems to share in many of the values I hold dear, and look forward to many intriguing and thought provoking discussions here on LW!

comment by bdbaruah · 2013-01-23T15:05:51.178Z · LW(p) · GW(p)

Aaron's blog brought me here. Sad that he's no longer with us.

I have been thinking for a long time about overcoming biases, and about putting that into practice in life. I work as an orthopaedic surgeon in the daytime, and all I see around me is an infinite amount of bias. I can't take the biases on unless I can understand them and apply that understanding to my own life processes!

comment by [deleted] · 2012-12-20T20:33:10.980Z · LW(p) · GW(p)

Hey everyone, I'm Sean Nolan. I found Less Wrong via tvtropes.org, but I made sure to lurk sufficiently long before joining. I've been finding a lot of interesting stuff on Less Wrong (most of which was posted by Eliezer), some of which I've applied to real life (such as how procrastination vs. doing something is the equivalent of defect vs. cooperate in a prisoner's dilemma against your future self). I'm 99.5% certain I'm a rationalist, the other 0.5% being doubt cast upon me by noticing I've somehow attained negative karma.
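(A sketch of that framing, with payoff numbers that are purely illustrative assumptions: "work" is cooperating with your future self, "procrastinate" is defecting on them.)

```python
# Procrastination as a prisoner's dilemma against your future self.
# The payoff numbers are illustrative assumptions, not from any source.
# Each entry: (payoff to present self, payoff to future self).
PAYOFFS = {
    ("work", "work"): (2, 2),                    # both selves cooperate
    ("work", "procrastinate"): (0, 3),           # you pay, future self slacks
    ("procrastinate", "work"): (3, 0),           # you slack, future self pays
    ("procrastinate", "procrastinate"): (1, 1),  # mutual defection
}

def total_welfare(today: str, tomorrow: str) -> int:
    """Summed payoff across both selves for one round."""
    present, future = PAYOFFS[(today, tomorrow)]
    return present + future

# As in the classic dilemma, defecting tempts each round, yet
# mutual cooperation beats mutual defection overall.
assert total_welfare("work", "work") > total_welfare("procrastinate", "procrastinate")
```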

comment by mjankovic · 2012-11-21T22:32:22.726Z · LW(p) · GW(p)

Hello, I'm a physics student from Croatia. I attended a combined physics and computer science program (study programs here are very specific) for a couple of years at a previous university that I left, and my high-school specialization was in economics. I am currently working towards my bachelor's degree in physics.

I have no idea how I learned of this site, though it was probably through some transhumanist channels (there are a lot of half-forgotten bits and pieces of information floating in my mind, so I can't be sure). Lately I've started reading the core sequences, mostly on my cell phone while traveling (it avoids tab explosions). So far I've encountered a lot of what I'd already considered or concluded for myself, in a more expanded form.

comment by RobertPearson · 2012-11-06T01:26:59.487Z · LW(p) · GW(p)

Hi! I am Robert Pearson: Political professional of the éminence grise variety. Catholic rationalist of the Aquinas variety. Avid chess player, pistol shooter. Admirer of the writings of Ayn Rand and Robert Heinlein. Liberal Arts BA from a small state university campus. I read Overcoming Bias occasionally some years ago, but heard of LessWrong from Leah Libresco.

My real avocation is learning how to be a smarter, better, more efficient, happier human being. Browsing the site for a while convinced me it was a good means to those ends.

I write a column on Thursdays for Grandmaster Nigel Davies' The Chess Improver.

comment by LadyStardust · 2012-11-06T01:02:42.284Z · LW(p) · GW(p)

Hey there! I'm a 19-year-old Canadian girl with a love for science, science fiction, cartoons, RPGs, Wayne Rowley, learning, reading, music, humour, and a few thousand other things.

Like many, I found this site via HPMOR. As a long-time fan of both science and Harry Potter, I was utterly addicted from chapter one. It's hard to apply scientific analysis to a fictional universe while still keeping a sense of humour, and HPMOR executes this brilliantly. My only complaint (all apologies to Mr. Yudkowsky, though I doubt he'll ever read this) is that Harry comes off as rather Sue-ish. I wanted more, so I came here and found yet more excellent writings. The story about the Pebblesorters is my personal favourite.

I'm mad about music. Queen, Rush, Black Sabbath, and Bowie are some of my favourites. I have a Telecaster, which I use mostly to play blues. God, I love the blues. But I digress...

Though I'm merely a high school graduate looking for a part-time job, I'm really passionate about biology. I'm the kind of person who reads about sodium-potassium pumps not because they're on the upcoming quiz, but because it indulges my curiosity about how humans and other lifeforms work. (Don't get me started about speculative xenobiology!)

I've lurked on this site for about 7 months now, and I really hope that I'll be accepted here in spite of my laconic, idiosyncratic, comma-ridden ramblings. Thank you.

comment by Jess_Whittlestone · 2012-10-05T10:38:12.852Z · LW(p) · GW(p)

Hi, I'm Jess. I've just graduated from Oxford with a master's degree in Mathematics and Philosophy. I'm trying to decide what to do next with my life, and graduate study in cognitive science is currently top of my list. What I'm really interested in is the application of research in human rationality, decision making and its limitations, to wider issues in society, public policy, etc.

I'm taking some time to challenge my intuition that I want to go into research, though, as I'm slightly concerned that I'm taking the most obvious option not knowing what else to do. My methods for doing this at the moment are a) trying to think about reasons it might not be the best option (a "consider the opposite" type approach) and b) initiating conversations with as many people as possible doing things that interest me, and getting some work experience in different areas this year, to broaden my limited perspective. Any better/additional suggestions are more than welcome!

I'm about to start an internship with 80,000 Hours, doing a project on the role of cognitive bias in career choice. The aim is to collect together the existing research on biases and mitigation techniques and apply it in a practical and accessible way, identifying the biases that most commonly affect career choice and providing useful strategies for avoiding them. I was wondering if anyone here has a summary of the existing literature on cognitive bias mitigation, or any recommendations of particularly useful/important research? Equally, if anyone has spent much time thinking about this, I'd love to hear about it.

Replies from: beoShaffer
comment by beoShaffer · 2012-10-05T21:55:33.091Z · LW(p) · GW(p)

I don't have a full summary on hand, but if you just want to jumpstart your own search, you might want to read Lukeprog's article on efficient scholarship and look into the keyword "debiasing".

comment by Weisguy · 2012-08-22T20:52:58.159Z · LW(p) · GW(p)

Hi everyone,

I'm currently caught up on HPMOR, and I've read many of the sequences, so I figured it was time to introduce myself here.

I'm a 24 year old Cognitive Psychology graduate student. I was raised as a fairly conservative Christian who attempted to avoid any arguments that would seriously challenge my belief structure. When I was in undergrad, I took an intro to philosophy course which helped me realize that I needed to fully examine all of my beliefs. This helped me to move toward becoming a theistic evolutionist and finally an atheist. Now I strive to use the methods of rationality to continue to question all of my beliefs and improve my life.

As a psychology graduate student I have the opportunity to teach an introductory psychology course. I'm hoping to take what I have learned here and start helping my students improve their rationality. Specifically, I'm planning to have the students read excerpts from Ch 22 & 23 of HPMOR as a fun and interesting way to start learning to think like a scientist. I'm hoping the community can assist me with possibly narrowing down the sections I'm going to have them read and consider possible methods of assessment. As of now, I know that I want to have the students analyze the methodology used by Harry in his two experiments from those chapters, and I probably want to have students come up with their own hypotheses and methods to test them. Any help the community wants to provide is most appreciated.

comment by [deleted] · 2012-08-03T13:30:14.943Z · LW(p) · GW(p)

Hi everyone,

I'm Leisha. I originally came across this site quite a while ago when I read the Explain/Worship/Ignore analogy here. I was looking for insight into my own cognitive processes; to skip the unimportant details, I ended up reading a whole lot about the concept of infinity once I realized that contemplating the idea gave me the same feeling of Worship that religion used to. It still does, to some extent, but at least I'm better-informed and can Explain the sheer scale of what I'm thinking of a little better.

I didn't return here until yesterday, when I was researching the concept of rational thought (by way of cognitive processing, Ayn Rand, and Vulcans!). For background, I'm a Myers-Briggs F-type (INFJ) who has come to realize that while emotion has its value, it's certainly not to be relied upon for making sound judgements. What I'm looking to do, essentially, is to repair the faulty processes within my own mind. I've spent a lot of time reaching invalid conclusions because the premises I was working from were wrong; the original input I was given (before I was of an age to think critically) was incorrect. I'm tracing back the origin of a lot of the aliefs I have, only to find that they're based on values I no longer hold to be important. My value-sets need tweaking.

Unlike with a computer, though, with a mind you can't just delete what you need to and start over. Those detrimental thought-processes need to be overwritten with something that works better. That's why I'm here, essentially, as a complement to my inner work. I'm here to read about a more rational way of thinking, to try out ideas, to compare and to analyze. I intend to work through the Sequences, a little at a time.

I expect to read much more than I comment. If I assess myself honestly and fairly, then I'm not an unintelligent person, but I am (particularly by comparison with the subset represented at this website!) uneducated, and so a great deal of the math and science will likely be beyond my comprehension at this point. However, I thought I'd post here to introduce myself anyway, and to say what a valuable resource this site looks to be. I look forward to reading more.

Other trivia: I'm female, which I know puts me in the minority here. I enjoy science fiction and am working on some original pieces of my own. I'm interested in psychology, anthropology and the "weirder" parts of physics. I like to think about the very large and very small ends of the scale, and contemplate the big questions about who we are, how we got here and where we're going. I'm a libertarian and a feminist, and I drink tea.

Replies from: None
comment by [deleted] · 2012-08-03T14:33:29.635Z · LW(p) · GW(p)

Hmm... Explain/worship/ignore is one of the first articles I remember reading too.

I wish you the warmest welcome.

Make sure to at least read the Core Sequences (Map and Territory, Mysterious Answers to Mysterious Questions, Reductionism), as there is a tendency in discussions on this site to be harsh toward debaters who have not familiarized themselves with the basics.

Replies from: None
comment by [deleted] · 2012-08-04T06:01:57.404Z · LW(p) · GW(p)

It's a good article!

Thank you for the kind welcome and for the advice. I don't intend to jump into discussion without having done the relevant reading (and acquired at least a small understanding of community norms) so hopefully I'll avoid too many mistakes. I'm working through Mysterious Answers to Mysterious Questions now, and what strikes me is how much of it I knew, in a sense, already, but never could have put forward in such a coherent and cohesive way.

So far, what I've read confirms my worldview. Being wary of confirmation bias and other such fun things, I'll be curious to see how I react when I read an article here that challenges it, as I'm near-certain will happen in due course. (And even typing that makes me wonder what exactly I mean by "I" there in each case, but that's off-topic for this thread.)

comment by JaySwartz · 2012-11-19T23:12:52.305Z · LW(p) · GW(p)

Hello,

I am Jay Swartz, no relation to Aaron. I have arrived here via the Singularity Institute and interactions with Louie Helm and Malo Bourgon. Look me up on Quora to read some of my posts and get some insight into my approach to the world. I live near Boulder, Colorado and have recently started a MeetUp, The Singularity Salon, so look me up if you're ever in the area.

I have an extensive background in high tech, roughly split between Software Development/IT and Marketing. In both disciplines I have spent innumerable hours researching human behavior and thought processes in order to gain insights into how to create user interfaces and how to describe technology in concise ways to help people to evaluate the merits of the technology. I've spent time at Apple, Sun, Seagate, Mensa, Osborne and a few start-ups applying my ever-deepening understanding of the human condition.

Over the years, I have watched synthetic intelligence (I much prefer the more precise SI over AI) grow in fits and starts. I am increasing my focus in this area because I believe we are on the cusp of general SI (GSI). There is a good possibility that within my lifetime I will witness the convergence of technology that leads to the appearance of GSI. This will in part be facilitated by advances in medicine that will extend my lifespan well past 100 years.

I am currently building my first SI web crawler, which will begin assembling a corpus to be mined by some SciPy applications on my list of things to do. These efforts will provide me with technical insights on the SI challenge. There is even the possibility, however slight, that they can be matured into a contribution to the creation of SI.
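(A corpus-building crawler of the kind described can start very small. The sketch below is illustrative only: the seed URL, page limit, and choice of the requests and beautifulsoup4 libraries are assumptions, not details from this comment.)

```python
# A minimal breadth-first crawler that collects page text for later mining.
# Seed URL and page limit are placeholder assumptions.
import collections

import requests
from bs4 import BeautifulSoup

def crawl(seed: str, max_pages: int = 50) -> dict:
    corpus = {}                        # url -> extracted plain text
    queue = collections.deque([seed])
    seen = {seed}
    while queue and len(corpus) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue                   # skip unreachable pages
        soup = BeautifulSoup(html, "html.parser")
        corpus[url] = soup.get_text(" ", strip=True)
        for link in soup.find_all("a", href=True):
            href = link["href"]
            if href.startswith("http") and href not in seen:
                seen.add(href)
                queue.append(href)
    return corpus

if __name__ == "__main__":
    pages = crawl("http://example.com")
    print(len(pages), "pages collected")
```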

Finally, I am working on a potential paper for the Singularity Institute. I just posted a first outline/draft, Predicting Machine Super Intelligence, but do not yet know the details on how anyone finds it or how I see any responses. Having been on more than a few sites similar to this, I know I will be able to quickly sort things out.

I am looking forward to reading and exchanging ideas here. I will strive to contribute as much as I receive.

Jay

Replies from: gwern
comment by gwern · 2012-11-19T23:31:16.555Z · LW(p) · GW(p)

Finally, I am working on a potential paper for the Singularity Institute. I just posted a first outline/draft, Predicting Machine Super Intelligence, but do not yet know the details on how anyone finds it or how I see any responses. Having been on more than a few sites similar to this, I know I will be able to quickly sort things out.

I don't see anything. I assume you mean you put it in the LW edit box and then saved it as a draft? Drafts are private.

comment by StonesOnCanvas · 2012-11-15T20:50:48.733Z · LW(p) · GW(p)

Hi, I'm Bojidar (also known as Bobby). I was introduced to LW by Luke Muehlhauser's blog "Common Sense Atheism", and I've been reading LW ever since he first started writing about it. I am a 25-year-old laboratory technician (and soon-to-be PhD student) at a major cancer research hospital in Buffalo, NY. I've been reading LW for a while, and recently I've been really wishing that Buffalo had a LW group (I've been considering starting one, but I'm a bit concerned that I don't have much experience in running groups, nor have I been very active in the online community).

A bit about myself: I enjoy reading about rationality, psychology, biology, philosophy, and methods of self-help (or self-optimization). In my spare time I like doing artistic things (oil painting, figure drawing, and making really cool Halloween costumes), gardening, travel, and playing video games (casual MMO gamer & RPG fan), and I like watching sci-fi and fantasy-genre movies/TV programs. Also, I work out 5 times per week (which, thanks to some awesome self-help advice, has been a whole lot easier to stick with - thanks Luke!). I hope to learn how to play the piano well (I currently just freestyle on occasion or attempt to learn songs I like by watching YouTube Synthesia videos, but I would really like to learn how to read sheet music).

As far as my background in rationality goes, I would have to say that I didn't really grow up in a particularly rational environment. I grew up Christian, but religion wasn't a huge influence on my upbringing. On the other hand, my family (particularly my mom) is really into alternative medicine. I wish I could say it is just a general belief in "healthy eating" coupled with the naturalistic fallacy, but sadly it is not. She is a homeopathic "doctor" (thankfully non-practicing!) and can easily be convinced of even the most biologically implausible remedies (on rare occasions even scaring me by taking or suggesting potentially dangerous treatments). I really fear the possible outcome of these beliefs; given the option between effective chemotherapy and magical sugar pills, she probably won't choose the option that saves her life. (After several failed attempts to improve her rationality and change her mind, I have long abandoned any further attempts, in hopes of preserving my relationship with my family.)

That being said, for a large portion of my life I believed many of the same things my parents taught me to believe. Then I went to college as a premed student and was exposed to a lot of new information, which, over time, made me start to reject those beliefs. Growing up, I was considered to be pretty rational by other people around me (not always in a good way; often it was negatively attached to the claim of being "left-brained" or "not being in touch with my intuitive self"). In retrospect, I was only marginally saner than the other people around me, perhaps just sane enough to change my mind given the chance.

P.S. I have not taken any formal logic classes and on occasion might need some terms or symbols clarified (although my boyfriend has, and frequent discussions with him have helped me pick up some of this nomenclature).

comment by Rixie · 2012-11-14T02:01:15.119Z · LW(p) · GW(p)

Hi, I'm Rixie, and I read this fan fic called Harry Potter and the Methods of Rationality, by lesswrong, so I decided to check out Lesswrong.com. It is totally different from what I thought it would be, but it's interesting and I like it. And right now I'm reading the post below mine, and wow, my comment sounds all shallow now . . .

Replies from: Strange7, daenerys, Rixie
comment by Strange7 · 2012-11-14T03:27:08.695Z · LW(p) · GW(p)

What did you think it would be like?

Replies from: Rixie
comment by Rixie · 2012-11-29T01:30:40.883Z · LW(p) · GW(p)

I thought it would be more like hpmor.com, but for the author.

Little did I know . . .

comment by daenerys · 2012-11-14T02:38:55.983Z · LW(p) · GW(p)

Hi Rixie! Don't worry! Lots of people came to LessWrong after reading HPMoR (myself included). I know it can be intimidating here at first, but well worth the effort, I think.

You might also be interested in Three Worlds Collide. It's another fiction by the same guy who wrote HPMoR, and a bunch of the Sequence posts here.

If you have any questions about anything, feel free to PM me!

comment by Rixie · 2012-11-14T02:11:14.686Z · LW(p) · GW(p)

And, a question: what does "0 children" mean? It's on the comments which were downvoted a lot and not shown.

Replies from: Slackson, Nornagest, Nisan
comment by Slackson · 2012-11-14T03:29:49.540Z · LW(p) · GW(p)

It means it has 0 replies. The way the comments work is that the one above is the "parent" and the ones below are "children". Sometimes you see people use terminology such as "grandparent" and "great-grandparent" to refer to posts further above.
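(A toy sketch of that structure, with illustrative names:)

```python
# Toy model of threaded comments: each comment tracks its parent and children.
class Comment:
    def __init__(self, text, parent=None):
        self.text = text
        self.parent = parent           # None for a top-level comment
        self.children = []             # direct replies nested under this one
        if parent is not None:
            parent.children.append(self)

top = Comment("top-level comment")
reply = Comment("a reply", parent=top)
subreply = Comment("a reply to the reply", parent=reply)

assert len(top.children) == 1          # the "1 child" in the collapsed footer
assert subreply.parent.parent is top   # i.e. the "grandparent"
```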

comment by Nornagest · 2012-11-14T02:30:07.541Z · LW(p) · GW(p)

It means no one replied to the comment. Normally this is implicit in the number of comments nested under it, but since those aren't shown when comments are downvoted below the threshold, the site provides the number of child comments as a convenience.

comment by Nisan · 2012-11-14T02:29:04.667Z · LW(p) · GW(p)

If the downvoted comment had, e.g. 5 total replies to it, it would say "5 children".

comment by avantguard · 2012-11-06T19:42:10.149Z · LW(p) · GW(p)

I'm Rachel Haywire and I love to hate culture. I've been in "the community" for almost 2 years but just registered an account today. I need to read more of the required texts here before saying much but wanted to pop my head out from lurking. I've been having some great conversations on Twitter with a lot of the regulars here.

I organize the annual transhumanist/alt-culture event Extreme Futurist Festival (http://extremefuturistfest.info) and should have my new website up soon. I like to write, argue, and write about arguing. I've also done silly things such as producing industrial music and modeling.

You probably know me as that really loud girl at parties with the tattoos and crazy hair. I'm actually not trying to get attention. I'm just an autist. I am here so I can become a more rational person. I love philosophy and debate but my thinking is not always... correct?

comment by wesley · 2012-11-06T02:49:58.281Z · LW(p) · GW(p)

Hi, my name is Wes(ley), and I'm a lurkaholic.

First, I'd like to thank this community. I think it is responsible in a large way for my transformation (perceived transformation, of course) from a cynical high schooler who truly was only motivated enough to use his natural (not worked-for) above-average reasoning skills to troll his peers, into a college kid currently making large positive lifestyle changes and dreaming of making significant positive changes in the world.

I think I have observed significant changes in my thinking patterns since reading the sequences, learning about Bayes, and watching discussions unfold on LessWrong over the last two years or so.

Three examples (and there are many more) of this are:

  1. Noticing more quickly, and more often, when a dispute is about terms and not substance.

  2. Identifying situations in which I or others are trying to "guess the teacher's password" (this has really helped me identify gaps in understanding)

  3. Increased internal dialogue concerning bias (in myself and in others; I at first started to notice myself being strongly subject to confirmation bias, and I suspect realizing this has at least a little bias-reducing effect)

Unfortunately, I don't think I have come even close to being able to apply these skills in a place where they would be highly beneficial to others, like a decision-making position. That is okay; my belief is that this is something that will come with age and career advancement.

One of my goals for the next year is to start a LessWrongish student organization at my college campus (Auburn University), which is a traditionally very conservative place. This is partially out of a wholly selfish desire to engage in more stimulating discussions (instead of just spectating; this is also why I am delurking), and partially out of a partly selfish desire to create a community at school that fosters instrumental rationality. I think that by posting this goal here, it is at least slightly more likely that I will go through with it.

Some of the things I like to do include: race small sailboats, read, play video games, try new foods, explore, learn, smile at people I don't know, play rough with my family's dogs, drive with high acceleration (not necessarily high speeds), travel, talk with people I don't know and will likely never meet again, find a state of flow in work, read comments on CNN political articles (it's a comedy thing), learn about native animal and plant species, catch critters, listen to big band music, find humor in unusual places, laugh at myself, fantasize about getting superpowers, and do lab benchwork.

Some of the things I don't like to do include: get to know new people (I like knowing people, though), spend time on social networking sites (I don't have a Facebook or Twitter), have text conversations, dress formally (ties? why do we need to cling to those?), "jump through hoops" (e.g. make sure to attend 5 events for this class, suck up to professor X for a good rec, make sure to put X on your resume), engage in politics, talk to people who say things like "it's all relative, man" or "I choose to not let my world be bound by logic", clean, binge drink (okay, actually, I don't like being hung over, or the thought of poisoning myself), die to lag, and perceive assignment of undue credit.

Currently I am taking a semester off from studying cell and molecular biology, and volunteering as a research student in a solid tumor immunology lab. I think long-term I would like to get involved with research on the molecular basis of aging, or applied research related to life extension.

comment by [deleted] · 2012-09-30T23:11:29.897Z · LW(p) · GW(p)

I'm new on Less Wrong and I want to solve P vs. NP.

Replies from: shminux, Mitchell_Porter, EvelynM, beoShaffer
comment by shminux · 2012-10-01T03:36:12.492Z · LW(p) · GW(p)

One of my main goals right now is to solve P vs. NP.

Consider partitioning it into smaller steps. For example, getting a PhD in math or theoretical comp sci is a must before you can hope to tackle something like that. Well, actually, before you can even evaluate whether you really want to. While you seem to be on your way there, you clearly under-appreciate how deep this problem is. Maybe consider asking for a chat with someone like Scott Aaronson.

Replies from: None
comment by [deleted] · 2012-10-01T14:24:12.753Z · LW(p) · GW(p)

You clearly under-appreciate how deep this problem is.

Yes, I do.

Replies from: shminux, TimS
comment by shminux · 2012-10-01T14:59:23.150Z · LW(p) · GW(p)

After that, will it be a difficult, but possible, problem?

Do the math yourself to calculate your odds. Only one of the 7 Millennium Prize Problems has been solved so far, and that by a person widely considered a math genius since his high-school days at one of the best math-oriented schools in Russia, and possibly the world, at the time. And he was lucky that most of the scaffolding for the Poincaré conjecture happened to be in place already.

So, your odds are pretty bad, and if you don't set a smaller sub-goal, you will likely end up burned out and disappointed. Or worse, come up with a broken proof and bitterly defend it against others "who don't understand the math as well as you do" till your dying days. It's been known to happen.

Sorry to rain on your parade.

comment by TimS · 2012-10-01T14:56:19.575Z · LW(p) · GW(p)

My sense is that you are underestimating the number of extremely smart mathematicians who have been attacking P ? NP. And further, you are not yet in a position to accurately estimate your chances.

For example, PhDs in math OR comp. sci. != PhDs in math AND comp. sci. The latter is more impressive because it is much, much harder.

If you find theoretical math interesting, by all means pursue it as far as you can - but I wouldn't advise a person to attend law school unless they wanted to be a lawyer. And I wouldn't advise you to enroll in a graduate mathematics program if you wouldn't be happy in that career unless you worked on P ? NP.

Replies from: None
comment by [deleted] · 2012-10-01T18:52:52.135Z · LW(p) · GW(p)

You are underestimating the number of extremely smart mathematicians who have been attacking the problem. And further, you are not yet in a position to accurately estimate your chances.

I was definitely engaging in motivated cognition.

Replies from: TimS
comment by TimS · 2012-10-01T19:13:33.332Z · LW(p) · GW(p)

How many?

If your father has a PhD in Comp.Sci., he's more likely to know than a lawyer like myself.

That said, the Wikipedia article has 38 footnotes (~3/4 appear to be research papers) and 7 further readings. I estimate that at least 10x as many papers could have been cited. Conservatively, that's 300 papers. With multiple authors, that's at least 500 mathematicians who have written something relevant to P ? NP.

Adjust downward because relevant != proof, adjust upward because the estimate was deliberately conservative - but how much to move in each direction is not clear.
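(Spelling out the arithmetic of the estimate above; the authors-per-paper figure is an added assumption, included only to show how roughly 500 could fall out of roughly 300 papers.)

```python
# Fermi estimate from the comment above, made explicit.
footnotes = 38
research_fraction = 0.75       # "~3/4 appear to be research papers"
cited_papers = footnotes * research_fraction    # ~29 cited papers
papers = cited_papers * 10     # "at least 10x as many could have been cited"
papers = 300                   # rounded up: "conservatively, that's 300 papers"
authors_per_paper = 1.7        # assumption, not stated in the comment
print(round(papers * authors_per_paper))        # ~510 -> "at least 500"
```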

Replies from: None
comment by [deleted] · 2012-10-01T21:04:32.599Z · LW(p) · GW(p)

The Millennium Prize would be a nice way to simultaneously fund my cryopreservation and increase my prestige. Will I get it? No.

Replies from: shminux, TimS
comment by shminux · 2012-10-01T21:52:03.586Z · LW(p) · GW(p)

The Millennium Prize would be a nice way to simultaneously fund my cryopreservation and increase my prestige. I clearly need a backup plan, though, and I don't have one. Will someone with a BS in mathematics and computer science be able to find a good job? Where should I look?

Sorry to put it bluntly, but this sounds incredibly naive. One cannot plan on winning the Millennium Prize any more than one can plan on winning a lottery. So it's not an instrumentally useful approach to funding your cryo. The latter only requires a modest monthly income, something that you will in all likelihood have regardless of your job description.

As for the jobs for CS graduates, there are tons and tons of those in the industry. For example, the computer security job market is very hot and requires the best and the brightest (on both sides of the fence).

comment by TimS · 2012-10-02T13:31:58.306Z · LW(p) · GW(p)

In addition to what shminux said (and which I fully endorse), I think you sell your father short. He doesn't just teach, he does research. Even if he's stopped doing that because he has tenure, he still helps peer-review papers. Even if he's at a community college and does no research or peer review, he still probably knows what was cutting edge 10 to 15 years ago (which is much more than you or I do).

Regarding actual career advice, I think there are three relevant skills:

  • Math skill
  • Writing skill
  • Social skill

Having all three at good levels is much better than having only one at excellent levels. Developing them requires lots of practice - but that's true of all skills.

At college, I recommend taking as much statistics as you can tolerate. Also, take enough history that you can identify something specific taught to you as fact in high school that was false or insufficiently nuanced - but not something that you currently think is false.

In terms of picking majors, it's probably too early to tell - if you pick a school with a strong science program, you'll figure out the rest later. Pick courses by balancing your interest with your perception of how useful the course will be (keeping in mind that most courses are useless in real life). Topic is much less important than the quality of the professor. In fact, forming good relationships with specific professors is more valuable than just about any "facts" you get from particular classes - you'll have to figure out who is a good potential mentor, but a good mentor can answer the very important questions you are asking much more effectively than a bunch of random strangers on the Internet.

Good luck.

comment by Mitchell_Porter · 2012-10-01T02:16:44.981Z · LW(p) · GW(p)

Mulmuley's geometric complexity theory is still where I would start. It's based on continuum mathematics, but extending it to Boolean objects is the ultimate goal. A statement of P!=NP in GCT language can be seen as Conjecture 7.10 here. (Also downloadable from Mulmuley's homepage, see "Geometric complexity theory I".)

comment by EvelynM · 2012-10-01T03:13:57.549Z · LW(p) · GW(p)

Welcome!

A fresh perspective on hard problems is always valuable.

Getting the skills to be able to solve hard problems is even more valuable.

comment by beoShaffer · 2012-10-01T02:31:25.110Z · LW(p) · GW(p)

Hi, Jimmy. Welcome to Less Wrong. Unfortunately I don't have much advice on P vs. NP. On doing the impossible is kinda related, but not too close.

Replies from: None
comment by [deleted] · 2013-03-03T15:28:21.144Z · LW(p) · GW(p)

Do you mean this guy? That's not me. I'm the anonymous one.

comment by sakranut · 2012-08-03T00:49:54.135Z · LW(p) · GW(p)

Hi everyone!

I'm 19 years old and a rising sophomore at an American university. I first came across Less Wrong five months ago, when one of my friends posted the "Twelve Virtues of Rationality" on Facebook. I thought little of it, but soon afterward, when reading Leah Libresco's blog on atheism (she's since converted to Catholicism), I saw a reference to Less Wrong and figured I would check it out. I've been reading the Sequences sporadically for a few months, and I just got up to date on HPMOR, so I thought I would join the community and perhaps begin posting.

Although I have little background in mathematics, cognitive science, or computer programming, I have had a long-standing, deep interest in ethics and happiness, both of which inevitably lead to an interest in epistemology. Since I began hanging around Less Wrong, my interest in logic and cognitive biases has definitely been piqued as well. Some of my other, less relevant, interests include intellectual history, music, Western classic literature, literary theory, aesthetics, economics, and political philosophy. I also enjoy the New York Giants and playing the piano.

I love debating others, but mostly debating myself - I do so constantly, but too often inconclusively. The main advantage I've found of debating others is that they help disabuse me of my own self-deceptions. Reading good literature usually serves this purpose as well.

A strong part of my identity is that I am a religious Jew. I am not a theist, but I keep a large portion of Jewish law, mostly because I am satisfied that doing so is a good use of my time. I can't remember a case when Jewish law has collided with my ethics, perhaps because so many of my ethical intuitions come from the Jewish tradition.

It amuses me that the Less Wrong community refers to itself as "rationalist," given that at one point in intellectual history, "rationalists" were those who did not believe in empiricism. Aside from that, I'm extremely excited to learn from all of you.

Replies from: Zaine
comment by Zaine · 2012-08-03T01:21:21.697Z · LW(p) · GW(p)

It amuses me that the Less Wrong community refers to itself as "rationalist," given that at one point in intellectual history, "rationalists" were those who did not believe in empiricism.

Are you referring to Humean rationalists? Before Hume used empiricism to show how by mere empiricism one can never certainly identify the cause of an effect, empirical thought was lauded by Cartesian rationalists. Hume's objection to an overreliance on empiricism also (partially) helped galvanize the Romantic movement, bringing an end to the Enlightenment. Individuals throughout later history who considered themselves rationalists were of the Cartesian tradition, not 'all is uncertain' Humean rationalism (see Albert from Goethe's The Sufferings of Young Werther for one example). Those who embraced Hume's insight - though it should be mentioned that Hume himself thought that fully embracing it would be quite foolish - did not call themselves rationalists, but were diverse members of myriad movements across history.

Hume's point remained an open problem until it was later considered solved by Einstein's theory of special relativity.

Welcome, by the way.

Replies from: army1987, None, sakranut
comment by A1987dM (army1987) · 2012-08-03T07:26:16.207Z · LW(p) · GW(p)

Hume's point remained an open problem until it was later considered solved by Einstein's theory of special relativity.

What?

Replies from: Zaine
comment by Zaine · 2012-08-03T11:32:18.403Z · LW(p) · GW(p)

I may be misremembering, but if I recall correctly, with Einstein's theory of special relativity it was at the time considered finally possible to accurately and precisely predict the movements of bodies in our universe. While Newton proved what laws the universe is bound by, he never figured out how these rules operated beyond what was plainly observable. When Einstein's theory of special relativity became accepted, that ball X caused the effect of ball Y's movement became mathematically provable at such a level of precision that Hume's insight - that what causes the effect of ball Y's movement is not empirically discernible - was no longer considered sound.

I admit the above is a bit vague, and perhaps dangerously so. If it doesn't clear up your question let me know, and I'll check over my notes when I get the chance.

Replies from: Vaniver, iDante, army1987
comment by Vaniver · 2012-08-03T14:49:52.168Z · LW(p) · GW(p)

I may be misremembering, but if I recall correctly with Einstein's theory of special relativity it finally became possible to accurately and precisely predict the movements of bodies in our universe.

This is incorrect. MHD is correct about the right response to "all is uncertain," which is "right, but there are shades of uncertainty from 0 to 1, and we can measure them."

Replies from: Zaine
comment by Zaine · 2012-08-03T15:11:44.781Z · LW(p) · GW(p)

Thank you, both of you. I changed the text to reflect only STR's historical significance in regard to Hume's insight.

comment by iDante · 2012-08-03T18:46:12.011Z · LW(p) · GW(p)

Newton's theory of gravitation is a very close approximation to Einstein's general relativity, but it is measurably different in some cases (precession of Mercury, gravitational lensing, and more). Einstein showed that gravity can be neatly explained by the curvature of spacetime, that mass distorts the "fabric" of space (I use quotes because that's not the mathematical term for it, but it conjures a nice image that isn't too far off of reality). Objects move in straight lines along curved spacetime, but to us it looks like they go in loops around stars and such.

Special relativity has to do with the relation of space and time for objects sufficiently far away from each other that gravity doesn't affect them. Causality is enforced by this theory since nothing can go faster than light, and so all spacetime intervals we run into are time-like (That's just a fancy way of saying we only see wot's in our light cone).
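(For concreteness, the interval in question, written in one common sign convention:)

```latex
% Minkowski interval between two events in special relativity
\Delta s^2 = -c^2 \,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2
% \Delta s^2 < 0: timelike separation (inside the light cone, causal contact possible)
% \Delta s^2 = 0: lightlike;  \Delta s^2 > 0: spacelike (no causal contact)
```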

comment by A1987dM (army1987) · 2012-08-03T17:44:47.684Z · LW(p) · GW(p)

(I think it was general relativity, not special relativity.) I can see where whoever said that is coming from, but I'm not sure I 100% agree. (I will elaborate on this when I have more time.)

Replies from: Zaine
comment by Zaine · 2012-08-03T18:17:02.148Z · LW(p) · GW(p)

(I think it was general relativity, not special relativity.)

Special relativity was formalised around ten years earlier than general relativity (around 1905), which better fits in with my mental timeline of the fin de siècle.

I can see where whoever said that is coming from[...]

Whoever asserted that Einstein's theory had resolved Hume's insight? Or whoever said that, at the time, the educated generally considered Einstein's theory to have resolved Hume's insight? If the former, I think it was more a widespread idea that the majority of the educated shared, rather than one person's assertion.

Regardless of to whom you were referring, I look forward to your elaboration!

Replies from: army1987, shminux
comment by A1987dM (army1987) · 2012-08-03T22:49:13.628Z · LW(p) · GW(p)

Special relativity was formalised around ten years earlier than general relativity (around 1905), which better fits in with my mental timeline of the fin de siècle.

I can't see what special relativity would have to do with Hume. It just extended the principle of relativity, which was already introduced by Galileo, to the propagation of light at a finite speed, though with all kinds of counter-intuitive results such as the relativity of simultaneity. By itself, it still doesn't predict (say) gravitation. (It does predict conservation of energy, momentum and angular momentum if you assume space-time is homogeneous and isotropic and use Noether's theorem, but so does Galilean relativity for that matter.)

On the other hand, general relativity, from a small number of very simple assumptions, predicts quite a lot of things (pretty much any non-quantum phenomenon which had been observed back then, except electromagnetism). Indeed, Einstein said he was completely certain his theory would prove to be true before it was even tested. EDIT: you actually need more data than I remembered to get to GR: see http://lesswrong.com/lw/jo/einsteins_arrogance/757x

(Wow, now that I'm trying to explain that, I realize that the difference between SR and GR in these respects are nowhere near as important as I was thinking.)

Anyway, there's still no logical reason why those very simple assumptions have to be true; you still need experience to tell you they are.

The comments to http://lesswrong.com/lw/jo/einsteins_arrogance/ go into more detail about this.

If the former, I think it was more a widespread idea that the majority of the educated shared, rather than one person's assertion.

Can you give me some pointers? I can't recall ever hearing about that before.

Replies from: Zaine
comment by Zaine · 2012-08-04T00:19:29.168Z · LW(p) · GW(p)

Thank you for the review! It makes a lot in the two wikipedia articles on special and general relativity easier to digest.

Can you give me some pointers? I can't recall ever hearing about that before.

I intend to go over my notes thoroughly this weekend so I can separate historical fact from interpretation, which are currently grouped together in my memory. I'll be able to do your response justice then.

comment by shminux · 2012-08-03T18:40:28.965Z · LW(p) · GW(p)

I'm not an expert in philosophy, but if we are talking physics, relativity, special or general, did not do anything of the sort you claim: "Einstein's theory of special relativity it was at the time considered finally possible to accurately and precisely predict the movements of bodies in our universe." If anything, Newtonian mechanics had a better claim at determinism, at least until the 19th century, when it became clear that electromagnetism comes with a host of paradoxes, not cleared up until both SR and QM were developed. Of course, this immediately caused more trouble than it solved, and I recall no serious physicist who claimed that it was "finally possible to accurately and precisely predict the movements of bodies", given that QM is inherently non-deterministic, SR showed that Newtonian gravity is incomplete, and GR was not shown to be well-posed until much later.

Replies from: Zaine
comment by Zaine · 2012-08-03T20:09:43.713Z · LW(p) · GW(p)

Thank you for your input. I also do not know of any serious physicist who asserted that causality had been finally and definitively solved by SR; from what I was taught, it was, as I said, more a widespread idea that the majority of the educated shared, rather than one person's assertion.

Indeed, Hume's insight is more of a philosophical problem than a mathematical one. Hume showed that empiricism alone could never determine causality. Einstein's STR showed that causality can be determined empirically when aided by maths, a tool of the empiricist. It can be argued that STR does not definitively prove causality itself (perhaps very rightly so - again, I am not aware); however, the salient point is that STR gave rise to the conception that Hume's insight had finally been resolved. To be clear, in order to resolve Hume's insight one only needed to demonstrate that through empiricism it is possible to establish causality.

comment by [deleted] · 2012-08-03T14:41:19.356Z · LW(p) · GW(p)

The notion of cause and effect was captured mathematically, statistically, and succinctly by Judea Pearl; empiricism is defined by Bayes' Theorem.
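(For reference, the theorem being invoked:)

```latex
% Bayes' theorem: probability of hypothesis H given evidence E
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}
```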

comment by sakranut · 2012-08-03T02:14:20.621Z · LW(p) · GW(p)

I was referring to the dispute in the 17th and 18th centuries, with Hume, Berkeley, and Locke on the empiricist side, and Descartes, Leibniz, and Spinoza on the rationalist side, as described in this paper.

Out of curiosity, what is the connection between atoms and causality?

Replies from: Zaine
comment by Zaine · 2012-08-03T02:32:16.499Z · LW(p) · GW(p)

Enlightening! Thank you for the paper.

Sorry, it was Einstein's theory of special relativity that resolved Hume's insight, not atomic theory. Basically, Hume argued that if you see a ball X hit a ball Y, and subsequently ball Y begins rolling at the same speed as ball X, all one has really experienced is the perception of ball X moving next to ball Y and the subsequent spontaneous acceleration of ball Y. Infinity out of infinity times you may experience the exact same perception whenever ball X bumps into ball Y, but in Hume's time there was no empirical way to prove that the collision of ball X into ball Y caused the latter's acceleration. With this, you can. I'm afraid I can't answer in any more depth than that, as I myself don't understand the mathematics behind it. Anyone else?

comment by Haladdin · 2012-07-25T06:49:20.998Z · LW(p) · GW(p)

Hi, LessWrong,

I used to entertain myself by reading psychology and philosophy articles on Wikipedia and following the subsequent links. When I was really interested in a topic, though, I used Google to find further websites that would provide me with more information on said topics. Around late 2010, I found that some of my search results led to this very website. Less Wrong proved to be a little too dense for me to enjoy; I needed to fully utilize my cognitive capabilities to even begin to comprehend some of the articles posted here.

Since I was looking for entertainment, I decided to ignore all links to LW for quite a while, but LW results came up in my queries more and more frequently with time. I finally decided to read some of the posts, and some of the articles (determinism, cryonics, and death-related ones) described conclusions I'd derived independently. It was quite shocking, as I thought of myself as a rather unique thinker. Thinking more about this, I came to a conclusion. Instead of having a "eureka" moment every couple of months only to arrive at the same conclusion people reached centuries ago, I decided to optimize my time - compressing the learning/awakening period by reading the Sequences instead of attempting to figure out everything myself.

Funnily enough, now that I've given myself the goal of reading them, I detest reading the same articles that I enjoyed before. I'm sure that the explanation and the solution to this conundrum can be found on this website as well.

Lastly, a note to ciphergoth: I do not identify myself as a rationalist, as the second sentence of this post implies. I found out that labeling myself limits my words, my actions, and, more importantly, my thoughts, so I refuse to label myself by my political ideologies, gender, nationality, etc. I even go by a few different names so I can become more detached from my name itself, as I find people to be irrationally attached to names even though a name is nothing but an identifying label. I will use rationalist techniques and tools, and I may even grow to adopt your ideologies, but I will not identify myself as a rationalist. At least not until the benefits of applying labels to myself become more concrete.

Nice to meet all of you.

comment by tmosley · 2012-07-21T02:01:12.549Z · LW(p) · GW(p)

So I recently found Less Wrong after seeing a link to the Harry Potter fanfiction, and I have been enthralled with the concept of rationalism since. The concepts are not foreign to me, as I am a chemist by training, but the systematization and focus on psychology keep me interested. I am working my way through the sequences now.

As for my biography, I am a 29-year-old laboratory manager trained as a chemist. My lab develops and tests antimicrobial materials and drugs based on selenium's oxygen-radical-producing catalysis. It is rewarding work if you can get it, which you can't, because our group is the only one doing it ;)

Besides my primary field of work, I am generally interested in science, technology, economics, and history.

I am looking at retirement from the 9-5 life in the next year or so, and am interested in learning the methods of rationality, which I feel would allow me to excel in other endeavors in the future. I already find myself linking to articles from here to explain and predict human behavior.

This place is overwhelming with its content. I don't think I have ever seen a website with a comment section so worth reading. I fear that I could spend the remainder of my life reading and never have the time to DO anything.

In the realm of politics, I would be considered an anarcho-capitalist, though I can see value in any and all positions between there and where the USA's politics currently lie. I am an atheist to the extent that I don't believe in an anthropomorphic god, though reading the "an alien god" sequence (not quite sure how to post links here yet) certainly made me realize that certain pervasive and extremely powerful processes do exist, so I am reexamining some of my long-held assumptions in that arena.

I spend quite a lot of my time in the online "Fight Club" that is Zerohedge's comment section, so apologies in advance if I come off as sharp in some of my remarks. I prefer appeals to logic and reason as a rule, but sometimes I resort to pathos and personal attack, especially when I feel that I am being personally attacked. This impulse has been greatly curbed by what I have read here, however, and I find that I am able to pierce through inflammatory arguments much more coolly, which I count as a positive result for all involved.

In any event, I generally try not to comment when I feel ill-informed on a subject, but when I think I have something to contribute, I will. I am really enjoying the site so far.

Now, back to reading. So much to read, so little time.

comment by PhoenixMarks · 2021-03-03T21:50:16.887Z · LW(p) · GW(p)

Greetings!

I'll start with how I made my way here. Unsurprisingly, it was HPMOR. Perhaps even less surprisingly, said fanfic was recommended on Tumblr. After reading that excellent story and a couple of follow-up fanfics, I decided that rational fics are the thing for me, and also that, as someone who desperately wants to write a good story, the underlying rationality is something I needed to get a handle on. (Also, for a large portion of my life I've been obsessed with logic.)

I've acquired Rationality: From AI to Zombies and am slowly working my way through it. Stunting progress is my limited available time, and also my limited knowledge of mathematics. (Tangent: this was caused by poor teachers convincing me that I'm just inherently bad at mathematics; only once I was much older and had the opportunity to apply mathematics beyond basic addition/subtraction/multiplication/division to real-world experiences did I realise that when I learn things my way, I do well enough, thank you.)

In the meantime, I've decided to join this community, though I'm likely to be a very infrequent participant, at least to start with.

So about me.

Firstly, if something seems odd about how I write: I am South African, and while I grew up learning English and am actually more comfortable with it than my native Afrikaans, I've also spent most of my life speaking to people who aren't very good at English, if they speak any at all.

I am a security officer. I was trained for control, but I'm currently a guard. I work night shifts exclusively, and it's during these night shifts that most of my activity is likely to occur. Let's hope sleep deprivation doesn't become too obvious.

I hope to study computer science as soon as finances allow, or alternatively forensic science. If the former, I'd like to work in cyber security and/or computer forensics.

I spent the early years of my life as a Seventh Day Adventist and from my teens until just a handful of years ago I explored various pagan traditions, identifying as a pagan with a general Hellenistic bent. Right now I'd call myself an atheist (or, if pressed, a Discordian and/or Pastafarian) but I'm happy to discuss religious views.

I was dubiously diagnosed as schizophrenic late in my teens - my psychiatrist said pre-schizophrenic, which is not a phrase I've encountered anywhere else before or since, so if someone can provide clarity on what that means, that'd be great. I'm assuming it refers to showing early signs of schizophrenia, at a stage where it's more manageable. I took medication for a time, but have since managed to keep myself in check through an effort of will and a very particular mindset.

I'm very interested in the workings of the mind, and, as a point of interest, I've spent a lot of time learning about the underlying theory of MBTI which, while obviously pseudoscience, I think can still be a tool of limited use. For those interested, I'm most likely typed either as an ISTP or INTP.

Other interests include reading (obviously). Aside from rational fiction, I especially enjoy the Wheel of Time, the Discworld novels, and LitRPG stories. In rational fics, I'm anticipating the continuations of Pokemon: The Origin of Species and A More Perfect Union. I'd recommend the fanfics Something Blue (Mass Effect) and Dragon From Ash (Skyrim) to any fanfic readers who want fairly rational stories that aren't necessarily written as rational fics. In LitRPG I recommend Everybody Loves Large Chests, and for science fiction I recommend anything by Daniel Suarez. I try to write, but I have difficulty continuing stories for long periods of time. I love video games.

I've rambled on enough, now. Thanks for the awesome community!

comment by CharlieDavies · 2012-11-08T23:56:49.486Z · LW(p) · GW(p)

Hi, Charlie here.

I'm a middle-aged high-school dropout, married with several kids. Also a self-taught computer programmer working in industry for many years.

I have been reading Eliezer's posts since before the split from Overcoming Bias, but until recently I only lurked on the internet -- I'm shy.

I broke cover recently by joining a barbell forum to solve some technical problems with my low-bar back squat, then stayed to argue about random stuff. Few on the barbell forum argue well -- it's unsatisfying. Setting my sights higher, I now join this forum.

I'll probably start by trying some of the self-improvement schemes and reporting results. Any recommendations re: where to start?

Replies from: CharlieDavies
comment by CharlieDavies · 2012-11-09T03:19:58.911Z · LW(p) · GW(p)

Never mind, I found the Group rationality diary which is exactly the right aggregation point for self-improvement schemes.

comment by CAE_Jones · 2012-11-06T14:37:10.660Z · LW(p) · GW(p)

Apologies in advance for the novella. And any spelling errors that I don't catch (I'm typing in notepad, among other excuses).
It's always very nice when I come across something that reminds me that there are not only people in the world who can actually think rationally, but that many of them are way better at it than me.
I don't like mentioning this so early in any introduction, but my vision is terrible to the point of uselessness; I mostly just avoid calling myself "blind" because it internally feels like that would be giving up on the tiny power left in my right eye. I mention it now just because it will probably be relevant by the end of my rambling. (Feel free to skip to the last paragraph if you'd rather avoid all the backstory.)
I'm from northeast Arkansas. My parents were never really religious (I kinda internalized the ambient mythos of "God=good and fluffy cloud heaven, Satan=bad and fire and brimstone hell" just because it seemed to be the accepted way of things among all of my other relatives. Turns out my dad identified himself as a Buddhist after one of our many trips to Disneyworld. ... they.... really like Disney. They have a dog named Disney.). They did emphasize the importance of education and individualism and all of those ideals from the late eighties and nineties that turned out to be counterproductive (though I'm having trouble finding the cracked.com articles that point this out in the most academically sound manner imaginable. (note: the previous statement was sarcastic)). So I tried to learn as much as I could in the general direction of science. Being that this was all done at public schools, and that a whole 0 of the more advanced science books I wanted were available in braille, this didn't get me very far.
I did my last two years of high school at the Arkansas School of Mathematics and Science (which added "and the arts" when I got there, though before they'd actually added an art program), and somehow graduated without actually doing much science (I did a study of the effects of atmosphere on dreams for the year-and-a-half science project that everyone had to do, but forewent trying to organize an experiment and just wrote a terrible research paper). Then I got to college, and everything went to hell. I'd somehow managed to sneak around learning things like vectors, dot/cross products, and actual lab reports in high school, and the experiments we did in gen physics never felt like experiments so much as demonstrations ("Behold: gravity still works!"). This is about where it became extremely clear to me that I simply could no longer make myself do things by force of will alone (and it became doubly clear that no one else seemed capable of understanding that I wasn't just "blowing off" everything). It took several semesters after that for me to realize that I had seriously missed out on some basic life things and that I actually needed friends (and that I needed to seriously reevaluate what qualified as friendship). They finally made me pick a new major, seeing as I'd kinda kept away from physics after the first semester ended in disaster. So I took the quickest way out, that being French, and now I'm still living with my parents, have about a dozen essays on Franco-African literature to write, and am about $30,000 in debt (that's only counting the loans in my name; my parents took the rest of the financial burden in their names).
So I mostly try to focus on creative endeavors, such as fiction and video games. Except the lack-of-vision thing makes that harder (I've been focusing on developing audio games for the past couple years, but it's virtually impossible to actually live off the tiny audio games market. Oh, but I could write pages on my observations there, and I rather want to, as I'm sure many of you could make some meaningful observations/analyses on some of those trends.).
... Well crap, I just wrote a few pages without actually getting to anything useful. I have serious need of better rationality skills than I'm currently applying: independence, dealing with emotional/cognitive weirdness, finding ways to actually travel outside of my house (public transportation might as well not exist anywhere but the capital in Arkansas, and good sidewalks are hard to find), social issues, productivity issues, finding ways to get in physical activity, being unemployed with an apparent hiring bias against disabilities, financial ability, etc. The total money that I have to work with is less than $400, so I can't exactly sign up for cryonics or hire a driver to take me places. And this wall-o-text demonstrates my horrible disorganization rather well, I fear. (Hm, is there not a way to preview a comment before one posts it?)

comment by Neurosteel · 2012-11-06T11:27:22.970Z · LW(p) · GW(p)

After having read all of the Sequences, I suppose it's time I actually registered. I did the most recent (Nov 2012) survey. I'm doing my PhD in the genetics of epilepsy (so a neurogenetics background is implied). I'm really interested in branching out into the field of biases and heuristics, especially from a functional imaging and genetics perspective (my training includes EEG, MRI/fMRI, surgical tissue analysis, and all the usual molecular stuff/microarrays).

Experience with grant writing makes me lean more toward starting my own biotech or research firm and going from there, but academia is an acceptable backup plan.

comment by Cinnia · 2012-11-05T21:24:03.143Z · LW(p) · GW(p)

Hi, I’m Cinnia, the name I go by on the net these days. I found my way here by way of both HPMOR and Luminosity about 8 months ago, but never registered an account until the survey.

Like Alan, I’m also in my final year of secondary school, though I’m on the other side of the pond. I love science and math and plan to have a career in neuroscience and/or psychiatry after I graduate. This year I finally decided to branch out my interests a bit and joined the local robotics club (a part of FIRST, if anyone’s curious), and it’s possibly the best extracurricular I’ve ever tried.

I’ve noticed that not many virtual communities manage to hold my interest for long, for a number of reasons, but I’ve been lurking around LessWrong for about 8 months now and find it incredibly enlightening. I am (very) slowly working my way through the Sequences and some of the top articles here, but have finished Eliezer’s “Three Worlds Collide” and Alicorn’s original posts on Luminosity.

I’m still very much in the process of learning and trying to understand many of the concepts LessWrong explores, so I’m not sure how often I’ll be contributing. However, I do have some understanding of Riso and Hudson’s Enneagram and Spiral Dynamics, so I suppose there’s some groundwork that I can build from in the future.

Anyway, I like LessWrong’s mission and am happy to have finally joined the community.

Edited to clarify: Spiral Dynamics is an entirely separate psychological theory from the Enneagram, in case it wasn't clear.

Replies from: Bugmaster, Alicorn
comment by Bugmaster · 2012-11-05T22:21:35.855Z · LW(p) · GW(p)

What are "Riso and Hudson’s Enneagram and Spiral Dynamics"? I Googled the terms, but didn't see anything that I could immediately relate to Less Wrong, hence my curiosity.

Replies from: Cinnia
comment by Cinnia · 2012-11-05T22:47:47.111Z · LW(p) · GW(p)

My apologies for not making it clearer. The Enneagram and Spiral Dynamics are two entirely separate subjects, though both related to psychology. At least one other user here knows about the Enneagram — Mercurial, I think — though I'm not sure if anyone knows about the Spiral. The Enneagram is a model for human personality types, and the Spiral is a theory of evolutionary psychology.

Personally, the way I've learned the Enneagram is from this book, with help from another person who is far more knowledgeable than I am. That same person helped me to understand the Spiral and didn't teach me with books, so I'm afraid I can't refer you to any particular resources, though I assure you there's plenty out there. Don Beck, who wrote a book on it in the late nineties, is the name that usually comes up whenever people talk about it, though.

Replies from: Bugmaster
comment by Bugmaster · 2012-11-05T22:49:21.837Z · LW(p) · GW(p)

Thanks for the info!

comment by Alicorn · 2012-11-05T22:09:24.065Z · LW(p) · GW(p)

Welcome! I like it when people come here by way of my stuff :)

Replies from: Cinnia
comment by Cinnia · 2012-11-06T14:06:08.491Z · LW(p) · GW(p)

Thanks! Reading Luminosity and Radiance helped me move on from most of the disgust and anger I harbored toward the original series, and after reading the other posts on luminosity, I'm starting to observe and monitor my thoughts and actions more often.

comment by alanog · 2012-11-04T15:20:13.157Z · LW(p) · GW(p)

Hi, I'm Alan, a student in my final year of secondary school in London, England. For some reason I'm finding it hard to remember how and when I stumbled upon Less Wrong. It was probably in March or April this year, and I think it was because Julia Galef mentioned it at some point, though I may be misremembering.

Anyway, I've now read large chunks of the Sequences (though I can never remember which bits exactly) and HPMOR, and enjoy reading all the discussion that goes on here. I'd never registered as a user before, since I'd never felt a burning need to comment on anything, but I thought I should take the survey because I seemed to be part of its intended audience - so maybe I'll find things to say now.

I only study maths and science subjects in school, and am planning to study for a science degree when I head off to university next year. However, I tend to hang out more with the philosophically inclined people in school, and have had much fun introducing and debating Newcomb's problem, prisoner's dilemmas, torture vs dust specks, transhumanism and the like with them.

LessWrong is definitely one of those things I regret not finding out about earlier. It's my favourite website now, although I should probably stop using it as a place to procrastinate so much.

comment by lucb1e · 2012-10-17T15:31:31.702Z · LW(p) · GW(p)

Hello everyone, I'm Luc, better known on the web as lucb1e. (I prefer not to advertise my last name for privacy reasons.) I'm currently a 19 year old student, doing application development in Eindhoven, The Netherlands.

Like Aaron Swartz, I meant to post in discussion but don't have enough karma. I've been reading articles from time to time for years now, so I think I have an okay idea what fits on this site.

I think I first ended up on Eliezer's NPC story, looked around from there, read about the AI Box experiment (which I later conducted myself), and eventually found LessWrong proper. This was probably about three or four years ago. Since then I've read some articles, sometimes being linked here and sometimes coming here by myself. I'm a bit hesitant to participate in the community because it seems quite out of my league; everybody knows a ton about rationality whereas I've only read some bits and pieces. I think I have an okay idea of what is appropriate to post, though, and especially of where I should not try to post :)

comment by [deleted] · 2012-10-07T00:16:42.073Z · LW(p) · GW(p)

Well, I haven't really figured out what you all need to know about me, but I suppose there must be something relevant. Let's start with why I'm here.

I can remember being introduced to Less Wrong in two ways, though I don't know in what order. One was through HPMoR, and the other through a post about Newcomb's problem. Neither of those really brought me here in a direct way, though. I guess I am here based on the cumulative sum of recommendations and mentions of LW made by people in my social circle, combined with a desire for new reading material that falls somewhere between SF/fantasy novels and statistics textbooks in the concentration it demands. So, since I want stuff to read, preferably lots of it, I am starting with the Sequences.

I think the next-most-relevant information here is what fields I am knowledgeable (or not) about. My single area of greatest expertise is pure mathematics; I dropped out of grad school most of the way (so I was told by people who should know) to a PhD with a thesis in algebraic topology, and am now a math tutor at the high school and college levels. I have a big gap in my useful math knowledge around statistics, though, which I am now working to fill. Hence the textbooks. I also know more than the average person about archaic household chores like canning and sewing.

comment by aotell · 2012-09-20T13:37:38.994Z · LW(p) · GW(p)

Hi everyone!

I'm a theoretical physicist from Germany. My work is mostly about the foundations of quantum theory, but also information theory and non-commutative geometry. Currently I'm working as head of research in a private company.

As a physicist I have been confronted with all sorts of (semi-)esoteric views about quantum theory and its interpretation, and my own lack of a better understanding got me started exploring the fundamental questions involved in understanding quantum theory on a rational basis. I believe that all mainstream interpretations have issues and that the real answer is a rigorous theory of quantum measurement. On my blog at http://aquantumoftheory.wordpress.com I argue that quantum theory does not have to be interpreted, and I propose a rational alternative to interpretation. This is also the main reason I came here: to discuss my results with other rationalists and see if they are indeed satisfying. So your feedback is very welcome!

Other interests of mine include cognitive psychology, music (both active and passive), cooking, and photography, as well as science in general and the philosophy of science - at least its more rational parts.

Replies from: NancyLebovitz, shminux
comment by NancyLebovitz · 2012-09-20T14:36:10.707Z · LW(p) · GW(p)

Welcome to Less Wrong!

I'm interested in your idea that quantum theory doesn't have to be interpreted.

Replies from: aotell
comment by aotell · 2012-09-20T14:57:46.314Z · LW(p) · GW(p)

Thanks Nancy!

Have you checked out the posts at my blog? I don't know your background, but maybe you will find them helpful. If you would like a more accessible breakdown, I can write something here too. In any case, thank you for your interest, highly appreciated!

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-20T15:45:34.582Z · LW(p) · GW(p)

From your blog and your paper, your idea seems to be that the quantum state of the universe is a superposition, but only one branch at a time is ever real, and the selection of which branch will become real at a branching is nondeterministic. Well, Bohmian mechanics gets criticised for having ghost wavepackets in its pilot wave - why are they less real than the wavepackets which happen to be guiding the classical system - and you must be vulnerable to the same criticism. Why aren't the non-dominant branches (page 11) just as real as the dominant branch?

Replies from: aotell
comment by aotell · 2012-09-20T16:06:38.011Z · LW(p) · GW(p)

Thank you for your feedback Mitchell,

I'm afraid you have not understood the paper correctly. First, whether a system is in a superposition depends on the basis you use to expand it; it's not a physical property but one of description. The mechanism of branching is actually derived, and it doesn't come from superpositions but from eigenstates of the tensor factor space description that an observer is unable to reconstruct. The branching is also perfectly deterministic. I think your best option for understanding how the dominance of one branch and the non-reality of the others emerges from the internal observation of unitary evolution is to work through my blog posts. I try to explain precisely where everything comes from and why it has to follow. The blog is also more comprehensible than the paper, which I will have to revise at some point. So please see if you can make more sense of it from my blog, and let me know whether you can follow what I'm trying to say there. Unfortunately the precise argument is too long to present here in all detail.

Replies from: aotell
comment by aotell · 2012-09-20T18:04:18.257Z · LW(p) · GW(p)

I think it will be helpful if I briefly describe my approach to understanding quantum theory, so that you can put my statements in the correct context. I assume a minimal set of postulates, namely that the universe has a quantum state and that this state evolves unitarily, generated by strictly local interactions. The usual state space is assumed. Specifically, there is no measurement postulate or any other postulate about probability measures or anything like that. Then I go on to define an observer as a mechanism within the quantum universe that is realized locally and gathers information about the universe by interacting with it. With this setup I am able to show that an observer is unable to reconstruct the (objective) density operator of a subsystem that he is part of himself. Instead he is limited to finding the eigenvector belonging to the greatest eigenvalue of this density operator. It is then shown that the measurement postulate follows as the observer's description of the universe, specifically for certain processes that evolve the density operator in a way that changes the order of the eigensubspaces sorted by their corresponding eigenvalues. That is really all. There are no extra assumptions whatsoever. So if the derivation is correct, then the measurement postulate is already contained in the unitary structure (and the light cone structure) of quantum theory.
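For readers following along, here is a minimal numerical sketch of the object in question - it relies on nothing from the derivation above, only standard linear algebra, and the example density operator is made up for illustration:

```python
import numpy as np

# A toy density operator for a qubit subsystem: a mixture of |0> with
# weight 0.7 and |+> = (|0> + |1>)/sqrt(2) with weight 0.3.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
rho = 0.7 * np.outer(ket0, ket0) + 0.3 * np.outer(ketp, ketp)

# Hermitian eigendecomposition; numpy returns eigenvalues in ascending
# order, so the last column is the eigenvector belonging to the greatest
# eigenvalue - the "dominant eigenstate" in the terminology above.
eigenvalues, eigenvectors = np.linalg.eigh(rho)
dominant = eigenvectors[:, -1]
print(eigenvalues[-1], dominant)
```

The claim under discussion is that this single eigenvector, rather than rho itself, is all an internal observer can reconstruct.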

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-20T21:33:58.238Z · LW(p) · GW(p)

As you would know, the arxiv sees several papers every month claiming to have finally explained quantum theory. I would have seen yours in the daily listings and not even read it, expecting that it is based on some sort of fallacy, or on a "smuggled premise" - I mean that the usual interpretation of QM will be implicitly reintroduced (smuggled into the argument) in how the author talks about the mathematical objects, even while claiming to be doing without the Born rule. For example, it is very easy for this to happen when talking about density matrices.

It is a tedious thing to go through a paper full of mathematics and locate the place where the author makes a conceptual mistake. It means you have to do their thinking for them. I have had another look at your paper, and seen a little more of how it works. Since you are here and wanting to promote your idea, I hope you will engage with me even if I am somewhat "lazy", in the sense that I haven't gone through the whole thing and understood it.

So first of all, a very simple issue that you could comment on, not just for my benefit but for the benefit of anyone who wants to know what you're saying. An "observer" is a physical being who is part of the universe. The universe is described by a quantum state vector. The evolution of the state vector is deterministic. How do you get nondeterministic evolution of the observer's state, which ought to be just a part of the overall state of the universe? How do you get nondeterminism of the part, from determinism of the whole?

We know how this works in the many-worlds interpretation: the observer splits into several copies that exist in parallel, and the "nondeterminism" is just an individual copy wondering why it sees one eigenvalue rather than another. The copy in the universe next door is thinking the same thing but with a different eigenvalue, and the determinism applies at the multiverse level, where both copies were deterministically produced at the same time. That's the many-worlds story.

But you have explicitly said that only one branch exists. So how do you reconcile nondeterminism in the part with determinism in the whole?

Second, a slightly more technical issue. I see you writing about the observer as confined to a finite local region of space, into which particles unpredictably enter and scatter. But shouldn't the overall state vector be a superposition of such events? That is, it will be a superposition of "branches" where different particles enter the region at different times, or not at all. Are you implicitly supposing that the state vector outside the small region of space is already "reduced" to some very classical-looking basis?

Replies from: aotell
comment by aotell · 2012-09-20T22:32:51.822Z · LW(p) · GW(p)

I see it exactly like you. I too see the overwhelming number of theories that usually make more or less well-hidden mistakes. I too know the usual confusions regarding the meaning of density matrices, the fallacies of circular arguments, and all the back doors for the Born rule. And that is exactly what drives me to deliver something that is better and does not have to rely on almost esoteric concepts to explain the results of quantum measurements.

So I guarantee you that this is very well thought out. I have worked on this very publication for 4 years. I flipped the methods and results over and over again, looked for loopholes or logical flaws, tried to improve the argumentation. And now I am finally confident enough to discuss it with other physicists.

Unfortunately, you are not the only physicist who has developed an understandable skepticism regarding claims like mine. This makes it very hard for me to find someone who will do exactly what you describe as hard work: thinking the whole thing through. I'm in desperate need of someone who will really look into the details and follow my argument carefully, because that is required to understand what I am saying. All answers that I can give you here will be somewhat out of context and will probably start to look silly at some point, but I will still try.

I do promise that if you take the time to read the blog (leave the paper for later) carefully, you will find that I'm not a smuggler and that I am very careful with deduction and logic.

To answer your questions: first of all, it is important that the observer's real state and the state he assumes himself to be in are two different things. The objective observer state is the usual state according to unitary quantum theory, described by a density operator - or, as I prefer to call it, a state operator. There is no statistical interpretation associated with that operator; it's just the best possible description of a subsystem state. The observer does not know this state, however, if he is part of the system that the state belongs to. And that is the key, carefully derived result: the observer can only know the eigenstate of the density operator with the greatest eigenvalue. Note that I'm not talking about eigenstates of measurement operators. The other eigensubspaces of the density operator still exist objectively; the observer just doesn't know about them. You could say that the "dominant" eigenstate defines the reality for the observer. The others are just not observable, or reconstructable from the dynamic evolution.

Once you understand this limitation of the observer, it follows easily that an evolution that changes the eigenvalues of the density operator can change their order too. So the dominant eigenstate can suddenly switch from one to another, like a jump in the state description. This jump is determined by external interactions, i.e. interactions of the system the observer describes with inaccessible parts of the universe. An incoming photon could be such an event, and in fact I can show that the information contained in the polarization state of an incoming photon is the source of the random state collapse that generates the Born rule. The process that creates this outcome is fully deterministic, though, and can be written down explicitly, which I do in my blog and the paper. The randomness just comes from the unknown state of the unobserved but interacting photon.
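The eigenvalue-crossing jump just described is easy to see numerically. A sketch under made-up numbers (diagonal state operators chosen for simplicity; this is not the actual dynamics of the paper): as a parameter varies, the eigenvalue order flips and the dominant eigenvector switches discontinuously.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
rho_a = np.diag([0.9, 0.1])  # dominant eigenstate |0>
rho_b = np.diag([0.1, 0.9])  # dominant eigenstate |1>

# Interpolate between the two state operators and watch the dominant
# eigenstate: its weight on |0> stays 1.0, then jumps to 0.0 once the
# eigenvalues cross at t = 0.5 (where the operator is degenerate and
# the tie is broken arbitrarily).
for t in np.linspace(0.0, 1.0, 11):
    rho = (1 - t) * rho_a + t * rho_b
    _, eigenvectors = np.linalg.eigh(rho)
    dominant = eigenvectors[:, -1]
    print(f"t={t:.1f}  weight of |0>: {abs(dominant @ ket0) ** 2:.1f}")
```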

So as you can see this is fundamentally different from MWI, and it is also much more precise about the mechanism of the state reduction and the source of the randomness. And the Born rule follows naturally. No decision theory, and no artificial assumptions about state robustness, a preferred basis, or anything like that. Just a natural process that delivers an event with a probability measurable by counting events.

Your last question, about the environment being classical, is a very good one. I do not model the environment as classical; in fact there is no assumption about it other than that it belongs to a greater quantum system and that it is not part of the system that the observer wants to describe. There are also no restrictions about anything being in a superposition. That problem resolves itself because the state described by the observer turns out to be a pure state of the local system, always. So even if you assume some kind of superposition of these events, you will always get a single outcome. The scattering process in fact has the property of sending superpositions to different eigensubspaces of the state operator, so that it cleans up everything and makes it more classical, just as the measurement postulate would.

I know I am demanding a lot here, but I really think you will not regret spending time on this. Let me know what else I can explain.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-21T01:40:56.261Z · LW(p) · GW(p)

Here's another question. Suppose that the evolving wavefunction psi1(t), according to your scheme, corresponds to a sequence of events a, b, c,... and that the evolving wavefunction psi2(t) corresponds to another sequence of events A, B, C... What about the wavefunction psi1(t)+psi2(t)?

Replies from: aotell
comment by aotell · 2012-09-21T07:45:55.524Z · LW(p) · GW(p)

You really come up with tricky questions, good :-). I think there are several ways to understand your question, and I am not sure which one was intended, so I'll make a few assumptions about what you mean.

First, an event is a nonlinear jump in the time evolution of the subjectively perceived state. The objective global evolution is still unitary and linear, however. In between the perceived nonlinear evolution events you have ordinary unitary evolution, even subjectively. So I assume you mean the subjective states psi1(t) and psi2(t). The answer is then that, in general, superpositions are no longer valid subjective evolutions. You can still use linearity piecewise between the events, but the events themselves don't mix. There are exceptions, when both events happen at the same time and the output is compatible - as in, it can be interpreted as having measured a subspace instead of a single state, which requires mutual orthogonality. So in other words: in general there is no global state that would locally produce a superposition if there are nonlinear local events.

However, if you mean that psi1 and psi2 are the global states that produce the event lists a,b,c and A,B,C respectively, and you add those up, then the locally reconstructed state evolution will get complicated. If you add with coefficients psi(t) = c1 psi1(t) + c2 psi2(t), then you will get the event sequence a,b,c for |c1|>>|c2| and the sequence A,B,C for |c2|>>|c1|. What happens in between depends on the actual states and how their reduced state eigenspaces interact. You may see an interleaved mix of events, some events may disappear, or you may see a brand new event that was not there before. I hope this answers your question.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-22T02:45:44.110Z · LW(p) · GW(p)

I find your reference to "the subjectively perceived state" problematic, when the physical processes you describe don't contain a brain or even a measuring device. Freely employing the formal elements and the rhetoric of the usual quantum interpretation, when developing a new one supposedly free of special measurement axioms and so forth, is another way for the desired conclusion to enter the line of reasoning unnoticed.

In an earlier comment you talk about the "objective observer state", which you describe as the usual density operator minus the usual statistical interpretation. Then you talk about "reality for the observer" as "the eigenstate of the density operator with the greatest eigenvalue", and apparently time evolution "for the observer" consists of this dominant eigenstate remaining unchanged for a while (or perhaps evolving continuously if the spectrum of the operator is changing smoothly and without eigenvalue crossings?), and then changing discontinuously when there is a sharp change in the "objective state".

Now I want to know: are we really talking about states of observers, or just of states of entities that are being observed? As I said, you're not describing the physics of observers, you're not even describing the physics of the measurement apparatus; you're describing simple processes like scattering. So what happens if we abolish references to the observer in your vocabulary? We have physical systems; they have an objective state which is the usual density operator; and then we can formally define the dominant eigenstate as you have done. But when does the dominant eigenstate assume ontological significance? For which physical systems, under which circumstances, is the dominant eigenstate meaningful - brains of observers? measuring devices? physical systems coupled to measuring devices?

Replies from: aotell
comment by aotell · 2012-09-22T09:00:24.086Z · LW(p) · GW(p)

Your question is absolutely valid and also important. In fact, most of what I write in my paper and the blog is about answering precisely this.

My observer is well defined, as a mechanism that is part of a quantum system and interacts with that quantum system to gather information about it. He is limited by the locality of interaction and the unitary nature of the evolution. I imagine the observer to be a physicist who tries to describe the universe mathematically, based on what he sees. But that is only a trick in order to have a mathematical formulation of the subjective view. The observer is prototypical for any mechanism that tries to create a model of its surroundings. This approach is very different from modeling cognitive mechanisms, and it's also much more general. The information restriction is so fundamental that you can talk about his reconstruction of what is going on as local subjective reality, since everyone has to share it.

The meaning of the dominant eigensubspace is then derived from this assumption. Specifically, I am able to identify a non-trivial transformation on the objective density operator of the observer's subsystem that he cannot gain any knowledge about. This transformation creates a class of equivalent representations that are all equally valid descriptions which the observer could use for making a model of his environment (and himself). The arbitrariness of the representation connected with this reconstruction, however, forces him to reduce his state description to something more elementary, something that all equivalent descriptions have in common. And that turns out to be the dominant eigensubspace as his best option. This point is very important, and the derivation I provide in the blog is rigorous and detailed. The result is that the subjective reality as reconstructed by any such observer evolves unitarily as long as the greatest eigenvalue does not intersect other eigenvalues (the observer himself cannot know the eigenvalues either), and discontinuously when a formerly smaller eigenvalue crosses the greatest one to become the new dominant eigenvalue. This requires an interaction with a part of the system that is not contained in the objective local state description, like an incoming photon.

This approach also has the advantage that you don't have to actually model the observer. You still know what information is available to him. That is why the observer does not even have to be part of the system that you want to "subjectify": you already know how he would describe it. Specifically, you don't have to consider any kind of entanglement between observer states and observed states. The dominant eigensubspace is a valid description of every system that the describing entity is part of and that contains everything the observer is directly interacting with. If you want to get quantum jumps, you also need an external, inaccessible environment.

Summarizing, there's no need to postulate the ontology or relevance of the dominant eigensubspace. I was very careful to only make assumptions that are transparent and to derive everything from there. Specifically I am not adopting any definition or terminology from interpretations of quantum theory.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-23T03:14:57.532Z · LW(p) · GW(p)

I finally got as far as your main calculation (part IV in the paper). You have a two-state quantum system, a "qubit", and another two-state quantum system, a "photon". You make some assumptions about how the photon scatters from the qubit. Then you show that, given those assumptions, if the coefficients of the photon state are randomly distributed, then applying the Born rule to the eigenvalues of the old "objective state" (density operator) of the qubit, gives the probabilities for what the "dominant eigenstate" of the new objective state of the qubit will be (i.e. after the scattering).

My initial thoughts are 1) it's still not clear that this has anything to do with real physical processes 2) it's not surprising that an algebraic combination of quantum coefficients with random variables is capable of yielding new random variables with a Born-rule distribution 3) if you try to make this work in detail, you will end up with a new modification of quantum mechanics - perhaps a stochastic, piecewise-linear Bohmian mechanics, or just a new form of "objective collapse" theory - and not a derivation of the Born rule from within quantum mechanics.

Are you saying that actual physical systems contain populations of photons with randomly distributed coefficients such as you describe? (Edit: Or perhaps just that this is a feature of electromagnetically mediated measurement interactions?) It sounds like a thermal state, and I suppose it's plausible that localized thermal states are generically involved in measurement interactions, but these details have to be addressed if anyone is to understand how this is related to actual observation.

Replies from: aotell
comment by aotell · 2012-09-23T07:57:17.637Z · LW(p) · GW(p)

There must be something that you have fundamentally misunderstood. I will try to clear up some aspects that I think may cause this confusion.

First of all, the scattering processes presented in the paper are very generic to demonstrate the range of possible processes. The blog contains a specific realization which you may find closer to known physical processes.

Let me explain in detail again what this section is about, maybe this will help to overcome our misunderstanding. A photon scatters on a single qubit. The photon and the qubit each bring in a two dimensional state space and the scattering process is unitary and agrees with conservation laws. The state of the qubit before the interaction is known, the state of the photon is external to the observer's system and therefore entirely unknown, and it is independent of the state of the qubit.

The result of the scattering process is traced over the external outgoing photon states to get a local objective state operator. You then write that I "apply the Born rule", but that is exactly what I don't do. I use the earlier-derived fact that a local observer can only reconstruct the eigenstate with the greatest eigenvalue. This will result in getting either the qubit's |0> or |1> state.

In order to get the exact probability distribution of these outcomes you have to assume exactly nothing about the state of the photon, because it is entirely unknown. If you assume nothing, then all polarizations are equally likely, and you get an SU(2)-invariant distribution of the coefficients. That's all. There are no assumptions whatsoever about the generation of the photons, them being thermal, or anything else. Just that all polarizations are equally likely. This is a very natural assumption and hard to argue against. The result is then not only the Born rule but also an orthogonal basis which the outcomes belong to.
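One mathematical ingredient here is easy to check independently (this is a toy verification, not a simulation of the scattering process above): for SU(2)-invariant, i.e. Haar-random, states of a two-level system, the squared coefficient on any fixed basis state is uniformly distributed on [0,1], so a threshold comparison against a Born weight p comes out with frequency p. The value of p below is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Haar-random (SU(2)-invariant) photon states: normalized complex
# Gaussian 2-vectors are uniformly distributed over the state space.
states = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
states /= np.linalg.norm(states, axis=1, keepdims=True)

# The squared weight on a fixed basis state is uniform on [0, 1]...
weights = np.abs(states[:, 0]) ** 2

# ...so comparing it against a fixed Born weight p succeeds with frequency p.
p = 0.3  # hypothetical Born weight of one outcome
print(np.mean(weights < p))  # prints approximately 0.3
```

Whether the specific scattering process in the blog actually channels this uniformity into outcome statistics is the claim under discussion, not something shown here.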

So if you accept the derivation that the dominant eigensubspace is the relevant state description for a local internal observer, and you accept that the state of the incoming photons is not known, then the Born rule follows for certain scattering processes. Whether you use precisely the process described in my blog is up to you. It merely stands for a class of processes that all result in the Born rule.

You don't need any modification of quantum mechanics for that. Why do you think you would? Also, this is not just a random combination of algebraic conditions and random distributions. The assumption about the state distribution of the photon is the only valid assumption if you don't want to single out a specific photon polarization basis. And all the results are consequences of local observation and unitary interactions.

Have you worked through my blog posts from the beginning in the meantime? I ask because I was hoping that they describe all this very clearly. Please let me know if you disagree with how the internal observer reconstructs the quantum state, because I think that's the problem here.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-23T10:22:34.905Z · LW(p) · GW(p)

I understand that you have an algebraic derivation of Born probabilities, but what I'm saying is that I don't see how to make that derivation physically meaningful. I don't see how it applies to an actual experiment.

Consider a Stern-Gerlach experiment. A state is prepared, sent through the apparatus, and the electron is observed coming out one way or the other. Repeat the procedure with identical state preparation, and you can get a different outcome.

For Copenhagen, this is just a routine application of the Born rule.

Suppose we try to explain this outcome using decoherence. Well, now we are writing a wavefunction for the overall system, measuring device as well as measured object, and we can show that the joint wavefunction splits into two parts which are entirely decohered for all practical purposes, corresponding to the two different outcomes. But you still have to apply the Born rule to "obtain" a specific outcome.
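For concreteness, the textbook bookkeeping just described (standard decoherence-plus-Born-rule material, not the scheme under discussion): after the measurement interaction the joint state of electron, device D, and environment E is

\[
|\Psi\rangle = \alpha\,|\uparrow\rangle|D_\uparrow\rangle|E_\uparrow\rangle + \beta\,|\downarrow\rangle|D_\downarrow\rangle|E_\downarrow\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1, \qquad \langle E_\uparrow|E_\downarrow\rangle \approx 0,
\]

and the Born rule is then invoked separately to assert that the "up" outcome is observed with probability \(|\alpha|^2\).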

Now how does your idea explain the facts? I really don't see it. At the level of wavefunctions, each run of the experiment is the same, whether you look at just the wavefunction of the individual electron or at the joint wavefunction of electron plus apparatus. How do we get physically different outcomes? Apparently it requires these random scattering events, which do not feature at all in the usual analysis of the experiment.

Are you saying that the electron that has passed through the Stern-Gerlach apparatus is really in a superposition, but for some reason I only see it as being located in one place, because that's the "dominant eigenstate"? Does this apply to the whole apparatus as well - really in a superposition, but experienced as being in a definite state, not because of decoherence, but because of scattering + my epistemic limitations??

Replies from: aotell
comment by aotell · 2012-09-23T11:12:35.431Z · LW(p) · GW(p)

This would be a lot simpler if you weren't avoiding my questions. I have asked you whether you have understood and accept the derivation of the dominant eigenstate as the best possible description of the state of a system that the observer is part of. I have also asked if you have read my blog from the beginning, because I need to know where your confusion about what I am saying comes from.

The Stern-Gerlach experiment goes like this in my theory: the superposition of the spins of the silver atoms must already be collapsed at the moment the beam splits up, because a much later collapse would create a continuous position distribution. That also means a Copenhagen-like act of observation cannot happen any later, and specifically not at a screen. This is a good indication that it is not observation itself that forces the silver atoms to localize but something else - something related to observation, but not the act of looking. In the system that contains the experiment and the observer, the observer would always "see" a state that belongs to the dominant eigenstate of the objective state operator of that system. It doesn't really matter whether in that system the observer is entangled with the spin state or not. As soon as you apply the field to separate the silver atoms, you also create an energy difference (which is also flight-time dependent and scans through a rather large range of possible resonant frequencies). The photons in the environment that are outside the observer's direct observation and unknown to him begin to interact with the two spin states, and some do so in a way that creates spin flips, via absorption and stimulated emission, or just shakes the atom a little bit. The sum of these interactions can create a total unitary evolution that produces two possible eigenvectors of the state operator, one containing each spin z-eigenstate, with a probability for each to be the dominant eigenstate that conforms with the Born rule. That includes the assumption that the photon states from the environment are entirely unknown. The scattering process I give in my blog shows that such a process is possible and has the right outcome. The dominant eigenstate of the system containing the observer is then the best description of reality that this observer can come up with. Or in other words, he sees either spin up or spin down, and their trajectories.

If you accept the fact that an internal observer can only ever know the dominant eigenstate, then state jumps with unknown/random outcomes are a necessary consequence. That the statistics of those jumps is the Born rule, for events that involve unknown photons, is also a direct consequence. And all that follows just from unitary evolution of the global state and the constraints of locality and unitarity on the observer. So please tell me which of the derived steps you do not accept, so that we can focus on it. And please point me to exactly where in the blog the offending statement is.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-23T13:41:31.162Z · LW(p) · GW(p)

Earlier, I should have referred to the calculation as being in part IV, not part V. I've read part V only now - including the stuff about "branch switching" and how "The observer can switch between realities without even noticing, because all records will agree with the newly formed reality." When I said these ideas led towards "stochastic, piecewise-linear Bohmian mechanics", I was more right than I knew!

Bohmian mechanics is rightly criticised for supposedly being just a single-world theory, yet having all those other world-branches in the pilot wave. If your account of reality includes wavefunctions with seriously macroscopic superpositions, then you either need to revise the theory so it doesn't contain such wavefunctions, or you need to embrace some form of many-world-ism. Supposing that "hidden reality branches" exist, but don't get experienced until your personal stream-of-consciousness switches into them, is juvenile solipsism.

If that is where your theory leads, then I have little interest in continuing this discussion. I was suspicious from the beginning about the role that the "subjectively reconstructed state of the universe" was playing in your theory, but I didn't know exactly what was going on. I had hoped that by discussing a particular physical setup (Stern-Gerlach), we would get to see your ideas in action, and learn how they work by demonstration. But now it seems that your outlook boils down to quantum dualism in a virtual multiverse. There is a subjective history which is a series of these "dominant eigenstates", plucked from a superposition whose other branches are there in the wavefunction, but which aren't considered fully real unless the subjective history happens to jump to them.

There is some slim possibility that your basic idea could play a role in the local microscopic dynamics of a new theory, distinct from quantum mechanics but which produces quantum mechanics in a certain limit. Or maybe it could be the basis of a new type of many-worlds theory. But branch-switching observers is ridiculous and it's a reductio ad absurdum of what you currently offer.

ETA: I would really like to know what motivates the downvote on this comment. Is there someone out there who thinks that a theory of physics in which "the observer" can "switch", from one history, to another in which all memories and records have been modified to imply a different past, is actually worth considering as an explanation of quantum mechanics? I'm not exaggerating; see page 11 here, the final paragraph of part V, section A.

Replies from: aotell
comment by aotell · 2012-09-23T14:27:09.458Z · LW(p) · GW(p)

You keep ignoring the fact that the dominant eigenstate is derived from nothing but the unitary evolution and the limitations of the observer. This is not a "new theory" or an interpretation of any kind. Since you are not willing to discuss that part your comments regarding the validity of my approach are entirely meaningless. You criticize my work based on the results which are not to your liking, and not with respect to the methods used to obtain these results. So I beg you one last time, let us rationally discuss my arguments, and not what you believe is a valid result or not. If you can show my arguments to be false beyond any doubt, based on the arguments that I use in my blog, or alternatively, if you can point out any assumptions that are arbitrary or not well founded I will accept your statement. But not like this. If you claim to be a rationalist then this is the way to go.

Any other takers out there who are willing to really discuss the matter without dismissing it first?

Edit: And just for the record, this has absolutely nothing to do with Bohmian mechanics. There is no extra structure that contains the real outcomes before measurement or any such thing. The only common point is the single reality. Furthermore, your quote of page 11 leaves out an important fact: namely, that the switching occurs only within the very short stretch of history in which the dominant eigenstates interact, after which the state stabilizes for the long term - within a few scattering events, of which you probably experience billions every second. There is absolutely no way for you to switch between dominant eigenstates with different memories of actual macroscopic events.

comment by shminux · 2012-09-20T18:50:50.552Z · LW(p) · GW(p)

Have fun :) I'll see if I can make sense of your blog.

comment by Spinning_Sandwich · 2012-09-10T23:08:03.762Z · LW(p) · GW(p)

Howdy, I'm a math grad student.

I discovered Less Wrong late last night when a friend linked to a post about enjoying "mere" reality, which is a position I've held for quite some time. That post led me to a couple posts about polyamory and Bayesianism, which were both quite interesting, and I say this as someone familiar with each topic.

Although I've read bits & pieces of Harry Potter & the Methods of Rationality, it wasn't until I browsed through this thread that I realized it was assembled here.

I will freely admit that I tend to be a bit skeptical of enthusiastic science fans (not a stick in the mud, but annoyed with giddy atheism run amok, say, or with the glorification of pop science; maybe it's best summarized by this comic: http://www.smbc-comics.com/index.php?db=comics&id=1777#comic ), but I expect that any such cynical voices are either welcome here or are dismissed with such irony as to be amusing.

The fact that I went to the trouble to join & post should be evidence enough to say I like much of what I've seen. :)

Replies from: MBlume, FiftyTwo, Decius
comment by MBlume · 2012-09-10T23:32:51.622Z · LW(p) · GW(p)

Re: the SMBC strip, I remember tutoring physics in college, and being surprised that my students (all pre-med) had memorized constants I still routinely looked up.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-09-11T00:03:41.055Z · LW(p) · GW(p)

Interestingly, doing my physics undergrad I never memorised constants, but I was annoyed that the only way to succeed in tests was to memorise formulae. By contrast, you can understand how the systems work, which I felt gave a more important level of understanding (e.g. you can fairly intuitively see how things work with momentum, acceleration, etc., and with a bit more effort get relativity). Though I suspect, in retrospect, my main motivation was annoyance that people I felt were less clever or understood less did better than me by putting in more work memorising and practising.

comment by FiftyTwo · 2012-09-11T00:05:06.747Z · LW(p) · GW(p)

What is your area of research/interest in Mathematics?

Replies from: Spinning_Sandwich
comment by Spinning_Sandwich · 2012-09-11T00:57:54.198Z · LW(p) · GW(p)

I'm primarily interested in number theory, but I have a great deal of interest in analysis generally (more pure analytic things than anything numerical), an interest which originally developed because analysis arises from set theory quite directly. I regret that I have never had direct access to a working logician.

I wouldn't say that I have a research area yet, but I expect it will be in either algebraic number theory or PDE. I guess I'm in a rather small group of people who can say that with a straight face, since they're on opposite ends of the spectrum.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-09-12T13:51:31.056Z · LW(p) · GW(p)

Coincidentally, I am in the process of writing my final advanced logic assignment as we speak (I wouldn't call myself a working logician, as a) I'm an undergrad, and b) I'm rarely working). My module focuses on the lead-up to Gödel's incompleteness theorem, so it overlaps with set theory related stuff a lot. I might be able to answer some general questions, but no guarantees.

I know how you feel about doing very different things simultaneously; I've done both political philosophy and logic recently, and it's an odd shift of gears.

Random question: you wouldn't know how to show the rand of an increasing total recursive function is a recursive set, would you? Or why, if a theory has arbitrarily large finite models, it has an infinite model?

An odd thing about doing high-level stuff is realising that the infrastructure you get used to at lower levels (Wikipedia articles, decent textbooks, etc.) ceases to exist. I feel increased sympathy for people pre-information age.

Replies from: Spinning_Sandwich
comment by Spinning_Sandwich · 2012-09-14T11:45:41.037Z · LW(p) · GW(p)

You'd have to explain what the rand function is, since that is apparently an un-Google-able term unless you want Ayn Rand (I don't), the C++ random return function, or something called the RAND corporation.

The second question is due to compactness.
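Spelled out, the standard compactness argument (not specific to this exchange): expand the language with fresh constants \(c_0, c_1, c_2, \dots\) and let

\[
T' = T \cup \{\, c_i \neq c_j \mid i < j \,\}.
\]

Any finite subset of \(T'\) mentions only finitely many of the \(c_i\), so it is satisfiable in a sufficiently large finite model of \(T\). By compactness \(T'\) has a model, which must be infinite, and its reduct to the original language is an infinite model of \(T\).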

I'm the kind of person who reads things like Fixing Frege for fun after prelims are over.

Edit: Oh, & I don't mean to be rude, but I probably wouldn't call anyone a working mathematician/logician unless they were actively doing research either in a post-doc/tenure position or in industry (eg at Microsoft).

Replies from: FiftyTwo
comment by FiftyTwo · 2012-09-15T17:02:19.816Z · LW(p) · GW(p)

You'd have to explain what the rand function is, since that is apparently an un-Google-able term unless you want Ayn Rand (I don't),

Ah, sorry, I meant "range," not "rand"; never mind, I think I got it. [I apologise for shamelessly pumping you for answers.] As for Ayn: no-one does.

Would you recommend "Fixing Frege"? I think I've read bits and pieces of Burgess before, but it never made a massive impact.

I'd agree with you on the definition of a working logician; the post-docs and lecturers I've worked with are on a completely different level from even the smartest student. Not quite thousand-year-old-vampire level, but the same kind of difference as between a native speaker and a learner.

Replies from: Spinning_Sandwich
comment by Spinning_Sandwich · 2012-09-16T03:32:54.531Z · LW(p) · GW(p)

It helps that generally (i.e. unless you're at Princeton/Cambridge/etc.) the faculty at a given school will have come from much stronger schools than the grad students there, and similarly for undergrads vs. grads. And by "helps" I mean that it helps maintain the effect while explaining it, not that it helps the students any.

As far as the range of a recursive function goes, isn't that the very definition of a recursive set?
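(Concretely, a minimal sketch in Python of the decision procedure; here `f` stands for any total, strictly increasing computable function on the naturals, so f(k) >= k and the search below terminates:)

```python
def in_range(n, f):
    """Decide whether n is in the range of f, where f is assumed to be a
    total, strictly increasing computable function on the naturals.
    Since f is strictly increasing, f(k) >= k, so the loop terminates."""
    k = 0
    while f(k) < n:
        k += 1
    return f(k) == n

# Hypothetical example: f(k) = k**2 + 1 is total and strictly increasing.
print(in_range(10, lambda k: k**2 + 1))  # True  (3**2 + 1 == 10)
print(in_range(11, lambda k: k**2 + 1))  # False
```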

I'm definitely enjoying Fixing Frege. This is the third Burgess book I've read (Computability & Logic and Philosophical Logic being the other two), and when it's just him doing the writing, he's definitely one of the clearest expositors of logic I've ever read.

Apparently, he also gets chalk all over his shirt when he lectures, but I've never seen this first-hand.

comment by Decius · 2012-09-10T23:56:24.769Z · LW(p) · GW(p)

Hey, if you need more than 2 sig figs from a calculation, you shouldn't be doing it manually anyway.

Replies from: Spinning_Sandwich
comment by Spinning_Sandwich · 2012-09-11T01:04:32.800Z · LW(p) · GW(p)

I say if you need an explicit computation with nonintegral coefficients, you shouldn't be working in that area anyway.

Replies from: Decius
comment by Decius · 2012-09-11T04:27:58.585Z · LW(p) · GW(p)

For a short 45-degree offset: take the length of the offset in inches, add half of that number to it, then subtract 1/16" for each full inch.

To convert inches to decimal feet: 0" is 0.00 feet, 3" is .25 feet, 6" is .50 feet, 9" is .75 feet, and 12" is 1.00 feet. Select the closest one of these, then add or subtract .01 feet for each 1/8th inch. To convert decimal feet to fractional inches, select the closest quarter, then add or subtract 1/8th inch for each .01 foot above or below.
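(A quick sanity check of that offset rule against the exact value, travel = offset × √2 for a 45-degree offset. A sketch in Python, assuming one reading of the rule: subtract 1/16" per inch of the original offset:)

```python
import math

def travel_rule_of_thumb(offset_inches):
    """Rule of thumb: add half the offset, then subtract 1/16 inch per
    inch of offset (one reading of the rule), i.e. 1.4375 * offset."""
    return offset_inches + offset_inches / 2 - offset_inches / 16

def travel_exact(offset_inches):
    """Exact travel for a 45-degree offset: offset * sqrt(2), ~1.4142 * offset."""
    return offset_inches * math.sqrt(2)

for offset in (4, 10, 20):
    print(offset, travel_rule_of_thumb(offset), round(travel_exact(offset), 3))
# The rule overshoots by about 1.6%, roughly a quarter inch on a 10-inch offset.
```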

comment by Chris_Roberts · 2012-09-05T17:29:04.909Z · LW(p) · GW(p)

My name is Chris Roberts. Professionally, my background is finance, but I have always been fascinated by science and have tried to apply a scientific approach to my thought and discussions. I find far too much thinking dominated by ideology and belief systems without any supporting evidence (let alone testable hypotheses). Most people seem to decide their positions first, then marshal arguments to justify their prejudgments. I have never considered myself a "rationalist", but rather an empiricist. I believe in democracy, the free market and science because they have been demonstrated to be more effective in the real world than the alternatives. But I am not ideologically committed and believe they all can be improved. One common thread in these methods is that they are all self-correcting, able to recover from mistakes, and inclusive, allowing input from all participants (at least in theory). I mention this because it is reflective of my personal philosophy.

I was reading "Harry Potter and the Methods of Rationality" (which itself I found from TV Tropes). I found it amusing, and the discussion articulate (though Harry himself, as presented there, rather unlikeable), so I decided to find out more about the author and his ideas, which led me here. I have been browsing the site for a few weeks, have found it quite fascinating, and feel I am ready to make some modest contributions. I posted this to the discussions thread as a first attempt. Please feel free to dissect and provide constructive criticism :).

comment by PhDre · 2013-03-28T18:56:04.301Z · LW(p) · GW(p)

Hello, I'm a 21-year-old undergraduate student studying Economics and a bit of math on the side. I found LessWrong through HPMOR, and recently started working on the sequences. I've always been torn between an interest in pure rational thinking, and an almost purely emotional / empathetic desire for altruism, and this conflict is becoming more and more significant as I weigh options moving forward out of undergrad (Peace Corps? Development Economics?)... I'm fond of ellipses, Science Fiction novels and board games - I'll keep my interests to a minimum here, but I've noticed there are meetups regularly; I'm currently studying abroad in Europe, but I live close to Washington DC and would enjoy meeting members of the community face to face at some point in the future!

Edit: If anyone reads this, could you either direct me to a conversation that addresses the question "How has LW / rational thinking influenced your day to day life, if at all," or respond to me directly here (or via PM) if you're comfortable with that! Thanks!

Replies from: Kawoomba
comment by Kawoomba · 2013-03-28T19:10:44.849Z · LW(p) · GW(p)

I've always been torn between an interest in pure rational thinking, and an almost purely emotional / empathetic desire for altruism, and this conflict is becoming more and more significant

Those are not at all at odds. Read e.g. Why Spock is Not Rational, or Feeling Rational.

Relevant excerpts from both:

A popular belief about "rationality" is that rationality opposes all emotion—that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can't find any theorem of probability theory which proves that I should appear ice-cold and expressionless.

So is rationality orthogonal to feeling? No; our emotions arise from our models of reality. If I believe that my dead brother has been discovered alive, I will be happy; if I wake up and realize it was a dream, I will be sad. P. C. Hodgell said: "That which can be destroyed by the truth should be." My dreaming self's happiness was opposed by truth. My sadness on waking is rational; there is no truth which destroys it.

and

To be sure, emotions often ruin our attempts at rational thought and decision-making. When we’re anxious, we overestimate risks. When we feel vulnerable, we’re more likely to believe superstitions and conspiracy theories. But that doesn’t mean a rational person should try to destroy all their emotions. Emotions are what create many of our goals, and they can sometimes help us to achieve our goals, too. If you want to go for a run and burn some fat, and you know that listening to high-energy music puts you in an excited emotional state that makes you more likely to go for a run, then the rational thing to do is put on some high-energy music.

Your purely emotional / empathetic desire for altruism governs setting your goals; your pure rational thinking governs how you go about reaching your goals. You're allowed to be emotionally suckered, eh, influenced into doing your best (instrumental rationality) to do good in the world (for your values of 'good')!

Replies from: PhDre
comment by PhDre · 2013-03-28T19:33:10.749Z · LW(p) · GW(p)

Thank you for the reading suggestions! Perhaps my mind has already packaged Spock / lack of emotion into my understanding of the concept of 'Rationality.'

To respond directly -

Your purely emotional / empathetic desire for altruism governs setting your goals; your pure rational thinking governs how you go about reaching your goals.

Though if pure emotion / altruism sets my goals, the possibility of irrational / insignificant goals remains, no? If, for example, I only follow pure emotion's path to... say... becoming an advocate for a community through politics, there is no 'check' on the rationality of pursuing a political career to achieve the most good (which again, is a goal that requires rational analysis)?

In HPMoR, characters are accused of being 'ambitious with no ambition' - setting my goals with empathetic desire for altruism would seem to put me in this camp.

Perhaps my goal, as I work my way through the sequences and the site, is to approach rationality as a tool / learning process of its own, and see how I can apply it to my life as I go. Halfway through typing this response, I found this quote from the Twelve Virtues of Rationality:

How can you improve your conception of rationality? Not by saying to yourself, “It is my duty to be rational.” By this you only enshrine your mistaken conception...Do not ask whether it is “the Way” to do this or that. Ask whether the sky is blue or green. If you speak overmuch of the Way you will not attain it.

Replies from: Kawoomba
comment by Kawoomba · 2013-03-30T07:34:47.357Z · LW(p) · GW(p)

There is no "correct" way whatsoever in setting your terminal values, your "ultimate goals" (other agents may prefer you to pursue values similar to their own, whatever those may be). Your ultimate goals can include anything from "maximize the number of paperclips" to "paint everything blue" to "always keep in a state of being nourished (for the sake of itself!)" or "always keep in a state of emotional fulfillment through short-term altruistic deeds".

Based on those ultimate goals, you define other, derivative goals, such as "I want to buy blue paint" as an intermediate goal towards "so I can paint everything blue". Those "stepping stones" can be irrational / insignificant (in relation to pursuing your terminal values), i.e. you can be "wrong" about them. Maybe you shouldn't buy blue paint, but rather produce it yourself. Or rather invest in nanotechnology to paint everything blue using nanomagic.

Only you can (or can't; humans are notoriously bad at accurately reporting their actual utility functions) try to elucidate what your ultimate goals are, but having decided on them, they are supra-rational / beyond rational / 'rationality not applicable' by definition.

There is no fault in choosing "I want to live a life that maximizes fuzzy feelings through charitable acts" over "I'm dedicating my life to decreasing the Gini index, whatever the personal cost to myself."

comment by EGI · 2013-02-09T23:40:41.858Z · LW(p) · GW(p)

Hello,

I found this site via HPMOR, which was the most awesome book I have read in several years. Besides being awesome as a book, there were a lot of moments while reading when I thought: wow, there is someone who really thinks quite like me. (Which is unfortunately something I do not experience too often.) Thus I was interested in who the author of HPMOR is, so I googled “less wrong”.

This site really delivered what HPMOR promised, so I spent quite some time reading through many articles and absorbing a lot of new and interesting concepts.

Regarding my own person: I am a 30-year-old biochemist currently working on my master's thesis in structural biology. I grew up in and live in Cologne, Germany.

Since early childhood I have been very interested in everything science-, engineering- and philosophy-related, so the inferential distances to most topics discussed here were not too large. On the downside, most people perceive me as quite nerdy. This is reinforced by my rather poor social skills (I am possibly on the spectrum), so I was bullied a lot during childhood. Thus my social life was quite dim, though it improved quite a lot during my twenties, mostly due to having a relationship.

I was raised with an agnostic or weakly Catholic ("maybe there is a god, perhaps, or something") worldview, and became increasingly atheistic during my teen years, though this is not really remarkable and pretty much the default for scientifically educated people in Germany. Furthermore, a lot of transhumanist idea(l)s have a lot of appeal to me.

Besides the clarity and high intellectual level of discourse on this site, I really like the technophilic / progress-optimistic worldview of most people here. The general "technology is evil" meme held by a lot of “intellectuals” really puts me off, especially when they do not realize that their entire lives depend utterly on the very technology they shun.

My main criticism is an (IMHO) over-representation of the AI-foom scenario as a projected future, though this is a post of its own (which I hope to write up soon).

I have been lurking on the site for quite some time now (> 1 year), mostly for akrasia-related reasons. First, I really like reading interesting ideas and dislike writing, so time spent on Less Wrong has a much higher hedonic quality for me if I read articles than if I write my own articles or comments. Second, whenever I read a post here and find something missing, imprecise, or even wrong, in most cases someone has already pointed it out, often more precisely and eloquently than I could have done, so I mostly did not feel too much need to comment anyway.

I decided to delurk now anyway, because I have several ideas for posts in mind, which I hope to write up over the next few weeks or months, hopefully contributing to the awesomeness of this site. Furthermore, I am contemplating starting an LW meetup group in my hometown (I could use some help / advice there).

Kudos and an unconditional upvote to the person who first guesses the meaning of my username.

comment by blacktrance · 2012-11-05T21:39:16.412Z · LW(p) · GW(p)

Long-time lurker, first-time poster. I'm 21, male, and a college student majoring in economics and minoring in CS. I first heard of Eliezer Yudkowsky when a couple of my friends discovered Harry Potter and the Methods of Rationality two years ago. I started reading it and enjoyed it immensely at first, but as the plot eclipsed what I'd call the "cool tricks", I became less interested and dropped it. More recently, a different friend linked me to Intellectual Hipsters. After reading it, I read several sequences and was hooked.

My journey to rationality was started by my parents (both of whom are atheists with degrees in STEM fields). I was provided with numerous science books as a child, and I was taught the basics of the scientific method, as well as encouraged to think analytically in general. They also introduced me to science fiction. I grew up in a heavily religious part of the US, so I frequently had to defend my beliefs. Then I discovered what people call "arguing on the Internet", which I found I enjoy. That caused me to refine and develop my beliefs.

My current beliefs: I'm a quasi-Objectivist (in the Ayn Rand sense), though politically I'm a classical liberal (pragmatic libertarian). I'm not particularly interested in AI or cryonics (though I support transhumanism). I'm a compatibilist (free will and determinism are not mutually exclusive). I think technological and scientific progress will continue to reduce limitations on humans, and that's a good thing.

comment by JDM · 2012-11-04T23:54:37.378Z · LW(p) · GW(p)

I wandered onto this site, read an article, read some interesting discussion of it, and decided to take the survey. The survey prompted some interesting discussion, and I enjoyed the extra credit, the majority of which I did, with the exception of the IQ test, which I couldn't get to work right and will do later. I enjoyed the discussion I read, and decided this would be an interesting site to read more of. I don't know yet how much discussion I'll contribute, but when I see an interesting discussion I'm sure I'll join in.

I don't have too much to say about myself. I'm a college student majoring in computer science, and I'd like to do work in artificial intelligence eventually, although I'm nowhere near experienced enough yet to be able to have real discussion about it.

comment by Delta · 2012-08-01T11:44:16.424Z · LW(p) · GW(p)

Hi Guys,

I found out about this place from Methods of Rationality and have been reading the sequences for a few months now. I don't have a background in science or mathematics (I just finished reading law at university), so I've yet to get to the details of Bayes, but I've been very intrigued by all the sequences on cognitive bias. This site was the trigger for me becoming interested in the mind-blowing realities of evolution, and prompted me to finally pull my finger out and shift from non-thinking agnosticism to atheism.

I'm still adjusting but I feel this site has already helped start to clean up my thinking, so thanks to everyone for making coming here such a life-changing experience.

David

comment by jamesf · 2013-03-24T20:13:56.715Z · LW(p) · GW(p)

I used to have a different account here, but I wanted a new one with my real name so I made this one.

I study computer and electrical engineering at the University of Nebraska-Lincoln, though I'm not finding it very gratifying (rationalists are rare creatures around here for some reason), and I'm trying as hard as I can to find some other way to get paid to code/think so I can drop out. Here's my occasionally-updated reading list, and my favorite programming language is Clojure.

comment by Aetherial · 2013-02-23T23:14:00.880Z · LW(p) · GW(p)

Peter here,

I stumbled onto LW from a link on TvTropes about the AI Box experiment. Followed it to an explanation of Bayes' Theorem on Yudkowsky.net, 'cause I love statistics (the rage I felt on learning that not one of my three statistics teachers ever mentioned Bayes was an unusual experience).

I worked my way through the sequences and was finally inspired to comment on Epistemic Viciousness and some of the insanity in the martial arts world. If your goal is to protect yourself from violence, martial arts training is more likely to get you hurt or thrown in jail.

It seems inappropriate that I went by Truth_Seeker before discovering this site, so I chose a handle that was in opposition to that. And I like the word aether.

comment by jooyous · 2013-02-22T06:24:14.898Z · LW(p) · GW(p)

Hellooo! I de-lurked during the survey and gradually started rambling at everyone but I never did one of these welcome posts!

My exposure to rationality started with the idea that your brain can have bugs, which I had to confront when I was youngish because (as I randomly mentioned) I have a phobia that started pretty early. By then I had accurate enough mental models of my parents to know that they wouldn't be very helpful/accommodating, so I just developed a bunch of workarounds and didn't start telling people about it until way later. The experience helped me reason about a lot of these blue-killing-robot types of situations, and get used to handling involuntary or emotional responses in a goal-optimizing way. As a result, I'm interested in cognitive biases, neurodiversity and braaains, as well as how to explain and teach useful life skills to my tiny brother so that he doesn't have to learn them the hard way.

My undergrad degree is in CS/Math, I'm currently a CS grad student (though I don't know if I'm sticking around) and I'm noticing that I have a weird gap in my understanding of AI-related discussions, so I'll probably start asking more questions about it. I regret to admit I've been avoiding probability because I was bad at it, but I'm slowly coming around to the idea that it's important and I need to just suck it up and learn. Also, a lot of sciencey people whine about this, but I think AP Lit (and similar classes) helped me think better; it taught me to read the question carefully, read the text closely, pay attention to detail and collect evidence! But it has possibly made me way too sensitive to word choice; I apologize for comments saying "you could have used this other word but you didn't, so clearly this means something!" when the other word has never crossed your mind.

I started reading the site so long ago that I can't actually remember how I found it. One of the things I appreciate the most about the community is the way people immediately isolate problems, suggest solutions and then evaluate results, which is awesome! and also not an attitude I'm used to seeing a lot. I also appreciate having a common vocabulary to discuss biases, distortions, and factors that lead to disagreements. There were a lot of concepts I wanted to bring up with people that I didn't have a concise word for in the past.

Replies from: CCC
comment by CCC · 2013-02-22T07:48:15.176Z · LW(p) · GW(p)

I regret to admit I've been avoiding probability because I was bad at it, but I'm slowly coming around to the idea that it's important and I need to just suck it up and learn.

Fortunately, it's also very easy to get a basic grip on it. Multiplication, addition, and a few simple formulae can lead to some very interesting results.

A probability is always written as a number between 0 and 1, where 1 is absolute certainty and 0 means the event cannot happen under any circumstances at all. A one-in-five chance is equal to a probability of 1/5, or 0.2. The probability that event E, with probability P, does not happen is 1-P. The chance of independent events E and F, with probabilities P and Q, both occurring is P*Q. (This leads to an interesting result if you try to work out the odds of at least two people in a crowd sharing a birthday; see the sketch just below.)
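(For instance, the birthday result follows from the complement rule; a quick sketch in Python, assuming 365 equally likely birthdays:)

```python
def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday,
    assuming 365 equally likely birthdays: 1 - P(all distinct)."""
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 3))  # ~0.507: better than even with just 23 people
```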

Probability theory also involves a certain amount of counting. For example: what are the chances of rolling a seven with two ordinary, six-sided dice? (Assuming that the dice are fair, and not weighted.)

Each die has a one-in-six chance of showing any particular number. For a given pair of numbers, that's 1/6*1/6=1/36. And, indeed, if you list the results you'll find that there are 36 pairs of numbers that could turn up: (1, 1), (1, 2), (2, 1), (1, 3)... and so on. But there's more than one pair of numbers that adds up to 7: (2, 5) and (1, 6), for example.

So what are the odds of rolling a 7 with a pair of dice?

Replies from: jooyous
comment by jooyous · 2013-02-22T08:00:37.368Z · LW(p) · GW(p)

Yeah, it's the counting problems that I've been avoiding! Because there are some that seem like you've done them correctly, and then someone else does it differently and gets a different answer, and they still can't point out what you did wrong, so you never quite learn what not to do. And then conditional probabilities turn into a huge mess because you forget what's given and what isn't and how to use it togetherrrr.

I hope it's a sixth, but at least this question is small enough to write out all the combinations if you really have to. It's the straight flushes and things that are murder.

Replies from: CCC, Qiaochu_Yuan
comment by CCC · 2013-02-22T09:32:25.407Z · LW(p) · GW(p)

Yeah, it's the counting problems that I've been avoiding!

Ah, I see. You'll be glad to know that there are often ways to shortcut the counting process. The specifics often depend on the problem at hand, but there are a few general principles that can be applied; if you give an example, I'll have a try at solving it.

I hope it's a sixth

It is, indeed.
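(A brute-force check in Python, just enumerating all 36 ordered pairs:)

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))       # all 36 ordered (die1, die2) pairs
sevens = [pair for pair in rolls if sum(pair) == 7]
print(len(sevens), len(rolls), len(sevens) / len(rolls))  # 6 36 0.1666... = 1/6
```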

Replies from: Bugmaster, jooyous
comment by Bugmaster · 2013-02-22T19:56:51.064Z · LW(p) · GW(p)

In fact, many if not most concepts in probability theory deal with various ways of avoiding the counting process. It gets way too expensive when you start handling billions of combinations, and downright impossible when you deal with continuous values.

comment by jooyous · 2013-02-22T18:58:28.802Z · LW(p) · GW(p)

It is, indeed.

asdfjkl; I wrote out all the pairs. -_- Can't trust these problems otherwise! Grumble.

Replies from: arundelo
comment by arundelo · 2013-02-22T19:16:16.517Z · LW(p) · GW(p)

"You are never too cool to draw a picture" -- or make a list or a chart. This particular problem is well served by a six-by-six grid.

Replies from: jooyous
comment by jooyous · 2013-02-22T19:20:17.921Z · LW(p) · GW(p)

Dice are okay; it's the problems with cards that get toooo huge. :)

comment by Qiaochu_Yuan · 2013-02-22T08:05:54.189Z · LW(p) · GW(p)

Can you give an example?

Replies from: jooyous
comment by jooyous · 2013-02-22T08:16:45.820Z · LW(p) · GW(p)

I will try to hunt one down! It's usually the problems where you have to choose a lot of independent attributes but also be careful not to double-count.

Also, when someone explains it, it's easy to see why their way is right (or sounds right), but it's not clear why your way is wrong.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2013-02-22T19:14:06.474Z · LW(p) · GW(p)

it's not clear why your way is wrong.

Yes, I notice that people are in general either bad at giving or reluctant to give this kind of feedback. I think I'm okay at this, so I'd be happy to do this by PM for a few problems if you think that would help.

comment by khriys · 2013-02-08T08:53:03.360Z · LW(p) · GW(p)

Hello everyone!

My personal and professional development keep leading me back to the LessWrong sequences, so I've gathered up enough humility to join in the discussions. I hope to meet your high standards.

I'm 27 and my background is in business and the life sciences; I see rationality as a critically important tool in these areas, but ultimately a relatively minor tool for life as a successful human animal. As such I see this community as being similar to a bodybuilding/powerlifting community, where the interest is in training the rational faculty instead of physical strength.

Edit: Wow, all my comments downvoted? That's a pretty strongly negative response. Care to explain?

Replies from: CoffeeStain
comment by CoffeeStain · 2013-02-08T11:40:30.107Z · LW(p) · GW(p)

From what I can see, people probably thought you were belaboring a point which was not part of the discussion at hand. You said you were addressing the moral value of "there exist 3^^^3 people AND..." versus the situation without that prefix, but the people discussing it did not take that interpretation of the problem, nor did Eliezer when he posed it. You might say that to determine the value of 3^^^3 people getting specks in their eyes you would have to presuppose it included the value of them existing, but nobody was discussing that as if it were part of the problem. It sucks, yeah, but the way that people prefer to have discussions wins out; you can either accept that or try to change it through the right channels. A good lesson to learn, and don't be discouraged.

Replies from: khriys
comment by khriys · 2013-02-08T12:59:20.806Z · LW(p) · GW(p)

Thank you.

comment by deathpigeon · 2013-01-06T12:41:11.815Z · LW(p) · GW(p)

Greetings! I am Viktor Brown (please do not spell Viktor with a c), and I tend to go by deathpigeon (please do not capitalize it or spell pigeon with a d) on the internet. (I cannot actually think of a place where I don't go by deathpigeon...) I'm currently 19 years old. I'm unemployed and currently out of school, since my parents cut off paying for my schooling. I consider myself a rationalist, a mindset that comes from how I was raised rather than from any particular moment in my life. When I was still in university, I was studying computer science, a subject that still interests me, and I learned some programming in C++. When I have a positive enough income flow that I can afford to continue my schooling, I plan on continuing to study computer science. Around the internet, I tend to hang out in the TvTropes fora, where I also go by deathpigeon. I make a point of regularly reviewing my beliefs, be they political, religious, or something else. I'm not entirely sure what else to say, since I'm terrible with social situations, and introducing myself to a bunch of strangers is a situation I'm especially bad with.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-01-06T13:24:18.074Z · LW(p) · GW(p)

ouch... who the hell downvotes a greeting post?

comment by Rixie · 2012-11-29T01:41:05.239Z · LW(p) · GW(p)

Hi everyone!

Well, I'm new-ish here, and this site is really big, so I was wondering where I should start: like, which articles or sequences should I read first?

Thanks!

Replies from: None
comment by [deleted] · 2012-11-29T18:58:30.356Z · LW(p) · GW(p)

Are there any topics you're particularly interested in?

comment by AlexanderD · 2012-11-14T03:28:01.341Z · LW(p) · GW(p)

Howdy. My name is Alexander. I've read a lot of LW, but only recently finally registered. I learned about LW from RationalWiki, where I am a mod. I have read most of the sequences, and many of them are insightful, although I am skeptical about the utility of such posts as the Twelve Virtues, which seeks to clothe a bit of good advice in the voluminous trappings of myth. HPMOR is also good. I don't anticipate engaging in much serious criticism of these things, however, because I have little experience in the sciences or mathematics, and often struggle to grasp things that appear easy for those accustomed to equations. The utility of Bayes' Theorem is one good example. I expect to ask questions, often.

My primary interests in LW are practical ones - discussions about AI and the singularity are interesting, but I am focused on improving my analytic ability and making good decisions.

comment by sparkles · 2013-03-21T04:06:19.338Z · LW(p) · GW(p)
Replies from: Nisan, khafra, gwern, JohnWittle, MixedNuts, TheOtherDave, Strange7, wallowinmaya, Endovior, Kawoomba
comment by Nisan · 2013-03-21T14:28:58.505Z · LW(p) · GW(p)

First of all, I encourage you to take advantage of the counseling and psychological services available to you on campus, if you have not already done so. They're very familiar with psychological pain.

Second, I encourage you to go to a Less Wrong meetup when you get the chance. There's a good chance you'll find people there who are as smart as you and who care about some of the same things you care about. There are listings for meetups in Toronto, Albany, and New York City. I can personally attest that the NYC meetup is great and exists and has good people.

Finally, I wish I could point you to resources that are especially appropriate for trans people, but I don't know what they are.

I really hope that you will be okay.

Replies from: sparkles
comment by sparkles · 2013-03-24T18:28:17.816Z · LW(p) · GW(p)
comment by khafra · 2013-03-21T12:06:37.071Z · LW(p) · GW(p)

I know there are at least 3 MtF semi-regulars on this board, and one more who turned down Aubrey de Grey for a date once; so it's not like you're alone here. But I agree with Kawoomba that there are resources focused more closely on your problems than a forum on rationality, and those will help better and more quickly. If you cannot intellectually respect anyone there enough that talking would help, Shannon Friedman does life coaching (and Yvain is on the last leg of his journey to becoming a psychiatrist).

If there's a sequence that would directly help you, it's probably Luminosity.

Replies from: sparkles
comment by sparkles · 2013-03-24T18:26:24.776Z · LW(p) · GW(p)
comment by gwern · 2013-03-21T15:12:02.808Z · LW(p) · GW(p)

RIT can be a pretty miserable place in the winter, as I know from personal experience. Maybe you have some seasonal affective disorder on top of your other issues? Vitamin D in the morning and melatonin in the evening might help, and of course exercise is good for all sorts of mood-related issues - so joining one of the clubs might be a good idea, or taking a class like fencing (well, I enjoyed the fencing class anyway...) or starting rock climbing at the barn. Clubs might be a good idea in general, actually - the people in the go club were not stupid when I was there, and it was nice hanging out in Java Wally's.

Replies from: sparkles
comment by sparkles · 2013-03-24T18:28:41.489Z · LW(p) · GW(p)
comment by JohnWittle · 2013-03-21T06:25:38.393Z · LW(p) · GW(p)

It sounds like you have some extremely strong Ugh Fields. It works like this:

A long, long time ago, you had an essay due on Monday and it was Friday. You had the thought, "Man, I gotta get that essay done", and it caused you a small amount of discomfort when you had the thought. That discomfort counted as negative feedback, as a punishment, to your brain, and so the neural circuitry which led to having the thought got a little weaker, and the next time you started to have the thought, your brain remembered the discomfort and flinched away from thinking about the essay instead.

As this condition reinforced itself, you thought less and less about the paper, and then eventually the deadline came and you didn't have it done. After it was already a day late, thinking about it really caused you discomfort, and the flinch got even stronger; without knowing it, you started psychologically conditioning yourself to avoid thinking about it.

This effect has probably been building in you for years. Luckily, there are some immediately useful things you can do to fight back.

Do you like a certain kind of candy? Do you enjoy tobacco snuff? You can use positive conditioning on your brain the same way you did before, except in the opposite direction. Put a bag of candy on your desk, or in your backpack. Every time you think about an assignment you need to do, or how you have some job applications to fill out, eat a piece of candy. As long as you get as much pleasure out of the candy as you get pain out of the thought of having to do work, the neural circuitry leading to the thought of doing work will get stronger, as your brain begins to think it is being rewarded for having the thought.

It doesn't take long at all before the nausea of actually doing work is entirely gone, and you're back to being just "lazy". But at this point, the thought of doing work will be much less painful, and the candy (or whatever) reward will be much stronger.

All you have to do is trick your brain into thinking it will get candy every time it thinks about doing work. Even if you know that it's just you rewarding yourself, it still works. Yeah, it's practically cheating, but your goal should be to do what works. Just trying really, really hard isn't just painful; it also doesn't work. Cheat instead.

Replies from: sparkles
comment by sparkles · 2013-03-24T18:25:21.662Z · LW(p) · GW(p)
comment by MixedNuts · 2013-03-21T17:09:38.013Z · LW(p) · GW(p)

Oh hey, you're girl!me. Maybe what helped me will help you?

Getting on bupropion stopped me being miserable and hurting all the time, and allowed me to do (some) stuff and be happy. That let me address my executive function issues and laziness; I'm not there yet, but I'm setting up a network of triggers that prompt me to do what I need.

This will hurt like a bitch. When you get to a semi-comfortable point you just want to stop and rest, but if you do that you slide back, so you have to push through pain and keep going. But once the worst is over and you start to alieve that happiness is possible and doing things causes it, it gets easier.

So I'd advise you to drag yourself to a psychiatrist (or perhaps a therapist who can refer you) and see what they can do. If you want friends and/or support, you could drop by on #lesswrong on Freenode, it's full of cool smart people. If I can help, you know where to find me.

Replies from: sparkles
comment by sparkles · 2013-03-24T18:28:51.348Z · LW(p) · GW(p)
Replies from: MixedNuts
comment by MixedNuts · 2013-03-24T19:49:29.806Z · LW(p) · GW(p)

I showed up at the doctor's during drop-in hours. I was "voluntarily" admitted to the hospital, put on fluoxetine (Prozac), and discharged a few days later. After some months, it became clear Prozac was making me worse. Since my depression is the atypical subtype (low motivation, can become happy by doing things, oversleeping, overeating), they switched me to bupropion (Wellbutrin). That worked.

Doctors (or at least these particular doctors) know their stuff, but I double-check everything on Crazy Meds.

comment by TheOtherDave · 2013-03-21T04:58:31.112Z · LW(p) · GW(p)

What would help?

Replies from: sparkles
comment by sparkles · 2013-03-24T18:25:13.145Z · LW(p) · GW(p)
comment by Strange7 · 2013-03-22T08:10:21.758Z · LW(p) · GW(p)

What worked for me in a related situation was leveraging comparative advantage by:

1) Finding somebody who isn't broken in the same specific way,
2) Providing them with something they considered valuable, so they'd have reason to continue engaging,
3) Conveying information to them sufficient to deduce my own needs,
4) Giving them permission to tell me what to do in some limited context related to the problem,
5) Evaluating ongoing results vs. costs (not past results or sunk costs!) and deepening or terminating the relationship accordingly.

None of these steps is trivial; this is a serious project which will require both deep attention and extended effort. The process must be iterated many times before fully satisfactory results can reasonably be expected. It's a very generalized algorithm which could encompass professional counseling, romance, or any number of other things.

Replies from: sparkles
comment by sparkles · 2013-03-24T18:29:11.928Z · LW(p) · GW(p)
Replies from: Strange7
comment by Strange7 · 2013-03-27T09:29:26.113Z · LW(p) · GW(p)

Given that you're abnormally intelligent, you probably need less information to deduce any given thing than most people would. The flip side of that is, other people need more information than you think they will, especially on subjects you've studied extensively (such as the inside of your own mind).

Given that you haven't figured out the problem yourself yet, they probably also need more information than you currently have. You might be able to save yourself some trouble (not all of it, but every little bit counts) on research and communication in step #3 by aiming step #1 at people who've already studied the general class of problem in depth. Does RIT have a psych department? Make friends with some of the students there and they'll probably give you a long list of bad guesses (each of which is a potential lead on the actual problem) for free.

Given that you're trans, you probably also have an unusually good idea of what you want. Part of the difficulty of step #2 is that other people cannot be counted on to be fully aware of, let alone adequately explain, their own desires.

If your introspection is chewing itself bloody, maybe it just needs a metaphorical bite block. Does RIT have a group of people who get together for tabletop roleplaying games? Those are going to be big soon. http://thealexandrian.net/wordpress/24656/

The goal is to connect with people who will, for one reason or another, help you without being asked, such that the help will keep coming even while you are unable to ask. They don't necessarily need to do it consciously, or in a way that makes any sense.

What exactly do you mean by "writing?"

comment by David Althaus (wallowinmaya) · 2013-03-23T00:45:46.698Z · LW(p) · GW(p)

You could start or attend a Less Wrong meetup; maybe you'll find some like-minded people.

Or talk to some of your professors; some of them should be pretty smart. Also try meeting new folks, maybe older students?

Go to OkCupid, search for lesswrong, yudkowsky or rationality, and meet some like-minded people. You don't have to date them.

I know, it's pretty hard; I myself don't click with 99.9% of all people, and I'm definitely under +3 sigma.

Replies from: sparkles
comment by sparkles · 2013-03-24T18:27:42.240Z · LW(p) · GW(p)
comment by Endovior · 2013-03-21T10:18:45.195Z · LW(p) · GW(p)

I think I understand. There is something of what you describe here that resonates with my own past experience.

I myself was always much smarter than my peers; this isolated me, as I grew contemptuous of the weakness I found in others, an emotion I often found difficult to hide. At the same time, though, I was not perfect; the ease with which I was able to do many things led me to insufficient conscientiousness, and the usual failures arising from such. These failures would lead to bitter cycles of guilt and self-loathing, as I found the weakness I so hated in others exposed within myself.

Like you, I've found myself becoming more functional over time, as my time in university gives me a chance to repair my own flaws. Even so, it's hard, and not entirely something I've been able to do on my own... I wouldn't have been able to come this far without having sought, and received, help. If you're anything like me, you don't want to seek help directly; that would be admitting weakness, and at the times when you hurt the worst, you'd rather do anything, rather hurt yourself, rather die than admit to your weakness, to allow others to see how flawed you are.

But ignoring your problems doesn't make them go away. You need to do something about them. There are people out there who are willing to help you, but they can't do so unless you make the first move. You need to take the initiative in seeking help; and though it will seem like the hardest thing you could do... it's worth it.

Replies from: sparkles
comment by sparkles · 2013-03-24T18:26:01.314Z · LW(p) · GW(p)
comment by Kawoomba · 2013-03-21T07:02:31.682Z · LW(p) · GW(p)

You certainly have a flair for the dramatic.

"aaaaaaaa it hurts please make it stop aaaaa"

Not to seem abrasive, but if you require actual psychological help (and it sure seems like it), this is not the place for it. It can certainly stimulate your mind, but we're here to critically examine each other's thoughts and update as needed, not to provide therapy.

I'm sure you can enjoy being a part of this community, but it's the wrong place to desperately ask for help (over and over) and talk about you hurting so much and being broken, in your first (second) comment. There must be some resources on campus to help you deal with your inner demons professionally. Or your friends that you're so quick to discount as inferior.

but I would be surprised if my intelligence was under +3.5σ, and it feels like at least +5σ (but that sounds hubristic)

You have got to be kidding. At least one in three and a half million (at a rough estimate)? You're lucky I'm a benevolent Nigerian Prince, or I might not take that seriously.

Edit: To those finding this comment needlessly antagonistic: there's a danger in sugarcoating the fact that this is the wrong place for people in immediate psychological distress. If a critically wounded patient turned up at a GP's office, it would be actively damaging to say "you've come to the right place, we'll treat you here". Without labelling the parent poster as such, would you, as part of an internet community, want to assume responsibility for someone at risk of self-harm, or at risk of suffering further psychological trauma? If not, you've got to tell them in no uncertain terms.

Replies from: wedrifid
comment by wedrifid · 2013-03-21T09:06:35.732Z · LW(p) · GW(p)

Edit: To those finding this comment needlessly antagonistic: there's a danger in sugarcoating the fact that this is the wrong place for people in immediate psychological distress. If a critically wounded patient turned up at a GP's office, it would be actively damaging to say "you've come to the right place, we'll treat you here". Without labelling the parent poster as such, would you, as part of an internet community, want to assume responsibility for someone at risk of self-harm, or at risk of suffering further psychological trauma? If not, you've got to tell them in no uncertain terms.

And precisely where in this analogy does the GP have a moral responsibility to compare himself hyperbolically to a benevolent Nigerian prince for the purpose of sarcastically dismissing the patient's self-assessment of intelligence?

I put it to you that any perception that you were being needlessly antagonistic is related to the parts of your comment that are not the careful referral to a more appropriate venue for treatment. In fact, you seem to have used a subverted 'sandwich technique': you open with some antagonism, sneak in the appropriate message, then follow up with some more needless antagonism.

(I actually didn't vote down the parent until I saw this edit. This justification attempt is appalling, oblivious, pretentious and various other negative labels related to me thinking it is bad.)

Replies from: Kawoomba
comment by Kawoomba · 2013-03-21T09:10:21.813Z · LW(p) · GW(p)

You are correct; I deserve the downvote, and it's not justification for the snark. It would be justification for clearly referring to more appropriate venues for help, however.

Replies from: wedrifid
comment by wedrifid · 2013-03-21T09:15:24.010Z · LW(p) · GW(p)

It would be justification for clearly referring to more appropriate venues for help, however.

Definitely agree. Pardon me if I misinterpreted the intended point of your edit.

Replies from: Kawoomba
comment by Kawoomba · 2013-03-21T09:16:43.331Z · LW(p) · GW(p)

No, I tried to have my cake and eat it too; you were entirely justified in calling me out on it.

comment by netcode · 2012-12-31T09:14:35.810Z · LW(p) · GW(p)

It really feels good to be here. The name alone sounds comforting: 'less wrong'. I've always loved to be around people who write and provide intuitive solutions to everyday challenges. I guess I'm gonna read a few posts and get acquainted with the customs here, then make meaningful contributions too.

Thanks, guys, for this great opportunity.

comment by shardfilterbox · 2012-10-04T02:00:47.487Z · LW(p) · GW(p)

Hi! I'm shard. I have been looking for a community just like this for quite a while. Someone on the Brain Workshop group recommended this site to me. It looks great; I am very excited to sponge up as much knowledge as I can, and hopefully to add a grain someday.

I love the look of the site. What forum or bulletin-board software do you use? Or is it custom? I've never seen one like it; it's very clean, and I'd like to use it for a forum I wanted to start.

Replies from: Alicorn, MBlume
comment by Alicorn · 2012-10-04T02:27:19.678Z · LW(p) · GW(p)

The software behind the site is a clone of Reddit, plus some custom development.

Replies from: shardfilterbox
comment by shardfilterbox · 2012-10-07T20:59:57.108Z · LW(p) · GW(p)

Well, very good job; it looks excellent. Much cleaner and easier on the eyes.

comment by aperrien · 2012-09-18T22:08:59.837Z · LW(p) · GW(p)

Greetings. My name is Albert Perrien. I was initially drawn to this site by a personal search on metacognition, and only really connected after having stumbled across “Harry Potter and the Methods of Rationality”, which I found an interesting read. My professional background is in computer engineering, database administration, and data mining, with personal studies in machine learning, AI and mathematics. I find the methods given here to promote rational thought and bias reduction fascinating, and the math behind everything enlightening.

Recently I’ve been giving a lot of thought to the topic of a Resetting Gandhi-Einstein, and personally, I find that I would volunteer for such a state, given prior knowledge of what I signed up for and an interesting and useful enough problem. I realize that I would retain only limited knowledge from incarnation to incarnation, but given a worthy enough mental problem and a reasonably well-simulated environment, I see nothing innately wrong with it. And finally, that leads me to some questions:

  • What sort of challenges would there be in volunteering to do this, and how should volunteers be chosen?

  • Should this even be something that should be done, even given a willing subject?

  • How many iterations should an individual be subjected to?

  • How many others here would volunteer for such a state, and what sort of problems would you be willing to make that sacrifice for?

This last question being possibly personal, please don't think me too forward. For me, I'd like to help with some sort of medical problem, say for example, autoimmune diseases... (edit: Mangled the formatting...)

comment by lloyd · 2012-09-14T17:09:30.267Z · LW(p) · GW(p)

It took me a few hours to find this thread like a kid rummaging through a closet not knowing what he is looking for.

As my handle indicates, I am Lloyd. There is not much I think worth saying about myself, but I would like to ask a few questions, to see what interests readers here (if anyone reads this) and to present a sample of where my thinking may come from.

Considering the psychological model of five senses we are taught from grade school on: is there a categorical difference between our ability to logically perceive that 2+2=4 and our ability to perceive that the temperature is decreasing? The deeper question being: is the realness of logic (and possibly other mental faculties not being considered here) the same as the realness of sight, hearing, smell, taste and touch? There are questions which unfold from considering logic as a 'sense', but I wish to clarify this question first.

I have not found any proponent of a physical view of the universe as fundamentally alive rather than dead. Has someone proposed, for example, that the stars are living and thus self-directing, and that the observed structure of galaxies may be stars purposefully forming these structures under their own will, much as we form cities? Or maybe the idea that stars induce gravity and feed off a source of energy from the subatomic regime? Or that different star systems may be fundamentally different on a quantum level, like blood types? I mean, the language is filled with terms like birth, death, and life, but it sounds like they are disconnected from their biological meaning altogether.

Does anyone ever discuss post-industrial society... no, not the right question. Why is it that the discussion of post-industrial society is what it is? For example, in mainstream storytelling post-industrial = post-apocalyptic, in much of what I have seen. There is Gene Roddenberry, who cast post-industrial society as being rescued by aliens. There are Orwell and Huxley, who left the world forever locked in an industrial nightmare. Zombies. Am I to understand that the culture's mind has settled on imagining the industrial society as its death?

Replies from: Mitchell_Porter, Mitchell_Porter, Mitchell_Porter, DaFranker, chaosmosis
comment by Mitchell_Porter · 2012-09-15T02:21:22.483Z · LW(p) · GW(p)

Why is it that the discussion of post-industrial society is what it is?

This was the hardest of your questions to get a grip on. :-) You mention disaster fiction, Star Trek, 1984, and Brave New World, and you categorize the first two as post-industrial and the second two as bad-industrial perpetuated. If I look for the intent behind your question... the idea seems to be that visions of the future are limited to destruction, salvation from outside, and dystopia.

Missing from your list of future scenarios is the anodyne dystopia of boredom, which doesn't show up in literature about the future because it's what people are already living in the present, and that's not what they look for in futurology, unless they are perverse enough to want true realism even in their escapism, and experienced enough to know that real life is mostly about boredom and disappointment. The TV series "The Office" comes to mind as a representation of what I'm talking about, though I've never seen it; I just know it's a sitcom about people doing very mundane things every day (like every other sitcom) - and that is reality.

If you're worried that reality might somehow just not contain elements that transcend human routine, don't worry, they are there, they pervade even the everyday world, and human routine is something that must end one day. Human society is an anthill, and anthills are finite entities, they are built, they last, they are eventually destroyed. But an anthill can outlive an individual ant, and in that sense the ant's reality can be nothing but the routine of the anthill.

Humans are more complex than ants and their relation to routine is more complex. The human anthill requires division of labor, and humans prepared to devote themselves to the diverse functional roles implied, in order to exist at all. So the experience of young humans is typically that they first encounter the boredom of human routine as this thing that they never wanted, that existed before them, and which it will be demanded that they accept. They may have their own ideas about how to spend the time of their life, or they may just want to be idle, but either way they will find themselves at odds with the social order to which they have been born. There are places in the social ecosystem where it works differently, but this is how it turns out for many or even most people.

So my thesis is really that boredom is the essence of human life, human society, human history, and human experience. Note well: the essence of human reality, not of reality as a whole, which is bigger than human beings. I will also say that boredom was the essence of preindustrial life as well as of industrial life, and also of any postindustrial life so long as it is still all about human beings. Some people get not to live boring lives, and wonder and terror can also just force themselves upon humanity in a certain time and place; and finally, I should add that people can live amid the boredom and not be bored, if they are absorbed in something. Our Internet society is full of distractions and so the typical Internet citizen is not just flatly bored all the time, they will be in a succession of hyper moods as they engage with one thing after another. But most of it is trivia that has no meaning in the long run and that is why it's reasonable to say that it adds up to boredom.

All these non-boring stories about the future are partly expressive of reality, but they are also just distractions from the boredom for the people who consume them. Apocalypse doesn't solve the problem of giving me a happy free life, but it does solve the problem of boredom! Salvation by aliens is an instance of something exciting and non-boring coming from outside and forcing itself upon us. Huxley and Orwell's worlds actually are boring when they're not oppressive or dissipative, so in that sense they resemble reality.

Some people in some times aren't born to boredom. What this really means is that there's some form of instability. Either it's the instability of novelty, that eventually settles down and becomes a new boredom, or it's the instability of something truly dreadful. Our favorite instability on this site is artificial intelligence, which is a plausible candidate for the thing that really will end "human reality" and inaugurate a whole new Something. There may be a cosmic boredom that eventually sets in on the other side of the singularity, but for now, dealing with everything implied by the rise of AI is already more than anyone can handle. (There may be people out there who are thinking, I can think about the possibilities of AI with equanimity, so I can handle this. But no-one's in charge, the situation is completely out of any sort of consensual control, and in that practical sense the human race isn't "handling" the situation.) There are many other ways to avoid boredom, for example the study of the universe. The main challenge then is just convincing the human race to allow you to spend all your time doing this.

But the original question was about the culture's own image of the future. My thesis is that adults generally know in their bones that their lives are boring, and that fact is itself so familiar as to be boring, so there's no market for stories which say the future itself will also be boring. You're finding the available non-boring narratives unsatisfactory - they're either dystopian or involve wishful thinking. But the problem here is whether there's a viable collective solution to boredom, or whether every such solution will be just another type of Watchtower-like unrealism (I mean the little magazine circulated worldwide by Jehovah's Witnesses, in which future life is an agrarian paradise with the sort of nonstop happiness you only see in TV commercials). I should emphasize that the narratives which dominate the part of the culture that is concerned with the practicalities of the future, such as politics, do not try to solve the boredom problem, that's not remotely on the agenda and it would be considered insanely unrealistic. Realistic politics is about ensuring that the social machine, the division of labor, continues to function, and about dealing with crises when they show up. So it might be regarded as depressing rather than boring.

I can't say that the problem of collective boredom concerns me very much. Like other singularity fans, I have my hands full preparing for that future event, which probably is the end of the boredom as we know it. The task for you may just be to come to grips with your own difference from everyone else, accept that most people will end up in some boring but functionally necessary niche, and then try to make sure that you don't end up like most people.

Replies from: lloyd
comment by lloyd · 2012-09-15T04:14:20.939Z · LW(p) · GW(p)

I think you got a grip on the gist. I didn't mention boredom in my question but you went straight to where I have been in looking at the topic. But I do not think there is reason to believe boredom is a basic state of human life indicative of how it has always been. I think it may be more related to the industrial lifestyle.

Take the 2012 Mayan calendar crap. Charles Mann concludes his final appendix in "1491" with a mention of the pop-phenom, "Archaeologists of the Maya tend to be annoyed by 2012 speculation. Not only is it mistaken, they believe, but it fundamentally misrepresents the Maya. Rather than being an example of native wisdom, scholars say, the apocalyptic 'prophecy' is a projection of European values onto non-European people." The apocalypse is the end of boredom for a bored people.

I personally do not like the boring; as you suggested, I have come to grips with that and live accordingly.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-16T04:38:31.186Z · LW(p) · GW(p)

Don't tell anyone, but I'm not immune to 2012-ism myself. At the very least, that old Mayan calendar is one of the more striking intrusions of astronomical facts into human culture; it seems to be built around Martian and Venusian cycles, and the precession of Earth.

Replies from: lloyd
comment by lloyd · 2012-09-17T03:49:18.129Z · LW(p) · GW(p)

So part of being new here...the karma thing. Did you just get docked karma for the assertion you are into 2012-ism? I didn't do it. Is there a list of taboos? I got docked for a comment on intuition (I speculate that is why).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-17T04:11:05.839Z · LW(p) · GW(p)

There's no list. In general, people downvote what they want to see less of on the site, and upvote what they want to see more of. A -1 score means one more person downvoted than upvoted; not generally worth worrying about. My guess is someone pattern-matched MP's comment to fuzzy-headed mysticism.

Replies from: lloyd, shminux
comment by lloyd · 2012-09-17T16:08:21.731Z · LW(p) · GW(p)

The idea of 'what you want to see less of' is fairly interesting. On a site dedicated to rationality I was expecting that one would want to see:

-the discussion of rationality explicitly = the Sequences

-examples of rationality in addressing problems

-a distinction between rationality and other thinking processes, and when rational thinking is appropriate (i.e., the boundaries of rationality)

It would be a reasonable hypothesis - based on what I have seen - that the last point causes negative feedback. MP demonstrated a great deal of rationality (and knowledge) in addressing the questions I raised in the first post. Given this, I find it intriguing that he is captivated in any way by 2012-ism. Anyway, I would expect upvotes for any comment that clarifies or contributes to the parent, downvotes for comments which obscure, and nothing for humor or personal side notes (they can generate productive input and help create an atmosphere of camaraderie).

I saw the thread on elitism somewhere and noted that the idea of elitism and the karma system are intertwined. A simple explicit description of karma and what it accomplishes might be a good thread for a top member to start. (If such a thread exists already, that is what I was seeking with my request for a 'list of taboos'.) It may or may not be a good idea to tell people the criteria for up/down-voting, but is there a discussion about that?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-17T16:47:21.164Z · LW(p) · GW(p)

Different people want to see, and want to avoid seeing, different things. The net karma score of any given comment is an expression of our collective preferences, filtered extremely noisily through which subset of the site happens to read any given comment.

I would prefer LW not try to impose voting standards beyond "upvote what you want, downvote what you don't want." If we want a less crowdsourced value judgment, we can pay someone we trust to go through and rate all the comments, though I would not contribute to that project.

comment by shminux · 2012-09-17T04:16:04.074Z · LW(p) · GW(p)

people downvote what they want to see less of on the site

Or something they disagree with strongly enough. Or if they dislike the poster. Some just press a wrong button. Some have cats walking on keyboards. If you get repeatedly downvoted to -3 or so, then there is a cause for concern.

comment by Mitchell_Porter · 2012-09-15T01:25:54.521Z · LW(p) · GW(p)

physical view of the universe as fundamentally alive rather than dead ... stars are living and thus self-directing

Since life is considered a solved problem by science, any remaining problem of "aliveness" is treated as just a perspective on or metaphor for the problem of consciousness. But talking about aliveness has one virtue; it militates against the tendency among intellectuals to identify consciousness with intellectualizing, as if all that is to be explained in consciousness is "thinking" and passive "experiencing".

The usual corrective to this is to talk about "embodiment". And it's certainly a good corrective; being reminded of the body reintroduces the holism of experience, as well as activity, the will, and the nonverbal as elements of experience. Still, I wouldn't want to say that talking about bodies as well as about consciousness is enough to make up for the move from aliveness to consciousness as the discursively central concept. There's an inner "life" which is also obscured by the easily available ways of talking about "states of mind"; and at the other extreme, being alive is also suggestive of the world that you're alive in, the greater reality which is the context to all the acting and willing and living. This "world" is also a part of cognition and phenomenology that is easily overlooked if one sticks to the conventional tropes of consciousness.

So when we talk about a living universe, we might want to keep all of that in mind, as well as more strictly biological or psychological ideas, such as whether it's like something to be a star, or whether the states and actions of stars are expressive of a stellar intentionality, or whether the stars are intelligences that plan, process information, make choices, and control their physical environment.

People do exist who have explored these ways of thought, but they tend to be found in marginal places like science fiction, crackpot science, and weird speculation. Then, beyond a specific idea like living stars, there are whole genres of what might be called philosophical animism and spiritual animism.

I think pondering whether the stars are intelligences isn't a bad hobby to have, it's the sort of obscure reaching for the unknown which over time can turn into something real and totally new. But know and study your predecessors, especially their mistakes. If you're going to be a crackpot, try at least to be a new type of crackpot, so that humanity can learn from your example. :-)

Replies from: lloyd
comment by lloyd · 2012-09-15T03:36:52.485Z · LW(p) · GW(p)

That is an impressive collection of links you put together. You have provided what I was looking for, in a greater scope than I expected. The Star Larvae Hypothesis and Guy Murchie express the eccentricity in thought I was hoping someone would have knowledge of. I like to see the margins, you see. How did you come by all those tidbits? It took only a single question on this forum for me to get that scope, and for that I owe you some thanks. My hobby is really not so much pondering the intentions of stellar beings as coming up with queries that help me find the edges, margins, or whatever of this evolved social consciousness I am part of.

I do find it interesting that someone would be able to compile those links. Was this a personal interest of yours at some time or part of a program of study you came across? Or do you have some skill at compiling links that is inexplicable?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2012-09-16T04:13:39.453Z · LW(p) · GW(p)

It's a bit of both.

comment by Mitchell_Porter · 2012-09-15T00:55:17.140Z · LW(p) · GW(p)

Whether there is a "logic-sense" is a question about consciousness so fundamental and yet so hard that it's scarcely even recognized by science-friendly philosophy of mind. Phenomenologists have something to say about it because they are just trying to characterize experience, without concern for whether or how their descriptions are compatible with a particular scientific theory of nature. But if you look at "naturalist" philosophers (naturalism = physicalism = materialism = an intent that one's philosophy should be consistent with natural science), the discussion scarcely gets beyond the existence of colors and other "five-sense" qualities.

The usual approach is to talk as if a conscious state is a heap of elementary sense-qualia, somehow in the same way that a physical object could be a pile of atoms. But experience is about the perception of form as well, and this is related to the idea of a logic-sense, because logic is about concepts and abstract properties, and the properties of a "form" have an abstractness about them, compared to the "stuff" that the form is made from.

In the centuries before Kant and Husserl, there was a long-running philosophical "problem of universals", which is just this question of how substance and property are related. How is the greenness in one blade of grass, related to the greenness in another blade of grass? Suppose it were the exact same shade of green. Is it the same thing, mysteriously "exemplified" in two different places? If you say yes, then what is "exemplification" or "instantiation"? Is it a new primitive ontological relation? If you say no, and say that these are separate "color-instances", you still need to explain their sameness or similarity.

With the rise of consciousness itself as a theme of human thought, the problem has assumed a new character, because now the greenness is in the observer rather than in the blade of grass. We can still raise the classic questions, about the greenness in one experience and the greenness in another experience, but the deeper problem is whether we are even conceiving of the basic facts correctly. Experience isn't just "green stuff happening" or "round stuff happening", it's "green round stuff happening to my hand, eyes, and mouth" (if I'm eating an apple), it's "happening to me" (whoever and whatever "I" am), it's "green round stuff being experienced as green and round" - and that little word "as" sums up a whole dimension of consciousness that the focus on sense-qualia obscures; the aspect of consciousness known as its "intentionality", the fact that an experience is an experience "of" something or "about" something.

Names can be useful. The sense that "stuff is happening to me" has been called apperception. (That's jargon that you won't see on LW. Jargon that you will see, that comes at the same phenomenon from a different angle, is indexicality, the me-here-now component of an experience. One also needs to distinguish between me-here-now experienced or conceptualized in terms of difference to other "me-here-now"s, and me-here-now as simply another component of an experience, even if you're not thinking about other people at the time. Apperception is more about this second aspect, whereas discussions of indexicality tend to puzzle over what it is that distinguishes one person, as a locus of experience, from another - they're both "me" to themselves, but ontologically they are two entities, not one.) If there is a logic-sense, then it is presumably at work both in intentionality and in apperception; in fact the latter appears to contain a sort of indexical intentionality, the logic-sense applied to the perceiving self.

Two other very different perspectives: First, in Objectivism, you will see "concept formation" discussed as "measurement omission". The idea being that a concept is a perception with something removed - the sensory and indexical particularities. It doesn't quite deal with the ontological problem of what "instantiation of a property" is in the first place, but it highlights a psychological and cognitive/computational aspect.

Second, for the five senses, there are sense organs. If there is a logic sense, one should ask whether there's a logic organ too. Here the neurocomputational answer is going to be that it's a structure in the brain which has the outputs of sense organs as its inputs. This answer doesn't do away with the miasma of dualism that hangs over all functionalist explanations of experience, but it does plausibly mimic the causal dependence of higher-order experience on raw experience.

Finally, I'll point out that the nature of logic and a logic-sense is tied up with the nature of being and our awareness of it. We have a sense that reality exists, that individual things exist, and that they are a certain way. If you can stand to read something like Heidegger's historical phenomenology of Being, you'll see that grammar and logic have roots in a certain experience of being and a certain analysis of that experience, e.g. into "thatness" and "whatness", existence and essence: that a thing is, and what a thing is. These perceptions and distinctions were originally profound insights, but they were codified in language and became the everyday tools of the thinking, wilful mind. Heidegger's work was partly about recovering a perception of being prior to its resolution into existence and essence, out of a conviction that that is not the end of the story. The problem with trying to think "beyond" or "before" subject-predicate thinking is that it just turns into not thinking at all. Is there intellectual progress to be had beyond the raw observation that "Something is there", if you don't "apply concepts", or is the latter simply an essential condition of understanding? Et cetera, ad infinitum.

Replies from: lloyd
comment by lloyd · 2012-09-15T04:29:04.715Z · LW(p) · GW(p)

Thanks for addressing all three of the questions. Your ability to expound on such a variety of topics is what I was hoping someone in this forum could do. Quite insightful.

comment by DaFranker · 2012-09-14T17:33:29.858Z · LW(p) · GW(p)

Hello! Welcome to LessWrong!

This post reads very much like a stream-of-consciousness dump (the act of writing everything that crosses your mind as soon as you become aware that you're thinking it, and then just writing more and more as more and more thoughts come up), which I've noticed is one of those things that breaks no written rule but that some members of the community look upon unfavorably.

Regarding your questions, it seems like many of them are the wrong question or simply come from a lack of understanding in the relevant established science. There may also be some confusion regarding words that have no clear referent, like your usage of "realness". Have you tried replacing "realness" with some concrete description of what you mean, in your own mind, before formulating that question? If you haven't, then maybe it's only a mysterious word that feels like it probably means something, but turns out to be just a word that can confuse you into thinking of separate things as if they were the same, and make it appear as if there is a paradox or a grand mysterious scientific question to answer.

Overall, it seems to me like you would greatly benefit from learning the cognitive science taught/discussed in the Core Sequences, particularly the Reductionism and Mysteriousness ones, and the extremely useful Human's Guide to Words (see this post for a hybrid summary / table of contents). Using the techniques taught in Reductionism and the Guide to Words is often considered essential to formulating good articles on LessWrong, and unfortunately some users will disregard comments from users that don't appear to have read those sequences.

I'd be happy to help you a bit with those questions, but I won't try to do so immediately in case you'd prefer to find the solutions on your own (be it answers or simply dissolving the questions into smaller parts, or even noticing that the question simply goes away once the word problems are taken away).

Replies from: lloyd
comment by lloyd · 2012-09-14T18:21:57.922Z · LW(p) · GW(p)

I will tend to violate mores, but I do not wish to seem disrespectful of the culture here. In the future I will more strictly limit the scope of the topic, but considering it was an introduction...I just wished to spread out questions from myself rather than trivia about myself.

I don't think I am asking the wrong question. Such is the best reply I can formulate against the charge. As for my understanding of the established science, I thought I was reasonably versed, but in a forum such as this I am highly skeptical of my own consumption of available knowledge. But from experience, I am usually considered knowledgeable in the fields of psychology I am familiar with - the textbook material like Skinner, Freud, Jung, etc., and, e.g., Daniel Dennett, Aronson, and Lakoff - but that doesn't make me feel more or less qualified about asking the question I proposed. In astronomy I have gone through material ranging from Chandrasekhar to Halton Arp, and the view that stars are subject to, rather than direct, gravitational phenomena is prevalent, i.e., stars act like rocks and not like living beings.

Please elaborate on how 'realness' is unclear in its usage. I would like to know the more acceptable language. The concept is clear in my mind and I thought the diction was commonly accepted.

If the subjects I have brought up are ill-framed then I would be happy to be directed to the more encompassing discussion.

I have browsed much of what you directed me to. The structure of this site is a bit alien to my cognitive organization, but the material contained within is highly familiar.

Please help me with the questions.

Replies from: DaFranker
comment by DaFranker · 2012-09-14T20:34:12.385Z · LW(p) · GW(p)

Alright, let's start at the easy part concerning those questions:

Considering the psychological model of five senses we are taught since grade school, is there a categorical difference in our ability to logically perceive that 2+2=4 vs perceiving the temperature is decreasing?

Yes. In a large set of possible categorical distinctions, they are in different categories. The true, most accurate answer is that they are not exactly the same. This was obvious to you before you even formulated the question, I suspect. They are at slightly different points in the large space of possible neural patterns. Whether they are "in the same category" or not depends on the purposes of the category.

This question needs to be reduced, and can be reduced in hundreds of ways from what I see, depending on whether you want to know about the source of the information, the source of our cognitive identification of the information/stimuli, etc.

The deeper question being is the realness of logic (and possibly other mental faculties not being considered here) the same as the realness of sight, hearing, smell, taste and touch?

"Sight" is a large mental paintbrush handle for a large process of input and data transfer that gets turned into stimuli that gets interpreted that gets perceived and identified by other parts of the brain and so on. It is a true, real physical process of quarks moving about in certain identifiable (though difficultly so) patterns in response to an interaction of light with (some stuff, "magic", I don't know enough about eye microbiology to say how exactly this works). Each step of the process has material reality.

If you are referring to the "experience"-ness, that magical something of the sense that cannot possibly exist in machines which grants color-ness to colors and image-ness to vision and cold-ness and so forth, you are asking a question about qualia, and that question is very different and very hard to answer, if it does really need an answer at all.

By contrast, it is experimentally verifiable - there is an external referent within reality - that two "objects" put with two "objects" will have the same spacetime configuration as four "objects". There is a true, real physical process by which light reflected on something your mind counts as "four objects" is the exact same light that would be reflected if your mind counted the same objects as "two objects" plus "two other objects". "2 + 2 = 4" is also a grand mental paintbrush in a different language using different words - mathematics - to represent and point to the external referent of what you observe when two and two are four.

At this point, I no longer see any mysterious "realness" in either concept. One is a reliable, predictable pattern of interactions between various particles, and the other is a reliable, predictable pattern of interactions between various particles. At the same time, on a higher level of abstraction, one is seen as a way to identify an expected "number" (another giant mental paintbrush handle) of things, while the other is a way in which our minds obtain information about the outside world and gather evidence that we are convinced is entangled with large patterns of other particles through causality.

If I'm going too fast on some things or if you dislike my potentially-very-condescending tone of writing, my apologies and please mention it so that I can adjust my behavior / writing appropriately.

The other questions afterwards become progressively much harder to work on without a solid grounding in reductionism and other techniques; in particular, the first question, on a "fundamentally alive" universe, is very much at the edge of my current comfort zone in terms of ability to reduce, decompose, dissolve and resolve questions.

Unfortunately, there are also a great many things that might get in the way of resolving these questions - for an absurd example, if you hold a strong, unshakable belief that there is a huge scientific conspiracy hiding the "fact" that outer space does not exist and everything above the sky is a hologram while earth is actually one giant flat room (rather than a round ball) that warps around in spacetime to make us believe that it's round, then I'm afraid I would likely find myself very much at a loss as to how to proceed in the discussion.

Edit: As for the stream-of-consciousness matter, it's not about the widespread coverage of subject(s), but more a comment on the writing style / continuity. Basically, more organized writing - clearer delimitation of topics, and explicit continuations and links between topics / sub-topics that don't have a continuous progression - indicates a more in-depth analysis of one's own words and thoughts, while the stream-of-consciousness style is more difficult for readers in conversations that seek to attain a higher degree of truth.

Replies from: lloyd
comment by lloyd · 2012-09-15T00:42:56.674Z · LW(p) · GW(p)

Thanks for clarifying.

I understand that categories are mental constructs which facilitate thinking, but do not themselves occur outside the mind. The question was meant to find objections to the categorization of logic as a sense. Taken as a sense, there is a frame - the category - which allows logic to be viewed as analogous to the other senses and as interrelated with the thinking process the way senses are. In the discussion concerning making the most favorable choice in Monty Hall, the contestant who does not see the logical choice is "blind". When considering the limits of logical reason, they can be seen to possibly parallel the limits of visual observation - how much of the universe is impervious to being logically understood?
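A minimal simulation sketch of the Monty Hall point above (illustrative only; the trial count and deterministic host are arbitrary choices): switching wins about 2/3 of the time, which is exactly the logical fact the "blind" contestant fails to perceive.

```python
import random

def play(switch, trials=100_000):
    """Simulate Monty Hall; return the fraction of wins for the given strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))  # ~0.33
print("switch:", play(switch=True))   # ~0.67
```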

No need to address qualia.

Will try to constrain myself to more concise, well-defined queries and comments.

comment by chaosmosis · 2012-09-14T17:29:04.000Z · LW(p) · GW(p)

Hiya!

I don't think there's a difference between the human sense of logic and the other senses; I agree with you there. Just as it's impossible to tell whether or not you're a brain in a vat, it's also impossible to tell whether or not you're insane. Every argument you use to disprove the statement will depend on the idea that your observations or thought processes are valid, which is exactly what you're trying to prove, and circular arguments are flawed. This doesn't mean that logic isn't real, it just means that we can't interpret the world in any terms except logical ones. The logical ones might still be right; it's just that we can never know that. You might enjoy reading David Hume; he writes about similar sorts of puzzles.

It doesn't matter whether or not logic works, or whether reality is really "real". Regardless of whether I'm a brain in a vat, a computer simulation, or just another one of Chuang Tzu's dreams, I am what I am. Why should anyone worry about abstract sophistries, when they have an actual life to live? There are things in the world that are enjoyable, I think, and the world seems to work in certain ways that correspond to logic, I think, and that's perfectly acceptable to me. The "truth" of existence, external to the truth of my everyday life, is not something that I'm interested in at all. The people I love and the experiences I've had matter to me, regardless of what's going on in the realm of metaphysics.

I don't quite understand what you're saying about vitalism. I don't know what the word "life" means if it starts to refer to everything, which makes the idea of a universe where everything is alive seem silly. There's not really any test we could do to tell whether or not the universe is alive, a dead universe and an alive one would look and act exactly the same, so there's no reason to think about it. Using metaphors to explain the universe is nice for simplifying new concepts, but we shouldn't confuse the metaphor for the universe itself.

I'm not really in the mood for discussing literature or trying my hand at amateur psychoanalysis, I'll leave that last question for someone else to try their hand at, if they decide they want to.

I think the sequences will help you out. I recommend that you start with the sequence on words and language, and then tackle metaethics. It could be a lot of work, but they make an interesting read and are very amusing at times. Regardless, we're glad you're here!

Replies from: TheOtherDave, lloyd
comment by TheOtherDave · 2012-09-14T17:36:34.166Z · LW(p) · GW(p)

Just as it's impossible to tell whether or not you're a brain in a vat, it's also impossible to tell whether or not you're insane.

Well, it's possible to tell that I'm insane in particular ways. For example, I've had the experience of reasoning my way to the conclusion that certain of my experiences were delusional. (This was after I'd suffered traumatic brain damage and was outright hallucinating some of the time.) For example, if syndrome X causes paranoia but not delusions, I can ask other people who know me whether I'm being paranoid and choose to believe them when they say "yes" (even if my strong intuition is that they're just saying that because they're part of the conspiracy), on the grounds that my suffering from syndrome X is more likely (from an outside view) than that I've discovered an otherwise perfectly concealed conspiracy.

It's also possible to tell that I'm not suffering from specific forms of insanity. E.g., if nobody tells me I'm being paranoid, and they instead tell me that my belief that I'm being persecuted is consistent with the observations I report, I can be fairly confident that I don't suffer from syndrome X.

Of course, there might be certain forms of insanity that I can't tell I'm not suffering from.

Replies from: chaosmosis
comment by chaosmosis · 2012-09-14T18:10:24.182Z · LW(p) · GW(p)

The forms of insanity that you can't tell if you're suffering from invalidate your interpretation that there are specific kinds of insanity you can rule out, no? Mainly though, I was aware that the example had issues, but I was trying to get a concept across in general terms and didn't want to muddle my point by getting bogged down in details or clarifications.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-14T18:22:19.889Z · LW(p) · GW(p)

The forms of insanity that you can't tell if you're suffering from invalidate your interpretation that there are specific kinds of insanity you can rule out, no?

I'm not sure exactly what you mean by invalidating my interpretation. If you mean that, because there are forms of insanity I can't tell if I'm suffering from, there are therefore no forms of insanity that I can rule out, then no, I don't think that's true.

And, please don't feel obligated to defend assertions you don't endorse upon reflection.

Replies from: chaosmosis
comment by chaosmosis · 2012-09-14T18:30:53.226Z · LW(p) · GW(p)

If you mean that, because there are forms of insanity I can't tell if I'm suffering from, there are therefore no forms of insanity that I can rule out, then no, I don't think that's true.

Why not?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-14T19:04:09.376Z · LW(p) · GW(p)

Well, for example, consider a form of insanity X that leads to paranoia but is not compatible with delusion.

Suppose I ask a randomly selected group of psychologists to evaluate whether I'm paranoid, and they all report that I'm not.

Now I ask myself, "am I suffering from X?"

I reason as follows:

  1. Given those premises, if I am paranoid, psychologists will probably report that I'm paranoid.
  2. If I'm not delusional and psychologists report I'm paranoid, I will probably experience that report.
  3. I do not experience that report.
  4. Therefore, if I'm not delusional, psychologists probably have not reported that I'm paranoid.
  5. Therefore, if I'm not delusional, I'm probably not paranoid.
  6. If I suffered from X, I would be paranoid but not delusional.
  7. Therefore, I probably don't suffer from X.

Now, if you want to argue that I still can't rule out X, because that's just a probabilistic statement, well, OK. I also can't rule out that I'm actually a butterfly. In that case, I don't care whether I can rule something out or not, but I'll agree with you and tap out here.

But if we agree that probabilistic statements are good enough for our purposes, then I submit that X is a form of insanity I can rule out.

Now, I would certainly agree that for all forms of insanity Y that cause delusions of sanity, I can't rule out suffering from Y. And I also agree that for all forms of insanity Z that neither cause nor preclude such delusions, I can't rule out suffering from (Z AND Y), though I can rule out suffering from Z in isolation.
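For concreteness, here is a minimal numeric sketch of steps 1-7 above, with made-up probabilities (nothing below is calibrated; it only shows the shape of the update):

```python
# Illustrative numbers only: a Bayesian update on "I experience no paranoia report".
p_x = 0.01                   # assumed prior probability of suffering from X
p_report_given_x = 0.95      # if I had X (paranoid, not delusional), I'd likely experience the report
p_report_given_not_x = 0.05  # otherwise, experiencing a paranoia report is unlikely

# Bayes' theorem on the observation "no report experienced":
p_no_report = p_x * (1 - p_report_given_x) + (1 - p_x) * (1 - p_report_given_not_x)
p_x_given_no_report = p_x * (1 - p_report_given_x) / p_no_report
print(f"P(X | no report) ~= {p_x_given_no_report:.4f}")  # ~0.0005: ruled out for practical purposes
```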

Replies from: chaosmosis
comment by chaosmosis · 2012-09-14T20:38:20.657Z · LW(p) · GW(p)

a form of insanity X that leads to paranoia but is not compatible with delusion.

But how would a possibly insane person determine that insanity X is a possible kind of insanity? Or, how would they determine that the Law of Noncontradiction is actually a thing that exists as opposed to some insane sort of delusion?

Now, if you want to argue that I still can't rule out X, because that's just a probabilistic statement, well, OK. I also can't rule out that I'm actually a butterfly. In that case, I don't care whether I can rule something out or not, but I'll agree with you and tap out here.

But if we agree that probabilistic statements are good enough for our purposes, then I submit that X is a form of insanity I can rule out.

I was talking about how we should regard unknowable puzzles (ignore them, mostly), like the butterfly thing, so I thought it was clear that I've been speaking in terms of possibilities this entire time. Obviously I'm not actually thinking that I'm insane. If I were, that'd just be crazy of me.

Also, this approach presumes that your understanding of the way probabilities work and of the existence of probability at all is accurate. Using the concept of probability to justify your position here is just a very sneaky sort of circular argument (unintentional, clearly, I don't mean anything rude by this).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-14T20:54:42.493Z · LW(p) · GW(p)

how would a possibly insane person determine that insanity X is a possible kind of insanity?

Perhaps they couldn't. I'm not sure what that has to do with anything.

Also, this approach presumes that your understanding of the way probabilities work and of the existence of probability at all is accurate. Using the concept of probability to justify your position here is just a very sneaky sort of circular argument

Sure. If I'm wrong about how probability works, then I might be wrong about whether I can rule out having X-type insanity (and also might be wrong about whether I can rule out being a butterfly).

Replies from: chaosmosis
comment by chaosmosis · 2012-09-14T21:40:48.175Z · LW(p) · GW(p)

Perhaps they couldn't. I'm not sure what that has to do with anything.

I didn't think that your argument could function on even a probabilistic level without the assumption that X-insanity is an objectively real type of insanity. On second thought, I think your argument functions just as well as it would have otherwise.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-14T21:56:13.156Z · LW(p) · GW(p)

If it's not an objectively real type of insanity, then I can certainly rule out the possibility that I suffer from it. If it is, then the assumption is justified.

comment by lloyd · 2012-09-14T18:56:00.228Z · LW(p) · GW(p)

Thanks for the welcome.

I raised this POV of logic (reason, or rationality when applied) because I saw a piece that correlates training reason with muscle training. If logic is categorically similar to a sense, then treat it metaphorically as such, I think. Improving one's senses is a little different from training a muscle, and is the more direct simile. Then there is the question of what logic is sensing. Sight perceives what we call light; so logic is perceiving 'the order' of things? The eventual line of thinking starts questioning the relationship of logic to intuition. I advocate the honing of intuition, but it is identical in process to improving one's reason. The gist being that intuition picks up on the same object that logic eventually describes, like the part of vision which detects movement in the visual field that is only detailed once the focal point is moved onto it.

As for vitalism, the life I speak of is to extend one's understanding of biological life - a self-directing organism - to see stars as having the same potential. The behavior of stars, and the structure of the universe, is constrained in the imagination to be subject to the laws of physics; the metaphor for a star in this frame is a fire, which is lit and burns according to predictable rules of combustion. The alternative is to imagine that the stars are the dogs, upon which the earth is a flea, and we are mites upon it. Why does this matter? I suppose it is just one of those world-view things which I think dictates how people feel about their existence. "We live in a dead universe subject to laws external to our being" predicates a view which sees external authority as natural and dismisses the vitality within all points which manifest this 'life'. I think the metaphor for the universe is closely tied to the ethos of the culture, so I raised this question.

Thanks for your thoughtful reply.

Replies from: Bugmaster, chaosmosis
comment by Bugmaster · 2012-09-14T20:36:17.448Z · LW(p) · GW(p)

I'm not sure what you mean by "self-directing". As I see it, "life" is yet another physical process, like "combustion" or "crystallization" or "nuclear fusion". Life is probably a more complex process than these other ones, but it's not categorically different from them. An amoeba's motion is directed, to be sure, but so is the motion of a falling rock.

Replies from: lloyd
comment by lloyd · 2012-09-14T23:53:35.821Z · LW(p) · GW(p)

An amoeba acts on its environment where a rock behaves according to external force. Life also has the characteristic of reproduction, which is not how processes like combustion or fusion begin or continue. There are attempts to create biological life from naught, and AI research has a goal which could be characterized as making something that is alive vs a dead machine - a conscious robot, not a car. I recognize that life is chemical processes, but I see - and I think the sciences are divided this way - a categorical difference between chemistry and biology. My position is that physics and chemistry, e.g., do not study a driving component of reality - that which drives life. If biological life is to be called a much greater complexity of basic chemical processes, then what drives the level of complexity to increase?

Is there a thread or some place where your position on life is expounded upon? If life is to be framed as a complex process on a spectrum of processes, I could understand, provided a definition of complexity is given and the spectrum reflects observations. In fact, spectrums seem to me to be more fitting maps than categories, but I am unaware of a spectrum that defines complexity so as to encompass both combustion and life.

Replies from: Bugmaster
comment by Bugmaster · 2012-09-15T00:08:31.915Z · LW(p) · GW(p)

An amoeba acts on its environment where a rock behaves according to external force.

The rock acts on its environment as well. For example, it could hold up other rocks. Once the rock falls, it can dislodge more rocks, or strike sparks. If it falls into a river or a stream, the rock could alter its course... etc., etc. Living organisms can affect their environments in different ways, but I see this as a difference in degree, not in kind.

Life also has the characteristic of reproduction, which is not how processes like combustion or fusion begin or continue.

Why is this important? All kinds of physical processes proceed in different ways; for example, combustion can release a massive amount of heat in a short period of time, whereas life cannot. So what?

...as making something that is alive vs a dead machine - a conscious robot, not a car.

Are we talking about life, or consciousness? Trees are alive, but they are not conscious. Of course, I personally believe that consciousness is just another physical process, so maybe it doesn't matter.

My position is that physics and chemistry, e.g., do not study a driving component of reality - that which drives life.

Technically they do not; biology does that (by building upon the discoveries of physics and chemistry). But I'm not sure why you think this is important.

then what drives the level of complexity to increase?

I don't think that complexity of living organisms always increases.

Is there a thread or some place where your position on life is expounded upon?

Well, you could start with those parts of the Sequences that deal with Reductionism. I don't agree with everything in the Sequences, but that still seems like a good start.

comment by chaosmosis · 2012-09-14T20:26:46.751Z · LW(p) · GW(p)

As for vitalism, the life I speak of is to extend one's understanding of biological life - a self-directing organism - to see stars as having the same potential.

I don't believe that even biological life is self-directing. Additionally, I don't understand how extending one's understanding of biological life to everything can even happen. If you expand the concept of life to include everything, then the concept of life becomes meaningless. Personally, whether the universe is alive or not, it's all the same to me.

The behavior of stars, and the structure of the universe, is constrained in the imagination to be subject to the laws of physics; the metaphor for a star in this frame is a fire, which is lit and burns according to predictable rules of combustion.

When you say that this behavior is "constrained in the imagination", you're not trying to imply that we're controlling or maintaining those constraints with our thoughts in any way, are you? That doesn't make sense because I am not telekinetic. How would you know that what you're saying is even true, as opposed to some neat sounding thing that you made up with no evidence? What shows that your claims are true?

The alternative is to imagine that the stars are the dogs, upon which the earth is a flea, and we are mites upon it. Why does this matter? I suppose it is just one of those world-view things which I think dictates how people feel about their existence.

If this is just an abstract metaphor, I've been confused. If so, I would have liked you to label it differently.

I don't understand why vitalism would make the universe seem like a better place to live. I'm also reluctant to label anything true for purposes other than its truth. Even if vitalism would make the universe seem like a better place to live, if our universe is not alive, then it doesn't make sense to believe in it. Belief is not a choice. If you acknowledge that the universe isn't alive then you lose the ability to believe that the universe is alive, unless you're okay with just blatantly contradicting yourself.

"We live in a dead universe subject to laws external to our being" predicates a view which sees external authority as natural and dismisses the vitality within all points which manifest this 'life'. I think the metaphor for the universe is closely tied to the ethos of the culture, so I raised this question.

I don't understand why you think determinism is bad. I like it. It's useful, and seems true.

You say that your view says that life is the source of the way things behave. Other than the label and the mysteriousness of its connotations, what distinguishes this from determinism? If it's not determinism, then aren't you just contending that randomness is the cause of all events? That seems unlikely to me, but even if it is the case, why would viewing people as controlled by "life" and mysterious randomness be a better worldview than determinism? I prefer predictability, as it's a prerequisite for meaning and context, as well as pragmatically awesome.

I really do strongly suggest that you read some of these: http://wiki.lesswrong.com/wiki/Sequences#Major_Sequences

You seem to be confusing your feelings with arguments, at some points in your comments.

comment by biased_tracer · 2012-09-12T21:49:15.815Z · LW(p) · GW(p)

Hi all! I'm Leonidas and I was also a lurker for quite some time. I forget exactly how I found Less Wrong, but most likely it was via Nick Bostrom's website, when I was reading about anthropics about a year ago. I'm an astronomer working on observational large-scale structure and I have a big interest in all aspects of cosmology. I also enjoy statistics, analyzing data, making inferences, and the associated computational techniques.

It was only during the final year of my undergraduate studies in physics that I consciously started to consider myself a rationalist and then began trying to improve my thinking. Even though I discovered Less Wrong years later, the excitement was still there, and I had great pleasure reading its posts and learning about a variety of subjects. I'm now looking forward to contributing to the discussions.

comment by erikerikson · 2012-09-06T21:16:57.639Z · LW(p) · GW(p)

I am Erik Erikson. By day I currently write patents and proofs of concept in the field of enterprise software. My chosen studies included neuro- and computer sciences, in pursuit of the understanding that can produce generally intelligent entities of equal to or greater than human intelligence, less our human limitations. I most distinctly began my "rationalist" development around the age of ten, when I came to doubt all truth, including my own existence. I am forever in debt to the "I think, therefore I am" idiom as my first piece of knowledge. I happened upon LW through singularity.org and appreciate the efforts here. Of particular interest to me is improved consideration of the goal I have formulated for AI (really for any sentient entity): the manifested unification of all ideals. I was pleased to find this related to the formulation of intelligence that appears commonly accepted here: "cross-domain optimization". However, I have also been concerned for some time about the mechanical bias that may be implicit: it seems clear that a system which functions through growth (the establishment of connections) as a result of correlated signals would be inherently and, of concern, incorrectly biased towards favoring the unification concept.

comment by Elec0 · 2012-09-03T09:06:19.752Z · LW(p) · GW(p)

Hello everyone. I've been lurking around this site for a while now. I found this site from HPMOR, as I'm sure a lot of people have. The fanfic was suggested to me by one of my friends who read it.

Random cliffnotes about myself: I'm a high school senior. I'm a programmer - I've been programming since I was 10; it's one of my favorite things to do and it's what I plan on doing for my career. I love reading, which I would imagine is a given for most people here. I've always been interested in how the universe and people work, and I want to know the why of everything I can.

comment by a_sandwich · 2012-09-03T22:17:37.767Z · LW(p) · GW(p)

Hello, I found LessWrong through a couple of Tumblr posts. I don't really identify as a rationalist but it seems like a sane enough idea. I look forward to figuring out how to use this site and maybe make some contributions. I found reading some of the sequences interesting, but I think I might just stick to the promoted articles. As of now I have no plans on figuring out the Bayes thing, although I did give it a try. My name is Andrew.

comment by DuncanF · 2012-09-02T10:21:22.628Z · LW(p) · GW(p)

Hello everyone

I've been lurking here for a while now but I thought it was about time I said "Hi".

I found Less Wrong through HPMOR, which I read even though I never read Rowling's books.

I'm currently working my way through the Sequences at a few a day. I'm about 30% through the 2006-2010 collection, and I can heartily recommend reading them in time order and on something like Kindle on your iPhone. ciphergoth's version suited me quite well. I've been making notes as I go along and sooner or later there'll be a huge set of comments and queries arriving all at once.

I have a long-standing love of expressing my beliefs with respect to probability, but reading through those first sequences has really sharpened my appreciation for the art.

I've been reading quite a lot of papers recently and had got to the point where I had read enough to be really worried about p ~ 0.05 - which I reasoned at the time meant there was a good chance something I'd read recently was wrong… and now I need to take into account that the p-value might be a complete mess in the first place. Anyone have a figure for how many papers published at p ~ 0.05 have a Bayesian probability of less than that?
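For what it's worth, here is a back-of-envelope sketch of the relevant calculation, in the spirit of Ioannidis's 2005 argument - the prior and power below are made-up numbers, not measurements:

```python
# What fraction of "p < 0.05" findings are actually true effects?
prior = 0.10   # assumed fraction of tested hypotheses that are true
power = 0.50   # assumed probability a real effect reaches significance
alpha = 0.05   # significance threshold

true_positives = prior * power
false_positives = (1 - prior) * alpha
ppv = true_positives / (true_positives + false_positives)
print(f"P(real effect | p < {alpha}) ~= {ppv:.2f}")  # ~0.53 with these assumptions
```

With those (pessimistic but not crazy) assumptions, nearly half of the significant findings would be false positives.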

What else can I tell you? I was raised in the Church of England, but I imagine I was fortunate in that representatives of the church told me, whilst I was still young, that it wasn't possible to answer my questions. In comparison with the rest of the world, that alone made the whole belief structure seem to be on pretty shaky ground.

I'm in the Cambridge area in the UK and have been lurking on their mailing list for a while but haven't said hello there yet.

I'm in my late thirties now and soon expecting to become a father for the first time. There is a shocking lack of rationality in and around childbirth, and significant low-hanging fruit to be had by being rational. I'll post about this later. Any other parents found some easy gains by reading the science? I'd love to hear about it.

I'm a software engineer and until recently a project manager for bespoke software projects for small businesses. Right now I'm trying to get some iPhone apps off the ground to add to the passive income flow so that I can spend as much time with my new child as possible.

Topics of interest to me at the moment are:

  • The rationality and practicalities of changing to a passive income stream.
  • Open access to government-paid science.
  • The practicalities of home schooling.
  • The practicalities of setting up some better memes for my child than the ones I finished my own childhood with.

comment by EclipseBureau · 2013-03-12T01:35:06.654Z · LW(p) · GW(p)

We are currently undertaking a study on popular perceptions of existential risk. Our goal is to create a publicly accessible index of such risks, which may then be used to inform and catalyze comprehension through the discussion generated around them.

If you have a few minutes, please follow the link to complete a brief, anonymous questionnaire - your input will be appreciated!

Survey Link : http://eclipsebureau-survey.questionpro.com/

Join us on Facebook: http://www.facebook.com/eclipse.bureau

comment by normalityrelief · 2013-02-18T23:21:26.557Z · LW(p) · GW(p)

Hi there, community! My name is Dave. Currently hailing from the Front Range in Colorado, I moved out here after 5 years with a Chicago non-profit - half as executive director - following a diagnosis of Asperger Syndrome (four years after being diagnosed with ADHD-I). That was three years ago. Much has happened in the interim, but long story short, I mercilessly began studying what we call AS & anything related I could find. After a particularly brutal first-time experience with hardcore passive-aggressivism (always two sides to every situation, but it doesn't work well when no one will talk about it :P), I became extremely isolated, & have been now for about a year. I'm in my second attempt to return to school via a great community college, but unfortunately the same difficulties as last term are getting in the way.

BUT, that's a different story! I've had this site recommended to me a few times now, because over the course of my isolation I've become completely preoccupied with all sorts of fun mental projects, ranging in topic from physics to consciousness to quantum mechanics to dance. My current big projects (I bounce around a loooooot) are creating a linear model for the evolution of cognitive development & showing in some way why I'm not sure I agree that time is the fourth dimension. Oh, also trying to develop a structure for understanding :)

After looking through a few of the welcome threads here, I'm excited to be here! Now all I have to do is keep consistent...

Replies from: shminux, shaih
comment by shminux · 2013-02-19T00:00:51.267Z · LW(p) · GW(p)

showing in some way why I'm not sure I agree that time is the fourth dimension

As long as you frame it as a question about your understanding of relativity, and about the validity of the relativity theory itself, sure, why not.

comment by shaih · 2013-02-18T23:33:21.526Z · LW(p) · GW(p)

Hello and welcome to LessWrong. Your goal of understanding time as the 4th dimension stuck out to me, in that it reminded me of a post that I found beautiful and insightful while contemplating the same thing: timeless physics has a certain beauty to it that resonates with me much better than 4th-dimensional time, and it sounds like something you would appreciate.

Replies from: shminux
comment by shminux · 2013-02-19T00:14:54.900Z · LW(p) · GW(p)

timeless physics has a certain beauty

Sure does, but don't let yourself get tempted by the Dark Side. Beauty is not enough; it's the ability to make testable predictions that really matters. And Eliezer's two favorite pets, timeless physics and many worlds, fail miserably by this metric. Maybe some day they will be a stepping stone to something both beautiful and accurate.

Replies from: shaih
comment by shaih · 2013-02-19T00:23:20.212Z · LW(p) · GW(p)

You have a very good point, and you have shown me a mistake I knew better than to make and will have to keep a closer eye on from now on.

That being said, beauty is not enough for acceptance into any realm of science, but thinking about beautiful concepts such as timeless physics could increase the probability of thinking up an original, testable theory that is true.

In particular, I'm thinking of how the notion of absolute time slowed down the discovery of relativity, while if someone had contemplated the beautiful notion of relative time, relativity could have been found much faster.

comment by RatedR · 2012-09-26T00:36:48.556Z · LW(p) · GW(p)

Hello! Michael and Amanda Connolly from Phoenix, Arizona here! We are looking for like-minded people in Arizona to start a meetup group with. We are working on a documentary on rational thinking! It's called Rated R for Rational.

http://www.indiegogo.com/RatedR?a=1224097

Shoot us an email if you live in Arizona!

comment by DeeElf · 2012-09-07T00:35:00.651Z · LW(p) · GW(p)

Just joined. Into: Hume, Nietzsche, J.S. Mill, William James, Aleister Crowley, Wittgenstein, Alfred Korzybski, Robert Anton Wilson, Paul K. Feyerabend, etc.... DeeElf

comment by Alex_Arendar · 2013-03-26T19:33:19.110Z · LW(p) · GW(p)

Hi, my name is Alex. I'm not as smart as the people posting articles here; the fact that I only passed the captcha on my 2nd attempt while registering here on LW proves this :) I studied math as a student, and now I work in IT. While typing this comment I was thinking about my purpose in spending time here and reading different info... and suddenly realized that I'm 29 already, and life is too short to afford thinking wrong and thinking slow. So I hope to improve myself, to be able to learn and understand more and more things. Cheers to everyone :)

comment by [deleted] · 2013-02-15T09:33:42.207Z · LW(p) · GW(p)

Hey. I'd like to submit an article. Please upvote this comment so that I may acquire enough points to submit it.

comment by Baruta07 · 2012-11-06T18:01:07.579Z · LW(p) · GW(p)

I am Alexander Baruta, a high-school student currently in the 11th grade (grade 12 math and biology). I originally found the site through Eliezer's blog. I am (technically) part of the school's robotics team (someone has to stop them from creating unworkable plans), undergoing Microsoft IT certification, and going through all of the psychology courses in as little time as possible (I'm currently enrolled in a self-directed learning school) so I can get to the stuff I don't already know. My mind is fact-oriented (I can remember the weirdest things with perfect clarity after only hearing them once), but I have trouble combining that recall with my English classes, and I have trouble remembering names. I am informally studying formal logic, programming, game theory, and probability theory (don't you hate it when the curriculum changes?). (I also have an unusual fondness for brackets, if you couldn't tell by now.)

Replies from: Baruta07
comment by Baruta07 · 2012-11-06T18:02:22.611Z · LW(p) · GW(p)

Sorry about that, the internet connection I am using occasionally does this sort of thing.

Replies from: army1987
comment by A1987dM (army1987) · 2012-11-07T01:30:46.635Z · LW(p) · GW(p)

If you reload the page after you've retracted a comment, you can delete it. (Who came up with this awful interface, anyway?)

Replies from: Alicorn
comment by Alicorn · 2012-11-07T07:13:40.080Z · LW(p) · GW(p)

awful interface

It has been asked that we be gentle in word choice when critiquing the site. Tricycle works hard, and time spent working on LW is donated. You can submit bug reports or PM Matt if you think something has been overlooked or have a better idea.

comment by [deleted] · 2012-10-12T06:28:45.763Z · LW(p) · GW(p)

A few years ago some of my friends and I were interested in futurism and how technology could make the world a better place, which brought us to the topics of transhumanism and the Singularity. I was aware of LessWrong, but it wasn't until last year, when I took a psychology course, that I got really interested in reading the blog. Just over a year ago I started reading LessWrong more frequently. I read a lot of stuff about the Singularity, existential risk, optimal philanthropy, epistemology, and cognitive science, both here and in lots of other places on the Internet.

However, it's gotten to the point where that stuff is too complicated for me to separate the wheat from the chaff, and I don't trust my own reason to reach good conclusions on very complicated topics in science and philosophy. I'm only vaguely aware of which heuristics to use and beware of, and of good rationality techniques. I know what Bayes' Theorem is, but I couldn't tell you how to use it well in a real-world situation. That's why I'm really focusing on the basics of LessWrong rationality, like reading the Sequences and learning relevant psychology, math, and decision theory, before I attempt any very important long chains of reasoning. I intend to read the Sequences and post any comments or questions I have in the corresponding Sequences Reruns article. I hope the welcoming assertion that little previous knowledge of science is required holds, because while I know more than the average person, I'm not an expert in any field.
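As an aside, here is a minimal worked example of the sort of real-world Bayes' Theorem use meant above, with made-up numbers (the classic medical-test case):

```python
# P(condition | positive test) via Bayes' theorem - all numbers illustrative.
prior = 0.01           # P(condition): 1% prevalence
sensitivity = 0.90     # P(positive | condition)
false_positive = 0.05  # P(positive | no condition)

p_positive = prior * sensitivity + (1 - prior) * false_positive
posterior = prior * sensitivity / p_positive
print(f"P(condition | positive) ~= {posterior:.2f}")  # ~0.15, far below the test's 90% sensitivity
```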

I used to have another account on this site, but I created a new one because the other one had lots of comments and discussion posts that related to icky personal details which are irrelevant and uncomfortable to have lying around, so I wanted a fresh start.

I intend to use this comment as a hub to post other milestones I achieve, e.g. finishing the sequences, etc., for reference.

Replies from: None
comment by [deleted] · 2012-10-12T06:53:35.332Z · LW(p) · GW(p)

Other relevant information:

  • I'm currently a college student who is at a loss for what to study, and I'm trying hard to switch into a STEM stream of one kind or another. My marks and general competence in most subject areas are pretty good. I'm looking to do something interesting (e.g., cool research, developing cool software, really neat chemistry/biotech/mechatronics development) or important (e.g., making lots of money to give away and/or to make my life more fun). I am willing, and in a very good position, to take risks, so I could try more than one thing if it tickles my fancy. Input or advice on this topic is invited.

  • I'm a 2nd-generation nontheist who has pretty much always been into skepticism and naturalism of some form or another, even as a child. However, reading LessWrong has opened my eyes to the value of questioning my own opinions a lot more, and has warned me about the dangers of mindkilling.

  • I suffer from lots of procrastination and akrasia. More generally, I have poor time- and life-management skills and habits, and I have suffered episodes of depression in the last couple of years. I'm currently on antidepressants and working through CBT with a therapist to finally make some progress on these problems. This is the biggest source of irrationality in my life, and I hope that the LessWrong Sequences (especially some of lukeprog's work) will help with it. Please suggest any other evidence-based approaches you think will help me feel better, get better, and stay better.

comment by [deleted] · 2012-07-21T17:18:06.336Z · LW(p) · GW(p)

Hello... sorry, but I was hoping someone could message me the location of the NYC meetup, which is in two hours.

comment by marcusmorgan · 2012-07-25T03:45:45.192Z · LW(p) · GW(p)

I am a new member and have been looking at blogs for the first time over the past few weeks. I have written a book, finished last month, which deals with many of the issues about reasoning discussed at this site, though I attempt to cut through them somewhat: there is so much potential in the facts out there to be ordered that, in my book, I don't spend a lot of time on the theory behind the reasoning I use to order them. I discuss reasoning, and many of the principles raised in posts here, but my interest is in reasonably framing the conditions of my hypotheses and making them clear, whatever they may be. For example, immediately before two particles collide we can fairly accurately predict what will happen, because our conditions are very closed; but nature has broad universal sweeps of properties in the four forces, and hypotheses about how those forces more generally structure matter (including biology, and humans in particular) are correspondingly broader.

My book tries to cover the entire sweep of nature, based upon the four forces of physics, and extends to an explanation of the emergence of biology on planetary surfaces. You are all most welcome to read it; it's a free download at http://home.iprimus.com.au/marcus60/1.pdf and well worth a quick flip to see if the coverage interests you. My website is www.thehumandesign.net (a non-spiritual Design), which will carry additional information, including a blog, in future. The book is entirely novel, written without any input from scientists or philosophers. I am a lawyer of long standing; I do my research by checking facts at the library (and now on the internet), and I simply constructed a view over a period of several decades, a bit like ongoing Sunday contemplations accumulated into a theory. I hope you enjoy it, and my posts at this site if I get an opportunity to contribute further.

Replies from: thomblake
comment by thomblake · 2012-07-25T14:02:02.623Z · LW(p) · GW(p)

Folks, a reminder that downvotes against introduction posts on the "Welcome" thread are frowned upon. There's nothing in the parent comment that should be sufficient to override that norm.

Replies from: wedrifid, DaFranker
comment by wedrifid · 2012-07-25T22:53:16.504Z · LW(p) · GW(p)

Folks, a reminder that downvotes against introduction posts on the "Welcome" thread are frowned upon. There's nothing in the parent comment that should be sufficient to override that norm.

Yes there is: the rest of his comments, which also advertise the book while attempting to shame Vladimir out of downvoting him for allegedly sinister emotional reasons. Making that sort of status challenge can be a useful way to establish oneself (or so the prison myth goes), but it often backfires, and it also waives the 'be gentle with the new guy' privileges.

People should consider themselves free to ignore thomblake's frowns and vote however they please in this instance. There is no remaining obligation to grant marcusmorgan immunity to downvotes.

Replies from: thomblake
comment by thomblake · 2012-07-26T14:05:48.510Z · LW(p) · GW(p)

I see two comments other than the above that "advertise" the book, in the sense that they actually link to it in a seemingly relevant context (and it's a free book, even). The other comments aren't nearly as bad as you're making them out to be, and they were downvoted appropriately.

Did I miss comments that were deleted / edited, or what? What was even a 'status challenge' in marcusmorgan's comments?

Replies from: wedrifid
comment by wedrifid · 2012-07-26T14:18:55.540Z · LW(p) · GW(p)

and they were downvoted appropriately.

Exactly.

comment by DaFranker · 2012-07-25T15:29:14.922Z · LW(p) · GW(p)

I suspect that this introduction was downvoted because, on a first reading, it feels like an advertising post filled with Applause Lights and other gimmicks (the feeling is particularly strong for me because I just finished reading the Mysterious Answers to Mysterious Questions sequence, though I had already read the majority of its individual posts in jumbled order).

A second reading sufficed to dismiss the feeling for me: upon randomly selecting five sentences that felt like gimmicks and estimating their intended meaning, it turned out they weren't so gimmicky at all. Even the word "emergence", often given as a prime example of a modern Mysterious Answer, seems to have been used properly here.

The oddity of that initial feeling of advertising and gimmickiness, and how easily it was dispelled, is enough to pique my curiosity, and I think I'll take some time to actually read the book now. Ironically, the only reason I even became aware of this post was seeing, in the recent comments, the reminder that downvoting it was frowned upon. Heh.

comment by Reality_Check · 2012-10-10T13:00:18.248Z · LW(p) · GW(p)

I was hoping to enter into a dialogue, but obviously my ideas are not welcome. I'll just go finish the conversations here: http://monkeyminds.hubpages.com/hub/Critiquing-Less-Wrong