Welcome to Less Wrong!

post by MBlume · 2009-04-16T09:06:25.124Z · score: 50 (50 votes) · LW · GW · Legacy · 2000 comments

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, or how you found us. Tell us how you came to identify as a rationalist, or describe what it is you value and work to achieve.

If you'd like to meet other LWers in real life, there's a meetup thread and a Facebook group. If you've your own blog or other online presence, please feel free to link it. If you're confused about any of the terms used on this site, you might want to pay a visit to the LW Wiki, or simply ask a question in this thread.  Some of us have been having this conversation for a few years now, and we've developed a fairly specialized way of talking about some things. Don't worry -- you'll pick it up pretty quickly.

You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this; it happened to many of us. If you've any questions about karma or voting, please feel free to ask here.

If you've come to Less Wrong to teach us about a particular topic, this thread would be a great place to start the conversation, especially until you've worked up enough karma for a top level post. By posting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood and what you might still need to take some time explaining.

A note for theists: you will find LW overtly atheist. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false. If you'd like to know how we came to this conclusion you may find these related posts a good starting point.

A couple technical notes: when leaving comments, you may notice a 'help' link below and to the right of the text box.  This will explain how to italicize, linkify, or quote bits of text. You'll also want to check your inbox, where you can always see whether people have left responses to your comments.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site.

(Note from MBlume: though my name is at the top of this page, the wording in various parts of the welcome message owes a debt to other LWers who've helped me considerably in working the kinks out)

2000 comments

Comments sorted by top scores.

comment by BecomingMyself · 2011-01-15T23:35:59.874Z · score: 30 (30 votes) · LW · GW

Hi, I am Alyssa, a 16-year-old aspiring programmer-and-polymath who found her way to the wiki page for Egan's Law from the Achron forums. From there I started randomly clicking on links that mostly ended up leading to Eliezer's posts. I was a bit taken aback by his attitude toward religion, but I had previously seen mention of his AI Box thing (where (a) he struck me as awesome, and (b) he said some things about "intelligence" and "wisdom" that caused me to label him as an ally against all those fools who hated science), and I just loved his writing, so I spent about a week reading his stuff alternately thinking, "Wow, this guy is awesome" and "Poor atheist. Doesn't he realize that religion and science are compatible?" Eventually, some time after reading Religion's Claim to be Non-disprovable, I came to my senses. (It is a bit more complicated and embarrassing than that, but you get the idea.)

That was several months ago. I have been lurking not-quite-continuously since then, and it slowly dawned on me just how stupid I had been -- and more importantly, how stupid I still am. Reading about stuff like confirmation bias and overconfidence, I gradually became so afraid to trust myself that I became an expert at recognizing flaws in my own reasoning, without being able to recognize truth or flaws in others' reasoning. In effect, I had artificially removed my ability to consciously classify (non-obvious) statements as true: the same gross abuse of humility I had read about. After a bit of unproductive agonizing over how to figure out a better strategy, I have decided I'm probably too lazy for anything but making samples of my reasoning available for critique by people who are likely to be smarter than me -- for example, by participating in discussion on Less Wrong, which in theory is my goal here. So, hi! (I have been tweaking this for almost an hour and will submit it NOW.)

comment by lukeprog · 2011-01-15T23:41:32.574Z · score: 4 (4 votes) · LW · GW

Welcome, Alyssa!

Finding out how "stupid" I am is one of the most important things I have ever learned. I hope I never forget it!

Also, congrats on seriously questioning your religion at your age. I didn't do so until much later.

comment by timtyler · 2011-01-15T23:59:06.570Z · score: 0 (4 votes) · LW · GW

I'm not sure Alyssa said she was religious!

comment by BecomingMyself · 2011-01-16T01:51:43.190Z · score: 1 (1 votes) · LW · GW

Now that I think of it I didn't say it explicitly, but I was. I called myself Catholic, but I had already rejected the Bible (because it was written by humans, of course) and concluded that God so loved His beautiful physics that He would NEVER EVER touch the universe (because I had managed to develop a fondness for science, though for some reason I did not yet accept e.g. materialism).

comment by Tesseract · 2011-01-17T00:24:30.164Z · score: 0 (0 votes) · LW · GW

That's pretty much Deism, I think. Not right, but not quite as wrong as some other possible approaches.

Welcome! I don't know how much or how systematically you've read, but if you're wondering about what makes something "true", you'll want to check out The Simple Truth (short answer: if it corresponds to reality), followed by Making Beliefs Pay Rent and What is Evidence.

But it sounds like you've made a very good start.

comment by [deleted] · 2012-10-15T00:33:49.228Z · score: 2 (2 votes) · LW · GW

You should check out the Less Wrong for high schoolers Facebook page.

comment by MartinB · 2011-01-16T00:03:17.176Z · score: 1 (1 votes) · LW · GW

Welcome!

It can be quite a big hammer to read it all at once. Good luck digging through it all.

You might also like some of the recommended books that are spread all over this site.

Martin

comment by [deleted] · 2011-01-17T00:34:45.728Z · score: 0 (0 votes) · LW · GW

Welcome, and please don't be shy about posting freely. As others have said, it's impressive that you're on the ball so young, and it'll be interesting to see what you share on LW.

comment by ata · 2011-01-16T00:07:33.435Z · score: 0 (0 votes) · LW · GW

Welcome, fellow aspiring programmer-and-polymath!

I'd say the best thing about noticing that you've been stupid is being able to distinctly notice when you're getting smarter, as it's happening. I love that feeling.

comment by Normal_Anomaly · 2010-11-14T04:01:16.525Z · score: 17 (17 votes) · LW · GW

My name's Normal Anomaly, and I'm paranoid about giving away personal information on the Internet. Also, I don't like to have any assumptions made about me (though this is likely the last place to worry about that), so I'd rather go without a gender, race, etc. Apologies for the lack of much personal data. I can say that my major interest is biology, although I am not yet anything resembling an expert. I eventually hope to work in life extension research. I’m an Asperger’s Syndrome Sci Fi-loving nerd, which is apparently the norm here.

I used to have religious/spiritual beliefs, though I was also a fan of science and was not a member of an organized religion. I believed it was important to be rational and that I had evidence for my beliefs, but I was rationalizing and refusing to look at the hard questions. A couple years ago, I was exposed to atheism and rationalism and have since been trying to make myself more reasonable/less insane. I found LW through Harry Potter and the Methods of Rationality a few months ago, and have been lurking and reading the sequences. I'm still scared of posting on here because it’s the first discussion forum where I have known myself to be intellectually outclassed.

I chose the name Normal Anomaly because in my everyday meatspace life I feel different from (read: superior to) everyone around me, but on LW I feel like an ordinary mortal trying to keep up with people talking over my head. Hopefully I've lurked long enough to at least speak the language, and I won't be an annoyance when I comment. I want to socialize with people superior to me; unfortunately for me, they tend to want the same.

In the time I've been lurking, I've started seriously considering cryonics and will probably sign up unless something else changes my mind. I think it's pretty likely that an AGI will be developed eventually, and if it ever is it definitely needs to be Friendly, but I have no idea when other than that I hope it’s in my lifetime, which I want to end only of my own choosing and possibly never.

comment by shokwave · 2010-11-14T15:05:30.646Z · score: 6 (6 votes) · LW · GW

I'm still scared of posting on here because it’s the first discussion forum where I have known myself to be intellectually outclassed.

I have found that some of the time you can make up for a (perceived) lack of intellect with a little work, and this is true (from my own experience) here on LessWrong: when about to comment on an issue, it pays big dividends to use the search feature to check for something related in previous posts with which you can refine, change, or bolster your position. Of the many times I have done it, twice I caught myself in grievous and totally embarrassing errors!

For what it's worth, commenting on LW is so far from normal conversation and normal internet use that most intellects haven't developed methods for it; they have to grind through mostly the same processes as everyone else - and nobody can actually tell if it took you five seconds or five minutes to type your reply. My own replies might be left in the comment box for hours, to be reread with a fresh mind later and changed entirely.

tl;dr Don't be afraid to comment!

comment by NancyLebovitz · 2010-11-14T16:38:35.940Z · score: 5 (5 votes) · LW · GW

For what it's worth, commenting on LW is so far from normal conversation and normal internet use that most intellects haven't developed methods for it

This is interesting-- LW seems to be pretty natural for me. I think the only way my posting here is different from anywhere else is that my sentences might be more complex.

On the other hand, once I had a choice, I've spent most of my social life in sf fandom, where the way I write isn't wildly abnormal, I think.

Anyone who's reading this, do you think what's wanted at LW is very different from what's wanted in other venues?

comment by Emile · 2010-11-15T19:45:13.104Z · score: 5 (5 votes) · LW · GW

I find writing on LW pretty 'normal', on par with some other forums or blog comments (though with possibly less background hostility and flamewars).

I suspect the ban on discussing politics does more to increase the quality of discourse here than the posts on cognitive bias.

comment by shokwave · 2010-11-15T05:22:30.610Z · score: 5 (5 votes) · LW · GW

Wow, that is interesting ... conditional on more people feeling this way (LW is natural), I might just have focused my intellect on rhetoric and nonreasonable convincing to the point that following LW's guidelines is difficult, and then committed the typical mind fallacy and assumed everyone had too.

comment by NihilCredo · 2010-11-15T07:26:28.647Z · score: 12 (14 votes) · LW · GW

Actually, I've come to notice that rhetoric and other so-called Dark Arts are still worth their weight in gold on LW, except when the harder subjects (math and logic) are at hand.

But LessWrong commenters definitely have plenty of psychological levers, and the demographic uniformity only makes them more effective. For a simple example, I guesstimate that, in just about any comment, a passing mention of how smart LessWrongers are is worth on average 3 or 4 extra karma points - and this is about as old as tricks can get.

comment by NancyLebovitz · 2010-11-15T19:24:55.714Z · score: 2 (4 votes) · LW · GW

Of course, LessWrongers are smarter than most people, but what's really striking is the willingness to update. And the modesty.

comment by Emile · 2010-11-16T09:29:34.318Z · score: 9 (11 votes) · LW · GW

Yup, our only flaw is modesty.

comment by taryneast · 2010-12-12T15:13:46.794Z · score: 3 (3 votes) · LW · GW

I've noticed that karma points accrue for witty quips too.

comment by Jack · 2010-11-15T09:34:52.443Z · score: 2 (8 votes) · LW · GW

But LessWrongers are really smart.

comment by wnoise · 2010-11-15T19:20:52.190Z · score: 3 (3 votes) · LW · GW

That is a true but banal observation that shouldn't be worth karma. Of course, so was this response. And so forth.

comment by taryneast · 2010-12-12T15:21:22.678Z · score: 3 (3 votes) · LW · GW

Anyone who's reading this, do you think what's wanted at LW is very different from what's wanted in other venues?

Yes. I get the sense that here you are expected to at least try for rigor.

In other venues - it's totally ok to randomly riff on a topic without actually having thought deeply about either the consequences, or whether or not there's any probability of your idea actually having any basis in reality.

comment by katydee · 2010-11-15T20:04:09.037Z · score: 3 (3 votes) · LW · GW

LW is substantially higher-level than most (all?) forums that I've been to, including private ones and real name only ones. The standard of discourse just seems better here in general.

comment by Swimmer963 · 2011-04-14T02:13:37.008Z · score: 1 (1 votes) · LW · GW

Anyone who's reading this, do you think what's wanted at LW is very different from what's wanted in other venues?

I haven't noticed, but this is the first online community I've belonged to. I'm used to writing fiction, which may affect the way I post here, but if it does, I don't notice it. Commenting feels natural. I don't try to make my sentences complex; if anything, I try to make them as simple as they can be while still conveying my point. And at the very least, my comments and posts aren't drastically downvoted.

comment by wnoise · 2010-11-15T06:21:13.121Z · score: 1 (1 votes) · LW · GW

LW feels fairly normal to me as well. It is different than my experience of (most) other forums, but that's because I adjust myself to be more explicit on other forums about things that I feel should be taken for granted, including all of common sense data, a materialistic worldview, and minor inferential steps. This lets me get to the point rather easily here without having to worry (as much) about being misunderstood.

comment by Randaly · 2010-12-24T07:19:35.092Z · score: 0 (0 votes) · LW · GW

Are you talking about the level of rationality, about the expected level (or types) of knowledge, or the grammar and sentence structure?

For obvious reasons, the level of rationality expected here is far higher than (AFAIK) anywhere else on the internet.

The expected knowledge at LW...is probably middling to above average for me. More relevantly, much more knowledge of science, and in particular the sciences that contribute to rationality (or, more realistically, the ones touched on in the sequences), which tend to be fairly 'hard'. I've found a much higher knowledge of, e.g. history, classical philosophy, politics/political science, and other 'softer' disciplines is expected elsewhere.

As for grammar, I'd say that LW is middling to below average, though this may be availability bias: LW is much larger than most of the other internet communities I belong to, so it could have a higher number of errors while still having a better average level of grammar.

comment by Emile · 2010-12-24T08:34:23.617Z · score: 4 (4 votes) · LW · GW

As for grammar, I'd say that LW is middling to below average

YouTube, from its size, probably has comments closer to "average".

comment by wedrifid · 2010-12-24T07:29:10.737Z · score: 4 (4 votes) · LW · GW

The expected knowledge at LW...is probably middling to above average for me. More relevantly, much more knowledge of science, and in particular the sciences that contribute to rationality (or, more realistically, the ones touched on in the sequences), which tend to be fairly 'hard'. I've found a much higher knowledge of, e.g. history, classical philosophy, politics/political science, and other 'softer' disciplines is expected elsewhere.

I presume you are averaging over a high-sophistication sample of the internet, not the internet at large.

comment by [deleted] · 2010-12-24T00:33:47.712Z · score: 0 (0 votes) · LW · GW

This is the first forum on the Internet I've been a member of, but the standards of rigor and precision of language expected here are the same as the ones my friends and I expect in our conversations.

comment by Alicorn · 2010-11-14T14:46:38.049Z · score: 5 (5 votes) · LW · GW

I'd rather go without a gender

Do you have a preferred set of gender-neutral pronouns?

comment by Jack · 2010-11-14T09:43:14.154Z · score: 3 (3 votes) · LW · GW

Also, I don't like to have any assumptions made about me (though this is likely the last place to worry about that), so I'd rather go without a gender, race, etc.

FYI, this had a "don't think of a pink elephant" effect on me. I immediately made guesses about your gender, race and age. I'm betting I'm not the only one. Sorry!

Anyway welcome! Sounds like you'll fit right in. Don't be too scared to comment, especially if it is just to ask a question (I don't recall ever seeing a non-sarcastic question downvoted).

comment by Carinthium · 2010-11-14T09:42:20.490Z · score: 1 (1 votes) · LW · GW

Mightn't you be discriminated against for having Asperger's Syndrome? There is presumably some risk of that, even here.

comment by Jack · 2010-11-14T09:47:41.029Z · score: 12 (12 votes) · LW · GW

I sometimes feel discriminated against here for not being autistic enough.

comment by AdeleneDawner · 2010-11-14T21:10:36.309Z · score: 2 (2 votes) · LW · GW

Can you, or others, give some examples of this?

I don't doubt you, but this is an area where I, and other auties, seem likely to be less well calibrated - we tend to encounter discrimination often enough that it can come to seem like a normal part of interacting with people, rather than something that we should avoid doing. Being made aware of it when that's the case is then likely to be useful to those of us who'd like to recalibrate ourselves.

comment by Jack · 2010-11-14T22:06:27.210Z · score: 6 (6 votes) · LW · GW

Er. For example, it is really hard to communicate here without being totally literal! And people don't get my jokes!:-)

I wasn't complaining. I was trying to point out that the risk of being discriminated against for having Aspergers Syndrome here was very low given the high number of autism spectrum commenters here and the general climate of the site. I thought I was making a humorous point about the uniqueness of Less Wrong, like "We're so different from the rest of the internet; we discriminate against neurotypicals! Take that rest of the world!" while also sort of engaging in collective self-mockery "Less Wrong is a really autistic place."

I really hope the upvotes are from people who chuckled, and not sympathy for an oppressed minority (in any case I'm like a 26 on the Baron-Cohen quiz).

Sorry if I alarmed anyone. *Facepalm*

comment by AdeleneDawner · 2010-11-14T22:48:02.339Z · score: 2 (2 votes) · LW · GW

I did chuckle, actually, but that's not mutually exclusive with it being a true statement that I haven't previously noticed the truth of. It's better to check than to assume, per my values. :)

comment by Kingreaper · 2010-12-12T15:41:46.723Z · score: 1 (1 votes) · LW · GW

I really hope the upvotes are from people who chuckled, and not sympathy for an oppressed minority (in any case I'm like a 26 on the Baron-Cohen quiz).

I upvoted due to chuckling, because it contains a nugget of truth.

I don't believe that neurotypicals are oppressed here, but I can certainly see that NTs would feel marginalised in the same way that auts can feel marginalised in normal social scenes.

I probably go below 26 on the Baron-Cohen test sometimes (I normally lie at 31, but a recent bout of depression has had me at ~38), but if so, I've never taken it at such a time (well, I wouldn't expect to; I'd be too busy socialising).

comment by Normal_Anomaly · 2010-11-15T00:02:45.302Z · score: 1 (1 votes) · LW · GW

I got that you may have been making a joke, but I wasn't sure how much truth was behind it. Now that I know it was a joke, I do find it funny.

comment by NancyLebovitz · 2010-11-14T09:23:26.946Z · score: 1 (1 votes) · LW · GW

That's an interesting choice to not give personal information. Do you find that people tend to jump to conclusions about you? Do you usually tell them that you aren't giving them that information?

comment by Normal_Anomaly · 2010-11-14T21:28:27.066Z · score: 2 (2 votes) · LW · GW

I don't really know how to deal with multiple replies without making six different comments and clogging the thread, so I'm responding to everyone upthread of me in reverse order.

Nancy: I lurk on a lot more sites than I comment, so I don't really have the experience to answer those questions. This is the first site I've joined where people give away as much info as they do.

Jack: I'm sorry you're discriminated against and I'll try not to do it. Also, like I said, I rarely get on forums, so I didn't know about the "don't think of a pink elephant effect". I'm glad you pointed it out.

Carinthium: I'm happy with my Asperger's; I wouldn't give up the good parts to get rid of the bad parts. I've never encountered discrimination on that score, so it didn't really occur to me. Besides, it's the sort of thing that will probably be visible in my comments.

Shokwave: Thanks for the reassurance. I do find the conversation here unique, in content and in tone.

Alicorn: I like e for the subject case, en for the object, and es for possessive, but I don't use them in meatspace or other forums as much as in my thoughts because it confuses people. I'll probably use them here. What do you think?

comment by Alicorn · 2010-11-14T22:05:54.617Z · score: 3 (3 votes) · LW · GW

Alicorn: I like e for the subject case, en for the object, and es for possessive, but I don't use them in meatspace or other forums as much as in my thoughts because it confuses people. I'll probably use them here. What do you think?

I'll use those pronouns for you if you prefer them. When I'm picking gender-neutral pronouns on my own I usually use some combination of Spivak and singular "they".

comment by lsparrish · 2010-11-14T04:26:18.546Z · score: 1 (3 votes) · LW · GW

Welcome! One thing you can easily do without being a super-genius is spread more accurate ideas about cryonics. I get a lot of mileage out of Google Alerts and Yahoo Answers for this purpose. I still don't have arrangements myself, but I certainly plan to.

comment by EStokes · 2009-12-19T23:58:59.307Z · score: 17 (17 votes) · LW · GW

I'm Ellen, age 14, student, planning to major in molecular biology or something like that. I'm not set on it, though.

I think I was browsing wikipedia when I decided to google some related things. I think I found some libertarian or anarchist blog that then had a link to Overcoming Bias or Lesswrong. Or I might've seen the word transhumanism on the wiki page for libertarianism and googled it, with it eventually leading here somehow. My memory is fuzzy as it was pretty irrelevant to me.

I'm an atheist, and have been for a while, as is typical for this community. I wasn't brought up religiously, so it was pretty much untheism that turned into atheism.

My rationalist roots... I've always wanted to be right, of course. Partly because I could make mistakes from being wrong, partly because I really, really hated looking stupid. Then I figured that I couldn't know if I was right unless I listened to the other side, really listened, and was careful. (Not enough people do even this. People are crazy, the world is mad. Angst, angst.) I found lesswrong which has given me tools to much more effectively do this. w00t.

I'm really lazy. Curse you, akrasia!

It should be obvious how I came up with my username. Aren't I original?

Some other hobbies I have are gaming and anime/manga. Amusingly enough, I barely ever watch any anime. The internet is very distracting.

Edit: Some of this stuff is outdated. I don't plan to major in molecular biology, for one, and I don't like how I wrote the rationalist roots part. Meh. I doubt anyone is going to see this, but I'm 16 now and plan to major in Computer Science.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-20T23:40:11.536Z · score: 4 (4 votes) · LW · GW

Welcome on board! You're a key segment of my target audience, so please speak up if you have any thoughts on things I could have done better in my writing.

comment by Kevin · 2010-02-17T07:16:49.793Z · score: 3 (3 votes) · LW · GW

I strongly recommend people go to school for something they find interesting, but since I don't think it's commonly known information, I would like to note that salaries for biologists are lower than for other scientists. Lots more people graduate with PhDs in biology than PhDs in physics which really drives down the salaries for biologists that don't have tenure. Though if you plan on going to professional school (medical school, business school, etc.), a molecular biology degree is a good thing to have if you enjoy molecular biology. Again, I really think people should go to school for something they like, but if you want to make a lot of money, don't become a researching biologist. Biology researchers with MD's do a lot better financially.

comment by EStokes · 2011-03-18T21:49:08.641Z · score: 1 (1 votes) · LW · GW

I know it's been over a year, but thanks :)

comment by Zack_M_Davis · 2009-12-20T00:16:01.822Z · score: 0 (0 votes) · LW · GW

It should be obvious how I came up with my username. Aren't I original?

Apparently not. O, but welcome!

comment by mni · 2009-07-24T21:41:16.399Z · score: 16 (22 votes) · LW · GW

Hello.

I've been reading Less Wrong from its beginning. I stumbled upon Overcoming Bias just as LW was being launched. I'm a young mathematician (an analyst, to be more specific) currently working towards a PhD and I'm very interested in epistemic rationality and the theory of altruist instrumental rationality. I've been very impressed with the general quality of discussion about the theory and general practice of truth-seeking here, even though I can think of places where I disagree with the ideas that I gather are widely accepted here. The most interesting discussions seem to be quite old, though, so reviving those discussions out of the blue hasn't felt like - for lack of a better word - a proper thing to do.

There are many discussions here that I don't care about. A large proportion of people here are programmers or otherwise from a CS background, and that colors the discussions a lot. Or maybe it's just that the prospect of an AGI in the near future doesn't seem at all likely to me. Anyway, the AI/singularity stuff, the tangentially related topics that I bunch together with them, and approaching rationality topics from a programmer's point of view I just don't care about. Not very much, at least.

The self-help stuff, "winning is everything" and related stuff I'd rather not read. Well, I do my best not to. The apparent lack of concern for altruism in those discussions makes me even wish they wouldn't take place here in the first place.

And then there are the true failings of this community. I had been thinking of registering and posting in some threads about the more abstract sides of rationality, but I must admit I eventually got around to registering and posting because of the gender threads. But there's just so much bullshit going on! Evolutionary psychology is grossly misapplied (1). The obvious existence of oppressive cultural constructs (2) is flatly denied. The validity of anecdotes and speculation as evidence is hardly even questioned. The topics that started the flaming have no reason of even being here in the first place. This post pretty well sums up the failures of rationality here at Less Wrong; and that post has been upvoted to 25! Now, the failings and attitudes that surfaced in the gender debate have, of course, been visible for quite some time. But that the failures of thought seem so common has made me wonder if this community as a whole is actually worth wasting my time for.

So, in case you're still wondering, what has generously been termed "exclusionary speech" really drives people away (3). I'm still hoping that the professed rationality is enough to overcome the failure modes that are currently so common here (4). But unfortunately I think my possible contributions won't be missed if I rid myself of wishful thinking and see it's not going to happen.

It's quite a shame that a community with such good original intentions is failing after a good start. Maybe humans simply won't overcome their biases (5) yet in this day and age.

So. I'd really like to participate in thoughtful discussions with rationalists I can respect. For quite a long time, Less Wrong seemed like the place, but I just couldn't find a proper place to start (I dislike introductions). But now as I'm losing my respect for this community and thus the will to participate here, I started posting. I hope I can regain the confidence in a high level of sanity waterline here.

(Now a proper rationalist would, in my position, naturally reconsider his own attitudes and beliefs. It might not be surprising that I didn't find all too much to correct. So I might just as well assume that I haven't been mind-killed quite yet, and just make the post I wanted to.)

EDIT: In case you felt I was generalizing with too much confidence - and as I wrote here, I agree I was - see my reply to Vladimir Nesov's reply.

(1) I think failing to control for cultural influences in evolutionary psychology should be considered at least as much of a fail as postulating group selection. Probably more so.

(2) Somehow I think phrases like "cultural construct", especially when combined with qualifiers like "oppressive", trigger immediate bullshit alarms for some. To a certain extent, it's forgivable, as they certainly have been used in conjunction with some of the most well-known anti-epistemologies of our age. But remember: reversing stupidity doesn't make you any better off.

(3) This might be a good place to remind the reader that [our kind can't cooperate](http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/). (This is actually referring to many aspects of the recent debate, not just one.)

(4) Yes, I know, I can't cooperate either.

(5) Overcoming Bias is quite an ironic name for that blog. EDIT: This refers exclusively to many of Robin Hanson's posts about gender differences I have read. I think I saw a post linking to some of these recently, but I couldn't find a link to that just now. Anyway, this footnote probably went a bit too far.

comment by SoullessAutomaton · 2009-07-24T23:33:58.199Z · score: 6 (8 votes) · LW · GW

I appreciate your honest criticisms here, as someone who participated (probably too much) in the silly gender discussion threads.

I also encourage you to stay and participate, if possible. Despite some missteps, I think there's a lot of potential in this community, and I'd hate to see us lose people who could contribute interesting material.

comment by orthonormal · 2009-07-24T22:43:48.476Z · score: 6 (12 votes) · LW · GW

Somehow I think phrases like "cultural construct", especially when combined with qualifiers like "oppressive", trigger immediate bullshit alarms for some. To a certain extent, it's forgivable, as they certainly have been used in conjunction with some of the most well-known anti-epistemologies of our age. But remember: reversing stupidity doesn't make you any better off.

Upvoted for this in particular.

comment by [deleted] · 2009-07-24T22:22:20.458Z · score: 4 (4 votes) · LW · GW

Interesting. You provide one counterexample to my opinion that the biased language wasn't driving away readers. I now have reason to believe I might have been projecting too much.

comment by Vladimir_Nesov · 2009-07-25T12:16:03.892Z · score: 3 (3 votes) · LW · GW

The evils of in-group bias are getting at me. I felt a bit of anger when reading this comment. Go figure; I rarely feel noticeable emotions, even in response to dramatic events. The only feature that could trigger that reaction seems to be the dissenting theme of this comment, the way it breached the normal narrative of the game of sane/insane statements. I wrote a response after a small time-out; I hope it isn't tainted by that unfortunate reaction.

comment by Wei_Dai · 2009-07-25T13:07:16.185Z · score: 6 (6 votes) · LW · GW

I don't think it's in-group bias. If anything, people are giving mni extra latitude because he or she is seen as new here.

If an established member of the community were to make the same points, that much of the discussion is uninteresting or bullshit, that the community is failing and maybe not worth "wasting" time for, and to claim to have interesting things to say but make excuses for not actually saying them, I bet there would be a lot more criticism in response.

comment by Vladimir_Nesov · 2009-07-25T13:16:33.001Z · score: 2 (2 votes) · LW · GW

As I wrote, anger is an improbable reaction for me, and there doesn't seem to be anything extraordinarily angering about that comment, so I can't justify that emotion appearing in this particular instance. The fact that the poster isn't a regular might be a factor as well.

comment by MrHen · 2009-07-24T21:50:42.855Z · score: 3 (3 votes) · LW · GW

Welcome. :)

One thing I hope you have noticed is that there are different subgroups of people within the community that like or dislike certain topics. Adding content that you prefer is a good way to see more growth in those topics.

comment by [deleted] · 2011-09-16T05:45:09.658Z · score: 1 (3 votes) · LW · GW

mni, I followed in your footsteps years later, and then dropped away, just as you did. I came back after several months to look for an answer to a specific question -- stayed for a bit, poking around -- and before I go away again, I'd just like to say: if this'd been a community that was able to keep you, it probably would have kept me too.

You seem awesome. Where did you go? Can I follow you there?

comment by Nisan · 2011-09-16T06:20:15.340Z · score: 1 (1 votes) · LW · GW

I see people leave Less Wrong for similar reasons all the time. In my optimistic moods, I try to understand the problem and think up ways to fix it. In my pessimistic moods, I conclude that this blog and its meetups were doomed from the start, that the community will retain only those women who are already dating people in the community, and that the whole thing will end in a whimper.

comment by shokwave · 2011-09-16T06:29:08.241Z · score: 2 (2 votes) · LW · GW

This needs to be a primary concern during the setting-up of the rationality spin-off SIAI is planning. It needs to be done right, at the beginning.

comment by Z_M_Davis · 2009-07-24T23:08:35.516Z · score: 1 (3 votes) · LW · GW

I'm still hoping that the professed rationality is enough to overcome the failure modes that are currently so common here[.] But unfortunately I think my possible contributions won't be missed if I rid myself of wishful thinking and see it's not going to happen. [...] I'd really like to participate in thoughtful discussions with rationalists I can respect. For quite a long time, Less Wrong seemed like the place, but I just couldn't find a proper place to start (I dislike introductions). But now, as I'm losing my respect for this community and thus the will to participate here, I've started posting. I hope I can regain confidence in a high sanity waterline here.

Oh, please stay!

comment by Vladimir_Nesov · 2009-07-25T12:15:47.755Z · score: 0 (6 votes) · LW · GW

I assume that you are overconfident about many of the statements you made (and/or underestimate the inferential gap). I agree with some things you've said, but about some of the things you've said there seems to be no convincing argument in sight (either way), and so one shouldn't be as certain when passing judgment.

comment by Z_M_Davis · 2009-07-26T05:13:15.487Z · score: 2 (4 votes) · LW · GW

I agree with some things you've said, but about some of the things you've said there seems to be no convincing argument in sight

Downvoted for lack of specifics.

comment by mni · 2009-07-27T10:29:47.683Z · score: 1 (5 votes) · LW · GW

I think I understand your point about overconfidence. I had thought about the post for a day or two but I wrote it in one go, so I probably didn't end up expressing myself as well as I could have. I had originally intended to include a disclaimer in my post, but for reasons that now seem obscure I left it out. When making statements as strong and sweeping as mine, ambiguity should be minimized far more thoroughly than I managed.

So, to explain myself a little bit better: I don't hold the opinion that what I called "bullshit" is common enough here to make it, in itself, a "failing of this community". The "bullshit" was, after all, limited only to certain threads and to certain individuals. What I'm lamenting and attributing to the whole community is a failure to react to the "bullshit" properly. Of course, that's a sweeping generalization in itself - certainly not everyone here failed to react in what I consider a proper way. But the widest consensus in the multitude of opinions seemed to be that the reaction might be hypersensitivity, and that the "bullshit" should be discouraged only because it offends and excludes people (and not because it offends and excludes people for irrational reasons).

And as for overconfidence about my assessment of the "bullshit" itself, I don't really want to argue about that, any more than I'd want to argue with people who think atheists should be excluded from public office. (Can you imagine an alternate LW in which the general consensus was that this is a reasonable, though extreme, position to take? That might give an only slightly exaggerated example of how bizarrely out of place I considered the gender debate to be.) If pressed, I will naturally agree to defend my statements. But I wouldn't really want to have to, and restarting the debate probably isn't in anyone else's best interests either. So, I'll just have to leave the matter as something that, in my perspective, lessens my appreciation for the level of discourse here in quite a disturbing way. Still, that doesn't mean LW wouldn't get the best marks from me for rationality among the internet communities I know, or that a lowered single value for "the level of discourse" has lessened my perception of the value of other contributions here.

Now, the latest top-level post critiquing Bayesianism looks quite interesting; I think I'd like to take a closer look at that...

comment by free_rip · 2011-01-27T04:43:46.933Z · score: 15 (15 votes) · LW · GW

Hi, I'm Zoe. I found this site in a round-about way after reading Dawkins's The God Delusion and searching for some things related to it. There was a comment in a forum mentioning Less Wrong and I was interested to see what it was.

I've been mainly lurking for the past few months, reading the sequences and some of the top posts. I've found that while I understand most of it, my high-school level math (I'm 16) is quite inadequate, so I'm working through the Khan Academy to try and improve it.

I'm drawn to rationalism because, quite simply, it seems like the world would be a better place if people were more rational and that has to start somewhere. Whatever the quotes say, truth is worthwhile. It also makes me believe in myself more to know that I'm willing and somewhat able to shift my views to better match the territory. Maybe someday I'll even advance from 'somewhat' into plain ol' 'able'.

My goals here, at this point, aren't particularly defined. I find the articles and the mission inspiring and interesting and think that it will help me. Maybe when I've learnt more I'll have a clearer goal for myself. I already analyze everything (to the point where many a teacher has been quite annoyed), so I suppose that's a start. I'm looking forward to learning more and seeing how I can use it all in my actual life.

Cheers, Zoe

comment by [deleted] · 2011-12-20T16:59:51.283Z · score: 2 (2 votes) · LW · GW

Welcome!

I hope that now, a few months later, you still find some utility in our community. Overall, I just wanted to chime in and say good luck in getting sane in your lifetime; it's something all of us here strive for, and it's far from easy. :)

comment by free_rip · 2011-12-20T17:26:26.973Z · score: 5 (5 votes) · LW · GW

Thank you! I am still enjoying the site - there's so much good stuff to get through. I've read most of the sequences and top posts now, but I'm still in the (more important, probably) process of compiling a list of all the suggested activities/actions, or any I can think of in terms of my own life and the basic principles, for easy reference to try when I have some down-time.

comment by thomblake · 2011-12-20T18:35:50.876Z · score: 5 (5 votes) · LW · GW

compiling a list of all the suggested activities/actions

Such a list should be worth at least posting to discussion, if you finish it.

comment by TheatreAddict · 2011-07-08T05:51:49.682Z · score: 11 (11 votes) · LW · GW

Hello everyone,

My name is Allison, and I'm 15 years old. I'll be a junior next year. I come from a Christian background, and consider myself to also be a theist, for reasons that I'm not prepared to discuss at the moment... I wish to learn how to view the world as it is, not through a tinted lens limited by my own experiences and background.

While I find most everything on this site to be interesting, I must confess a particular hunger towards philosophy. I am drawn to philosophy as a moth is to a flame. However, I am relatively ignorant about pretty much everything, something I'm attempting to fix. I have a slightly above average intelligence, but nothing special. In fact, compared to everyone on this site, I'm rather stupid. I don't even understand half of what people are talking about half the time.

I'm not a science or math person; although I find them interesting, my strengths lie in English and theatre arts. I absolutely adore theatre, not that this really has much to do with rationality. Anyway, I kind of want to get better at science and math. I googled the double slit experiment, and I find it... captivating. Quantum physics holds a special kind of appeal to me, but unfortunately, is something that I'm not educated enough to pursue at the moment.

My goals are to become more rational, learn more about philosophy, gain a basic understanding of math and science, and to learn more about how to refine the human art of rationality. :)

comment by KPier · 2011-07-08T06:05:34.761Z · score: 2 (2 votes) · LW · GW

Welcome! Encountering Less Wrong as a teenager is one of the best things that ever happened to me. One of the most difficult techniques this site can teach you, changing your mind, seems to be easier for younger people.

Not understanding half the comments on this blog is about standard for a first visit to the site, but you aren't stupid; if you stick with it you'll be fluent before you know it. How much of the site have you read so far?

comment by TheatreAddict · 2011-07-08T07:00:48.365Z · score: 1 (1 votes) · LW · GW

Yeah, I mean, history shows that even when people think they're right, they can still be wrong. So if I'm proved wrong, I'll admit it; there's no point holding onto an argument that's been proven scientifically wrong. :3

Hmm, I've darted around here and there, I've read a few of the sequences, and I'm continuing to read those. I've read How to Actually Change Your Mind. I've attempted to read more difficult stuff involving Bayes' theorem, but it pretty much temporarily short-circuited my brain. Hahh.

comment by TheatreAddict · 2011-07-09T06:11:56.937Z · score: 3 (3 votes) · LW · GW

Edit: I've read most of the sequence, Mysterious Answers to Mysterious Questions.

comment by TheatreAddict · 2011-07-08T05:54:48.773Z · score: 2 (2 votes) · LW · GW

Ahh! I forgot, I learned about this site through Eliezer Yudkowsky's fanfiction, Methods of Rationality. :3 A good read.

comment by [deleted] · 2011-12-20T16:20:30.651Z · score: 1 (1 votes) · LW · GW

While I find most everything on this site to be interesting, I must confess a particular hunger towards philosophy. I am drawn to philosophy as a moth is to a flame. However, I am relatively ignorant about pretty much everything, something I'm attempting to fix. I have a slightly above average intelligence, but nothing special. In fact, compared to everyone on this site, I'm rather stupid. I don't even understand half of what people are talking about half the time.

LessWrong is basically a really good school of philosophy.

And while you may hear some harsh words about academic philosophy (that stuff, at least most of what was written in the 20th century, is dull anyway), reading some of the classics can be really fun and even useful for understanding the world around you (because so many of those ideas, sometimes especially the wrong ones, are baked into our society). I started with Plato right after my 15th birthday, continued reading philosophy all through high school instead of studying, and still occasionally take some time to read old philosophy now that I'm in college.

Concerning intelligence, don't be misled by the polls that return self-reported IQs in the ~140 range; for active participants it's probably a good 20 points lower, and for average readers 5 points below that.

As for relevant math, or studying math in general, just ask in the open threads! LWers are helpful when it comes to these things. There are even people offering dedicated math tutoring, like Patrick Robotham or, as of recently, me.

comment by kilobug · 2011-10-18T18:46:35.063Z · score: 1 (1 votes) · LW · GW

Welcome here!

Don't underestimate yourself too much; being here and spending time reading the Sequences at your age is already something great :) And if you don't understand something, there's no shame in that. Don't hesitate to ask questions on the points that aren't clear to you; people here will be glad to help you!

As for quantum physics, I hope you'll love Eliezer's QM Sequence. It's by far the clearest introduction to QM I've ever seen, and doesn't require too much maths.

comment by wallowinmaya · 2011-04-20T22:04:01.450Z · score: 11 (11 votes) · LW · GW

hi everybody,

I'm 22, male, a student, and from Germany. I've always tried to "perceive whatever holds the world together in its inmost folds", to know the truth, to grok what is going on. Truth is the goal, and rationality the art of achieving it. So for this reason alone Less Wrong is quite appealing.

But in addition to that, Yudkowsky and Bostrom convinced me that existential risks, transhumanism, the singularity, etc. are probably the most important issues of our time.

Furthermore, this is the first community I've ever encountered in my life that makes me feel rather dumb. (I can hardly follow the discussions about Solomonoff induction, Everett branches and so on, lol, and I thought I was good at math because I was the best one in high school :-) But, nonetheless, being stupid is sometimes such a liberating feeling! Every time desperation takes hold, caused by the utter stupidity of my fellow human beings, I only have to imagine how unbearable it must be for someone like Yudkowsky to endure the idiocy of most folks (myself included). But maybe the fact that dumb people drive me insane is only a sign of my own arrogance...

To spice this post with more gooey self-disclosure: I was sort of a "mild" socialist for quite some time (yeah, I know. But there are some intelligent folks who were socialists, or sort-of-socialists, like Einstein and Russell). Now I'm more pro-capitalism, libertarian, but some serious doubts remain. Furthermore, my atheistic worldview was shattered by some LSD trips and new-age, mysterious quantum-physics interpretations. I drifted into a spooky pantheistic worldview. The posts on Less Wrong were really useful in helping me overcome this weltanschauung. This story may seem not too harmful, since the distinction between atheism and pantheism is not entirely clear after all, but mystic experiences, caused by psychedelics (or other neurological "happenings"), may well be one of the reasons why some highly intelligent people remain or become religious. Therefore I'm really interested in neuropsychological research on mystic experiences. (I think I share this personal idiosyncrasy with Sam Harris...) And I think many rational atheists (myself included, before I encountered LSD) underestimate the preposterous and life-transforming power of mystic experiences, which can convert the most educated rationalist into a gibbering crackpot. Such an experience makes you think you really "know" that there is some divine and mysterious force at the deepest level of the universe, and the quest for understanding involves reading many, many absurd and completely useless books, an endeavor that may well destroy your whole life. A mystic experience may well be the Absolute Bias, almost impossible to overcome; at least for me it was really hard. But do mystic experiences have some benefits? I think so. Ah, life is soo ambivalent...

Oops, probably already talked way too much. I hope I can contribute some useful stuff in the future and meet some like-minded people...

comment by Swimmer963 · 2011-04-21T13:00:17.855Z · score: 4 (4 votes) · LW · GW

But mystic experiences, caused by psychedelics (or other neurological "happenings"), may well be one of the reasons why some highly intelligent people remain or become religious.

I can personally support this. I've never taken LSD or any other consciousness-altering drug, but I can trigger ecstatic, mystical "religious experiences" fairly easily in other ways; even just singing in a group setting will do it. I sing in an Anglican church choir and this weekend is Easter, so I expect to have quite a number of mystical experiences. At one point I attended a Pentecostal church regularly and was willing to put up with people who didn't believe in evolution because group prayer inevitably triggered my "mystical experience" threshold. (My other emotions are also triggered easily: I laugh out loud when reading alone, cry out loud at sad books and movies, and feel overpowering warm fuzzies when in the presence of small children.)

I have done my share of reading "absurd and useless" books. Usually I found them, well, absurd and useless and pretty boring. I would rather read about the neurological underpinnings of my experience, especially since grokking science's answers can sometimes trigger a near-mystical experience! (Happened several times while reading Richard Dawkins' 'The Selfish Gene'.)

In any case, I would like to hear more about your story, too.

comment by wallowinmaya · 2011-04-21T15:54:38.103Z · score: 2 (2 votes) · LW · GW

I can trigger ecstatic, mystical "religious experiences" fairly easily in other ways; even just singing in a group setting will do it.

Wow, impressive that you've nevertheless managed to become a rationalist! Now I would like to hear how you achieved this feat :-)

I would rather read about the neurological underpinnings of my experience, especially since grokking science's answers can sometimes trigger a near-mystical experience!

I totally agree. Therefore the neuroscience of "altered states of consciousness" is one of my pet subjects...

comment by Swimmer963 · 2011-04-21T16:41:45.148Z · score: 3 (3 votes) · LW · GW

Wow, impressive that you've nevertheless managed to become a rationalist! Now I would like to hear how you achieved this feat :-)

Mainly by having read so much pop science and sci-fi as a kid that by the time the mystical-experience things happened in a religious context (at around 14, when I started singing in the choir and actually being exposed to religious memes) I was already a fairly firm atheist in a family of atheists. Before that, although I remember having vaguely spiritual experiences as a younger kid, they were mostly associated with stuff like looking at beautiful sunsets or swimming. And there's the fact that I'm genuinely interested in topics like physics, so I wasn't going to restrict my reading list to New Age/religious books.

comment by rhollerith_dot_com · 2011-04-21T12:00:25.284Z · score: 1 (1 votes) · LW · GW

For "weltanschauung" (an English word), Wiktionary has, "a person's or a group's conception, philosophy or view of the world; a worldview". Moreover (if you capitalize it) it means the same thing in German.

comment by MrMind · 2011-04-21T07:33:11.852Z · score: 1 (1 votes) · LW · GW

I think your experience deserves a narration in the discussion section.

comment by wallowinmaya · 2011-04-21T10:12:40.510Z · score: 2 (2 votes) · LW · GW

Hm, I don't know. Merely writing about the trip can never be as profound as the experience itself. Read e.g. descriptions of experiences with meditation; they often sound just silly. Furthermore, there are enough trip reports on the internet about experiences with psychedelic drugs, from people who can write better than I can and who have more knowledge than I have. If you are really interested in mystic or psychedelic experiences, you can go to Erowid, which is one of the best sites on the internet for this stuff...

comment by MrMind · 2011-04-21T10:24:16.881Z · score: 1 (1 votes) · LW · GW

I was referring not to the experience of your trip, but to the battle you subsequently fought to overcome the (almost) Absolute Bias...

comment by wallowinmaya · 2011-04-21T11:28:01.112Z · score: 3 (3 votes) · LW · GW

Oh, sorry, I see... Well, overcoming this worldview consisted mainly of reading some of Eliezer's sequences :-) And remember that I wasn't a New Age crackpot. I had only very mild mystic experiences, but these alone led me to question the nature of consciousness, the universe, etc. So for me it was not really difficult, but I imagine that really radical experiences make you "immune" to a naturalistic, atheistic explanation.
I think Yvain had a similar experience with hashish. (This post also convinced me that mystic experiences are only strange realignments of neurological processes.) Well, maybe I will write a post in the future that discusses the risks and benefits of psychedelic drugs and meditation. But first I have to read Eliezer's remaining sequences, which will be time-consuming enough :-)

comment by [deleted] · 2010-04-28T01:16:25.647Z · score: 11 (11 votes) · LW · GW

Hi, I'm Sarah. I'm 21 and going to grad school in math next fall. I'm interested in applied math and analysis, and I'm particularly interested in recent research about the sparse representation of large data sets. I think it will become important outside the professional math community. (I have a blog about that at http://numberblog.wordpress.com/.)

As far as hobbies go, I like music and weightlifting. I read and talk far too much about economics, politics, and philosophy. I have the hairstyle and cultural vocabulary of a 1930's fast-talking dame. (I like the free, fresh wind in my hair, life without care; I'm broke, that's Oke!)

Why am I here? I clicked the link from Overcoming Bias.

In more detail, I'm here because I need to get my life in order. I'm a confused Jew, not a thoroughgoing atheist. I've been a liberal and then a libertarian and now need something more flexible and responsive to reason than either.

Some conversations with a friend, who's a philosopher, have led me to understand that there are some experiences (in particular, experiences he's had related to poverty and death) that nothing in my intellectual toolkit can deal with, and so I've had to reconsider a lot of preconceptions.

I'm here, to be honest, for help. I've had difficulty since childhood believing that I am valuable, partly because in mathematics you always have the example before you of people far better. Let me put it this way: I need to find something to do or believe that doesn't crumble periodically into wishing I were dead, because otherwise I won't have a very productive future. That sounds dismal, but really it's a good problem to have -- I'm pretty fortunate otherwise. Still, I want to solve it. I like this community, I think there's a lot to learn here, and my inclination is always to solve problems by learning.

comment by mattnewport · 2010-04-28T01:27:50.366Z · score: 5 (5 votes) · LW · GW

I'm here, to be honest, for help. I've had difficulty since childhood believing that I am valuable, partly because in mathematics you always have the example before you of people far better.

I don't know if it will help you, but the concept of comparative advantage might help you appreciate how being valuable does not require being better than anyone else at any one thing. I found the concept enlightening, but I'm probably atypical...

comment by [deleted] · 2010-04-28T01:37:33.815Z · score: 1 (1 votes) · LW · GW

I am familiar with it, actually. Never seemed to do much good, but maybe with a little meditation it might. If someone is paying me voluntarily, I must be earning my keep, in a sort of caveat emptor way...

comment by mattnewport · 2010-04-28T02:01:20.980Z · score: 4 (4 votes) · LW · GW

I think gains from trade is one of the most uplifting (true) concepts in all of the social sciences. It is a tragedy that it is not more widely appreciated. Most people see trade as zero sum.

comment by CronoDAS · 2010-04-28T01:36:29.188Z · score: 0 (0 votes) · LW · GW

Welcome, Sarah.

(I sometimes make comments on Overcoming Bias under the name Doug S.)

comment by Qiaochu_Yuan · 2012-11-24T08:45:14.278Z · score: 10 (10 votes) · LW · GW

Hello! I'm a first-year graduate student in pure mathematics at UC Berkeley. I've been reading LW posts for a while but have only recently started reading (and wanting to occasionally add to) the comments. I'm interested in learning how to better achieve my goals, learning how to choose better goals, and "raising the sanity waterline" generally. I have recently offered to volunteer for CFAR and may be an instructor at SPARC 2013.

comment by [deleted] · 2012-11-29T02:30:44.026Z · score: 2 (2 votes) · LW · GW

I've read your blog for a long time now, and I really like it! <3 Welcome to LW!

comment by Qiaochu_Yuan · 2012-11-29T02:49:25.203Z · score: 2 (2 votes) · LW · GW

Thanks! I'm trying to branch out into writing things on the internet that aren't just math. Hopefully it won't come back to bite me in 20 years...

comment by Sarokrae · 2011-09-25T11:24:27.183Z · score: 10 (10 votes) · LW · GW

Greetings, LessWrong!

I'm Saro, currently 19, female and a mathematics undergraduate at the University of Cambridge. I discovered LW by the usual HP:MoR route, though oddly I discovered MoR via reading EY's website, which I found in a Google search about Bayes' once. I'm feeling rather fanatical about MoR at the moment, and am not-so-patiently awaiting chapter 78.

Generally though, I've found myself stuck here a lot because I enjoy arguing, and I like convincing other people to be less wrong. Specifically, before coming across this site, I spent a lot of time reading about ways of making people aware of their own biases when interpreting data, and effective ways of communicating statistics to people in a non-misleading way (I'm a big fan of the work being done by David Spiegelhalter). I'm also quite fond of listening to economics and politics arguments and trying to tear them down, though through this, I've lost any faith in politics as something that has any sensible solutions.

I suspect that I'm pretty bad at overcoming my own biases a lot of the time. In particular, I have a very strong tendency to believe what I'm told (including what I'm being told by this site), I'm particularly easily inspired by pretty slogans and inspirational tones (like those on this site), and I have, and have always had, one of those Escher-painting brains, to the extent that I was raised very atheist but am now not so sure. (At some level, I have the thought that our form of logic should only apply to our plane of existence, whatever that means.) But hey, figuring all that out is what this site's about, right?

comment by [deleted] · 2011-09-25T16:31:08.820Z · score: 5 (5 votes) · LW · GW

Welcome!

I'm particularly easily inspired by pretty slogans and inspirational tones (like those on this site),

I wouldn't necessarily call that a failing in and of itself -- it's important to notice the influence that tone and eloquence and other ineffable aesthetic qualities have on your thinking (lest you find yourself agreeing with the smooth talker over the person with a correct argument), but it's also a big part of appreciating art, or finding beauty in the world around you.

and I have, and have always had, one of those Escher-painting brains, to the extent that I was raised very atheist but am now not so sure.

If it helps, I was raised atheist, only ever adopted organized religion once in response to social pressure (it didn't last, once I was out of that context), find myself a skeptical, materialist atheist sort -- and with my brain wiring (schizotypal, among other things) I still have intense, vivid spiritual experiences on a regular basis. There's no inherent contradiction, if you see the experiences as products-of-brain and that eerie sense that maybe there's something more to it as also a product-of-brain, with antecedents in known brain-bits.

comment by Sarokrae · 2011-09-25T19:06:42.005Z · score: 0 (0 votes) · LW · GW

Thanks for the welcome!

I'm certainly not going to join organised religion any time soon, seeing as I think I'm much better off without them. However, it's proving pretty difficult to argue myself out of a general, self-formed religion because of the hangups I have about our logic only applying to our world. I mean, if there is a supreme being for whom "P and ¬P"...

Fortunately, any beings that use logic above and beyond my own, and care about my well-being, will probably want me to just try my best with my own logic. It's not a belief that gets in the way of life much, so I don't think about it all the time, but it would be interesting to sit down and just poke all of that bit of my thoughts with a rationalist stick at some point.

comment by Oscar_Cunningham · 2011-09-25T18:01:37.085Z · score: 3 (3 votes) · LW · GW

Welcome!

I'm Saro, currently 19, female and a mathematics undergraduate at the University of Cambridge.

Note to self: Organise Cambridge meet-up.

comment by Swimmer963 · 2011-09-25T12:39:23.565Z · score: 3 (3 votes) · LW · GW

Welcome! Sweet, another girl my age!

though oddly I discovered MoR via reading EY's website, which I found in a Google search about Bayes' once.

Kind of similar to how I discovered it. I think I googled EY and found his website after seeing his name in the sl4 mailing list.

comment by tenshiko · 2011-09-25T16:02:20.029Z · score: 0 (0 votes) · LW · GW

My story is similar: I found this stuff through that good old "The Meaning of Life" FAQ from back in 2003, which I think he's officially renounced, kind of like the doornail-dead SL4 wiki. A search brought me back into the website fold years later.

Anyway, seconding Swimmer's happiness at the young female demographic being bolstered a little more with your arrival, Sarokrae! May you gain the maximum amount of utilons from this site.

comment by CaveJohnson · 2011-12-20T15:42:33.464Z · score: 2 (2 votes) · LW · GW

Welcome!

Generally though, I've found myself stuck here a lot because I enjoy arguing, and I like convincing other people to be less wrong. Specifically, before coming across this site, I spent a lot of time reading about ways of making people aware of their own biases when interpreting data, and effective ways of communicating statistics to people in a non-misleading way (I'm a big fan of the work being done by David Spiegelhalter).

Honestly that made me cringe slightly and I wanted to write something about it when I came to the second paragraph:

I suspect that I'm pretty bad at overcoming my own biases a lot of the time. In particular, I have a very strong tendency to believe what I'm told (including what I'm being told by this site), I'm particularly easily inspired by pretty slogans and inspirational tones (like those on this site), and I have, and have always had, one of those Escher-painting brains, to the extent that I was raised very atheist but am now not so sure. (At some level, I have the thought that our form of logic should only apply to our plane of existence, whatever that means.) But hey, figuring all that out is what this site's about, right?

You are bad at overcoming your own biases, since all of us are. We've got pretty decent empirical evidence that knowing about some biases does help you, but not with others. The best practical advice for avoiding being captured by slogans and inspirational tones is to practice playing the devil's advocate.

I'm also quite fond of listening to economics and politics arguments and trying to tear them down, though through this,

Check out LW's sister site Overcoming Bias. Robin Hanson loves to make unorthodox economic arguments about nearly everything. Be warned: his contrarianism and cynicism-with-a-smile are addictive! He also has some interesting people on his blogroll.

I've lost any faith in politics as something that has any sensible solutions.

I'm afraid hanging out here probably will not make it any better. Seek different treatment. :)

comment by JenniferDavies · 2011-08-20T18:32:12.151Z · score: 10 (10 votes) · LW · GW

Hey everyone,

My name is Jennifer Davies. I'm 35 years old and am married with a 3 year old daughter. I live in Kitchener, Ontario, Canada.

Originally a computer programmer, I gave it up after spending a year coding for a bank (around 1997). Motivated by an interest in critical thinking, I earned a BA in Philosophy.

Currently, I'm completing a one year post-grad program to become a Career Development Practitioner. I plan to launch a private practice in 2012 to help people find and live their passions while providing them with the tools to do so.

A friend introduced me to Harry Potter and the Methods of Rationality and Less Wrong. I have never enjoyed a piece of reading more than that fanfic -- I even saved a PDF version to introduce to my daughter once she's able to benefit from it.

My main motivations (that I'm aware of) for becoming a member of this community are to: improve my thinking skills (and better understand/evaluate values and motivations), help clients to think more rationally, better encourage independent, critical thought in my daughter.

Although it can be painful at times (for my ego) to be corrected, I appreciate such corrections and the time put into them.

Any tips for teaching young children rationality? I'm at a loss and wonder if I need to wait until she's older.

comment by beoShaffer · 2011-08-20T19:04:04.546Z · score: 4 (4 votes) · LW · GW

Hi Jennifer. There's been quite a bit written about teaching children rationality. Unfortunately, the relative newness of LW and the low percentage of parents mean it's all somewhat speculative. The following links cover most (but probably not all) of what LW has on the subject.

comment by JenniferDavies · 2011-08-20T19:19:40.261Z · score: 0 (0 votes) · LW · GW

Oops. I should have done a search before mentioning it. Thanks for taking the time to post those links.

comment by Arandur · 2011-07-28T18:35:22.438Z · score: 10 (10 votes) · LW · GW

Hello, Less Wrong.

I suppose I should have come here first, before posting anything else, but I didn't come here through the front door. :3 Rather, I was brought here by way of HP:MOR, as I'm sure many newbies were.

My name is Anthony. I'm 21 years old, married, studying Linguistics, and I'm an unapologetic member of the Church of Jesus Christ of Latter-Day Saints.

Should be fun.

comment by MatthewBaker · 2011-07-28T22:56:30.849Z · score: 1 (1 votes) · LW · GW

Enjoy :)

comment by jsalvatier · 2011-07-28T18:50:50.830Z · score: 0 (0 votes) · LW · GW

Welcome! Nice to have you :)

I don't think anyone comes through the front door.

How did you happen across HP:MOR?

comment by Arandur · 2011-07-28T18:53:38.136Z · score: 1 (1 votes) · LW · GW

Bah! I don't even remember. :3 I haven't the slightest clue. All I know is that it was the first step on my path to enlightenment. How I got to that step ceases to matter; I know my direction.

comment by AlexGreen · 2010-12-11T03:43:10.876Z · score: 10 (10 votes) · LW · GW

Good day. I'm a fifteen-year-old high school student, a junior, and I ended up finding this through the Harry Potter and the Methods of Rationality story, which I thought would be a lot less commonly known. Generally I think I'm not that rational a person: I operate mostly on reaction and violence, and instinctively think of things like 'messages' and such when I have some bad luck; but I've also found some altruistic passion in me, and I've done all of this self-observation, which seems contradictory, but I think that's all a rationalization to make me a better person. I also have some odd moods, which split between talking like this, when usually I can't like this at all.

I'd say something about my age group but I can't think of anything that doesn't sound like hypocrisy, so I think I'll cut this off here.

Aaaugh, just looking at this giant block of text makes me feel like an idiot.

comment by Jack · 2010-12-11T04:08:37.712Z · score: 2 (2 votes) · LW · GW

Aaaugh, just looking at this giant block of text makes me feel like an idiot.

I think pretty much everyone feels this way writing comments that strangers will be reading.

comment by fortyeridania · 2010-12-11T04:01:54.841Z · score: 2 (2 votes) · LW · GW

Don't be so hard on yourself. Or, more precisely: don't be hard on yourself in that way. Bitter self-criticism could lead to helpful reforms and improved habits, but it could also lead to despair and cynicism. If you feel that you need to be criticized, post some thoughts and let other LWers do it.

comment by lsparrish · 2010-12-11T03:48:19.005Z · score: 1 (1 votes) · LW · GW

Good job with the self analysis. Welcome! :)

comment by apophenia · 2010-04-16T21:19:35.658Z · score: 10 (10 votes) · LW · GW

Hello, Less Wrong.

My name is Zachary Vance. I'm an undergraduate student at the University of Cincinnati, double majoring in Mathematics and Computer Science--I like math better. I am interested in games, especially board and card games. One of my favorite games is Go.

I've been reading Less Wrong for 2-3 months now, and I posted once or twice under another name, which I dropped because I couldn't figure out how to change names without changing accounts. I got linked here via Scott Aaronson's blog Shtetl-Optimized after seeing a debate between him and Eliezer. I got annoyed at Eliezer for being rude, forgot about it for a month, and followed the actual link on Scott's site over here. (In case you read this, Eliezer: you both listen to people more than I thought (an update, in Bayesian terms) and write more interesting things than I heard in the debate.) I like paradoxes and puzzles, and am currently trying to understand the counterfactual mugging. I've enjoyed Less Wrong because everybody here seems to read everything and usually carefully think about it before they post, which means not only articles but also comments are simply amazing compared to other sites. It also means I try not to post too much, so Less Wrong remains quality.

I am currently applying to work at the Singularity Institute.

comment by Paul Crowley (ciphergoth) · 2010-04-17T00:09:41.762Z · score: 4 (4 votes) · LW · GW

Hi, welcome to Less Wrong and thanks for posting an introduction!

comment by Rain · 2010-03-21T15:02:25.261Z · score: 10 (10 votes) · LW · GW
  • Persona: Rain
  • Age: 30s
  • Gender: Unrevealed
  • Location: Eastern USA
  • Profession: Application Administrator, US Department of Defense
  • Education: Business, Computers, Philosophy, Scifi, Internet
  • Interests: Gaming, Roleplaying, Computers, Technology, Movies, Books, Thinking
  • Personality: Depressed and Pessimistic
  • General: Here's a list of my news sources

Rationalist origin: I discovered the scientific method in high school and liked the results of its application to previously awkward social situations, so I extended it to life in general. I came up with most of OB's earlier material by myself under different names, or not quite as well articulated, and this community has helped refine my thoughts and fill in gaps.

Found LW: The Firefox add-on StumbleUpon took me to EY's FAQ about the Meaning of Life on 23 October 2005, along with Max More, Nick Bostrom, Alcor, Sentient Developments, the Transhumanism Wikipedia page, and other resources. From there, to further essays, to the sl4 mailing list, to SIAI, to OB, to LW, where I started interacting with the community in earnest in late January 2010 and achieved 1000 karma in early June 2010. Prior to the StumbleUpon treasure trove, I had been turned off the transhumanist movement by a weird interview of Kurzweil in Wired, but was still hopeful due to scifi potentials.

Value and desire to achieve: I'm still working on that. The metaethics sequence was unsatisfactory. In particular, I have problems with our ability to predict the future and what we should value. I'm hoping smarter than human intelligence will have better answers, so I strongly support the Singularity Institute.

comment by RobinZ · 2009-07-08T21:33:25.652Z · score: 10 (10 votes) · LW · GW

Ignoring the more obvious jokes people make in introduction posts: Hi. My name is Robin. I grew up in the Eastern Time Zone of the United States, and have lived in the same place essentially all my life. I was homeschooled by secular parents - one didn't discuss religion and the other was agnostic - with my primary hobby being the reading of (mostly) speculative fiction of (mostly) quite high quality. (Again, my parents' fault - when I began searching out on my own, I was rather less selective.) The other major activity of my childhood was participation in the Boy Scouts of America.

I entered community college at the age of fifteen with an excellent grounding in mathematics, a decent grounding in physics, superb fluency with the English language (both written and spoken), and superficial knowledge of most everything else. After earning straight As for three years, I applied to four-year universities, and my home state university offered me a full ride. At present, I am a graduate student in mechanical engineering at the same institution.

In the meantime, I have developed an affection for weblogs, web comics, and online chess, much to the detriment of my sleep schedule and work ethic. I suspect I discovered Overcoming Bias through "My Favorite Liar" like everyone else, but Eliezer Yudkowsky's sequences (and, to a lesser extent, Robin Hanson's essays) were what drew me in. I lost interest around when EY jumped to lesswrong.com, but was drawn back in when I opened up the bookmark again in the past day or so, particularly thanks to a few of Yvain's contributions.

Being all of twenty-four and with less worldly experience than the average haddock, I imagine I shan't contribute much to the conversation, but I'll give it my best shot.

(P.S. I am not registered for cryonics and I'm skeptical about the ultimate potential of AI. I'm a modern-American-style liberal registered as a Republican for reasons which seemed good at the time. Also, I am - as is obvious in person but not online - both male and black.)

comment by Alicorn · 2009-07-08T21:45:27.334Z · score: 4 (6 votes) · LW · GW

Being all of twenty-four and with less worldly experience than the average haddock

What gave you the idea that anyone cares about age and experience around here? ;)

comment by RobinZ · 2009-07-09T02:11:31.044Z · score: 2 (2 votes) · LW · GW

Oh, I'm sure someone does, but the real reason I mentioned it is because I usually don't have a lot more to say about a subject than "that sounds reasonable to me". (:

comment by Vladimir_Nesov · 2009-07-09T10:11:10.030Z · score: -1 (3 votes) · LW · GW

So, that was a rationalization above the bottom line of observation that you choose to not say much?

comment by RobinZ · 2009-07-09T11:29:20.413Z · score: 1 (1 votes) · LW · GW

No - I choose to talk a lot, in fact. That's just the reason I expect most of it to be inane. :D

comment by thomblake · 2009-07-08T22:23:32.254Z · score: 1 (1 votes) · LW · GW

Welcome! As Alicorn pointed out, age and experience don't count for much here compared to rationality and good ol' fashioned book-learnin'. If it helps any, you even have more education than a lot of the folks about (though we have a minor infestation of doctors).

comment by RobinZ · 2009-07-09T02:14:04.020Z · score: 2 (2 votes) · LW · GW

Well, like I said, I'll give it my best!

(Doctors, eh? Y'know, I have this rash on my lower back... ^_^)

comment by ThoughtDancer · 2009-04-16T17:59:46.783Z · score: 10 (12 votes) · LW · GW
  • Handle: thoughtdancer
  • Name: Deb
  • Location: Middle of nowhere, Michigan
  • Age: 44
  • Gender: Female
  • Education: PhD Rhetoric
  • Occupation: Writer-wannabe, adjunct Prof (formerly tenure-track, didn't like it)
  • Blog: thoughtdances (just starting, be gentle please)

I'm here because of SoullessAutomaton, who is my apartment-mate and long term friend. I am interested in discussing rhetoric and rationality. I have a few questions that I would pose to the group to open up the topic.

1) Are people interested in rhetoric, persuasion, and the systematic study thereof? Does anyone want a primer? (My PhD is in the History and Theory of Rhetoric, so I could develop such a primer.)

2) What would a rationalist rhetoric look like?

3) What would be the goals / theory / overarching observations that would be the drivers behind a rationalist rhetoric?

4) Would a rationalist rhetoric be more ethical than current rhetorics, and if so, why?

5) Can rhetoric ever be fully rational and rationalized, or is the study of how people are persuaded inevitably or inherently a-rational or anti-rational (I would say that rhetoric can be rationalized, but I know too many scholars who would disagree with me here, either explicitly or implicitly)?

6) Question to the group: to what degree might unfamiliar terminology derived from prior discussions here and in the sister-blog be functioning as an unintentional gatekeeper? Corollary question: to what degree is the common knowledge of math and sciences--and the relevant jargon terms thereof--functioning as a gatekeeper? (As an older woman, I was forbidden from pursuing my best skill--math--because women "didn't study math". I am finding that I have to dig pretty deeply into Wikipedia and elsewhere to make sure I'm following the conversation--that or I have to pester SoullessAutomaton with questions that I should not have to ask. sigh)

comment by MBlume · 2009-04-16T21:24:01.644Z · score: 5 (5 votes) · LW · GW

I rather like Eliezer's description of ethical writing given in rule six here. I'm honestly not sure why he doesn't seem to link it anymore.

Ethical writing is not "persuading the audience". Ethical writing is not "persuading the audience of things I myself believe to be true". Ethical writing is not even "persuading the audience of things I believe to be true through arguments I believe to be true". Ethical writing is persuading the audience of things you believe to be true, through arguments that you yourself take into account as evidence. It's not good enough for the audience unless it's good enough for you.

comment by Bongo · 2009-04-17T11:55:05.690Z · score: 1 (1 votes) · LW · GW

That's what I was going to reply with. To begin with, a rationalist style of rhetoric should force you to write/speak like that, or make it easy for the audience to tell whether or not you do.

(Rationalist rhetoric can mean at least three things: ways of communicating you adopt in order to deliver your message as rationally and honestly as possible, not in order to persuade; techniques that persuade rationalists particularly well; or new forms of dark arts discovered by rationalists.)

(We should distinguish between forms of rhetoric that optimize for persuasion and those that optimize for truth. Eliezer's proposed "ethical writing" seems to optimize for truth. That is, if everyone wrote like that, we would find out more truths, and lying -- or even persuading people of untruths -- would be harder. Though it's also awfully persuasive... On the other hand, political rhetoric probably optimizes for persuasion, insofar as it involves knowingly persuading people of lies and bad policies.)

comment by mattnewport · 2009-04-16T20:33:16.163Z · score: 1 (1 votes) · LW · GW

1) Yes, I'm interested.

2) I suspect that the study of rhetoric is already fairly rationalist, in the sense of rationality being about winning. Rhetoric seems to be the disciplined/rational study of how to deliver persuasive arguments. I suspect many aspiring rationalists attempt to inoculate themselves against the techniques of rhetoric because they desire to believe what is true rather than what is most convincingly argued. A rationalist rhetoric might then be a rhetoric which does not trigger the rationalist cognitive immune system and thus is more effective at persuading rationalists.

3) From my point of view the only goal is success - winning the argument. Everything else is an empirical question.

4) Not necessarily. Since rationalists attempt to protect themselves against well-sounding but false arguments, rationalist rhetoric might focus more on avoiding misleading or logically flawed arguments but only as a means to an end. The goal is still to win the argument, not to be more ethical. To the extent that signaling a desire to be ethical helps win the argument, a rationalist rhetoric might do well to actually pre-commit to being ethical if it could do so believably.

5) I think the study of rhetoric can absolutely be rational - it is after all about winning. The rational study of how people are irrational is not itself irrational.

6) My feeling is that the answer is 'to a significant degree' but it's a bit of an open question.

comment by [deleted] · 2011-09-25T16:22:38.813Z · score: 9 (9 votes) · LW · GW

Hey everyone.

I'm Jandila (not my birth, legal, or even everyday name), a 28-year-old transgendered woman living in Minnesota. I've been following EY's writings off and on since many years ago on the sl4 mailing list, mostly on the topic of AI; initially I got interested in cognitive architecture and FAI due to a sci-fi novel I've been working on forever. I discovered LW a few years ago but only recently started posting; somehow I missed this thread until just recently.

I've been interested in bias and how people think, and in modifying my own instrumental ability to understand and work around it, for many years. I'm on the autistic spectrum and have many clusters of neurological weirdness; I think this provided an early incentive to understand "how people think" so I could signal-match better.

So far I've stuck around because I like LW's core mission and what it stands for in abstract; I also feel that the community here is a bit too homogenous in terms of demographics for a community with such an ostensibly far-reaching, global goal, and thus want to see the perspective base broadened (and am encouraged by the recent influx of female members).

comment by Oscar_Cunningham · 2011-09-25T18:01:08.946Z · score: 0 (0 votes) · LW · GW

Welcome!

comment by KND · 2011-07-02T00:41:42.660Z · score: 9 (9 votes) · LW · GW

Hello fellow Less Wrongians,

My name is Josh and I'm a 16-year-old junior in high school. I live in a Jewish family at the Jersey Shore. I found the site by way of TV Tropes after a friend told me about the Methods of Rationality. Before I started reading Eliezer's posts, I made the mistake of believing I was smart. My goal here is mainly to just be the best that I can be and maybe learn to lead a better life. And by that I mean that I want to be better than everyone else I meet. That includes being a more rational person better able to understand complex issues. I think I have a fair grip on the basic points of rationality as well as philosophy, but I am sorely lacking in terms of math and science (which can't be MY fault obviously, so I'll just go ahead and blame the public school system). I never knew what exactly a logarithm WAS before a few days ago, sadly enough (I knew the term of course, but was never taught what it meant or bothered enough to look it up). I have absolutely no idea what I want to do with my life other than amassing knowledge of whatever I find to be interesting.

I was raised in a conservative household, believing in God but still trying to look at the world rationally. My father never tried to defend the beliefs he taught me with anything but logic. I suppose I'm technically atheist, but I prefer to consider myself agnostic. Believe it or not, I actually became a rationalist after my dad got me to read Atlas Shrugged. While I wasn't taken in very much by the appeal to my sense of superiority, however correct it may be, I did take special notice of a particular statement in which Rand maintains that man is a reasoning animal and that the only evil thought is to not think, as to do so is to reject the only tool that mankind has used to survive and instead embrace death. This and her rejection of emotion as a substitute for rationality impressed me more than anything I had read up to that point. I soon became familiar with Aristotle and from then on studied both philosophy and rationality. Of course I hadn't really seen anything before I started reading Eliezer's writing!

Overall, I'm just happy to be here and have enjoyed everything I have seen of the site so far. I'm still young and relatively ignorant of many of the topics discussed here, but if you will just bear with me, as I know you will, I might, in time, actually learn to add something to the site. Thanks for reading my story; I look forward to devoting many more hours to the site!

comment by [deleted] · 2011-12-20T16:35:18.426Z · score: 2 (2 votes) · LW · GW

Great to have you here Josh!

I'm still young and relatively ignorant of many of the topics discussed here, but if you will just bear with me, as I know you will, I might, in time, actually learn to add something to the site.

Most of all, as you read and participate in the community, don't be afraid to question common beliefs here; that's where the contribution is likely to be, I think. Also, if you plan on going through one or more of the sequences systematically, consider finding a chavruta.

I think I have a fair grip on the basic points of rationality as well as philosophy, but I am sorely lacking in terms of math and science (which can't be MY fault obviously, so I'll just go ahead and blame the public school system)

To quote myself:

As for relevant math, or studying math in general: just ask in the open threads! LWers are helpful when it comes to these things. You even have people offering dedicated math tutoring, like Patrick Robotham or, as of recently, me.

Also, the Khan Academy videos and exercises are a great resource for basic math.

comment by lincolnquirk · 2011-04-05T20:29:45.162Z · score: 9 (9 votes) · LW · GW

Hi, I'm Lincoln. I am 25; I live and work in Cambridge, MA. I currently build video games, but I'm going to start a Ph.D. program in Computer Science at the local university in the fall.

I have identified rationality as a thing to be achieved ever since I knew there was a term for it. One of the minor goals I've had since I was about 15 was devising a system of morality which fit with my own intuitions but was consistent under reflection (though not in so many words). The two thought experiments I focused on were abortion and voting. I didn't come up with an answer, but I knew that such a morality was a thing I wanted -- consistency was important to me.

I ran across Eliezer's work 907 days ago reading a Hacker News post about the AI-box experiment, and various other Overcoming Bias posts that were submitted over the years. I didn't immediately follow through on that stuff.

But I became aware of SIAI about 10 months ago, when rms on Hacker News linked an interesting post about the Visiting Fellows program at SIAI.

I think I had a "click" moment: I immediately saw that AI was both an existential risk and major opportunity, and I wanted to work on these things to save the world. I followed links and ended up at LW; I didn't immediately understand the connection between AI and rationality, but they both looked interesting and useful, so I bookmarked LW.

I immediately sent in an application to the Visiting Fellows program, thinking "hey, I should figure out how to do this" -- I think it was Jasen who responded and asked me by email to summarize the purpose of SIAI and how I thought I could contribute. I wrote the purpose summary, but got stuck on how to contribute. I had barely read any of the Sequences at that time and had no idea how I could be useful. For those reasons (as well as a healthy dose of akrasia), I gave up on my application at that time.

Somewhere in there I found HP:MoR (perhaps via TVTropes?), saw the author was "Less Wrong" and made the connection.

Since then, I have been inhaling the Sequences; in the last month I've been checking the front page almost daily. I applied to the Rationality Boot Camp.

I'm very far from being a rationalist -- I can see that my rationality skills are really quite poor, but I at least identify as a student of rationality.

comment by Kevin · 2011-06-06T05:22:28.997Z · score: 2 (2 votes) · LW · GW

That's me, welcome to Less Wrong! Glad to form some part of your personal causal history.

comment by lincolnquirk · 2011-06-06T19:45:36.859Z · score: 0 (0 votes) · LW · GW

Update: I got into Rationality Boot Camp, which is starting tomorrow. Thanks for posting that on HN! I wouldn't (probably) be here otherwise.

comment by Alexei · 2011-06-11T02:14:11.628Z · score: 0 (0 votes) · LW · GW

Hey, I'm kind of in a similar situation to you. I've worked on making games (as a programmer) for several years, and currently I'm working on a game of my own, in which I incorporate certain ideas from Less Wrong. I've been wondering lately if I could contribute more if I did FAI-related research. What convinced you to switch to it? How much do you think you'll contribute? How talented are you, and how much of a deciding factor was that?

comment by GDC3 · 2010-12-29T09:22:37.533Z · score: 9 (9 votes) · LW · GW

Hi, I'm GDC3. Those are my initials. I'm a little nervous about giving my full name on the internet, especially because my dad is googleable and I'm named after him. (Actually, we're both named after my grandfather, hence the 3.) But I go by G.D. in real life anyway, so it's not exactly not my name. I'm primarily working on learning math in advance of returning to college.

Sorry if this is TMI, but you asked: I became an aspiring rationalist because I was molested as a kid and I knew that something was wrong, but not what it was or how to stop it, and I figured that if I didn't learn how the world really worked, instead of just what people told me, stuff like that might keep happening to me. So I guess my something to protect was me.

My something to protect is still mostly me, because most of my life is still spent dealing with the consequences of that. My limbic system learned all sorts of distorted and crazy things about how the world works that my neocortex has to spend all of its time trying to compensate for. Trying to be a functional human being is a hard enough goal for now. I also value and care about eventually using this information to help other people who've had similar stuff happen to them. I value this primarily because I've pre-committed to valuing that so that the narrative would motivate me emotionally when I hate myself too much to motivate myself selfishly.

So I guess I self-modified my utility function. I actually was pretty willing to hurt other people to protect myself as a kid. I've made myself more altruistic not to feel less guilty (which would mean that I wasn't really as selfish as I thought I was), but to feel less alone. Which is plausible I guess, because I wasn't exactly a standard moral specimen as a kid.

I hope that was more interesting than upsetting. I think I can learn a lot from you guys if I can speak freely. I hope that I can contribute or at least constitute good outreach.

comment by TheOtherDave · 2010-12-29T14:28:23.212Z · score: 2 (2 votes) · LW · GW

I value this primarily because I've pre-committed to valuing that so that the narrative would motivate me emotionally when I hate myself too much to motivate myself selfishly.

I think that's the most succinct formulation of this pattern I've ever run into. Nicely thought, and nicely expressed.

(I found the rest of your comment interesting as well, but that really jumped out at me.)

Welcome!

comment by [deleted] · 2010-08-11T22:05:45.778Z · score: 9 (9 votes) · LW · GW

[Hi everyone!]

comment by Oligopsony · 2010-08-03T20:15:40.550Z · score: 9 (9 votes) · LW · GW

I've existed for about 24 years, and currently live in Boston.

I regard many of the beliefs popular here - cryonics, libertarianism, human biodiversity, pickup artistry - with extreme skepticism. (As if in compensation, I have my own unpopular frameworks for understanding the world.) I find the zeitgeist here to be interestingly wrong, though, because almost everyone comes from a basically sane starting point - a material universe, conventionally "Western" standards of science, reason, and objectivity - and actively discusses how they can regulate their beliefs to adhere to these. I have an interest in achieving just this kind of regulation (am a "rationalist",) and am aware that it's epistemically healthy to expose myself to alternative points of view expressed in a non-crazy way. So hopefully the second aspect will reinforce the first.

As for why I'm a rationalist: I don't know, and the question doesn't seem particularly interesting to me. I regard it as beyond questions of justification, like other desires.

comment by Blueberry · 2010-08-03T20:31:38.653Z · score: 5 (5 votes) · LW · GW

Welcome to Less Wrong!

I regard many of the beliefs popular here - cryonics, libertarianism, human biodiversity, pickup artistry - with extreme skepticism. (As if in compensation, I have my own unpopular frameworks for understanding the world.)

I'd love to hear more about this: I also like exposing myself to alternative points of view expressed in a non-crazy way, and I'm interested in your unpopular frameworks.

Specifically: cryonics is highly speculative, but do you think there's a small chance it might work? When you say you don't believe in human biodiversity, what does that mean? And when you say you don't believe in pickup artistry, you don't think that dating and relationships skills exist?

comment by Oligopsony · 2010-08-03T22:41:05.627Z · score: 3 (5 votes) · LW · GW

Thanks for the friendly welcome!

"I'd love to hear more about this: I also like exposing myself to alternative points of view expressed in a non-crazy way, and I'm interested in your unpopular frameworks."

Specifically, I've become increasingly interested in Marxism, especially the varieties of Anglo post-Marxism that emerged from the analytical tradition. I don't imagine this is any more popular here than it is among normal people, but the general mode of analysis is probably less foreign to libertarian types than they might assume - as implied above, we're both working from materialist assumptions (beyond what's implied above, this applies to more than one meaning of "materialist," at least for certain types of libertarians.)

In general, my bias is to assume that people's behavior is more rational (I mean this in a utility-maximizing sense, rather than in the "rationalist" sense) than it initially appears. In general, the more we know about the context of a decision, the more rational it usually appears to be; and there may be something beyond vanity for the tendency of people, who are in greatest possession of their own situations, to consider themselves atypically rational. I see this materialist (in the "latter," economic sense) viewpoint as avoiding unnecessary multiplication of entities and (not that it should matter for truth) a basically respectful way of facially analyzing people: "MAYBE they're just crazy, but until we have more contextual knowledge, let's take as a working assumption that this is in their self-interest." This is my general verbal justification for reflexively turning to materialist explanations, although the CAUSE of my doing so is probably just that I studied neoclassical economics for four years.

"Specifically: cryonics is highly speculative, but do you think there's a small chance it might work?"

Of course. The transparent wish-fulfillment seems inherently suspect, like the immortality claims of religions, but that doesn't mean it couldn't be the case; and enthusiasm for cryonics doesn't seem more harmful than other hobbies. So I wish everyone involved the best of luck.

Of course I can't tell how much I'm generalizing from my own lack of enthusiasm. I don't put a positive value on additional years of my life - I experience some suicidal ideation but don't act on it because I know it would make people I care about incredibly upset. (This doesn't mean that I subjectively find my life to be torturous, or that it's hard not to act on the ideation; I think my life overall averages out to a level of slight annoyance - one can say "cet par, I'd rather not have experienced that span of annoyance" but one can also easily endure such a span if not doing so would cause tremendous outrage in others.)

"When you say you don't believe in human biodiversity, what does that mean?"

I mean I don't believe in what the sort of people who say "human biodiversity" refer to when they use that phrase: namely, that non-cosmetic, non-immunity genetic differences between ethnic groups are great enough to be of social importance. (Or to use the sort of moralizing, PC language I'd use in most any social context other than here: I am not a consciously-identified racist, though like anyone I have unconscious racial prejudices.) As above, politico-moral reasons wouldn't inhabit my verbal justification for this, although they're probably the efficient cause of my belief.

It's probably inevitable that racism will be unusually popular among a community devoted to Exploring Brave Edgy Truths No Matter the Cost, but I'm not afraid that actually XBETNMtC will lead me to racism - both because I consider that very unlikely, and because if reason does lead me to racism, then it is proper to be a racist. (This is true of beliefs generally, of course.)

"And when you say you don't believe in pickup artistry, you don't think that dating and relationships skills exist?"

Dating and relationship skills exist, but it seems transparent that the meat of PUA is just a magic feather to make dorky young men more confident. (Though one should not dismiss the utility of magic feathers!) I find the "seduction community" repulsively misogynistic, but that's a separate issue. (Verbal justifications, efficient causes, you know the drill.)

Being easily confident with strangers is by far the most important skill for acquiring a large number of sexual partners - this is of course a truth proclaimed by PUA, one which has been widespread knowledge since the dawn of time - and for the same reason that easy confidence with strangers is the most important skill for politicians, sales professionals, &c. I do think it's here, for game-theoretic reasons, that the idea of "general social skills" can break down: easy confidence with strangers sabotages your ability to send certain social signals that are important to maintaining close relationships. So there are tradeoffs to make, and I think generally speaking people make the tradeoffs that reflect their preferences.

comment by Blueberry · 2010-08-03T23:17:51.901Z · score: 2 (6 votes) · LW · GW

I typically think of Marxists as people who don't understand economics or human nature and subscribe to the labor theory of value. But you've studied economics, so I'm curious exactly what form of Marxism you subscribe to.

I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap. When you said "human biodiversity", I thought you were referring to psychological differences among humans and the idea that we don't all think the same way.

There are different views on PUA, but in my experience the "meat of PUA" is just conversational practice and learning flirtation and comfort. It's like the magic feather in that believing in your own ability helps, but I don't see it as fake at all.

I do think it's here, for game-theoretic reasons, that the idea of "general social skills" can break down: easy confidence with strangers sabotages your ability to send certain social signals that are important to maintaining close relationships.

Please elaborate on this. It sounds interesting but I'm not sure what you mean.

comment by NancyLebovitz · 2010-08-04T13:24:47.406Z · score: 3 (3 votes) · LW · GW

I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap. When you said "human biodiversity", I thought you were referring to psychological differences among humans and the idea that we don't all think the same way.

My impression was that it is popular here, but I may be overgeneralizing from a few examples or other contexts.

The fact that no one else is saying it's popular suggests but doesn't prove that I'm mistaken.

IIRC, the last time the subject came up, the racial differences in IQ proponent was swatted down, but it was for not having sound arguments to support his views, not for being wrong.

More exactly, there were a few people who disagreed with the race/IQ connection at some length, but the hard swats were because of the lack of good arguments.

comment by Risto_Saarelma · 2010-08-04T14:50:51.039Z · score: 2 (2 votes) · LW · GW

I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap. When you said "human biodiversity", I thought you were referring to psychological differences among humans and the idea that we don't all think the same way.

The psychological diversity article you link to is about Gregory Cochran's and Henry Harpending's book, which is all about the thesis of human evolution within the last ten thousand years affecting the societies of different human populations in various ways. It includes a chapter about Ashkenazi Jews seeming to have a higher IQ than their surrounding populations due to genetics. So I'm not really sure what the difference you are going for here is.

comment by Blueberry · 2010-08-04T15:16:58.105Z · score: 0 (0 votes) · LW · GW

That the evidence suggests there may be a genetic explanation for the higher IQ of Ashkenazim but not for the racial IQ gap.

comment by [deleted] · 2011-12-20T17:27:19.708Z · score: 3 (3 votes) · LW · GW

I'm afraid you may be a bit confused on this. What are the odds that out of all ethnicities on the planet, only Ashkenazi Jews were the ones to develop a different IQ than the surrounding peoples? And only in the past thousand years or so. What about all those groups that have been isolated or differentiated in very different natural and even social environments for tens of thousands of years?

Unless you are using "the racial gap" to refer to the specific measured IQ differences between people of African, European and East Asian descent, which may indeed be caused by the environment, rather than the possibility of differences between human "races" in general. But even in that case the existence of ethnic genetic IQ differences should increase the probability of a genetic explanation somewhat.

comment by 715497741532 · 2010-08-04T14:09:38.921Z · score: 2 (8 votes) · LW · GW

Participant here from the beginning and from OB before that, posting under a throwaway account. And this will probably be my only comment on the race-IQ issue here.

I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap [emphasis mine].

The vast majority of writers here have not given their opinion on the topic. Many people here write under their real name or under a name that can be matched to their real name by spending a half hour with Google. In the U.S. (the only society I really know) this is not the kind of opinion you can put under your real name without significant risk of losing your job or losing out to the competition in a job application, dating situation or such.

Second, one of the main reasons Less Wrong was set up is as a recruiting tool for SIAI. (The other is to increase the rationality of the general population.) Most of the people here with a good reputation are either affiliated with SIAI or would like to keep open the option of starting an affiliation some day. (I certainly do.) Since SIAI's selection process includes looking at the applicant's posting history here, even writers whose user names cannot be correlated with the name they would put on a job application will tend to avoid taking the unpopular-with-SIAI side in the race-IQ debate.

So, want to start a debate that will leave your side with complete control of the battlefield? Post about the race-IQ issue on Less Wrong rather than one of the web sites set up to discuss the topic!

comment by gensym · 2010-08-05T00:00:06.276Z · score: 3 (3 votes) · LW · GW

Since SIAI's selection process includes looking at the applicant's posting history here, even writers whose user names cannot be correlated with the name they would put on a job application will tend to avoid taking the unpopular-with-SIAI side in the race-IQ debate.

What makes you think "the unpopular-with-SIAI side" exists? Or that it is what you think it is?

comment by Unknowns · 2010-08-04T16:54:06.395Z · score: 2 (2 votes) · LW · GW

Downvoted for not even giving your opinion on the issue even with your throwaway account.

Some have pointed out that cultural and environmental explanations can account for significant IQ differences. This is true.

It doesn't follow that there aren't racial difference based on genetics as well. In fact, the idea that there might NOT be is quite absurd. Of course there are. The only question is how large they are.

comment by Oligopsony · 2010-08-05T00:15:31.357Z · score: 4 (8 votes) · LW · GW

"It doesn't follow that there aren't racial difference based on genetics as well. In fact, the idea that there might NOT be is quite absurd. Of course there are. The only question is how large they are."

And what direction they're in. If social factors are sufficient to explain (e.g.) the black-white IQ gap, and the argument for there being some innate differences is "well, it's exceedingly unlikely that they're precisely the same," we don't have reason to rate "whites are natively more intelligent than blacks" as more likely than "blacks are natively more intelligent than whites." (If we know that Smith is wealthier than Jones, and that Smith found a load of Spanish doubloons by chance last year, we can't make useful conclusions about whose job was more remunerative before Smith found her pirate booty.) Of course, native racial differences might also be such that there are environmental conditions under which blacks are smarter than whites and others in which the reverse applies, or whatever.

In any event I don't think we need to hypothesize the existence of such entities (substantial racial differences) to explain reality, so the razor applies.

comment by Unknowns · 2010-08-05T01:13:40.202Z · score: 3 (3 votes) · LW · GW

Even if cultural factors are sufficient, in themselves, to explain the black-white IQ difference, it remains more probable that whites tend to have a higher IQ by reason of genetic factors, and East Asians even more so.

This should be obvious: a person's total IQ is going to be the sum of the effects of cultural factors plus genetic factors. But "the sum is higher for whites" is more likely given the hypothesis "whites have more of an IQ contribution from genetic factors" than given the hypothesis "blacks have more of an IQ contribution from genetic factors". Therefore, if our priors for the two were equal, which presumably they are, then after updating on the evidence, it is more likely that whites have more of a contribution to IQ from genetic factors.
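The direction of this update can be checked with a toy Monte Carlo simulation. The numbers below are arbitrary and purely illustrative, and it assumes the simple additive model (total = genetic + cultural) from the comment above:

```python
import random

# Toy additive model: each group's average score = genetic + cultural component.
# Prior is symmetric: before observing anything, neither group is likelier
# to have the larger genetic component.
random.seed(0)

conditioned = 0  # trials where group 1's observed sum is higher
favors_g1 = 0    # ...of those, trials where group 1's genetic component is higher

for _ in range(100_000):
    g1, g2 = random.gauss(100, 5), random.gauss(100, 5)  # genetic components
    c1, c2 = random.gauss(0, 5), random.gauss(0, 5)      # cultural components
    if g1 + c1 > g2 + c2:  # condition on the observation: group 1's sum is higher
        conditioned += 1
        if g1 > g2:
            favors_g1 += 1

ratio = favors_g1 / conditioned
print(ratio)  # well above 0.5: conditioning on the higher sum shifts
              # probability toward the higher-genetic-component hypothesis
```

With equal variances on both components the conditional probability works out to about 0.75, which is the point of the argument: a symmetric prior plus the observed difference in sums yields an asymmetric posterior.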

comment by Oligopsony · 2010-08-05T01:32:30.368Z · score: 0 (2 votes) · LW · GW

I'm not sure that this is the case, given that the confound has a known direction and unknown magnitude.

Back to Smith, Jones, and Spanish treasure: let's assume that we have an uncontroversial measure of their wealth differences just after Smith sold the treasure. (Let's say $50,000.) We have a detailed description of the treasure Smith found, but very little market data on which to base an estimate of what she sold it for. It seems that ceteris paribus, if our uninformed estimation of the treasure is >$50,000, Jones is likelier to have a higher non-pirate gold income, and if our uninformed estimation of the treasure is <$50,000, Smith is likelier to.

comment by Unknowns · 2010-08-05T03:04:35.485Z · score: 2 (2 votes) · LW · GW

Whites and blacks both have a cultural contribution to IQ. So to make your example work, we have to say that Smith and Jones both found treasure, but in unequal amounts. Let's say that our estimate is that Smith found treasure approximately worth $50,000, and Jones found treasure approximately worth $10,000. If the difference in their wealth is exactly $50,000, then most likely Smith was richer in the first place, by approximately $10,000.

In order to say that Jones was most likely richer, the difference in their wealth would have to be under $40,000, or the difference between our estimates of the treasures found by Smith and Jones.
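Restating the arithmetic of the two paragraphs above, with the example's hypothetical dollar figures:

```python
# Hypothetical numbers from the Smith/Jones example.
wealth_gap = 50_000      # Smith's wealth minus Jones's, after both treasure finds
smith_treasure = 50_000  # estimated value of Smith's find
jones_treasure = 10_000  # estimated value of Jones's find

# Subtract the treasure contributions to estimate the gap beforehand.
prior_gap = wealth_gap - (smith_treasure - jones_treasure)
print(prior_gap)  # 10000: Smith was most likely richer beforehand, by ~$10,000

# Jones was likelier richer beforehand only if the observed gap were smaller
# than the difference between the two treasure estimates ($40,000 here):
print(wealth_gap < smith_treasure - jones_treasure)  # False
```

The sign of `prior_gap` tracks who was likelier richer before the finds; it flips exactly when the observed gap drops below the $40,000 threshold named above.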

I agree with this reasoning, although it does not contradict my general reasoning: it is much like the fact that if you find evidence that someone was murdered (as opposed to dying an accidental death), this will increase the chances that Smith is a murderer, but then if you find very specific evidence, the chance that Smith is a murderer may go down below what it was originally.

However, notice that in order to end up saying that blacks and whites are equally likely to have a greater genetic component to their intelligence, you must say that your estimate of the average demographic difference is EXACTLY equal to the difference between your estimates of the cultural components of their average IQs. And if you say this, I will say that you wrote it on the bottom line, before you estimated the cultural components.

And if you don't say this, you have to assert one or the other: it is more likely that whites have a greater genetic component, or it is more likely that blacks do. It is not equally likely.

comment by wedrifid · 2010-08-05T03:16:00.985Z · score: 2 (4 votes) · LW · GW

And if you don't say this, you have to assert one or the other: it is more likely that whites have a greater genetic component, or it is more likely that blacks do. It is not equally likely.

Often when people say "equally likely" they mean "I don't know enough to credibly estimate which one is greater, the probability distributions just overlap too much." (Yes, the 'bottom line' idea is more relevant here. It's a political minefield.)

comment by Unknowns · 2010-08-05T03:21:54.413Z · score: 1 (1 votes) · LW · GW

But that's the point of my general argument: if you know that whites average a higher IQ score, but not necessarily by how much (say because you haven't investigated), and you also know that there is a cultural component for both whites and blacks, but you don't know how much it is for each, then you should simply say that it is more likely (but not certain) that whites have a higher genetic component.

comment by wedrifid · 2010-08-05T06:08:51.901Z · score: 2 (2 votes) · LW · GW

I agree.

comment by Oligopsony · 2010-08-05T04:01:56.220Z · score: 0 (2 votes) · LW · GW

I mean "equally likely" in wedrifid's sense: not that, having done a proper Bayesian analysis on all evidence, I may set the probability of p(W>B)=p(B>W)=.5 (assuming intelligence works in such a way that this implied division into genetic and environmental components makes sense), but that 1) I don't know enough about Spanish gold to make an informed judgement and 2) my rough estimate is that "I could see it going either way" - something inherent in saying that environmental differences are "sufficient to explain" extant differences. So actually forming beliefs about these relative levels is both insufficiently grounded and unnecessary.

I suppose if I had to write some median expectation it's that they're equal in the sense that we would regard any other two things in the phenomenal world of everyday experience equal - when you see two jars of peanut butter of the same brand and size next to each other on a shelf in the supermarket, it's vanishingly unlikely that they have exaaaactly the same amount of peanut butter, but it's close enough to use the word.

I don't think this is really a case of writing things down on the bottom line. What reason would there be to suppose ex ante that these arbitrarily constructed groups differ to some more-than-jars-of-peanut-butter degree? Is there some selective pressure for intelligence that exists above the Sahara but not below it (more obvious than counter-just-so-stories we could construct?) Cet par I expect a population of chimpanzees or orangutans in one region to be peanut butter equal in intelligence to those in another region, and we have lower intraspecific SNP variation than other apes.

comment by Unknowns · 2010-08-05T06:47:20.672Z · score: 1 (1 votes) · LW · GW

"I could see it going either way" is consistent with having a best estimate that goes one way rather than another.

Just as you have the Flynn effect with intelligence, so average height has also been increasing. Would you say the same thing about height, that the average height of white people and black people has no significant genetic difference, but it is basically all cultural? If not, what is the difference?

In any case, both height and intelligence are subject to sexual selection, not merely ordinary natural selection. And where you have sexual selection, one would indeed expect to find substantial differences between diverse populations: for example, it would not be at all surprising to find significantly different peacock tails among peacock populations that were separated for thousands of years. You will find these significant differences because there are so many other factors affecting sexual preference; to the degree that you have a sexual preference for smarter people, you are neglecting taller people (unless these are 100% correlated, which they are not), and to the degree that you have a sexual preference for taller people, you are neglecting smarter people. So one just-so-story would be that black people preferred taller people more (note the basketball players) and so preferred more intelligent people less. This just-so-story would be supported even more by the fact that the Japanese are even shorter, and still more intelligent.

Granted, that remains a just-so-story. But yes, I would expect "ex ante" to find significant genetic differences between races in intelligence, along with other factors like height.

comment by 715497741532 · 2010-08-04T19:04:05.755Z · score: 3 (3 votes) · LW · GW

The reason I did not even give my opinion on the race-IQ issue is that IMHO the expected damage to the quality of the conversation here exceeds the expected benefit.

It is possible for a writer to share the evidence that brought them to their current position on the issue without stating their position, but I do not want to do that because it is a lot of work and because there are probably already perfectly satisfactory books on the subject.

By the way, the kind of person who will discriminate against me because of my opinion on this issue will almost certainly correctly infer which side I am on from my first comment without really having to think about it.

comment by jimrandomh · 2010-08-04T18:09:11.291Z · score: 0 (2 votes) · LW · GW

It doesn't follow that there aren't racial difference based on genetics as well. In fact, the idea that there might NOT be is quite absurd. Of course there are. The only question is how large they are.

That is not the only question. The question that gets people into trouble, is "which groups are favored or disfavored". You can't answer that without offending some people, no matter how small you think the genetic component of the difference is, because many of the people who read it will discard or forget the magnitude entirely and look at only the sign. Saying that group X is genetically smarter than group Y by 10^-10 IQ points will, for many listeners, have the same effect as saying that X is 10^1 IQ points smarter. And while the former belief may be true, the latter belief is false, harmful to those who hold it, and harmful to uninvolved third parties. True statements about race, IQ, and genetics are very easy to simplify or round off to false, harmful and disreputable ones.

That's why comments about race, IQ, and genetics always have to be one level separated from reality, talking about groups X and Y and people with orange eyes rather than real traits and ethnicities. And if they aren't well-separated from reality, they have to be anonymous, to protect the author from the reputational effects of things others incorrectly believe they've said.

(Edited to add: See also this comment I previously wrote on the same topic, which describes a mechanism by which true beliefs about demographic differences in intelligence (not necessarily genetic ones) produce false beliefs about individual intelligence.)

comment by steven0461 · 2010-08-04T22:45:15.799Z · score: 3 (3 votes) · LW · GW

It seems clear to me that much of the time when people mistakenly get offended, they're mistaken about what sort of claim they should get offended about, not just mistaken about what claim was made.

comment by TobyBartels · 2010-08-11T03:18:43.880Z · score: 1 (1 votes) · LW · GW

The important thing for me is that the standard deviations swamp the average difference, so the argument against individual prejudice is valid.

comment by Oligopsony · 2010-08-05T01:01:42.508Z · score: 1 (1 votes) · LW · GW

I wouldn't say I "subscribe" to Marxism, though it seems plausible to me that I might in the near future. I'm still investigating it. While I wouldn't say that specific Marxist hypotheses have risen to the level of doxastic attitudes, the approach has affected the sort of facial explanations I give for phenomena. But as I said the tradition I'm most interested in is recent, economics-focused English language academic Marxism. (The cultural stuff doesn't really interest me all that much, and most of it strikes me as nonsense, but I'm not informed enough about it to conclude that "yes, it is nonsense!") If I could recommend a starting point it would be Harvey's "Limits to Capital," although it was Hobsbawm's trilogy on the 19th century that sparked my interest.

I hope this doesn't sound evasive! I try to economize on my explicit beliefs while being explicit on my existing biases.

(As a side note, while there are a lot of different LTVs floating around, it's likely that they're almost all a bit more trivial and a lot less crazy than what you might be imagining. Most forms don't contradict neoclassical price theory but do place some additional (idealized, instrumental) constraints in order to explain additional phenomena.)

By the signaling thing, I mean the following: normal humans (not neurotic screwballs, not sociopath salesmen) show a level of confidence in social situations that corresponds roughly to how confident they themselves feel at the time. Thus, when someone approaches you and tries to sell you on something - a product, an idea, or, most commonly, themselves - their confidence level can serve as a good proxy for whether they think the item under sale is actually worthy of purchase. The extent to which they seem guarded signals that they're not all that. So for game-theoretic reasons, salesmanship works.

But it's also the case that normal people become more confident and willing to let their guards down when they're around people they trust, for obvious reasons. Thus, lowering of guards can signal "I trust you; indeed, trust you significantly more than most people" if you showed some guardedness when you first met them. There are other signals you can send, but these are among those whose absence will leave people suspicious, if you want to take your relationships in a more serious direction.

So there are tradeoffs in where you choose to place yourself on the easy-confidence spectrum. Moving to the left makes it easier to make casual friends, and lots of them; to the right makes it easier to make good friends. I suspect that most people slide around until they get the goods bundle that they want - I've even noticed how I've slid around throughout time, in reaction to being placed in new social environments - although there are obvious dysfunctional cases.

Sorry for implying that racism is common here if it isn't! Seeing Saileresque shibboleths thrown around here a few times and, indeed, the nearbyness of blogs like Roissy probably colored my perceptions. (Perhaps the impression I have of PUA from the Game and Roissy is similarly inaccurate.)

comment by TobyBartels · 2010-08-11T03:22:19.018Z · score: 0 (0 votes) · LW · GW

I used to be interested in Marxism, but not so much anymore.

However, I'm still interested in theories of value. The labour theory of value is not just a Marxist thing; it was widely accepted in the 19th century, and there are still non-Marxists who use it.

I have a hard time deciding if the debate is anything more than a matter of definition. Perhaps one ought to have multiple theories of value for different purposes?

Anyway, I want to ask if you have any recommendations for reading on this subject.

comment by Emile · 2010-08-04T10:08:27.017Z · score: 0 (0 votes) · LW · GW

I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap.

I was wondering about that too, it's not really a major topic here, though maybe the fact that it's been recently discussed on Overcoming Bias and that Roissy in DC is a "nearby" blog gave him this impression?

comment by satt · 2010-08-04T17:55:30.405Z · score: 1 (1 votes) · LW · GW

The topic kinda-sorta came up in last month's Open Thread, and WrongBot used it as an example in "Some Thoughts Are Too Dangerous For Brains to Think".

comment by [deleted] · 2011-12-20T17:33:02.255Z · score: 0 (0 votes) · LW · GW

"Some Thoughts Are Too Dangerous For Brains to Think".

Which was controversial.

comment by erratio · 2010-06-29T10:18:41.631Z · score: 9 (9 votes) · LW · GW

Hi all, I'm Jen, an Australian Jewish atheist, and a student in a Computer Science/Linguistics/Cognitive Science combined degree, in which I am currently writing a linguistics thesis. I got here through recommendations from a couple of friends who visit here and stayed mostly for the akrasia and luminosity articles (hello thesis and anxiety/self-esteem problems!) Oh and the other articles too, but the ones I've mentioned are the ones that I've put the most effort into understanding and applying. The others are just interesting and marked for further processing at some later time.

I think I was born a rationalist rather than becoming one - I have a deep-seated desire for things to have reasons that make sense, by which I mean the "we ran some experiments and got this answer" kind of sense as opposed to the "this validates my beliefs" kind of sense. Although having said that I'm still prey to all kinds of irrationality, hence this site being helpful.

At some point in the future I would be interested in writing something about linguistic pragmatics - it's basically another scientific way of looking at communication. There's a lot of overlap between pragmatics and the ideas I've seen here on status and signalling, but it's all couched in different language and emphasises different parts, so it may be different enough to be helpful to others. But at the moment I have no intention of writing anything beyond this comment (hello thesis again!), the account is mostly just because I got sick of not being able to upvote anything.

comment by Morendil · 2010-06-29T10:42:40.411Z · score: 3 (3 votes) · LW · GW

Welcome to Less Wrong!

writing something about linguistic pragmatics

Please do! I have a keen interest in that topic.

comment by Bill_McGrath · 2011-08-24T11:51:28.907Z · score: 8 (8 votes) · LW · GW

Hello, Less Wrong!

I'm Bill McGrath. I'm 22 years old, Irish, and I found my way here, as with many others, from TVTropes and Harry Potter and the Methods of Rationality.

I'm a composer and musician, currently entering the final year of my undergrad degree. I have a strong interest in many other fields - friends of mine who study maths and physics often get grilled for information on their topics! I was a good maths student in school, I still enjoy using maths to solve problems in my other work or just for pleasure, and I still remember most of what I learned. Probability is the main exception here - it wasn't my strongest area, and I've forgotten a lot of the vocabulary, but it's the next topic I intend to study when I get a chance. This is proving problematic in my understanding of the Bayesian approach, but I'm getting there.

I've been working my way through the core sequences, along with some scattered reading elsewhere on the site. So far, a lot of what I've encountered has been ideas that are familiar to me, and that I try to use when debating or discussing ideas anyway. I've held for a while now that you have to be ready to admit your mistakes, not be afraid of being wrong sometimes, and take a neutral approach to evidence - allowing any of these to cloud your judgement means you won't get reliable data. That said, I've still learned quite a bit from LW, most importantly how to express these ideas about rationality to other people.

I'm not sure I could pinpoint what moment brought me to this mindset, but it was possibly the moment I understood why the scientific method was about trying to disprove, rather than prove, your hypothesis; or perhaps when I realized that the empiricist's obligation to admit when they are wrong is what makes them strong. Other things that have helped me along the way - the author Neal Stephenson, the comedian Tim Minchin, and Richard Feynman.

My other interests, most of which I have no formal training in but I have read about in my own time or have learned about through conversation with friends, include:

-politics - I consider myself to be socially liberal but economically ignorant

-languages (I speak a little German and less Irish, have taken brief courses in other languages), linguistic relativism

-writing, and the correct use of language

-quantum physics (in an interested layman way - I am aware of a lot of the concepts, but I'm by no means knowledgeable)

-psychology

as well as many other things which are less LW-relevant!

Thank you to the founders and contributors to the site who have made it such an interesting collection of thoughts and ideas, as well as a welcoming forum for people to come and learn. I think I'll learn a lot from it, and hopefully some day I'll be able to repay the favour!

-Bill

comment by taryneast · 2010-12-12T15:19:01.538Z · score: 8 (8 votes) · LW · GW

Hi, I'm Taryn. I'm female, 35 and working as a web developer. I started studying Math, changed to Comp Sci and actually did my degree in Cognitive Science (Psychology of intelligence, Neurophysiology, AI, etc.). My 3rd year Project was on Cyberware.

When I graduated I didn't see any jobs going in the field and drifted into Web Development instead... but I've stayed curious about AI, along with SF, Science, and everything else too. I kinda wish I'd known about Singularity research back then... but perhaps it's better this way. I'm not a "totally devoted to one subject" kinda person. I'm too curious about everything to settle for a single field of study.

That being said - I've worked in web development now for 11 years. Still, when I get home, I don't start programming, preferring to pick up a book on evolutionary biology, medieval history, quantum physics, creative writing (etc.) instead. There's just too damn many interesting things to learn about to just stick to one!

I found LW via Harry Potter & MOR, which my sister forwarded to me. Since then I've been voraciously reading my way through the sequences, learning just how much I have yet to learn... but totally fascinated. This site is awesome.

comment by Relsqui · 2010-09-17T02:47:12.100Z · score: 8 (8 votes) · LW · GW

I suppose it's high time I actually introduced myself.

Hullo LW! I'm Elizabeth Ellis. That's a very common first name and a very common last name, so if you want to google me, I recommend "relsqui" instead. (I'm not a private person, the handle is just more useful for being a consistently recognizable person online.) I'm 24 and in Berkeley, California, USA. No association with the college; I just live here. I'm a cyclist, an omnivore, and a nontheist; none of these are because of moral beliefs.

I'm a high school dropout, which I like telling people after they've met me, because I like fighting the illusion that formal education is the only way to produce intelligent, literate, and articulate people--or rather, that the only reason to drop out is not being one. In mid-August of this year I woke up one morning, thought for a while about things I could do with my life that would be productive and fulfilling, and decided it would be helpful to have a bachelor's degree. I started classes two weeks later. GEs for now, then a transfer into a communication or language program. It's very strange taking classes with people who were in high school four months ago.

My major area of interest is human communication. Step back for a moment and think about it: You've got an electric meatball in your head which is capable of causing other bits of connected meat to spasm, producing vibrations in the air. Another piece of meat somewhere else is touched by those vibrations ... and then the electric meatball in somebody else's head is supposed to produce an approximation of the signals that happened to be running through yours? That's ridiculous. The wonder isn't how often we miscommunicate, it's that we ever communicate well.

So, my goal is to help people do it better. This includes spreading communication techniques which I've found effective for getting one electric meatball to sync up with another, as well as more straightforward things like an interest in languages. (I'm only fluent in English, but I'm conversational in Spanish, know some rudimentary Hebrew, and have a semester-equivalent or less of a handful of other things.)

One of my assets in this department is that, on the spectrum of strongly logic-driven people to strongly emotion-driven people, I am fairly close to the center. This has its good and bad points. I understand each side better than the other one does, and have had success translating between them for people who weren't getting across to each other. On the other hand, I'm repelled by both extremes, which can be inconvenient. I think that no map of a human can be accurate without acknowledging emotions in the territory, which we feel, and which drive us, but which we do not fully understand. This does not preclude attempting to understand them better; it just requires working with those emotions rather than wishing they didn't exist.

I came to LW because someone linked me to the parable of the dagger and it delighted me, so I looked around to see what else was here. I'm interested in ways to make better decisions and be less wrong because I find it useful to have these ideas floating around in my head when I have a decision to make--much like aforementioned communication techniques when I'm talking to someone. I'm not actively trying to transform myself, at least not in any way related to rationality.

That's everything of any relevance I can think of at the moment.

comment by Alicorn · 2010-09-17T03:04:50.811Z · score: 4 (4 votes) · LW · GW

Upvoted for the amusing phrase "electric meatball".

comment by Relsqui · 2010-09-17T03:06:57.088Z · score: 1 (1 votes) · LW · GW

"Have you heard my new band, Electric Meatball?"

(I've tried to describe that idea several times and I think that's my favorite wording so far.)

comment by LauralH · 2010-07-22T20:02:25.484Z · score: 8 (8 votes) · LW · GW

My name is Laural, 33-yo female, degree in CS, fetish for EvPsych. Raised Mormon, got over it at 18 or so, became a staunch Darwinist at 25.

I've been reading OvercomingBias on and off for years, but I didn't see this specific site till all the links to the Harry Potter fanfic came about. I had in fact just completed that series in May, so was quite excited to see the two things combined. But I think I wouldn't have registered if I hadn't read the AI Box page, which convinced me that EY was a genius. Personally, I am more interested in life-extension than FAI. I'm most interested in changing social policy to legalize drugs, I suppose; if people are allowed to put whatever existing substances in their bodies, the substances that don't yet exist have a better chance.

comment by TobyBartels · 2010-07-22T03:14:19.275Z · score: 8 (8 votes) · LW · GW

I also found this blog through HP:MoR.

My ultimate social value is freedom, by which I mean the power of each person to control their own life. I believe in something like a utilitarian calculus, where utility is freedom, except that I don't really believe that there is a common scale in which one person's loss of freedom can be balanced by another person's gain. However, I find that freedom is usually very strongly positive-sum on any plausible scale, so this flaw doesn't seem to matter very much.

Of course, freedom in this sense can only be a social value; this leaves it up to each person to decide their own personal values: what they want for their own lives. In my case, I value forming and sustaining friendships in meatspace, often with activities centred around food and shared work, and I also value intellectual endeavours, mostly of an abstract mathematical sort. But this may change with my whims.

I might proselytise freedom here from time to time. There would be no point in proselytising my personal values, however.

comment by TobyBartels · 2010-07-24T17:10:39.009Z · score: 2 (2 votes) · LW · GW

I also found this blog through HP:MoR.

Now that I think about it, I may have found HP:MoR through this blog. (I don't read much fan fiction.)

I can't remember anymore what linked me to HP:MoR, but I think that I got there after following a series of blog posts linking to blog posts on blogs that I don't ordinarily read. So I might well have gone through Less Wrong (or Overcoming Bias) along that way.

But if so, I wasn't inspired to read further in Less Wrong until after I'd read HP:MoR.

comment by [deleted] · 2010-07-22T03:27:09.155Z · score: 2 (2 votes) · LW · GW

Freedom, I can get behind. Also math. Welcome aboard.

comment by TobyBartels · 2010-07-22T08:18:27.955Z · score: 0 (0 votes) · LW · GW

Thanks!

comment by CronoDAS · 2010-07-22T06:11:08.815Z · score: 1 (1 votes) · LW · GW

I suspect that some kinds of "freedom" are overrated. Suppose that A, B, and C are mutually exclusive options, and you prefer A to both of the others. If you have a choice between A and B, you'd choose A. If I then give you the "freedom" to choose between A, B, and C instead of just between A and B, you'll still choose A, and the extra "freedom" didn't actually benefit you.

comment by TobyBartels · 2010-07-22T08:18:19.313Z · score: 1 (1 votes) · LW · GW

Right, by the standard of control over one's own life, that extra option does not actually add to my freedom. In real life, an extra option can even be confusing and so actually detract from freedom! (But it can also help clarify things and add to freedom that way, although you can get the same effect by merely contemplating the extra option if you're smart enough to think of it.)

comment by cousin_it · 2010-07-22T08:26:27.445Z · score: 2 (2 votes) · LW · GW

More freedom is always good from an individual rationality perspective, but game theory has lots of situations where giving more behavior options to one agent causes harm to everyone, or where imposing a restriction makes everyone better off. For example, if we're playing the Centipede game and I somehow make it impossible for myself to play "down" for the first 50 turns - unilaterally, without requiring any matching commitment on your part - then we both win much more than we otherwise would.
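cousin_it's commitment point can be checked with a toy Centipede game. (This is a sketch with invented numbers: a pot that doubles each turn and an 80/20 split for the player who stops; neither figure is from the comment.)

```python
def centipede_payoffs(stop_turn, pot0=2.0, growth=2.0, taker_share=0.8):
    """Payoffs (player 0, player 1) when the mover at `stop_turn`
    plays "down".  The pot starts at pot0 and is multiplied by
    `growth` every turn; the player who stops takes `taker_share`
    of it.  Players alternate, player 0 moving on odd turns."""
    pot = pot0 * growth ** (stop_turn - 1)
    mover = (stop_turn - 1) % 2          # 0 on odd turns, 1 on even turns
    payoffs = [0.0, 0.0]
    payoffs[mover] = taker_share * pot
    payoffs[1 - mover] = (1 - taker_share) * pot
    return tuple(payoffs)

# Backward induction: player 0 stops immediately.
immediate = centipede_payoffs(1)
# If player 0 visibly cannot stop during the first 50 turns, player 1's
# best response (with a doubling pot) is to wait until turn 50 to stop.
committed = centipede_payoffs(50)
```

With these numbers `immediate` is roughly (1.6, 0.4), while `committed` is in the hundreds of trillions for each player: the commitment was unilateral, yet even the unrestricted player gains enormously.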

comment by TobyBartels · 2010-07-22T08:53:34.760Z · score: 0 (0 votes) · LW · GW

Well, if you make it impossible for you to play down, then that's a perfectly valid exercise of your control over your own life, isn't it? For a paradox, you should consider whether I would impose that restriction on you (or at least whether I would take part in the enforcement mechanism of your previously chosen constraint when you change your mind).

Usually in situations like this, I think that the best thing to do is to figure out why the payoffs work in that way and then try to work with you to beat the system. If that's not possible now, then I would usually announce my intention to cooperate, then do so, to build trust (and maybe guilt if you defect) for future interactions.

If I'm playing the game as part of an experiment, so that it really is just a game in the ordinary sense, then I would try to predict your behaviour and play accordingly; this has much more to do with psychology than game theory. I wouldn't have to force you to cooperate on the first 50 turns if I could convince you of the truth: that I would cooperate on those turns anyway, because I already predict that you will cooperate on those turns.

If the centipede game, or any of the standard examples from game theory, really is the entire world, then freedom really isn't a very meaningful concept anyway.

comment by cousin_it · 2010-07-22T09:02:59.789Z · score: 2 (2 votes) · LW · GW

Well, if you make it impossible for you to play down, then that's a perfectly valid exercise of your control over your own life, isn't it?

Then you make it a tautology that "freedom is good", because any restriction on freedom that leads to an increase of good will be rebranded as a "valid exercise of control". Maybe I should give an example of the reverse case, where adding freedom makes everyone worse off. See Braess's paradox: adding a new free road to the road network, while keeping the number of drivers constant, can make every driver take longer to reach their destination. (And yes, this situation has been observed to often occur in real life.) Of course this is just another riff on the Nash equilibrium theme, but you should think more carefully about what your professed values entail.
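The road example can be made concrete with the standard textbook instance of Braess's paradox. (The 4000 drivers and the 45-minute flat legs are the usual illustrative numbers, not figures from the comment.)

```python
def equilibrium_time(n_drivers=4000, with_shortcut=False):
    """Per-driver travel time (minutes) at Nash equilibrium in the
    classic Braess network.  Two routes, Start->A->End and
    Start->B->End: legs Start->A and B->End take (drivers on that
    leg)/100 minutes, while the other leg of each route takes a
    flat 45 minutes."""
    if not with_shortcut:
        # Symmetric equilibrium: drivers split evenly across the routes.
        half = n_drivers / 2
        return half / 100 + 45
    # Add a zero-cost shortcut A->B.  Start->A costs at most
    # n_drivers/100 = 40 < 45, so it dominates the flat first leg
    # (and likewise B->End dominates the flat last leg): every
    # driver takes Start->A->B->End.
    return n_drivers / 100 + 0 + n_drivers / 100
```

Without the shortcut every driver takes 65 minutes; with it, 80. Removing the "extra freedom" makes everyone faster.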

comment by TobyBartels · 2010-07-22T10:20:18.365Z · score: 2 (2 votes) · LW · GW

Then you make it a tautology that "freedom is good"

Yes, it's my ultimate social value! That's not a tautology, but an axiom. I don't like it because I believe that it maximises happiness (or whatever), I just like it.

Braess's paradox

Yes, this is more interesting, especially when closing a road would improve traffic flow. People have to balance their desire to drive on the old road with their desire to drive in decongested traffic. If the drivers have control over whether to close the road, then the paradox dissolves (at least if all of the drivers think alike). But if the road closure is run by an outside authority, then I would oppose closing the road, even if it's ‘for their own good’.

comment by cousin_it · 2010-07-22T10:46:31.484Z · score: 1 (1 votes) · LW · GW

Also maybe relevant: Sen's paradox. If you can't tell, I love this stuff and could go on listing it all day :-)

comment by TobyBartels · 2010-07-23T07:53:20.882Z · score: 0 (0 votes) · LW · GW

As currently described at your link, that one doesn't seem so hard. Person 2 simply says to Person 1 ‘If you don't read it, then I will.’, to which Person 1 will agree. There's no real force involved; if Person 1 puts down the book, then Person 2 picks it up, that's all. I know that this doesn't change the fact that the theorem holds, but the theorem doesn't seem terribly relevant to real life.

But Person 1 is still being manipulated by a threat, so let's apply the idea of freedom instead. Then the preferences of Persons 1 and 2 may begin as in the problem statement, but Person 1 (upon sober reflection) allows Person 2's preferences to override Person 1's preferences, when those preferences are only about Person 2's life, and vice versa. Then Person 1 and Person 2 both end up wanting y,z,x; Person 1 grudgingly, but with respect for Person 2's rights, gives up the book, while Person 2 refrains from any manipulative threats, out of respect for Person 1.

comment by Vladimir_Nesov · 2010-07-22T09:07:53.354Z · score: 1 (1 votes) · LW · GW

More freedom makes signaling of what you'll actually do more difficult. All else equal, freedom is good.

comment by TobyBartels · 2010-07-22T09:52:54.815Z · score: 1 (1 votes) · LW · GW

More freedom makes signaling of what you'll actually do more difficult.

Yes, this is something that I worry about. You can try to force your signal to be accurate by entering a contract, but even if you signed a contract in the past, how can anybody enforce the contract now without impinging on your present freedom? The best that I've come up with so far is to use trust metrics, like a credit rating. (Payment of debts is pretty much unenforceable in the modern First World, which is why they invented credit reports.)

comment by cousin_it · 2010-07-22T10:28:18.357Z · score: 1 (1 votes) · LW · GW

What Nesov said.

Thomas Schelling gives many examples of incentivising agreements instead of enforcing them. Here's one: you and I want to spend 1 million dollars each on producing a nonexcludable common good that will give each of us 1.5 million in revenue. (So each dollar spent on the good creates 1.5 dollars in revenue that have to be evenly split among us both, no matter who spent the initial dollar.) Individually, it's better for me if you spend the million and I don't, because this way I end up with 1.75 million instead of 1.5. Schelling's answer is spreading the investment out in time: you invest a penny, I see it and invest a penny in turn, and so on. This way it costs almost nothing for us both to establish mutual trust from the start, and it becomes rational to keep cooperating every step of the way.
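The one-shot payoffs in Schelling's example can be tabulated directly. (A minimal sketch; the function name and the explicit assumption that each player starts with 1 million are mine.)

```python
def final_wealth(i_spend, you_spend, start=1.0, multiplier=1.5):
    """My final wealth in millions.  Each of us starts with `start`
    million; every million spent on the common good yields
    `multiplier` million in revenue, split evenly between us."""
    produced = multiplier * (i_spend + you_spend)
    return start - i_spend + produced / 2

# The one-shot game is a Prisoner's Dilemma:
both_spend = final_wealth(1, 1)   # mutual cooperation
free_ride = final_wealth(0, 1)    # I defect, you spend
exploited = final_wealth(1, 0)    # I spend, you defect
neither = final_wealth(0, 0)      # mutual defection
```

Free-riding yields 1.75 million against 1.5 for mutual cooperation, exactly as the comment says. Spreading the spending out in pennies turns this single dilemma into a long sequence of tiny ones, where defecting at any step gains at most a penny but forfeits all the remaining gains from cooperation.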

comment by TobyBartels · 2010-07-23T08:17:41.360Z · score: 0 (0 votes) · LW · GW

The paradoxical decision theorist would still say, ‘You fool! Don't put in a penny; your rational opponent won't reciprocate, and you'll be out a farthing.’. Fortunately nobody behaves this way, and it wouldn't be rational to predict it.

I would probably put in half a million right away, if I don't know you at all other than knowing that you value the good like I do. I'm sure that you can find a way to manipulate me to my detriment if you know that, since it's based on nothing more than a hunch; and actually this is the sort of place where I would expect to see a lecture as to exactly how you would do so, so please fire away! (Of course, any actual calculation as to how fast to proceed depends on the time discounting and the overhead of it all, so there is no single right answer.)

I agree, slowly building up trust over time is an excellent tactic. Looking up somebody's trust metric is only for strangers.

comment by Vladimir_Nesov · 2010-07-22T09:55:00.748Z · score: 1 (3 votes) · LW · GW

You are never free to change what you actually are and what you actually want, so these invariants can be used to force a choice on you by making it the best one available.

comment by cousin_it · 2010-07-22T09:08:50.494Z · score: 0 (0 votes) · LW · GW

Um, Braess's paradox doesn't involve signaling.

comment by Vladimir_Nesov · 2010-07-22T09:48:42.543Z · score: 0 (0 votes) · LW · GW

That's the reason bad things happen. Before the added capacity, drivers' actions are restricted by the problem statement, so signaling isn't needed; its role is already filled. If all drivers decide to ignore the addition, and effectively signal to each other that they actually will, they end up with the old plan, which is better than otherwise, and so they would choose to precommit to that restriction. More freedom made signaling the same plan more difficult, by reducing information. But of course, with the new capacity they could in principle find an even better plan, if only they could precommit to it (coordinate their actions).

comment by Tuesday_Next · 2010-04-07T17:20:08.146Z · score: 8 (8 votes) · LW · GW

Hello everyone!

Name: Tuesday Next
Age: 19
Gender: Female

I am an undergraduate student studying political science, with a focus on international relations. I have always been interested in rationalism and finding the reasons for things.

I am an atheist, but this is more a consequence of growing up in a relatively nonreligious household. I did experiment with paganism and witchcraft for several years, a rather frightening (in retrospect) display of cognitive dissonance as I at once believed in science and some pretty unscientific things.

Luckily I was able to learn from experience, and it soon became obvious that what I believed in simply didn't work. I think I wanted to believe in witchcraft both as a method of teenage rebellion and to exert some control over my life. However I was unable to delude myself.

I tried to interest myself in philosophy many times, but often became frustrated by the long debates that seemed divorced from reality. One example is the idea of free will. Since I was a child (I have a memory of trying, when I was in elementary school, to explain this to my parents without success) I have had a conception of reality and free will that seemed fairly reasonable to me and I never understood what all the fuss was about.

It went something like this: The way things did turn out is the only way things could have turned out, given the exact pre-existing circumstances. In particular, when one person makes a decision they presumably do so for a reason, whether that reason is rational or not; if that decision is not predetermined by the situation and the person, then it is random. If a decision is random, this is not free will because the choice is not a result of a person's decision; rather it is a result of some random phenomenon involving the word "quantum."

But since no two situations are alike, and it is impossible for anyone to know everything, let alone extrapolate from knowledge of the present to figure out what the future will be, there is no practical effect from this determinism. In short, we act as if we have free will and we cannot predict the future. It is the same thing with reality. Whether it is "real" or not is irrelevant.

The practical consequences of this, for me at least, are that arguing about whether we have free will or not misses the point. We may be able to predict the "future" of a simple computer program by knowing all the conditions of the present, but cannot do the same for the real world; it is too complex.
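The contrast between in-principle determinism and practical unpredictability shows up even in toy programs (a sketch of mine, not anything from the comment): given its complete initial state, a pseudo-random simulation replays identically every time.

```python
import random

def toy_world(seed, steps=5):
    """A miniature deterministic 'world': the seed is its complete
    initial state, so every rerun from the same seed unfolds the
    same way, even though each step looks random from outside."""
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(steps)]

# Same initial conditions, same history, every single time:
assert toy_world(42) == toy_world(42)
```

The real world differs only in that we never have the seed, and the state is vastly too large to measure or extrapolate, which is the comment's point.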

I finally found this articulated, to my great relief that I was not crazy for believing it, in Daniel Dennett's "Freedom Evolves." This is what got me interested in philosophy again.

I am also interested in how to change minds (including my own). I have always had fairly strong (and, in retrospect, irrational) political beliefs. When I took an Economics course, I found many of my political beliefs changing significantly.

I even found myself arguing with a friend (who like me is fairly liberal), and he later praised me for successfully defending a point of view he knew I disagreed with. (The argument in question was about a global minimum wage law; I was opposed.) I found this disconcerting as I was in fact arguing what I honestly believed, though I do have a tendency to play "Devil's Advocate" and argue against what I believe.

This forced me to confront the fact that some of my political views had actually changed. Later, when I challenged some of the basic assumptions that Economics class made, like the idea that markets can be "perfect," I found myself reassessing my political views again. I am trying to get in the habit of doing this to avoid becoming dogmatic.

Anyway, I think that's enough for now; if anyone has any questions I would be happy to address them.

--Tuesday

comment by Alicorn · 2010-04-07T18:30:27.643Z · score: 1 (3 votes) · LW · GW

I officially declare you to be nifty. You may collect your niftiness paraphernalia at the Office of Niftiness during normal business hours.

comment by Morendil · 2010-04-07T17:42:08.929Z · score: 1 (1 votes) · LW · GW

Welcome to LW!

comment by orange · 2010-04-09T04:56:05.024Z · score: 0 (2 votes) · LW · GW

Your ideas on free will are basically what I came up with too when I was younger. That's kind of comforting.

comment by Zvi · 2009-04-16T20:37:53.747Z · score: 8 (10 votes) · LW · GW
  • Handle: Zvi
  • Name: Zvi Mowshowitz
  • Location: New York City
  • Age: 30
  • Education: BA, Mathematics

I found OB through Marginal Revolution, which then led to LW. A few here know me from my previous job as a professional Magic: The Gathering player and writer and full member of the Competitive Conspiracy. That job highly rewarded the rationality I already had and encouraged its development, as does my current one which unfortunately I can't say much about here but which gives me more than enough practical reward to keep me coming back even if I wasn't fascinated anyway. I'm still trying to figure out what my top level posts are going to be about when I get that far.

While I have told my Magic origin story I don't have one for rationality or atheism; I can't remember ever being any other way and I don't think anyone needs my libertarian one. If anything it took me time to realize that most people didn't work that way, and how to handle that, which is something I'm still working on and the part of OB/LW I think I've gained the most from.

comment by Alicorn · 2009-04-16T15:15:53.937Z · score: 8 (8 votes) · LW · GW
  • Handle: Alicorn
  • Location: Amherst, MA
  • Age: The number of full years between now and October 21, 1988
  • Gender: Female

Atheist by default, rationalist by more recent inclination and training. I found OB via Stumbleupon and followed the yellow brick road to Less Wrong. In the spare time left by schoolwork and OB/LW, I do art, write, cook, and argue with those of my friends who still put up with it.

comment by MBlume · 2009-04-17T02:59:36.263Z · score: 0 (0 votes) · LW · GW

bookmarking improvisational soup =)

comment by Jack · 2009-04-16T16:30:38.639Z · score: 0 (2 votes) · LW · GW

Do you know what areas you want to focus on in philosophy?

comment by Alicorn · 2009-04-16T16:32:48.955Z · score: 3 (3 votes) · LW · GW

Not sure yet. I have a fledgeling ethics of rights kicking around in the back of my head that I might do something with. Alternately, I could start making noise about my wacky opinions on personal identity and be a metaphysicist. I also like epistemology, and I find philosophy of religion entertaining (although I wouldn't want to devote much of my time to it). I'm pretty sure I don't want to do philosophy of math, hardcore logic, or aesthetics.

comment by Jack · 2009-04-16T18:48:02.703Z · score: 0 (2 votes) · LW · GW

I hope we get to hear your wacky opinions on personal identity some time, I think my senior thesis will be on that subject.

comment by Alicorn · 2009-04-16T22:37:41.298Z · score: 3 (5 votes) · LW · GW

I think I have to at least graduate before anyone besides me is allowed to write a thesis on my wacky opinions on personal identity ;)

In a nutshell, I think persons just are continuous self-aware experiences, and that it's possible for two objects to be numerically distinct and personally identical. For instance (assuming I'm not a brain in a vat myself) I could be personally identical to a brain in a vat while being numerically distinct. The upshot of being personally identical to someone is that you are indifferent between "yourself" and the "other person". For instance, if Omega turned up, told me I had an identical psychological history with "someone else" (I use terms like that of grammatical necessity), and that one of us was a brain in a vat and one of us was as she perceived herself to be, and that Omega felt like obliterating one of us, "we" would "both" prefer that the brain in a vat version be the one to be obliterated because we're indifferent between the two as persons, and just have a general preference that (ceteris paribus) non brains-in-vats are better.

Persons can share personal parts in the same way that objects can share physical parts. We should care about our "future selves" because they will include the vast majority of our personal parts (minus forgotten tidbits and diluted over time by new experiences) and respect (to a reasonable extent) the wishes of our (relatively recent) past selves because we consist mostly of those past selves. If we fall into a philosophy example and undergo fission or fusion, fission yields two people who diverge immediately but share a giant personal part. Fusion yields one person who shares a giant personal part each with the two people fused.

comment by loqi · 2009-04-18T21:04:46.906Z · score: 2 (2 votes) · LW · GW

In a nutshell, I think persons just are continuous self-aware experiences, and that it's possible for two objects to be numerically distinct and personally identical.

I've found this position to be highly intuitive since it first occurred/was presented to me (don't recall which, probably the latter from Egan).

One seemingly under-appreciated (disclaimer: haven't studied much philosophy) corollary of it is that if you value higher quantities of "personality-substance", you should seek (possibly random) divergence as soon as you recognize too much of yourself in others.

comment by Alicorn · 2009-04-19T01:52:41.598Z · score: 2 (2 votes) · LW · GW

Not really. Outside of philosophy examples and my past and future selves, I don't actually share any personal parts with anyone; the personal parts are continuity of perspective, not abstract personality traits. I can be very much like someone and still share no personal parts with him or her. Besides, that's if I value personal uniqueness. Frankly, I'd be thrilled to discover that there are several of me. After all, Omega might take it into his head to obliterate one, and there ought to be backups.

comment by loqi · 2009-04-19T03:41:35.030Z · score: 0 (0 votes) · LW · GW

I don't actually share any personal parts with anyone; the personal parts are continuity of perspective, not abstract personality traits. I can be very much like someone and still share no personal parts with him or her.

The term "continuity of perspective" doesn't reduce much beyond "identity" for me in this context. How similar can you be without sharing personal parts? If the difference is at all determined by differences in external inputs, how can you be sure that your inputs are effectively all that different?

Frankly, I'd be thrilled to discover that there are several of me. After all, Omega might take it into his head to obliterate one, and there ought to be backups.

I think the above addresses a slightly different concern. Suppose that some component of your decision-making or other subjective experience is decided by a pseudo-random number generator. It contains no interesting structure or information other than the seed it was given. If you were to create a running (as opposed to static, frozen) copy of yourself, would you prefer to keep the current seed active for both of you, or introduce a divergence by choosing a new seed for one or the other? It seems that you would create the "same amount" of personal backup either way.

comment by michaelhoney · 2009-04-16T23:47:11.136Z · score: 2 (2 votes) · LW · GW

I think you're on the right track. There'll be a lot of personal-identity assumptions re-evaluated over the next generation as we see more interpenetration of personal parts as we start to offload cognitive capacity to shared resources on the internet.

Semi-related: I did my philosophy masters sub-thesis [15 years ago, not all opinions expressed therein are ones I would necessarily agree with now] on personal identity and the many-world interpretation of quantum physics. Summary: personal identity is spread/shared along all indistinguishable multiversal branches: indeterminacy is a feature of not knowing which branch you're on. Personal identity across possible worlds may be non-commutative: A=B, B=C, but A≠C.

comment by RobinZ · 2009-07-20T19:51:36.407Z · score: 1 (1 votes) · LW · GW

Technically, that's non-transitive - non-commutative would be A=B but B≠A.

(Also, it is mildly confusing to use an equality symbol to indicate a relationship which is not a mathematical equality relationship - i.e. reflexive, commutative, and transitive.)

(Also, a Sorites-paradox argument would suggest that identity is a matter of degree.)
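The distinction RobinZ is drawing can be seen in a minimal sketch (my own illustration, not from the thread): a tolerance-based "similar enough" relation is reflexive and symmetric, yet fails transitivity exactly in the A=B, B=C, A≠C pattern, and chaining it is the engine of the Sorites paradox:

```python
def close_enough(a, b, tolerance=1.0):
    """A similarity relation: reflexive and symmetric, but NOT transitive."""
    return abs(a - b) <= tolerance

A, B, C = 0.0, 0.8, 1.6

reflexive = close_enough(A, A)                          # A ~ A always holds
symmetric = close_enough(A, B) == close_enough(B, A)    # A ~ B iff B ~ A
# A ~ B and B ~ C, yet not A ~ C: the non-transitivity in question.
chain_breaks = close_enough(A, B) and close_enough(B, C) and not close_enough(A, C)

print(reflexive, symmetric, chain_breaks)  # True True True
```

A long enough chain of pairwise-indistinguishable steps connects any two points, which is the Sorites-style argument that such identity must be a matter of degree rather than an equivalence relation.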

comment by Nick_Tarleton · 2009-04-19T02:09:03.176Z · score: 1 (1 votes) · LW · GW

Personal identity across possible worlds may be non-commutative: A=B, B=C, but A≠C.

I think I understand (and agree with) the other parts, but how is this possible?

comment by Jack · 2009-04-17T07:08:06.365Z · score: 0 (2 votes) · LW · GW

See, now I'm going to block quote this :-P

comment by Jonathan Doolin (jonathan-doolin) · 2018-05-26T16:10:50.087Z · score: 7 (2 votes) · LW · GW

Hi. This is my first visit to this website, and my third comment today. I've been listening to the show "Bayesian Conspiracy" and made some posts to the subreddit. So I guess I'm not a good lurker.

I was intrigued by Arandur's article entitled "The Goal of the Bayesian Conspiracy" which was essentially,

(1) eliminate most pain and suffering and inequity.

(2) develop technologies for eternal life.

I thought the ordering Arandur suggested was quite wise. I recently saw the series "Dollhouse" and I felt like it gave a pretty good description of what would probably happen if you reversed the order.

And then I went on to read the article on "The Failures of Eld Science"... Well, skim.... Like I said, I'm not a good lurker. And then I read "Rationality as a Martial Art" which was short so I read the whole thing.

I guess I have very entrenched views on the failures of Eld science, and Rationality as a martial art, because, I've been arguing about Special and General Relativity online for about two decades, and occasionally debating biblical interpretation with Christians for most of my life.

Hide in plain sight

Before you can step forward you have to be where you are.

Don't be ashamed of your ignorance, but don't cling to it either.

Desire the things you have, commit to what you love.

Don't look for false things. Don't seek out error to make yourself look smart. Don't confuse counterattack with defense.

Stand up for what you believe in--especially when you realize you look foolish, and still believe it.

When pulled in different directions, stick with your commitments.

Get good at what you have to do. It will be more fun and people will appreciate you more.

Be clear with your meaning.

Try to understand others from their own perspectives, and with their own meanings.

Acknowledge the hypothesis. Don't confuse what you believe to be a false belief with a moral failure.

Be the heart before you be the head. Agreeing to disagree is the start of a conversation... not the end.

I have two MS degrees, one in physics, and one in math... I got them in the wrong order... as knowing how to solve a differential equation would have been REALLY helpful in physics. But I'm really good at trig, both regular and hyperbolic.

comment by SwingDancerMike · 2012-06-20T19:37:47.986Z · score: 7 (7 votes) · LW · GW

Hi everyone, I've been reading LW for a year or so, and met some of you at the May minicamp. (I was the guy doing the swing dancing.) Great to meet you, in person and online.

I'm helping Anna Salamon put together some workshops for the meetup groups, and I'll be posting some articles on presentation skills to help with that. But in order to do that, I'll need 5 points (I think). Can you help me out with that?

Thanks

Mike

comment by SwingDancerMike · 2012-06-20T21:57:18.207Z · score: 1 (1 votes) · LW · GW

Yay 5 points! That was quick. Thanks everyone.

comment by Michelle_Z · 2011-07-14T23:49:47.639Z · score: 7 (7 votes) · LW · GW

Hello LessWrong!

My name is Michelle. I am from the United States and am entering college this August. I am a graphic design student who is also interested in public speaking. I was led to this site one day while browsing fanfiction. I am an avid reader and spend a good percentage of my life reading novels and other literature. I read HPMOR and found the story intriguing and the theories very interesting. When I finally reached the end, I read the author's page and realized that I could find more information on the ideas presented in the book. Naturally, I was delighted. The ideas were mainly why I kept reading. I had not encountered anything similar and found it refreshing to read something that had so many theories that rang true to my ear.

I am not a specialist in any science field or math field. I consider rationality to be something that I wish everyone would get interested in. I really want this idea to stick in more people's heads, but know better than to preach it. I hope to help people become more involved in it, and learn more about rationality and the like.

I'm learning. I'm no expert and hardly consider myself a rationalist. If this were split into ranks like karate, I'd still be a white belt.

I'm looking forward to learning more about rationality, philosophy, and science with all of you here, and hopefully one day contributing, myself!

comment by [deleted] · 2011-12-20T16:08:13.330Z · score: 3 (3 votes) · LW · GW

Greetings!

While I naturally feel superior to people who came here via fanfiction.... I want to use this opportunity to peddle some of the fiction that got me here way back in 2009.

comment by Michelle_Z · 2011-12-25T02:51:25.649Z · score: 0 (0 votes) · LW · GW

I've read that, as well.

comment by Alicorn · 2011-07-15T00:06:18.487Z · score: 2 (2 votes) · LW · GW

Here, have some more fanfiction!

comment by Michelle_Z · 2011-07-15T00:11:36.078Z · score: 1 (1 votes) · LW · GW

Not a huge fan of the Twilight series, but I'll pick it up when I have a bit more time to get into it. I am currently working on a summer essay for college. In other words, I am productively procrastinating by reading this blog instead of writing the remaining two thirds of my essay.

comment by Alicorn · 2011-07-15T00:27:07.194Z · score: 4 (6 votes) · LW · GW

You don't have to be a fan of Twilight. A lot of people who like my fic hate canon Twilight.

comment by Michelle_Z · 2011-07-15T00:33:17.903Z · score: 0 (0 votes) · LW · GW

I'll give it a look, then.

comment by gscshoyru · 2011-02-04T14:27:52.642Z · score: 7 (7 votes) · LW · GW

Hi, my handle is gscshoyru (gsc for short), and I'm new here. I found this site through the AIBox experiment, oddly enough -- and I think I got there from TVTropes, though I don't remember. After reading the fiction, (and being vaguely confused that I had read the NPC story before, but nothing else of his, since I'm a fantasy/sci-fi junkie and I usually track down authors I like), I started reading up on all of Eliezer's writings on rationality. And found it made a lot of sense. So, I am now a budding rationalist, and have decided to join this site because it is awesome.

That's how I found you -- as for who I am and such, I am a male 22-year-old mathematics major/CS minor currently working as a programmer in New Jersey. So, that's me. Hi everyone!

comment by bigjeff5 · 2011-01-27T02:02:01.130Z · score: 7 (7 votes) · LW · GW

Hello, I'm Jeff, I found this site via a link on an XKCD forum post, which also included a link to the Harry Potter and the Methods of Rationality fan-fic. I read the book first (well, what has been written so far, I just couldn't stop!) and decided that whoever wrote that must be made of pure awesome, and I was excited to see what you all talked about here.

After some perusal, I decided I had to respond to one of the posts, which of course meant I had to sign up. The post used keyboard layouts (QWERTY, etc.) as an example of how to rephrase a question properly in order to answer it in a meaningful way. Posting my opinion ended up challenging some assumptions I had about the QWERTY layout and the Dvorak layout, and I am now three and a half hours into learning the Dvorak layout in order to determine which is actually the better layout (based on things I read it seemed a worthwhile endeavor, instead of too difficult like I assumed).

I would have posted this in Dvorak layout, but I only have half the keys down and it would be really, really slow, so I switched back to QWERTY just for this. QWERTY comes out practically as I think it - Dvorak, not so much yet. The speed with which I'm picking up the new layout also shatters some other assumptions I had about how long it takes to retrain muscle memory. Turns out, not long at all (at least in this case), though becoming fluent in Dvorak will probably take a while.

I would say I am a budding rationalist, and I hope this site can really speed my education along. If that doesn't tell you enough about who I am, then I don't really know what else to say.

comment by [deleted] · 2010-12-24T00:59:25.401Z · score: 7 (7 votes) · LW · GW

Greetings, fellow thinkers! I'm a 19-year-old undergraduate student at Clemson University, majoring in mathematics (or, as Clemson (unjustifiably) calls it, Mathematical Sciences). I found this blog through Harry Potter and the Methods of Rationality about three weeks ago, and I spent those three weeks doing little else in my spare time but reading the Sequences (which I've now finished).

My parents emigrated from the Soviet Union (my father is from Kiev, my mother from Moscow) just months before my birth. They spoke very little English upon their arrival, so they only spoke Russian to me at home, and I picked up English in kindergarten; I consider both to be my native languages, but I'm somewhat more comfortable expressing myself in English. I studied French in high school, and consider myself "conversant", but definitely not fluent, although I intend to study abroad in a Francophone country and become fluent. This last semester I started studying Japanese, and I intend to become fluent in that as well.

My family is Jewish, but none of my relatives practice Judaism. My mother identifies herself as an agnostic, but is strongly opposed to the Abrahamic religions and their conception of God. My father identifies as an atheist. I have never believed in Santa Claus or God, and was very confused as a child about how other people could be so obviously wrong and not notice it. I've never been inclined towards mysticism, and I remember espousing Physicalist Reductionism (although I did not know those words) at an early age, maybe when I was around 9 years old.

I've always been very concerned with being rational, and especially with understanding and improving myself. I think I missed out on a lot of what Americans consider to be classic sci-fi (I didn't see Star Wars until I got to college, for example), but I grew up with a lot of good Russian sci-fi and Orson Scott Card.

I used to be quite a cynical misanthrope, but over the past few years I've grown to be much more open and friendly and optimistic. However, I've been an egoist for as long as I can remember, and I see no reason why this might change in the foreseeable future (this seems to be my primary point of departure from agreement with Eliezer). I sometimes go out of my way to help people (strangers as much as friends) because I enjoy helping people, but I have no illusions about whose benefit my actions are for.

I'm very glad to have found a place where smart people who like to think about things can interact and share their knowledge!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-24T01:31:06.085Z · score: 10 (12 votes) · LW · GW

I've been an egoist for as long as I can remember

No offense intended, but: If you could take a pill that would prevent all pain from your conscience, and it could be absolutely guaranteed that no one would ever find out, how many twelve-year-olds would you kill for a dollar?

(Perhaps you meant to say that you were mostly egoist, or that your deliberatively espoused moral principles were egoistic?)

PS: Welcome to Less Wrong!

comment by [deleted] · 2010-12-25T06:04:06.664Z · score: 8 (8 votes) · LW · GW

Eliezer, I've been thinking about this a lot. When I backed up and asked myself whether, not why, I realized that

1) I'm no longer sure what "I am an egoist" means, especially given how far my understanding of ethics has come since I decided that, and

2) I derive fuzzies from repeating that back to myself, which strikes me as a warning sign that I'm covering up my own confusion.

comment by [deleted] · 2010-12-24T06:38:57.436Z · score: 3 (3 votes) · LW · GW

Eliezer, please don't think you can offend me by disagreeing with me or questioning my opinions - every disagreement (between rational people) is another precious opportunity for someone (hopefully me!) to get closer to Truth; if the person correcting me is someone I believe with high probability to be smarter than me, or to have thought through the issue at hand better than I have (and you fit those criteria!), this only raises the probability that it is I who stand to benefit from the disagreement.

I'm not certain this is a very good answer to your question, but 1) I would not take such a pill, because I enjoy empathy and don't think pain is always bad, 2) peoples' deaths negatively affect many people (both through the ontologically positive grief incurred by the loss and through the ontologically negative utility they would have produced), and that negative effect is very likely to make its way to me through the Web of human interaction, especially if the deceased are young and have not yet had much of a chance to spread utility through the Web, and 3) I would have to be quite efficient at killing 12-year-olds for it to be worth my time to do it for a dollar each (although of course this is tangential to your question, since the amount "a dollar" was arbitrary).

I should also point out that I have a strongly negative psychological reaction to violence. For example, I find the thought of playing a first-person shooting game repugnant, because even pretending to shoot people makes me feel terrible. I just don't know what there is out there worse than human beings deliberately doing physical harm to one another. As a child, I felt little empathy for my fellow humans, but at some point, it was as if I was treated with Ludovico's Technique (à la A Clockwork Orange)... maybe some key mirror neurons in my prefrontal cortex just needed time to develop.

Thank you for taking time to make me think about this!

comment by jimrandomh · 2010-12-24T16:38:32.720Z · score: 3 (3 votes) · LW · GW

If your moral code penalizes things that make you feel bad, and doing X would make you feel bad, then is it fair to say that not doing X is part of your moral code?

I think the point Eliezer was getting at is that human morality is very complex, and statements like "I'm an egoist" sweep a lot of that under the rug. And to continue his example: what if the pill not only prevented all pain from your conscience, but also gave you enjoyment (in the form of serotonin or whatever) at least as good as what you get from empathy?

comment by [deleted] · 2010-12-25T06:04:42.879Z · score: 3 (3 votes) · LW · GW

You're right, human morality is more complex than I thought it was when "I am an egoist" seemed like a reasonable assertion, and all the fuzzies I got from "resolving" the question of ethics prevented me from properly updating my beliefs about my own ethical disposition.

comment by [deleted] · 2011-12-20T17:02:43.508Z · score: 0 (0 votes) · LW · GW

"I'm an egoist" sweep a lot of that under the rug.

Statements like "I'm an altruist" do too. They are, however, less likely to be challenged.

comment by wedrifid · 2010-12-24T02:33:07.315Z · score: 2 (2 votes) · LW · GW

No offense intended, but: If you could take a pill that would prevent all pain from your conscience, and it could be absolutely guaranteed that no one would ever find out, how many twelve-year-olds would you kill for a dollar?

How much do bullets cost again? :P

comment by [deleted] · 2011-07-27T20:41:09.472Z · score: 1 (1 votes) · LW · GW

As few as possible to earn the dollar! Maxing out at however many I could do in approximately 10 seconds.

comment by TobyBartels · 2010-12-24T02:12:53.197Z · score: 2 (2 votes) · LW · GW

majoring in mathematics (or, as Clemson (unjustifiably) calls it, Mathematical Sciences)

If you mean that mathematics is not a natural science, then I agree with you. But ‘science’ has an earlier, broader meaning that applies to any field of knowledge, so mathematical science is simply the systematic study of mathematics. (I don't know why they put it in the plural, but that's sort of traditional.)

Compare definitions 2 and 4 at dictionary.com.

comment by [deleted] · 2010-12-24T06:19:46.954Z · score: 6 (6 votes) · LW · GW

You're right! I've been so caught up (for years now) with explaining to people that mathematics was not a science because it was not empirical (although, as I've since learned from Eliezer, "pure thought" is still a physical process that we must observe in order to learn anything from it), that I've totally failed to actually think about the issue.

There goes another cached thought from my brain; good riddance, and thanks for the correction!

comment by TobyBartels · 2010-12-26T07:06:54.729Z · score: 0 (0 votes) · LW · GW

You're welcome!

comment by ata · 2010-12-24T01:04:33.975Z · score: 1 (1 votes) · LW · GW

Welcome!

I spent those three weeks doing little else in my spare time but reading the Sequences (which I've now finished).

Impressive. I've been here for over a year and I still haven't finished all of them.

However, I've been an egoist for as long as I can remember, and I see no reason why this might change in the foreseeable future (this seems to be my primary point of departure from agreement with Eliezer). I sometimes go out of my way to help people (strangers as much as friends) because I enjoy helping people, but I have no illusions about whose benefit my actions are for.

I'm curious — if someone invented a pill that exactly simulated the feeling of helping people, would you switch to taking that pill instead of actually helping people?

comment by [deleted] · 2010-12-24T06:46:04.340Z · score: 2 (2 votes) · LW · GW

Impressive. I've been here for over a year and I still haven't finished all of them.

Thanks! My friends thought I was crazy (well, they probably already did and still do), but once I firmly decided to get through the Sequences, I really almost didn't do anything else while I wasn't either in class, taking an exam, or taking care of biological needs like food (having a body is such a liability!).

I'm curious — if someone invented a pill that exactly simulated the feeling of helping people, would you switch to taking that pill instead of actually helping people?

No, because helping people has real effects that benefit everyone. There's a reason I'm more inclined to help my friends than strangers - I can count on them to help me in return (this is still true of strangers, but less directly - people who live in a society of helpful people are more likely to be helpful!). This is especially true of friends who know more about certain things than I do - many of my friends are constantly teaching each other (and me) the things they know best, and we all know a lot more as a result... but it won't work if I decide I don't want to teach anyone anything.

comment by [deleted] · 2011-12-20T17:07:17.190Z · score: 1 (1 votes) · LW · GW

I think there are few humans who don't genuinely care more about themselves, their friends, and their family than about people in general.

Personally I find the idea that I should prefer the death of, say, my own little sister to those of two or three or four random little girls absurd. I suspect even when it comes to one's own life people are hopelessly muddled about what they really want, and their answers don't correlate too well with actions. A better way to estimate what a person is likely to do is to ask them what fraction of people would sacrifice their lives to save the lives of N (small positive integer) other random people.

comment by MixedNuts · 2011-12-20T17:24:34.829Z · score: 3 (3 votes) · LW · GW

It's even more complicated than that. If I see a few strangers in immediate, unambiguous danger, I'm pretty sure I will die to save them. But I will not spend all that much on donating to a charity that will save these same people, twenty years later and two thousand miles away. (...what was that about altruistic ideals being Far?)

comment by [deleted] · 2011-12-20T20:46:01.402Z · score: 0 (0 votes) · LW · GW

Excellent point.

comment by shokwave · 2010-12-24T09:36:38.468Z · score: 0 (0 votes) · LW · GW

However, I've been an egoist for as long as I can remember,

I'm not entirely sure what this position entails. Wikipedia sent me to 'egotist' and here. I am curious because it seems like quite a statement to use a term so similar to an epithet to describe one's own philosophy.

comment by [deleted] · 2010-12-24T10:30:31.483Z · score: 1 (1 votes) · LW · GW

The distinction between egoism and egotism is an oft-mixed-up one. An egotist is simply someone who is overly concerned with themselves; egoism is a somewhat more precise term, referring to a system of ethics (and there are many) in which the intended beneficiary of an action "ought" (a word that Eliezer did much to demystify for me) to be the actor.

The most famous egoist system of ethics is probably Ayn Rand's Objectivism, of which I am by no means a follower, although I've read all of her non-fiction.

comment by arundelo · 2010-12-24T16:29:10.637Z · score: 0 (0 votes) · LW · GW

See the article on ethical egoism.

comment by RedRobot · 2010-11-24T18:32:56.196Z · score: 7 (7 votes) · LW · GW

Hello!

I work in a semi-technical offshoot of (ducks!) online marketing. I've always had rationalist tendencies, and reading the material on this website has had a "coming home" feeling for me. I appreciate the high level of discourse and the low levels of status-seeking behaviors.

I am female, and I read with interest the discussion on gender, but unfortunately I do not think I can contribute much to that topic, because I have been told repeatedly that I am "not like other women." I certainly don't think it would be a good idea to generalize from my example what other women think or feel (although to be honest the same could be said about my ability to represent the general populace).

I found my way here through the Harry Potter story, which a friend sent to me knowing that I would appreciate the themes. I am enjoying it tremendously.

comment by JJ10DMAN · 2010-10-15T13:25:48.547Z · score: 7 (7 votes) · LW · GW

I originally wrote this for the origin story thread until I realized it's more appropriate here. So, sorry if it straddles both a bit.

I am, as nearly as I believe can be seen in the present world, an intrinsic rationalist. For example: as a young child I would mock irrationality in my parents, and on the rare occasions I was struck, I would laugh, genuinely, even through tears if they came, because the irrationality of the Appeal to Force made the joke immensely funnier. Most people start out as well-adapted non-rationalists; I evidently started as a maladaptive rationalist.

As an intrinsic (maladaptive) rationalist, I have had an extremely bumpy ride in understanding my fellow man. If I had been born 10 years later, I might have been diagnosed with Asperger's Syndrome. As it was, I was a little different, and never really got on with anyone, despite being well-mannered. A nerd, in other words. Regarding bias, empathic favoritism, willful ignorance, asking questions when no response will affect subsequent actions or belief confidences, and other peculiarities for which I seem to be an outlier: any knowledge about how to identify and then deal with these peculiarities has been extremely hard-won, from years upon years of messy interactions in uncontrolled environments, with few hypotheses from others to go on (after all, they "just get it", so they never needed to sort it out explicitly).

I've recently started reading rationalist blogs like this one, and they have been hugely informative to me because they put things I have observed about people but failed to understand intuitively into a very abstract context (i.e. one that bypasses intuition). Less Wrong, among others, have led to a concrete improvement in my interactions with humanity in general, the same way a blog about dogs would improve one's interactions with dogs in general. This is after just a couple months! Thanks LW.

comment by HughRistik · 2010-10-15T17:54:01.362Z · score: 2 (2 votes) · LW · GW

Less Wrong, among others, have led to a concrete improvement in my interactions with humanity in general, the same way a blog about dogs would improve one's interactions with dogs in general.

That's really cool. I'd be curious to know some examples of some ideas you've read here that you found useful.

comment by JJ10DMAN · 2011-02-17T19:40:59.540Z · score: 3 (3 votes) · LW · GW

Rationalist blogs cite a lot of biases and curious sociological behaviors which have plagued me because I tend to optimistically accept what people say at face value. In explaining them in rationalist terms, LW and similar blogs essentially explain them to my mode of thinking specifically. I'm now much better at picking up on unwritten rules, at avoiding punishment or ostracism for performing too well, at identifying when someone is lying politely but absolutely expects me to recognize it as a complete lie, etc., thanks to my reading into these psychological phenomena.

Additionally, explanations of how people confuse "the map" to be "the territory" have been very helpful in determining when correcting someone is going to be a waste of time. If they were sloppy and mis-read their map, I should step in; if their conclusion is the result of deliberately interpreting a map feature (flatness, folding) as a territory feature, unless I know the person to be deeply rational, I should probably avoid starting a 15-minute argument that won't convince them of anything.

comment by Skepxian · 2010-07-26T15:44:45.965Z · score: 7 (9 votes) · LW · GW

Greetings, all. Found this site not too long ago, been reading through it in delight. It has truly energized my brain. I've been trying to codify and denote a number of values that I held true to my life and to discussion and to reason and logic, but was having the most difficult time. I was convinced I'd found a wonderful place that could help me when it provided me a link to the Twelve Virtues of Rationality, which neatly and tidily listed out a number of things I'd been striving to enumerate.

My origins in rationality basically originated at a very, very young age, when the things adults said and did didn't make sense. Some of it did, as a matter of fact, make more sense once I'd gotten older - but they could have at least tried to explain it to me - and I found that their successes too often seemed more like luck than having anything to do with their reasons for doing things. I suppose I became a rationalist out of frustration, one could say, at the sheer irrationality of the world around me.

I'm a Christian, and have applied my understanding of Rationality to Christianity. I find it holds up strongly, but am not insulted that not everyone feels that way. This site may be slanted atheist, but I find that rationalists have more in common with each other no matter their religious beliefs than a rationalist atheist has with a dogmatic atheist, or a rationalist Christian has with a dogmatic Christian, generally speaking.

I welcome discussion, dialog, and spirited debate, as long as you listen to me and I listen to you. I have a literal way of speaking, and don't tend to indulge in those lingual niceties that are technically untrue, which so many people hold strongly to. My belief is that if you don't want to discuss something, don't bring it up. So if I bring something up, I'd better darn well be able to discuss it. My belief is also that I should not strongly hold an opinion if I cannot strongly argue against my opinion, so I value any and all strong arguments against any opinion I hold.

I look forward to meeting many of you!

comment by RobinZ · 2010-07-26T16:05:33.399Z · score: 0 (2 votes) · LW · GW

Welcome! I imagine a number of us would be quite happy to argue the rectitude of Christianity with you whenever you are interested, but no big rush.

A while ago someone posted a question about introductory posts if you want a selection of reading material which doesn't require too much Less Wrong background. And yes, I posted many of those links. Hey, I'm enthusiastic!

comment by Skepxian · 2010-07-26T17:07:12.798Z · score: 3 (5 votes) · LW · GW

Thank you very much!

A small element of my own personal quirks (which, alas, I keep screwing up) is to avoid using the words 'argue' and 'debate'. Arguing is like trying to 'already be right', and debate is a test of social ability, not the rightness of your side. I like to discuss - one of the greatest feelings is when I suddenly get that sensation of "OH! I've been wrong, but that makes SO MUCH MORE SENSE!" And one of the scariest feelings is "What? You're changing your mind to agree with me? But what if I'm wrong and I just argued it better?"

I'm not really looking to try to convince anyone of Christianity's less-wrongedness, but it seems to be a topic that pops up with a decent frequency. (Though admittedly I've not read enough pages to really get a good statistical assessment yet.) Since it was directly mentioned in "Welcome to Less Wrong," I figured I'd make my obvious biases a bit of public knowledge. :) But I always do enjoy theological discussion, when it comes my way.

I look forward to discussing with you soon. :) I'm taking my time getting through the Sequences, at the moment, but I'll keep an eye on those introductory posts as well.

comment by Apprentice · 2010-07-26T17:37:15.857Z · score: 5 (11 votes) · LW · GW

Christian or atheist - in the end we all believe in infinite torture forever. Welcome!

comment by WrongBot · 2010-07-26T18:05:13.526Z · score: 1 (1 votes) · LW · GW

I think you're leaving out a substantial number of people who don't believe in infinite anything.

comment by Apprentice · 2010-07-26T18:35:37.327Z · score: 3 (3 votes) · LW · GW

This was an attempt at humor. Usually when people start sentences with "Whatever religion we adhere to..." they are going to utter a platitude ending with "...we all believe in love/life/goodness". The intended joke was to come about through a subversion of the audience's expectation. It was also meant to poke fun at all the torture discussions here lately, though perhaps that's already been done to death.

comment by orthonormal · 2010-07-26T18:42:44.581Z · score: 1 (1 votes) · LW · GW

Creative idea, poor execution. You'd have to combine it with several other such platitude parodies before other people would interpret your joke correctly.

comment by khafra · 2010-07-26T19:46:27.931Z · score: 3 (3 votes) · LW · GW

It might not work in another month or two, but the idea of "contrived infinite-torture scenarios" has high salience for LW readers right now. I got the joke immediately.

comment by Skepxian · 2010-07-26T19:25:16.303Z · score: 1 (3 votes) · LW · GW

Just because you didn't get the joke doesn't mean he did it wrong. I got the joke, and he was saying it to me, so I believe the joke was performed correctly, given his target audience! ^_^

The problem, I'd say, would be an assumption of shared prior experience - but humor in general tends to make that assumption, whether it's puns which assume a shared experience with lingual quirks, friend in-jokes which are directly about shared experiences, or genre humor which assumes a shared experience in that genre. This was genre humor.

While transparent communication is wonderful for rational discussion, I would conjecture that humor is inherently about the irrational links our minds make between disparate information with similar qualities.

comment by WrongBot · 2010-07-26T19:56:15.183Z · score: 0 (0 votes) · LW · GW

I got the joke, but I guess I just didn't think it was funny. That may be because I've been pretty annoyed with all the infinite torture discussions that have been going on; I think the idea is laughably implausible, and don't understand the compulsion people seem to have to keep talking about it, even after being informed that they are causing other people horrible nightmares by doing so.

comment by Skepxian · 2010-07-26T19:31:01.738Z · score: -2 (2 votes) · LW · GW

I think everyone believes in infinite something, even if it's infinite nothingness, or infinite cosmic foam, but I understand your meaning. ^_^

comment by WrongBot · 2010-07-26T19:52:28.037Z · score: 0 (2 votes) · LW · GW

I don't. I believe that there are things that can only be described in terms of stupendously huge numbers, but I believe that everything that exists can be described without reference to infinities.

Really, when I think about how incomprehensibly enormous a number like BusyBeaver(3^^^3) is, I have trouble believing that there is some physical aspect of the universe that could need anything bigger. And if there is, well, there's always BusyBeaver(3^^^^3) waiting in the wings.

Eliezer calls this infinite-set atheism, which is as good a name as any, I suppose.

comment by Sniffnoy · 2010-07-27T20:46:58.058Z · score: 1 (1 votes) · LW · GW

See also: Finitism

comment by Vladimir_Nesov · 2010-07-26T20:12:42.607Z · score: 1 (1 votes) · LW · GW

Concepts don't have to be about "reality", whatever that is (not a mathematically defined concept for sure).

comment by WrongBot · 2010-07-26T20:25:27.532Z · score: 0 (0 votes) · LW · GW

Infinities exist as concepts, yes. They're even useful in math. But I have never encountered anything that exists (for any reasonable definition of "exists") that can't be described without an infinity. MWI describes a preposterously large but still finite multiverse, as far as I understand it. And if our physical universe is infinite, as some have supposed, I haven't seen proof of it.

Really, like any other form of atheism, infinite-set atheism should be easy to dispel. All anyone has to do to change my mind is show me an infinite set.

comment by Vladimir_Nesov · 2010-07-26T20:29:05.611Z · score: 1 (1 votes) · LW · GW

Unfortunately, observations don't have epistemic power, so we'd have to live with all possible concepts. Besides, it's quite likely that reality doesn't in fact contain any infinities, in which case it's not possible to show you an infinity, and you are just demanding particular proof. :-)

comment by Skepxian · 2010-07-26T20:38:01.243Z · score: 0 (2 votes) · LW · GW

Wait... he's already saying he believes reality doesn't contain any infinities...

And you say that you can't show proof to the contrary because it's likely reality doesn't contain any infinities...

I don't think I followed you there.

comment by Vladimir_Nesov · 2010-07-26T20:50:33.347Z · score: 0 (2 votes) · LW · GW

I distinguish between "believing in X" and "believing reality contains X". I grew to dislike the non-mathematical concept of reality lately. Decision theory shouldn't depend on that.

comment by WrongBot · 2010-07-26T21:00:58.091Z · score: 1 (1 votes) · LW · GW

My disbelief in infinities extends only to reality; I make no claims about the question of their existence elsewhere.

comment by dclayh · 2010-07-26T20:28:41.604Z · score: 1 (1 votes) · LW · GW

All anyone has to do to change my mind is show me an infinite set.

Considering your brain is finite, I don't think you're entitled to that particular proof.

(Perhaps you're just saying it would be a sufficient but not a necessary proof, in which case...okay, I guess.)

comment by WrongBot · 2010-07-26T20:58:42.326Z · score: 0 (0 votes) · LW · GW

That's not the only proof I'd accept, but given that I do accept conceptual infinities, I don't think my brain is necessarily the limiting factor here.

Another form of acceptable evidence would be some mathematical proof that begins with the laws of physics and demonstrates that reality contains an infinity. I'm not sure if a similar proof that demonstrates that reality could contain an infinity would be as convincing, but it would certainly sway me quite a bit.

comment by Skepxian · 2010-07-26T20:25:08.542Z · score: 0 (0 votes) · LW · GW

I'm not sure I understand. Part of it is the use of BusyBeaver - I'm familiar with Busy Beaver as an AI state machine, not as a number. Second: So you say you do not believe in infinity ... but only inasmuch as physical infinity? So you believe in conceptual infinity?

comment by WrongBot · 2010-07-26T20:35:16.166Z · score: 0 (0 votes) · LW · GW

The BusyBeaver value I'm referring to is the maximum number of steps that the Busy Beaver Turing Machine with n states (and, for convenience, 2 symbols) will take before halting. So (via Wikipedia), BB(1) = 1, BB(2) = 6, BB(3) = 21, BB(4) = 107, BB(5) >= 47,176,870, BB(6) >= 3.8 × 10^21132, and so on. It grows the fastest of all possible complexity classes.
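As a toy illustration of that definition: for very small n the value can actually be checked by brute force, since there are only finitely many machines. The sketch below enumerates every 2-state, 2-symbol Turing machine (assuming the usual convention that the transition into the halt state counts as a step) and finds the longest halting run. This only works for tiny n; the function is uncomputable in general.

```python
from itertools import product

HALT = -1  # sentinel "next state" meaning the machine halts

def run(machine, limit):
    """Simulate a 2-symbol Turing machine on an initially blank tape.
    machine[(state, symbol)] = (symbol_to_write, head_move, next_state).
    Returns the step count if it halts within `limit` steps, else None."""
    tape, head, state, steps = {}, 0, 0, 0
    while steps < limit:
        write, move, nxt = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
        if nxt == HALT:
            return steps  # the halting transition counts as a step
        state = nxt
    return None  # did not halt within the limit; treat as non-halting

n = 2  # number of states
keys = [(s, b) for s in range(n) for b in (0, 1)]
# Each table entry: write 0 or 1, move left or right, go to a state or HALT.
actions = [(w, m, t) for w in (0, 1) for m in (-1, 1)
           for t in list(range(n)) + [HALT]]

best = 0
for choice in product(actions, repeat=len(keys)):
    steps = run(dict(zip(keys, choice)), limit=50)
    if steps is not None:
        best = max(best, steps)

print(best)  # 6 -- matching BB(2) = 6 above
```

The step limit of 50 is safe here only because the true maximum (6) is known to be far below it; for larger n, deciding which machines never halt is exactly what makes the function uncomputable.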

comment by Sniffnoy · 2010-07-27T20:48:31.821Z · score: 1 (1 votes) · LW · GW

OK, have to make technical corrections here. Busy Beaver is not a complexity class, complexity classes do not grow. Busy Beaver function grows faster than any computable function, but I doubt it's the "fastest" at anything, seeing as you can always just take e^BB(n), e.g.

comment by WrongBot · 2010-07-27T21:15:03.898Z · score: 0 (0 votes) · LW · GW

Ugh, thank you. I seem to have gotten complexity classes and algorithmic complexity mixed up. Busy Beaver's algorithmic complexity grows asymptotically faster than any computable function, so far as considerations like Big-O notation are concerned. In those sorts of cases, I think that even for functions like e^BB(n), the BB(n) part dominates. Or so Wikipedia tells me.

ETA: cousin_it has pointed out that there are uncomputable functions which dominate Busy Beaver.

comment by Sniffnoy · 2010-07-27T21:21:14.932Z · score: 0 (0 votes) · LW · GW

Sure, but my point is it's not the "fastest" of anything unless you want to start defining some very broad equivalences...

comment by cousin_it · 2010-07-27T20:59:58.875Z · score: 0 (0 votes) · LW · GW

As Eliezer pointed out on HN, there is a way to define numbers that dominate BB values as decisively as BB dominates the Ackermann function, but you actually need some math knowledge to make the next step, not just stack BB(BB(...)) or something. (To be more precise, once you make the step, you can beat any person who's "creatively" using BB's and oracles but doesn't know how to make the same step.) And after that quantum leap, you can make another quantum leap that requires you to understand another non-trivial bit of math, but after that leap he doesn't know what to do next, and I, being a poor shmuck, don't know either. If you want to work out for yourself what the steps are, don't click the link.

comment by Skepxian · 2010-07-26T20:36:17.696Z · score: 0 (0 votes) · LW · GW

Ah, excellent, so I'm not so far off. Then what's 3^^^3, then?

comment by Vladimir_Nesov · 2010-07-26T20:47:09.337Z · score: 1 (1 votes) · LW · GW

3^^^^3 on Less Wrong wiki.
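For concreteness, Knuth's up-arrow notation (which 3^^^3 uses) can be sketched in a few recursive lines. This is a toy definition for illustration; only the very smallest values are actually computable.

```python
def up(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b.
    One arrow is ordinary exponentiation; each extra arrow
    iterates the operation one level below."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# 3^^^3 = up(3, 3, 3) is already a power tower of about
# 7.6 trillion threes -- hopelessly beyond evaluation.
```

This is why the notation gets big so fast: each additional arrow replaces repetition of the previous operation with repetition of the whole recursion.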

comment by Skepxian · 2010-07-26T20:54:28.291Z · score: 0 (2 votes) · LW · GW

Oh, good golly gosh, that gets big fast. Thank you!

comment by Skepxian · 2010-07-26T18:56:30.605Z · score: 0 (0 votes) · LW · GW

( chuckles warmly ) don't worry - I got the joke. ^_^ Although I'm rather in a minority in my view of Hell as something other than torture, but hey, there's plenty of time for that! Thank you for the welcome!

comment by ata · 2010-07-26T18:01:18.939Z · score: 2 (2 votes) · LW · GW

A small element of my own personal quirks (which, alas, I keep screwing up) is to avoid using the words 'argue' and 'debate'. Arguing is like trying to 'already be right', and Debate is a test of social ability, not the rightness of your side. I like to discuss - some of the greatest feelings is when I suddenly get that sensation of "OH! I've been wrong, but that makes SO MUCH MORE SENSE!" And some of the scariest feelings are "What? You're changing your mind to agree with me? But what if I'm wrong and I just argued it better?"

Good attitude. I'm much the same, both in enjoying learning new things even when it means relinquishing a previously held belief, and in feeling slightly guilty when I cause someone to change their mind. :) LW has actually helped me get over the latter, because now that I understand rationality much better, I'm accordingly more confident that I'm doing things correctly in debates.

I'm glad you mentioned your Christianity and your specific belief that it is rationally justified — I'll be curious to see how it holds up after you've read the sequences Mysterious Answers to Mysterious Questions, How to Actually Change Your Mind, and Reductionism. (I hope you'll be considering that issue with that same curious, unattached mindset — if Christianity were false, would you really, honestly, sincerely want to know?) If I may ask, what specific beliefs do you consider part of your Christianity? The Holy Trinity? The miracles described in the NT? Jesus's life as described by the Gospels? The moral teachings in the OT? Creationism? Biblical literalism? Prayer as a powerful force? Heaven and Hell? Angels, demons, and the Devil as actual beings? Salvation through faith or works? The prophecies of the Revelation?

comment by EStokes · 2010-07-26T22:19:00.043Z · score: 4 (4 votes) · LW · GW

Not in response to anyone, but to this thread/topic.

Is this really something that should be on LessWrong? LessWrong is more about debate on new territory and rationality and such, not going over well-trodden territory. There are many other places on the internet for debate on religion, but there's only one LW. Perhaps /r/atheism (maybe being careful to say that you're honestly looking to challenge your beliefs and not test your faith).

Unless there are new points that haven't been heard before, or people are genuinely interested in this specific debate.

Just not sure this is the right place, and want to hear other people's opinions on this.

comment by Skepxian · 2010-07-27T00:16:59.465Z · score: 1 (3 votes) · LW · GW

Well, thus far, I've mainly seen, "Welcome to LessWrong ... let's poke at the new guy and see what he's thinking!" I don't think we're getting into any real serious philosophy, yet. It's all been fairly light stuff. I've been trying to self-moderate my responses to be polite and answer people, but not get too involved in a huge discussion, because I agree, this wouldn't be the right place. But so far, it's seemed just some curiosity being satisfied about me, specifically, and my theology - not theology as a whole. As such, it certainly seems to belong in a 'Meet the new guys' thread.

Additionally, I'm personally not here to challenge my beliefs or test my faith, though I certainly won't turn it down as it happens. Given the lean of belief in the place, I expect it to happen. My main draw, however, isn't theological but instead in the realm of discovering a knowledge base and discussion area based around rationality, containing elements of discussion which have already done the work I've been running in circles in my head because I've been lacking someone to talk about them with!

comment by Bongo · 2010-07-26T22:45:22.060Z · score: 1 (1 votes) · LW · GW

Let's just not take the discussion outside this subthread.

comment by RobinZ · 2010-07-27T03:18:19.598Z · score: 0 (0 votes) · LW · GW

I should apologize for having kicked off the topic - I had some vague ideas of someday getting on the IRC channel and bouncing thoughts back and forth, and didn't realize that it would inevitably become a conversation in the thread here if I mentioned it.

comment by Nick_Tarleton · 2010-07-26T22:25:45.188Z · score: 0 (0 votes) · LW · GW

In general I'd agree, but a theological argument in which all parties refer to the Sequences seems like a worthwhile novelty.

comment by Skepxian · 2010-07-26T19:15:21.332Z · score: 0 (4 votes) · LW · GW

I'm partway through Mysterious Answers to Mysterious Questions, and it's very, very interesting. Much better fodder than I usually see from people misusing those concepts. It's refreshing to see points made in context to their original meaning, and intelligently applied! I'm giving myself some time to let my thoughts simmer, before making a few comments on a couple of them.

I want to know what's true. Even if Christianity wasn't true, I've already found a great deal of Truth in its teachings for how to live life. The Bible, I feel, encourages a rational mindset, as much as many might think otherwise - to not use one's intellect to examine one's religion would be to reject many of Jesus' teachings. Most specifically, it would reject Jesus' parable about taking a treasure (reason) that the Master (God) has given his servant (Man), and burying it in the ground instead of using it to create more treasure (knowledge). It also can be seen in the way that the only people Jesus really gets angry at throughout the entire bible, and cries out against, are members of the 'true religion' (Pharisees) who abused and misused the tenets of their religion to push their own preconceptions.

The Holy Trinity: yes. The miracles: yes. Jesus' life: yes. Moral teachings: yes.

Creationism: Not supported by the bible, nor by a thorough examination of the Ancient Hebrew culture, where the '6 days' were considered a metaphor for time too vast to be comprehended by the mortal mind. The Genesis sequence contains quite a few inherent metaphoric parallels with our scientific understanding of how the world was created, too.

Biblical literalism: sorta. I believe the bible was divinely inspired but I believe that man's language is completely unable to manifest any sort of 'perfect understanding' as the language itself is imperfect. Even, theoretically speaking, were the bible able to present a perfect language, man is imperfectly able to understand it. So on a technical level, yes, I believe in biblical literalism (except where scholarly study, historical cultural examination, and the bible itself tells us it's not literal), but in practice, I treat it a lot more loosely in recognition of man's inherent bias.

Prayer as a powerful force: Yes, but not like a wish-granting genie. Really, the power of prayer is more a power of inspiration and internal moral / emotional strength, an effect which could be explained by a placebo effect. Studies also show that prayer does have a powerful healing effect - but only if the subject knows that they are being prayed for. But medically speaking, attitude is a strong component of healing, not simply biochemical response to stimuli - so it might be internal strength, might be a placebo effect. As an attitude towards the world around one, I see 'answers' to prayers quite a bit, but not so much that I can rule out coincidence and a Rorschach-like effect upon the world around me.

Heaven and Hell, angels, demons, faith or works: I believe Heaven is where beings go in order to serve and follow the rules (which are there for our benefit, not just arbitrary). I believe that when beings of free will expressed a desire to do things their own way, not according to the rules, God created a place we call "Hell" which is where people who wish total freedom from the rules can go to do things their way without hurting the people who are following the rules. Not a punishment at all. As such, the "Salvation" question becomes rather a bit more complex as neither faith nor works is an appropriate descriptor. I'm looking into some theological scholarly writings at the moment which recently were brought to my attention which goes into more detail on this concept.

Prophecies, finally, tend to be awfully confusing till after they've happened, so till I see fire in the skies over Israel or an earthquake that shakes the whole world at once, I'm really not paying too much attention to them. The prophecies of the OT seem to have held up pretty well, though.

comment by orthonormal · 2010-07-27T00:43:39.558Z · score: 4 (6 votes) · LW · GW

I want to know what's true. Even if Christianity wasn't true, I've already found a great deal of Truth in its teachings for how to live life. The Bible, I feel, encourages a rational mindset, as much as many might think otherwise - to not use one's intellect to examine one's religion would be to reject many of Jesus' teachings.

Having been religious (in particular, a very traditionalist Catholic, more so than my parents by far)† for a good chunk of my life before averting to atheism a few years ago (as an adult), I would have agreed with you, but a bit uneasily. And now, I can't help but point out a distinction.

When you point to the Bible for moral light, you're really pointing to a relatively small fraction of the total text, and much of that has been given new interpretations†† that the original apostles didn't use.

Let's give an example: to pick a passage that's less emotionally charged and less often bruited about in this connection, let's consider the story of Mary and Martha in Luke 10:38-42. People twist this every which way to make it sound more fair to Martha, when the simplest reading is just that Luke thought that the one best thing you could do with your life was to be an apostle, and wrote the episode in a way that showed this. Luke wasn't thinking about how the story should be interpreted within a large society where the majority are Christians going about daily business like Martha, because he expected the end times to come too soon for that society to be realized on Earth. He really, genuinely, wanted the reader to conclude that they should forget living like Martha††† if they possibly could, and imitate Mary instead.

Now, when faced with a passage like this, what do you prefer? The simpler interpretation which doesn't seem to help you as moral guidance? Or a more convoluted one which meshes with the way you think the truth should be lived in the world today? Which interpretation would you expect to find upheld in letters of the Church Fathers who lived before Rome converted? Which interpretation do you think was more likely for Luke?

And most importantly, if you're saying you're learning about moral truth from the Bible, but you're choosing your preferred interpretation of Scripture by aesthetic and moral criteria of the modern era, rather than criteria that are closer to the text and the history, why do you need the Scripture at all? Why not just state your aesthetic and moral principles and be done with it?

† Sorry for these distracting parentheticals, but I know the assumptions I'd have made had I read the unadorned account from someone else.

†† For one year at school, I took on the task of finding both Scripture readings and commentary from the Church Fathers to be read during a weekly prayer group. The latter task proved to be a lot harder than it seemed, because the actual content of typical passages from the Church Fathers is really foreign, and not in an inspiring way either. Augustine gets read today in schools as exemplar of Christian thought basically because he's the only Church Father of the Roman era who doesn't look completely insane on a straightforward reading of any full work.

††† There are places of honor in Luke and Acts for patrons who help the apostles, but they're rather clearly supporting roles, and less admirable than the miracle-working apostles themselves.

comment by Skepxian · 2010-07-27T00:58:50.434Z · score: 0 (2 votes) · LW · GW

Every time someone says, "The simplest reading..." about a passage, I really draw back cautiously. I see, usually, two types of people who say "There's only one way to read that passage," on any nonspecific passage. The first is "I know what it means and anyone who disagrees with me is wrong because I know the Will of God," and the second is "I know what it means and it's stupid and there is no God."

I'm not saying you're doing that - quite the opposite, you agree that there are many ways to approach the passage. The way Luke may have approached it, I couldn't say. I just see a story being presented, and Jesus rarely said anything in a straightforward manner. He always presented things in such a way that those listening to it had to really think about what he meant, and there are many ways to interpret it. Even Jesus, when pressed, usually meant many things by his stories. Admittedly, this wasn't a parable, this was an 'event that happened', but I think any of Jesus' responses still need to get considered carefully.

Second, we have the fact that you're talking about what Luke saw in it. I don't pretend the Apostles were perfect or didn't have their flaws. Every apostle, every prophet, was shown to be particularly flawed - unlike many other religions, the chosen of God in JudeoChristian belief were terribly flawed. There was a suicidally depressed prophet, there was the rash murderer, there were liars and thieves. The closest to a 'good' prophet was Joseph of the Coat of Many Colors, but even he had his moments of spite and anger.

I'm interested, but not dedicated, to what Luke thought of the situation. I'm much more interested in what Jesus did in the situation. Additionally, what about the context in which that scene appears? Jesus was constantly about service ... and that's what Martha was doing. He never admonished Martha ... he simply told her that Mary had made her choice, and it was better. He never said Martha should make the same choice, either.

It's worth noting that Mary was in a position that was traditionally denied women - but Jesus defended her right to be there, listening and learning from a teacher.

And I almost forgot the 'most importantly' part...

The strong lessons I learn from the bible ... wouldn't necessarily have occurred to me otherwise. Yes, I interpret them from my bias of modern life and mores ... but the bible presents me with things I wouldn't have thought to bring forward and consider. Methods of thinking I wouldn't have come up with on my own, or by talking with most others. This doesn't mean it's 'The True Faith', but it does make it a useful tool.

At any rate, we need to be careful not to go too much further. This is getting dangerously close to a theology discussion rather than a 'meet the new guy' discussion.

comment by orthonormal · 2010-07-27T01:28:19.157Z · score: 1 (1 votes) · LW · GW

Anyhow, I think it's illuminating to be aware of what criteria actually go into one's judgments of Biblical interpretations. Your particular examples will vary.

comment by Skepxian · 2010-07-27T02:56:54.073Z · score: 0 (0 votes) · LW · GW

Oh, I quite agree! Thank you very much for the time spent sharing your thoughts. ^_^

comment by mattnewport · 2010-07-26T20:06:52.570Z · score: 3 (3 votes) · LW · GW

Studies also show that prayer does have a powerful healing effect - but only if the subject knows that they are being prayed for.

Citations please. The only well-controlled study I know of found the opposite - subjects who knew they were being prayed for suffered more complications than those who did not.

comment by Skepxian · 2010-07-26T20:26:50.680Z · score: 0 (2 votes) · LW · GW

I actually found it several years ago through an atheist site which was using it as evidence that prayer had only a placebo effect, so I'm afraid I don't have a citation for you just at the moment. I'll see what I can do when I have time. My apologies.

comment by WrongBot · 2010-07-26T19:38:07.371Z · score: 2 (2 votes) · LW · GW

I believe the bible was divinely inspired

Why? This seems to be the foundation for all your justifications here, and it's an incredibly strong claim. What evidence supports it? Is there any (weaker, presumably) evidence that contradicts it? I'd suggest you take a look at the article on Privileging the Hypothesis, which is a pretty easy failure mode to fall into when the hypothesis in question was developed by someone else.

comment by Skepxian · 2010-07-26T20:08:39.576Z · score: 0 (0 votes) · LW · GW

A weighty question... At the moment, I'm not entirely able to give you the full response, I'm afraid, but I'll give you the best 'short answer' that I'm able to compile.

1: The universe seems slanted towards Entropy. This suggests a 'start'. Which suggests something to start the universe. This of course has a great many logical fallacies inherent in it, but it's one element.

2: Given a 'something to start the universe', we're left with hypothetical scientific/mathematical constructs or a deity-figure of some sort.

3: Assuming a deity figure (yes, privileging the Hypothesis - but given a small number of possibilities, we can hypothesize each in turn and then exhaustively test that element) we need to assume that either the deity figure doesn't care if we know about it, in which case it's pointless to search, or that it does care if we know about it, in which case there will be evidence. If it is pointless to search, then I see little difference between that and a hypothetical scientific/mathematical construct. Thus, we're still left with 'natural unknown force' or 'knowable deity figure'.

4: Assuming a deity figure with the OOMPH to make a universe, it'll probably be able to make certain it remains known. So it's probably one of the existing long-lasting and persistent belief systems.

5: ( magic happens ) Given a historical study of various long-lasting and persistent belief systems, I settled on Christianity as the most probable belief system, based on my knowledge of human behavior, the historical facts of the actions surrounding the era and life of Jesus such as the deaths of the Disciples, a study of the bible, and a basic irrational hunch. I found that lots of what I was brought up being taught about the bible and Christianity was wrong, but the Bible itself seemed much more stable.

6: Given certain historical elements, I was led to have to believe in certain Christian miracles I'm unable to explain. That, combined with the assumption that a deity-figure would want itself to be known, results in an active belief.

3: Assuming there is no deity-figure, or the deity-figure does not care to be known. In this case, the effort expended applying rational thought to religious institutions will not provide direct fruit for a proper religion.

4: If there is no deity figure, or the deity-figure does not care to be known, the most likely outcome of assumption #1 will likely have a serious flaw in it.

5: ( magic happens ) I searched out (and continue to search out) all the strongest "Christianity cannot be true" arguments I could (and can) find, and compare the anti-Christianity to the pro-Christianity arguments, and could not find a serious flaw. Several small flaws which are easily attributable to human error or lack of knowledge about a subject, but nothing showing a serious flaw in the underpinnings of the religion.

6: Additional side effect: the act of researching religions includes a researching and examination of comparable morality systems and social behavior, and how it affects the world around it. This provides sufficient benefit that even if there is no deity figure, or a deity figure does not care to be known, the act of searching is not wasted. Quite the contrary, I consider the ongoing study into religion, and into Christianity itself, to be time well spent - even if at some later date I discover that the religion does have the serious flaw that I have not yet found.

comment by WrongBot · 2010-07-26T21:39:21.159Z · score: 3 (3 votes) · LW · GW

1: The universe seems slanted towards Entropy. This suggests a 'start'. Which suggests something to start the universe. This of course has a great many logical fallacies inherent in it, but it's one element.

If this point is logically fallacious, why is it the foundation of your belief? Eliezer has addressed the topic, but that post focuses more on whether one should jump to the idea of God from the idea of a First Cause, which you do seem to have thought about. But why assume a First Cause at all?

On a slightly different tack, if Thor came down (Or is it up? My Norse mythology is a little rusty) from Valhalla, tossed some thunderbolts around, and otherwise provided various sorts of strong evidence to support his claim that he was the God of Thunder with all that that entails, would you then worship him? Or, to put it another way, is there some evidence that would make you change your mind?

(Apologies if I'm being too aggressive with my questions. You seem like good people, and I wouldn't want to drive you away.)

comment by Skepxian · 2010-07-27T00:12:20.753Z · score: 0 (2 votes) · LW · GW

Oh, no, not at all! I'm quite happy to have people interested in what I have to say, but I'm trying to keep my conversation suitable for the 'Welcome to Less Wrong' thread, and not have it get too big. ^_^

As far as 'If it's logically fallacious, why is it the foundation of your belief?'

Well, it's not the foundation of my belief, it's just a very strong element thereof. It would probably require several months of dedicated effort and perhaps 30,000 words to really hit the whole of my belief with any sort of holistic effort. However, why assume a First Cause? Well, because of entropy, we have to assume some sort of start for this iteration. Anything past that starts getting into extreme hypotheticals that only really 'make more sense than God' if it suits your pre-existing conditions. And no, I'm not saying God makes more sense outside of a bias - more that given a clean slate, "There might be laws of physics we can't detect because they don't function in a universe where they've already countered entropy to a new start state" and "Maybe there's a Deity figure that decided it wanted to start the universe" are about equal in my mind. And to be fair, 'deity figure' could be equivalent to 'Higher-level universe's programmer making a computer game.' Or this could all be a simulation, and none of it's actually real, or, or, or...

But the reason that I decide to accept this as a basic assumption is that, eventually, you have to assume that there is truth, and work off of the existing scientific knowledge instead of waiting for brand new world-shattering discoveries in the field of metaphysics. So I keep an interested eye on stuff like brane vibration or cosmic froth, but still assume that entropy happens, and the universe had an actual start.

If Thor came down throwing lightning bolts and such, claiming our worship, I'd be... well, admittedly, a little confused, and unsure. That's not exactly his MO from classic Norse mythology (which I love), and Norse mythology really didn't have the oomph of world creation that goes together with scientific evidence. I'd have to wonder if he wasn't a Nephilim or an alien playing tricks. (Hi, Stargate SG-1!)

However, I take your meaning. If some deity figure came down and said, "hey, here's proof," yeah, I'd have a LOT of re-evaluating to do. It'd depend a lot on circumstances, and what sort of evidence of the past, rather than just pure displays of power, the deity figure could present. What answers does it have to the tough questions? Does it match certain anti-christ elements from Revelations?

Alternatively, what sort of evidence would make me change my mind and become atheist?

I would love to be able to easily say, "Yeah, if this happened, I'd totally change my mind in an instant!" but I am aware that I'm only human, and certain beliefs have momentum in my mind. Negative circumstance certainly won't do it - I've long ago resolved the "Why does a good God allow bad things to happen?" element. Idiotic Christian fanboys won't do it - I've been developing a very careful attitude towards religion and politics in divorcing ideas from the proponents of ideas. And if I had an idea what that proof would be - I'd already be researching it. So I just keep kicking around looking for new stuff to research.

Thank you for the interest!

comment by WrongBot · 2010-07-27T00:49:45.059Z · score: 2 (2 votes) · LW · GW

Sounds like you've given this some serious thought and avoided all kinds of failure modes. While I disagree with you and think that there's probably an interesting discussion here, I agree that this probably isn't the place to get into it. Welcome to Less Wrong, and I hope you stick around.

comment by Skepxian · 2010-07-27T01:16:53.855Z · score: 0 (0 votes) · LW · GW

I've certainly tried, thank you very much. I think that might be the most satisfying reaction I could have hoped to receive. ^_^ I hope to stick around for a good long time, too... this site's rivaling "TV Tropes" for the ability to completely suck me in for hours at a time without me noticing it.

comment by byrnema · 2010-07-26T21:46:41.541Z · score: 1 (1 votes) · LW · GW

4: Assuming a deity figure with the OOMPH to make a universe, it'll probably be able to make certain it remains known. So it's probably one of the existing long-lasting and persistent belief systems.

I like this argument. If there was such a deity, it could make certain it is known (and rediscovered when forgotten). The deity could embed this information into the universe in any numbers of ways. These ways could be accessed by humans, but misinterpreted. Evidence for this is the world religions, which have many major beliefs in common, but differ in the details. Christianity, being somewhat mature as a religion and having developed concurrently with rational and scientific thought, could have a reliable interpretation in certain aspects.

comment by Skepxian · 2010-07-26T23:43:53.750Z · score: 0 (0 votes) · LW · GW

Thank you very much, I appreciate that.

However, I'm following from an assumption of a deity that wants to be known and moving forward. It certainly doesn't suffice to show that a deity figure does exist, because if we follow the assumption of a deity that doesn't want to be known, or a lack of a deity, then any religion which has withstood the test of time is likely the one with the fewest obvious flaws. It's rather like evolution of an idea rather than a creature.

However, the existence of such a religion does provide for the possibility of a deity figure.

comment by byrnema · 2010-07-27T03:50:34.909Z · score: -1 (1 votes) · LW · GW

I used the word 'embed' because this implies the deity could (possibly) be working within the rules of physics. The relationship between the deity, physical time and whether it is immediately involved in human events would be an interesting digression. The timelessness of physics is a relevant set of posts for that.

I agree with your comments. Regarding the strength of implications in either direction, (the possibility of a deity given a vigorous religion or the possibility of a true religion given a deity), there are two main questions:

  • if a deity exists, should we expect that it cares if it is known?

  • does the world actually look like a world in which a deity would be revealing itself? (though as you cautioned, such a world may or may not actually have a deity within it)

If this thread is likely to attenuate here, these questions are left for academic interest ...

comment by mattnewport · 2010-07-26T20:41:54.608Z · score: 1 (1 votes) · LW · GW

Given a historical study of various long-lasting and persistent belief systems, I settled on Christianity as the most probable belief system, based on my knowledge of human behavior, the historical facts of the actions surrounding the era and life of Jesus such as the deaths of the Disciples, a study of the bible, and a basic irrational hunch.

This sounds interesting. So were you raised an atheist or in some non-Christian religious tradition? Is the culture of your home country predominantly non-Christian? Conversion to a new belief system based on evidence is an interesting phenomenon because it is so relatively rare. The vast majority of religious people simply adopt the religion they were raised in or the dominant religion of the surrounding culture which is one piece of evidence that religious belief is not generally arrived at through rational thinking. Counter examples to this trend offer a case study in the kinds of evidence that can actually change people's minds.

comment by Skepxian · 2010-07-26T20:52:51.268Z · score: 1 (3 votes) · LW · GW

Apologies, I'm not as interesting as that. I changed a lot of beliefs about the belief system, but I was nonetheless still raised Christian. I didn't mean to imply otherwise - pre-existing developmental bias is part of the 'basic irrational hunch' part of the sentence.

I agree that religious belief is not generally arrived at through rational thinking, however - whether that religious belief is 'there is a God, and I know who it is!' or 'there is no God'. This is evidenced by, for instance, the time I was standing at church, just before services, enjoying the fine day, when someone steps up next to me. "Isn't it a beautiful morning?" he asks. "Yes it is!" I reply. "Makes you wonder how someone can see this and still be an atheist," he says.

(head turns slooooowly) "I think it's possible to appreciate a beautiful morning and still be atheist..."

"Yes, but then who would have made something so beautiful?"

(mouth opens to talk) (mouth works silently) "I believe the assumption would be, no one."

"And what kind of sense would that make?"

"I'd love to have that discussion, but service is about to start, and it's too beautiful a morning for what I suspect would be an argument."

comment by Vladimir_Nesov · 2010-07-26T21:02:47.537Z · score: 3 (3 votes) · LW · GW

Apologies, I'm not as interesting as that. I changed a lot of beliefs about the belief system, but I was nonetheless still raised Christian.

See also: Epistemic luck.

comment by Skepxian · 2010-07-26T23:44:29.753Z · score: 0 (0 votes) · LW · GW

Ah, yes. That rather strikes a chord, indeed. Thank you.

comment by WrongBot · 2010-06-21T20:42:41.412Z · score: 7 (7 votes) · LW · GW

Hi all.

I found this site through Methods of Rationality (as I suspect many have, of late). I've been reading through the sequences and archives for a while, and am finally starting to feel up to speed enough to comment here and there.

My name is Sam. I'm a programmer, mostly interested in writing and designing games. Oddly enough, my username derives from my much-neglected blog, which I believe predated this website.

I've always relished discovering that I'm wrong; if there's a better way to consistently improve the accuracy of one's beliefs, I'm not aware of it. So the LW approach makes an awful lot of sense to me, and I'm really enjoying how much concentrated critical thinking is available in the archives.

I'm also polyamorous, and so I'm considering a post or two on how polyamory (and maybe other kinds of alternative sexualities) relates to the practice of rationality. Would there be any interest in that sort of thing? I don't want to drag a pet topic into a place it's unwanted.

Furthermore, I am overfond of parentheses and semicolons. I apologize in advance.

comment by RobinZ · 2010-06-22T01:01:30.917Z · score: 4 (4 votes) · LW · GW

Hello! I like your blog.

I have a bit harsher filter than a number of prolific users of Less Wrong, I think - I would, pace Blueberry, like to see discussion of polyamory here only if you can explain how to apply the insights to other fields as well. I would be interested in the material, but I don't think this is the context for the merely interesting.

comment by WrongBot · 2010-06-22T02:37:42.942Z · score: 3 (3 votes) · LW · GW

The post I'm envisioning is less an analysis of polyamory as a lifestyle and more about what I'm tentatively calling the monogamy bias. While the science isn't quite there (I think; I need to do more research on the topic) to argue that a bias towards monogamy is built into human brain chemistry, it's certainly built into (Western) society. My personal experience has been that overcoming that bias makes life much more fun, so I'd probably end up talking about how to analyze whether monogamy is something a person might actually want.

The other LW topic that comes out of polyamory is the idea of managing romantic jealousy, which ends up being something of a necessity. Depending on how verbose I get, those may or may not get combined into a single post.

In any case, would either of those pass your (or more general) filters?

comment by RobinZ · 2010-06-22T03:25:38.421Z · score: 4 (6 votes) · LW · GW

Let me give an example of a topic that I think would pass my filter: establish that there is a bias (i.e. erroneous heuristic) toward monogamy, reverse-engineer the bias, demonstrate the same mechanisms working in other areas, and give suggestions for identifying other biases created by the same mechanism.

Let me give an example of a topic that I think would not pass my filter: establish that there is a bias towards monogamy, demonstrate the feasibility and desirability of polygamy, and offer instructions on how to overcome the bias and make polyamory an available and viable option.

Does that make sense?

comment by Vladimir_M · 2010-06-22T04:19:35.458Z · score: 3 (3 votes) · LW · GW

I certainly find quality discussions about such topics interesting and worthwhile, and consistent with the mission statement of advancing rationality and overcoming bias, but I'm not sure if the way you define your proposed topic is good.

Namely, you speak of the possibility that "bias towards monogamy is built into human brain chemistry," and claim that this bias is "certainly built into (Western) society." Now, in discussing topics like these, which present dangerous minefields of ideological biases and death-spirals, it is of utmost importance to keep one's language clear and precise, and avoid any vague sweeping statements.

Your statement, however, doesn't make it clear whether you are talking about a bias towards social norms encouraging (or mandating) monogamy, or about a bias towards monogamy as a personal choice held by individuals. If you're arguing the first claim, you must define precisely the metric you use to evaluate different social norms, which is a very difficult problem. If you're arguing the second one, you must establish which precise groups of people your claim applies to, and which not, and what metric of personal welfare you use to establish that biased decisions are being made. In either case, it seems to me that establishing a satisfactory case for a very general statement like the one you propose would be impossible without an accompanying list of very strong disclaimers.

Therefore, I'm not sure if it would be a good idea to set out to establish such a general and sweeping observation, which would, at least to less careful readers, likely be suggestive of stronger conclusions than what has actually been established. Perhaps it would be better to limit the discussion to particular, precisely defined biases on concrete questions that you believe are significant here.

comment by WrongBot · 2010-06-22T06:02:24.291Z · score: 3 (3 votes) · LW · GW

I think I grouped my ideas poorly; the two kinds of bias you point out would be better descriptions of the two topics I'm thinking of writing about. (And they definitely seem to be separate enough that I shouldn't be writing about them in the same post.) So, to clarify, then:

Topic 1: Individuals in industrialized cultures (but the U.S. more strongly than most, due to religious influence) very rarely question the default relationship style of monogamy in the absence of awareness of other options, and usually not even then. This is less of a bias and more of a blind spot: there are very few people who are aware that there are alternatives to visible monogamy. Non-consensual non-monogamy (cheating) is, of course, something of a special case. I'm not sure if there's an explicit "unquestioned assumptions that rule large aspects of your life" category on LW, but that kind of material seems to be well-received. I'd argue that there's at least as much reason to question the idea that "being monogamous is good" as the idea that "being religious is good." Of course my conclusions are a little different, in that one's choice of relationship style is ultimately a utilitarian consideration, whereas religion is nonsense.

Topic 2: Humans have a neurological bias in favor of (certain patterns of behavior associated with) monogamy. This would include romantic jealousy, as mentioned. While the research in humans is not yet definitive, there's substantial evidence that the hormone vasopressin, which is released into the brain during sexual activity, is associated with pair-bonding and male-male aggression. In prairie voles, vasopressin production seems to be the sole factor in whether or not they mate for life. Romantic/sexual jealousy is a cultural universal in humans, and has no known purpose other than to enforce monogamous behavior. So there are definitely biological factors that affect one's reasoning about relationship styles; it should be obvious that if some people prefer to ignore those biological factors, they see some benefit in doing so. I can say authoritatively that polyamory makes me happier than monogamy does, and I am not so self-absorbed as to think myself alone in this. Again, this is a case where at least some people can become happier by debiasing.

And that still leaves Topic 3: jealousy management, which I imagine would look something like the sequence on luminosity or posts on akrasia (my personal nemesis).

Thanks for your comment; it's really helped me clarify my organizational approach.

comment by CronoDAS · 2010-06-22T07:16:25.034Z · score: 0 (0 votes) · LW · GW

Several of us have enough trouble forming and maintaining even a single romantic relationship. :(

comment by khafra · 2010-06-22T18:03:34.156Z · score: 1 (1 votes) · LW · GW

Perhaps you should pay an agent to give you random chances at prizes in exchange for reported social interactions with eligible women.

comment by CronoDAS · 2010-06-22T18:47:48.117Z · score: 0 (0 votes) · LW · GW

My sarcasm detector is broken. :P

comment by wedrifid · 2010-06-22T05:21:23.892Z · score: 0 (0 votes) · LW · GW

The first is clearly about rational choices, psychological and social biases and the balance of incorporating existing instincts and overriding them with optimizations. That is right on topic so stop asking for permission and approval and go for it. Anything to do with social biases will inevitably have some people disapproving of it. That is how social biases get propagated! Here is not the place to be dominated by that effect.

As for the romantic jealousy thing... I don't see the relevance to rationality myself but if you think it is an effective way to demonstrate some rationalist technique or concept then go for it.

comment by Blueberry · 2010-06-21T20:46:02.124Z · score: 1 (1 votes) · LW · GW

Welcome!

I'm considering a post or two on how polyamory (and maybe other kinds of alternative sexualities) relates to the practice of rationality.

I'd certainly be very interested. The topic has come up a few times before; try searching in the search box on the right. I think the post would be well received, especially if you can explain how to apply the insights from polyamory to other fields as well.

Furthermore, I am overfond of parentheses and semicolons.

It's ok; I am too (they're hard to resist).

comment by ValH · 2010-05-07T13:21:28.443Z · score: 7 (7 votes) · LW · GW

I'm Valerie, 23 and a brand new atheist. I was directed to LW on a (also newly atheist) friend's recommendation and fell in love with it.

Since identifying as an atheist, I've struggled a bit with 'now what?' I feel like a whole new world has opened up to me and there is so much out there that I didn't even know existed. It's a bit overwhelming, but I'm loving the influx of new knowledge. I'm still working to shed old patterns of thinking and work my way into new ones. I have the difficulty of reading something and feeling that I understand it, but not being able to articulate it again (something left over from defending my theistic beliefs, which had no solid basis). I think I just need some practice :)

EDIT: Your link to the series of posts on why LW is generally atheistic is broken. Which makes me sad.

comment by ata · 2010-05-07T13:55:04.214Z · score: 3 (3 votes) · LW · GW

Welcome!

The page on LW's views on religion (or something like that page — not sure if the old wiki's content was migrated directly or just replaced) is now here. The Mysterious Answers to Mysterious Questions, Reductionism, and How To Actually Change Your Mind sequences are also relevant, in that they provide the background knowledge sufficient to make theism seem obviously wrong. Sounds like you're already convinced, but those sequences contain some pretty crucial core rationalist material, so I'd recommend reading them anyway (if you haven't already).

If there's anything in particular you're thinking "now what?" about, I and others here would be happy to direct you to relevant posts/sequences and help with any other questions about life, the universe, and everything. (Me, I recently decided to go back to the very beginning and read every post and the comments on most of them... but I realize not everyone's as dedicated/crazy (dedicrazy?) as me. :P)

comment by alexflint · 2010-05-07T13:56:21.278Z · score: 0 (0 votes) · LW · GW

Welcome! I hope you enjoy the posts and discussion here, and suggest ways that it could be improved.

comment by clarissethorn · 2010-03-15T10:24:47.727Z · score: 7 (7 votes) · LW · GW

I go by Clarisse and I'm a feminist, sex-positive educator who has delivered workshops on both sexual communication and BDSM to a variety of audiences, including New York’s Museum of Sex, San Francisco’s Center for Sex and Culture, and several Chicago universities. I created and curated the original Sex+++ sex-positive documentary film series at Chicago’s Jane Addams Hull-House Museum; I have also volunteered as an archivist, curator and fundraiser for that venerable BDSM institution, the Leather Archives & Museum. Currently, I'm working on HIV mitigation in southern Africa. I blog at clarissethorn.wordpress.com and Twitter at @clarissethorn.

Besides sex, other interests include gaming, science fiction and fantasy, and housing cooperatives.

I've read some posts here that I thought had really awful attitudes about sexuality and BDSM in particular, so I'm sure I'll be posting about those. I would like it if people were more rational about sex, inasmuch as we can be.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-15T10:34:28.080Z · score: 2 (2 votes) · LW · GW

I've read some posts here that I thought had really awful attitudes about sexuality and BDSM in particular

?? Not any of mine, I hope.

EDIT: I see, Phil Goetz on masochism. Well, I downvoted it. Not much else to say, aside from noting that it had net 4 points and that karma rules do make it easier to upvote than downvote.

This is a community blog and I think it's pretty fair to say that what has not been voted high or promoted ought not to be blamed on "Less Wrong".

comment by clarissethorn · 2010-03-15T10:52:02.720Z · score: 3 (3 votes) · LW · GW

That's fair. And I'll add that for a site populated mainly by entitled white guys (I kid, I kid), this site does much better at being generally feminist than most within that demographic.

PS It's kind of exciting to be talking to you, EY. Your article on heuristics and biases in the context of extinction events is one of my favorites ever. I probably think about it once a week.

comment by Hook · 2010-03-05T15:40:01.984Z · score: 7 (7 votes) · LW · GW

Hello.
My name is Dan, and I'm a 30 year old software engineer living in Maryland. I was a mostly-lurking member of the Extropian mailing list back in the day, and I've been following the progress of the SIAI sporadically since its founding. I've made a few donations, but nothing terribly significant.

I've been an atheist for half my life now, and as I've grown older I've tended more and more to rational thinking. My wife recently made a comment that she specifically uses rational argument with me much more so than anyone else she has to deal with, even at work, because she knows that is what will work. (Obviously, she wins frequently enough to make it worth her while.)

I hope to have something minor to contribute to the akrasia discussion, although I haven't fully formulated it yet. I used to be an avid video game player and I don't play anymore. The last few times I played any games I didn't even enjoy it. I plan to describe the experiences that led to this state. Unfortunately for general applicability, one of those experiences is "grow older and have a child."

It's not the most altruistic of motives, but what most draws me to this community is that I enjoy being right, and there seem to be lots of things I can learn here to help me to be right more often. What I would dream about getting out of this community is a way to find or prepare for meaningful work that helped reduce existential risk. I have a one year old daughter and I was recently asking myself "What is most likely to kill my children and grandchildren?" The answer I came up with was "The same thing that kills everyone else."

comment by orthonormal · 2010-03-22T03:05:20.259Z · score: 3 (3 votes) · LW · GW

I have a one year old daughter and I was recently asking myself "What is most likely to kill my children and grandchildren?" The answer I came up with was "The same thing that kills everyone else."

That's a pretty compelling way to start a conversation on existential risk. I like it.

comment by hrishimittal · 2009-05-17T13:35:51.177Z · score: 7 (7 votes) · LW · GW

Hi, I'm Hrishi, 26, male. I work in air pollution modelling in London. I'm also doing a part-time PhD.

I am an atheist but come from a very religious family background.

When I was 15, I once cried uncontrollably and asked to see God. If there is indeed such a beautiful supreme being then why didn't my family want to meet Him? I was told that their faith was weak and only the greatest sages can see God after a lot of self-afflicted misery. So, I thought nevermind.

I've signed up for cryonics. You should too, or it'll just be 3 of us from LW when we wake up on the other side. I don't mind hogging all the press, but inside me lives a shiny ball of compassion which wants me to share the glory with you.

I wish to live a happy and healthy life.

comment by arthurlewis · 2009-04-16T16:13:56.406Z · score: 7 (9 votes) · LW · GW
  • Handle: arthurlewis
  • Location: New York, NY
  • Age: 28
  • Education: BA in Music.
  • Occupation: Musician / Teacher / Mac Support Guy
  • Blog/Music: http://arthurthefourth.com

My career as a rationalist began when I started doing tech support, and realized the divide between successful troubleshooting and what most customers tried to do. I think the key to "winning" is to challenge your assumptions about how to win, and what winning is. I think that makes me an instrumental rationalist, but I'm not quite sure I understand the term. I'm here because OB and LW are among the closest things I've ever seen to an honest attempt to discover truth, whatever that may turn out to mean. And because I really like the phrase "Shut up and calculate!"

Note to new commenters: The "Help" link below the comment box will give you formatting tips.

comment by MBlume · 2009-04-16T17:48:42.597Z · score: 1 (1 votes) · LW · GW

Note to new commenters: The "Help" link below the comment box will give you formatting tips.

This belongs in the Welcome post, thank you for reminding me!

comment by Paul Crowley (ciphergoth) · 2009-04-16T09:44:34.730Z · score: 7 (11 votes) · LW · GW

This community is too young to have veterans. Since this is the first such post, I think we should all be encouraged to introduce ourselves.

Thanks for doing this!

comment by MBlume · 2009-04-16T09:49:20.379Z · score: 0 (2 votes) · LW · GW

Since this is the first such post, I think we should all be encouraged to introduce ourselves.

Wonderful idea =)

comment by zslastman · 2012-06-24T16:30:18.799Z · score: 6 (6 votes) · LW · GW

I'm a 24 year old PhD student of molecular biology. I arrived here trying to get at the many worlds vs Copenhagen debate as a nonspecialist, and as part of a sustained campaign of reading that will allow me to tell a friend who likes Hegel where to shove it. I'm also here because I wanted to reach a decision about whether I really want to do biology; if not, whether I should quit; and if I leave, what I actually want to do.

comment by phonypapercut · 2012-06-20T23:35:39.155Z · score: 6 (6 votes) · LW · GW

Hello. I've been browsing articles that show up on the front page for about a year now. Just recently started going through the sequences and decided it would be a good time to create an account.

comment by Catnip · 2011-12-12T16:16:24.240Z · score: 6 (6 votes) · LW · GW

Hello, Less Wrong.

I am Russian, atheistic, 27, trying to be rational.

Initially I came here to read a thorough explanation of Bayes' theorem, but noticed that LessWrong contains a lot more than that and decided to stay for a while.

I am really pleased by the quality of the material and pleasantly surprised by the quality of the comments. It is rare to see useful comments on the Internet.

I am going to read at least some sequences first and comment if I have something to say. Though, I know I WILL be sidetracked by HP:MoR and "Three worlds collide". Well, my love for SF always got me.

comment by potato · 2011-06-15T12:38:02.562Z · score: 6 (8 votes) · LW · GW

Hello Less wrong.

I've been reading Yudkowsky for a while now. I'm a philosophy major from NJ, and he's been quite popular around here since I showed some of my friends three worlds collide. I am here because I think I can offer this forum new and well-considered views on cognition, computability, epistemology, ontology, and valid inference in general, and also to have my views kicked around a bit. Hopefully our mutual kicking around of each other's views will toughen them up for future kicking battles.

I have studied logic at high levels, and have an intricate understanding of Gödel's incompleteness theorems and of Tarski's undefinability theorem. I plan to write short posts that might make the two accessible when I have the karma to do so. So the sooner you give me 20 karma, the sooner you will have a non-logician-friendly explanation of Gödel's first incompleteness theorem.

comment by [deleted] · 2011-12-20T16:42:50.235Z · score: 2 (2 votes) · LW · GW

three worlds collide

Awesome. Finally someone. Reading the intros I was starting to think only HP:MOR was still bringing people here.

comment by Benquo · 2011-06-15T13:00:36.501Z · score: 2 (2 votes) · LW · GW

Welcome! It sounds like you have a lot to offer here.

You could put your Gödel post in the discussion section now - it only requires 2 karma to do that - and transfer it to the main page later if/when it's popular. The karma threshold is not very high, but asking for free karma instead of building up a record of commenting/discussion posts defeats the purpose of the 20-karma threshold.

comment by potato · 2011-06-15T21:49:30.593Z · score: 0 (0 votes) · LW · GW

Good point. I've already written a discussion page to get people talking about the epistemic status of undecidable propositions, but I feel like a full description of Gödel's first incompleteness theorem might be a bit much for a discussion page.

comment by zntneo · 2011-04-04T18:35:32.758Z · score: 6 (6 votes) · LW · GW

Hello, my name is Zachary Aletheia (when my wife and I got married we decided to choose a last name based on something that had meaning to us; aletheia means truth in Greek, and we both have a passion for finding out the truth). Looking back on my journey to being a rationalist, I think it was a two-step process (though given how I've repeatedly thought about it, I've probably changed the details in my memory quite a bit). I think the first step was during an anthropology class: I watched this film about "magic" (I was a neo-pagan at the time who believed I could manipulate energy with my mind), and how absurd the video seemed really made me want to find a way to have beliefs that aren't easy for others to see as absurd or laughable. From there I read quite a lot about logic (I still have a love affair with pure logic; I think I own 3 books on the subject and recently bought one from lukeprog). This all occurred while I was a computer engineering undergraduate.

When I couldn't pass physics (which at the time I thought, due to self-serving bias, was because I was more interested in what I got my degree in), I decided to switch majors to psychology. During this time I still took lots of vitamins and supplements and was even a 9/11 truther for a while. Then I took a class called "Cognition" where we learned about quite a few of the biases and heuristics that are talked about on LW. Since then I've started listening to a ton of skeptic podcasts and in general have tried to be a better rationalist.

One area where I do seem to have a hard time being a rationalist is myself. I hold myself in very low self-esteem (for instance, I truly debated whether I should post on here, because everyone seems so brilliant; how could I possibly add anything?). I am hoping to apply reason to that area of my life.

When it comes to life goals, I am still trying to figure that out. I am leaning towards becoming a psychology prof, but I'm really not sure.

Oh, I found Less Wrong due to lukeprog and CSA. I have since become basically addicted to reading posts on it.

Oh, and I live in Seaside, CA; if anyone lives near there, I would love to go to a LW meetup.

comment by Oscar_Cunningham · 2011-04-04T18:38:50.716Z · score: 1 (1 votes) · LW · GW

Hi, welcome to LessWrong!

comment by DavidAgain · 2011-03-12T17:19:38.680Z · score: 6 (6 votes) · LW · GW

Hi

Didn't realise that this thread existed, so this 'hello' is after 20 or so posts. Oh well! I found Less Wrong because my brother recommended TVtropes, which linked to Harry Potter and the Methods of Rationality, and THAT led me back here. I've now recommended this site to my brother, completing the circle.

I've always been interested in rationality, I guess: I wouldn't identify any particular point of 'becoming a rationalist', though I've had times where I've come across ideas that help me be more accurate. Including some on here, actually. There's a second strand to my interest: I work in government and am interested in practical applications of rational thinking in large and complex organisations.

The Singularity Institute and Future of Humanity stuff is not something I've looked at before: I find it fairly interesting on an amateur level, and have some philosophy background that means the discussions make sense to me. I have zero computer science though, and generally my education is in the humanities rather than anything scientific.

comment by free_rip · 2011-03-29T08:43:11.092Z · score: 1 (1 votes) · LW · GW

Hi, David. I was very happy when I read

I work in government and am interested in practical applications of rational thinking in large and complex organisations.

A huge number of people here have math/computing/science majors and/or jobs. I'm in the same basket as you, though - very interested in the applications of rationality, but with almost no education relevant to it. I'm currently stuck between politics and academia (in psychology, politics, economics maybe?) as a career choice, but either way...

And we need that - people from outside the field, who extend the ideas into other areas of society, whether we understand it all in depth or not.

So best of luck to you! And as Alexandros says, don't hesitate to put a post in the discussion forum with any progress, problems or anything of interest you come across in your quest. I'll be keeping an eye out for it.

comment by Alexandros · 2011-03-14T09:55:17.918Z · score: 0 (0 votes) · LW · GW

Welcome! If you do make any progress on your quest, do share your findings with us.

comment by Swimmer963 · 2011-02-18T12:02:14.328Z · score: 6 (6 votes) · LW · GW

Hi everyone!

I found this blog by clicking a link on Eliezer's site...which I found after seeing his name in a transhumanist mailing list...which I subscribed to after reading Ray Kurzweil's The Singularity is Near when I was fifteen. I found Harry Potter and the Methods of Rationality at the same time, and I've now successfully addicted my 16-year-old brother as well.

I'm 19 and I'm studying nursing in Ottawa. I work as a lifeguard and swim instructor at a Jewish Community Centre. (I'm not Jewish.) I sing in a girls' choir at an Anglican church. (I'm not Christian.) This usually throws people off a little. My favourite hobbies are writing and composing music. I can program in Java at a fairly beginner level after taking one class as my elective.

I've been reading this site for about a year and I decided it was time to start being useful. Cheers!

comment by jwhendy · 2011-01-06T02:53:13.886Z · score: 6 (6 votes) · LW · GW

Hi, I've been hanging around for several months now and decided to join. My name is John and I found the site (I believe) via a link on CommonSenseAtheism to How to actually change your mind. I read through many of those posts and took notes and resonated with a lot. I loved EY's Twelve Virtues and the Litany of Gendlin.

I'm a graduate in mechanical engineering and work as one today. I don't know that I would call myself a rationalist, but only because perhaps I haven't become one yet. In other words, I want to be, but do not consider myself to be well-versed in rationalist methods and thought compared to the posts/comments I read here.

To close, I was brought to this site in a round-about way because I have recently de-converted from Catholicism (which is what took me to CSA). I'm still amidst my "quest" and blog about it HERE. I would say I'm not sure god doesn't exist or that Christianity is false, but the belief is no longer there. I seek to be as certain and justified as I can in whatever beliefs I hold. LessWrong has seemed to be a good tool toward that end. I look forward to continuing to learn and want to take this opportunity to begin participating more.

Note: I also post as "Hendy" on several other blogs. We are the same.

comment by MoreOn · 2010-12-09T23:33:43.613Z · score: 6 (6 votes) · LW · GW

Okay. Demographics. Boring stuff. Just skip to the next paragraph. I’m a masters student in mathematics (hopefully soon-to-be PhD student in economics). During undergrad, I majored in Biology, Economics and Math, and minored in Creative Writing (and nearly minored in Chemistry, Marine Science, Statistics and PE) … I’ll spare you the details, but most of those you won’t see on my resume for various reasons. Think: Master of None, not Omnidisciplinary Scientist.

My life goal is to write a financially self-sustainable computer game… for reasons I’ll keep secret for now. Seems like I’m not the first one in this thread to have this life goal.

I found LW through Harry Potter & MOR. I’d found HP&MOR through Tvtropes. I’d found Tvtropes through the webcomic The Meek. I’d found The Meek through The Phoenix Requiem. Which I’d found through the Top Web Comics site. That’s as far as I remember, 2 years ago.

I haven’t read most of the site, so far only about Bayes and the links off of that. And I’d started reading Harry Potter 3 weeks ago. So as far as you can see, I’m an ignorant newbie who speaks first and listens second.

I don’t identify myself as a rationalist. Repeat: I DO NOT identify myself as a rationalist. I didn’t notice that I’m different from everyone else when I was eleven. Or twelve. Or thereafter. I’m not smart enough to be a rationalist. I don't mean that in the Socratic sense, "I know nothing, but at least I know more than you, idiot." I mean I'm just not smart. I have the memory of a house cat. I can't name-drop on cue. I'm irrational. And I have BELIEFS (among them emergence, and when I model them, it'll be a Take That but for now it's just a belief).

Oh, and my name is a reference to Baldur's Gate 2, to my intention of trying to challenge everything on this blog (what's my alternative? mindlessly agree?), and to how morons can't add 1+1.

comment by Axel · 2010-11-12T22:33:06.570Z · score: 6 (6 votes) · LW · GW

My name's Axel Glibert. I'm 21, I just finished studying Biology and now I'm going for a teaching job. I found this wonderful site through HP and the Methods of Rationality and it has been an eye-opener for me.

I've been raised in a highly religious environment but it didn't take very long before I threw that out of the window. Since then I had to make my own moral rules and attempts at understanding how the universe works. My first "scientific experiments" were rather ineffective, but they caused me to browse through the science section of the local library... and now, more than a decade later, here I am!

I have long thought I was the only one to so openly choose Science over Religion (thinking even scientists were secretly religious because it was the "right thing to do") but then I found Less Wrong filled with like-minded people! For the past 3 months I've been reading through the core sequences on this site and now I've finally made an account. I'm still too intimidated by the sheer brilliance of some of the threads here to actually post but that's just more motivation for me to study on my own.

comment by David_Gerard · 2010-12-10T21:49:30.893Z · score: 3 (3 votes) · LW · GW

Just to go cross-site (RW is slightly anti-endorsed by LW), would the Atheism FAQ for the Newly Deconverted have been of conceivable use to your recovering religious younger self?

comment by Axel · 2010-12-27T23:00:45.003Z · score: 2 (2 votes) · LW · GW

Yes, that list has a lot of the answers I was looking for. However, for my younger self, breaking from religion meant making my own moral rules, so there is a good chance I would have rejected it as just another text trying to control my life (yes, my younger self was quite dramatic).

comment by SoulAllnighter · 2010-09-26T08:53:01.790Z · score: 6 (8 votes) · LW · GW

G'day LW! I'm an Aussie currently studying at the Australian National University in Canberra. My name is Sam, and I should point out that the 'G'day' is just for fun; most Australians never use that phrase, and it kinda makes me cringe.

At this very moment I'm trying to finish my thesis on the foundations of inductive reasoning, which I guess is pretty relevant to this community. A big part of my thesis is to translate a lot of very technical mathematics regarding Bayesianism and Sollomonoff induction into philosophical and intuitive explanations, so this whole site is really useful to me in just seeing how people think about rationalism and the mechanics of beliefs.

Although my entire degree has been focused on the rational side of the human spectrum, I remain a lot more open-minded, and I think our entire education system regards math and physics too highly and does not leave enough room for creativity. Although creative subjects exist in the arts, the general culture is to regard them as intellectually inferior in some sense, which has led to a hugely skewed idea of intelligence.

The saying goes "the map is not the territory" and although we can continually refine our maps through science and math I think truly understanding the territory can only be achieved through direct experience.

I'm also very worried about the state of the world, and it is exactly through rational open forums such as this that much-needed progress can be discussed and advanced.

I guess I have a lot to say, and instead of posting it here I should save it for an actual post, whenever I get time. But it's refreshing to see such an interesting online community amongst the seemingly endless rubbish on the net.

comment by Wei_Dai · 2010-09-26T19:39:20.933Z · score: 1 (1 votes) · LW · GW

Welcome to Less Wrong! I think you might be interested in these posts of mine, where I develop some standard and non-standard interpretations of probability theory. Let me know what you think. (BTW, I think you misspelled Solomonoff's name?)

comment by Alex_Altair · 2010-07-21T21:01:24.528Z · score: 6 (6 votes) · LW · GW

I recently found Less Wrong through Eliezer's Harry Potter fanfic, which has become my second favorite book. Thank you so much, Eliezer, for reminding me how rich my Art can be.

I was also delighted to find out (not so surprisingly) that Eliezer was an AI researcher. I have, over the past several months, decided to change my career path to AGI. So many of these articles have been helpful.

I have been a rationalist since I can remember. But I was raised as a Christian, and for some reason it took me a while to think to question the premise of God. Fortunately as soon as I did, I rejected it. Then it was up to me to 1) figure out how to be immortal and 2) figure out morality. I'll be signing up for cryonics as soon as I can afford it. Life is my highest value because it is the terminal value; it is required for any other value to be possible.

I've been reading this blog every day since I've found it, and hope to get constant benefit from it. I'm usually quiet, but I suspect the more I read, the more I'll want to comment and post.

comment by Vladimir_Nesov · 2010-07-21T21:17:32.390Z · score: 4 (4 votes) · LW · GW
comment by Alex_Altair · 2010-07-21T21:37:45.987Z · score: 0 (0 votes) · LW · GW

"AGI is death, you want Friendly AI in particular and not AGI in general."

I'm not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don't plan to give it outputs until I know what it does.

"'Life' is not the terminal value, terminal value is very complex."

I don't mean that life is the terminal value that all human's actions reduce to. I mean it in exactly the way I said above; for me to achieve any other value requires that I am alive. I also don't mean that every value I have reduces to my desire to live, just that, if it comes down to one or the other, I choose life.

comment by Vladimir_Nesov · 2010-07-21T21:49:38.655Z · score: 3 (3 votes) · LW · GW

If you are determined to read the sequences, you'll see. At least read the posts linked from the wiki pages.

I'm not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don't plan to give it outputs until I know what it does.

Well, you'll have the same chance of successfully discovering that the AI does what you want as a sequence of coin tosses spontaneously spelling out the text of "War and Peace". Even if you have a perfect test, you still need for the tested object to have a chance of satisfying the testing criteria. And in this case, you'll have neither, as reliable testing is also not possible. You need to construct the AI with correct values from the start.

I don't mean that life is the terminal value that all human's actions reduce to. I mean it in exactly the way I said above; for me to achieve any other value requires that I am alive.

Acting in the world might require you being alive, but it's not necessary for you to be alive in order for the world to have value, all according to your own preference. It does matter to you what happens with the world after you die. A fact doesn't disappear the moment it can no longer be observed. And it's possible to be mistaken about your own values.

comment by JGWeissman · 2010-07-21T21:49:34.215Z · score: 3 (3 votes) · LW · GW

I'm not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don't plan to give it outputs until I know what it does.

I am not sure what you mean by "give it outputs", but you may be interested in this investigation of attempting to contain an AGI.

I don't mean that life is the terminal value that all human's actions reduce to. I mean it in exactly the way I said above; for me to achieve any other value requires that I am alive. I also don't mean that every value I have reduces to my desire to live, just that, if it comes down to one or the other, I choose life.

Then I think you meant that "Life is the instrumental value."

comment by Nick_Tarleton · 2010-07-21T23:36:40.069Z · score: 1 (1 votes) · LW · GW

Then I think you meant that "Life is the instrumental value."

to amplify: Terminal Values and Instrumental Values

comment by steven0461 · 2010-07-21T21:08:04.084Z · score: 0 (0 votes) · LW · GW

Life is my highest value because it is the terminal value; it is required for any other value to be possible.

A value that's instrumental to every other value is still instrumental.

comment by Tesseract · 2010-07-08T01:12:41.116Z · score: 6 (6 votes) · LW · GW

Hello! I'm Sam. I'm 17, a newly minted high school graduate, and I'll be heading off to Reed College in Portland, Oregon next month.

I discovered Less Wrong through a link (whose origin I no longer remember) to "A Fable of Science and Politics" a couple of months ago. The post was rather striking, and the site's banner was alluring, so I clicked on it. The result, over the past couple of months, has been a massive accumulation of bookmarks (18 directly from Less Wrong at the time of this writing) accompanied by an astonishing amount of insight.

This place is probably the most intellectually stimulating site I've ever found on the internet, and I'm very much looking forward to discovering more posts, as well as reading through the ones I've stored up. I have, until now, mostly read bits and pieces that I've seen on the main page or followed links to, partially because I haven't had time and partially because some of the posts can be intimidatingly academic (I don't have the math and science background to understand some of what Eliezer writes about), but I've made this account and plan to delve into the Sequences shortly.

To some degree, I think I've always been a rationalist. I've always been both inquisitive and argumentative (captain of my school's debate team, by the way), and those qualities combined tend to lead one to questioning established thought. Although my parents are mildly religious, I don't think I ever actually believed in God (haven't gone to synagogue since my Bar Mitzvah), and that lack of belief hardened into strong atheism.

I'm very fond of logic, and I've argued myself from atheism to materialism and hence to determinism, with utilitarianism thrown in along the way. They're not popular viewpoints, but they're internally consistent, and the world becomes much clearer and simpler when seen from them. I'm still trying to refine my philosophies to create a truly coherent view of the world. I very much enjoy Less Wrong both because it's a hub of my low-percentage philosophy and because it's uniquely clarifying in its perspectives.

I enjoy psychology and philosophy, the former of which I'm considering as a major, and was heavily influenced by reading The Moral Animal (which I highly recommend if you haven't already read it) during my freshman year of high school. I love reading, practice introspection, and am continually attempting to incorporate as much information as I can into my worldview.

I actually already have about one and a half posts ready (one on consciousness, one on post rem information), but I'll readily wait until I've read through the Sequences and accumulated some karma before I publish them.

I've written too much already, so I'll cut this off here. Once again: Hi everyone! My mind is open.

comment by lsparrish · 2010-07-08T01:42:19.673Z · score: 1 (1 votes) · LW · GW

Good to meet you! If you're interested in cryonics at all, you'll be pleased to note that there is a local group headed by my friends Chana and Aschwin de Wolf. http://www.cryonicsoregon.com/

comment by Kevin · 2010-07-08T01:34:40.474Z · score: 1 (1 votes) · LW · GW

Congratulations! and Welcome!

comment by luminosity · 2010-06-17T05:04:17.633Z · score: 6 (8 votes) · LW · GW

Hi there,

My name is Lachlan, 25 years old, and I too am a computer programmer. I found less wrong via Eliezer's site; having been linked there by a comment on Charles Stross's blog, if I recall correctly.

I've read through a lot of the LW backlog and generally find it all very interesting, but haven't yet taken the time and effort to try to apply the useful seeming guidelines to my life and evaluate the results. I blame this on having left my job recently, and feeling that I have enough change in my life right now. I worry that this excuse will metamorphose into another though, and become a pattern of not examining my thinking as best as possible.

All that said, I do often catch myself thinking thoughts that on examination don't hold up, and re-evaluating them. The best expression of this that I've seen is Pratchett's first, second, third thoughts.

comment by Alicorn · 2010-06-17T06:14:01.412Z · score: 4 (6 votes) · LW · GW

Love the username!

comment by luminosity · 2010-06-18T01:20:11.596Z · score: 2 (2 votes) · LW · GW

Completely coincidental -- just a word I liked the sound of 10 years ago. It does fit in here rather well though.

comment by Gigi · 2010-06-02T15:23:44.719Z · score: 6 (6 votes) · LW · GW

Hi, everyone, you can call me Gigi. I'm a Mechanical Engineering student with a variety of interests ranking among everything from physics to art (unfortunately, I know more about the latter than the former). I've been reading LW frequently and for long sessions for a couple of weeks now.

I was attracted to LW primarily because of the apparent intelligence and friendliness of the community, and the fact that many of the articles illuminated and structured my previous thoughts about the world (I will not bother to name any here, many are in the Sequences).

While the rationalist viewpoint is fairly new to me (aside from various encounters where I could not identify ideas as "rationalist"), I am looking forward to expanding my intellectual horizons by reading, and hopefully eventually contributing something meaningful back to the community.

If anyone has recommendations for reading outside LW that may be interesting or relevant to me, I welcome them. I've got an entire summer ahead of me to rearrange my thinking and improve my understanding.

comment by Vive-ut-Vivas · 2010-06-04T03:04:30.424Z · score: 2 (2 votes) · LW · GW

I'm a Mechanical Engineering student with a variety of interests ranking among everything from physics to art (unfortunately, I know more about the latter than the former).

Why "unfortunately"? I'd love to see more discussion about art on Less Wrong.

comment by Gigi · 2010-06-04T04:59:57.973Z · score: 2 (2 votes) · LW · GW

Hah, the relative lack of discussion on art was exactly why it seemed to me as if the physics was more useful here. But who knows, I may be able to start up some discussion once I've gotten into the swing of things.

comment by RobinZ · 2010-06-04T18:54:00.647Z · score: 1 (1 votes) · LW · GW

There was Rationality and the English Language and Human Evil and Muddled Thinking a while ago that brought in a literary angle (George Orwell, to be specific) - but I think Yudkowsky talked before about how people disingenuously say they want "an artist's perspective". That there is a relative lack of discussion on art is not a reflection of a particular lack of interest in art, but of the fact that we do not know what to say about art that is relevant to rationality.

(Although commentary spinning off of the drawing-on-the-right-side-of-the-brain insight into failure modes of illustration could be illuminating...)

comment by Gigi · 2010-06-05T18:01:06.004Z · score: 0 (0 votes) · LW · GW

I've been thinking on that, actually. So far all I've come up with is the fact that learning to exercise your creativity and think more abstractly can help very much with finding new ways of approaching problems and looking at your universe, thereby helping to shed new light on certain subjects. The obvious flaw is, of course, that you can learn to be creative without art; there are legions of scientists who show it to be so.

If I happen to come up with something that I think is particularly relevant or interesting I will definitely show it to the community, though.

comment by NancyLebovitz · 2010-06-04T07:52:14.537Z · score: 1 (1 votes) · LW · GW

I was thinking about recommending Effortless Mastery by Kenny Werner-- it's about the hard work of eliminating effort so as to become an excellent jazz musician, but has more general application. For example, it's the only book I've seen about getting over anxiety-driven procrastination.

It seemed too far off topic, but now that you mention art....

comment by RomanDavis · 2010-06-04T10:31:40.938Z · score: 0 (0 votes) · LW · GW

I've been trying to use drawing as a test case in this thread:

http://lesswrong.com/lw/2ax/open_thread_june_2010/23am

Just Ctrl+F my name and you'll find my derails and their replies.

comment by RobinZ · 2010-06-03T01:00:57.385Z · score: 1 (1 votes) · LW · GW

Many people here loved Gödel, Escher, Bach by Douglas Hofstadter. It's quite a hodge-podge, but there's a theme underlying the eclectic goodness.

I have a peculiar fondness for Consciousness Explained by Daniel Dennett, which I find to be an excellent attempt (although [edit: I suspect] obsolete and probably flawed) to provide a reductionist explanation of an apparently-featureless phenomenon - many people, including many people here, found it dissatisfying.

I cannot think of other specifically LessWrongian recommendations off the top of my head - as NancyLebovitz said, elaboration would help.

comment by Gigi · 2010-06-04T02:23:54.325Z · score: 0 (0 votes) · LW · GW

Gödel, Escher, Bach is definitely a good recommendation, at least it appears to be from my cursory research on it.

As to what sort of recommendations I am looking for, I've noticed that LW appears to have a few favorite philosophers (Dennett among them) and a few favorite topics (AI, bias, utilitarian perspective, etc.) which I might benefit from understanding better, nice as the articles are. Some recommendations of good books on some of LW's favorite topics would be a wonderful place to start.

Thanks much for your help.

comment by Nick_Tarleton · 2010-06-04T05:21:22.460Z · score: 0 (0 votes) · LW · GW
comment by mattnewport · 2010-06-03T01:10:22.252Z · score: 0 (0 votes) · LW · GW

I'm a fan of Consciousness Explained as well, though that may be partly nostalgia as in some ways I feel it marks the beginning of (or at least a major milestone on) my rationalist journey.

comment by Blueberry · 2010-06-03T01:37:11.541Z · score: 2 (2 votes) · LW · GW

Wow, I'm surprised to hear that two people referred to Consciousness Explained as obsolete. If there's a better book on consciousness out there, I'd love to hear about it.

comment by mattnewport · 2010-06-03T02:06:15.326Z · score: 0 (0 votes) · LW · GW

I didn't intend to imply I thought it was obsolete, just that I may hold it in higher regard because of when I read it than if I discovered it today.

comment by RobinZ · 2010-06-03T01:43:53.237Z · score: 0 (0 votes) · LW · GW

As would I, actually. I guessed "obsolete" because the book came out in 1991 (and Dennett has written further books on the subject in the following nineteen years). I've not investigated its shortcomings.

comment by Blueberry · 2010-06-03T18:47:57.737Z · score: 1 (1 votes) · LW · GW

Good point: thanks. Dennett wrote Sweet Dreams in 2005 to update Consciousness Explained, and in the preface he wrote

The theory I sketched in Consciousness Explained in 1991 is holding up pretty well . . . I didn't get it all right the first time, but I didn't get it all wrong either. It is time for some revision and renewal.

I highly recommend Sweet Dreams to Gigi and anyone else interested in consciousness. (It's also shorter and more accessible than Consciousness Explained.)

comment by Gigi · 2010-06-04T02:26:04.629Z · score: 0 (0 votes) · LW · GW

Thank you for the updated recommendation. I will probably look into reading Sweet Dreams. Would I benefit from reading Consciousness Explained first, or would I do well with just the one?

comment by Blueberry · 2010-06-04T08:43:34.835Z · score: 1 (1 votes) · LW · GW

I'd recommend reading them both, and you'd probably benefit from reading CE first. But I'd actually start with Godel, Escher, Bach (by Hofstadter) and The Mind's I (which Dennett co-wrote with Hofstadter).

comment by Tyrrell_McAllister · 2010-06-04T19:13:17.204Z · score: 0 (0 votes) · LW · GW

But I'd actually start with Godel, Escher, Bach (by Hofstadter) and The Mind's I (which Dennett co-wrote with Hofstadter).

A while back, colinmarshall posted a detailed chapter-by-chapter review of The Mind's I.

comment by RobinZ · 2010-06-04T19:09:38.245Z · score: 0 (0 votes) · LW · GW

Oh, The Mind's I was excellent - it is a compilation of short works with commentary that touches on a lot of nifty themes with respect to identity and personhood.

comment by Tyrrell_McAllister · 2010-06-04T19:14:12.604Z · score: 0 (0 votes) · LW · GW

A while back, colinmarshall posted a detailed chapter-by-chapter review of The Mind's I.

comment by RobinZ · 2010-06-04T19:19:24.672Z · score: 0 (0 votes) · LW · GW

Thanks for the link!

...which links to the recommended reading list for new rationalists, which I suppose we should have given to Gigi in the first place. The sad thing is, I contributed to that list, and completely forgot it until now.

comment by Blueberry · 2010-06-04T08:49:27.342Z · score: 0 (0 votes) · LW · GW

Oh, and also Hofstadter's Metamagical Themas. (Yes, that's the correct spelling.)

comment by RobinZ · 2010-06-04T19:12:52.935Z · score: 0 (0 votes) · LW · GW

The title - being the title of Hofstadter's column in Scientific American (back when Scientific American was a substantive publication), of which the book is a collection - is an anagram of Mathematical Games, the name of his predecessor's (Martin Gardner's) column. That, too, is an enjoyable and eclectic read.

comment by NancyLebovitz · 2010-06-02T23:03:30.791Z · score: 0 (0 votes) · LW · GW

Welcome!

Could you expand a little more on what sort of books you're interested in?

comment by taiyo · 2010-04-19T19:47:39.292Z · score: 6 (6 votes) · LW · GW

My name is Taiyo Inoue. I am 32, male, father of a 1-year-old son, married, and a math professor. I enjoy playing the acoustic guitar (American primitive fingerpicking), playing games, and soaking up the non-poisonous bits of the internet.

I went through 12 years of math study without ever really learning that probability theory is the ultimate applied math. I played poker for a bit during the easy money boom for fun and hit on basic probability theory which the 12-year-old me could have understood, but I was ignorant of the Bayesian framework for epistemology until I was 30 years old. This really annoys me.
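(The Bayesian updating alluded to here fits in a few lines. A minimal sketch; the poker-flavored hypothesis, the probabilities, and the function name are all invented for illustration, not taken from any particular source:)

```python
# Bayes' rule: P(H|E) = P(E|H) P(H) / [P(E|H) P(H) + P(E|~H) P(~H)]
# All numbers below are hypothetical, chosen only to make the update concrete.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothesis H: the opponent holds a strong hand (prior 20%).
# Evidence E: a big raise -- likely (70%) given a strong hand, rarer (10%) otherwise.
print(posterior(0.20, 0.70, 0.10))  # 0.14 / (0.14 + 0.08) ≈ 0.64
```

(The whole framework is this one update, iterated over more hypotheses and more evidence; the rest is bookkeeping.)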

I blame my education for leaving me ignorant about something so fundamental, but mostly I blame myself for not trying harder to learn about fundamentals on my own.

This site is really good for remedying that second bit. I have a goal to help fix the first bit -- I think we call it "raising the sanity waterline".

As a father, I also want to teach my son so he doesn't have the same regret and annoyance at my age.

comment by [deleted] · 2010-04-28T01:19:52.463Z · score: 3 (3 votes) · LW · GW

I'm just realizing this myself; probability theory is epistemology.

comment by [deleted] · 2010-04-28T23:09:40.837Z · score: 1 (1 votes) · LW · GW

Hm, may be a catchy line, but don't confuse the question with a particular answer...

comment by Mass_Driver · 2010-03-30T21:24:13.587Z · score: 6 (6 votes) · LW · GW

Hi everyone!

I'm graduating law school in May 2010, and then going to work in consumer law at a small firm in San Francisco. I'm fascinated by statistical political science, space travel, aikido, polyamory, board games, and meta-ethics.

I first realized that I needed to make myself more rational when I bombed an online confidence calibration test about 6 years ago; it asked me to provide 95% confidence intervals for 100 different pieces of numerical trivia (e.g. how many nukes does China have, how many counties are in the U.S., how many species of spiders are there), and I only got about 72 correct. I can't find the website anymore, which is frustrating; I like to think I would do better now.
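(A calibration test like the one described can be scored mechanically: count how often the true value lands inside the stated 95% interval. A minimal sketch; the intervals and true values below are made up purely for illustration:)

```python
def calibration_score(answers):
    """Fraction of true values falling inside the stated intervals."""
    hits = sum(1 for lo, hi, truth in answers if lo <= truth <= hi)
    return hits / len(answers)

# Hypothetical (interval_low, interval_high, true_value) triples.
answers = [
    (100, 400, 260),        # truth inside the interval
    (2000, 2500, 3143),     # truth outside the interval
    (30000, 60000, 45000),  # truth inside the interval
]

print(calibration_score(answers))  # 2 hits out of 3; well-calibrated 95% intervals should hit ~0.95
```

(Scoring 72/100 against a 95% target is strong evidence of overconfidence: under perfect calibration, misses follow roughly a Binomial(100, 0.05) distribution, so 28 misses is astronomically unlikely.)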

I am a pluralist about what should be achieved -- I believe there are several worthy goals in life, the utility of which cannot be meaningfully compared. However, I am passionately convinced that people should be consciously aware of their goals and should attempt to match their actions to their stated goals. Whatever kind of future we want, we are flabbergastingly unlikely to get it unless we identify and carry out the tasks that can lead us there.

Despite reading and pondering roughly 80 LW articles, together with some of their comments, I continue to believe a few things that will rub many LW readers the wrong way. My confidence in these beliefs has gone down, but is still over 50%. For example, I still believe in a naturalistic deity, and I still believe in ontologically basic consciousness. I am happy to debate these issues with individuals who are interested, but I do not plan on starting any top-level posts about them; I do not have the stamina or inclination to hold the field against an entire community of intelligent debaters all by myself.

I am not sure that I have anything to teach LW in the sense of delivering a prepared lecture, but I hope to contribute to discussions about how to best challenge Goodhart's Law in various applied settings.

Finally, thanks to RobinZ for the warm welcome!

comment by Karl_Smith · 2010-02-19T00:23:20.489Z · score: 6 (6 votes) · LW · GW

Name: Karl Smith

Location: Raleigh, North Carolina

Born: 1978

Education: Phd Economics

Occupation: Professor - UNC Chapel Hill

I've always been interested in rationality and logic but was sidetracked for many (12+) years after becoming convinced that economics was the best way to improve the lives of ordinary humans.

I made it to Less Wrong completely by accident. I was into libertarianism, which led me to Bryan Caplan, which led me to Robin Hanson (just recently). Some of Robin's stuff convinced me that cryonics was a good idea. I searched for cryonics and found Less Wrong. I have been hooked ever since. About 2 weeks now, I think.

Also, skimming this I see there is a 14-year-old on this board. I cannot tell you how that makes me burn with jealousy. To have found something like this at 14! Soak it in, Ellen. Soak it in.

comment by realitygrill · 2010-02-20T04:43:39.170Z · score: 0 (0 votes) · LW · GW

Awesome. I'd love to hang with you if I'm there next year; you don't have any connections to BIAC, do you? I just applied for a postbac fellowship there.

What's your specialty in econ?

comment by Karl_Smith · 2010-02-21T21:45:58.623Z · score: 0 (0 votes) · LW · GW

I don't have any connection to BIAC.

My specialty is human capital (education) and economic growth and development.

comment by realitygrill · 2010-02-24T04:20:30.005Z · score: 0 (0 votes) · LW · GW

Ah. I know something of the former and little of the latter. I'd presume your interests are much more normative than mine.

comment by wedrifid · 2010-02-24T04:31:26.415Z · score: 0 (0 votes) · LW · GW

Does the term 'normative' work in that context?

comment by Karl_Smith · 2010-02-24T17:25:50.145Z · score: 1 (1 votes) · LW · GW

Yes,

I could try to say that my work focuses only on understanding how growth and development take place, for example, but in practice it doesn't work that way.

A conversation with students, policy makers, even fellow economists will not go more than 5-10 minutes without taking a normative tack. Virtually everyone is in favor of more growth, and so the question is invariably, "what should we DO to achieve it?"

comment by Psilence · 2010-02-04T20:28:40.872Z · score: 6 (6 votes) · LW · GW

Hi all, my name's Drew. I stumbled upon the site from who-knows-where last week and must've put in 30-40 hours of reading already, so suffice it to say I've found the writing and discussions quite enjoyable so far. I'm heavily interested in theories of human behavior on both a psychological and moral level, so most of the subject matter is right up my alley. I was a big Hofstadter fan a few years back as well, so the AI and consciousness discussions are interesting too.

Anyway, thought I'd pop in and say hi, maybe I'll take part in some conversations soon. Looks like a great thing you've got going here.

comment by HughRistik · 2009-04-17T06:42:32.344Z · score: 6 (8 votes) · LW · GW
  • Handle: HughRistik (if you don't get the joke immediately, then say "heuristic" out loud)
  • Age: 23
  • Education: BA Psychology, thinking of grad school
  • Occupation: Web programmer
  • Hobbies: Art, clubbing, fashion, dancing, computer games, too many others to mention
  • Research interests: Mate preferences, sex differences, sex differences in mate preferences, biological and social factors in homosexuality, and the psychology of introversion, social anxiety, high sensitivity, and behavioral inhibition

I came to Less Wrong via Overcoming Bias. I heard a talk by Eliezer around 2004-2005, and I've run into him a couple times since then.

I've been interested in rationality as long as I can remember. I obsessively see patterns in the world and try to understand it better. I use this ability to get good at stuff.

I once had social anxiety disorder, no social skills, and no idea what to do with women (see love-shyness; I'm sure there are people on here who currently have it). Thanks to finding the seduction community, I figured out that I could translate thinking ability into social skills, and that I could get good at socializing just like how I got good at everything else. Through observation, practice, and theories from social psychology, evolutionary psychology, and the seduction community, I built social skills and abilities with women from scratch.

Meanwhile, I attempted to eradicate the disadvantages of my personality traits and scientific approach to human interaction. For instance, I learned to temporarily disable analytical and introverted mental states and live more in the moment. I started identifying errors and limiting aspects of the seduction community's philosophy and characterization of women and female preferences. While my initial goal was to mechanistically manipulate people into liking me by experimenting on them socially, an unexpected outcome occurred: I actually became a social person. I started to enjoy connecting with people and emotionally vibing. I cultivated social instincts, so that I no longer had to calculate everything cognitively.

In the back of my head, I've been working on a theory of sexual ethics, particularly the ethics of seduction.

I will write more about heuristics and the seduction community as I've promised, but I've been organizing thoughts for a top-level post, and figuring out whether I'm going to address those topics with analytical posts, or with more of a personal narrative, and whether I would mix them. Anyone have any suggestions or requests?

comment by jasonmcdowell · 2009-04-17T10:09:48.900Z · score: 1 (1 votes) · LW · GW

It sounds like you are currently very much pushing your personality where you want it to go. I would be interested in hearing about your transition from being shy to being comfortable with people. Do you still remember how you were?

I more or less consciously pushed myself into sociability when I was 12 and made a lot of progress. Previously I was much shyer. I've changed so much since then, it feels strange to connect with my earlier memories. I've also experienced "calculating" social situations, emulating alien behaviors - and then later finding them to have become natural and enjoyable.

For the past few years, I've just been coasting - I haven't changed much and I don't know how to summon up the drive I had before.

comment by HughRistik · 2009-04-19T01:38:39.758Z · score: 2 (2 votes) · LW · GW

Do you still remember how you were?

Yes, though the painfulness of the memory is fading.

I've also experienced "calculating" social situations, emulating alien behaviors - and then later finding them to have become natural and enjoyable.

Do you have a particular example? For me, one of them is smalltalk. I don't necessarily enjoy all smalltalk all the time, but I enjoy it a lot more than I ever thought that I would, back when I viewed it as "pointless" and "meaningless" (because I didn't understand that the purpose of most social communication is to share emotions, not to share interesting factual information and theories). Similar story with flirting.

With such social behaviors, everyone "learned" them at some point. Most people just learned them during their formative experiences. Some people, due to a combination of biological and social factors, learn this stuff later, or not at all. The cruel thing is that once you fall off the train, it's harder and harder to get back on. See the diagram here for a graphic illustration.

For the past few years, I've just been coasting - I haven't changed much and I don't know how to summon up the drive I had before.

I've gone through periods of growth, and periods of plateaus. Once I got to a certain level of slightly above average social skills, it became easy to get complacent with mediocrity. I start making progress again when I keep trying new things, going new places, and focusing on what I want.

comment by HughRistik · 2009-04-17T06:45:39.874Z · score: 1 (1 votes) · LW · GW

I am also interested in gender politics. I started off with reflexively feminist views, yet I soon realized flaws in certain types of feminism. As with religions, I think that there are some really positive goals and ideas in feminism, and some really negative ones, all mixed together with really bad epistemic hygiene.

There are more rational formulations of some feminist ideas, yet more rational feminists often fail to criticize less rational feminists (instead calling them "brilliant" and "provocative"), causing a quality control problem leading to dogmatism and groupthink. I am one of the co-bloggers on FeministCritics.org, where we try to take a critical but fair look at feminism and start dialogues with feminists. I'm not very active there anymore, but here's an example of the kind of epistemic objections that I make towards feminism.

My eventual goal is to formulate a gender politics that subsumes the good things about feminism.

comment by ektimo · 2009-04-16T16:54:06.631Z · score: 6 (6 votes) · LW · GW
  • Name: Edwin Evans
  • Location: Silicon Valley, CA
  • Age: 35

I read the "Meaning of Life FAQ" by a previous version of Eliezer in 1999 when I was trying to write something similar, from a Pascal’s Wager angle (even a tiny possibility of objective value is what should determine your actions). I've been a financial supporter of the Organization That Can't Be Named and a huge fan of Eliezer's writings since that same time. After reading "Crisis of Faith" along with "Could Anything Be Right?" I finally gave up on objective value; the "light in the sky" died. Feeling my mind change was an emotional experience that lasted about two days.

This is seriously in need of updating, but here is my home page.

By the way, would using Google AdWords be a good way to draw people to 12 Virtues? Here is an example from the Google keyword tool:

  • Search phrase: how to be better
  • Cost per click: $0.05
  • Approximate volume per month: 33,100

[Edit: added basic info/clarification/formatting]

comment by Paul Crowley (ciphergoth) · 2009-04-16T12:58:30.351Z · score: 6 (6 votes) · LW · GW

OK, let's get this started. There seems to be no way of doing this that doesn't sound like a personal ad.

As well as programming for a living, I'm a semi-professional cryptographer and cryptanalyst; read more on my work there. Another interest important to me is sexual politics; I am bi, poly and kinky, and have been known to organise events related to these themes (BiCon, Polyday, and a fetish nightclub). I get the impression that I'm politically to the left of much of this site; one thing I'd like to be able to talk about here one day is how to apply what we discuss to everyday politics.

comment by Alicorn · 2009-04-16T16:35:17.375Z · score: 4 (4 votes) · LW · GW

What would it look like to apply rationalist techniques to sexual politics? The best guess I have is "interesting", but I don't know in what way.

comment by HughRistik · 2009-04-16T17:03:41.583Z · score: 3 (3 votes) · LW · GW

Yes, it would be interesting. It would involve massively changing the current gender political programs on all sides, which are all ideologies with terrible epistemic hygiene. I'll try to talk about this more when I can.

comment by lifelonglearner · 2015-12-30T21:10:16.611Z · score: 5 (5 votes) · LW · GW

Hey everyone,

My name is Owen, and I'm 17. I read HPMOR last year, but really got into the Sequences and additional reading (GEB, Thinking Fast and Slow, Influence) around this summer.

I'm interested in time management, particularly in dealing with distractions and fighting akrasia. So I'm trying to use what I know about how my own brain operates to create a suite of internalized beliefs, primers, and defense strategies for when I get off-track (or to stop before I get to that point).

Personally, I'm connected with a local environmental movement, which stems from a fear I had a few years ago that global warming was the largest threat to humanity. This was before I looked into other x-risks. I'm now evaluating my priorities, and I'd also like to bring some critical thinking to the environmental movement, where I feel some EA ideals would make things more effective (prioritizing some actions over others, examining cost-benefits of actions, etc.).

Especially after reading GEB, I'm coming to realize that a lot of the things I hold central to my "identity" were rather arbitrarily decided and then maintained through a need to stay consistent. So I'm reevaluating my beliefs and assumptions (when I notice them) and asking if they are actually things I would like to maintain. A lot of this ties back to self-improvement with regard to time management.

In day-to-day life, it's hard to find others who have a similar info diet/reading background to mine, so I've made it a goal to get more friends and family interested in rationality, especially my (apparently) very grades-driven classmates. I feel this would lead to more constructive discussions and a better ability to look at the larger picture for most people.

Finally, I also perform close-up coin magic, which isn't too relevant to most aspects of rationality, but certainly looks pretty.

I look forward to sharing ideas and learning from you all here!

comment by gjm · 2015-12-30T23:42:55.642Z · score: 0 (0 votes) · LW · GW

Welcome!

comment by [deleted] · 2011-12-24T08:44:37.277Z · score: 5 (5 votes) · LW · GW

Uh...uhm...hello?

comment by Normal_Anomaly · 2011-12-24T18:24:01.240Z · score: 1 (1 votes) · LW · GW

Hi!

comment by [deleted] · 2011-10-18T18:25:15.582Z · score: 5 (5 votes) · LW · GW

Hello Lesswrong

I am a nameless, ageless, genderless internet-being who may sometimes act like a 22-year-old male from Canada. I have always been quite rational and consciously aiming to become more rational, though I had never read any actual discussion of rationality, unless you count cat-v. I did have some possibly wrong ideas that I protected with anti-epistemology, but that managed to collapse on its own recently.

I got linked to Less Wrong from Reddit. I didn't record the details, so don't ask. I do remember reading a few Less Wrong articles and thinking this is awesome. Then I read the Sequences. The formal treatment of rationality has really de-crufted my thinking. I'm still working on getting to a superhuman level of rationality, though.

I do a lot of thinking and I have some neat ideas to post. Can't wait.

Also, my human alter-ego is formally trained as a mechanical engineer.

I hope to contribute and make the world more awesome!

comment by kilobug · 2011-10-18T18:38:35.312Z · score: 0 (0 votes) · LW · GW

Welcome here !

comment by Phasmatis · 2011-09-10T21:19:29.932Z · score: 5 (5 votes) · LW · GW

Salutations, Less Wrong.

I'm an undergraduate starting my third year at the University of Toronto (in Toronto, Ontario, Canada), taking the Software Engineering specialist program in Computer Science.

I found Less Wrong through a friend, who found it through Harry Potter and the Methods of Rationality, who found that through me, and I found HP: MoR through a third friend. I'm working my way through the archive of Less Wrong posts (currently in March of 2009).

On my rationalist origins: one of my parents has a not-insignificant mental problem that results in subtle psychoses. I learned to favor empirical evidence and rationality in order to cope with the incongruity between reality and some of said parent's beliefs. It has been an ongoing experience since then, including an upbringing in both Protestant Anglicanism and Secular Humanistic Judaism; the dual religious background was a significant contributor towards both my rationalism and my atheism.

I eagerly anticipate interesting discussions here.

comment by kilobug · 2011-10-18T18:41:15.346Z · score: 0 (0 votes) · LW · GW

Welcome here !

comment by HopeFox · 2011-06-12T12:11:24.153Z · score: 5 (5 votes) · LW · GW

Hi, I've been lurking on Less Wrong for a few months now, making a few comments here and there, but never got around to introducing myself. Since I'm planning out an actual post at the moment, I figured I should tell people where I'm coming from.

I'm a male 30-year-old optical engineer in Sydney, Australia. I grew up in a very scientific family and have pretty much always assumed I had a scientific career ahead of me, and after a couple of false starts, it's happened and I couldn't ask for a better job.

Like many people, I came to Less Wrong from TVTropes via Methods of Rationality. Since I started reading, I've found that it's been quite helpful in organising my own thoughts, casting aside unhelpful arguments, and examining aspects of my life and beliefs that don't stand up under scrutiny.

In particular, I've found that reading Less Wrong has allowed, nay forced, me to examine the logical consistency of everything I say, write, hear and read, which allows me to be a lot more efficient in discussions, both by policing my own speech and being more usefully critical of others' points (rather than making arguments that don't go anywhere).

While I was raised in a substantively atheist household, my current beliefs are theist. The precise nature of these beliefs has shifted somewhat since I started reading Less Wrong, as I've discarded the parts that are inconsistent or even less likely than the others. There are still difficulties with my current model, but they're smaller than the issues I have with my best atheist theory.

I've also had a surprising amount of success in introducing the logical and rationalist concepts from Less Wrong to one of my girlfriends, which is all the more impressive considering her dyscalculia. I'm really pleased that this site has given me the tools to do that. It's really easy now to short-circuit what might otherwise become an argument by showing that it's merely a dispute about definitions. It's this sort of success that has kept me reading the site these past months, and I hope I can contribute to that success for other people.

comment by Kaj_Sotala · 2011-06-12T13:03:36.870Z · score: 1 (1 votes) · LW · GW

Welcome!

There are still difficulties with my current model, but they're smaller than the issues I have with my best atheist theory.

What issues does your best atheist theory have?

comment by HopeFox · 2011-06-12T13:42:47.742Z · score: -1 (5 votes) · LW · GW

What issues does your best atheist theory have?

My biggest problem right now is all the stuff about zombies, and how that implies that, in the absence of some kind of soul, a computer program or other entity capable of the same reasoning processes as a person is morally equivalent to a person. I agree with every step of the logic (I think; it's been a while since I last read the sequence), but I end up applying it in the other direction. I don't think a computer program can have any moral value; therefore, without the presence of a soul, people also have no moral value. Therefore I either accept a lack of moral value to humanity (both distasteful and unlikely), or accept the presence of something, let's call it a soul, that makes people worthwhile (also unlikely). I'm leaning towards the latter, both as the less unlikely, and as the one that produces the most harmonious behaviour from me.

It's a work in progress. I've been considering the possibility that there is exactly one soul in the universe (since there's no reason to consider souls to propagate along the time axis of spacetime in any classical sense), but that's a low-probability hypothesis for now.

comment by Oscar_Cunningham · 2011-06-12T16:49:09.226Z · score: 3 (3 votes) · LW · GW

In the spirit of your (excellent) new post, I'll attack all the weak points of your argument at once:

  • You define "soul" as:

    the presence of something, let's call it a soul, that makes people worthwhile

This definition doesn't give souls any of their normal properties, like being the seat of subjective experience, or allowing free will, or surviving bodily death. That's fine, but we need to be on the look-out in case these meanings sneak in as connotations later on. (In particular, the "Zombies" sequence doesn't talk about moral worth, but does talk about subjective experience, so its application here isn't straightforward. Do you believe that a simulation of a human would have subjective experience?)

  • "Souls" don't provide any change in anticipation. You haven't provided any mechanism by which other people having souls causes me to think that those other people have moral worth. Furthermore it seems that my belief that others have moral worth can be fully explained by my genes and my upbringing.

  • You haven't stated any evidence for the claim that computer programs can't have moral value, and this isn't intuitively obvious to me.

  • You've produced a dichotomy between two very unlikely hypotheses. I think the correct answer in this case isn't to believe the least unlikely hypothesis, but is instead to assume that the answer is some third option you haven't thought of yet. For instance you could say "I withhold judgement on the existence of souls and the nature of moral worth until I understand the nature of subjective experience".

  • The existence of souls as you've defined them doesn't imply theism. Not even slightly. (EDIT: Your argument goes: 'By the "Zombies" sequence, simulations are conscious. By assumption, simulations have no moral worth. Therefore conscious does not imply moral worth. Call whatever does imply moral worth a soul. Souls exist, therefore theism.' The jump between the penultimate and the ultimate step is entirely powered by connotations of the word "soul", and is therefore invalid.)

Also you say this:

I've been considering the possibility that there is exactly one soul in the universe (since there's no reason to consider souls to propagate along the time axis of spacetime in any classical sense), but that's a low-probability hypothesis for now.

(I'm sorry if what I say next offends you.) This sounds like one of those arguments clever people come up with to justify some previously decided conclusion. It looks like you've just picked a nice sounding theory out of hypothesis space without nearly enough evidence to support it. It would be a real shame if your mind became tangled up like an Escher painting because you were too good at thinking up clever arguments.

comment by Vladimir_Nesov · 2011-06-12T16:34:54.107Z · score: 3 (3 votes) · LW · GW

You don't need an additional ontological entity to reflect a judgment (and judgments can differ between different people or agents). You don't need special angry atoms to form an angry person, that property can be either in the pattern of how the atoms are arranged, or in the way you perceive their arrangement. See these posts:

comment by Kaj_Sotala · 2011-06-12T15:21:55.546Z · score: 2 (2 votes) · LW · GW

Thanks.

Can you be more specific about what you mean by a soul? To me, it sounds like you're just using it as a designation of something that has moral value to you. But that doesn't need to imply anything supernatural; it's just an axiom in your moral system.

comment by jimrandomh · 2011-06-12T14:36:06.287Z · score: 2 (2 votes) · LW · GW

I don't think a computer program can have any moral value, therefore, without the presence of a soul, people also have no moral value.

It's hard to build intuitions about the moral value of intelligent programs right now, because there aren't any around to talk to. But consider a hypothetical that's as close to human as possible: uploads. Suppose someone you knew decided to undergo a procedure where his brain would be scanned and destroyed, and then a program based on that scan was installed on a humanoid robot body, so that it would act and think like he did; and when you talked to the robot, he told you that he still felt like the same person. Would that robot and the software on it have moral value?

comment by Perplexed · 2011-06-12T16:19:44.072Z · score: 0 (0 votes) · LW · GW

... consider a hypothetical that's as close to human as possible: uploads.

I would have suggested pets. Or the software objects of Chiang's story.

It is interesting that HopeFox's intuitions rebel at assigning moral worth to something that is easily copied. I think she is on to something. The pets and Chiang software objects which acquire moral worth do so by long acquaintance with the bestower of worth. In fact, my intuitions do the same with the humans whom I value.

comment by Peterdjones · 2011-06-12T16:46:02.668Z · score: -1 (1 votes) · LW · GW

I agree that HopeFox is onto something there: most people think great works of art, or unique features of the natural world, have value, but that has nothing to do with having a soul... it has to do with irreducibility. An atom-by-atom duplicate of the Mona Lisa would not be the Mona Lisa; it would be a great work of science...

comment by Perplexed · 2011-06-12T17:09:23.140Z · score: 1 (1 votes) · LW · GW

... that has nothing to do with having a soul.

Well, it has nothing to do with what you think of as a 'soul'.

Personally, I'm not that taken with the local tendency to demand that any problematic word be tabooed. But I think that it might have been worthwhile to make that demand of HopeFox when she first used the word 'soul'.

Given my own background, I immediately attached a connotation of immortality upon seeing the word. And for that reason, I was puzzled at the conflation of moral worth with possession of a soul. Because my intuition tells me I should be more respectful of something that I might seriously damage than of someone that can survive anything I might do to it.

comment by HopeFox · 2011-06-12T16:04:15.504Z · score: 0 (2 votes) · LW · GW

I agree, intuition is very difficult here. In this specific scenario, I'd lean towards saying yes - it's the same person with a physically different body and brain, so I'd like to think that there is some continuity of the "person" in that situation. My brain isn't made of the "same atoms" it was when I was born, after all. So I'd say yes. In fact, in practice, I would definitely assume said robot and software to have moral value, even if I wasn't 100% sure.

However, if the original brain and body weren't destroyed, and we now had two apparently identical individuals claiming to be people worthy of moral respect, then I'd be more dubious. I'd be extremely dubious of creating twenty robots running identical software (which seems entirely possible with the technology we're supposing) and assigning them the moral status of twenty people. "People", of the sort deserving of rights and dignity and so forth, shouldn't be the sort of thing that can be arbitrarily created through a mechanical process. (And yes, human reproduction and growth is a mechanical process, so there's a problem there too.)

Actually, come to think of it... if you have two copies of software (either electronic or neuron-based) running on two separate machines, but it's the same software, could they be considered the same person? After all, they'll make all the same decisions given similar stimuli, and thus are using the same decision process.

comment by MixedNuts · 2011-06-12T16:17:26.864Z · score: 3 (3 votes) · LW · GW

Yes, the consensus seems to be that running two copies of yourself in parallel doesn't give you more measure or moral weight. But if the copies receive different inputs, they'll eventually (frantic handwaving) diverge into two different people who both matter. (Maybe when we can't retrieve Copy-A's current state from Copy-B's current state and the respective inputs, because information about the initial state has been destroyed?)

comment by hairyfigment · 2011-06-13T00:25:52.670Z · score: 0 (0 votes) · LW · GW

Have you read the quantum physics sequence? Would you agree with me that nothing you learn about seemingly unrelated topics like QM should have the power to destroy the whole basis of your morality?

comment by Laoch · 2011-06-12T16:18:19.773Z · score: 0 (0 votes) · LW · GW

I'd love to know why moral value => presence of a soul? Also theist is a very vague term taken by itself could mean anything. Care to enlighten us?

comment by Oscar_Cunningham · 2011-06-12T13:14:58.606Z · score: 0 (0 votes) · LW · GW

Welcome!

I'm planning out an actual post at the moment

Exciting! What's it about?

comment by HopeFox · 2011-06-12T13:23:13.792Z · score: 1 (3 votes) · LW · GW

It's about how, if you're attacking somebody's argument, you should attack all of the bad points of it simultaneously, so that it doesn't look like you're attacking one and implicitly accepting the others. With any luck, it'll be up tonight.

comment by [deleted] · 2011-06-12T06:45:06.434Z · score: 5 (5 votes) · LW · GW

I'm 17 and I'm from Australia.

I've always been interested in science, learning, and philosophy. I've had correct thinking as a goal in my life since reading a book by John Stossel when I was 13.

I first studied philosophy at school in grade 10, when I was 14 and 15. I loved the mind/body problem, and utilitarianism was the coolest thing ever. I had great fun thinking about all these things, and was fairly good at it. I gave a speech about the ethics of abortion last year which I feel really did strike to the heart of the matter, and worked as a good use of rationality, albeit untrained.

I came across Less Wrong via Three Worlds Collide, via TV Tropes, last September. I then read HPMOR. By this point, I was convinced Eliezer Yudkowsky was the awesomest guy ever. He had all the thoughts I wanted to have, but wasn't smart enough to. I read everything on his website, then started trying to read the Sequences. They were hard for me to understand, but I got some good points from them. I attended the National Youth Science Forum in January this year, and spent the whole time trying to explain the Singularity to people. Since then I've made my way through most of Eliezer's writings. I agree with most of what he says, except for bits which I might just not understand, like the Zombies sequence, and some of his more out-there claims.

But yeah. Since reading his stuff, I've become stronger. Self-improvement is now more explicitly one of my goals. I have tried harder to consider my beliefs. I have learnt not to get into pointless arguments. One of the most crucial lessons was the "learning to lose" from HPMOR. It has saved me from more than a few nasty situations.

What can I contribute here? Nothing much as of yet. If I know anything, it's the small segment of rationality I've learned here. I'm good at intuitively understanding philosophy and math, but not special by Less Wrong standards.

One thing I do believe in strongly is the importance of mentoring people younger than you. I know two kids a bit younger than me. One is a really smart sciency kid, one a really talented musicish kid. I think that by linking them to good science and good music, I can increase their rate of improvement. I wish that someone had told me about, for instance, Bayes's Theorem, or FAI, or Taylor series, when I was younger. You need a teacher. Sadly, there are no textbooks on this topic. But random walks through Wikipedia are a slow, frustrating way to learn when you're a curious 14-year-old.

And so yeah. Pleased to meet you kids.

comment by XiXiDu · 2011-06-12T11:42:35.636Z · score: 4 (4 votes) · LW · GW

He had all the thoughts I wanted to have, but wasn't smart enough to.

You are 17. See Yudkowsky_1998; there is room for improvement at any age.

comment by [deleted] · 2011-06-12T23:55:37.613Z · score: 1 (1 votes) · LW · GW

Yeah, you're right. The difference is, he made mistakes that I also wouldn't have thought of, and expressed himself better as he did so.

Hey, I'm not despairing that I'll ever be cool, just find it unlikely I'll ever be as cool as him.

comment by cousin_it · 2011-06-13T00:03:06.204Z · score: 3 (3 votes) · LW · GW

"You don't become great by trying to be great. You become great by wanting to do something, and then doing it so hard that you become great in the process."

-- Randall Munroe

comment by dvasya · 2011-05-14T19:08:06.551Z · score: 5 (5 votes) · LW · GW
  • Handle: dvasya (from Darth Vasya)
  • Name: Vasilii Artyukhov
  • Location: Houston, TX (USA)
  • Age: 26
  • Occupation: physicist doing computational nanotechnology/materials science/chemistry, currently in a postdoctoral position at Rice University. Also remotely connected to the anti-aging field, as well as cryopreservation. Not personally interested in AI because I don't understand it very well (though I do appreciate its importance adequately), but who knows -- maybe that could change with prolonged exposure to LW :)

comment by MrMind · 2011-04-20T12:12:55.245Z · score: 5 (5 votes) · LW · GW

Hello everybody, I'm Stefano from Italy. I'm 30, and my story of becoming a rationalist is quite tortuous... as a kid I was raised as a Christian, but not strictly so: my only obligation was to attend mass every Sunday morning. At the same time, from a young age I was fond of esoteric and scientific literature... With hindsight, I was a strange kid: by the age of 13 I already knew quite a lot about such things as the Order of the Golden Dawn or General Relativity... My fascination with computers and artificial intelligence began at approximately the same age, when I met a teacher who first taught me how to program: I then realized that this would be one of my greatest passions. To cut a long story short, over the years I discarded all the esoteric nonsense (by means of... well, experiments) and proceeded to explore deeper and deeper within physics, math and AI.

I found this site some months ago, and after a reasonable look around and having read a fair amount of the Sequences, I feel ready to contribute... so here I am.

comment by MinibearRex · 2011-04-02T17:59:44.740Z · score: 5 (5 votes) · LW · GW

I started posting a while ago (and was lurking for a while beforehand), and only today found this post.

My parents were both science teachers, and I got an education in traditional rationality basically since birth (I didn't even know it had such a name as "traditional rationality"; I assumed it was just how you were supposed to think). I've always used that experimental mindset in order to understand people and the rest of the universe. I'm an undergrad in the Plan II honors program at the University of Texas at Austin, majoring in Chemistry Pre-Med. A friend of mine found HP:MoR on StumbleUpon and shared it with me. I caught up with the story very quickly, and one day, as I was bored waiting for Eliezer to post the next chapter, I came to Less Wrong. I lurked for a long time, read the Sequences, and adopted technical rationality. One day I had something to say, so I created an account.

Goal in life: Astronaut.

comment by jslocum · 2011-03-03T17:10:00.205Z · score: 5 (5 votes) · LW · GW

Hello, people.

I first found Less Wrong when I was reading sci-fi stories on the internet and stumbled across Three Worlds Collide. As someone who places a high value on the ability to make rational decisions, I decided that this site is definitely relevant to my interests. I started reading through the sequences a few months ago, and I recently decided to make an account so that I could occasionally post my thoughts in the comments. I generally only post things when I think I have something particularly insightful to say, so my posts tend to be infrequent. Since I am still reading through the sequences, you probably won't be seeing me commenting on any of the more recent posts for a while.

I'm 21 years old, and I live in Cambridge, Mass. I'm currently working on getting a master's degree in computer science. My classes for the spring term are in machine vision and computational cognitive science; I have a decent background in AI-related topics. Hopefully I'll be graduating in August, and I'm not quite sure what I'll be doing after that yet.

comment by flori86 · 2010-11-08T16:23:10.033Z · score: 5 (5 votes) · LW · GW

I'm Floris Nool, a 24-year-old recently graduated Dutch ex-student. I came across this site while reading Harry's new rational adventures, which I greatly enjoy by the way. I must say I'm intrigued by several of the subjects being talked about here. Although not everything makes sense at first, and I'm still working my way through the immense amount of interesting posts on this site, I find myself endlessly scrolling through posts and comments.

The last few years I increasingly find myself trying to understand things: why they are the way they are, why I act like I do, etc. I read about the greater scientific theories and try to relate them to everyday life. While I do not understand as much as I want to, and probably never will given the amount of information and theories out there, I hope to come to a greater understanding of basically everything.

It's great to see so many people talking about these subjects, as in daily life hardly anyone seems to think about them like I do. That can be rather frustrating when I try to talk about subjects I find interesting.

I hope someday to contribute to the community as I see other posters do, but until I feel comfortable enough with my understanding of everything going on here, I will stay lurking for a while. Having discovered the site only two days ago doesn't exactly help.

comment by hangedman · 2010-10-13T21:41:31.195Z · score: 5 (5 votes) · LW · GW

Hi LW,

My name's Dan LaVine. I forget exactly how I got linked here, but I haven't been able to stop following internal links since.

I'm not an expert in anything, but I have a relatively broad/shallow education across mathematics and the sciences and a keen interest in philosophical problems (not quite as much interest in traditional approaches to the problems). My tentative explorations of these problems are broadly commensurate with a lot of the material I've read on this site so far. Maybe that means I'm exposing myself to confirmation bias, but so far I haven't found anywhere else where these ideas or the objections to them are developed to the degree they are here.

My aim in considering philosophical problems is to try to understand the relationship between my phenomenal experience and whatever causes it may have. Of course, it's possible that my phenomenal experience is uncaused, but I'm going to try to exhaust alternative hypotheses before resigning myself to an entirely senseless universe. Which is how I wind up as a rationalist -- I can certainly consider such possibilities as the impossibility of knowledge, that I might be a Boltzmann brain, that I live in the Matrix, etc., but I can't see any way to prove or provide evidence of these things, and if I take the truth of any of them as foundational to my thinking, it's hard to see what I could build on top of them.

Looking forward to reading a whole lot more here. Hopefully, I'll be able to contribute at least a little bit to the discussion as well.

comment by CronoDAS · 2010-10-13T22:29:11.080Z · score: 1 (1 votes) · LW · GW

Welcome!

comment by danield · 2010-08-30T10:52:01.585Z · score: 5 (5 votes) · LW · GW

Hi Less Wrong,

I'm a computer scientist currently living in Seattle. I used to work for Google, but I've since left to work on a game-creation-software startup. I came to Less Wrong to see Eliezer's posts about Friendly AI and stayed because a lot of interesting philosophical discussion happens here. It's refreshing to see people engaging earnestly with important issues, and the community is supportive rather than combative; nice work!

I'm interested in thinking clearly about my values and helping other people think about theirs. I was surprised to see that there hasn't been much discussion here about moral, animal-suffering-based vegetarianism or veganism. It seems to me that this is a simple, but high-impact, step towards reflective equilibrium. Has there been a conclusive argument against it here, or is everyone on LW already vegetarian (I wish)?

I'd be very happy to talk with anyone about moral vegetarianism in a PM or in a public setting. Even if you don't want to discuss it, I encourage you to think about it; my relationship with animals was a big inconsistency in my value system, and in retrospect it was pretty painless to patch, since the arguments are unusually straightforward and the causal chain is short.

comment by Morendil · 2010-08-30T10:58:34.837Z · score: 2 (2 votes) · LW · GW

See this thread for a prior discussion.

comment by danield · 2010-08-30T11:02:01.286Z · score: 0 (0 votes) · LW · GW

I must've missed it in my search. I'll post over there, thanks.

comment by jacob_cannell · 2010-09-01T23:19:46.527Z · score: 0 (0 votes) · LW · GW

I've since left to work on a game-creation-software startup

Does this startup have a website or anything? I'm working for a gaming tech startup at the moment in a different area, but I'm quite interested in game-creation software.

comment by danield · 2010-09-02T22:52:26.066Z · score: 0 (0 votes) · LW · GW

No, no website-- it's just me right now, and work started about a week ago, so it'll be a while yet. Calling it a "startup" is just a way to reassure my parents that I'm doing something with my time :)

The basic premises behind my approach to game-creation software are:

  • The game must always be runnable
  • The game must be easily shareable to OSX and Windows users
  • The user cannot program (variables, threading, even loops and math should be as scarce as possible)
  • Limitations must be strict (don't let users try to create blockbuster-level games, or they'll become discouraged and stop trying)

I'd like to get a working prototype up, send it around to a few testers, and iterate before getting sidetracked into web design. I've found that I can sink a distressing amount of time into getting my CSS "just right". I'll definitely put you on my list for v.0.5 if you PM me an email address.

I see you did a game startup for some years; any tips for someone just starting out? And does your current venture have a website?

comment by thomblake · 2010-08-30T12:56:56.114Z · score: 0 (0 votes) · LW · GW

There is a rather high incidence of vegetarianism around here, but it's certainly not universal.

comment by aurasprw · 2010-08-24T00:33:18.974Z · score: 5 (5 votes) · LW · GW

Hey Lesswrong! I'm just going to ramble for a second..

I like art, social sciences, philosophy, gaming, rationality and everything that falls in between. Examples include Go, Evolutionary Psychology, Mafia (aka Werewolves), Improvisation, Drugs and Debate.

See you if I see you!

comment by EchoingHorror · 2010-07-26T20:41:09.486Z · score: 5 (5 votes) · LW · GW

Hello, community. I'm another recruit from Harry Potter and the Methods of Rationality. After reading the first few chapters and seeing that it lacked the vagueness, unbending archetypes, and overt "because the author says so" theme that usually drives me away from fiction, and then reading Less Wrong's (Eliezer's?) philosophy of fanfiction, I proceeded to read through the Sequences.

After struggling with the question of when I became a rationalist, I think the least wrong answer is that I just don't remember. I both remember less of my childhood than others seem to and developed more quickly. I could rationalize a few things, but I don't think that's going to be helpful.

Anyway, I'm 21 with an A.A. in Nothing in Particular and going for a B.S. in Mathematics and maybe other useful majors in November.

P.S. Quirrell FTW

comment by EStokes · 2010-07-26T22:38:04.419Z · score: 1 (1 votes) · LW · GW

Welcome!

There's a MOR discussion thread, if you hadn't seen it.

P.S. "Sometimes, when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been... But the stars are so very, very far away... And I wonder what I would dream about, if I slept for a long, long time." Quirrell FTW, indeed.

comment by arundelo · 2010-07-26T22:14:49.640Z · score: 1 (1 votes) · LW · GW

Less Wrong's (Eliezer's?)

Yes.

comment by AndyCossyleon · 2010-07-08T21:04:21.860Z · score: 5 (5 votes) · LW · GW

deleted

comment by SilasBarta · 2010-07-08T21:27:17.481Z · score: 4 (4 votes) · LW · GW

Welcome to Less Wrong! You seem to know your way around pretty well already! Thanks for introducing yourself.

Also, I really appreciate this:

I alternatively describe myself as a naturalistic pantheist, since the Wikipedia article on it nails my self-perception on the head, not to mention it's less confrontational ...

The article says this of naturalistic pantheism:

Naturalistic pantheism (also known as Scientific Pantheism) is a naturalistic form of pantheism that encompasses feelings of reverence and belonging towards Nature and the wider Universe, concern for the rights of humans and all living beings, care for Nature, and celebration of life. It is realist and respects reason and the scientific method. It is based on philosophical naturalism and as such it is without belief in supernatural realms, afterlives, beings or forces

Wow, I had no idea you could believe all that and still count as a kind of theism! Best. Marketing. Ever.

comment by Paul Crowley (ciphergoth) · 2010-07-22T12:03:36.015Z · score: 2 (2 votes) · LW · GW

Richard Dawkins:

Pantheism is sexed-up atheism. Deism is watered-down theism.

comment by DanielVarga · 2010-07-22T11:53:55.841Z · score: 0 (0 votes) · LW · GW

Best. Marketing. Ever.

So true. From now on, depending on who I am talking to, I will call myself either a reductionist materialist humanist or a naturalistic pantheist. :)

comment by srjskam · 2010-06-15T23:17:22.435Z · score: 5 (5 votes) · LW · GW

Heikki, 30, Finnish student of computer engineering. Found Less Wrong via the IRC channel of the Finnish Transhumanist Association, which I found through random surfing ("Oh, there's a name for what I am?")

As for becoming a rationalist, I'd say the recipe was no friends and a good encyclopedia... Interest in ideas, unhindered by the baggage of standard social activities. One of the most influential single things was probably finding evolution quite early on. I remember (might be a false memory) having thought it would sure make sense if a horse's hoof was just one big toe, and then finding the same classic observation explained in the mentioned encyclopedia... That or dinosaurs. Anyway, fast forward via teenage bible-bashing and a fair amount of (hard) scifi etc. to now being here.

As the first sentence might suggest, I'm not doing, nor have I done, anything of much interest to anyone. Well. Back to lurking; thanks to SilasBarta for the friendly welcome :) .

comment by mitchellb · 2010-04-30T15:38:57.848Z · score: 5 (5 votes) · LW · GW

Hi, I'm Michèle. I'm 22 years old and studying biology in Germany. My parents are atheists and so am I.
I stumbled upon this blog, started reading, and couldn't stop. Nearly every topic is very interesting to me, and I'm really glad I found people to talk about these things with! Sometimes I find myself overemotional and unable to get the whole picture of a situation. I'm trying to work on that, and I hope to gain some insight reading this blog.

comment by chillaxn · 2010-04-29T01:54:22.587Z · score: 5 (5 votes) · LW · GW

Hi. I'm Cole from Maryland. I found this blog through a list of "greatest blogs of the year"; I've forgotten who published that list.

I'm in my 23rd year. I value happiness and work to spread it to others. I've been reading this blog for about a month. I enjoy reading blogs like this, because I'm searching for a sustainable lifestyle to start after college.

Cheers

comment by Lorenzo · 2010-04-19T20:57:03.121Z · score: 5 (5 votes) · LW · GW

Huh, I guess I should have come here earlier...

I'm Lorenzo, 31, from Madrid, Spain (but I'm Italian). I'm an evolutionary psychologist, or try to be, working on my PhD. I'm also doing a Master's Degree in Statistics, in which I discovered (almost by accident) the Bayesian approach. As someone with a longstanding interest in making psychology become a better science, I've found this blog a very good place for clarifying ideas.

I've been a follower of Less Wrong since reading Eliezer's essays on Bayesian reasoning some 3-4 months ago. I've known Bayes' theorem for quite a long time, but knew little or nothing about the Bayesian approach to probability theory. The frequentist paradigm dominates much of psychology, which is a shame, because I think Bayesian reasoning is much better suited to the study of the mind. There is still a lot of misunderstanding about what a Bayesian approach entails, at least in this part of the world. Oh, well. We'll deal with it.

Thanks and keep up the good work!

comment by [deleted] · 2010-02-21T21:39:55.882Z · score: 5 (5 votes) · LW · GW

Hello All,

my name is Markus, and I just decided, after, well, years? of lurk-jumping from SL4 to Overcoming Bias to Less Wrong, that maybe I should participate in one or another discussion; not doing so seems to lead to a constant increase of things I have a feeling I know, but which actually fall flat on the first occasion another person poses a question.

The process of finding my way to (the then non-existent) LW started during senior high, when I somehow got interested in philosophy, and soon enough in AI. The interest in AI led to an interest in Weiqi (Chess had already been publicly shot down a handful of years earlier), which led to an interest in Eastern philosophy, which led to (interest in, not really doing) Zen, which led to frustration, and back to the start. I was playing trumpet during those times, too; as a consequence of all these interests, I did, well, not so much productive stuff. Procrastination is an often-discussed topic here; I was and am of type A: do nothing. Well, I played Quake. Now I click links on Facebook.

I would still not call myself a rationalist by execution, but just by aspiration. However, from my philosophical gut-level feeling, just everything else does not make any sense.

I am somehow missing the real-life link; for people with IQ << 160, who are not working on AI or similarly hard topics, I cannot see the potential of the full-blown Bayesian BFG; just doing what the consensus takes to be the best choice is most often the only thing one can do, lacking any data, and even more often lacking competence. I really do have a hard time seeing the practical benefits.

So, this one is getting too long already, I'm a chatty person...

Just for completeness, on "what you're doing": I'm currently working as a part-time software developer, and am a philosophy/math/computer science/electrical engineering college-dropout.

BTW, as English is not my mother-tongue, I often fall-back to the dictionary when writing in it; if some things seem to be taken from an overly strange thesaurus, or of especially unorthodox style, you now know why.

comment by realitygrill · 2010-02-24T04:23:57.921Z · score: 0 (0 votes) · LW · GW

I wonder how many of us play Weiqi/Igo/Baduk? I only play sporadically now but it was a bit of an obsession for a time.

comment by wedrifid · 2010-02-24T04:27:56.403Z · score: 0 (0 votes) · LW · GW

There's a few people who have reported liking Go. Is that the same game?

comment by Morendil · 2010-02-24T08:49:42.463Z · score: 0 (0 votes) · LW · GW

Yep. Peter de Blanc and I are currently "playing for a cause", the game is here.

comment by Leafy · 2010-02-18T23:45:46.512Z · score: 5 (5 votes) · LW · GW

Hi everyone.

My name is Alan Godfrey.

I am fascinated by rational debate and logical arguments, and I appear to have struck gold in finding this site! I am the first to admit my own failings in these areas but am always willing to learn and grow.

I'm a graduate of mathematics from Trinity Hall, Cambridge University, and probability and statistics have always been my areas of expertise - although I find numbers much more pleasant to play with than theorems and proofs, so bear with me!

I'm also a passive member of Mensa. While most of it does not interest me, the numerical, pattern-spotting and spatial-awareness puzzles it is associated with have always been a big passion of mine.

I have a personal fascination with human psychology, especially my own, in a narcissistic way! Although I have no skill in this area.

I currently work for a specialist insurance company and head the catastrophe modelling function, which uses a baffling mixture of all of the above! It was through this that I attended a brief seminar at the 21st Century School in Oxford which mentioned this site as an affiliation although I had already found it a few months previously.

I come to this site with open eyes and an open mind. I hope to contribute insightful observation, engage in healthy discussion and ultimately come away better than I came in.

comment by bgrah449 · 2010-02-19T00:06:42.020Z · score: 0 (0 votes) · LW · GW

Out of curiosity, are you an actuary?

comment by Leafy · 2010-02-19T08:45:08.723Z · score: 0 (0 votes) · LW · GW

Actually, no, I am not. I began studying for the Actuarial exams when I started work, and passed the ones I took, but stopped studying 3 years ago.

I found them very interesting but sadly of only minor relevance to the work I was doing, and since I was not intending to become an Actuary, and therefore was not being afforded any study leave in which to progress, I decided to focus my spare time on my own career path instead.

Why do you ask?

comment by ThomasRyan · 2010-02-02T17:48:29.445Z · score: 5 (5 votes) · LW · GW

Hello.

Call me Thomas. I am 22. The strongest force directing my life can be called an extreme phobia of disorder. I came across Overcoming Bias and Eliezer Yudkowsky's writings around the same time, in high school, shortly after reading GEB and The Singularity Is Near.

The experience was not a revelation but a relief. I am completely sane! Being here is solace. The information here is mostly systematized, which has greatly helped to organize my thoughts on rationality and has saved me a great amount of time.

I am good at tricking people into thinking I am smart, which you guys can easily catch. And I care about how you guys will perceive me, which means that I have to work hard if I want to be a valuable contributor - something I am not used to (working hard), since I do good-enough work with minimal effort.

My greatest vices are romantic literature, smooth language, and flowery writing. From Roman de la Rose, to The Knight's Tale, to Paradise Lost, to One Hundred Years of Solitude. That crap is like candy to me.

Bad music repulses me. I get anxious and irritable and will probably throw a fit if I don't get away from the music. Anything meticulous, or tedious, will make me antsy and shaky. Bad writing also has the same effect on me. Though, I am punctilious. There's a difference.

My favorite band is Circulatory System, which speaks directly to my joys and fears and hopes. If you haven't listened to them, I highly recommend you do so. The band name means "Human." It is about what it means to be us, about the circular nature of our sentience, and about the circles drawn in history with every new generation. http://www.youtube.com/watch?v=a_jidcdzXuU

I have opted out of college. I do not learn well in lectures. They are too slow, tedious, and meticulous. Books hold my attention better.

My biggest mistake? In school, never practicing retaining information. I do not have the months memorized, and my vocabulary is terrible. It was much more fun to use my intelligence to "get the grade" than it was to memorize information. Now this is biting me on the butt. I need to start practicing memorizing things.

I am currently in a good situation. My mom got a job far from her house, and she has farm animals. I made a deal with her, where I watch her house and the animals for free if she lets me stay there. I will be in this position for at least another year.

I have enough web design skills to be useful to web design firms, which brings me my income. I am also a hobbyist programmer, though not good enough yet to turn that skill into money.

I want to teach people to be more rational; that's what I want to do with my life. I am far from being the writer I want to be, and I have not yet made my ideas congruent and clear.

Anybody with good recommendations on how to best spend this year?

Thomas.

comment by Paul Crowley (ciphergoth) · 2010-02-02T18:38:59.411Z · score: 0 (0 votes) · LW · GW

Hello, and welcome to the site!

comment by ThomasRyan · 2010-02-02T20:06:49.681Z · score: 1 (1 votes) · LW · GW

Thank you, I'll be seeing you around :) .

Anyway, I have been thinking of starting my year off by reading Chris Langan's CTMU, but I haven't seen anything written about it here or on OB. And I am very wary of what I put into my brain (including LSD :P).

Any opinions on the CTMU?

comment by Paul Crowley (ciphergoth) · 2010-02-02T20:17:45.318Z · score: 3 (3 votes) · LW · GW

Google suggests you mean this CTMU.

Looks like rubbish to me, I'm afraid. If what's on this site interests you, I think you'll get a lot more out of the Sequences, including the tools to see why the ideas in the site above aren't really worth pursuing.

comment by ThomasRyan · 2010-02-02T20:51:33.414Z · score: 1 (1 votes) · LW · GW

Introduction to the CTMU

Yeah, I know what it looks like: metaphysical rubbish. But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense. Also, from what I skimmed, it looks like a much deeper examination of reductionism and strange loops, which are ideas that I hold dear.

I've read and understand the sequences, though I'm not familiar enough with them to use them without a rationalist context.

comment by Morendil · 2010-02-02T21:49:45.578Z · score: 6 (8 votes) · LW · GW

However intelligent he is, he fails to present his ideas so as to gradually build a common ground with lay readers. "If you're so smart, how come you ain't convincing?"

The "intelligent design" references on his Wikipedia bio are enough to turn me away. Can you point us to a well-regarded intellectual who has taken his work seriously and recommends his work? (I've used that sort of bridging tactic at least once, Dennett convincing me to read Julian Jaynes.)

comment by Cyan · 2010-02-02T22:08:53.200Z · score: 5 (5 votes) · LW · GW

"If you're so smart, how come you ain't convincing?"

"Convincing" has long been a problem for Chris Langan. Malcolm Gladwell relates a story about Langan attending a calculus course in first year undergrad. After the first lecture, he went to offer criticism of the prof's pedagogy. The prof thought he was complaining that the material was too hard; Langan was unable to convey that he had understood the material perfectly for years, and wanted to see better teaching.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T21:28:09.256Z · score: 5 (5 votes) · LW · GW

But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense.

Eh, I'm smart too. Looks to me like you were right the first time and need to have greater confidence in yourself.

comment by Morendil · 2010-02-02T21:55:01.024Z · score: 1 (1 votes) · LW · GW

More to the point, you do not immediately fail the "common ground" test.

Pragmatically, I don't care how smart you are, but whether you can make me smarter. If you are so much smarter than I as to not even bother, I'd be wasting my time engaging your material.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T22:05:18.144Z · score: 3 (7 votes) · LW · GW

I should note that the ability to explain things isn't the same attribute as intelligence. I am lucky enough to have it. Other legitimately intelligent people do not.

comment by Morendil · 2010-02-02T22:11:30.549Z · score: 0 (0 votes) · LW · GW

If your goal is to convey ideas to others, instrumental rationality seems to demand you develop that capacity.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T22:26:40.993Z · score: 3 (3 votes) · LW · GW

Considering the extraordinary rarity of good explainers in this entire civilization, I'm saddened to say that talent may have something to do with it, not just practice.

comment by realitygrill · 2010-02-20T17:28:52.833Z · score: 0 (0 votes) · LW · GW

I wonder what I should do. I'm smart, and I seem to be able to explain things that I know to people well... but to my lament, I have the same problem as Thomas: I apparently suck at learning things so that they're internalized and in my long-term memory.

comment by Username · 2015-06-09T16:08:08.912Z · score: -6 (6 votes) · LW · GW

easy now

comment by MrHen · 2010-02-02T22:06:30.292Z · score: 1 (1 votes) · LW · GW

I can learn from dead people, stupid people, or by watching a tree for an hour. I don't think I understand your point.

comment by Morendil · 2010-02-02T22:22:12.185Z · score: 0 (0 votes) · LW · GW

I didn't use the word "learn". My point is about a smart person conveying their ideas to someone. Taboo "smart". Distinguish the ability to reach goals from the ability to score high on mental aptitude tests. If they are goal-smart, and their goal is to convince, they will use their iq-smarts to develop the capacity to convince.

comment by mattnewport · 2010-02-02T21:23:58.610Z · score: 5 (5 votes) · LW · GW

Being very intelligent does not imply not being very wrong.

comment by MartinB · 2010-11-02T02:42:59.691Z · score: 1 (1 votes) · LW · GW

You just get to make bigger mistakes than others. From the YouTube videos, Langan looks like a really bright fellow who has a very broken toolbox, and little correction. Argh!

comment by pjeby · 2010-11-02T18:40:25.713Z · score: 2 (2 votes) · LW · GW

Yeah, I know what it looks like: meta-physical rubbish.

It is. I got as far as this paragraph of the introduction to his paper before I found a critical flaw:

Of particular interest to natural scientists is the fact that the laws of nature are a language. To some extent, nature is regular; the basic patterns or general aspects of structure in terms of which it is apprehended, whether or not they have been categorically identified, are its “laws”. The existence of these laws is given by the stability of perception.

At this point, he's already begging the question, i.e. presupposing the existence of supernatural entities. These "laws" he's talking about are in his head, not in the world.

In other words, he hasn't even got done presenting what problem he's trying to solve, and he's already got it completely wrong, and so it's doubtful he can get to correct conclusions from such a faulty premise.

comment by Tuukka_Virtaperko · 2012-01-05T22:04:40.045Z · score: 0 (4 votes) · LW · GW

That's not a critical flaw. In metaphysics, you can't take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.

Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature and I think he conducts his business about the way an ordinary person with such aims would. He's not a literary genius like Robert Pirsig, he's just really smart otherwise.

I've never heard anyone present criticism of the CTMU that would actually imply an understanding of what Langan is trying to do. The CTMU has a mistake: Langan believes (p. 49) the CTMU satisfies the Law Without Law condition, which states: "Concisely, nothing can be taken as given when it comes to cosmogony." (p. 8)

According to the Mind Equals Reality Principle, the CTMU is comprehensive. This principle "makes the syntax of this theory comprehensive by ensuring that nothing which can be cognitively or perceptually recognized as a part of reality is excluded for want of syntax". (p. 15) But undefinable concepts can neither be proven to exist nor proven not to exist. This means the Mind Equals Reality Principle must be assumed as an axiom. But to do so would violate the Law Without Law condition.

The Metaphysical Autology Principle could be stated as an axiom, which would entail the nonexistence of undefinable concepts. This principle "tautologically renders this syntax closed or self-contained in the definitive, descriptive and interpretational senses". (p. 15) But it would be arbitrary to have such an axiom, and the CTMU would again fail to fulfill Law Without Law.

If that makes the CTMU rubbish, then Russell's Principia Mathematica is also rubbish, because it has a similar problem, which was pointed out by Gödel. EDIT: Actually the problem is somewhat different from the one addressed by Gödel.

Langan's paper can be found here EDIT: Fixed link.

comment by Tuukka_Virtaperko · 2012-01-10T15:28:52.263Z · score: 0 (0 votes) · LW · GW

To clarify, I'm not the generic "skeptic" of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.

comment by Risto_Saarelma · 2012-01-10T18:34:42.623Z · score: 4 (4 votes) · LW · GW

There might not be many people here who are sufficiently up to speed on philosophical metaphysics to have any idea what, for example, a Wheeler-style reality theory is. My stereotypical notion is that the people at LW have pretty much ignored philosophy that isn't grounded in mathematics, physics or cognitive science from Kant onwards, and won't bother with stuff that doesn't seem readable from this viewpoint. The tricky thing that would help would be to somehow translate the philosopher-speak into lesswronger-speak. Unfortunately this would require some fluency in both.

comment by Tuukka_Virtaperko · 2012-01-13T01:02:05.626Z · score: 1 (1 votes) · LW · GW

It's not like your average "competent metaphysicist" would understand Langan either. He might not even understand Wheeler. Langan's undoing is to have the goals of a metaphysicist and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicists do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be deterministic instead of recursive, and have a finite preset number of states that an object can have. I find the CTMU paper a bit sketchy and missing important content, besides having the mistake. If you're interested in the mathematical structure of a recursive metaphysical theory, here's one: http://www.moq.fi/?p=242

Formal RP doesn't require metaphysical background knowledge. The point is that because the theory includes a cycle of emergence, represented by the power set function, any state of the cycle can be defined in relation to other states and prior cycles, and the number of possible states is infinite. The power set function will generate a staggering amount of information in just a few cycles, though. Set R is supposed to contain sensory input and thus solve the symbol grounding problem.

comment by Tuukka_Virtaperko · 2012-01-13T13:25:40.169Z · score: 0 (2 votes) · LW · GW

Of course the symbol grounding problem is rather important, so it doesn't really suffice to say that "set R is supposed to contain sensory input". The metaphysical idea of RP is something to the effect of the following:

Let n be 4.

R contains everything that could be used to ground the meaning of symbols.

  • R1 contains sensory perceptions
  • R2 contains biological needs such as eating and sex, and emotions
  • R3 contains social needs such as friendship and respect
  • R4 contains mental needs such as perceptions of symmetry and beauty (the latter is sometimes reducible to the Golden ratio)

N contains relations of purely abstract symbols.

  • N1 contains the elementary abstract entities, such as symbols and their basic operations in a formal system
  • N2 contains functions of symbols
  • N3 contains functions of functions. In mathematics I suppose this would include topology.
  • N4 contains information about the limits of the system, such as completeness or consistency. This information forms the basis of what "truth" is like.

Let ℘(T) be the power set of T.

Solving the symbol grounding problem requires R and N to be connected. Let us assume that ℘(Rn) ⊆ Rn+1 (R5 hasn't been defined, though). If we don't assume the subsets of R to emerge from each other, we'll have to construct far more complicated theories that are more difficult to understand.

This way we can assume there are two ways of connecting R and N. One is to connect them in the same order, and one in the inverse order. The former is set O and the latter is set S.

The set O includes the "realistic" theories, which assume the existence of an "objective reality".

  • ℘(R1) ⊆ O1 includes theories regarding sensory perceptions, such as physics.
  • ℘(R2) ⊆ O2 includes theories regarding biological needs, such as the theory of evolution
  • ℘(R3) ⊆ O3 includes theories regarding social affairs, such as anthropology
  • ℘(R4) ⊆ O4 includes theories regarding rational analysis and judgement of the way in which social affairs are conducted

The relationship between O and N:

  • N1 ⊆ O1 means that physical entities are the elementary entities of the objective portion of the theory of reality. Likewise:
  • N2 ⊆ O2
  • N3 ⊆ O3
  • N4 ⊆ O4

The set S includes "solipsistic" ideas in which "mind focuses on itself".

  • ℘(R4) ⊆ S1 includes ideas regarding what one believes
  • ℘(R3) ⊆ S2 includes ideas regarding learning, that is, adoption of new beliefs from one's surroundings. Here social matters such as prestige, credibility and persuasiveness affect which beliefs are adopted.
  • ℘(R2) ⊆ S3 includes ideas regarding judgement of ideas. Here, ideas are mostly judged by how they feel. For example, if a person is revolted by the idea of creationism, they are inclined to reject it even without rational grounds, and if it makes them happy, they are inclined to adopt it.
  • ℘(R1) ⊆ S4 includes ideas regarding the limits of the solipsistic viewpoint. Sensory perceptions of objectively existing physical entities obviously present some kind of a challenge to it.

The relationship between S and N:

  • N4 ⊆ S1 means that beliefs are the elementary entities of the solipsistic portion of the theory of reality. Likewise:
  • N3 ⊆ S2
  • N2 ⊆ S3
  • N1 ⊆ S4

That's the metaphysical portion in a nutshell. I hope someone was interested!
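For the computationally inclined, the skeleton above can be toyed with in code. This is only an illustrative sketch under one reading of the comment: the element labels, the use of Python sets, and the way the emergence condition ℘(Rn) ⊆ Rn+1 is modeled for tiny toy sets are all assumptions made for the example, not part of RP itself.

```python
# A loose toy model of the RP set skeleton: the power-set emergence
# relation is checked on tiny hand-labeled sets, since |P(T)| doubles
# per element and "generates a staggering amount of information in
# just a few cycles". All labels here are illustrative placeholders.
from itertools import chain, combinations

def powerset(t):
    """Return the power set of t as a set of frozensets."""
    s = list(t)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

R1 = {"sensory-perception"}
R2 = powerset(R1) | {"biological-need"}   # toy model of P(R1) being a subset of R2
R3 = powerset(R2) | {"social-need"}

print(powerset(R1) <= R2)   # the emergence condition holds: True
print(len(powerset(R3)))    # 512 elements already, after only two cycles
```

Even at this scale the growth is explosive, which illustrates why the actual emergences, rather than all possible ones, would need to be singled out.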

comment by Risto_Saarelma · 2012-01-14T11:00:20.119Z · score: 4 (4 votes) · LW · GW

We were talking about applying the metaphysics system to making an AI earlier in IRC, and the symbol grounding problem came up there as a basic difficulty in binding formal reasoning systems to real-time actions. It doesn't look like this was mentioned here before.

I'm assuming I'd want to actually build an AI that needs to deal with symbol grounding, that is, it needs to usefully match some manner of declarative knowledge it represents in its internal state to the perceptions it receives from the outside world and to the actions it performs on it. Given this, I'm getting almost no notion of what useful work this theory would do for me.

Mathematical descriptions can be useful for people, but it's not given that they do useful work for actually implementing things. I can define a self-improving friendly general artificial intelligence mathematically by defining

  • FAI = <S, P*> as an artificial intelligence instance, consisting of its current internal state S and the history of its perceptions up to the present P*,
  • a: FAI -> A* as a function that gives the list of possible actions for a given FAI instance
  • u: A -> Real as a function that gives the utility of each action as a real number, with higher numbers given to actions that advance the purposes of the FAI better based on its current state and perception history and
  • f: FAI * A -> S, P as an update function that takes an action and returns a new FAI internal state with any possible self-modifications involved in the action applied, and a new perception item that contains whatever new observations the FAI made as a direct result of its action.

And there's a quite complete mathematical description of a friendly artificial intelligence, you could probably even write a bit of neat pseudocode using the pieces there, but that's still not likely to land me a cushy job supervising the rapid implementation of the design at SIAI, since I don't have anything that does actual work there. All I did was push all the complexity into the black boxes of the u, a and f.
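To make the point concrete, here is roughly what that neat pseudocode might look like, sketched in Python. The stub bodies of a, u and f are placeholders invented for illustration; as noted above, all the actual work would hide inside them.

```python
# A hedged sketch of the agent definition above: internal state S,
# perception history P*, action lister a, utility u, update f.
# The stub implementations are placeholders, not a design.

def possible_actions(state, history):    # the function a: FAI -> A*
    return ["wait", "observe"]

def utility(action, state, history):     # the function u: A -> Real
    return 1.0 if action == "observe" else 0.0

def update(state, history, action):      # the function f: FAI * A -> S, P
    return state, f"result-of-{action}"

def step(state, history):
    # Enumerate every action, score each, pick a maximizer.
    best = max(possible_actions(state, history),
               key=lambda act: utility(act, state, history))
    state, percept = update(state, history, best)
    return state, history + (percept,)

state, history = {}, ()
state, history = step(state, history)
print(history)  # -> ('result-of-observe',)
```

Writing it out this way makes the emptiness visible: the control loop is trivial, and every hard problem lives inside the three black-box functions.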

I also implied a computational approach where the system enumerates every possible action, evaluates them all and then picks a winner with how I decided to split up the definition. This is mathematically expedient, given that in mathematics any concerns of computation time can be pretty much waved off, but appears rather naive computationally, as it is likely that both coming up with possible actions and evaluating them can get extremely expensive in the artificial general intelligence domain.

With the metaphysics thing, beyond not getting a sense of it doing any work, I'm not even seeing where the work would hide. I'm not seeing black box functions that need to do an unknowable amount of work, just sets with strange elements being connected to other sets with strange elements. What should you be able to do with this thing?

comment by Tuukka_Virtaperko · 2012-01-14T13:24:11.816Z · score: 1 (1 votes) · LW · GW

You probably have a much more grassroots-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have a formal understanding of its nature.

In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R.

More generally, I would find RP useful as an extremely general framework of how AI, or parts of AI, can be constructed in relation to each other, especially with regard to understanding language and the notion of consciousness. This doesn't necessarily have anything to do with more atomistic AI projects, such as trying to make a robot vacuum cleaner find its way back to the charging dock.

At some point, philosophical questions and AI will collide. Consider the following thought experiment:

We have managed to create a brain scanner so sophisticated that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken, resulting in the inconsistency of the whole?

  • 1) The brain scanner is broken
  • 2) The person is broken

In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.

RP should help with such problems because it is intended as an elegant, compact and flexible way of defining recursion while allowing the solution of the symbol grounding problem to be contained in the definition in a nontrivial way. That is, RP as a framework of AI is not something that says: "Okay, this here is RP. Just perform the function RP(sensory input) and it works, voilà." Instead, it manages to express two different ways of solving the symbol grounding problem and to define their accuracy as a natural number n. In addition, many emergence relations in RP are logical consequences of the way RP solves the symbol grounding problem (or, if you prefer, "categorizes the parts of the actual solution to the symbol grounding problem").

In the previous thought experiment, the AI should manage to understand that the scanner deterministically performs the operation ℘(R) ⊆ S, and does not define S in terms of anything else. The person, on the other hand, is someone whose information processing is based on RP or something similar.

But what you read from moq.fi is something we wrote just a few days ago. It is by no means complete.

  • One problem is that ℘(T) does not seem to define actual emergences, but only all possible emergences.
  • We should define functions for "generalizing" and "specifying" sets or predicates, in which generalizing would create a new set or predicate from an existing one by adding members, and specifying would do so by removing members.
  • We should add a discard order to sets. Sets that are used often have a high discard order, but sets that are never used end up erased from memory. This is similar to nonused pathways in the brain dying out, and often used pathways becoming stronger.
  • The theory does not yet have an algorithmic part, but it should have. That's why it doesn't yet do anything.
  • ℘(Rn) should be defined to include a metatheoretic approach to the theory itself, facilitating modification of the theory with the yet-undefined generalizing and specifying functions.

Questions to you:

  • Is T -> U the Cartesian product of T and U?
  • What is *?

I will not guarantee having discussions with me is useful for attaining a good job. ;)

comment by Risto_Saarelma · 2012-01-14T18:39:28.493Z · score: 3 (3 votes) · LW · GW

We have managed to create a brain scanner so sophisticated that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken, resulting in the inconsistency of the whole?

  • 1) The brain scanner is broken
  • 2) The person is broken

In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.

I don't really understand this part.

"The scanner does not understand the information but the person does" sounds like some variant of Searle's Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
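A minimal sketch of this history-based model in Python. The perceptions and the toy rule are invented for illustration; the point is only that the output is a pure function of the entire history X*, so two histories sharing the same latest input can produce different outputs.

```python
# The agent's output is a pure function of its full perception history,
# so "internal state" never needs to be modeled separately: a different
# history of inputs can lead to a different output on the latest input.

def agent(history):
    """Map a full input history (a tuple of perceptions) to an output."""
    latest = history[-1]
    # A toy rule: the response to "ping" depends on what came before.
    if latest == "ping":
        return "pong" if "greeting" in history[:-1] else "ignore"
    return "noop"

print(agent(("greeting", "ping")))  # -> pong
print(agent(("noise", "ping")))     # -> ignore
```

The same latest input ("ping") yields different outputs, which is all the model needs in order to cover state-changing, history-sensitive behavior.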

I suppose the idea here is that there is some difference whether there is a human being sitting in the scanner, or, say, a toy robot with a state of two bits, where one is "I am thinking about cats" and the other is "I am broken and will lie about thinking about cats". With the robot, we could just check the "broken" bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken.

I'm not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat; it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat. Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior, reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Sorry I keep skipping over your formalism stuff, but I'm still not really grasping the underlying assumptions behind this approach. (The underlying assumptions in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else", "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head")

The whole philosophical theory of everything thing does remind me of this strange thing from a year ago, where the building blocks for the theory were made out of nowadays more fashionable category theory rather than set theory though.

comment by Tuukka_Virtaperko · 2012-02-08T11:59:46.893Z · score: 1 (1 votes) · LW · GW

I've read some of this Universal Induction article. It seems to operate from flawed premises.

If we prescribe Occam’s razor principle [3] to select the simplest theory consistent with the training examples and assume some general bias towards structured environments, one can prove that inductive learning “works”. These assumptions are an integral part of our scientific method. Whether they admit it or not, every scientist, and in fact every person, is continuously using this implicit bias towards simplicity and structure to some degree.

Suppose the brain uses algorithms, an uncontroversial supposition. From a computational point of view, the quoted passage is like saying: "In order for a computer to not run a program, such as Indiana Jones and the Fate of Atlantis, the computer must be executing some command to the effect of 'DoNotExecuteProgram("IndianaJonesAndTheFateOfAtlantis")'."

That's not how computers operate. They just don't run the program. They don't need a special process for not running the program. Instead, not running the program is "implicitly contained" in the state of affairs that the computer is not running it. But this notion of implicit containment makes no sense for the computer. There are infinitely many programs the computer is not running at a given moment, so it can't process the state of affairs that it is not running any of them.

Likewise, the use of an implicit bias towards simplicity cannot be meaningfully conceptualized by humans. In order to know how this bias simplifies everything, one would have to know what information regarding "everything" is omitted by the bias. But if we knew that, the bias would not exist in the sense the author intends it to exist.

Furthermore:

This is in some way a contradiction to the well-known no-free-lunch theorems which state that, when averaged over all possible data sets, all learning algorithms perform equally well, and actually, equally poorly [11]. There are several variations of the no-free-lunch theorem for particular contexts but they all rely on the assumption that for a general learner there is no underlying bias to exploit because any observations are equally possible at any point. In other words, any arbitrarily complex environments are just as likely as simple ones, or entirely random data sets are just as likely as structured data. This assumption is misguided and seems absurd when applied to any real world situations. If every raven we have ever seen has been black, does it really seem equally plausible that there is equal chance that the next raven we see will be black, or white, or half black half white, or red etc. In life it is a necessity to make general assumptions about the world and our observation sequences and these assumptions generally perform well in practice.

The author says that there are variations of the no-free-lunch theorem for particular contexts. But he goes on to generalize, as if the notion of a no-free-lunch theorem meant something independent of context. What could that possibly be? Also, such notions as "arbitrary complexity" or "randomness" seem intuitively meaningful, but what is their context?

The problem is, if there is no context, the solution cannot be proven to address the problem of induction. But if there is a context, it addresses the problem of induction only within that context. Then philosophers will say that the context was arbitrary, and formulate the problem again in another context where previous results will not apply.

In a way, this makes the problem of induction seem like a waste of time. But the real problem is about formalizing the notion of context in such a way, that it becomes possible to identify ambiguous assumptions about context. That would be what separates scientific thought from poetry. In science, ambiguity is not desired and should therefore be identified. But philosophers tend to place little emphasis on this, and rather spend time dwelling on problems they should, in my opinion, recognize as unsolvable due to ambiguity of context.

comment by Risto_Saarelma · 2012-02-11T16:56:44.402Z · score: 1 (1 votes) · LW · GW

The omitted information in this approach is information with high Kolmogorov complexity, which is omitted in favor of information with low Kolmogorov complexity. A very rough analogy would be to describe humans as having a bias towards ideas expressible in a few words of English over ideas that need many words of English to express. Using Kolmogorov complexity for sequence prediction instead of English language for ideas in the construction gets rid of the very many problems of rigor involved in the latter, but the basic idea is pretty much the same. You look into things that are briefly expressible before things that must be expressed at length. The information isn't permanently omitted, it's just deprioritized. The algorithm doesn't start looking at the stuff you need long sentences to describe before it has convinced itself that there are no short sentences that describe the observations it wants to explain in a satisfactory way.

One bit of context that is assumed is that the surrounding universe is somewhat amenable to being Kolmogorov-compressed. That is, there are some recurring regularities that you can begin to discover. The term "lawful universe" sometimes thrown around in LW probably refers to something similar.

Solomonoff's universal induction would not work in a completely chaotic universe, where there are no regularities for Kolmogorov compression to latch onto. You'd also be unlikely to find any sort of native intelligent entities in such universes. I'm not sure if this means that the Solomonoff approach is philosophically untenable, but needing to have some discoverable regularities to begin with before discovering regularities with induction becomes possible doesn't strike me as that great a requirement.

If the problem of context is about exactly where you draw the data for the sequence which you will then try to predict with Solomonoff induction, in a lawless universe you wouldn't be able to infer things no matter which simple instrumentation you picked, while in a lawful universe you could pick all sorts of instruments, tracking the change of light over time, tracking temperature, tracking the luminosity of the Moon, for simple examples, and you'd start getting Kolmogorov-compressible data where the induction system could start figuring out repeating periods.

The core thing "independent of context" in all this is that all the universal induction systems are reduced to basically taking a series of numbers as input, and trying to develop an efficient predictor for what the next number will be. The argument in the paper is that this construction is basically sufficient for all the interesting things an induction solution could do, and that all the various real-world cases where induction is needed can be basically reduced into such a system by describing the instrumentation which turns real-world input into a time series of numbers.
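A toy sketch of that "shortest description first" ordering, with a hand-picked hypothesis space standing in for the (uncomputable) enumeration of all programs. The description lengths and rules below are invented purely for illustration.

```python
# Toy "shortest description first" predictor for a number series.
# Real Solomonoff induction enumerates all programs weighted by
# length; here the hypothesis space is three hand-picked rules,
# ordered by a made-up description length.

hypotheses = [
    # (description length in "bits", name, rule for term i)
    (3, "constant 1",    lambda i: 1),
    (5, "alternate 0/1", lambda i: i % 2),
    (8, "squares",       lambda i: i * i),
]

def predict_next(observed):
    """Return the next term under the shortest consistent hypothesis."""
    for _, name, rule in sorted(hypotheses, key=lambda h: h[0]):
        if all(rule(i) == x for i, x in enumerate(observed)):
            return name, rule(len(observed))
    return None  # no short rule fits; a real system would search deeper

print(predict_next([0, 1, 0, 1]))  # -> ('alternate 0/1', 0)
print(predict_next([0, 1, 4, 9]))  # -> ('squares', 16)
```

Longer-description hypotheses are only consulted once every shorter one has been ruled out by the data, which is the deprioritization described above in miniature.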

comment by Tuukka_Virtaperko · 2012-02-15T20:35:33.417Z · score: 1 (1 votes) · LW · GW

Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem of something that's "surely true", whatever that might mean. And if it were taken as an axiom, philosophers would say: "That's not an axiom. That's the conclusion of an inductive argument you made! You are begging the question!"

However, it seems like advancements in computation theory have made people able to do at least remotely practical stuff in areas that bear resemblance to more inert philosophical ponderings. That's good, and this article might even be used as justification for my theory RP, given that the use of Kolmogorov complexity is accepted. I was not familiar with the concept of Kolmogorov complexity despite having heard of it a few times, but my intuitive goal was to minimize the theory's Kolmogorov complexity by removing arbitrary declarations and favoring symmetry.

I would say that there are many ways of solving the problem of induction. Whether a theory is a solution to the problem of induction depends on whether it covers the entire scope of the problem. I would say this article covers half of the scope. The rest is not covered, to my knowledge, by anyone other than Robert Pirsig and experts on Buddhism, but these writings are very difficult to approach analytically. Regrettably, I am still unable to publish the relativizability article, which is intended to succeed in the analytic approach.

In any case, even though the widely rejected "statistical relevance" and this "Kolmogorov complexity relevance" share the same flaw if presented as an explanation of inductive justification, the approach is interesting. Perhaps, even, this paper should be titled "A Formalization of Occam's Razor Principle", because that's what it surely seems to be. And I think it's actually an achievement to formalize that principle - an achievement more than sufficient to justify the writing of the article.

comment by Tuukka_Virtaperko · 2012-02-15T21:31:54.424Z · score: 0 (0 votes) · LW · GW

Commenting the article:

"When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task."

I hope nobody's doing this anymore. It's obviously impossible. "Everyday statements of inference", whatever that might mean, are not exclusively statements of first-order logic, because Russell's paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.

Wait a second. Wikipedia already knows this stuff is a formalization of Occam's razor. One article seems to attribute the formalization of that principle to Solomonoff, another one to Hutter. In addition, Solomonoff induction, which is essential for both, is not computable. Ugh. So Hutter and Rathmanner actually have the nerve to begin that article by talking about the problem of induction, when the goal is obviously to introduce concepts of computation theory? And they are already familiar with Occam's razor, and aware of it having, at least probably, been formalized?

Okay then, but this doesn't solve the problem of induction. They have not even formalized the problem of induction in a way that accounts for the logical structure of inductive inference, and leaves room for various relevance operators to take place. Nobody else has done that either, though. I should get back to this later.

comment by Tuukka_Virtaperko · 2012-01-19T22:38:46.785Z · score: 1 (1 votes) · LW · GW

The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.

At first, I didn't quite understand this. But I'm reading Introduction to Automata Theory, Languages and Computation. Are you using the * in the same sense here as it is used in the following UNIX-style regular expression?

  • '[A-Z][a-z]*'

This expression is intended to refer to all words that begin with a capital letter and do not contain any surprising characters such as ö or -. Examples: "Jennifer", "Washington", "Terminator". The * means [a-z] may have an arbitrary number of iterations.
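That reading can be checked directly with Python's re module (the word list is just illustrative):

```python
import re

# The UNIX-style pattern from above; fullmatch requires the whole
# word to fit the pattern.
pattern = re.compile(r'[A-Z][a-z]*')

for word in ["Jennifer", "Washington", "Terminator", "T"]:
    print(word, bool(pattern.fullmatch(word)))  # all True; * allows zero letters

print(bool(pattern.fullmatch("Björk")))  # False: ö is outside [a-z]
```

Note that "T" matches too, since * permits zero iterations of [a-z].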

comment by Risto_Saarelma · 2012-01-20T04:15:25.042Z · score: 1 (1 votes) · LW · GW

Yeah, that's probably where it comes from. The [A-Z] can be read as "the set of every possible English capital letter" just like X can be read as "the set of every possible perception to an agent", and the * denotes some ordered sequence of elements from the set exactly the same way in both cases.

comment by Tuukka_Virtaperko · 2012-01-15T10:56:55.871Z · score: 1 (1 votes) · LW · GW

I don't find the Chinese room argument relevant to our work; besides, it seems to vaguely suggest that what we are doing can't be done. What I meant is that AI should be able to:

  • Observe behavior
  • Categorize entities into deterministic machines which cannot take a metatheoretic approach to their data processing habits and alter them.
  • Categorize entities into agents that process information recursively and can consciously alter their own data processing or explain it to others.
  • Use this categorization ability to differentiate entities whose behavior can be corrected or explained by means of social interaction.
  • Use the differentiation ability to develop the "common sense" view that, given permission by the owner of the scanner, and if it deemed this interesting, the robot would not need to ask the brain scanner itself for consent to take it apart and fix it.
  • Understand that even if the robot were capable of performing incredibly precise neurosurgery, the person will understand the notion that the robot wishes to use surgery to alter his thoughts to correspond with the result of the brain scanner, and could consent to this or withhold consent.
  • Possibly try to have a conversation with the person in order to find out, why they said that they were not thinking of a cat.

Failure to understand this could make the robot naively both take machines apart and cut people's brains open in order to experimentally verify which approach produces better results. Of course there are also other things to consider when the robot tries to figure out what to do.

I don't consider robots and humans fundamentally different. If the AI were complex enough to understand the aforementioned things, it would also understand the notion that someone wants to take it apart and reprogram it, and could consent or object.

The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat.

The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability, which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don't know whether bitmaps or image recognition were involved in that. If the cat is a problem, let's simplify the image to the black and white lines.

Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.

Even the simplest entities, such as irrational numbers or cellular automata, can have complex behavior. Humans, too, could be deterministic and predictable, given that the one analyzing a human has enough data and computing power. But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself: every iteration of the map representing itself would also need to be included in the map, requiring the map to contain an infinite amount of information. Only an external observer could make a finite map, but that's not what I had in mind when beginning this RP project. I do consider the goals of RP relevant to AI, because I don't suppose it's acceptable for a robot to be unable to conceptualize its own thought very elaborately, if it is intended to be as human as possible, and maybe even to be able to write novels.

I am interested in the ability to genuinely understand the worldviews of other people, for example across the gap between scientific and religious people. In the extreme, these people think of each other in such a derogatory way that it is as if they viewed each other as having failed the Turing test. I would like robots to also understand the goals and values of religious people.

I'm still not really grasping the underlying assumptions behind this approach.

Well, that's supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don't know what consciousness or the experience of reality mean in my usage (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I'm doing. That would suggest you cannot conceptualize idealistic ontology, or that you believe "mind" to refer to an empty set.

I see here the danger of rather trivial debates, such as whether I believe an AI could "experience" consciousness or reality. I don't know what such a question would even mean. I am interested in whether it can conceptualize them in ways a human could.

(The underlying assumptions in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else"

The CTMU also states something to the effect of this. In that case, Langan is making a mistake, because he believes the CTMU to be a Wheeler-style reality theory, which contradicts the earlier statement. In your case, I guess it's just an opinion, and I don't feel a need to say you should believe otherwise. But I suppose I can present a rather cogent argument against that within a few days. The argument would be in the language of formal logic, so you should be able to understand it. Stay tuned...

, "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head")

I don't wish to be impolite, but I consider these topics boring and obvious. Hopefully I haven't missed anything important in making this judgement.

Your strange link is very intriguing. I very much like being given links like this. Thank you.

comment by Risto_Saarelma · 2012-01-15T14:54:17.372Z · score: 3 (3 votes) · LW · GW

About the classification thing: I agree that it's very important that a general AI be able to classify entities into "dumb machines" and things complex enough to be self-aware, warrant an intentional stance and require ethical consideration. Even putting aside the ethical concerns, being able to recognize complex agents with intentions and model their intentions, instead of their most likely massively complex physical machinery, is probably vital to any sort of meaningful ability to act in a social domain with many other complex agents (cf. Dennett's intentional stance).

The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability, which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don't know whether bitmaps or image recognition were involved. If the cat is a problem, let's simplify the image to the black and white lines.

I understood that the existing image reconstruction experiments measure the activation of the visual cortex while the subject is actually viewing an image, which does indeed get you a straightforward mapping to a bitmap. This isn't the same as thinking about a cat: a person could be thinking about a cat while not looking at one, and they could have a cat in their visual field while daydreaming or suffering from hysterical blindness, so that they weren't thinking about a cat despite having a cat image correctly show up in their visual cortex scan.

I don't actually know what the neural correlate of thinking about a cat, as opposed to having one's visual cortex activated by looking at one, would be like, but I was assuming that interpreting it would require a much more sophisticated understanding of the brain, basically at the level of difficulty of telling whether a brain scan correlates with thinking about freedom, a theory of gravity, or reciprocity. Basically something that's entirely beyond current neuroscience and more indicative of some sort of Laplace's demon style thought experiment where you can actually observe and understand the whole mechanical ensemble of the brain.

But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself.

Quines are maps that contain themselves. A quining system could reflect on its entire static structure, though it would have to run some sort of emulation slower than its physical substrate to predict its future states. Hofstadter's GEB links quines to reflection in AI.
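For concreteness, the classic two-line Python quine; the program's output is exactly its own source text, a finite map that does contain itself:

```python
# A quine: printing s % s reproduces the program's own source code,
# including the definition of s itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```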

Well, that's supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don't know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I'm doing. That would suggest you cannot conceptualize idealistic ontology or you believe "mind" to refer to an empty set.

"There aren't any assumptions" is just a plain non-starter. There's the natural language we're using to present the theory and ground its concepts, and natural language carries an enormous amount of accidental complexity: a billion years of evolution leading to the three-billion-base-pair human genome, and on top of that ten to a hundred thousand years of human cultural evolution with even more accidental complexity. That probably gets us something in the ballpark of 100 megabytes of irreducible complexity from the human DNA that you need to build up a newborn brain, and another 100 megabytes or so (going by the heuristic of one bit of permanently learned knowledge per second) for the kernel of the cultural stuff a human needs to learn from their perceptions to be able to competently deal with concepts like "income tax" or "calculus". You get both of those for free when talking with other people, and neither when trying to build an AGI-grade theory of the mind.
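A quick sketch of the arithmetic behind those ballpark figures; every input is a rough assumption from the paragraph above (one learned bit per waking second, a few decades of learning), not a measurement:

```python
# Genome side: 3 billion base pairs at 2 bits each, before any
# compression down to an "irreducible" core.
genome_megabytes = 3e9 * 2 / 8 / 1e6           # 750 MB raw

# Culture side: one bit of permanently learned knowledge per waking
# second, ~16 waking hours a day over ~50 years (assumed numbers).
waking_seconds = 16 * 60 * 60 * 365 * 50
cultural_megabytes = waking_seconds / 8 / 1e6  # ~130 MB

print(round(genome_megabytes), round(cultural_megabytes))
```

Both land within a factor of a few of the 100 MB figures quoted above, which is all a back-of-envelope estimate claims.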

This is also why I spelled out the trivial basic assumptions I'm working from (and probably did a very poor job at actually conveying the whole idea complex). When you start doing set theory, I assume we're dealing with things at the complexity of mathematical objects. Then you throw in something like "anthropology" as an element in a set, and I, still in math mode, start going, whaa, you need humans before you have anthropology, and you need the billion years of evolution leading to the accidental complexity in humans to have humans, and you need physics to have the humans live and run the societies for anthropology to study, and you need the rest of the biosphere for the humans to not just curl up and die in the featureless vacuum and, and.. and that's a lot of math. While the actual system with the power sets looks just like uniform, featureless soup to me. Sure, there are all the labels, which make my brain do the above i-don't-get-it dance, but the thing I'm actually looking for is the mathematical structure. And that's just really simple, nowhere near what you'd need to model a loose cloud of hydrogen floating in empty space, not to mention something many orders of magnitude more complex like a society of human beings.

My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like "morality", then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the corresponding mental concept for the word "morality" like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.

A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

comment by Tuukka_Virtaperko · 2012-01-15T20:07:15.461Z · score: 1 (1 votes) · LW · GW

It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I'm not sure why you expect me to have that. Was it something I said?

I thought I'd given you links to my actual work, but I can't find them. Did I forget? Hmm...

If you dislike metaphysics, only the latter is for you. I can't paste the content, because the formatting on this website apparently does not permit HTML formulae. Wait, it does permit formulae, but only LaTeX. I know LaTeX, but the formulae aren't in that format right now; I should probably convert them.

You won't understand the flowchart if you don't want to discuss metaphysics. I don't think I can prove that something, of which you don't know what it is, could be useful to you. You would have to know what it is and judge for yourself. If you don't want to know, it's ok.

I am currently not sure why you would want to discuss this thing at all, given that you do not seem very interested in the formalisms, but you do not seem interested in metaphysics either. You seem to expect me to explain this stuff in terms of something familiar to you, yet you don't seem very interested in having a discussion where I would actually do that. If you don't know why you are having this discussion, maybe you would like to do something else?

There are quite probably others on LessWrong who would be interested in this, because there has been prior discussion of the CTMU. People interested in fringe theories, unfortunately, are not always the brightest of the lot, and I respect your ability to casually name-drop a bunch of things I will probably spend days thinking about.

But I don't know why you wrote so much about billions of years, babies, human cultural evolution, 100 megabytes and such. I am troubled by the thought that you might think I'm some loony hippie who actually needs a recap on those things. I am not yet feeling very comfortable in this forum because I perceive myself as vulnerable to being misrepresented as some sort of a fool by people who don't understand what I'm doing.

I'm not trying to change LessWrong. But if this forum has people criticizing the CTMU without having a clue what it is, then I attain a certain feeling of entitlement. You can't just go badmouthing people and their theories and not expect any consequences if you are mistaken. You don't need to defend yourself either, because I'm here to tell you what recursive metaphysical theories such as the CTMU are about, or to recommend you shut up about the CTMU if you are not interested in metaphysics. I'm not here to bloat my ego by portraying other people as fools with witty rhetoric; if you Google the CTMU, you'll find a lot of people doing precisely that to it, and you will understand why I fear that I, too, could be treated in such a way.

comment by Risto_Saarelma · 2012-01-16T08:11:08.821Z · score: 2 (2 votes) · LW · GW

I'm mostly writing this stuff trying to explain what my mindset, which I guess to be somewhat coincident with the general LW one, is like, and where it seems to run into problems with trying to understand these theories. My question about the assumptions is basically poking at something like "what's the informal explanation of why this is a good way to approach figuring out reality", which isn't really an easy thing to answer. I'm mostly writing about my own viewpoint instead of addressing the metaphysical theory, since it's easy to write about stuff I already understand, and a lot harder to try to understand something coming from a different tradition and make meaningful comments about it. Sorry if this feels like dismissing your stuff.

The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein, until it started dawning just what a massive hairball of a mess human intelligence working in the real world is after the 1950s. Still, most philosophy seems to be following the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance of a bookshelf full of volumes written by insane aliens between the realm of human thought and basic logic discovered by molecular biologists and cognitive scientists.

I'm not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I'm always aware that it needs to be dealt with somehow. For one thing, it's a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go "ah, yes, empiricism is indeed a thing, it goes in that slot in the theory". You can't understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what's going on with them.

For another thing, being aware of the evolutionary history of humans and the current physical constraints of human cognition and DNA can guide making an actual theory of mind from the ground up. The kludged up and sorta-working naturally evolved version might be equal to 100 000 pages of math, which is quite a lot, but also tells us that we should be able to get where we want without having to write 1 000 000 000 pages of math. A straight-up mysterian could just go, yeah, the human intelligence might be infinitely complex and you'll never come up with the formal theory. Before we knew about DNA, we would have had a harder time coming up with a counterargument.

I keep going on about the basic science stuff, since I have the feeling that the LW style of approaching things basically starts from mid-20th century computer science and natural science, not from the philosophical tradition going back to antiquity, and there's some sort of slight mutual incomprehension between it and modern traditional philosophy. It's a bit like C.P. Snow's Two Cultures thing. Many philosophers seem to be from Culture One, while LW is people from Culture Two trying to set up a philosophy of their own. Some key posts about LW's problems with philosophy are probably Against Modal Logics and A Diseased Discipline. Also there's the book Good and Real, which is philosophy being done by a computer scientist and which LW folk seem to find approachable.

The key ideas in the LW approach are that you're running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at every chance it gets, so you need to practice empirical science to figure out what's actually going on with life (plain old thinking hard won't help, since that'll just lead to your broken head machinery tripping you up again), and that the end result of what you're trying to do should be a computable algorithm. Neither of these things shows up in traditional philosophy, since traditional philosophy got started before there was computer science or cognitive science or molecular biology. So LessWrongers will be confused by non-empirical attempts to get to the bottom of real-world stuff, and they will be confused if the get-to-the-bottom attempt doesn't look like it will end up being an algorithm.

I'm not saying this approach is better. Philosophers obviously spend a long time working through their stuff, and what I am doing here is basically just picking low-hanging fruit from science so recent that it hasn't percolated into the cultural background thought yet. But we are living in interesting times when philosophers can keep mulling over conceptual analysis, and then all of a sudden scientists will barge in and go, hey, we were doing some empirical stuff with machines, and it turns out counterfactual worlds are actually sort of real.

comment by Tuukka_Virtaperko · 2012-01-16T13:01:36.692Z · score: 1 (1 votes) · LW · GW

Sorry if this feels like dismissing your stuff.

You don't have to apologize, because you have been useful already. I don't require you to go out of your way to analyze this stuff, but of course it would also be nice if we could understand each other.

The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was being done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea on just how complex the mechanism is. People could still get away with assuming that intelligent thought is not that formally complex around the time of Russell and Wittgenstein, until it started dawning just what a massive hairball of a mess human intelligence working in the real world is after the 1950s. Still, most philosophy seems to be following the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance of a bookshelf full of volumes written by insane aliens between the realm of human thought and basic logic discovered by molecular biologists and cognitive scientists.

That's a good point. The philosophical tradition of discussion I belong to was started in 1974 as a radical deviation from contemporary philosophy, which makes it pretty fresh. My personal opinion is that within decades or centuries, the largely obsolete mode of investigation you referred to will be mostly replaced by something resembling what I and a few others are currently doing. This is because the old mode of investigation does not produce results. Despite intense scrutiny for 300 years, it has not provided an answer to such a simple philosophical problem as the problem of induction. Instead, it has corrupted the very writing style of philosophers. When one reads philosophical publications by authors with academic prestige, every other sentence seems somehow defensive, and the writer seems to be squirming in the inconvenience caused by his intuitive understanding that what he's doing is barren, though he doesn't know of a better option. It's very hard for a distinguished academic to go into the freaky realm and find out whether someone made sense while taking a very different approach from the academic one. Aloof but industrious young people, with lots of ability but little prestige, are more suitable for that.

Nowadays the relatively simple philosophical problem of induction (the proof of the Poincaré conjecture, by comparison, is extremely complex) has been portrayed as such a difficult problem that if someone devises a theoretical framework which facilitates a relatively simple solution, academic people are very inclined to state that they don't understand the solution. I believe this is because they insist the solution should be something produced by several authors working together for a century, something that will make theoretical philosophy again appear glamorous. It's not that glamorous, and I don't think it was very glamorous to invent 0 either, whoever did that, but it was pretty important.

I'm not sure what good this ranting of mine is supposed to do, though.

I'm not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I'm always aware that it needs to be dealt with somehow. For one thing, it's a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go "ah, yes, empiricism is indeed a thing, it goes in that slot in the theory". You can't understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what's going on with them.

The metaphysics of quality, of which my RP is a much-altered instance, is an empiricist theory, written by someone who has taught creative writing in Uni, but who has also worked writing technical documents. The author has a pretty good understanding of evolution, social matters, computers, stuff like that. Formal logic is the only thing in which he does not seem proficient, which maybe explains why it took so long for me to analyze his theories. :)

If you want, you can buy his first book, Zen and the Art of Motorcycle Maintenance, from Amazon for the price of a pint of beer. (Tap me on the shoulder if this is considered inappropriate advertising.) You seem to be logically rather demanding, which is good. It means I should tell you that in order to attain an understanding of the MOQ that explains a lot more of the metaphysical side of RP, you should also read his second book. They are also available in every Finnish public library I have checked (maybe three or four libraries).

What more to say... Pirsig is extremely critical of the philosophical tradition starting from antiquity. I already know LW does not think highly of contemporary philosophy, and that's why I thought we might have something in common in the first place. I think we belong to the same world, because I'm pretty sure I don't belong to Culture One.

The key ideas in the LW approach are that you're running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at any chance you get

Okay, but nobody truly understands that hairball, if it's the brain.

the end result of what you're trying to do should be a computable algorithm.

That's what I'm trying to do! But it is not my only goal. I'm also trying to have at least some discourse with Culture One, because I want to finish a thing I began. My friend is currently in the process of writing a formal definition related to that thing, and I won't get far with the algorithm approach before he's finished it and is available for something else. But we are actually planning that. I'm not bullshitting you or anything. We have been planning to do that for some time already. And it won't be fancy at first, but I suppose it could get better and better the more we work on it, or the approach would perhaps prove a failure, but that, again, would be an interesting result. Our approach is maybe not easily understood, though...

My friend understands philosophy pretty well, but he's not extremely interested in it. I have this abstract model of how this algorithm thing should be done, but I can't prove to anyone that it's correct. Not right now. It's just something I have developed by analyzing an unusual metaphysical theory for years. The reason my friend wants to do this apparently is that my enthusiasm is contagious and he enjoys maths for the sake of maths itself. But I don't think I can convince people to do this with me on the grounds that it would be useful! Some time ago, people thought number theory was a completely useless but somehow "beautiful" form of mathematics. Now the products of number theory are used in top-secret military encryption, but the point is, nobody who originally developed number theory could have convinced anyone the theory would have such use in the future. So, I don't think I can have people working with me in hopes of attaining grand personal success. But I think I could meet someone who finds this kind of activity very enjoyable.

The "state basic assumptions" approach is not good in the sense that it would go all the way to explaining RP. It's maybe a good starter, but I can't really transform RP into something that could be understood from an O point of view. That would be like having to express the equation x + 7 = 20 to you only in terms of x + y = 20. You couldn't make any sense of that.

I really have to go now, actually I'm already late from somewhere...

comment by Tuukka_Virtaperko · 2012-01-16T18:18:01.109Z · score: 0 (0 votes) · LW · GW

I commented on Against Modal Logics.

comment by Tuukka_Virtaperko · 2012-01-15T18:12:59.574Z · score: 0 (2 votes) · LW · GW

A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.

You want a sweater. I give you a baby sheep, and it is the only baby sheep you have ever seen that is not completely lame or retarded. You need wool to produce the sweater, so why are you disappointed? Look, the mathematical part of the theory is something we wrote less than a week ago, and it is already better than any theory of this type I have ever heard of (three or four). The point is not that this would be excruciatingly difficult. The point is that for some reason, almost nobody is doing this. It probably has something to do with the severe stagnation in the field of philosophy. The people who could develop philosophy find the academic discipline so revolting they don't.

I did not come to LessWrong to tell everyone I have solved the secrets of the universe, or that I am very smart. My ineptitude in math is the greatest single obstacle in my attempts to continue development. If I didn't know exactly one person who is good at math and wants to do this kind of work with me, I might be in an insane asylum, but no more about that. I came here because this is my life... and even though I greatly value the MOQ community, everyone on those mailing lists is apparently even less proficient in maths and logic than I am. Maybe someone here thinks this is fun and wants to have a fun creative process with me.

I would like to write a few of those 100 000 pages that we need. I don't get your point. You seem to require me to have written them before I have written them.

My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like "morality", then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the corresponding mental concept for the word "morality" like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.

Do you expect to build the digital sauce kernel without any kind of a plan - not even a tentative one? If not, a few pages of extremely abstract formulae is all I have now, and frankly, I'm not happy about that either. I can't teach you much of anything you seem interested in, but I could really use some discussion with interested people. And you have already been helpful. You don't need to consider me someone who is aggressively imposing his views on individual people. I would love to find people who are interested in these things, because there are so few of them.

I had a hard time figuring out what you mean by basic assumptions, because I've been doing this for such a long time that I tend to forget what kind of metaphysical assumptions are generally held by people who like science but are uninterested in metaphysics. I think I've now caught up with you. Here are some basic assumptions.

  • RP is about definable things. It is not supposed to make statements about undefinable things - not even that they don't exist, as you seem to believe.
  • Humans are before anthropology in RP. The former is in O2 and the latter in O4. I didn't know how to tell you that because I didn't know you wanted to hear that and not some other part of the theory in order to not go whaaa. I'd need to tell you everything but that would involve a lot of metaphysics. But the theory is not a theory of the history of the world, if "world" is something that begins with the Big Bang.
  • From your empirical scientific point of view, I suppose it would be correct to state that RP is a theory of how the self-conscious part of one person evolves during his lifetime.
  • At least in the current simple instance of RP, you don't need to know anything about the metaphysical content to understand the math. You don't need to go out of math mode, because there are no nonstandard metaphysical concepts among the formulae.
  • If you do go out of math mode and want to know what the symbols stand for, I think that's very good. But this can only be explained to you in terms of metaphysics, because empirical science simply does not account for everything you experience. Suppose you stop by the grocery store. Where's the empirical theory that accounts for that? Maybe some general sociological theory would. But my point is, no such empirical theory is actually implemented. You don't acquire a scientific explanation for the things you did in the store. Still you remember them. You experienced them. They exist in your self-conscious mind in some way, which is not dependent on your conception of the relationship between topology and model theory, or on your understanding of why fission of iron does not produce energy, or of how one investor could single-handedly affect whether a country joins the Euro. From your personal, what you might perhaps call "subjective", point of view, it does not even depend on your conception of cognitive science, unless you actually apply that knowledge to it. You probably don't do that all the time, although you do it sometimes.
  • I don't subscribe to any kind of "subjectivism", whatever that might be in this context, or idealism, in the sense that something like that would be "true" in a meaningful way. But you might agree that when trying to develop the theory underlying self-conscious phenomenal and abstract experience, you can't begin from the Big Bang, because you weren't there.
  • You could use RP to describe a world you experience in a dream, and the explanation would work as well as when you are awake. Physical theories don't work in that world. For example, if you look at your watch in a dream, then look away, and look at it again, the watch may display a completely different time. Or the watch may function, but when you take it apart, you find that instead of clockwork, it contains something a functioning mechanical watch will not contain, such as coins.
  • RP is intended to relate abstract thought (O, N, S) to sensory perceptions, emotions and actions (R), but to define all relations between abstract entities to other abstract entities recursively.
  • One difference between RP and the empiric theories of cosmology and such, that you mentioned, is that the latter will not describe the ability of person X to conceptualize his own cognitive processess in a way that can actually be used right now to describe what, or rather, how, some person is thinking with respect to abstract concepts. RP does that.
  • RP can be used to estimate the metaphysical composure of other people. You seem to place most of the questions you label "metaphysical" or "philosophical" in O.
  • I don't yet know if this forum tolerates much metaphysical discussion, but my theory is based on about six years of work on the Metaphysics of Quality. That is not mainstream philosophy and I don't know how people here will perceive it. I have altered the MOQ a lot. Its latest "authorized" variant, from 1991, decisively included mostly just the O patterns. Analyzing the theory was very difficult for me in general. But maybe I will confuse people if I say nothing about the metaphysical side. So I'll think about what to say...
  • RP is not an instance of relativism (except in the Buddhist sense), absolutism, determinism, indeterminism, realism, antirealism or solipsism. Also, I consider all those theories to be some kind figures of speech, because I can't find any use for them except to illustrate a certain point in a certain discussion in a metaphorical fashion. In logical analysis, these concepts do not necessarily retain the same meaning when they are used again in another discussion. These concepts acquire definable meaning only when detached from the philosophical use and being placed within a specific context.
  • Structurally RP resembles what I believe computer scientists call context-free languages, or programming languages with dynamic typing. I am not yet sure of the exact definition of the former, but having written a few programs, I do understand what it means to do typing at run time. The Western mainstream philosophical tradition does not seem to include any theories that would be analogues of these computer science topics.
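For concreteness, the two computer-science notions mentioned above can be sketched in a few lines of Python (my own illustrations, not drawn from RP): a recognizer for the context-free language of balanced parentheses, and a function whose argument type is inspected only at run time.

```python
# Balanced parentheses: the textbook example of a context-free language.
# No finite-state machine can recognize it; a counter (equivalently, a
# recursive grammar rule S -> (S)S | empty) can.
def balanced(s):
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:      # a closer with no matching opener
                return False
        else:
            return False       # reject non-parenthesis characters
    return depth == 0

# Dynamic typing: the type of x is inspected only at run time,
# when the call actually happens.
def type_name(x):
    return type(x).__name__

assert balanced("(()())")
assert not balanced("())(")
assert type_name(42) == "int"
assert type_name("42") == "str"
```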

I have read GEB but don't remember much. I'll recap what a quine is. I tend to need to discuss mathematical things with someone face to face before I understand them, which slows down progress.

The cat/line thing is not very relevant, but apparently I didn't remember the experiment right. However, if the person and the robot could not see the lines at the same time for some reason - such as the robot needing to operate the scanner and thus not seeing inside the scanner - the robot could alter the person's brain to produce a very strong response to parallel lines in order to verify that the screen inside the scanner, which displays the lines, does not malfunction, is not unplugged, the person is not blind, etc. There could be more efficient ways of finding such things out, but if the robot has replaceable hardware and can thus live indefinitely, it has all the time in the world...

comment by Tuukka_Virtaperko · 2012-01-15T23:39:03.664Z · score: 0 (0 votes) · LW · GW

According to the abstract, the scope of the theory you linked is a subset of RP. :D I find this hilarious because the theory was described as "ridiculously broad". It seems to attempt to encompass all of O, and may contain interesting insight my work clearly does not contain. But RP defines a certain scope of things, and everything in this article seems to belong to O, with perhaps some N, without clearly differentiating the two. S is missing, which is rather usual in science. From the scientific point of view, it may be hard to understand what Buddhists could conceivably believe to achieve by meditation. They have practiced it for millennia, yet they did not do brain scans that would have revealed its beneficial effects, and they did not administer questionnaires and compile the results into statistics either. But they believed it is good to meditate, and were not very interested in knowing why it is good. That belongs to the realm of S.

In any case, this illustrates an essential feature of RP. It's not so much a theory about "things" (cars, flowers, finances) as a theory about what the most basic kinds of things are, or about what options for the scope of any theory or statement are intelligible. It doesn't currently do much more because the algorithm part is missing. It's also not necessarily perfect or anything like that. If something apparently coherent cannot be included in the scope of RP in a way that makes sense, maybe the theory needs to be revised.

Perhaps I could give a weird link in return. This is written by someone who is currently a Professor of Analytic Philosophy at the University of Melbourne. I find the theory to mathematically outperform Langan's in that it actually has mathematical content instead of some sort of a sketch. The writer expresses himself coherently and appears to understand in what style people expect to read that kind of text. But the theory does not recurse in interesting ways. It seems quite naive and simple to me and ignores the symbol grounding problem. It is practically an N-type theory, which only allegedly has S or O content. The writer also seems to make exaggerated interpretations of what Nagarjuna said. These exaggerated interpretations lead to making the same assumptions which are the root of the contradiction in the CTMU, but The Structure of Emptiness is not described as a Wheeler-style reality theory, so in that paper the assumptions do not lead to a contradiction, although they still seem to misunderstand Nagarjuna.

By the way, I have thought about your way of asking for basic assumptions. I guess I initially confused it with you asking for some sort of axioms, but since you weren't interested in the formalisms, I didn't understand what you wanted. But now I have the impression that you asked me to make general statements, readily understood from the O viewpoint, of what the theory can do, and I think it has been an interesting approach for me, because I didn't use it in the MOQ community, which would have been unlikely to request that approach.

comment by Risto_Saarelma · 2012-01-14T16:12:15.945Z · score: 1 (1 votes) · LW · GW

I'll address the rest in a bit, but about the notation:

Questions to you:

  • Is T -> U the Cartesian product of T and U?
  • What is *?

T -> U is a function from set T to set U. P* means a list of elements of set P; the difference from a set is that the elements of a list are in a specific order.

The notation as a whole was a somewhat fudged version of intelligent agent formalism. The idea is to set up a skeleton for modeling any sort of intelligent entity, based on the idea that the entity only learns things from its surroundings through a series of perceptions, which might for example be a series of matrices corresponding to the images a robot's eye camera sees, and can only affect its surroundings by choosing an action it is capable of, such as moving a robotic arm or displaying text to a terminal.
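A minimal Python sketch of that agent skeleton (the type names and the trivial agent below are my own illustrations, not part of any standard formalism or library):

```python
from typing import Callable, List

# Placeholder types for illustration; in a real robot, Percept might be
# an image matrix and Action a motor command.
Percept = str
Action = str

# An agent is a function from a perception history (a list, i.e. P*)
# to a single action: P* -> Action.
Agent = Callable[[List[Percept]], Action]

def echo_agent(history: List[Percept]) -> Action:
    # Trivial agent: act on the most recent perception only.
    return history[-1] if history else "wait"

assert echo_agent(["hello"]) == "hello"
assert echo_agent([]) == "wait"
```

All of the interesting difficulty, of course, hides inside the body of the function, exactly as the next paragraph says.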

The agent model is pretty all-encompassing, but also not that useful except as the very first starting point, since all of the difficulty is in the exact details of the function that turns the likely massive amount of data in the perception history into a well-chosen action that efficiently furthers the goals of the AI.

Modeling AIs as the function from a history of perceptions to an action is also related to thought experiments like Ned Block's Blockhead, where a trivial AI that passes the Turing test with flying colors is constructed by merely enumerating every possible partial conversation up to a certain length, and writing up the response a human would make at that point of that conversation.
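A toy version of Blockhead can be sketched directly in this formalism. The two table entries below are invented for illustration; the real thought experiment's table enumerates every possible conversation up to some length.

```python
# A toy "Blockhead": a conversational program that is nothing but a
# finite lookup table from partial conversations to canned replies.
REPLIES = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "How are you?"): "Fine, thanks.",
}

def blockhead(conversation):
    # No reasoning happens: just look up the conversation so far.
    return REPLIES.get(tuple(conversation), "I don't follow.")

assert blockhead(["Hello"]) == "Hi there!"
assert blockhead(["Hello", "Hi there!", "How are you?"]) == "Fine, thanks."
```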

Scott Aaronson's Why philosophers should care about computational complexity proposes to augment the usual high-level mathematical frameworks with some limits to the complexity of the black box functions, to make the framework reject cases like Blockhead, which seem to be very different from what we'd like to have when we're looking for a computable function that implements an AI.
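A back-of-the-envelope calculation shows why complexity limits rule Blockhead out. Assuming, purely for illustration, a 10,000-word vocabulary and conversations up to 50 words long:

```python
# Rough size of Blockhead's table: one entry per possible conversation
# prefix. Even with these modest assumed parameters, the table dwarfs
# the observable universe (~10^80 atoms), so no physically realizable
# system can implement it.
vocab, max_len = 10_000, 50
table_entries = sum(vocab**n for n in range(1, max_len + 1))
assert table_entries > 10**200
```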

comment by gregconen · 2010-02-02T22:29:15.502Z · score: 2 (2 votes) · LW · GW

But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense.

You can't rely too much on intelligence tests, especially in the super-high range. The tester himself admitted that Langan fell outside the design range of the test, so the listed score was an extrapolation. Further, IQ measurements, especially at the extremes, and especially from only a single test (and as far as I could tell from the Wikipedia article, he was only tested once), measure test-taking ability as much as general intelligence.

Even if he is the most intelligent man alive, intelligence does not automatically mean that you reach the right answer. All evidence points to it being rubbish.

comment by Paul Crowley (ciphergoth) · 2010-02-02T22:24:23.701Z · score: 1 (3 votes) · LW · GW

Chris Langan is the smartest known living man

Many smart people fool themselves in interesting ways thinking about this sort of thing. And of course, when predicting general intelligence based on IQ, remember to account for regression to the mean: if there's such a thing as the smartest person in the world by some measure of general intelligence, it's very unlikely it'll be the person with the highest IQ.

comment by advael · 2015-06-09T17:14:09.561Z · score: 0 (0 votes) · LW · GW

A powerful computer with a bad algorithm or bad information can produce a high volume of bad results that are all internally consistent.

(IQ may not be directly analogous to computing power, but there are a lot of factors that matter more than the author's intelligence when assessing whether a model bears out in reality.)

comment by Saviorself138 · 2010-02-02T19:03:04.900Z · score: -9 (11 votes) · LW · GW

Id say the best way to spend the rest of this year is to fry your brain on acid over and over again.

comment by thomblake · 2010-02-02T19:06:28.440Z · score: 4 (4 votes) · LW · GW

N.B. - LSD doesn't do something well characterized by "fry your brain" (most of the time). And if you meant acid in the chemical sense, that was very bad advice.

comment by aausch · 2010-02-02T19:36:59.392Z · score: 0 (0 votes) · LW · GW

Does the "LSD fries your brain" meme have any kind of positive effect?

comment by Saviorself138 · 2010-02-02T19:27:01.666Z · score: -6 (8 votes) · LW · GW

yeah, I know. I was just being a jackass because that guy's post was ridiculous

comment by JGWeissman · 2010-02-02T19:36:23.022Z · score: 4 (4 votes) · LW · GW

This is the Welcome Thread, for people to introduce themselves. People should have more leeway to talk about personal interests that would elsewhere be considered off topic.

comment by FeministX · 2009-11-07T08:06:12.074Z · score: 5 (7 votes) · LW · GW

Hi,

I am FeministX of FeministX.blogspot.com. I found this blog after Eliezer commented on my site. While my online name is FeministX, I am not a traditional feminist, and many of my intellectual interests lie outside of feminism.

Lately I am interested in learning more about the genetic and biological basis for individual and group behavior. I am also interested in cryonics and transhumanism. I guess this makes me H+BD.

I am a rationalist by temperament and ideology. Why am I a rationalist? To ask is to answer the question. A person who wishes to accurately comprehend the merits of a rationalist perspective is already a rationalist. It's a deeply ingrained thinking style which has grown with me since the later days of my childhood.

I invite you all to read my blog. I can almost guarantee that you will like it. My awesomeness is reliably appealing. (And I'm not so hard on the eyes either :) )

comment by RobinZ · 2009-11-07T14:05:29.897Z · score: 1 (1 votes) · LW · GW

Welcome!

Edit: I don't know if you were around when Eliezer Yudkowsky was posting on Overcoming Bias, but if you weren't, I'd highly, highly recommend Outside the Laboratory. Also, from Yudkowsky's own site, The Simple Truth and An Intuitive Explanation of Bayes' Theorem.

And do check out some of the top scoring Less Wrong articles.

comment by XFrequentist · 2009-04-18T20:31:20.764Z · score: 5 (5 votes) · LW · GW
  • Name: Alex Demarsh
  • Age: 26
  • Education: MSc Epidemiology/Biostatistics
  • Occupation: Epidemiologist
  • Location: Ottawa, Canada
  • Hobbies: Reading, travel, learning, sport.

I found OB/LW through Eliezer's Bayes tutorial, and was immediately taken in. It's the perfect mix of several themes that are always running through my head (rationality, atheism, Bayes, etc.) and a great primer on lots of other interesting stuff (QM, AI, ev. psych., etc). The emphasis on improving decision making and clear thinking plus the steady influx of interesting new areas to investigate makes for an intoxicating ambrosia. Very nice change from many other rationality blogs, which seem to mostly devote themselves to the fun-but-eventually-tiresome game of bashing X for being stupid/illogical/evil (clearly, X is all of these things and more, but that's not the point). Generally very nice writing, too.

As for real-life impact, LW has:

  • grown my reading list exponentially,
  • made me want to become a better writer,
  • forced me to admit that my math is nowhere near where it needs to be,
  • made my unstated ultimate goal of understanding the world as a coherent whole seem less silly, and
  • altered my list of possible/probable PhD topics.

I'll put some thought into my rationalist origins story, but it may have been that while passing several (mostly enjoyable) summers as a door-to-door salesman, I encountered the absolutely horrible decision making mechanisms of lots and lots of people. It kind of made me despair for the world, and probably made me aspire to do better. But that could be a false narrative.

comment by [deleted] · 2009-04-19T18:37:29.533Z · score: 1 (1 votes) · LW · GW

del

comment by Vladimir_Nesov · 2009-04-16T21:24:33.770Z · score: 5 (5 votes) · LW · GW
  • Vladimir Nesov
  • Age: 24
  • Location: Moscow
  • MS in Computer Science, minor in applied math and physics, currently a grad student in CS (compiler technologies, static analysis of programs).

Having never been interested in AI before, I became obsessed with it about 2 years ago, after being impressed with its potential. Got a mild case of AI-induced raving insanity, have been recuperating for the last year or so, treating it with a regular dosage of rationality and solid math. The obsession doesn't seem to pass, though, which I deem a good thing.

comment by [deleted] · 2009-04-16T16:48:12.607Z · score: 5 (5 votes) · LW · GW

deleted

comment by rhollerith · 2009-04-16T18:40:01.172Z · score: 4 (4 votes) · LW · GW

Most mystics reject science and rationality (and I think I have a pretty good causal model of why that is) but there have been scientific rational mystics, e.g., physicist David Bohm. I know of no reason why a person who starts out committed to science and rationality should lose that commitment through mystical training and mystical experience if he has competent advice.

My main interest in mystical experience is that it is a hole in the human motivational system -- one of the few ways for a person to become independent from what Eliezer calls the thousand shards of desire. Most of the people in this community (notably Eliezer) assign intrinsic value to the thousand shards of desire, but I am indifferent to them except for their instrumental value. (In my experience the main instrumental value of keeping a connection to them is that it makes one more effective at interpersonal communication.)

Transcending the thousand shards of desire while we are still flesh-and-blood humans strikes me as potentially saner and better than "implementing them in silicon" and relying on cycles within cycles to make everything come out all right. And the public discourse on subjects like cryonics would IMHO be much crisper if more of the participants would overcome certain natural human biases about personal identity and the continuation of "the self".

I am not a mystic or aspiring mystic (I became indifferent to the thousand shards of my own desire a different way) but have a personal relationship of long standing with a man who underwent the full mystical experience: ecstasy 1,000,000 times greater than any other thing he ever experienced, uncommonly good control over his emotional responses, the interpersonal ability to attract trusting followers without even trying. And yes, I am sure that he is not lying to me: I had a business relationship with him for about 7 years before he even mentioned (casually, tangentially) his mystical experience, and he is among the most honest people I have ever met.

Marin County, California, where I live, has an unusually high concentration of mystics, and I have in-depth personal knowledge of more than one of them.

Mystical experience is risky. (I hope I am not the first person to tell you that, Stefan!) It can create or intensify certain undesirable personality traits, like dogmatism, passivity or a messiah complex, and even with the best advice available, there is no guarantee that one will not lose one's commitment to rationality. But it has the potential to be extremely valuable, according to my way of valuing things.

If you really do want to transcend the natural human goal system, Stefan, I encourage you to contact me.

comment by Vladimir_Nesov · 2009-04-16T21:07:07.250Z · score: 3 (3 votes) · LW · GW

Most of the people in this community (notably Eliezer) assign intrinsic value to the thousand shards of desire, but I am indifferent to them except for their instrumental value.

Not so. You don't assign value to your drives because they were inbuilt in you by evolution, you don't value your qualities just because they come as a package deal, just because you are human [*]. Instead, you look at what you value, as a person. And of the things you value, you find that most of them are evolution's doing, but you don't accept all of them, and you look at some of them in a different way from what evolution intended.

[*] Related, but overloaded with other info: No License To Be Human.

comment by rhollerith · 2009-04-16T22:13:14.171Z · score: 0 (0 votes) · LW · GW

Nesov points out that Eliezer picks and chooses rather than identifying with every shard of his desire.

Fair enough, but the point remains that it is not too misleading to say that I identify with fewer of the shards of human desire than Eliezer does -- which affects what we recommend to other people.

comment by Bongo · 2009-04-17T12:15:46.757Z · score: 1 (1 votes) · LW · GW

I would be interested to know what it is then that you desire nowadays.

And does everyone who gives up the thousand shards of desire end up desiring the same thing?

comment by rhollerith · 2009-04-17T22:11:09.550Z · score: 0 (0 votes) · LW · GW

Bongo asks me what is it then that I desire nowadays?

And my answer is, pretty much the same things everyone else desires! There are certain things you have to have to remain healthy and to protect your intelligence and your creativity, and getting those things takes up most of my time. Also, even in the cases where my motivational structure is different from the typical, I often present a typical facade to the outside world because typical is comfortable and familiar to people whereas atypical is suspicious or just too much trouble for people to learn.

Bongo, the human mind is very complex, so the temptation is very great to oversimplify, which is what I did above. But to answer your question, there is a ruthless hard part of me that views my happiness and the shards of my desire as means to an end. Kind of like money is also a means to an end for me. And just as I have to spend some money every day, I have to experience some pleasure every day in order to keep on functioning.

A means to what end? I hear you asking. Well, you can read about that. The model I present on the linked page is a simplification of a complex psychological reality, and it makes me look more different from the average person than I really am. Out of respect for Eliezer's wishes, do not discuss this "goal system zero" here. Instead, discuss it on my blog or by private email.

Now to bring the discussion back to mysticism. My main interest in mysticism is that it gives the individual flexibility that can be used to rearrange or "rationalize" the individual's motivational structure. A few have used that flexibility to rearrange emotional valences so that everything is a means to one all-embracing end, resulting in a sense of morality similar to mine. But most use it in other ways. One of the most notorious ways to use mysticism is to develop the interpersonal skills necessary to win a person's trust (because the person can sense that you are not relating to him in the same anxious or greedy way that most people relate to him) and then, once you have his trust, to teach him to overcome unnecessary suffering. This is what most gurus do. If you want a typical example, search Youtube for Gangaji, a typical mystic skilled at helping ordinary people reduce their suffering.

I take you back to the fact that a full mystical experience is 1,000,000 times more pleasurable than anything a person would ordinarily experience. That blots out or makes irrelevant everything else that is happening to the person! So the person is able to sit under a tree without moving for weeks and months while his body slowly rots away. People do that in India: a case was in the news a few years ago.

Of course he should get up from sitting under the tree and go home and finish college. Or fetch wood, carry water. Or whatever it is he needs to do to maintain his health, prosperity, intelligence and creativity. But the experience of sitting under the tree can put the petty annoyances and the petty grievances of life in perspective so that they do not have as much influence on the person's thinking and behavior as they used to. Which is quite useful.

comment by [deleted] · 2009-04-16T19:14:23.947Z · score: 1 (1 votes) · LW · GW

deleted

comment by Paul Crowley (ciphergoth) · 2009-04-16T16:59:53.479Z · score: 1 (1 votes) · LW · GW

I've always thought of a mystic as someone who likes mysterious answers to mysterious questions - I guess you mean something else by it?

comment by [deleted] · 2009-04-16T17:16:30.791Z · score: 2 (2 votes) · LW · GW

deleted

comment by pluto · 2016-02-22T16:11:26.149Z · score: 4 (4 votes) · LW · GW

Hello, my friends. I'm a Brazilian man, fully blind and gay...

I discovered Fanfiction.net, HP MOR and LessWrong. I hope to learn more :)

comment by DanielH · 2012-07-11T03:00:02.452Z · score: 4 (6 votes) · LW · GW

TL;DR: I found LW through HPMoR, read the major sequences, read stuff by other LWers including the Luminosity series, and lurked for six months before signing up.

My name, as you can see above if you don't have the anti-kibitzing script, is Daniel. My story of how I came to self-identify as a rationalist, and then how I later came to be a rationalist, breaks down into several parts. I don't remember the order of all of them.

Since well before I can remember (and I have a fairly good long-term memory), I've been interested in mathematics, and later science. One of my earliest memories, if not my earliest, is of me, on my back, under the coffee table (well before I could walk). I had done this multiple times, I think usually with the same goal, but one time in particular sticks in my memory. I was kicking the underside of the coffee table, trying to see what was moving. This time, I moved it, got out, and saw that the drawer of the coffee table was open; this caused me to realize that this was what was moving, and I don't think I crawled under there again.

Many years later, I discovered Star Trek TNG, and from that learned a little about Star Trek. I wanted to be more rational from the role models of Data and Spock, and I did not realize at the time how non-rational Spock was. It was very quickly, however, that I realized that emotions are not the opposite of logic, and the first time I saw the TOS episode that Luke references [here](http://facingthesingularity.com/2011/why-spock-is-not-rational/), I realized that Spock was being an idiot (though at the time I thought it was unusually idiotic, not standard behavior; I hadn't and still haven't seen much of the original series). It was around this time that I thought I myself was "rational" or "logical".

Of course, it wasn't until much later that I actually started learning about rationalism. It was around Thanksgiving 2011, while I was on fanfiction.net looking for a Harry Potter fanfic I'd seen before and liked (I still haven't found it), that I stumbled upon Harry Potter and the Methods of Rationality. I read it, and I liked it, and it slowly took over my life. I decided to look for other works by that author, and followed the link to Less Wrong because it was recommended (not yet realizing that the Sequences were written by the same person as HPMoR). Since then, I've read the Sequences and most other stuff written by EY (that's still easily accessible and not removed), and it all made sense. I finally understood that yes, in fact, I and the other "confused" students WERE correct in that probability class where the professor said that "the probability that this variable is in this interval" didn't exist; I noticed times when I was conforming instead of thinking, and I noticed some accesses of cached thoughts. At first I was a bit skeptical of the overly atheistic bit (though I'd always had doubts and was pretty much agnostic-though-I-wouldn't-admit-it), until I read the articles about how unlikely the hypothesis of God was and thought about them.

I did not know much about Quantum Mechanics when I read that sequence, but I had heard of the "waveform collapse" and had not understood it, and I realized fairly quickly how that was an unnecessary hypothesis. When I saw one of the cryonics articles (I'm cryocrastinating, trying to get my parents to sign up) taking the idea seriously, I thought "Oh, duh! I should have seen that the first time I heard of it, but I was specifically told that the person involved was an idiot and it didn't work, so I never reevaluated" (later I remembered my horror at Picard's attitude in the relevant TNG episode, and I've always only believed in the information-theoretic definition of "death").

After I read the major sequences, I read some other stuff I found through the Wiki and through googling "Less Wrong __" for various things I wanted the LW community opinion on. I found my favorite LW authors (Yvain, Luke, Alicorn, and EY) and read other things by them (Facing the Singularity and Luminosity). I subscribed to the RSS feed (I don't know how that'll work when I want to strictly keep to anti-kibitzing), and I now know that I want to help SIAI as much as possible (I was planning to be a computer scientist anyway); I'm currently reading through a lot of their recommended reading. I'm also about to start GEB, followed by Jaynes and Pearl. I plan to become a lot more active comment-wise, but probably not post-wise for a while yet. I may even go to one of the meetups if one is held somewhere I can get to.

Now we've pretty much caught up to the present. Let's see... I read some posts today, I read Luke's Intuitive Explanation to EY's Intuitive Explanation, I found an error in it (95% confidence), I sent him an email, and I decided to sign up here. Now I'm writing this post, and I'm supposed to put some sort of conclusion on it. I estimate that the value of picking a better conclusion is not that high compared to the cost, so I'll just hit the submit button after this next period.

Edit: Wow, I just realized how similar my story is to parts of BecomingMyself's. I swear we aren't the same person!

comment by shminux · 2012-07-11T04:57:42.836Z · score: 1 (5 votes) · LW · GW

I did not know much about Quantum Mechanics when I read that sequence, but I had heard of the "waveform collapse" and had not understood it, and I realized fairly quickly how that was an unnecessary hypothesis.

I recommend learning QM from textbooks, not blogs. This applies to most other subjects, as well.

comment by DanielH · 2012-07-18T02:03:47.254Z · score: 3 (3 votes) · LW · GW

I did not mean to imply that I had actual knowledge of QM, just that I had more now than before. If I was interested in understanding QM in more detail, I would take a course on it at my college. It turns out that I am so interested, and that I plan to take such a course in Spring 2013.

I also know that there are people on this site, apparently a greater percentage than with similar issues, who disagree with EY about the Many Worlds Interpretation. I have not been able to follow their arguments, because the ones I have seen generally assume a greater knowledge of quantum mechanics than I possess. Therefore, MWI is still the most reasonable explanation that I have heard and understood. Again, though, that means very little. I hope to revisit the issue once I have some actual background on the subject.

EDIT: To clarify, "similar issues" means issues where the majority of people have one opinion, such as theism, the Copenhagen Interpretation, or cryonics not being worth considering, while Less Wrong's general consensus is different.

comment by beoShaffer · 2012-10-08T04:34:42.824Z · score: 0 (0 votes) · LW · GW

Hi Daniel, do you follow Yvain's blog? Also, the term is rationality, not rationalism. I wouldn't nitpick except that rationalism already refers to a fairly major thing in mainstream philosophy.

comment by kajro · 2012-06-23T00:06:26.476Z · score: 4 (4 votes) · LW · GW

I'm a 20-year-old mathematics/music double major at NYU. Mainly here because I want to learn how to wear Vibrams without getting self-conscious about it.

comment by Kevin · 2012-06-23T01:11:12.193Z · score: 3 (3 votes) · LW · GW

I get nothing but positive social affect from Ninja Zemgears. http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=zemgear

Cheaper than Vibrams, more comfortable, less durable, less agile, much friendlier looking.

comment by kajro · 2012-06-23T03:01:27.481Z · score: 0 (0 votes) · LW · GW

Those combined with some toe socks and I have exactly what I want. I might actually order these... Thanks!

comment by Kevin · 2012-06-23T22:40:59.624Z · score: 0 (0 votes) · LW · GW

They actually work well enough with normal socks, scrunched in to separate the big toe.

comment by Alicorn · 2012-06-23T01:37:51.069Z · score: 0 (2 votes) · LW · GW

The ninja shoes are much less abominable than Vibrams.

comment by John_Maxwell (John_Maxwell_IV) · 2012-06-23T01:05:22.523Z · score: 3 (3 votes) · LW · GW

Hi there!

This might help: http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo.Medvec.Sav.pdf

comment by kajro · 2012-06-23T03:05:46.652Z · score: 3 (3 votes) · LW · GW

Is this some kind of LW hazing, linking to academic papers in an introduction thread? (I joke, this looks super interesting).

comment by John_Maxwell (John_Maxwell_IV) · 2012-06-23T03:24:02.531Z · score: 1 (1 votes) · LW · GW

It was either that or the Psychology Today article. (Pretty sure Psychology Today is where I learned about the concept, but googling found the paper.)

comment by kmacneill · 2012-02-15T18:52:04.480Z · score: 4 (4 votes) · LW · GW

Hey, I've been an LW lurker for about a year now, and I think it's time to post here. I'm a cryonicist, rationalist and singularity enthusiast. I'm currently working as a computer engineer and I'm thinking maybe there is more I can do to promote rationality and FAI. LW is an incredible resource. I have a mild fear that I don't have enough rigorous knowledge about rationality concepts to contribute anything useful to most discussion.

LW has changed my life in a few ways but the largest are becoming a cryonicist and becoming polyamorous (naturally leaned toward this, though). I feel like I am in a one-way friendship with EY, does anyone else feel like that?

comment by Alex_Altair · 2012-02-16T17:05:04.194Z · score: 2 (2 votes) · LW · GW

I am also in a one-way friendship with EY.

comment by Dmytry · 2011-12-29T18:56:04.912Z · score: 4 (4 votes) · LW · GW

I am a video game developer. I find most of this site fairly interesting, although once in a while I disagree with the description of some behaviour as irrational, or with the explanation projected onto that behaviour (when I happen to see a pretty good reason for it, perhaps strategic, or a matter of general policy/cached decision).

comment by DanPeverley · 2011-07-18T02:36:10.383Z · score: 4 (4 votes) · LW · GW

Salutations, LessWrong!

I am Daniel Peverley, I lurked for a few months and joined not too long ago. I was first introduced to this site via HPatMOR, my first and so far only foray into the world of fan-fiction. I was raised as a Mormon, and I've been a vague unbeliever for a few years, but the information on this site really solidified the doubts and problems I had with my religion. Just knowing how to properly label common logical fallacies has been vastly helpful in my life, and a few of the posts on social dynamics have likewise been of great utility. I'm seventeen, headed into my senior year of high school, and on track to attend a high-end university. My hobbies include Warhammer 40k, watching anime, running, exercising, studying Chinese, video games, webcomics, and reading and writing speculative fiction and poetry. I live in the skeptic-impoverished Salt Lake City area. I look forward to posting, but I'll probably LURK MOAR for a while just to make sure what I have to say is worth reading.

comment by jsalvatier · 2011-07-27T20:07:42.006Z · score: 0 (0 votes) · LW · GW

Welcome :)

comment by artsyhonker · 2010-12-28T13:36:56.132Z · score: 4 (4 votes) · LW · GW

I came across a post on efficiency of charity, and joined in order to be able to add my comments. I'm not sure I would identify myself as a rationalist at all, though I share some of what I understand to be rationalist values.

I am a musician and a teacher. I'm also a theist, though I hope to be relatively untroublesome about this and I have no wish to proselytize. Rather, I'm interested in exploring rational ways of discussing or thinking about moral and ethical issues that have more traditionally been addressed within a religious framework.

comment by Deltamatic · 2010-12-22T11:06:30.087Z · score: 4 (4 votes) · LW · GW

Hello all. I want to sign up for cryonics, but am not sure how. Is there a guide? What are the differences in the process for minors? [I pressed enter in the comment box but there aren't any breaks in the comment itself; how do you make breaks between lines in comments?] I'm a sixteen-year-old male from Louisiana in the US. I was raised Christian and converted to atheism a few months ago. I found Less Wrong from Eliezer's site--I don't remember how I found that--and have been lurking and reading sequences since.

comment by [deleted] · 2011-10-31T03:58:34.952Z · score: 2 (2 votes) · LW · GW

Contact Rudi Hoffman. Today.

Cryonics is expensive on a sixteen-year-old's budget. Rudi can get you set up with something close to your price range. You can expect it to be the cost of life insurance, plus maybe $200 a year, with the Cryonics Institute. If you're in good health, my vague expectation is that your life insurance will be on the order of $60/month.

This is judging by my experiences and assuming that these things scale linearly and that CI hasn't significantly changed their rates.
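The rough figures above (which are the commenter's own estimates, not official CI rates) work out to a ballpark annual cost like this:

```python
# Illustrative arithmetic only, using the commenter's rough numbers:
# ~$60/month for term life insurance plus ~$200/year in membership dues.
# Actual Cryonics Institute and insurance rates vary by age and health.
monthly_insurance = 60   # assumed life insurance premium, USD/month
annual_dues = 200        # assumed yearly cryonics membership dues, USD

annual_total = monthly_insurance * 12 + annual_dues
print(annual_total)  # 920
```

So on these assumptions, the total is under $1,000 a year, which is the point being made: it is expensive on a teenager's budget but not out of reach.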

comment by ArisKatsaris · 2011-10-31T04:27:22.305Z · score: 0 (0 votes) · LW · GW

Putting two spaces after a line (before the line break) will produce a single line break, like this:
Line One
Line Two
Line Three

Putting two returns will produce a new paragraph like this:

Paragraph 1

Paragraph 2

Paragraph 3
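The two rules above are standard Markdown behavior and can be sketched in a few lines of Python. This is a toy approximation for illustration, not the site's actual comment renderer:

```python
def md_breaks(text):
    """Toy sketch of Markdown's two line-break rules:
    a blank line (two returns) starts a new paragraph, and a line
    ending in two spaces gets a hard <br /> break within a paragraph.
    A lone newline with no trailing spaces is just soft-wrapped."""
    out = []
    for para in text.split("\n\n"):           # blank line => new paragraph
        lines = para.split("\n")
        body = ""
        for i, line in enumerate(lines):
            if line.endswith("  ") and i < len(lines) - 1:
                body += line.rstrip() + "<br />\n"   # hard line break
            else:
                body += line
                if i < len(lines) - 1:
                    body += "\n"                      # soft wrap, no break
        out.append("<p>" + body + "</p>")
    return "\n".join(out)

print(md_breaks("Line One  \nLine Two"))        # single <p> with a <br />
print(md_breaks("Paragraph 1\n\nParagraph 2"))  # two separate <p> blocks
```

Running this shows why pressing enter once appears to do nothing: a single newline without trailing spaces produces no break at all in the rendered output.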

comment by quinsie · 2011-10-31T04:19:28.802Z · score: 0 (0 votes) · LW · GW

You make breaks in the comment box with two returns.

Just one will not make a line break.

As to your actual question, you should probably check your state's laws about wills. I don't know if Louisiana allows minors to write a will for themselves, and you will definitely want one saying that your body is to be turned over to the cryonics organization of your choice (usually either the Cryonics Institute or Alcor) upon your death. You'll also probably want to get a wrist bracelet or dog tags informing people to call your cryonicist in the event that you're dead or incapacitated.

comment by Oscar_Cunningham · 2010-12-22T12:04:13.599Z · score: 0 (0 votes) · LW · GW

[I pressed enter in the comment box but there aren't any breaks in the comment itself; how do you make breaks between lines in comments?]

Press enter twice. I don't know why.

comment by jkaufman · 2010-11-04T21:57:19.430Z · score: 4 (4 votes) · LW · GW

Jeff Kaufman. Working as a programmer doing computational linguistics in the Boston area. Found Less Wrong twice: first through the Intuitive Explanation of Bayes' Theorem and then again recently through "HP and the Methods of Rationality". I value people's happiness, valuing that of those close to me more than that of strangers, but I value strangers' welfare enough that I think I have an obligation to earn as much as I can and live on as little as I can so I can give more to charity.

comment by Vaniver · 2010-10-27T02:09:43.227Z · score: 4 (4 votes) · LW · GW

Hello!

I take Paul Graham's advice to keep my identity small, and so describing myself is... odd. I'm not sure I consider rationalism important enough to make it into my identity.

The most important things, I think, are that I'm an individualist and an empiricist. I considered "pragmatist" for the second example, and perhaps that would be more appropriate.

Perhaps vying for third place is that I'm an optimizer. I like thinking about things, I like understanding systems, I like replacing parts with better parts. I think that's what I enjoy about LW; there's quite a bit of interest in optimization around here. Now, how to make that function better... :P

comment by shokwave · 2010-10-14T11:45:35.174Z · score: 4 (4 votes) · LW · GW

Hi! 21 year old university dropout located in Melbourne, Australia. Coming from a background of mostly philosophy, linguistics, and science fiction but now recognising that my dislike for maths and hard science comes from a social dynamic at my high school: humanities students were a separate clique from the maths/sci students and both looked down on each other, and I bought into it to gain status with my group. So that's one major thing that LW has done for me in the few months I've been reading it: helped me recognise and eventually remove a rationalisation that was hurting my capabilities.

That explains why I stayed here; I think I first got here through something about the Agreement Theorem, as well as reading this pretty interesting Harry Potter fanfic. I'd gotten through about ten chapters when I checked the author and thought it was quite odd that it was also "Less Wrong"... but if you see an odd thing once, you start seeing it everywhere, right? So I very nearly chalked it up to some sort of perceptual sensitivity. The point about knowing biases making you weaker is very clear to me from that.

Anyway, I'm somewhat settled on being an author as a profession, I'd like to add to LessWrong in the capacity of exploring current philosophical questions that impinge on rationality, truthseeking, and understanding of the mind, and I would like to take from LessWrong the habit of being rational at all times.

comment by wedrifid · 2010-10-14T17:51:35.294Z · score: 0 (0 votes) · LW · GW

Hi! 21 year old university dropout located in Melbourne, Australia

Another from Melbourne! Welcome.

comment by jacob_cannell · 2010-08-24T08:45:14.650Z · score: 4 (6 votes) · LW · GW

Greetings All.

I've been a Singularitarian since my college years more than a decade ago. I still clearly remember the force with which that worldview and its attendant realizations colonized my mind.

At that time I was strongly enamored with a vision of computer graphics advancing to the point of pervasive, Matrix-like virtual reality, and of that medium becoming the creche from which superhuman artificial intelligence would arise (the Matrix of Gibson's Neuromancer, as this was before the film of the same name). Actually, I still have that vision, and although it has naturally changed, we do appear finally to be on the brink of a major revolution in graphics, and perhaps of the attendant display tech to materialize said vision.

Anyway, I studied computer graphics, immersed myself in programming and figured making a video game startup would be a good first step to amassing some wealth so that I could then do the 'real work' of promoting the Singularity and doing AI research. I took a little investment, borrowed some money, and did consulting work on the side. After four years or so the main accomplishment was taking a runner up prize in a business plan competition and paying for a somewhat expensive education. That isn't as bad as it sounds though - I did learn a good deal of atypical knowledge.

Eventually I threw in the towel with the independent route and took a regular day job as a graphics programmer in the industry. After working so much on startups I had some fun with life for a change. I went to a couple of free 'workshops' at a strange house where some unusual guys with names like 'Mystery' and 'Style' taught the game, back before Style wrote his book and that community blew up. I found some interesting roommates (not affiliated with the above), and moved into a house in the Hollywood Hills. One of our neighbors had made a fortune from a website called Sextoy.com and threw regular pool parties, sometimes swinger parties. Another regular life in LA.

Over the years I had this mounting feeling that I was wasting my life, that there was something important I had forgotten. I still read and followed some of the Singularity-related literature, but wasn't that active. But occasionally it would come back and occupy my mind, albeit temporarily. Kurzweil's TSIN reactivated my attention, and I attended the Singularity Summit in 2008 and 2010. I already had a graphics blog and had written some articles for gaming publications, but in the last few years started reading more neuroscience and AI. I have a deep respect for the brain's complexity, but I'm still somewhat surprised at the paucity of large-scale research and the concomitant general lack of success in AGI. I'm not claiming (as of yet) to have some deep revolutionary new metamathematical insight, but a graphics background gives one a particular visualizing intuition and toolbox for optimizing simulations that should come in handy.

All that being said, and even though I'm highly technical by trade, I actually think the engineering challenge is the easier part of the problem (only in relation), and I'm more concerned with the social engineering challenge. From my current reading, I gather that EY and the SIAI folks here believe that is all rolled up into the FAI task. I agree with the importance of the challenge, but I do not find the most likely hypothesis to be: SIAI develops Friendly AI before anyone else in the world develops AI in general. I do not think that SIAI currently holds >50% of the lottery tickets, not even close.

However, I do think the movement can win regardless, if we can win on the social engineering front. To me now it seems that the most likely hypothesis is that the winning ticket will be some academic team or startup in this decade or the next, and thus the winning ticket (with future hindsight) is currently held by someone young. So it is a social engineering challenge.

The Singularity challenges everything: our social institutions, politics, religion, economic infrastructure, all of our current beliefs. I share the deep concern about existential risk and Hard Takeoff scenarios, although perhaps differing in particulars with typical viewpoints I've seen on this site.

How can we get the world to wake up?

I somehow went to two Singularity Summits without ever reading LessWrong or OvercomingBias. I think I had read partly through EY's Seed AI doc at some point previously, but that was it. I went to school with some folks who are now part of LessWrong or SIAI (Anna, Steve, Jennifer), and was pointed to this site through them. I've quite enjoyed reading through most of the material so far, and I don't think I'm halfway through yet, although I don't see a completion meter anywhere.

I'm somewhat less interested in raw 'Bayesianism' as enlightenment, and in Evo Psych. I used to be more into Evo Psych when I was into the game, but I equate that with my childish years. I do believe it has some utility in understanding the brain, but not nearly as much as neuroscience or AI themselves.

Also, as an aside, I'm curious about the note for theists. From what I gather, many LWers find the Simulation Argument to work. If so, that technically makes you a deist, and theism is just another potential hypothesis. It's actually even potentially a testable hypothesis. And even without the Simulation Argument, the Singularity seriously challenges strict atheism - most plausible Singularity-aware eschatologies result in some black-hole deity spawning new universes - a god in every useful sense of the term at the end of our timeline.

I've always felt this great isolation imposed by my worldview: something one cannot discuss in polite company. Of course, that isolation was only ever self-imposed, and this site has opened my mind to the possibility that there are many now who have ventured along similar lines.

comment by Nick_Tarleton · 2010-08-25T18:57:07.843Z · score: 1 (1 votes) · LW · GW

Welcome to LW!

I'm more concerned with the social engineering challenge. From my current reading, I gather that EY and the SIAI folks here believe that is all rolled up into the FAI task.

Not entirely. Less Wrong is about raising the sanity waterline, not just recruiting FAI theorists.

Also, as an aside, I'm curious about the note for theists.

Theists in the usual supernatural sense, not the (rare, and even more rarely called 'theism') simulation or future-'god' senses.

I've always felt this great isolation imposed by my worldview: something one cannot discuss in polite company

It seems to me that there are plenty of open-minded, technical circles in which one can do this, as long as one takes basic care not to sound fanatical.

comment by NancyLebovitz · 2010-08-25T14:03:46.415Z · score: 1 (1 votes) · LW · GW

To me now it seems that the most likely hypothesis is that the winning ticket will be some academic team or startup in this decade or the next, and thus the winning ticket (with future hindsight) is currently held by someone young.

What do you think of the possibility of a government creating the first AI?

comment by jacob_cannell · 2010-08-25T18:01:37.430Z · score: 2 (2 votes) · LW · GW

It's certainly a possibility, ranging from the terrifying, if it's created as something like a central intelligence agent, to the beneficial, if it's created as a more transparent public achievement, like landing on the moon.

The potential for an arms race seems to contribute to the possibility of doom.

The government seems on par with the private sector in terms of likelihood, but I don't have a strong notion of that. At this point it is already some sort of blip on their radar, even if small.

comment by Mitchell_Porter · 2010-08-24T10:36:13.503Z · score: 1 (1 votes) · LW · GW

I do think the movement can win regardless, if we can win on the social engineering front.

What is the outcome that you want to socially engineer into existence?

How can we get the world to wake up?

What is it that you want the world to realize?

comment by jacob_cannell · 2010-08-24T22:25:06.490Z · score: 1 (1 votes) · LW · GW

What is the outcome that you want to socially engineer into existence? What is it that you want the world to realize?

Global Positive Singularity. As opposed to annihilation, or the many other likely scenarios.

comment by Mitchell_Porter · 2010-08-25T12:16:36.710Z · score: 4 (8 votes) · LW · GW

What is the outcome that you want to socially engineer into existence? What is it that you want the world to realize?

Global Positive Singularity. As opposed to annihilation, or the many other likely scenarios.

You remind me of myself maybe 15 years ago. Excited about the idea of escaping the human condition through advanced technology, but with the idea of avoiding bad (often apocalyptically bad) outcomes also in the mix; wanting the whole world to get excited about this prospect; writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a "positive Singularity" might be, except a future where the good things happen and the bad things don't.

So let me see if I have anything coherent to say about such an outlook, from the perspective of 15 years on. I am certainly jaded when it comes to breathless accounts of the incomprehensible transcendence that will occur: the equivalent of all Earth's history happening in a few seconds, societies of inhuman meta-minds discovering the last secret of how the cosmos works and that's just the beginning, passages about how a googol intelligent beings will live inside a Planck length and so forth.

If you haven't seen them, you should pay a visit to Dale Carrico's writings on "superlative futurology". Whatever the future may bring, it's a fact that this excited anticipation of everything good multiplied by a trillion (or terrified anticipation of badness on a similar scale, if we decide to entertain the negative possibilities) is built entirely from imagination. It is not surprising that after more than a decade, I have become skeptical about the value of such emotional states, and also about their realism; or at least, a little bored with them. I find myself trying to place them in historical perspective. 2000 years ago there were gnostics raving about transcendental, sublime hierarchies of gods, and how mind, time, and matter were woven together in strange ways. History and science tell us that all that was mostly just a strange conceptual storm happening in the skulls of a few people who died like anyone else and who made little discernible impact on the course of events - that being reserved more for the worldly actors like the emperors and generals. Yet one has to suppose that gnosticism was not an accident, that it was a symptom of what was happening to culture and to human consciousness at that time.

It seems very possible that a great deal of the ecstasy (leavened with dread) that one finds in singularity and transhumanist writing is similarly just an epiphenomenal symptom of the real processes of the age. Lots of people say that, of course; it's the capitalist ego running amok, denying ecological limits, a new gnostic body-denial that fetishizes calculating machines, blah blah blah. Such criticisms themselves tend to repress or deny the radicalism of what is happening technologically.

So, OK, there shall be robots, cyborgs, brain implants, artificial intelligence, artificial life, a new landscape of life and mind which gets called postbiological or posthuman but much of which is just hybridization of natural and artificial. All that is a huge development. But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is; millions of subjective years of posthuman civilizations squeezed into a few seconds; and various other quantitative amplifications of life as we know it, by large powers of ten?

I think at best it is rational to give these ideas a chance. These technologies are new, this hasn't happened before, we don't know how far it goes; so we might want to remain open to the possibility that almost infinite space and time lie on the other side of this transition. But really, open to the possibility is about all we can say. This hasn't happened before, and we don't know what new barriers and pitfalls lie ahead; and it somehow seems unhealthy to be deriving this ecstatic hope from a few exponential numbers.

Something that the critics of extreme transhumanism often fail to note is the highly utopian altruism that exists within the subculture. To be sure, there are many individualist transhumanists who are cynics and survivalists; but there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies (those which possess no trace of the desire to punish or to achieve transformation through violence). It's the dream of world peace, raised to the nth power, and achieved because there's no death, scarcity, involuntary work, ageing process, and other such pains and frustrations to drive people mad. I wanted to emphasize this aspect because the critics of singularity thought generally love to explain it by imputing disreputable motives: it's all adolescent power fantasy and death denial and so forth. There should be a little more respect for this aspect, and if they really think it's impossible, they should show a little more regret about this. (Incidentally, Carrico, who I mentioned above, addresses this aspect too, saying it's a type of political infantilism, imagining that conflict and loss can be eliminated from the world.)

The idea of "waking up the world" to the imminence of the Singularity, to its glories and terrors, can have an element of this profoundly unworldly optimism about human nature - along with the more easily recognized aspect of self-glorification: I, and maybe my colleagues and guru figures, am the messenger of something that will gain the attention of the world. I think it can be expected that the world will continue to "wake up" to the dawning possibilities of biological rejuvenation, artificial intelligence, brain emulation, and so on, and that it will do this not just in a sober way, but also with bursts of zany enthusiasm and shuddering terror; and it even makes sense to want to foster the sober advance of understanding, if only we can figure out what's real and what's illusion about these anticipations.

But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the "knowledge" of immortality through mind uploading (just one example)... that, almost certainly, achieves nothing deeply useful. And the expectation that in a few years everyone will agree with the Singularity outlook (I've seen this idea expressed most recently by the economist James Miller) I think is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?! It's a logical deduction: you understand the possibilities of the Singularity, you don't understand how anyone could want to reject them or dismiss them, and you observe that most people are not singularity futurists; therefore, you deduce that the idea is about to sweep the world like wildfire, and you just happen to be one of the lucky first to be exposed to it. That thought process is naivety and unfamiliarity with normal psychology. It may partly be due to a person of above-average intelligence not understanding how different their own subjectivity is to that of a normal person; it may also be due to not yet appreciating how incredibly cruel life can be, and how utterly helpless people are against this. The passivity of the human race, its resignation and wishful thinking, its resistance to "good news", is not an accident. And there is ample precedent for would-be vanguards of the future finding themselves powerless and ignored, while history unfolds in a much duller way than they could have imagined.

So much for the general cautionary lecture. I have two other more specific things to say.

First, it is very possible that the quasi-scientific model of mind which underlies so many of these brave new ideas about copies and mind uploads is simply wrong, a sort of passing historical crudity that will be replaced by something very new. The 19th century offers many examples in physics and biology of paradigms which informed a whole generation of thought and futurology, and which are now dead and forgotten. Computing hardware is a fact, but consciousness in a program is not yet a fact and may never be a fact. I've posted a lot about this here.

Second, since you're here, you really should think about whether something like the SIAI notion of friendly singularity really is the only natural way to achieve a "global positive singularity". The idea of the first superintelligent process following a particular utility function explicitly selected to be the basis of a humane posthuman order I consider to be a far more logical approach to achieving the best possible outcome, than just wanting to promote the idea of immortality through mind uploading, or reverse engineering the brain. I think it's a genuine conceptual advance on the older idea of hoping to ride the technological wave to a happy ending, just by energetic engagement with new developments and a will to do whatever is necessary. We still don't know if the premises of such futurisms are valid, but if they are accepted as such, then the SIAI strategy is a very reasonable one.

comment by jacob_cannell · 2010-08-25T21:33:21.141Z · score: 0 (2 votes) · LW · GW

writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a "positive Singularity" might be, except a future where the good things happen and the bad things don't.

The most recent post on my blog is indeed a very short story, but it is the only such post. Most of the blog is concerned with particular technical ideas and near term predictions about the impact of technology on specific fields: namely the video game industry. As a side note, several of the game industry blog posts have been published. The single recent hastily written story was more about illustrating the out of context problem and speed differential, which I think are the most well grounded important generalizations we can make about the Singularity at this point. We all must make quick associative judgements to conserve precious thought-time, but please be mindful of generalizing from a single example and lumping my mindstate into the "just like me 15 years ago." But I'm not trying to take the argumentative stance by saying this, I'm just requesting it: I value your outlook.

Yes, my concept of a positive Singularity is definitely vague, but that of a Singularity less so, and within this one can draw a positive/negative delineation.

But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is;

Immortality, with the caveat of continuous significant change (evolution in mindstate), is rational, and it is a pretty widely accepted inherent quality of future AGI. Mortality is not an intrinsic property of minds-in-general; it's a particular feature of our evolutionary history. On the whole, there's a reasonable argument that its net utility was greater before the arrival of language and technology.

Uploading is a whole other animal; at this point I think physics permits it, but it will be considerably more difficult than AGI itself and would come sometime after (though of course, time acceleration must be taken into account). However, I do think skepticism is reasonable, and I accept that it may prove to be impossible in principle at some level, even if this proof is not apparent now. (I have one article about uploading and identity on my blog.)

If you haven't seen them, you should pay a visit to Dale Carrico's writings on "superlative futurology".

I will have to investigate Carrico's "superlative futurology".

Imagination guides human future. If we couldn't imagine the future, we wouldn't be able to steer the present towards it.

there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies

Yes, and this is the exact branch of transhumanism that I subscribe to, in part simply because I believe it has the most potential, but moreover because I find it has the strongest evolutionary support. That may sound like a strange claim, so I should qualify it.

Worldviews have been evolving since the dawn of language. Realism, the extent to which the worldview is consistent with evidence, the extent to which it actually explains the way the world was, the way the world is, and the way the world can be in the future, is only one aspect of the fitness landscape which shapes the evolution of worldviews and ideas.

Worldviews also must appeal to our sense of what we want the world to be, as opposed to what it actually is. The scientific worldview is effective exactly because it allows us to think rationally and cleanly divorce is-isms from want-isms.

AGI is a technology that could amplify 'our' knowledge and capability to such a degree that it could literally enable 'us' to shape our reality in any way 'we' can imagine. This statement is objectively true or false, and its veracity has absolutely nothing to do with what we want.

However, any reasonable prediction of the outcome of such technology will necessarily be nearly equivalent to highly evolved religious eschatologies. Humans have had a long, long time to evolve highly elaborate conceptions of what we want the world to become, if we only had the power. A technology that gives us such power will enable us to actualize those previous conceptions.

The future potential of Singularity technologies needs to be evaluated on purely scientific grounds, but everyone must be aware that the outcome and impact of such technologies will necessarily take the shape of our old dreams of transcendence, and this in no way, shape, or form is anything resembling a legitimate argument concerning the feasibility and timelines of said technologies.

In short, many people, when they hear about the Singularity, reach this irrational conclusion: "that sounds like religious eschatologies I've heard before, therefore it's just another instance of that". You can trace the evolution of ideas and show that the Singularity inherits conceptions of what-the-world-can-become from past gnostic transcendental mythology or Christian utopian millennialism or whatever, but using that to dismiss the predictions themselves is irrational.

I had enthusiasm a decade ago when I was in college, but this faded and receded into the back of my mind. More lately, it has been returning.

I look at the example of someone like Eliezer and I see one who was exposed to the same ideas, in around the same timeframe, but did not relegate them to a dusty shelf and move on with a normal life. Instead he took it upon himself to alert the world and attempt to do what he could to create that better imagined future. I find this admirable.

But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the "knowledge" of immortality through mind uploading (just one example)... that, almost certainly, achieves nothing deeply useful.

Naturally, I strongly disagree, but I'm confused as to whether you doubt 1.) that the world outcome would improve with greater awareness, or 2.) whether increasing awareness is worth any effort.

I think is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?

Most people are interested in it. Last I recall, well over 50% of Americans are Christians and believe that just through acceptance of a few rather simple memes and living a good life, they will be rewarded with an unimaginably good afterlife.

I've personally experienced introducing the central idea to previously unexposed people in the general atheist/agnostic camp, and seeing it catch on. I wonder if you have had similar experiences.

I was once at a party at some film producer's house and I saw The Singularity Is Near sitting alone as a centerpiece on a bookstand as you walk in, and it made me realize that perhaps there is hope for wide-scale recognition in a reasonable timeframe. Ideas can move pretty fast in this modern era.

Computing hardware is a fact, but consciousness in a program is not yet a fact and

I've yet to see convincing arguments showing "consciousness in a program is impossible", and at the moment I don't assign special value to consciousness as distinguishable from human-level self-awareness and intelligence.

The idea of the first superintelligent process following a particular utility function explicitly selected to be the basis of a humane posthuman order I consider to be a far more logical approach to achieving the best possible outcome, than just wanting to

My position is not to just "promote the idea of immortality through mind uploading, or reverse engineering the brain" - those are only some specific component ideas, although they are important. But I do believe promoting the overall awareness does increase the probability of positive outcome.

I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a "humane posthuman order" (which itself is a funny play on words - don't you think) into a simple particular utility function? I have not seen even the beginnings of a rigid analysis of how this would be possible in principle. I find this to be the largest defining weakness in the SIAI's current mission.

To put it another way: whose utility function?

To many technical, Singularity-aware outsiders (such as myself) reading into FAI theory for the first time, the idea that the future of humanity can be simplified down into a single utility function or a transparent, cleanly causal goal system appears to be delusional at best, and potentially dangerous.

I find it far more likely (and I suspect that most of the Singularity-aware mainstream agrees), that complex concepts such as "humane future of humanity" will have to be expressed in human language, and the AGI will have to learn them as it matures in a similar fashion to how human minds learn the concept. This belief is based on reasonable estimates of the minimal information complexity required to represent concepts. I believe the minimal requirements to represent even a concept as simple as "dog" are orders of magnitude higher than anything that could be cleanly represented in human code.

However, the above criticism is in the particulars of implementation, and doesn't cause disagreement with the general idea of FAI or ethical AI. But as far as actual implementation goes, I'd rather support a project exploring multiple routes, and brain-like routes in particular - not only because there are good technical reasons to believe such routes are the most viable, but because they also accelerate the path towards uploading.

comment by Mitchell_Porter · 2010-08-27T09:47:57.108Z · score: 1 (1 votes) · LW · GW

I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a "humane posthuman order" (which itself is a funny play on words - don't you think) into a simple particular utility function? I have not seen even the beginnings of a rigid analysis of how this would be possible in principle.

Ironically, the idea involves reverse-engineering the brain - specifically, reverse-engineering the basis of human moral and metamoral cognition. One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history, and the genetics and life history of the individual, and then extrapolate it until it stabilizes. That is, the moral and metamoral cognition of our species is held to instantiate a self-modifying decision theory, and the human race has not yet had the time or knowledge necessary to take that process to its conclusion. The ethical heuristics and philosophies that we already have are to be regarded as approximations of the true theory of right action appropriate to human beings. CEV is about outsourcing this process to an AI which will do neuroscience, discover what we truly value and meta-value, and extrapolate those values to their logical completion. That is the utility function a friendly AI should follow.

I'll avoid returning to the other issues for the moment since this is the really important one.

comment by jacob_cannell · 2010-08-27T18:54:11.077Z · score: 0 (0 votes) · LW · GW

I agree with your general elucidation of the CEV principle, but this particular statement stuck out like a red flag:

One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history,

Our morality and 'metamorality' already exists, the CEV in a sense has already been evolving for quite some time, but it is inherently a cultural & memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong - it is cultural.

The flaw then is assuming there is a single evolutionary target for humanity's future, when in fact the more accurate evolutionary trajectory is adaptive radiation. So the C in CEV is unrealistic. Instead of a single coherent future, we will have countless many, corresponding to different universes humans will want to create and inhabit after uploading.

There will be convergent cultural effects (trends we see now), but there will also be powerful divergent effects imposed by the speed of light when posthuman minds start thinking thousands and millions of times accelerated. This is a constraint of physics which has interesting implications; more on this towards the bottom of this post.

If one single religion and culture had taken over the world, a universal CEV might have a stronger footing. The dominant religious branch of the west came close, but not quite.

It's more than just a theory of right action appropriate to human beings; it's also what you do with all the matter, how you divide resources, political and economic structure, etc.

Given the success of Xtianity and related worldviews, we have some guess at features of the CEV - people generally will want immortality in virtual reality paradises, and they are quite willing (even happy) to trust an intelligence far beyond their own to run the show - but they have a particular interest in seeing it take a human face. Also, even though willing to delegate ultimate authority, they will want to take an active role in helping shape universes.

The other day I was flipping through channels and happened upon some late-night Christian preacher channel, and he was talking about New Jerusalem and all that, and there was one bit that I found amusing. He was talking about how humans would join God's task force and help shape the universe, and would be able to zip from star system to star system without anything as slow or messy as a rocket.

I found this amusing, because in a way it's accurate (physical space travel will be too slow for beings that think a million times accelerated and have molecular-level computers for virtual reality simulation).

comment by Mitchell_Porter · 2010-08-28T05:14:40.341Z · score: 0 (0 votes) · LW · GW

Our morality and 'metamorality' already exists, the CEV in a sense has already been evolving for quite some time, but it is inherently a cultural & memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong - it is cultural.

Existing human cultures result from the cumulative interaction of human neurogenetics with the external environment. CEV as described is meant to identify the neurogenetic invariants underlying this cultural and memetic evolution, precisely so as to have it continue in a way that humans would desire. The rise of AI requires that we do this explicitly, because of the contingency of AI goals. The superior problem-solving ability of advanced AI implies that advanced AI will win in any deep clash of directions with the human race. Better to ensure that this clash does not occur in the first place, by setting the AI's initial conditions appropriately, but then we face the opposite problem: if we use current culture (or just our private intuitions) as a template for AI values, we risk locking in our current mistakes. CEV, as a strategy for Friendly AI, is therefore a middle path between gambling on a friendly outcome and locking in an idiosyncratic cultural notion of what's good: you try to port the cognitive kernel of human ethical progress (which might include hardwired metaethical criteria of progress) to the new platform of thought. Anything less risks leaving out something essential, and anything more risks locking in something inessential (but I think the former risk is far more serious).

Mind uploading is another way you could try to humanize the new computational platform, but I think there's little prospect of whole human individuals being copied intact to some new platform, before you have human-rivaling AI being developed for that platform. (One might also prefer to have something like a theory of goal stability before engaging in self-modification as an uploaded individual.)

Instead of a single coherent future, we will have countless many, corresponding to different universes humans will want to create and inhabit after uploading.

I think we will pass through a situation where some entity or coalition of entities has absolute power, thanks primarily to the conjunction of artificial intelligence and nanotechnology. If there is a pluralistic future further beyond that point, it will be because the values of that power were friendly to such pluralism.

comment by jacob_cannell · 2010-08-25T17:54:26.241Z · score: 0 (0 votes) · LW · GW

I liked this, will reply when I have a chance.

comment by Nick_Tarleton · 2010-08-25T18:56:44.494Z · score: 0 (0 votes) · LW · GW

Welcome to LW!

I'm more concerned with the social engineering challenge. From my current reading, I gather that EY and the SIAI folks here believe that is all rolled up into the FAI task.

Not entirely. Less Wrong is about raising the sanity waterline, not just recruiting FAI theorists.

Also, as an aside, I'm curious about the note for theists.

Theists in the usual supernatural sense, not the (rare, and even more rarely called 'theism') simulation or future-'god' senses.

I've always felt this great isolation imposed by my worldview: something one cannot discuss in polite company

It seems to me that there are plenty of open-minded, technical circles in which one can do this, as long as one takes basic care not to sound fanatical.

comment by daedalus2u · 2010-07-20T23:08:57.508Z · score: 4 (4 votes) · LW · GW

Hi, my name is Dave Whitlock, and I have been a rationalist my whole life. I have Asperger's, so rationalism comes very easily to me, too easily ;) I have a blog:

http://daedalus2u.blogspot.com/

which is mostly about nitric oxide physiology, but that includes a lot of stuff. Lately I have been working a lot on neurodevelopment, and especially on autism spectrum disorders.

I comment a fair amount in the blogosphere, Science Based Medicine, neurologica, skepchick, Left brain-right brain and sometimes Science blogs; pretty much only under the daedalus2u pseudonym. Sb seems to be in a bit of turmoil right now, so it is unclear how that will fall out.

I am extremely liberal, and I think I come by that completely rationally, starting from the premise that all people have the same human rights and the same human obligations to other humans (including yourself). This is pretty well codified in the Universal Declaration of Human Rights (which I think is insufficiently well followed in many places).

comment by utilitymonster · 2010-04-19T12:31:25.246Z · score: 4 (4 votes) · LW · GW

I'm a philosophy PhD student. I studied math and philosophy as an undergrad. I work on ethics and a smattering of Bayesian topics. I care about maximizing the sum of desirable experiences that happen in the future. In less noble moments, I care more immediately about advancing my career as a philosopher and my personal life.

I ran into OB a couple years ago when Robin Hanson came and gave a talk on disagreement at a seminar I was attending. I started reading OB, and then I drifted to LW territory a few months ago.

At first, much of the discussion here sounded crazy to me. It often still does. But I thought I'd give it a detailed look, since everyone here seems to have the same philosophical prejudices as me (Bayesian utilitarian atheist physicalists).

I like discussion of Bayesian topics and applied ethics best.

comment by misterpower · 2010-04-19T06:17:07.964Z · score: 4 (4 votes) · LW · GW

Bueno! I'm Jason from San Antonio, Texas. Nice to say 'hi' to all you nice people! (Nice, also, to inflate the number of comments for this particular post - give the good readers of Less Wrong an incrementally warmer feeling of camaraderie.)

I've been reading Overcoming Bias and Less Wrong for over a year since I found a whole bunch of discussions on quantum mechanics. I've stayed for the low, low cost intellectual gratification.

I (actually, formally) study physics and math, and read these blogs to the extent that I feel smarter...also, because the admittedly limited faculties of reason play out a fascinating and entertaining show of bravery against their own project of rationality. What I learn about these shortcomings helps to buttress my own monoliths, as much as what I learn could erode these pillars' insubstantial foundations. It's a thrilling undertaking.

Thanks, all!

comment by oliverbeatson · 2009-09-11T00:10:20.679Z · score: 4 (4 votes) · LW · GW

Hello! I'm Oliver, as my username should make evident. I'm 17 years old, and this site was recommended to me by a friend, whose LW username I observe is 'Larks'. I drift over to Overcoming Bias occasionally, and have RSS feeds to Richard Dawkins' site and (the regrettably sensationalist) NewScientist magazine. As far as I can see past my biases, I aspire to advance my understanding of the kinds of things I've seen discussed here, science, mathematics, rationality and a large chunk of stuff that at the moment rather confuses me.

I started education with a prominent interest in mathematics, which later expanded to include the sciences and writing, and consider myself at least somewhat lucky to have escaped ten years of light indoctrination from church-school education, later finding warm comfort in the intellectual bosom of Richard Dawkins. I've also become familiar with the likes of Alan Turing, Steven Pinker and yet others, from fields of philosophy, mathematics, computing and science.

I'm currently at college in the UK studying my second year of Mathematics, Philosophy, English Language and entering a first year of Physics (I have concluded a year of Computing). As much as I enjoy and value philosophy as a mechanism for genuine rational learning and discovery, I often despise the canon for its almost religious lack of progression and for affixing value to ultimately meaningless questions. It is for this reason that I value having access to Less Wrong et alia. Mathematics is a subject which I learned (the hard way) that I cannot live without.

I think I've said as much here as I can and as much as I need to, so I'll conclude with a toast: to a future of enlightenment, learning, overcoming biases and most importantly fun.

comment by dmfdmf · 2009-08-11T07:30:50.731Z · score: 4 (4 votes) · LW · GW
  • Name: David
  • Space: SF Bay Area
  • Time: 46
  • Education: MS Econ, MS Mechanical Engr.
  • Occupation: IT Consultant

I am interested in reason, how it works, and how I can improve my own abilities. I have been an AI/Singularity skeptic but am reconsidering these ideas after reading Jaynes over the past year. Working on integrating the work of Rand, Aristotle, Jaynes, Turing, Godel and Shannon, because I think all the essentials are covered in these authors' work. Love the blog, especially the commitment to clear understanding, and to clearly identifying that which we don't understand. Unfortunately many of the topics are too technical for me, but I enjoy the discussion anyway.

comment by alexflint · 2009-07-23T09:48:04.581Z · score: 4 (4 votes) · LW · GW

Hi,

I'm Alex and I'm studying computer vision at Oxford. Essentially we're trying to build AI that understands the visual world. We use lots of machine learning, probabilistic inference, and even a bit of signal processing. I arrived here through the Future of Humanity Institute website, which I found after listening to Nick Bostrom's TED talk. I've been lurking for a few weeks now but I thought I should finally introduce myself.

I find the rationalist discussion on LW interesting both on a personal interest level, and in relation to my work. I would like to get some discussion going on the relationship between some of the concrete tools and techniques we use in AI and the more abstract models of rationality being discussed here. Out of interest, how many people here have some kind of computer science background?

comment by Vladimir_Nesov · 2009-07-23T10:58:00.021Z · score: 1 (1 votes) · LW · GW

Hi Alex, welcome to LessWrong. You can find some info about the people here in the survey results post. Quite a lot have a CS background, and some grok machine learning.

comment by Kaj_Sotala · 2009-07-23T10:00:24.184Z · score: 0 (0 votes) · LW · GW

Out of interest, how many people here have some kind of computer science background?

Quite a few if not most, it seems. See http://lesswrong.com/lw/fk/survey_results/ - the summary there doesn't mention the educational background, but looking through the actual spreadsheet, lots of people have listed a "computing" background.

comment by AnnaSalamon · 2009-05-04T07:52:43.296Z · score: 4 (4 votes) · LW · GW

(This is in response to a comment of brynema’s elsewhere; if we want LW discussions to thrive even in cases where the discussions require non-trivial prerequisites, my guess is that we should get in the habit of taking “already discussed exhaustively” questions to the welcome thread. Or if not here, to some beginner-friendly area for discussing or debating background material.)

brynema wrote:

So the idea is that a unique, complex thing may not necessarily have an appreciation for another unique complexity? Unless appreciating unique complexity has a mathematical basis.

Kind of. The idea is that:

  • Both human minds, and whatever AIs can be built, are mechanistic systems. We’re complex, but we still do what we do for mechanistic reasons, and not because the platonic spirit of “right thing to do”ness seeps into our intelligence.
  • Goals, and “optimization power / intelligence” with which to figure out how to reach those goals, are separable to a considerable extent. You can build many different systems, each of which is powerfully smart at figuring out how to hit its goals, but each of which has a very different goal from the others.
  • Humans, for example, have some very specific goals. We value, say, blueberry tea (such a beautiful molecule...), or particular shapes and kinds of meaty creatures to mate with, or particular kinds of neurologically/psychologically complex experiences that we call “enjoyment”, “love”, or “humor”. Each of these valued items has tons of arbitrary-looking details; just as you wouldn’t expect to find space aliens who speak English as their native language, you also shouldn’t expect an arbitrary intelligence to have human (as opposed to parrot, octopus, or such-and-such variety of space aliens) aesthetics or values.
  • If you’re dealing with a sufficiently powerful optimizing system, the question isn’t whether it would assign some value to you. The question is whether you are the thing that it would value most of all, compared to all the other possible things it could do with your atoms/energy/etc. Humans re-arranged the world far more than most species, because we were smart enough to see possibilities that weren’t in front of us, and to figure out ways of re-arranging the materials around us to better suit our goals. A more powerful optimizing system can be expected to change things around considerably more than we did.

That was terribly condensed, and may well not make total sense at this point. Eliezer’s OB posts fill in some of this in considerably better detail; also feel free, here in the welcome thread, to ask questions or to share counter-evidence.

comment by JGWeissman · 2009-04-17T03:10:09.397Z · score: 4 (4 votes) · LW · GW
  • Handle: JGWeissman
  • Name: Jonathan Weissman
  • Location: Orange County, California
  • Age: 27
  • Education: Majored in Math and Physics, minored in Computer Science
  • Occupation: Programmer
  • Hobby: Sailboat racing

I found OB through StumbleUpon.

comment by PhilGoetz · 2009-04-17T00:01:54.816Z · score: 4 (4 votes) · LW · GW
  • Location: Washington DC, USA
  • Education: BS math (writing minor), PhD comp sci/artificial intelligence (cog sci/linguistics minors), MS bioinformatics
  • Jobs held (chronological): robot programmer in a failed startup, cryptologist, AI TA, lecturer, virtual robot programmer in a failed startup, distributed simulation project manager, AI research project manager, computer network security research, patent examiner, founder of failed AIish startup, computational linguist, bioinformatics engineer
  • Blog

I was a serious fundamentalist evangelical until about age 20. Factors that led me to deconvert included Bible study, successful simulations of evolution, and observation of radical cognitive biases in other Christians.

I was active on the Extropian mailing list, and published a couple of things in Extropy, about 1991-1995.

Like EY, I think AI is inevitable, and is the most important problem facing us. I have a lot of reservations about his plans, to the point of seeing his FAI as UFAI (don't ask in this thread). I think the most difficult problem isn't developing AI, or even making it friendly, but figuring out what kind of possible universes we should aim for; and we have a limited time in which we have large leverage over the future.

I prioritize slowing aging over work on AI. I expect that partial cures for aging will be developed 10-20 years before they are approved in the US, and so I want to be in a position to take published research and apply it to myself when the time comes.

I believe that rationality is instrumental, and I repeatedly dissent when people on LW make what I see as ideological claims about rationality (such as that it is defined as that which wins), or present rationality as a value-system or a lifestyle. There's room for that too; I mainly want people to recognize that being rational doesn't require all that.

comment by outlawpoet · 2009-04-16T21:26:27.416Z · score: 4 (6 votes) · LW · GW
  • Handle: outlawpoet
  • Name: Justin Corwin
  • Location: Playa del Rey California
  • Age: 27
  • Gender: Male
  • Education: autodidact
  • Job: researcher/developer for Adaptive AI, internal title: AI Psychologist
  • aggregator for web stuff

Working in AI, cognitive science and decision theory are of professional interest to me. This community is interesting to me mostly out of bafflement. It's not clear to me exactly what the Point of it is.

I can understand the desire for a place to talk about such things, and a gathering point for folks with similar opinions about them, but the directionality implied in the effort taken to make Less Wrong what it is escapes me. Social mechanisms like karma help weed out socially miscued or incompatible communications; they aren't well suited for settling questions of fact. The culture may be fact-based, but this certainly isn't an academic or scientific community: its mechanisms have nothing to do with data management, experiment, or documentation.

The community isn't going to make any money (unless it changes) and is unlikely to do more than give budding rationalists social feedback (mostly from other budding rationalists). It potentially is a distribution mechanism for rationalist essays from pre-existing experts, but Overcoming Bias is already that.

It's interesting content, no doubt. But that just makes me more curious about goals. The founders and participants in LessWrong don't strike me as likely to have invested so much time and effort, so much specific time and effort getting it to be the way it is, unless there were some long-term payoff. I suppose I'm following along at this point, hoping to figure that out.

comment by Paul Crowley (ciphergoth) · 2009-04-16T22:53:02.891Z · score: 4 (4 votes) · LW · GW

I suspect we're going to hear more about the goal in May. We're not allowed to talk about it, but it might just have to do with exi*****ial r*sk...

comment by [deleted] · 2009-04-17T09:07:18.579Z · score: 0 (0 votes) · LW · GW

deleted

comment by mattnewport · 2009-04-16T18:47:36.706Z · score: 4 (4 votes) · LW · GW
  • Handle: mattnewport
  • Name: Matt Newport
  • Location: Vancouver, Canada
  • Age: 30
  • Occupation: Programmer (3D graphics for games)
  • Education: BA, Natural Sciences (experimental psychology by way of maths, physics, history and philosophy of science and computer science)

I'm here by way of Overcoming Bias which attracted me with its mix of topics I'm interested in (psychology, economics, AI, atheism, rationality). With a lapsed catholic mother and agnostic father I had a half-heartedly religious upbringing but have been an atheist for as long as I can remember thinking about it. Politically my parents were left-liberal/socialist and I would have described myself that way until my early 20s. I've been trending increasingly libertarian ever since.

I'm particularly interested in applying rationality to actually 'winning' in everyday life. I'm interested in the broad 'life-hacking' movement but think it could benefit from a more rigorously rational/scientific approach. I hope to see more discussion of this kind of thing on less wrong.

comment by lavalamp · 2009-04-16T18:05:12.985Z · score: 4 (4 votes) · LW · GW

Hi, I've been lurking for a few weeks and am likely to stay in lurker mode indefinitely. But I thought I should comment on the welcome thread.

I would prefer to stay anonymous at the moment, but I'm male, 20's, BS in computer programming & work as a software engineer.

As an outsider, some feedback for you all:

  • Interesting topics -- keep me reading
  • Jargon -- a little is fine, but the more there is, the harder it is to follow
  • The fact that people make go (my favorite game) references is a nice plus.

I would classify myself as a theist at the moment. As such (and having been raised in a very christian environment), I have some opinions on how you guys could more effectively proselytize--but I'm not sure it's worth my time to speak up.

comment by Paul Crowley (ciphergoth) · 2009-04-16T20:13:03.234Z · score: 4 (4 votes) · LW · GW

Thanks for commenting, if this thread gives cause to you and more like you to stick their heads above the parapet and say hello it will have been a good thing.

People here have mixed feelings about the desirability of proselytization, since the ideas that are most vigorously proselytized are so often the worst. I think that we will want to do so, but we will want to work out a way of doing it that at least gives some sort of advantage to better ideas over worse but more appealing ones. I think we'll definitely want to hear from people like you who probably have more real experience in this field than many of us put together.

And since you're a theist, I'm afraid you'll be one of the people we're proselytizing to, so if you can teach us how to do it without pissing people off, that would help too :-)

comment by lavalamp · 2009-04-17T18:36:33.708Z · score: 0 (0 votes) · LW · GW

Thanks for the welcome, everyone.

Personally, I pretty much have no desire to proselytize anyone for anything. Waste of time, in my experience. Maybe you all are different, but no one I've ever met will actually change their mind in response to hearing a new line of reasoning, anyway.

What I do have an interest in is people actually taking the time to understand each other and present points in ways that the other party will understand. Atheists and Christians are particularly bad at this. Unfortunately, the worst offenders on the christian side are the least likely to change, or even see the problem. Perhaps there's more hope for those on the other side.

Anyway, I have no desire to debate theism here.

comment by mattnewport · 2009-04-17T18:45:46.252Z · score: 0 (0 votes) · LW · GW

I have changed my mind in response to hearing a new line of reasoning. One particular poster on a forum I used to frequent changed my mind about politics by patiently giving sound arguments that I had not been presented with before. My political beliefs have been undergoing a continual evolution since then but I can pretty much point to that one individual as instrumental in shifting my political opinions in a new direction.

comment by ChrisHibbert · 2009-04-16T18:23:25.571Z · score: 3 (3 votes) · LW · GW

I have some opinions on how you guys could more effectively proselytize--but I'm not sure it's worth my time to speak up.

If you post about things that are interesting to you, we'll talk about them more.

If you act like you have something valuable to say, we'll read it and respond. We would all be likely to learn something in the process.

comment by pnkflyd831 · 2009-04-16T20:51:03.580Z · score: 2 (2 votes) · LW · GW

lava, you aren't the only one on LW who feels the same way. I have a similar background and similar concerns. We are not outsiders. LW's dedication to attacking the reasoning of a post/comment, but not the person has been proved over and over.

comment by Paul Crowley (ciphergoth) · 2009-04-16T22:43:36.647Z · score: 1 (1 votes) · LW · GW

LW's dedication to attacking the reasoning of a post/comment, but not the person has been proved over and over.

This is very good to hear; I wouldn't put it quite that strongly, but I had the impression it was an axis we did well on and it's nice to know someone else sees it that way too.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-16T17:15:45.609Z · score: 4 (8 votes) · LW · GW

Perhaps take out the "describe what it is that you protect" part. That's jargon / non-obvious new concept.

comment by AnnaSalamon · 2009-04-16T19:51:16.774Z · score: 5 (5 votes) · LW · GW

Oh, I thought it was nice, because it linked newcomers to one of my favorite posts as one of the orienting-aspects of the site (if people come here new). Maybe if linking text was made transparent, e.g. "describe what it is you value and work to achieve"?

I also like the idea of implicitly introducing LW as a community of people who care about things, and who learn rationality for a reason.

comment by MBlume · 2009-04-16T23:16:44.284Z · score: 0 (0 votes) · LW · GW

done =)

comment by MBlume · 2009-04-16T18:59:37.719Z · score: 1 (1 votes) · LW · GW

Done, but I wonder if there's another way of saying the same thing. A discussion of what it is we each strive towards would, I think, be a good way of getting to know one another.

comment by jamesnvc · 2009-04-16T17:04:52.367Z · score: 4 (4 votes) · LW · GW
  • Handle: jamesnvc
  • Location: Toronto, ON
  • Age: 19
  • Education: Currently 2nd year engineering science
  • Occupation: Student/Programmer
  • Blog: http://jamesnvc.blogspot.com

As long as I can remember, I've been an atheist with a strong rationalist bent, inspired by my grandfather, a molecular biologist who wanted at least one grandchild to be a scientist. I discovered Overcoming Bias a year or so ago and became completely enthralled by it: I felt like I had discovered someone who really knew what was going on and what they were talking about.

comment by MBlume · 2009-04-16T16:53:04.724Z · score: 4 (6 votes) · LW · GW

A couple of possible additions to the page which I'm still a bit unsure of:

You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this. If you've any questions about karma or voting, please feel free to ask here.

and

A note for theists: you will find a pretty uniformly atheist community here at LW. You may assume that this is an example of groupthink in action, but please allow for the possibility that we really truly have given full consideration to theistic claims and have found them to be false. If you'd like to know how we came to this conclusion, you might be interested to read (list of OB posts, probably including Alien God, Religion's Claim, Belief in Belief, Engines of Cognition, Simple Truth, Outside the Lab etc.) In any case, we're happy to have you participating here, but please don't be too offended to see other commenters treating religion as an open-and-shut case.

Any thoughts?

comment by timtyler · 2009-04-16T17:21:22.600Z · score: 1 (3 votes) · LW · GW

Maybe single out the theists? Buddhism and Taoism are "religions" too - by most accounts - but they are "significantly" less full of crap.

comment by PhilGoetz · 2009-04-16T23:31:10.941Z · score: 2 (4 votes) · LW · GW

I'm not convinced Buddhism has less crap. It's just more evasive about it. The vast majority of Buddhist practitioners have no idea what Buddhism is about. When you come right down to it, it's a religion that teaches that the world is bad, love is bad, and if you work very hard for thousands of lifetimes, you might finally attain death.

comment by timtyler · 2009-04-17T22:54:35.290Z · score: 0 (0 votes) · LW · GW

I'm not sure where you are getting that from. A more conventional summary:

"Buddhists recognize him as an awakened teacher who shared his insights to help sentient beings end their suffering by understanding the true nature of phenomena, thereby escaping the cycle of suffering and rebirth (saṃsāra), that is, achieving Nirvana. Among the methods various schools of Buddhism apply towards this goal are: ethical conduct and altruistic behaviour, devotional practices, ceremonies and the invocation of bodhisattvas, renunciation of worldly matters, meditation, physical exercises, study, and the cultivation of wisdom."

comment by Jack · 2009-04-16T17:07:12.939Z · score: 1 (1 votes) · LW · GW

I vote definitely yes to the first.

As to the second the message isn't a bad idea. But there are so many OB posts being linked to I'm not sure linking to more is the right idea. Maybe once the wiki gets going there can be a summary of our usual reasons there?

comment by MBlume · 2009-04-16T17:15:40.956Z · score: 1 (1 votes) · LW · GW

The Wiki is going =)

I'll start thinking about a short intro.

comment by zaph · 2009-04-16T17:05:43.082Z · score: 1 (1 votes) · LW · GW

I think the first one's good to have: it's positive, and gets people somewhat acclimated to the whole karma thing. I really don't know what to say about the 2nd; if there were a perfect boilerplate response to religious criticism of rationalism, I suppose this forum probably wouldn't exist. Yours is still as good an effort as any, though could we possibly take debating evolution completely off the table? That and calling any scientific theory "just a theory"?

comment by MrHen · 2009-04-16T16:59:22.145Z · score: 1 (1 votes) · LW · GW

After the note to the religious, perhaps a nice, comforting "you are still welcome here as long as you don't cause trouble." That is, of course, assuming they are still welcome here. Because they are, right?

comment by zaph · 2009-04-16T17:09:22.787Z · score: 3 (3 votes) · LW · GW

We are all looking to be "less wrong", so I can't imagine why anyone would be barred.

comment by MBlume · 2009-04-16T19:50:48.720Z · score: 0 (0 votes) · LW · GW

In any case, we're happy to have you participating here, but please don't be too offended to see other commenters treating religion as an open-and-shut case.

Something like that?

comment by MrHen · 2009-04-16T22:42:42.055Z · score: 1 (1 votes) · LW · GW

Yeah, that works. If I had to edit it myself I would do something like this:

A note to the religious: you will find LW overtly atheist. If you'd like to know how we came to this conclusion you may find these related posts a good starting point. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to religious claims and found them to be false.

Just food for thought. I trimmed it up a bit and tried being a little more charitable. I also started an article on the wiki but someone else may want to approve it or move it. The very last sentence is a bit aggressive, but I think it is the softest way to make the point that this is an unmovable object.

comment by Bongo · 2009-04-17T12:07:39.861Z · score: 1 (1 votes) · LW · GW

Shouldn't just assert that it isn't groupthink. Maybe it is. Let them judge that for themselves. Now it sounds defensive, even.

It's probably always dangerous and often wrong to assert that you, or your group, is free of any given bias.

Otherwise I do like the paragraph.

comment by MBlume · 2009-04-16T23:18:35.458Z · score: 0 (0 votes) · LW · GW

Blinks well that'll save me a lot of work, thank you =)

comment by MBlume · 2009-04-16T17:03:07.329Z · score: 0 (0 votes) · LW · GW

I'd like to think so, yes, though they shouldn't be too offended if people poke and prod them a bit.

comment by Paul Crowley (ciphergoth) · 2009-04-16T22:46:19.764Z · score: 0 (0 votes) · LW · GW

I like both of these (though yes, theism rather than religion will avoid some nitpicking).

comment by byrnema · 2009-04-16T17:56:57.458Z · score: 0 (0 votes) · LW · GW

I appreciate the links.

and especially if you intend to argue the matter, it will almost certainly profit you

a little snippy, and not necessary -- remember these newcomers haven't done anything wrong yet

I would replace the word "supernaturalist" with "religious" again. No reason to be even that tiny bit confrontational.

comment by MBlume · 2009-04-16T18:41:28.485Z · score: 0 (0 votes) · LW · GW

a little snippy, and not necessary

Removed then -- it was not at all my intention to be snippy, only to motivate the reading

I would replace the word "supernaturalist" with "religious" again. No reason to be even that tiny bit confrontational.

Done, but do keep in mind that, at least on LW, "supernatural" has a clearly defined meaning, being used to describe theories which grant ontologically fundamental status to things of the mind -- intelligence, emotions, desires, etc.

comment by byrnema · 2009-04-16T21:03:52.623Z · score: 0 (0 votes) · LW · GW

Can we delete this thread in the spirit of taking out noise?

comment by MBlume · 2009-04-16T21:06:59.294Z · score: 0 (0 votes) · LW · GW

Do you just mean our discussion, or the entire thread about the proposed additions?

comment by byrnema · 2009-04-16T21:11:21.673Z · score: 0 (0 votes) · LW · GW

Just me, down.

comment by MBlume · 2009-04-16T21:13:18.091Z · score: 0 (0 votes) · LW · GW

no problem.

comment by thomblake · 2009-04-16T17:38:53.480Z · score: 0 (0 votes) · LW · GW

The first paragraph seems good.

Despite the vocal atheist and nonreligious majority, I wouldn't doubt that there are many religious people here. Is the second paragraph really helpful? Any religious folks (even pagans, heathens, unitarians, buddhists, etc) here to back me up on this?

comment by John_Maxwell (John_Maxwell_IV) · 2009-04-16T18:06:42.146Z · score: 1 (1 votes) · LW · GW

I know one evangelical christian who reads but does not post to Less Wrong.

comment by Jack · 2009-04-16T16:50:31.987Z · score: 4 (4 votes) · LW · GW
  • Handle: Jack
  • Location: Washington D.C.
  • Age: 21
  • Education: Feeling pretty self-conscious about being the only person to post so far without a B.A. I'll finish it next year; my major is philosophy with a minor in cognitive science and potentially another minor/major in government. After that it's more school of some kind.

I wonder if those of us on the younger end of things will be dismissed more after posting our age and education. I admit to being a little worried, but I'm pretty sure everyone here is better than that. Anyway, I was a late joiner to OB (I think I got there after seeing a Robin Hanson bloggingheads) and then came here. I'm an atheist/materialist by way of Catholicism - but pretty bored by New Atheism. I was raised in a pretty standard liberal/left-wing home but have moved libertarian. I'm very sympathetic to the "liberaltarian" idea. Free markets with direct and efficient redistribution are where it's at.

comment by thomblake · 2009-04-16T17:40:33.169Z · score: 5 (7 votes) · LW · GW

I wonder if those of us on the younger end of things will be dismissed more after posting our age and education.

Don't worry - the top contributor and minor demigod 'round these parts doesn't have a degree, either.

ETA: Since Lojban doesn't think it's clear, I'm somewhat snarkily referring to Eliezer Yudkowsky.

comment by Lojban · 2009-04-16T19:21:15.567Z · score: 0 (4 votes) · LW · GW

I think you are referring to Eliezer Yudkowsky.

comment by ThoughtDancer · 2009-04-16T18:03:04.038Z · score: 1 (1 votes) · LW · GW

Actually, I'm a bit afraid of the opposite--as an older fart who has a degree through an English Department... I'm often more than a little unsure and I'm concerned I'll be rejected out of hand, or, worse, simply ignored.

I suspect, though, that this crowd is inherently friendly, even when the arguments end up using sarcasm. ;-)

comment by MBlume · 2009-04-16T18:50:18.211Z · score: 0 (0 votes) · LW · GW

We do our best =)

comment by MorgannaLeFey · 2009-04-17T13:47:59.133Z · score: 0 (0 votes) · LW · GW

I was sitting here thinking "Wow, I think I'm older than anyone here" and wondering if I might be dismissed in some way. Funny, that.

comment by byrnema · 2009-04-16T17:45:57.608Z · score: 0 (0 votes) · LW · GW

For some reason I actually thought you were 13, and thought you were a terrific 13-year-old to be here on LW, being well-read with astute comments. I'll delete this comment in about 5 minutes, it's just chatty.

comment by MBlume · 2009-04-16T17:47:42.966Z · score: 0 (0 votes) · LW · GW

I've seen a slight community norm against "just chatty" comments, and strongly oppose it myself. In any case, this thread is an excellent place for chatty comments =)

comment by zaph · 2009-04-16T16:49:24.271Z · score: 4 (4 votes) · LW · GW

  • Handle: zaph
  • Location: Baltimore, MD
  • Age: 35
  • Education: BA in Psychology, MS in Telecommunications
  • Occupation: System Performance Engineer

I'm mostly here to learn more about applied rationality, which I hope to use on the job. I'm not looking to teach anybody anything, but I'd love to learn more about tools people use (I'm mostly interested in software) to make better decisions.

comment by RichardKennaway · 2009-04-16T16:37:30.051Z · score: 4 (4 votes) · LW · GW
  • Handle: You can see it just above. (Edit: I didn't realise that one can read LW with handles hidden, so: RichardKennaway.)
  • Name: Like the handle.
  • Gender: What the name suggests.
  • Location: Norwich, U.K (a town about two hours from London and 1.5 from Cambridge).
  • Age: Over 30 :-)
  • Education: B.Sc., D.Phil. in mathematics.
  • Occupation: Academic research. Formerly in theoretical computer science; for the past 10 to 12 years, applied mathematics and programming. (I got disillusioned with sterile crossword puzzle solving.)

Like, I suspect, most of the current readership, I'm here via OB. I think I discovered OB by chance, while googling to see if AI was still twenty years away (it was -- still is).

Atheist, materialist, and libertarian views typical for this group; no drastic conversion involved from any previous views, so not much of a rationalist origin story. My Facebook profile actually puts down my religion as "it's complicated", but I won't explain that, it's complicated.

comment by GuySrinivasan · 2009-04-16T16:51:09.109Z · score: 3 (3 votes) · LW · GW

This is pretty funny if you happen to have the anti-kibitzer (which hides handles) turned on. :D

comment by RichardKennaway · 2009-04-16T19:55:22.304Z · score: 1 (1 votes) · LW · GW

I wrote:

Atheist, materialist, and libertarian views typical for this group; no drastic conversion involved from any previous views, so not much of a rationalist origin story.

Bit of a non sequitur I made there. How did I come to value rationality itself, rather than all those other things that are some of its fruits? I always have, to the extent that I knew there was such a thing. I remember coming across the books of Korzybski, Tony Buzan, Edward de Bono, and the like, in my teens, and enjoyed similar themes in science fiction. OB is the most interesting thing I've come across in recent years. For the same reasons I've also been interested in "mysticism", but still have no idea what it is or any experience of it. Who will found "Overcoming Woo" to write a blog-book on the subject?

comment by MrHen · 2009-04-16T18:13:01.176Z · score: 0 (0 votes) · LW · GW

Edit: I didn't realise that one can read LW with handles hidden

Whoa, you can? Where did I miss that preference?

comment by MBlume · 2009-04-16T18:26:12.209Z · score: 2 (2 votes) · LW · GW

It's not actually a native preference. Marcello wrote us a script which, run under a particular Firefox extension, produces this effect.

comment by MrHen · 2009-04-16T18:42:41.269Z · score: 1 (1 votes) · LW · GW

Ah, thanks. Is there any chance of this becoming a native preference? I would use it, but do not use Firefox.

comment by MrHen · 2009-04-16T14:44:13.722Z · score: 4 (4 votes) · LW · GW
  • Handle: MrHen
  • Name: Adam Babcock
  • Location: Tyler, TX
  • Age: 24
  • Education: BS in Computer Science, minors in Math and Philosophy
  • Occupation: Software engineer/programmer/whatever the current term is now

I found LW via OB via a Google search on AI topics. The first few OB posts I read were about Newcomb's paradox and those encouraged me to stick it on my blogroll.

Personal interests in rationality stem from a desire to eliminate "mental waste". I hold pragmatic principles to be of higher value than Truth for Truth's sake. As it turns out, this means something similar to systemized winning.

comment by research_prime_space · 2017-06-14T21:40:43.339Z · score: 3 (3 votes) · LW · GW

Hi! I'm 18 years old, female, and a college student (don't want to release personal information beyond that!). I'm majoring in math, and I hopefully want to use those skills for AI research :D

I found you guys from EA, and I started reading the sequences last week, but I really do have a burning question I want to post to the Discussion board so I made an account.

comment by cousin_it · 2017-06-15T08:20:19.944Z · score: 1 (1 votes) · LW · GW

Welcome! You can ask your question in the open thread as well.

comment by volya · 2013-10-07T12:54:22.209Z · score: 3 (3 votes) · LW · GW

Hi, I am Olga, female, 40, programmer, mother of two. Got here from HPMoR. Cannot as yet define myself as a rationalist, but I am working on it. Some rationality questions, used in real-life conversations, have helped me tackle some personal and even family issues. It felt great. In my "grown-up" role I care deeply about bringing up my kids with their thought processes as undamaged as I possibly can, and maybe even about balancing out some system-taught stupidity. I am at the start of my reading list on the matter, including the LW sequences.

comment by GloriaSidorum · 2013-03-06T23:24:39.746Z · score: 3 (3 votes) · LW · GW

Hello. My name is not, in fact, Gloria. My username is merely (what I thought was) a pretty-sounding Latin translation of the phrase "the Glory of the Stars", though it would actually be "Gloria Siderum" and I was mixing up declensions.

I read Three Worlds Collide more than a year ago, and recently re-stumbled upon this site via a link from another forum. Reading some of Eliezer's sequences, I realized that most of my conceptions about the world were extremely fuzzy, and they could be better said to bleed into each other than to tie together. I realized that a large amount of what I thought of as my "knowledge" is just a set of passwords, and that I needed to work on fixing that. And I figured that a good way to practice forming coherent, predictive models and being aware of what mental processes may affect those models would be to join an online community in which a majority of posters would have read a good number of articles on bias, heuristics, and becoming more rational, and will thus be equipped to some degree to call out flaws in my thinking.

comment by mapnoterritory · 2012-06-02T07:02:11.198Z · score: 3 (3 votes) · LW · GW

Hi everybody,

I've been lurking here for maybe a year and joined recently. I work as an astrophysicist and I am interested in statistics, decision theory, machine learning, cognitive and neuro-psychology, AI research and many others (I just wish I had more time for all these interests). I find LW to be a great resource and it introduced me to many interesting concepts. I am also interested in articles on improving productivity and well-being.

I haven't yet attended any meet-up, but if there was one in Munich I'd try to come.

comment by [deleted] · 2012-02-14T17:18:53.452Z · score: 3 (3 votes) · LW · GW

Hello,

I am a world citizen with very little sense of identification or labelling. Perhaps "Secular Humanist" could be my main affiliation. As for belonging to nations and companies and teams... I don't believe in this thrust-upon, unchosen unity. I'm a natural expatriate. And I believe this site is awesomeness incarnate.

Though some lesswrongers really seem to go out of their way to make their readers feel stupid... though I'd guess that's the whole point, right?

comment by kateblu · 2011-12-04T03:44:36.051Z · score: 3 (3 votes) · LW · GW

Hello. I found this place as a result of reading Yudkowsky's intuitive explanation of Bayes' Theorem. I think we are like a very large group of blind people, each trying to describe the elephant on the basis of the small part we touch. However, if I can aggregate the tactile observations of a large number of us blind people, I might end up with a pretty good idea of what that elephant looks like. That's my goal - to build a coherent and consistent mental picture of that elephant.

comment by Desrtopa · 2011-12-06T17:19:12.016Z · score: 2 (2 votes) · LW · GW

I honestly have some pretty bad associations for that metaphor. The parable makes sense, but I find that it's almost invariably (indeed, even in its original incarnations) presented with the implication "if we could pool our knowledge and experiences, we would come away with an understanding that resembles what I already believe."

comment by kateblu · 2011-12-07T02:52:15.700Z · score: 0 (0 votes) · LW · GW

I have no prior belief as to what this elephant looks like and I am continuously surprised and challenged by the various pieces that have to somehow fit into the overall picture. I don't worry whether my mental construct accords with Reality. I live with the fact that my limited understanding of quarks is probably not how they are understood by a particle physicist. But I am driven to keep learning more and somehow to fit it all together. Commenting helps me to articulate my personal theory of everything. But I need critical feedback from others to help me spot the inconsistencies, to force me not to be lazy, and to point out the gaps in my knowledge.

comment by saph · 2011-07-09T14:45:38.077Z · score: 3 (3 votes) · LW · GW

Hi,

  • Handle: saph
  • Location: Germany (hope my English is not too bad for LW...)
  • Birth: 1983
  • Occupation: mathematician

I was thinking quite a lot for myself about topics like

  • understanding and mind models
  • quantitative arguments
  • scientific method and experiments
  • etc...

and after discovering LW some days ago I have tried to compare my "results" to the posts here. It was interesting to see that many of the ideas I had so far were also "discovered" by other people, but I was also a little bit proud that I had got so far on my own. Probably this is the right place for me to start reading :-).

I am an Atheist, of course, but cannot claim many other standard labels as mine. Probably "a human being with a desire to understand as much of the universe as possible" is a good approximation. I like learning and teaching, which is why I am interested in artificial intelligence. I am surrounded by people with strange beliefs, which is why I am interested in learning methods on how to teach someone to question his/her beliefs. And while doing so, I might discover the one or other wrong assumption in my own thinking.

I hope to spend some nice time here and probably I can contribute something in the future...

comment by jsalvatier · 2011-07-27T20:10:26.550Z · score: 0 (0 votes) · LW · GW

Welcome!

comment by free_rip · 2011-01-28T01:58:38.161Z · score: 3 (3 votes) · LW · GW

Does anyone know a good resource to go with Eliezer's comic guide on Löb's Theorem? It's confusing me a... well, a lot.

Or, if it's the simplest resource on it out there, are there any prerequisites for learning it/ skills/ knowledge that would help?

I'm trying to build up a basis of skills so I can participate better here, but I've got a long way to go. Most of my skills in science, maths and logic are pretty basic.

Thanks in advance.

comment by ata · 2011-01-28T06:40:34.491Z · score: 1 (1 votes) · LW · GW

Or, if it's the simplest resource on it out there, are there any prerequisites for learning it/ skills/ knowledge that would help?

I'm trying to build up a basis of skills so I can participate better here, but I've got a long way to go. Most of my skills in science, maths and logic are pretty basic.

Definitely read Gödel, Escher, Bach. Aside from that, here's a great list of reading and resources for better understanding various topics discussed on LW. (The things under Mathematics -> Logic and Mathematics -> Foundations would be the most relevant to Löb's Theorem.)

comment by free_rip · 2011-01-28T07:20:38.036Z · score: 0 (0 votes) · LW · GW

Thanks! That looks like a great list, just what I need.

comment by arundelo · 2011-01-28T02:09:05.517Z · score: 0 (0 votes) · LW · GW

I bet if you found the first spot in it where you get confused and asked about it here, someone could help. (Maybe not me; I have barely a nodding acquaintance with Löb's theorem, and the linked piece has been languishing on my to-read list for a while.)

Edit: cousin_it recommends part of a Scott Aaronson paper.

comment by free_rip · 2011-01-28T05:37:26.761Z · score: 0 (0 votes) · LW · GW

Okay. The part where I start getting confused is the statement: 'Unfortunately, Lob's Theorem demonstrates that if we could prove the above within PA, then PA would prove 1 + 2 = 5'. How does PA 'not prov(ing) 1 + 2 = 5' (the previous statement) mean that it would prove 1 + 2 = 5?

Maybe it's something I'm not understanding about something proving itself - proof within PA - as I admit I can't see exactly how this works. It says earlier that Gödel developed a system for this, but the theorem doesn't seem to explain that system... my understanding of the theorem is this: 'if PA proves that when it proves something it's right, then what it proves is right.' That statement makes sense to me, but I don't see how it links in or justifies everything else. I mean, it seems to just be word play - very basic concept.

I feel like I'm missing something fundamental and basic. What I do seem to understand is so self-explanatory as to need no mention, and what I don't seems separate from it. It's carrying on from points as if they are self-explanatory and link, when I can't see the explanations or make the links. I also don't see the point of this as a whole - what, practically, is it used for? Or is it simply an exercise in thinking logically?

Oh, I also don't know what the arrows and little squares stand for in the problem displayed after the comic. That's a separate issue, but answers on it would be great.

Any help would be appreciated. Thanks.

comment by arundelo · 2011-01-28T06:30:00.367Z · score: 1 (1 votes) · LW · GW

'Unfortunately, Lob's Theorem demonstrates that if we could prove the above within PA, then PA would prove 1 + 2 = 5'.

I believe that that's just a statement of Löb's theorem, and the rest of the Cartoon Guide is a proof.

It says earlier that Godel developed a system for this, but the theorem doesn't seem to explain that system

The exact details aren't important, but Gödel came up with a way of using a system that talks about natural numbers to talk about things like proofs. As Wikipedia puts it:

Thus, in a formal theory such as Peano arithmetic in which one can make statements about numbers and their arithmetical relationships to each other, one can use a Gödel numbering to indirectly make statements about the theory itself.

Actually, going through a proof (it doesn't need to be formal) of Gödel's incompleteness theorem(s) would probably be good background to have for the Cartoon Guide. The one I read long ago was the one in Gödel, Escher, Bach; someone else might be able to recommend a good one that's available online not embedded in a book (although you should read GEB at some point anyway).
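
To give a flavor of the trick (a toy sketch only, not Gödel's exact scheme - the symbol codes here are arbitrary): a sequence of symbols can be packed into a single natural number using prime powers, and recovered uniquely by prime factorization.

```python
def primes(n):
    """Return the first n primes (trial division; fine for toy sizes)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(codes):
    """Encode a sequence of positive integer symbol codes as
    2**c1 * 3**c2 * 5**c3 * ... (recoverable by unique factorization)."""
    result = 1
    for p, c in zip(primes(len(codes)), codes):
        result *= p ** c
    return result

# The sequence of codes [1, 2, 3] becomes 2**1 * 3**2 * 5**3 = 2250
print(godel_number([1, 2, 3]))
```

The point is only that statements (and whole proofs) become numbers, so a theory about numbers can indirectly talk about its own statements and proofs.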

arrows and little squares

The rightward-pointing arrows mean "If [thing to the left of the arrow] then [thing to the right of the arrow]". E.g. if A stands for "Socrates is drinking hemlock" and B stands for "Socrates will die" then "A -> B" means "If Socrates is drinking hemlock then Socrates will die".

I suspect the squares were originally some other symbol when this was first posted on Overcoming Bias, and they got messed up when it was moved here [Edit: nope, they're supposed to be squares], but in any case, here they mean "[thing to the right of the square] is provable". And the parentheses are just used for grouping, like in algebra.
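
With that notation in hand, the standard statement of Löb's theorem (which, as I read it, is what the Cartoon Guide proves) can be written compactly:

```latex
% Löb's theorem: if PA proves "the provability of P implies P",
% then PA proves P outright.
\Box(\Box P \rightarrow P) \rightarrow \Box P
```

Substituting "1 + 2 = 5" for P gives the opening claim that confused you: if PA could prove "a proof of 1 + 2 = 5 would make it true", then PA would prove 1 + 2 = 5 itself.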

comment by free_rip · 2011-01-28T07:24:00.507Z · score: 0 (0 votes) · LW · GW

Ah, okay, I think I understand it a bit better now. Thank you!

I think I will order Godel, Escher, Bach. I've seen it mentioned a few times around this site, but my library got rid of the last copy a month or so before I heard of it - without replacing it. Apparently it was just too old.

comment by Dreaded_Anomaly · 2011-01-05T01:40:46.772Z · score: 3 (3 votes) · LW · GW

I'm a 22-year-old undergraduate senior, majoring in physics, planning to graduate in May and go to graduate school for experimental high energy physics. I also have studied applied math, computer science, psychology, and politics. I like science fiction and fantasy novels, good (i.e. well-written) TV, comic books, and the occasional video game. I've been an atheist and science enthusiast since the age of 10, and I've pursued rational philosophy since high school.

I got here via HPMoR, lurked since around the time Chapter 10 was posted, and found that a lot of the ideas resonated with my own conclusions about rationality. I still don't have a firm grasp on all of the vocabulary that gets used here, so if it seems like I'm expressing usual ideas in an unusual way, that's the reason.

comment by hmickali · 2010-12-06T04:42:54.336Z · score: 3 (3 votes) · LW · GW

I am college student who found this website through a friend and Harry Potter and the Methods of Rationality.

comment by peuddO · 2010-11-05T23:10:20.940Z · score: 3 (3 votes) · LW · GW

I like to call myself Sindre online. I'm just barely 18, and I go to school in Norway - which doesn't have a school system entirely similar to any other that I'm familiar with, so I'll refrain from trying to describe what sort of education I'm getting - other than to say that I'm not very impressed with how the public school system is laid out here in Norway.

I found Less Wrong through a comment on this blog, where it was mentioned as a place populated by reasonably intelligent people. Since I thought that was an intriguing endorsement, I decided to give it a look. I've been lurking here ever since - up until now, anyway.

How I came to identify as a rationalist

There's really not much to that story. I can't even begin to remember at what age I endorsed reason as a guiding principle. I was mocked as a 'philosopher' as far back as when I was nine years old, and probably earlier still.

I value and work to achieve an understanding of human psychology, as well as a diversity of meditative achievements derived from yoga. There's certainly more, but that's all I can think of right now.

P.S.: Some of the aesthetic choices I've made in this post, like italicizing my name, are mostly just to see if I understood the instructions correctly and are otherwise arbitrary.

comment by wedrifid · 2010-11-05T23:14:18.185Z · score: 0 (4 votes) · LW · GW

I like to call myself Sindre online.

Out of curiosity...

comment by peuddO · 2010-11-05T23:16:26.802Z · score: 1 (5 votes) · LW · GW

Out of curiosity...

Out of curiosity... what?

Edit: Since that seems to have earned me a downvote, I'd like to clarify that I'm just wondering as to what, specifically, you're curious about. Why I choose to call myself that? If I'm some other Sindre you know? Why my username is not Sindre? etc.

comment by wedrifid · 2010-11-05T23:23:17.532Z · score: 0 (2 votes) · LW · GW

No idea why someone would downvote a reasonable question. That would be bizarre.

'Username not' was the one. :)

comment by peuddO · 2010-11-05T23:30:12.653Z · score: 4 (4 votes) · LW · GW

Hrm. Now someone's downvoted your question, it seems. It's all a great, sinister conspiracy.

Well, regardless... peuddO is a username I occasionally utilize on internet forums. It's "upside down" in Norwegian, written upside down in Norwegian (I'm so very clever). Even so, I know that I personally prefer to know the names people go by out-of-internet. It's a strange quirk, perhaps, but it makes me feel obligated to provide my real first name when introducing myself.

comment by wedrifid · 2010-11-06T05:26:37.246Z · score: 0 (0 votes) · LW · GW

:) Thanks. And welcome ironic-Norwegian-reference.

comment by [deleted] · 2010-10-28T02:11:46.108Z · score: 3 (3 votes) · LW · GW

I'm currently an electrical engineering student. I suppose the main thing that drew me here is that I hold uncommon political views (market libertarian/minarchist, generally sympathetic to non-coercive but non-market collective action); I think that view is "correct" for now, but I'm sure that a lot of my reasons for holding those beliefs are faulty, or there'd probably be at least a few more people who agree with me. I want to determine exactly what's happening (and why) when politics and political philosophy come up in a conversation/internal monologue and I end up thinking to myself "Ah, good, my prior beliefs were exactly correct!", with the eventual goal of refining/discarding/bolstering those beliefs, because the chances that they actually were correct 100% of the time are vanishingly small.

That's what got me hooked on LW, at least, but pretty much everything here is interesting.

comment by hairyfigment · 2010-10-14T23:01:59.904Z · score: 3 (3 votes) · LW · GW

Do what thou wilt shall be the whole of the Law. I'm a currently unemployed library school graduate with a fondness for rationality. I believe as a child I read most of Korzybski's Bible of general-semantics, which I now think breaks its own rules about probability but still tends to have value for the people most likely to believe in it.

I didn't plan to mention it before seeing this, but I practice an atheistic form of Crowley's mystical path. I hope to learn how to produce certain experiences in myself (for whoever I saw arguing about a priori certainty, call them non-Kantian experiences) while connected to whatever brain-scanners exist fourteen-odd years from now.

In that Crowley thread I saw a few bits that seem misleading, and I think I can explain some of them if people here still have an interest. Oh, and did Yvain really link to a copy of this without telling people to beware the quotation marks? That's just mean. ^_^

I also think Friendly AI seems like a fine idea, and I hope if the SIAI doesn't produce an FAI in EY's lifetime, they at least publicize a more detailed theory of Friendliness.

comment by edgar · 2010-08-12T13:16:06.913Z · score: 3 (3 votes) · LW · GW

Hello I am a professional composer/composition teacher and adjunct instructor teaching music aesthetics to motion graphic artists at the Fashion Institute of Technology and in the graduate computer arts department at the School of Visual Arts. I have a masters from the Juilliard School in composition and have been recorded on Newport Classics with Kurt Vonnegut and Michael Brecker.

I live and work in New York City. I spend my life composing and explaining music to students who are not musicians, connecting the language of music to the principles of the visual medium. Saying the accurate thing, getting others to question me, letting them find their way, and often admitting that I am wrong is a lifelong journey.

comment by komponisto · 2010-08-12T13:44:13.033Z · score: 0 (0 votes) · LW · GW

Welcome! Always nice to have more music people around here.

comment by Alexei · 2010-08-02T15:40:00.996Z · score: 3 (3 votes) · LW · GW

Hello everyone!

I've been quietly lurking on this website for a while now, reading articles as fast as I can with much enthusiasm. I've read most of Eliezer's genius posts and have started to read through others' posts now. I came to this website when I learned about the AI-in-a-box scenario. I am a 23 year old male. I have a B.S. in computer science. I like to design and program video games. My goal in life is to become financially independent and make games that help people improve themselves. I find the subject of rationality very interesting and helpful in general, though I have trouble seeing the application of the more scientific parts (Bayes) of rationality in real life, since there is no tag attached to most events in life.

I would like to pose a question to this community: do you think video games can help spread the message and the spirit of this website? What kind of video games would accomplish that? Would you be interested in working on a game or contributing to one in other ways (e.g. donations or play testing)? Or maybe instead of writing games I should just commit to S.I. and work on F.A.I.?

comment by Oscar_Cunningham · 2010-08-02T15:55:17.577Z · score: 1 (1 votes) · LW · GW

We've already thought about the possibility of a game. See this page. IIRC PeerInfinity is particularly fond of the idea.

comment by red75 · 2010-06-06T10:18:05.084Z · score: 3 (5 votes) · LW · GW

Hello. I'm 35, Russian, and work as a very applied programmer. I ended up here as a side effect of following the path RNN -> RBM -> DBN -> G. E. Hinton -> S. Legg's blog.

I was almost confident about my biases when "Generalizing From One Example" took me by surprise (some time ago I noticed that I cannot visualize an abstract colored cube without thinking of the color's name, so I generalized; now I have generalized this case of generalization, and had a strange feeling). My attention switched and I decided to explore.

comment by RobinZ · 2010-06-07T16:21:19.681Z · score: 1 (1 votes) · LW · GW

Welcome!

If you want a cool place to start, I recommend the links on the About page and whatever strikes your fancy when you page through the Sequences - "Knowing About Biases Can Hurt People" is a particularly interesting one if you liked "Generalizing From One Example".

comment by red75 · 2010-06-07T17:59:55.982Z · score: 0 (0 votes) · LW · GW

Thanks. This site is so content-rich that I found it difficult to even get an overview of the full range of topics. And yes, the Sequences are handy.

comment by dyokomizo · 2010-05-29T11:29:40.588Z · score: 3 (3 votes) · LW · GW

Hi, I'm Daniel. I've read OB for a long time and followed LW right from the beginning, but work/time issues in the last year made my RSS reading queue really long (I had all LW posts in the queue). I'm a Brazilian programmer, a long-time rationalist, and an atheist.

comment by sclamons · 2010-04-19T21:10:05.417Z · score: 3 (3 votes) · LW · GW

Hello from the lurking shadows!

Some stats:

  • Name: Samuel Clamons
  • Birth Year: 1990
  • Location: College of William and Mary or northern VA, depending on the time of year
  • Academic interests: Biology, mathematics, computer science
  • Personal interests: Science fiction, philosophy, understanding quantum mechanics, writing.

I've pretty much always been at least an aspiring rationalist, and I convinced myself of atheism at a pretty early age. My journey to LW started with my discovery of Aubrey de Grey in middle school and my discovery of the transhumanism movement in high school. Some internet prodding brought me to SL4, but I was intimidated by the overwhelming number of prior posts and didn't really read much of it. The little I did read, however, led me to Eliezer's Creating Friendly AI, which struck me on perusal as the most intelligently-written thing I'd read since The Selfish Gene. Earlier this year, the combination of reading through a few of Gardner Dozois' "best of" short story collections and the discovery of Google Reader brought me to some of Eliezer's posts on AI and metaethics, and I've been reading through LW ever since. I'm currently plowing slowly through Eliezer's quantum physics sequence while trying not to fall behind too much on new threads.

My primary short-term goal is to learn as much as I can while I'm still young and plastic. My primary mid-range goals are to try to use technology to enhance my biology and to help medical immortality become practical and available while I'm still alive. My long-term goals include understanding physics, preserving what's left of the environment, and maximizing my happiness (while remaining within reasonable bounds of ethics).

I also have a passing but occasionally productive interest in writing science fiction, as well as a strong interest in reading it.

comment by 6n5x1hn1sq · 2010-08-04T16:55:42.840Z · score: -1 (1 votes) · LW · GW

Didn't know where else to find S. E. C. Don't know if you'll see this.

comment by arundelo · 2010-04-17T00:43:06.564Z · score: 3 (3 votes) · LW · GW

Hi! I've been on Less Wrong since the beginning. I'm finally getting around to posting in this thread. I found Less Wrong via Overcoming Bias, which I (presumably) found by wandering around the libertarian blogosphere.

comment by clarissethorn · 2010-03-15T10:55:15.795Z · score: 3 (3 votes) · LW · GW

I looked around for an FAQ link and didn't see one, and I've gone through all my preferences and haven't found anything relevant. Is there any way to arrange for followup comments (I suppose, the contents of my account inbox) to be emailed to me?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-15T11:19:31.262Z · score: 4 (4 votes) · LW · GW

Is there any way to arrange for followup comments (I suppose, the contents of my account inbox) to be emailed to me?

Not that I know of, I'm afraid. There are lots of requested features that we would implement if we had the programmatic resources, but alas, we don't. One just has to check if the envelope is red once in a while.

comment by taryneast · 2010-12-12T16:10:46.271Z · score: 0 (0 votes) · LW · GW

What language is LessWrong written in? Is it Open Source?

I'd suspect that there may be a number of "programmatic resources" (ie us computer geeks) on LW that would be willing to contribute if it were open enough to do so.

comment by Vladimir_Nesov · 2010-12-12T16:21:40.822Z · score: 1 (1 votes) · LW · GW

http://lesswrong.com/lw/1t/wanted_python_open_source_volunteers/

comment by taryneast · 2010-12-12T16:24:49.193Z · score: 0 (0 votes) · LW · GW

hmmm... it appears I need to exercise my search-fu a little more.

thanks for the link :)

comment by markrkrebs · 2010-02-26T14:07:57.447Z · score: 3 (3 votes) · LW · GW

Hi! Vectored here by Robin who's thankfully trolling for new chumps and recommending initial items to read. I note the Wiki would be an awesome place for some help, and may attempt to put up a page there: NoobDiscoveringLessWrongLeavesBreadcrumbs, or something like that.

My immediate interest is argument: how can we disagree? 1+1=2. Can't that be extrapolated to many things? I have been so happy to see a non-cocky (if prideful) attitude in the first several posts that I have great hopes for what I may learn here. We have to remember that ignorance is an implacable enemy, that being insulting won't defeat it, and that we may be subject to it ourselves. I've noticed I am.

First post from me is coming shortly.

  • mark krebs

comment by wedrifid · 2010-02-26T14:52:44.500Z · score: 1 (1 votes) · LW · GW

Hi! Vectored here by Robin who's thankfully trolling for new chumps and recommending initial items to read.

Aaahhh. Now I see. RobinZ.

I usually read 'Robin' as Robin Hanson from Overcoming Bias, the 'sister site' from the sidebar. That made me all sorts of confused when I saw that you first found us when you were talking to a biased crackpot.

Anyway, welcome to Lesswrong.com.

How can we disagree? 1+1=2. Can't that be extrapolated to many things?

Let's see:

  • One of us is stupid.
  • One of us doesn't respect the other (thinks they are stupid).
  • One of us is lying (or withholding or otherwise distorting the evidence).
  • One of us doesn't trust the other (thinks they aren't being candid with evidence, so cannot update on all that they say).
  • One of us doesn't understand the other.
  • The disagreement is not about facts (i.e. normative judgments and political utterances).
  • The process of disagreement is not about optimally seeking facts (i.e. it is a ritualized social battle).

Some combination of the above usually applies, where obviously I mean "at least one of us" in all cases. Of course, each of those dot points can be broken down into far more detail. There are dozens of posts here describing how "one of us could be stupid". In fact, you could also replace the final bullet point with the entire Overcoming Bias blog.
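The list above is essentially the flip side of Aumann's agreement theorem: honest Bayesians who share a common prior and fully pool their evidence cannot end up with different posteriors, so persistent disagreement points at one of the failures listed. A toy sketch in Python (the function name and the numbers are mine, purely illustrative):

```python
from fractions import Fraction

def posterior(prior, p_e_if_h, p_e_if_not_h):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)."""
    p_e = p_e_if_h * prior + p_e_if_not_h * (1 - prior)
    return p_e_if_h * prior / p_e

# Two agents start from the same prior and update on the same pooled evidence.
prior = Fraction(1, 2)
a = posterior(prior, Fraction(3, 4), Fraction(1, 4))
b = posterior(prior, Fraction(3, 4), Fraction(1, 4))
assert a == b  # agreement is forced once prior and evidence are shared
```

Any lasting disagreement between the two agents would have to enter through a different prior, different (unshared) evidence, or a broken update, which is just the bullet list restated.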

comment by RobinZ · 2010-02-26T14:59:12.406Z · score: 0 (0 votes) · LW · GW

I usually read 'Robin' as Robin Hanson from Overcoming Bias, the 'sister site' from the sidebar. That made me all sorts of confused when I saw that you first found us when you were talking to a biased crackpot.

So do I, actually. He got here first, is the thing.

comment by realitygrill · 2010-02-20T04:40:51.681Z · score: 3 (3 votes) · LW · GW

Hi. My name's Derrick.

I've been reading LW and HN for a while now but have only just started to learn to participate. I'm 23, ostensibly hold a bachelor's in economics, and interested in way too much - a dilettante of sorts. Unfortunately I have the talent of being able to sound like I know stuff just by quickly reading around a subject.

I've pretty much always been a Traditional Rationalist; I kind of treated the site discussions as random (if extremely high-impact) insights. Getting interested in Bayesian modeling sort of sent me on a path here. A lot of Eliezer's Coming of Age sequence reminds me of myself. Is 23 the magical age for Bayesian Enlightenment?

My current interest is in the Art of Strategy, in the way Musashi set down.

Just discovered the sequences and some recommended books! Think I'm going to be sidetracked for a while now...

comment by Kevin · 2010-01-27T09:54:50.234Z · score: 3 (3 votes) · LW · GW

Hi. My name's Kevin. I'm 23. I graduated with a degree in industrial engineering from the University of Pittsburgh last month. I have a small ecommerce site selling a few different kinds of herbal medicine, mainly kratom, and I buy and sell sport and concert tickets. Previously I started a genetic testing startup and I am gearing up for my next startup.

I post on Hacker News a lot as rms. kfischer &$ gmail *^ com for email and IM, kevin143 on Twitter, kfischer on Facebook.

I signed up for Less Wrong when it was first started but have just recently reached the linguistic level where I feel I can almost keep up with the conversation. 9 months ago I found myself bored by the nearly exclusive focus on meta-conversation and rationality. I would just read Eliezer's less meta stuff. But since graduating from school and having a job that requires me to work no more than 2 hours a day, I've been able to dedicate myself to social hedonism/relationship building and philosophy. I've learned more in one month of posting here than I did in my last two years of college classes.

I posted my rationalist origin story a while ago. http://lesswrong.com/lw/2/tell_your_rationalist_origin_story/74

comment by Paul Crowley (ciphergoth) · 2010-01-27T10:41:10.681Z · score: 3 (3 votes) · LW · GW

I'm sure you're not surprised by this question :-) but if you're a rationalist, how come you sell herbal medicines?

comment by Kevin · 2010-01-27T10:59:19.287Z · score: 3 (3 votes) · LW · GW

Herbal medicine is a polite euphemism for legal drugs. The bulk of our business comes from one particular leaf that does have legitimate medical use and is way, way more effective than a reasonable prior says it should be.

We were actually planning on commercializing the active ingredient (called 7H), based on this gap we found in the big pharma drug discovery process, and it would have been a billion dollar business. However, it would have required us to raise money for research, so we could iterate through all of the possible derivatives of the molecule, and it's nearly impossible to raise money for research without having a PhD in the relevant area. We tried but kept hitting catch-22s.

At the most recent Startup School, I met someone who introduced me to a young CEO funded by top VCs who assured me that this idea fit the VC model perfectly, that he was pretty confident we could raise a million dollars for research and a patent, and that for something with potential like this, it did not matter at all that our team was incomplete; the VCs would find us people. I told him to give me a day to revise our one-pager. I did a quick patent search and found that the Japanese discoverers of 7H had just filed a patent on all possible derivatives of 7H -- and they found some really awesome derivatives. They discovered 7H in 2001 and filed for the patent on the derivative molecules in 2009. For various reasons, we believed that their funding did not cover all derivatives of 7H and that they were chasing an impossible pharmaceutical dream, but in retrospect we believe they were selectively publishing papers on their discoveries to throw others off their tracks: why else would they have published the discovery of a medically useless derivative?

We came so close, but it always seemed a little too good to be true. There's always the next thing. For now, selling the leaf itself pays the rent.

PM or email for more details about the herb/molecule in question; I think it's probably inappropriate to post the links to my business or even the relevant Wikipedia page here.

comment by Paul Crowley (ciphergoth) · 2010-01-27T11:08:39.371Z · score: 1 (1 votes) · LW · GW

OK, that makes sense, thanks!

comment by ideclarecrockerrules · 2010-01-06T08:29:22.882Z · score: 3 (3 votes) · LW · GW

Male, 26; Belgrade, Serbia. Graduate student of software engineering. Been lurking here for a few months, reading sequences and new stuff through RSS. Found the site through reddit, likely.

Self-diagnosed (just now) with impostor syndrome. Learned a lot from reading this site. Now registered an account to facilitate learning (by interaction), and out of desire to contribute back to the community (not likely to happen by insightful posts, so I'll check out the source code).

comment by Sly · 2010-01-03T11:30:35.773Z · score: 3 (3 votes) · LW · GW
  • Anthony
  • Age 21
  • Computer Science Student
  • Seattle/Redmond area

I have been lurking LW and OB since summer and finally became motivated/bored enough to post. I do not remember exactly how I came to find this site, but it was probably from following a link on some atheist blog or forum.

I became interested in rationality after taking some philosophy classes my freshman year and discovering that I had been wrong about religion. Everything followed from that.

Interests that you probably do not care about: Gaming and game design in particular. I have thus far made a flash game and an iPhone game, both of which are far too difficult for most people.

comment by Matt_Duing · 2009-10-17T04:00:19.886Z · score: 3 (3 votes) · LW · GW

  • Name: Matt Duing
  • Age: 24
  • Location: Pittsburgh, PA
  • Education: undergraduate

I've been an overcoming bias reader since the beginning, which I learned of from Michael Anissimov's blog. My long term goal is to do what I can to help mitigate existential risks and my short term goals include using rationality to become a more accurate map drawer and a more effective altruist.

comment by pdf23ds · 2009-09-21T06:32:52.760Z · score: 3 (3 votes) · LW · GW

Eh. Might as well.

  • Chris Capel
  • (soon to be) Mount Pleasant, TX (hi MrHen!)
  • Programmer

I've been following Eliezer since the days of CFAI, and was an early donor to SIAI. I struggle with depression, and thus am much less consistently insightful than I wish I'd be. I'm only 24 and I already feel like I've wasted my life, fallen permanently behind a bunch of the rest of you guys, which kind of screws up my high ambitions. Oh well.

I'd like to see a link explaining the mechanics of the karma system (like how karma relates to posting, for instance) in this post.

comment by orthonormal · 2009-12-30T21:16:04.320Z · score: 1 (1 votes) · LW · GW

Welcome, Chris!

I'm only 24 and I already feel like I've wasted my life, fallen permanently behind a bunch of the rest of you guys, which kind of screws up my high ambitions. Oh well.

It's poor form of me to analyze you from outside, but this reminds me of the discussion of impostor syndrome we've been having in another thread. I definitely identify with this kind of internal monologue, and it's helped me to recognize that others suffer it too (and that it's typically a distorted view).

I'd like to see a link explaining the mechanics of the karma system (like how karma relates to posting, for instance) in this post.

I second this, especially now that the karma threshold for posting has been changed.

comment by pdf23ds · 2010-01-03T03:44:49.230Z · score: 1 (1 votes) · LW · GW

I don't think I have a problem with imposter syndrome in particular. I believe I'm appropriately proud of some of my real accomplishments.

comment by orthonormal · 2010-01-03T05:00:26.288Z · score: 0 (0 votes) · LW · GW

As well you should be. Great idea, and (reading the comments) well executed!

comment by Larks · 2009-08-11T23:21:03.861Z · score: 3 (3 votes) · LW · GW
Handle: Larks (also commonly Larklight, OxfordLark, Artrix)
Name: Ben
Sex: Male
Location: Eastbourne, UK 
Age: at 17 I suspect I may be the baby of the group?
Education: results permitting (to which I assign a probability in excess of 0.99) I'll be reading Mathematics and Philosophy at Oxford
Occupation: As yet, none. Currently applying for night-shift work at a local supermarket

I came to LW through OB, which I found as a result of Bryan Caplan's writing on Econlog (or should it be at Econlog?). I fit much of the standard pattern: atheist, materialist, economist, reductionist, etc. Probably my only departure is being a Conservative Liberal rather than a libertarian; an issue of some concern to me is the disconnect between the US/Econlog/OB/LW/Rationalist group and the UK/Classical Liberal/Conservative Party group, both of which I am interested in. Though Hayek, of course, pervades all.

In an impressive display, I suppose, of cognitive dissidence, I realised that the Bible and Evolution were contradictory in year 4 (age:8), and so came to the conclusion that the continents had originally been separated into islands on opposite sides of the planet. Eden was on one side, evolution on the other, and then continental drift occurred. I have since rejected this hypothesis. I came to Rationalism partly as a result of debating on the NAGTY website.

There are probably two notable influences OB/LW have had on my life. Firstly, I've begun to reflexively refer to what would or would not be empirically the case under different policies, states of affairs, etc., thus making discourse notably more efficient (or at least, it makes it harder for other people to argue back. Hard to tell the difference.)

Secondly, I've given up trying to out-argue my irrational Marxist friend, and instead make money off him by making bets about political and economic matters. This does not seem to have affected his beliefs, but it is profitable.

comment by Alicorn · 2009-08-11T23:28:46.091Z · score: 3 (3 votes) · LW · GW

cognitive dissidence

I suspect you mean "cognitive dissonance". Perhaps you meant "cognitive dissidents", though, which is closer in spelling and would be a charming notion.

Edit: I looked it up and apparently, unbeknownst to me, "dissidence" is a word. But I still suspect that "dissonance" was meant and that "dissidents" would have been charming.

comment by conchis · 2009-08-11T23:32:28.188Z · score: 4 (4 votes) · LW · GW

Dissidence (i.e. dissent/the state of being a dissident) actually seems to fit the context better than dissonance. I thought it was a nice turn of phrase.

comment by Larks · 2009-08-12T23:06:25.440Z · score: 1 (1 votes) · LW · GW

I'm glad that my word has caused such joy. I've now read the line so many times I can't for the life of me work out which one I intended, or is correct, or recall if it was simply a typo!

comment by ajayjetti · 2009-07-23T01:24:38.555Z · score: 3 (3 votes) · LW · GW

Hi

I am Ajay from India. I am 23. I was a highly rebellious person (still am, I think); I flunked out of college, but completed it to become a programmer. As soon as I finished college, I had severe depression because of a woman. I then thought of doing a Masters degree in the US, and applied, but then dropped the idea. Then I recaptured a long-gone passion to make music, so I started drumming. I got accepted to Berklee College of Music, but then lost interest in making a career out of it; I have some reasons for that. Then I started reading a lot (parallel to some programming). I face all the problems that an average guy faces (from social to economic problems). I graduated from one of the top colleges in India and now don't do my degree any justice. Sometimes I think about the fact that all my colleagues are happy working with companies like Google, Oracle, etc. In a spur to find a balance, I took the GMAT, applied, and got admitted to some supposedly TOP MBA schools. But I again lost interest in pursuing that. Now I write a bit, read, and teach primary school mathematics in a local school. I love music ranging from Art Tatum to Balamuralikrishna to Ilaiyaraaja to blues. I have been to the US once, when I was working with Perot Systems Bangalore (I was campus-placed there). I would like to travel more, but I don't see that happening in the near future because of financial constraints and constraints imposed by the governments of this world.

So, I always keep searching for interesting "cures" on the internet. One fine day I found Paul Graham's website through some Ajax site. Then I was reading something on Hacker News, something related to cult followings and such. There was a name mentioned there: Eliezer Yudkowsky (hope I spelled it right). So I wikied that name, found his site, and from there came to Less Wrong and Overcoming Bias. For two months I have been really obsessed with this blog. I don't know how it will help me "practically", but I am quite happy reading and demystifying my brain on certain things.

One thing: I have noticed that this forum has people who are relatively intellectual. A lot of them seem to be from developed countries and have very little idea of how things work in a country like India. Sitting here, all these things happening in the "developed" world seem incredible to me. I get biased like a lot of Indians who think the US or Europe is a better place. I don't need to say that there are millions of Indians in these regions. Then I think some more. So far, I don't think anybody is doing things any differently when it comes to living a life. Even in this community I don't see us living differently; I don't know whether we even need to!

We are born, we live, and we die; that is the only truth that appeals to me so far. One might think that a different state of my mind would give a different opinion about what my brain thinks is "truth", but I doubt that. But I love this site. If anybody doubts whether this site has practical benefits, I say it is very useful. One thing stands out: people here are open to criticism. Even if we don't get truth from this site, we have so many better routes to choose from! This site seems to be a map, for a timeless travel. Don't give a shit about what others have to say. People can come up with theories about everything, it seems. And I don't like it when people have negative stuff to say about this forum. I am, and would like to stay, loyal to a forum that serves me well.

I hope something happens so that we are able to live for at least 500 years. I think that would be a good amount of time to get to know a few things (my fantasy).

I have recently started writing at http://ajayjetti.com/

Thanks for reading if you have reached here!

comment by RobinZ · 2009-07-23T01:32:21.710Z · score: 0 (0 votes) · LW · GW

Welcome! I'll be interested to hear what you have to say.

comment by Whisper · 2009-07-22T06:56:31.615Z · score: 3 (5 votes) · LW · GW

Greetings. To this community, I will only be known as "Whisper". I'm a believer in science and rationality, but also a polytheist and a firm believer that there are some things that science cannot explain. I was given the site's address by one Alicorn, with whom I've been trying to practice Far-Seeing... with much failure.

I'm 21 years old right now, living in NY, and am trying to write my novels. As for who I am, well, I believe you'll all just have to judge me for yourselves by my actions (posts) rather than any self-description. Thankee to any of you who bothered to read.

comment by thomblake · 2009-07-22T14:19:42.136Z · score: 3 (3 votes) · LW · GW

a firm believer that there are some things that science cannot explain

I think this is a common enough epistemic position to be in, though some of us might define our terms a bit differently.

For any decent definitions of 'explain' and 'science', though, whatever "science can't explain" is not going to be explained by anything else any better.

comment by Cyan · 2009-04-21T00:59:48.588Z · score: 3 (3 votes) · LW · GW
  • Handle: Cyan
  • Age: 31
  • Species: Pan sapiens (male)
  • Location: Ottawa, Ontario, Canada
  • Education: B.Sc. biochemistry, B.A.Sc. chemical engineering, within pages of finishing my Ph.D. thesis in biomedical engineering
  • Occupation: statistical programmer (would be a postdoc if I were actually post the doc) at the Ottawa Institute of Systems Biology

I'm principally interested in Bayesian probability theory (as applied in academic contexts as opposed to rationalist ones). I don't currently attempt to apply rationalist principles in my own life, but I find the discussion interesting.

comment by MorganHouse · 2009-04-18T18:06:05.163Z · score: 3 (5 votes) · LW · GW
  • Handle: MorganHouse
  • Age: 25
  • Education: Baccalaureate in natural sciences
  • Occupation: Freelance programmer
  • Location: West Europe
  • Hobbies: Programming, learning, traveling, dancing

I discovered Less Wrong from a post on Overcoming Bias. I discovered Overcoming Bias from a comment on Slashdot.

I have been promoting rationality for as long as I can remember, although I have improved much in the past few years and even more after discovering this forum. About the same time as "citation needed" exploded on Wikipedia, I started applying this standard rigorously to my conversations, and I look for outside sources in my discussions every day. This community has taught me to promote nothing less than the full truth, wh