Welcome to Less Wrong!
post by MBlume · 2009-04-16T09:06:25.124Z · LW · GW · Legacy · 2000 comments
If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, and how you found us. Tell us how you came to identify as a rationalist, or describe what it is you value and work to achieve.
If you'd like to meet other LWers in real life, there's a meetup thread and a Facebook group. If you have your own blog or other online presence, please feel free to link it. If you're confused about any of the terms used on this site, you might want to pay a visit to the LW Wiki, or simply ask a question in this thread. Some of us have been having this conversation for a few years now, and we've developed a fairly specialized way of talking about some things. Don't worry -- you'll pick it up pretty quickly.
You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of the votes on their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this; it happened to many of us. If you have any questions about karma or voting, please feel free to ask here.
If you've come to Less Wrong to teach us about a particular topic, this thread would be a great place to start the conversation, especially until you've worked up enough karma for a top-level post. By posting here and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.
A note for theists: you will find LW overtly atheist. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false. If you'd like to know how we came to this conclusion you may find these related posts a good starting point.
A couple of technical notes: when leaving comments, you may notice a 'help' link below and to the right of the text box. This will explain how to italicize, linkify, or quote bits of text. You'll also want to check your inbox, where you can always see whether people have left responses to your comments.
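(A quick illustration of the formatting the help link describes -- these examples assume the comment box accepts standard Markdown, so treat them as a sketch and defer to the help page itself:

*asterisks around text* for italics
[link text](http://lesswrong.com) for a hyperlink
> a leading angle bracket for a quoted line)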
Welcome to Less Wrong, and we look forward to hearing from you throughout the site.
(Note from MBlume: though my name is at the top of this page, the wording in various parts of the welcome message owes a debt to other LWers who've helped me considerably in working the kinks out)
2000 comments
Comments sorted by top scores.
comment by BecomingMyself · 2011-01-15T23:35:59.874Z · LW(p) · GW(p)
Hi, I am Alyssa, a 16-year-old aspiring programmer-and-polymath who found her way to the wiki page for Egan's Law from the Achron forums. From there I started randomly clicking on links that mostly ended up leading to Eliezer's posts. I was a bit taken aback by his attitude toward religion, but I had previously seen mention of his AI Box thing (where (a) he struck me as awesome, and (b) he said some things about "intelligence" and "wisdom" that caused me to label him as an ally against all those fools who hated science), and I just loved his writing, so I spent about a week reading his stuff alternately thinking, "Wow, this guy is awesome" and "Poor atheist. Doesn't he realize that religion and science are compatible?" Eventually, some time after reading Religion's Claim to be Non-disprovable, I came to my senses. (It is a bit more complicated and embarrassing than that, but you get the idea.)
That was several months ago. I have been lurking not-quite-continuously since then, and it slowly dawned on me just how stupid I had been -- and more importantly, how stupid I still am. Reading about stuff like confirmation bias and overconfidence, I gradually became so afraid to trust myself that I became an expert at recognizing flaws in my own reasoning, without being able to recognize truth or flaws in others' reasoning. In effect, I had artificially removed my ability to consciously classify (non-obvious) statements as true: the same gross abuse of humility I had read about. After a bit of unproductive agonizing over how to figure out a better strategy, I have decided I'm probably too lazy for anything but making samples of my reasoning available for critique by people who are likely to be smarter than me -- for example, by participating in discussion on Less Wrong, which in theory is my goal here. So, hi! (I have been tweaking this for almost an hour and will submit it NOW.)
Replies from: lukeprog, None, MartinB, None, ata
↑ comment by lukeprog · 2011-01-15T23:41:32.574Z · LW(p) · GW(p)
Welcome, Alyssa!
Finding out how "stupid" I am is one of the most important things I have ever learned. I hope I never forget it!
Also, congrats on seriously questioning your religion at your age. I didn't do so until much later.
Replies from: timtyler
↑ comment by timtyler · 2011-01-15T23:59:06.570Z · LW(p) · GW(p)
I'm not sure Alyssa said she was religious!
Replies from: BecomingMyself
↑ comment by BecomingMyself · 2011-01-16T01:51:43.190Z · LW(p) · GW(p)
Now that I think of it I didn't say it explicitly, but I was. I called myself Catholic, but I had already rejected the Bible (because it was written by humans, of course) and concluded that God so loved His beautiful physics that He would NEVER EVER touch the universe (because I had managed to develop a fondness for science, though for some reason I did not yet accept e.g. materialism).
Replies from: Tesseract
↑ comment by Tesseract · 2011-01-17T00:24:30.164Z · LW(p) · GW(p)
That's pretty much Deism, I think. Not right, but not quite as wrong as some other possible approaches.
Welcome! I don't know how much or how systematically you've read, but if you're wondering about what makes something "true", you'll want to check out The Simple Truth (short answer: if it corresponds to reality), followed by Making Beliefs Pay Rent and What is Evidence.
But it sounds like you've made a very good start.
↑ comment by [deleted] · 2012-10-15T00:33:49.228Z · LW(p) · GW(p)
You should check out the LessWrong for high schoolers Facebook page.
comment by Normal_Anomaly · 2010-11-14T04:01:16.525Z · LW(p) · GW(p)
My name's Normal Anomaly, and I'm paranoid about giving away personal information on the Internet. Also, I don't like to have any assumptions made about me (though this is likely the last place to worry about that), so I'd rather go without a gender, race, etc. Apologies for the lack of much personal data. I can say that my major interest is biology, although I am not yet anything resembling an expert. I eventually hope to work in life extension research. I’m an Asperger’s Syndrome Sci Fi-loving nerd, which is apparently the norm here.
I used to have religious/spiritual beliefs, though I was also a fan of science and was not a member of an organized religion. I believed it was important to be rational and that I had evidence for my beliefs, but I was rationalizing and refusing to look at the hard questions. A couple years ago, I was exposed to atheism and rationalism and have since been trying to make myself more reasonable/less insane. I found LW through Harry Potter and the Methods of Rationality a few months ago, and have been lurking and reading the sequences. I'm still scared of posting on here because it’s the first discussion forum where I have known myself to be intellectually outclassed.
I chose the name Normal Anomaly because in my everyday meatspace life I feel different from (read: superior to) everyone around me, but on LW I feel like an ordinary mortal trying to keep up with people talking over my head. Hopefully I've lurked long enough to at least speak the language, and I won't be an annoyance when I comment. I want to socialize with people superior to me; unfortunately for me, they tend to want the same.
In the time I've been lurking, I've started seriously considering cryonics and will probably sign up unless something else changes my mind. I think it's pretty likely that an AGI will be developed eventually, and if it ever is it definitely needs to be Friendly, but I have no idea when other than that I hope it’s in my lifetime, which I want to end only of my own choosing and possibly never.
Replies from: shokwave, Alicorn, Jack, Carinthium, NancyLebovitz, lsparrish
↑ comment by shokwave · 2010-11-14T15:05:30.646Z · LW(p) · GW(p)
I'm still scared of posting on here because it’s the first discussion forum where I have known myself to be intellectually outclassed.
I have found that some of the time you can make up for a (perceived) lack of intellect with a little work, and this is true (from my own experience) here on LessWrong: when about to comment on an issue, it pays big dividends to use the search feature to check for something related in previous posts with which you can refine, change, or bolster your position. Of the many times I have done it, twice I caught myself in grievous and totally embarrassing errors!
For what it's worth, commenting on LW is so far from normal conversation and normal internet use that most intellects haven't developed methods for it; they have to grind through mostly the same processes as everyone else - and nobody can actually tell if it took you five seconds or five minutes to type your reply. My own replies might be left in the comment box for hours, to be reread with a fresh mind later and changed entirely.
tl;dr Don't be afraid to comment!
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2010-11-14T16:38:35.940Z · LW(p) · GW(p)
For what it's worth, commenting on LW is so far from normal conversation and normal internet use that most intellects haven't developed methods for it
This is interesting-- LW seems to be pretty natural for me. I think the only way my posting here is different from anywhere else is that my sentences might be more complex.
On the other hand, once I had a choice, I've spent most of my social life in sf fandom, where the way I write isn't wildly abnormal, I think.
Anyone who's reading this, do you think what's wanted at LW is very different from what's wanted in other venues?
Replies from: Emile, taryneast, katydee, shokwave, Swimmer963, wnoise, Randaly, None
↑ comment by Emile · 2010-11-15T19:45:13.104Z · LW(p) · GW(p)
I find writing on LW pretty 'normal', on par with some other forums or blog comments (though with possibly less background hostility and flamewars).
I suspect the ban on discussing politics does more to increase the quality of discourse here than the posts on cognitive bias.
↑ comment by taryneast · 2010-12-12T15:21:22.678Z · LW(p) · GW(p)
Anyone who's reading this, do you think what's wanted at LW is very different from what's wanted in other venues?
Yes. I get the sense that here you are expected to at least try for rigor.
In other venues, it's totally OK to randomly riff on a topic without having thought deeply about either the consequences, or whether your idea has any basis in reality.
↑ comment by shokwave · 2010-11-15T05:22:30.610Z · LW(p) · GW(p)
Wow, that is interesting ... conditional on more people feeling this way (LW is natural), I might just have focused my intellect on rhetoric and nonreasonable convincing to the point that following LW's guidelines is difficult, and then committed the typical mind fallacy and assumed everyone had too.
Replies from: NihilCredo
↑ comment by NihilCredo · 2010-11-15T07:26:28.647Z · LW(p) · GW(p)
Actually, I've come to notice that rhetoric and other so-called Dark Arts are still worth their weight in gold on LW, except when the harder subjects (math and logic) are at hand.
But LessWrong commenters definitely have plenty of psychological levers, and the demographic uniformity only makes them more effective. For a simple example, I guesstimate that, in just about any comment, a passing mention of how smart LessWrongers are is worth on average 3 or 4 extra karma points - and this is about as old as tricks can get.
Replies from: Jack, NancyLebovitz
↑ comment by NancyLebovitz · 2010-11-15T19:24:55.714Z · LW(p) · GW(p)
Of course, LessWrongers are smarter than most people, but what's really striking is the willingness to update. And the modesty.
Replies from: Emile
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-14T02:13:37.008Z · LW(p) · GW(p)
Anyone who's reading this, do you think what's wanted at LW is very different from what's wanted in other venues?
I haven't noticed, but this is the first online community I've belonged to. I'm used to writing fiction, which may affect the way I post here, but if it does, I don't notice it. Commenting feels natural. I don't try to make my sentences complex; if anything, I try to make them as simple as they can be while still conveying my point. And at the very least, my comments and posts aren't drastically downvoted.
↑ comment by wnoise · 2010-11-15T06:21:13.121Z · LW(p) · GW(p)
LW feels fairly normal to me as well. It is different from my experience of (most) other forums, but that's because on other forums I adjust myself to be more explicit about things that I feel should be taken for granted, including common-sense background knowledge, a materialistic worldview, and minor inferential steps. Here I can get to the point rather easily without having to worry (as much) about being misunderstood.
↑ comment by Randaly · 2010-12-24T07:19:35.092Z · LW(p) · GW(p)
Are you talking about the level of rationality, about the expected level (or types) of knowledge, or the grammar and sentence structure?
For obvious reasons, the level of rationality expected here is far higher than (AFAIK) anywhere else on the internet.
The expected knowledge at LW...is probably middling to above average for me. More relevantly, much more knowledge of science is expected here, in particular of the sciences that contribute to rationality (or, more realistically, the ones touched on in the sequences), which tend to be fairly 'hard'. I've found that much more knowledge of, e.g., history, classical philosophy, politics/political science, and other 'softer' disciplines is expected elsewhere.
As for grammar, I'd say that LW is middling to below average, though this may be availability bias: LW is much larger than most of the other internet communities I belong to, so it could have a higher number of errors while still having a better average level of grammar.
Replies from: Emile, wedrifid
↑ comment by wedrifid · 2010-12-24T07:29:10.737Z · LW(p) · GW(p)
The expected knowledge at LW...is probably middling to above average for me. More relevantly, much more knowledge of science, and in particular the sciences that contribute to rationality (or, more realistically, the ones touched on in the sequences), which tend to be fairly 'hard'. I've found a much higher knowledge of, e.g. history, classical philosophy, politics/political science, and other 'softer' disciplines is expected elsewhere.
I presume you are averaging over a high-sophistication sample of the internet, not the internet at large.
↑ comment by Jack · 2010-11-14T09:43:14.154Z · LW(p) · GW(p)
Also, I don't like to have any assumptions made about me (though this is likely the last place to worry about that), so I'd rather go without a gender, race, etc.
FYI, this had a "don't think of a pink elephant" effect on me. I immediately made guesses about your gender, race and age. I'm betting I'm not the only one. Sorry!
Anyway welcome! Sounds like you'll fit right in. Don't be too scared to comment, especially if it is just to ask a question (I don't recall ever seeing a non-sarcastic question downvoted).
↑ comment by Carinthium · 2010-11-14T09:42:20.490Z · LW(p) · GW(p)
Mightn't you be discriminated against for having Asperger's Syndrome? There is presumably some risk of that even here.
Replies from: Jack
↑ comment by Jack · 2010-11-14T09:47:41.029Z · LW(p) · GW(p)
I sometimes feel discriminated against here for not being autistic enough.
Replies from: AdeleneDawner
↑ comment by AdeleneDawner · 2010-11-14T21:10:36.309Z · LW(p) · GW(p)
Can you, or others, give some examples of this?
I don't doubt you, but this is an area where I, and other auties, seem likely to be less well calibrated - we tend to encounter discrimination often enough that it can come to seem like a normal part of interacting with people, rather than something that we should avoid doing. Being made aware of it when that's the case is then likely to be useful to those of us who'd like to recalibrate ourselves.
Replies from: Jack
↑ comment by Jack · 2010-11-14T22:06:27.210Z · LW(p) · GW(p)
Er. For example, it is really hard to communicate here without being totally literal! And people don't get my jokes! :-)
I wasn't complaining. I was trying to point out that the risk of being discriminated against for having Asperger's Syndrome here was very low, given the high number of autism-spectrum commenters and the general climate of the site. I thought I was making a humorous point about the uniqueness of Less Wrong, like "We're so different from the rest of the internet; we discriminate against neurotypicals! Take that, rest of the world!" while also sort of engaging in collective self-mockery: "Less Wrong is a really autistic place."
I really hope the upvotes are from people who chuckled, and not sympathy for an oppressed minority (in any case I'm like a 26 on the Baron-Cohen quiz).
Sorry if I alarmed anyone. *Facepalm*
Replies from: AdeleneDawner, Kingreaper, Normal_Anomaly
↑ comment by AdeleneDawner · 2010-11-14T22:48:02.339Z · LW(p) · GW(p)
I did chuckle, actually, but that's not mutually exclusive with it being a true statement that I haven't previously noticed the truth of. It's better to check than to assume, per my values. :)
↑ comment by Kingreaper · 2010-12-12T15:41:46.723Z · LW(p) · GW(p)
I really hope the upvotes are from people who chuckled, and not sympathy for an oppressed minority (in any case I'm like a 26 on the Baron-Cohen quiz).
I upvoted due to chuckling, because it contains a nugget of truth.
I don't believe that neurotypicals are oppressed here, but I can certainly see that NTs would feel marginalised in the same way that auts can feel marginalised in normal social scenes.
I probably go below 26 on the Baron-Cohen test sometimes (I normally sit at 31, but a recent bout of depression has had me at ~38), but if so, I've never taken it at such a time (well, I wouldn't expect to - I'd be too busy socialising).
↑ comment by Normal_Anomaly · 2010-11-15T00:02:45.302Z · LW(p) · GW(p)
I got that you may have been making a joke, but I wasn't sure how much truth was behind it. Now that I know it was a joke, I do find it funny.
↑ comment by NancyLebovitz · 2010-11-14T09:23:26.946Z · LW(p) · GW(p)
That's an interesting choice to not give personal information. Do you find that people tend to jump to conclusions about you? Do you usually tell them that you aren't giving them that information?
Replies from: Normal_Anomaly
↑ comment by Normal_Anomaly · 2010-11-14T21:28:27.066Z · LW(p) · GW(p)
I don't really know how to deal with multiple replies without making six different comments and clogging the thread, so I'm responding to everyone upthread of me in reverse order.
Nancy: I lurk on a lot more sites than I comment, so I don't really have the experience to answer those questions. This is the first site I've joined where people give away as much info as they do.
Jack: I'm sorry you're discriminated against and I'll try not to do it. Also, like I said, I rarely get on forums, so I didn't know about the "don't think of a pink elephant effect". I'm glad you pointed it out.
Carinthium: I'm happy with my Asperger's; I wouldn't give up the good parts to get rid of the bad parts. I've never encountered discrimination on that score, so it didn't really occur to me. Besides, it's the sort of thing that will probably be visible in my comments.
Shokwave: Thanks for the reassurance. I do find the conversation here unique, in content and in tone.
Alicorn: I like e for the subject case, en for the object, and es for possessive, but I don't use them in meatspace or other forums as much as in my thoughts because it confuses people. I'll probably use them here. What do you think?
Replies from: Alicorn
↑ comment by Alicorn · 2010-11-14T22:05:54.617Z · LW(p) · GW(p)
Alicorn: I like e for the subject case, en for the object, and es for possessive, but I don't use them in meatspace or other forums as much as in my thoughts because it confuses people. I'll probably use them here. What do you think?
I'll use those pronouns for you if you prefer them. When I'm picking gender-neutral pronouns on my own I usually use some combination of Spivak and singular "they".
↑ comment by lsparrish · 2010-11-14T04:26:18.546Z · LW(p) · GW(p)
Welcome! One thing you can easily do without being a super-genius is spread more accurate ideas about cryonics. I get a lot of mileage out of Google Alerts and Yahoo Answers for this purpose. I still don't have arrangements myself, but I certainly plan to.
comment by EStokes · 2009-12-19T23:58:59.307Z · LW(p) · GW(p)
I'm Ellen, age 14, student, planning to major in molecular biology or something like that. I'm not set on it, though.
I think I was browsing Wikipedia when I decided to google some related things. I think I found some libertarian or anarchist blog that had a link to Overcoming Bias or Less Wrong. Or I might've seen the word transhumanism on the wiki page for libertarianism and googled it, with that eventually leading here somehow. My memory is fuzzy, as the route was pretty irrelevant to me at the time.
I'm an atheist, and have been for a while, as is typical for this community. I wasn't brought up religiously, so it was pretty much untheism that turned into atheism.
My rationalist roots... I've always wanted to be right, of course. Partly because I could make mistakes from being wrong, partly because I really, really hated looking stupid. Then I figured that I couldn't know if I was right unless I listened to the other side, really listened, and was careful. (Not enough people do even this. People are crazy, the world is mad. Angst, angst.) I found lesswrong which has given me tools to much more effectively do this. w00t.
I'm really lazy. Curse you, akrasia!
It should be obvious how I came up with my username. Aren't I original?
Some other hobbies I have are gaming and anime/manga. Amusingly enough, I barely ever watch any anime. The internet is very distracting.
Edit: Some of this stuff is outdated. I don't plan to major in molecular biology, for one, and I don't like how I wrote the rationalist roots part. Meh. I doubt anyone is going to see this, but I'm 16 now and plan to major in Computer Science.
Replies from: Eliezer_Yudkowsky, Kevin, Zack_M_Davis
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-20T23:40:11.536Z · LW(p) · GW(p)
Welcome on board! You're a key segment of my target audience, so please speak up if you have any thoughts on things I could have done better in my writing.
↑ comment by Kevin · 2010-02-17T07:16:49.793Z · LW(p) · GW(p)
I strongly recommend people go to school for something they find interesting, but since I don't think it's commonly known information, I would like to note that salaries for biologists are lower than for other scientists. Lots more people graduate with PhDs in biology than PhDs in physics, which really drives down the salaries for biologists who don't have tenure. Though if you plan on going to professional school (medical school, business school, etc.), a molecular biology degree is a good thing to have if you enjoy molecular biology. Again, I really think people should go to school for something they like, but if you want to make a lot of money, don't become a researching biologist. Biology researchers with MDs do a lot better financially.
Replies from: EStokes
↑ comment by Zack_M_Davis · 2009-12-20T00:16:01.822Z · LW(p) · GW(p)
It should be obvious how I came up with my username. Aren't I original?
Apparently not. O, but welcome!
comment by free_rip · 2011-01-27T04:43:46.933Z · LW(p) · GW(p)
Hi, I'm Zoe. I found this site in a round-about way after reading Dawkins's The God Delusion and searching some things related to it. There was a comment in a forum mentioning Less Wrong and I was interested to see what it was.
I've been mainly lurking for the past few months, reading the sequences and some of the top posts. I've found that while I understand most of it, my high-school level math (I'm 16) is quite inadequate, so I'm working through the Khan Academy to try and improve it.
I'm drawn to rationalism because, quite simply, it seems like the world would be a better place if people were more rational and that has to start somewhere. Whatever the quotes say, truth is worthwhile. It also makes me believe in myself more to know that I'm willing and somewhat able to shift my views to better match the territory. Maybe someday I'll even advance from 'somewhat' into plain ol' 'able'.
My goals here, at this point, aren't particularly defined. I find the articles and the mission inspiring and interesting and think that it will help me. Maybe when I've learnt more I'll have a clearer goal for myself. I already analyze everything (to the point where many a teacher has been quite annoyed), so I suppose that's a start. I'm looking forward to learning more and seeing how I can use it all in my actual life.
Cheers, Zoe
Replies from: None
↑ comment by [deleted] · 2011-12-20T16:59:51.283Z · LW(p) · GW(p)
Welcome!
Hope that now, a few months later, you still find some utility in our community. Overall, I just wanted to chime in and say good luck in getting sane in your lifetime; it's something all of us here strive for, and it's far from easy. :)
Replies from: free_rip
↑ comment by free_rip · 2011-12-20T17:26:26.973Z · LW(p) · GW(p)
Thank you! I am still enjoying the site - there's so much good stuff to get through. I've read most of the sequences and top posts now, but I'm still in the (probably more important) process of compiling a list of all the suggested activities/actions - plus any I can think of by applying the basic principles to my own life - for easy reference, to try when I have some down-time.
Replies from: thomblake
comment by mni · 2009-07-24T21:41:16.399Z · LW(p) · GW(p)
Hello.
I've been reading Less Wrong from its beginning. I stumbled upon Overcoming Bias just as LW was being launched. I'm a young mathematician (an analyst, to be more specific) currently working towards a PhD and I'm very interested in epistemic rationality and the theory of altruist instrumental rationality. I've been very impressed with the general quality of discussion about the theory and general practice of truth-seeking here, even though I can think of places where I disagree with the ideas that I gather are widely accepted here. The most interesting discussions seem to be quite old, though, so reviving those discussions out of the blue hasn't felt like - for lack of a better word - a proper thing to do.
There are many discussions here that I don't care about. A large proportion of people here are programmers or otherwise from a CS background, and that colors the discussions a lot. Or maybe it's just that the prospect of an AGI in the near future doesn't seem at all likely to me. Anyway, I just don't care about the AI/singularity stuff, the tangentially related topics that I bunch together with it, or approaching rationality topics from a programmer's point of view. Not very much, at least.
The self-help stuff, "winning is everything" and related stuff I'd rather not read. Well, I do my best not to. The apparent lack of concern for altruism in those discussions makes me even wish they wouldn't take place here in the first place.
And then there are the true failings of this community. I had been thinking of registering and posting in some threads about the more abstract sides of rationality, but I must admit I eventually got around to registering and posting because of the gender threads. But there's just so much bullshit going on! Evolutionary psychology is grossly misapplied (1). The obvious existence of oppressive cultural constructs (2) is flatly denied. The validity of anecdotes and speculation as evidence is hardly even questioned. The topics that started the flaming have no reason to be here in the first place. This post pretty well sums up the failures of rationality here at Less Wrong; and that post has been upvoted to 25! Now, the failings and attitudes that surfaced in the gender debate have, of course, been visible for quite some time. But the apparent prevalence of these failures of thought has made me wonder whether this community as a whole is actually worth spending my time on.
So, in case you're still wondering, what has generously been termed "exclusionary speech" really drives people away (3). I'm still hoping that the professed rationality is enough to overcome the failure modes that are currently so common here (4). But unfortunately I think my possible contributions won't be missed if I rid myself of wishful thinking and see it's not going to happen.
It's quite a shame that a community with such good original intentions is failing after a good start. Maybe humans simply won't overcome their biases (5) yet in this day and age.
So. I'd really like to participate in thoughtful discussions with rationalists I can respect. For quite a long time, Less Wrong seemed like the place, but I just couldn't find a proper place to start (I dislike introductions). But now as I'm losing my respect for this community and thus the will to participate here, I started posting. I hope I can regain the confidence in a high level of sanity waterline here.
(Now a proper rationalist would, in my position, naturally reconsider his own attitudes and beliefs. It might not be surprising that I didn't find all too much to correct. So I might just as well assume that I haven't been mind-killed quite yet, and just make the post I wanted to.)
EDIT: In case you felt I was generalizing with too much confidence - and as I wrote here, I agree I was - see my reply to Vladimir Nesov's reply.
(1) I think failing to control for cultural influences in evolutionary psychology should be considered at least as much of a fail as postulating group selection. Probably more so.
(2) Somehow I think phrases like "cultural construct", especially when combined with qualifiers like "oppressive", trigger immediate bullshit alarms for some. To a certain extent, it's forgivable, as they certainly have been used in conjunction with some of the most well-known anti-epistemologies of our age. But remember: reversing stupidity doesn't make you any better off.
(3) This might be a good place to remind the reader that [our kind can't cooperate](http://lesswrong.com/lw/3h/why_our_kind_cant_cooperate/). (This is actually referring to many aspects of the recent debate, not just one.)
(4) Yes, I know, I can't cooperate either.
(5) Overcoming Bias is quite an ironic name for that blog. EDIT: This refers exclusively to many of Robin Hanson's posts about gender differences I have read. I think I saw a post linking to some of these recently, but I couldn't find a link to that just now. Anyway, this footnote probably went a bit too far.
Replies from: orthonormal, SoullessAutomaton, Vladimir_Nesov, None, MrHen, None, Z_M_Davis, Vladimir_Nesov
↑ comment by orthonormal · 2009-07-24T22:43:48.476Z · LW(p) · GW(p)
Somehow I think phrases like "cultural construct", especially when combined with qualifiers like "oppressive", trigger immediate bullshit alarms for some. To a certain extent, it's forgivable, as they certainly have been used in conjunction with some of the most well-known anti-epistemologies of our age. But remember: reversing stupidity doesn't make you any better off.
Upvoted for this in particular.
↑ comment by SoullessAutomaton · 2009-07-24T23:33:58.199Z · LW(p) · GW(p)
I appreciate your honest criticisms here, as someone who participated (probably too much) in the silly gender discussion threads.
I also encourage you to stay and participate, if possible. Despite some missteps, I think there's a lot of potential in this community, and I'd hate to see us losing people who could contribute interesting material.
↑ comment by Vladimir_Nesov · 2009-07-25T12:16:03.892Z · LW(p) · GW(p)
The evils of in-group bias are getting at me. I felt a bit of anger when reading this comment. Go figure, I rarely feel noticeable emotions, even in response to dramatic events. The only feature that could trigger that reaction seems to be the dissenting theme of this comment, the way it breached the normal narrative of the game of sane/insane statements. I wrote a response after a small time-out, I hope it isn't tainted by that unfortunate reaction.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2009-07-25T13:07:16.185Z · LW(p) · GW(p)
I don't think it's in-group bias. If anything, people are giving mni extra latitude because he or she is seen as new here.
If an established member of the community were to make the same points, that much of the discussion is uninteresting or bullshit, that the community is failing and maybe not worth "wasting" time for, and to claim to have interesting things to say but make excuses for not actually saying them, I bet there would be a lot more criticism in response.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-07-25T13:16:33.001Z · LW(p) · GW(p)
As I wrote, anger is an improbable reaction for me, and there doesn't seem to be anything extraordinarily angering about that comment, so I can't justify that emotion appearing in this particular instance. The fact that the poster isn't a regular might be a factor as well.
↑ comment by [deleted] · 2011-09-16T05:45:09.658Z · LW(p) · GW(p)
mni, I followed in your footsteps years later, and then dropped away, just as you did. I came back after several months to look for an answer to a specific question -- stayed for a bit, poking around -- and before I go away again, I'd just like to say: if this'd been a community that was able to keep you, it probably would have kept me too.
You seem awesome. Where did you go? Can I follow you there?
Replies from: Nisan
↑ comment by Nisan · 2011-09-16T06:20:15.340Z · LW(p) · GW(p)
I see people leave Less Wrong for similar reasons all the time. In my optimistic moods, I try to understand the problem and think up ways to fix it. In my pessimistic moods, I conclude that this blog and its meetups were doomed from the start; that the community will retain only those women who are already dating people in the community; and that the whole thing will end in a whimper.
Replies from: shokwave
↑ comment by Z_M_Davis · 2009-07-24T23:08:35.516Z · LW(p) · GW(p)
I'm still hoping that the professed rationality is enough to overcome the failure modes that are currently so common here[.] But unfortunately I think my possible contributions won't be missed if I rid myself of wishful thinking and see it's not going to happen. [...] I'd really like to participate in thoughtful discussions with rationalists I can respect. For quite a long time, Less Wrong seemed like the place, but I just couldn't find a proper place to start (I dislike introductions). But now as I'm losing my respect for this community and thus the will to participate here, I started posting. I hope I can regain the confidence in a high level of sanity waterline here.
Oh, please stay!
↑ comment by Vladimir_Nesov · 2009-07-25T12:15:47.755Z · LW(p) · GW(p)
I assume that you are overconfident about many of the statements you made (and/or underestimate the inferential gap). I agree with some things you've said, but about some of the things you've said there seems to be no convincing argument in sight (either way), and so one shouldn't be as certain when passing judgment.
Replies from: Z_M_Davis, mni
↑ comment by mni · 2009-07-27T10:29:47.683Z · LW(p) · GW(p)
I think I understand your point about overconfidence. I had thought of the post for a day or two but I wrote it in one go, so I probably didn't end up expressing myself as well as I could have. I had originally intended to include a disclaimer in my post, but for reasons that now seem obscure I left it out. When making as strong, generalizing statements as I did, the ambiguity of statements should be minimized a lot more thoroughly than I did.
So, to explain myself a little bit better: I don't hold the opinion that what I called "bullshit" is common enough here to make it, in itself, a "failing of this community". The "bullshit" was, after all, limited only to certain threads and to certain individuals. What I'm lamenting and attributing to the whole community is a failure to react to the "bullshit" properly. Of course, that's a sweeping generalization in itself - certainly not everyone here failed to react in what I consider a proper way. But the widest consensus in the multitude of opinions seemed to be that the reaction might be hypersensitivity, and that the "bullshit" should be discouraged only because it offends and excludes people (and not because it offends and excludes people for irrational reasons).
And as for overconfidence about my assessment of the "bullshit" itself, I don't really want to argue about that. Any more than I'd want to argue with people who think atheists should be excluded from public office. (Can you imagine an alternate LW in which the general consensus was that's a reasonable, though extreme, position to take? That might give an only slightly exaggerated example of how bizarrely out of place I considered the gender debate to be.) If pressed, I will naturally agree to defend my statements. But I wouldn't really want to have to, and restarting the debate isn't probably in anyone else's best interests either. So, I'll just have to leave the matter as something that, in my perspective, lessens appreciation for the level of discourse here in quite a disturbing way. Still, that doesn't mean that LW wouldn't get the best marks from me as far as the rationality of internet communities I know is considered, or that a lowered single value for "the level of discourse" lessened my perception of the value of other contributions here.
Now, the latest top-level post critiquing Bayesianism looks quite interesting; I think I'd like to take a closer look at that...
comment by Filipe Marchesini (filipe-marchesini) · 2020-03-11T09:27:17.935Z · LW(p) · GW(p)
WHO I AM: I am 24 years old. I have been giving math, chemistry and physics lessons to high school students since I was 17. I am pretty good at it, and I have never advertised anywhere that I give lessons - all new students come from recommendations by older students. By the end of 2016 I had already spent 38 months at university, trying to get mechanical engineering credentials. I wasn't interested in the course - I really liked the math and the subjects, but the teachers sucked and the experience was, in general, terrible. I hated my life and was doing it just to look good for my parents - I have always loved the arts, and I have studied classical music since I was 14. I heard about "artificial intelligence" just once, and I decided all my actions in life should be directed towards automating the process of learning. I started an MIT Python course and then dropped out of university. I am completely passionate about learning.
WHAT I'M DOING: (short-term) I am currently learning to make beautiful animations with the Python library called Manim (Mathematical ANIMations). I am looking for people to join forces with to transform dozens of posts from The Sequences into video content with this library. I hope to earn money from it and spread rationality in general, making it more popular. My family and my best friends have a shit quality of life, and I would like to earn enough money to change that. Most of my reasoning is "explore the solution space and find the best ways to help the most people; if you really help and it is something scalable, you will make money along the way". If I get the money (an instrumental goal), I will help those who need it (relatives and close friends). As soon as I achieve this goal, I will jump to my long-term goals of helping end disease, extending life (first), then killing unwanted death, refining art, and playing games.
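(For readers curious what this looks like in practice, here is a minimal sketch of the kind of Manim scene described above, assuming the community-maintained manim package and its CLI; the scene name and choice of formula are illustrative, not taken from any actual video:

from manim import Scene, MathTex, Write

class BayesTheorem(Scene):  # illustrative scene name, not from the comment
    def construct(self):
        # Animate Bayes' theorem being written out stroke by stroke,
        # the sort of shot a Sequences-post video might open with.
        formula = MathTex(r"P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}")
        self.play(Write(formula))
        self.wait(2)

Rendered with something like "manim -pql bayes.py BayesTheorem".)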
HOW I FOUND YOU: I googled "pascal wager artificial intelligence" after seeing a Robert Miles (AI researcher) video. Then I found Roko's basilisk, read a lot about it, and had a bad week. Then I found Eliezer Yudkowsky talking about it, and my fear went away. Then I discovered this forum, and it turned out to be my main source of truth-seeking. I don't personally know anyone who has access to this kind of information source (LessWrong): collective truth-seeking on strong grounds.
I VALUE: I value people who work towards making the lives of others better. I value people who really seek truth. I value people who take ethical problems seriously. I value people who spend more than 5 minutes by the clock looking for better solutions to our everyday problems.
TO ACHIEVE WHAT I VALUE: To make the lives of others better, I am trying every day to discover how I can use the programming knowledge I am acquiring about blockchain, artificial intelligence and mobile/web/software development to create real solutions for real problems - solutions for drinking water, food (automated production/distribution), education (for all ages), energy, housing, income, health and the environment. I haven't yet developed a scalable solution for any of these problems, because I am still learning and it is very hard, I admit, but I will help, no matter what, and if you want to lose, just bet against me. I just need a little more time.
Replies from: habryka4, TurnTrout
↑ comment by habryka (habryka4) · 2020-03-11T18:47:20.923Z · LW(p) · GW(p)
Welcome! Your story sounds exciting and I am looking forward to seeing you around!
comment by Qiaochu_Yuan · 2012-11-24T08:45:14.278Z · LW(p) · GW(p)
Hello! I'm a first-year graduate student in pure mathematics at UC Berkeley. I've been reading LW posts for a while but have only recently started reading (and wanting to occasionally add to) the comments. I'm interested in learning how to better achieve my goals, learning how to choose better goals, and "raising the sanity waterline" generally. I have recently offered to volunteer for CFAR and may be an instructor at SPARC 2013.
Replies from: None
↑ comment by [deleted] · 2012-11-29T02:30:44.026Z · LW(p) · GW(p)
I've read your blog for a long time now, and I really like it! <3 Welcome to LW!
Replies from: Qiaochu_Yuan
↑ comment by Qiaochu_Yuan · 2012-11-29T02:49:25.203Z · LW(p) · GW(p)
Thanks! I'm trying to branch out into writing things on the internet that aren't just math. Hopefully it won't come back to bite me in 20 years...
comment by Arandur · 2011-07-28T18:35:22.438Z · LW(p) · GW(p)
Hello, Less Wrong.
I suppose I should have come here first, before posting anything else, but I didn't come here through the front door. :3 Rather, I was brought here by way of HP:MOR, as I'm sure many newbies were.
My name is Anthony. I'm 21 years old, married, studying Linguistics, and I'm an unapologetic member of the Church of Jesus Christ of Latter-Day Saints.
Should be fun.
Replies from: MatthewBaker, jsalvatier
↑ comment by MatthewBaker · 2011-07-28T22:56:30.849Z · LW(p) · GW(p)
Enjoy :)
↑ comment by jsalvatier · 2011-07-28T18:50:50.830Z · LW(p) · GW(p)
Welcome! Nice to have you :)
I don't think anyone comes through the front door.
How did you happen across HP:MOR?
Replies from: Arandur
comment by apophenia · 2010-04-16T21:19:35.658Z · LW(p) · GW(p)
Hello, Less Wrong.
My name is Zachary Vance. I'm an undergraduate student at the University of Cincinnati, double majoring in Mathematics and Computer Science--I like math better. I am interested in games, especially board and card games. One of my favorite games is Go.
I've been reading Less Wrong for 2-3 months now, and I posted once or twice under another name, which I dropped because I couldn't figure out how to change names without changing accounts. I got linked here via Scott Aaronson's blog Shtetl-Optimized after seeing a debate between him and Eliezer. I got annoyed at Eliezer for being rude, forgot about it for a month, and followed the actual link on Scott's site over here. (In case you read this, Eliezer: you both listen to people more than I thought - I updated, in the Bayesian sense - and write more interesting things than I heard in the debate.) I like paradoxes and puzzles, and am currently trying to understand the counterfactual mugging. I've enjoyed Less Wrong because everybody here seems to read everything and usually think about it carefully before they post, which means not only the articles but also the comments are simply amazing compared to other sites. It also means I try not to post too much, so Less Wrong remains quality.
I am currently applying to work at the Singularity Institute.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-04-17T00:09:41.762Z · LW(p) · GW(p)
Hi, welcome to Less Wrong and thanks for posting an introduction!
comment by Sarokrae · 2011-09-25T11:24:27.183Z · LW(p) · GW(p)
Greetings, LessWrong!
I'm Saro, currently 19, female and a mathematics undergraduate at the University of Cambridge. I discovered LW by the usual HP:MoR route, though oddly I discovered MoR via reading EY's website, which I found in a Google search about Bayes' theorem once. I'm feeling rather fanatical about MoR at the moment, and am not-so-patiently awaiting chapter 78.
Generally though, I've found myself stuck here a lot because I enjoy arguing, and I like convincing other people to be less wrong. Specifically, before coming across this site, I spent a lot of time reading about ways of making people aware of their own biases when interpreting data, and effective ways of communicating statistics to people in a non-misleading way (I'm a big fan of the work being done by David Spiegelhalter). I'm also quite fond of listening to economics and politics arguments and trying to tear them down, though through this, I've lost any faith in politics as something that has any sensible solutions.
I suspect that I'm pretty bad at overcoming my own biases a lot of the time. In particular, I have a very strong tendency to believe what I'm told (including what I'm being told by this site), I'm particularly easily inspired by pretty slogans and inspirational tones (like those on this site), and I have, and have always had, one of those Escher-painting brains, to the extent that I was raised very atheist but am now not so sure. (At some level, I have the thought that our form of logic should only apply to our plane of existence, whatever that means.) But hey, figuring all that out is what this site's about, right?
Replies from: None, CaveJohnson, Swimmer963, Oscar_Cunningham
↑ comment by [deleted] · 2011-09-25T16:31:08.820Z · LW(p) · GW(p)
Welcome!
I'm particularly easily inspired by pretty slogans and inspirational tones (like those on this site),
I wouldn't necessarily call that a failing in and of itself -- it's important to notice the influence that tone and eloquence and other ineffable aesthetic qualities have on your thinking (lest you find yourself agreeing with the smooth talker over the person with a correct argument), but it's also a big part of appreciating art, or finding beauty in the world around you.
and I have, and have always had, one of those Escher-painting brains, to the extent that I was raised very atheist but am now not so sure.
If it helps, I was raised atheist, only ever adopted organized religion once in response to social pressure (it didn't last, once I was out of that context), find myself a skeptical, materialist atheist sort -- and with my brain wiring (schizotypal, among other things) I still have intense, vivid spiritual experiences on a regular basis. There's no inherent contradiction, if you see the experiences as products-of-brain and that eerie sense that maybe there's something more to it as also a product-of-brain, with antecedents in known brain-bits.
Replies from: Sarokrae
↑ comment by Sarokrae · 2011-09-25T19:06:42.005Z · LW(p) · GW(p)
Thanks for the welcome!
I'm certainly not going to join organised religion any time soon, seeing as I think I'm much better off without it. However, it's proving pretty difficult to argue myself out of a general, self-formed religion because of the hangups I have about our logic only applying to our world. I mean, if there is a supreme being for whom "P and ¬P"...
Fortunately, any beings that use logic that is above and beyond my own, and cares about my well-being, will probably want me to just try my best with my own logic. It's not a belief that gets in the way of life much, so I don't think about it all the time, but it would be interesting to sit down and just poke all of that bit of my thoughts with a rationalist stick at some point.
↑ comment by CaveJohnson · 2011-12-20T15:42:33.464Z · LW(p) · GW(p)
Welcome!
Generally though, I've found myself stuck here a lot because I enjoy arguing, and I like convincing other people to be less wrong. Specifically, before coming across this site, I spent a lot of time reading about ways of making people aware of their own biases when interpreting data, and effective ways of communicating statistics to people in a non-misleading way (I'm a big fan of the work being done by David Spiegelhalter).
Honestly that made me cringe slightly and I wanted to write something about it when I came to the second paragraph:
I suspect that I'm pretty bad at overcoming my own biases a lot of the time. In particular, I have a very strong tendency to believe what I'm told (including what I'm being told by this site), I'm particularly easily inspired by pretty slogans and inspirational tones (like those on this site), and I have, and have always had, one of those Escher-painting brains, to the extent that I was raised very atheist but am now not so sure. (At some level, I have the thought that our form of logic should only apply to our plane of existence, whatever that means.) But hey, figuring all that out is what this site's about, right?
You are bad at overcoming your own biases, since all of us are. We've got pretty decent empirical evidence that knowing about some biases helps you counteract them, but that this doesn't work for others. The best practical advice for avoiding capture by slogans and inspirational tones is to practice playing the devil's advocate.
I'm also quite fond of listening to economics and politics arguments and trying to tear them down, though through this,
Check out LW's sister site Overcoming Bias. Robin Hanson loves to make unorthodox economic arguments about nearly everything. Be warned: his contrarianism and cynicism-with-a-smile are addictive! He also has some interesting people on his blogroll.
I've lost any faith in politics as something that has any sensible solutions.
I'm afraid hanging out here probably will not make it any better. Seek different treatment. :)
↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-09-25T12:39:23.565Z · LW(p) · GW(p)
Welcome! Sweet, another girl my age!
though oddly I discovered MoR via reading EY's website, which I found in a Google search about Bayes' once.
Kind of similar to how I discovered it. I think I googled EY and found his website after seeing his name in the sl4 mailing list.
Replies from: tenshiko
↑ comment by tenshiko · 2011-09-25T16:02:20.029Z · LW(p) · GW(p)
My story is similar: I found this stuff via that good old "The Meaning of Life" FAQ from back in 2003, which I think he's officially renounced, kind of like the dead-as-a-doornail SL4 wiki. A search brought me back into the website fold years later.
Anyway, seconding Swimmer's happiness at the young female demographic being bolstered a little more with your arrival, Sarokrae! May you gain the maximum amount of utilons from this site.
↑ comment by Oscar_Cunningham · 2011-09-25T18:01:37.085Z · LW(p) · GW(p)
Welcome!
I'm Saro, currently 19, female and a mathematics undergraduate at the University of Cambridge.
Note to self: Organise Cambridge meet-up.
comment by KND · 2011-07-02T00:41:42.660Z · LW(p) · GW(p)
Hello fellow Less Wrongians,
My name is Josh and I'm a 16-year-old junior in high school. I live in a Jewish family at the Jersey Shore. I found the site by way of TV Tropes after a friend told me about the Methods of Rationality. Before I started reading Eliezer's posts, I made the mistake of believing I was smart. My goal here is mainly to just be the best that I can be and maybe learn to lead a better life. And by that I mean that I want to be better than everyone else I meet. That includes being a more rational person better able to understand complex issues. I think I have a fair grip on the basic points of rationality as well as philosophy, but I am sorely lacking in terms of math and science (which can't be MY fault obviously, so I'll just go ahead and blame the public school system). I never knew what exactly a logarithm WAS before a few days ago, sadly enough (I knew the term of course, but was never taught what it meant or bothered enough to look it up). I have absolutely no idea what I want to do with my life other than amassing knowledge of whatever I find to be interesting.
I was raised in a conservative household, believing in God but still trying to look at the world rationally. My father never tried to defend the beliefs he taught me with anything but logic. I suppose I'm technically atheist, but I prefer to consider myself agnostic. Believe it or not, I actually became a rationalist after my dad got me to read Atlas Shrugged. While I wasn't taken in very much by the appeal to my sense of superiority, however correct it may be, I did take special notice of a particular statement in which Rand maintains that man is a reasoning animal and that the only evil thought is to not think, as to do so is to reject the only tool that mankind has used to survive and instead embrace death. This and her rejection of emotion as a substitute for rationality impressed me more than anything I had read up to that point. I soon became familiar with Aristotle and from then on studied both philosophy and rationality. Of course I hadn't really seen anything before I started reading Eliezer's writing!
Overall, I'm just happy to be here and have enjoyed everything I have seen of the site so far. I'm still young and relatively ignorant of many of the topics discussed here, but if you will just bear with me, as I know you will, I might, in time, actually learn to add something to the site. Thanks for reading my story; I look forward to devoting many more hours to the site!
Replies from: None
↑ comment by [deleted] · 2011-12-20T16:35:18.426Z · LW(p) · GW(p)
Great to have you here Josh!
I'm still young and relatively ignorant of many of the topics discussed here, but if you will just bear with me, as I know you will, I might, in time, actually learn to add something to the site.
Most of all, as you read and participate in the community, don't be afraid to question common beliefs here; that's where the contribution is likely to be, I think. Also, if you plan on going through one or more of the sequences systematically, consider finding a chavruta.
I think I have a fair grip on the basic points of rationality as well as philosophy, but I am sorely lacking in terms of math and science (which can't be MY fault obviously, so I'll just go ahead and blame the public school system)
To quote myself:
As for relevant math, or studying math in general, just ask in the open threads! LWers are helpful when it comes to these things. You even have people offering dedicated math tutoring, like Patrick Robotham or, as of recently, me.
Also, a great resource for basic math is the Khan Academy videos and exercises.
comment by GDC3 · 2010-12-29T09:22:37.533Z · LW(p) · GW(p)
Hi, I'm GDC3. Those are my initials. I'm a little nervous about giving my full name on the internet, especially because my dad is googleable and I'm named after him. (Actually we're both named after my grandfather, hence the 3.) But I go by G.D. in real life anyway, so it's not exactly not my name. I'm primarily working on learning math in advance of returning to college right now.
Sorry if this is TMI, but you asked: I became an aspiring rationalist because I was molested as a kid and I knew that something was wrong, but not what it was or how to stop it, and I figured that if I didn't learn how the world really worked, instead of what people told me, stuff like that might keep happening to me. So I guess my something to protect was me.
My something to protect is still mostly me, because most of my life is still dealing with the consequences of that. My limbic system learned all sorts of distorted and crazy things about how the world works that my neocortex has to spend all of its time trying to compensate for. Trying to be a functional human being is a hard enough goal for now. I also value and care about eventually using this information to help other people who've had similar stuff happen to them. I value this primarily because I've pre-committed to valuing that so that the narrative would motivate me emotionally when I hate myself too much to motivate myself selfishly.
So I guess I self-modified my utility function. I actually was pretty willing to hurt other people to protect myself as a kid. I've made myself more altruistic not to feel less guilty (which would mean that I wasn't really as selfish as I thought I was), but to feel less alone. Which is plausible I guess, because I wasn't exactly a standard moral specimen as a kid.
I hope that was more interesting than upsetting. I think I can learn a lot from you guys if I can speak freely. I hope that I can contribute or at least constitute good outreach.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2010-12-29T14:28:23.212Z · LW(p) · GW(p)
I value this primarily because I've pre-committed to valuing that so that the narrative would motivate me emotionally when I hate myself too much to motivate myself selfishly.
I think that's the most succinct formulation of this pattern I've ever run into. Nicely thought, and nicely expressed.
(I found the rest of your comment interesting as well, but that really jumped out at me.)
Welcome!
comment by taryneast · 2010-12-12T15:19:01.538Z · LW(p) · GW(p)
Hi, I'm Taryn. I'm female, 35, and working as a web developer. I started studying Math, changed to Comp Sci, and actually did my degree in Cognitive Science (psychology of intelligence, neurophysiology, AI, etc.). My 3rd-year project was on Cyberware.
When I graduated I didn't see any jobs going in the field and drifted into Web Development instead... but I've stayed curious about AI, along with SF, Science, and everything else too. I kinda wish I'd known about Singularity research back then... but perhaps it's better this way. I'm not a "totally devoted to one subject" kinda person. I'm too curious about everything to settle for a single field of study.
That being said - I've worked in web development now for 11 years. Still, when I get home, I don't start programming, preferring to pick up a book on evolutionary biology, medieval history, quantum physics, creative writing (etc.) instead. There's just too damn many interesting things to learn about to stick to one!
I found LW via Harry Potter & MOR, which my sister forwarded to me. Since then I've been voraciously reading my way through the sequences, learning just how much I have yet to learn... but totally fascinated. This site is awesome.
comment by [deleted] · 2010-08-11T22:05:45.778Z · LW(p) · GW(p)
[Hi everyone!]
comment by [deleted] · 2010-04-28T01:16:25.647Z · LW(p) · GW(p)
Hi, I'm Sarah. I'm 21 and going to grad school in math next fall. I'm interested in applied math and analysis, particularly in recent research about the sparse representation of large data sets. I think it will become important outside the professional math community. (I have a blog about that at http://numberblog.wordpress.com/.)
As far as hobbies go, I like music and weightlifting. I read and talk far too much about economics, politics, and philosophy. I have the hairstyle and cultural vocabulary of a 1930's fast-talking dame. (I like the free, fresh wind in my hair, life without care; I'm broke, that's Oke!)
Why am I here? I clicked the link from Overcoming Bias.
In more detail, I'm here because I need to get my life in order. I'm a confused Jew, not a thoroughgoing atheist. I've been a liberal and then a libertarian and now need something more flexible and responsive to reason than either.
Some conversations with a friend, who's a philosopher, have led me to understand that there are some experiences (in particular, experiences he's had related to poverty and death) that nothing in my intellectual toolkit can deal with, and so I've had to reconsider a lot of preconceptions.
I'm here, to be honest, for help. I've had difficulty since childhood believing that I am valuable, partly because in mathematics you always have the example before you of people far better. Let me put it this way: I need to find something to do or believe that doesn't crumble periodically into wishing I were dead, because otherwise I won't have a very productive future. That sounds dismal, but really it's a good problem to have -- I'm pretty fortunate otherwise. Still, I want to solve it. I like this community, I think there's a lot to learn here, and my inclination is always to solve problems by learning.
Replies from: mattnewport, CronoDAS↑ comment by mattnewport · 2010-04-28T01:27:50.366Z · LW(p) · GW(p)
I'm here, to be honest, for help. I've had difficulty since childhood believing that I am valuable, partly because in mathematics you always have the example before you of people far better.
I don't know if it will help you, but the concept of comparative advantage might help you appreciate how being valuable does not require being better than anyone else at any one thing. I found the concept enlightening, but I'm probably atypical...
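To make that concrete, here's a toy sketch of comparative advantage (the people, goods, and numbers are all invented for illustration, not taken from anywhere):

```python
# Toy comparative-advantage sketch (all names and numbers invented).
# Alice out-produces Bob at BOTH goods, yet trade still pays, because
# what matters is opportunity cost, not absolute skill.

output_per_hour = {
    "Alice": {"bread": 6, "cloth": 3},
    "Bob":   {"bread": 2, "cloth": 2},
}

def cloth_cost_in_bread(person):
    # Opportunity cost: bread forgone for each unit of cloth produced.
    rates = output_per_hour[person]
    return rates["bread"] / rates["cloth"]

for person in output_per_hour:
    print(person, "gives up", cloth_cost_in_bread(person), "bread per cloth")
# Alice gives up 2.0 bread per cloth; Bob gives up only 1.0. So Bob
# weaves and Alice bakes, and any trade between 1 and 2 bread per
# cloth benefits both, even though Bob is worse at everything.
```

The point of the sketch: Bob's value comes from what he frees Alice up to do, so being surrounded by people who are better at everything doesn't make one's own work worthless.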
Replies from: None↑ comment by [deleted] · 2010-04-28T01:37:33.815Z · LW(p) · GW(p)
I am familiar with it, actually. Never seemed to do much good, but maybe with a little meditation it might. If someone is paying me voluntarily, I must be earning my keep, in a sort of caveat emptor way...
Replies from: mattnewport↑ comment by mattnewport · 2010-04-28T02:01:20.980Z · LW(p) · GW(p)
I think gains from trade is one of the most uplifting (true) concepts in all of the social sciences. It is a tragedy that it is not more widely appreciated. Most people see trade as zero sum.
comment by Rain · 2010-03-21T15:02:25.261Z · LW(p) · GW(p)
- Persona: Rain
- Age: 30s
- Gender: Unrevealed
- Location: Eastern USA
- Profession: Application Administrator, US Department of Defense
- Education: Business, Computers, Philosophy, Scifi, Internet
- Interests: Gaming, Roleplaying, Computers, Technology, Movies, Books, Thinking
- Personality: Depressed and Pessimistic
- General: Here's a list of my news sources
Rationalist origin: I discovered the scientific method in high school and liked the results of its application to previously awkward social situations, so I extended it to life in general. I came up with most of OB's earlier material by myself under different names, or not quite as well articulated, and this community has helped refine my thoughts and fill in gaps.
Found LW: The Firefox add-on StumbleUpon took me to EY's FAQ about the Meaning of Life on 23 October 2005, along with Max More, Nick Bostrom, Alcor, Sentient Developments, the Transhumanism Wikipedia page, and other resources. From there, to further essays, to the sl4 mailing list, to SIAI, to OB, to LW, where I started interacting with the community in earnest in late January 2010 and achieved 1000 karma in early June 2010. Prior to the StumbleUpon treasure trove, I had been turned off the transhumanist movement by a weird interview of Kurzweil in Wired, but was still hopeful due to scifi potentials.
Value and desire to achieve: I'm still working on that. The metaethics sequence was unsatisfactory. In particular, I have problems with our ability to predict the future and what we should value. I'm hoping smarter than human intelligence will have better answers, so I strongly support the Singularity Institute.
comment by David Althaus (wallowinmaya) · 2011-04-20T22:04:01.450Z · LW(p) · GW(p)
Hi everybody,
I'm 22, male, a student, and from Germany. I've always tried to "perceive whatever holds the world together in its inmost folds", to know the truth, to grok what is going on. Truth is the goal, and rationality the art of achieving it. So for this reason alone Less Wrong is quite appealing.
But in addition to that, Yudkowsky and Bostrom convinced me that existential risks, transhumanism, the singularity, etc. are probably the most important issues of our time.
Furthermore, this is the first community I've ever encountered in my life that makes me feel rather dumb. (I can hardly follow the discussions about Solomonoff induction, Everett branches and so on, lol, and I thought I was good at math because I was the best one in high school :-) But nonetheless, feeling stupid is sometimes such a liberating feeling!
To spice this post with more gooey self-disclosure: I was sort of a "mild" socialist for quite some time (yeah, I know. But there are some intelligent folks who were socialists, or sort-of-socialists, like Einstein and Russell). Now I'm more pro-capitalism, libertarian, but some serious doubts remain. I'm really interested in neuropsychological research on mystic experiences. (I think I share this personal idiosyncrasy with Sam Harris...) I think many rational atheists (myself included, before I encountered LSD) underestimate the preposterous and life-transforming power of mystic experiences, which can convert the most educated rationalist into a gibbering crackpot. They make you think you really "know" that there is some divine and mysterious force at the deepest level of the universe, and the quest for understanding involves reading many, many absurd and completely useless books, an endeavor that may well destroy your whole life.
Replies from: Swimmer963, rhollerith_dot_com, MrMind↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-21T13:00:17.855Z · LW(p) · GW(p)
But mystic experiences, caused by psychedelics (or other neurological "happenings"), may well be one of the reasons why some highly intelligent people remain or become religious.
I can personally support this. I've never taken LSD or any other consciousness-altering drug, but I can trigger ecstatic, mystical "religious experiences" fairly easily in other ways; even just singing in a group setting will do it. I sing in an Anglican church choir, and this weekend is Easter, so I expect to have quite a number of mystical experiences. At one point I attended a Pentecostal church regularly and was willing to put up with people who didn't believe in evolution, because group prayer inevitably triggered my "mystical experience" threshold. (My other emotions are also triggered easily: I laugh out loud when reading alone, cry out loud at sad books and movies, and feel overpowering warm fuzzies in the presence of small children.)
I have done my share of reading "absurd and useless" books. Usually I found them, well, absurd and useless, and pretty boring. I would rather read about the neurological underpinnings of my experience, especially since grokking science's answers can sometimes trigger a near-mystical experience! (Happened several times while reading Richard Dawkins' 'The Selfish Gene'.)
In any case, I would like to hear more about your story, too.
Replies from: wallowinmaya↑ comment by David Althaus (wallowinmaya) · 2011-04-21T15:54:38.103Z · LW(p) · GW(p)
I can trigger ecstatic, mystical "religious experiences" fairly easily in other ways; even just singing in a group setting will do it.
Wow, impressive that nevertheless you've managed to become a rationalist! Now I would like to hear how you achieved this feat :-)
I would rather read about the neurological underpinnings of my experience, especially since grokking science's answers can sometimes trigger a near-mystical experience!
I totally agree. Therefore the neuroscience of "altered states of consciousness" is one of my pet subjects...
Replies from: Swimmer963↑ comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-04-21T16:41:45.148Z · LW(p) · GW(p)
Wow, impressive that nevertheless you've managed to become a rationalist! Now I would like to hear how you achieved this feat :-)
Mainly by having read so much pop science and sci-fi as a kid that by the time the mystical-experience things happened in a religious context (at around 14, when I started singing in the choir and actually being exposed to religious memes), I was already a fairly firm atheist in a family of atheists. Before that, although I remember having vaguely spiritual experiences as a younger kid, they were mostly associated with stuff like looking at beautiful sunsets or swimming. And there's the fact that I'm genuinely interested in topics like physics, so I wasn't going to restrict my reading list to New Age/religious books.
↑ comment by RHollerith (rhollerith_dot_com) · 2011-04-21T12:00:25.284Z · LW(p) · GW(p)
For "weltanschauung" (an English word), Wiktionary has, "a person's or a group's conception, philosophy or view of the world; a worldview". Moreover (if you capitalize it) it means the same thing in German.
↑ comment by MrMind · 2011-04-21T07:33:11.852Z · LW(p) · GW(p)
I think your experience deserves a narration in the discussion section.
Replies from: wallowinmaya↑ comment by David Althaus (wallowinmaya) · 2011-04-21T10:12:40.510Z · LW(p) · GW(p)
Hm, I don't know. Merely writing about the trip can never be as profound as the experience itself. Read, e.g., descriptions of experiences with meditation; they often sound just silly. Furthermore, there are enough trip reports on the internet about experiences with psychedelic drugs, from people who can write better than I can, and who have more knowledge than I have. If you are really interested in mystic or psychedelic experiences, you can go to Erowid, which is one of the best sites on the internet if you are interested in this stuff...
Replies from: MrMind↑ comment by MrMind · 2011-04-21T10:24:16.881Z · LW(p) · GW(p)
I was referring not to the experience of your trip, but to the battle you fought afterwards to overcome the (almost) Absolute Bias...
Replies from: wallowinmaya↑ comment by David Althaus (wallowinmaya) · 2011-04-21T11:28:01.112Z · LW(p) · GW(p)
Oh, sorry, I see...
Well, overcoming this worldview consisted mainly of reading some of Eliezer's sequences :-)
And remember that I wasn't a New Age crackpot. I had only very mild mystic experiences, but these alone led me to question the nature of consciousness, the universe, etc.
So for me it was not really difficult, but I imagine that really radical experiences make you "immune" to a naturalistic, atheistic explanation.
I think Yvain had a similar experience with hashish. (This post also convinced me that mystic experiences are only strange realignments of neurological processes.)
Well, maybe I will write a post in the future that discusses the risks and benefits of psychedelic drugs and meditation. But first I have to read Eliezer's remaining sequences, which will be time-consuming enough :-)
comment by AlexGreen · 2010-12-11T03:43:10.876Z · LW(p) · GW(p)
Good day. I'm a fifteen-year-old high school student, a junior, and I ended up finding this site through the Harry Potter & MOR story, which I thought would be a much less common way of finding it. Generally I think I'm not that rational a person: I operate mostly on reaction and violence, and instinctively think of things like 'messages' and such when I have some bad luck. But I've also found some altruistic passion in me, and I've done all of this self-observation, which seems contradictory, but I think that's all a rationalization to make me a better person. I also have some odd moods, which alternate between letting me talk like this and leaving me unable to talk like this at all.
I'd say something about my age group but I can't think of anything that doesn't sound like hypocrisy, so I think I'll cut this off here.
- Aaaugh, just looking at this giant block of text makes me feel like an idiot.
↑ comment by fortyeridania · 2010-12-11T04:01:54.841Z · LW(p) · GW(p)
Don't be so hard on yourself. Or, more precisely: don't be hard on yourself in that way. Bitter self-criticism could lead to helpful reforms and improved habits, but it could also lead to despair and cynicism. If you feel that you need to be criticized, post some thoughts and let other LWers do it.
comment by SwingDancerMike · 2012-06-20T19:37:47.986Z · LW(p) · GW(p)
Hi everyone, I've been reading LW for a year or so, and met some of you at the May minicamp. (I was the guy doing the swing dancing.) Great to meet you, in person and online.
I'm helping Anna Salamon put together some workshops for the meetup groups, and I'll be posting some articles on presentation skills to help with that. But in order to do that, I'll need 5 points (I think). Can you help me out with that?
Thanks
Mike
Replies from: SwingDancerMike↑ comment by SwingDancerMike · 2012-06-20T21:57:18.207Z · LW(p) · GW(p)
Yay 5 points! That was quick. Thanks everyone.
comment by TheatreAddict · 2011-07-08T05:51:49.682Z · LW(p) · GW(p)
Hello everyone,
My name is Allison, and I'm 15 years old. I'll be a junior next year. I come from a Christian background, and consider myself to also be a theist, for reasons that I'm not prepared to discuss at the moment... I wish to learn how to view the world as it is, not through a tinted lens limited by my own experiences and background.
While I find most everything on this site to be interesting, I must confess a particular hunger towards philosophy. I am drawn to philosophy as a moth is to a flame. However, I am relatively ignorant about pretty much everything, something I'm attempting to fix. I have a slightly above average intelligence, but nothing special. In fact, compared to everyone on this site, I'm rather stupid. I don't even understand half of what people are talking about half the time.
I'm not a science or math person; although I find them interesting, my strengths lie in English and theatre arts. I absolutely adore theatre, not that this really has much to do with rationality. Anyway, I kind of want to get better at science and math. I googled the double-slit experiment, and I find it... captivating. Quantum physics holds a special kind of appeal for me, but unfortunately it's something that I'm not educated enough to pursue at the moment.
My goals are to become more rational, learn more about philosophy, gain a basic understanding of math and science, and to learn more about how to refine the human art of rationality. :)
Replies from: KPier, TheatreAddict, None, kilobug↑ comment by KPier · 2011-07-08T06:05:34.761Z · LW(p) · GW(p)
Welcome! Encountering Less Wrong as a teenager is one of the best things that ever happened to me. One of the most difficult techniques this site can teach you, changing your mind, seems to be easier for younger people.
Not understanding half the comments on this blog is about standard for a first visit to the site, but you aren't stupid; if you stick with it you'll be fluent before you know it. How much of the site have you read so far?
Replies from: TheatreAddict↑ comment by TheatreAddict · 2011-07-08T07:00:48.365Z · LW(p) · GW(p)
Yeah, I mean, history shows that even when people think they're right, they can still be wrong. So if I'm proved wrong, I'll admit it; there's no point holding onto an argument that's been scientifically disproven. :3
Hmm, I've darted around here and there; I've read a few of the sequences, and I'm continuing to read those. I've read How to Actually Change Your Mind. I've attempted to read more difficult stuff involving Bayes' theorem, but it pretty much temporarily short-circuited my brain. Hahh.
Replies from: TheatreAddict↑ comment by TheatreAddict · 2011-07-09T06:11:56.937Z · LW(p) · GW(p)
Edit: I've read most of the sequence, Mysterious Answers to Mysterious Questions.
↑ comment by TheatreAddict · 2011-07-08T05:54:48.773Z · LW(p) · GW(p)
Ahh! I forgot, I learned about this site through Eliezer Yudkowsky's fanfiction, Methods of Rationality. :3 A good read.
↑ comment by [deleted] · 2011-12-20T16:20:30.651Z · LW(p) · GW(p)
While I find most everything on this site to be interesting, I must confess a particular hunger towards philosophy. I am drawn to philosophy as a moth is to a flame. However, I am relatively ignorant about pretty much everything, something I'm attempting to fix. I have a slightly above average intelligence, but nothing special. In fact, compared to everyone on this site, I'm rather stupid. I don't even understand half of what people are talking about half the time.
LessWrong is basically a really good school of philosophy.
And while you may hear some harsh words about academic philosophy (that stuff, at least most of what was written in the 20th century, is dull anyway), reading some of the classics can be really fun and even useful for understanding the world around you (because so many of those ideas, sometimes especially the wrong ones, are baked into our society). I started with Plato right after my 15th birthday, continued reading this stuff all through high school instead of studying, and occasionally still take some time to read some old philosophy now that I'm in college.
Concerning intelligence, do not be misled by the polls that return self-reported IQs in the ~140 range; for active participants it's probably a good 20 points lower, and for average readers 5 points below that.
As for relevant math, or studying math in general, just ask in the open threads! LWers are helpful when it comes to these things. You even have people offering dedicated math tutoring, like Patrick Robotham or, as of recently, me.
↑ comment by kilobug · 2011-10-18T18:46:35.063Z · LW(p) · GW(p)
Welcome here!
Don't underestimate yourself too much; being here and spending time reading the Sequences at your age is already something great :) And if you don't understand something, there is no shame in that; don't hesitate to ask questions on the points that aren't clear to you, people here will be glad to help you!
As for quantum physics, I hope you'll love Eliezer's QM Sequence; it's by far the clearest introduction to QM I've ever seen, and it doesn't require too much maths.
comment by Tuesday_Next · 2010-04-07T17:20:08.146Z · LW(p) · GW(p)
Hello everyone!
Name: Tuesday Next
Age: 19
Gender: Female
I am an undergraduate student studying political science, with a focus on international relations. I have always been interested in rationalism and finding the reasons for things.
I am an atheist, but this is more a consequence of growing up in a relatively nonreligious household. I did experiment with paganism and witchcraft for several years, a rather frightening (in retrospect) display of cognitive dissonance as I at once believed in science and some pretty unscientific things.
Luckily I was able to learn from experience, and it soon became obvious that what I believed in simply didn't work. I think I wanted to believe in witchcraft both as a method of teenage rebellion and to exert some control over my life. However, I was unable to delude myself.
I tried to interest myself in philosophy many times, but often became frustrated by the long debates that seemed divorced from reality. One example is the idea of free will. Since I was a child (I have a memory of trying, without success, to explain this to my parents when I was in elementary school), I have had a conception of reality and free will that seemed fairly reasonable to me, and I never understood what all the fuss was about.
It went something like this: The way things did turn out is the only way things could have turned out, given the exact pre-existing circumstances. In particular, when one person makes a decision they presumably do so for a reason, whether that reason is rational or not; if that decision is not predetermined by the situation and the person, then it is random. If a decision is random, this is not free will because the choice is not a result of a person's decision; rather it is a result of some random phenomenon involving the word "quantum."
But since no two situations are alike, and it is impossible for anyone to know everything, let alone extrapolate from knowledge of the present to figure out what the future will be, there is no practical effect from this determinism. In short, we act as if we have free will and we cannot predict the future. It is the same thing with reality. Whether it is "real" or not is irrelevant.
The practical consequences of this, for me at least, are that arguing about whether we have free will or not misses the point. We may be able to predict the "future" of a simple computer program by knowing all the conditions of the present, but cannot do the same for the real world; it is too complex.
I finally found this articulated, to my great relief that I was not crazy for believing it, in Daniel Dennett's "Freedom Evolves." This is what got me interested in philosophy again.
I am also interested in how to change minds (including my own). I have always had fairly strong (and, in retrospect, irrational) political beliefs. When I took an Economics course, I found many of my political beliefs changing significantly.
I even found myself arguing with a friend (who like me is fairly liberal), and he later praised me for successfully defending a point of view he knew I disagreed with. (The argument in question was about a global minimum wage law; I was opposed.) I found this disconcerting as I was in fact arguing what I honestly believed, though I do have a tendency to play "Devil's Advocate" and argue against what I believe.
This forced me to confront the fact that some of my political views had actually changed. Later, when I challenged some of the basic assumptions that Economics class made, like the idea that markets can be "perfect," I found myself reassessing my political views again. I am trying to get in the habit of doing this to avoid becoming dogmatic.
Anyway, I think that's enough for now; if anyone has any questions I would be happy to address them.
--Tuesday
Replies from: Alicorn, Morendil, orange
comment by [deleted] · 2011-09-25T16:22:38.813Z · LW(p) · GW(p)
Hey everyone.
I'm Jandila (not my birth, legal, or even everyday name), I'm a 28-year-old transgendered woman living in Minnesota. I've been following EY's writings off and on since many years ago on the sl4 mailing list, mostly on the topic of AI; initially I got interested in cognitive architecture and FAI due to a sci-fi novel I've been working on forever. I discovered LW a few years ago but only recently started posting; somehow I missed this thread until just recently.
I've been interested in bias and how people think, and in modifying my own instrumental ability to understand and work around it, for many years. I'm on the autistic spectrum and have many clusters of neurological weirdness; I think this provided an early incentive to understand "how people think" so I could signal-match better.
So far I've stuck around because I like LW's core mission and what it stands for in abstract; I also feel that the community here is a bit too homogenous in terms of demographics for a community with such an ostensibly far-reaching, global goal, and thus want to see the perspective base broadened (and am encouraged by the recent influx of female members).
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2011-09-25T18:01:08.946Z · LW(p) · GW(p)
Welcome!
comment by JenniferDavies · 2011-08-20T18:32:12.151Z · LW(p) · GW(p)
Hey everyone,
My name is Jennifer Davies. I'm 35 years old and am married with a 3 year old daughter. I live in Kitchener, Ontario, Canada.
Originally a computer programmer, I gave it up after spending a year coding for a bank (around 1997). Motivated by an interest in critical thinking, I earned a BA in Philosophy.
Currently, I'm completing a one year post-grad program to become a Career Development Practitioner. I plan to launch a private practice in 2012 to help people find and live their passions while providing them with the tools to do so.
A friend introduced me to Harry Potter: Methods of Rationality and Less Wrong. I have never enjoyed a piece of reading more than that fanfic -- I even saved a PDF version to introduce to my daughter once she's able to benefit from it.
My main motivations (that I'm aware of) for becoming a member of this community are to: improve my thinking skills (and better understand/evaluate values and motivations), help clients to think more rationally, and better encourage independent, critical thought in my daughter.
Although it can be painful at times (for my ego) to be corrected, I appreciate such corrections and the time put into them.
Any tips for teaching young children rationality? I'm at a loss and wonder if I need to wait until she's older.
Replies from: beoShaffer↑ comment by beoShaffer · 2011-08-20T19:04:04.546Z · LW(p) · GW(p)
Hi Jennifer. There's been quite a bit written about teaching children rationality. Unfortunately, the relative newness of LW and the low percentage of parents mean it's all somewhat speculative. The following links cover most (but probably not all) of what LW has on the subject.
- http://lesswrong.com/lw/25/on_the_care_and_feeding_of_young_rationalists/
- http://lesswrong.com/lw/2q/on_juvenile_fiction/
- http://lesswrong.com/lw/3c/rationalist_storybooks_a_challenge/
- http://lesswrong.com/lw/63f/rational_parenting/
- http://lesswrong.com/lw/3i/little_johny_bayesian/
- http://lesswrong.com/lw/4uw/preschoolers_learning_to_guess_the_teachers/
- http://lesswrong.com/lw/70b/raise_the_age_demographic/ (You have to go down to the comments section for this one)
↑ comment by JenniferDavies · 2011-08-20T19:19:40.261Z · LW(p) · GW(p)
Oops. I should have done a search first before mentioning it. Thanks for taking the time to post those links.
comment by Oligopsony · 2010-08-03T20:15:40.550Z · LW(p) · GW(p)
I've existed for about 24 years, and currently live in Boston.
I regard many of the beliefs popular here - cryonics, libertarianism, human biodiversity, pickup artistry - with extreme skepticism. (As if in compensation, I have my own unpopular frameworks for understanding the world.) I find the zeitgeist here to be interestingly wrong, though, because almost everyone comes from a basically sane starting point - a material universe, conventionally "Western" standards of science, reason, and objectivity - and actively discusses how they can regulate their beliefs to adhere to these. I have an interest in achieving just this kind of regulation (am a "rationalist",) and am aware that it's epistemically healthy to expose myself to alternative points of view expressed in a non-crazy way. So hopefully the second aspect will reinforce the first.
As for why I'm a rationalist, I don't know, and the question doesn't seem particularly interesting to me. I regard it as beyond questions of justification, like other desires.
Replies from: Blueberry↑ comment by Blueberry · 2010-08-03T20:31:38.653Z · LW(p) · GW(p)
Welcome to Less Wrong!
I regard many of the beliefs popular here - cryonics, libertarianism, human biodiversity, pickup artistry - with extreme skepticism. (As if in compensation, I have my own unpopular frameworks for understanding the world.)
I'd love to hear more about this: I also like exposing myself to alternative points of view expressed in a non-crazy way, and I'm interested in your unpopular frameworks.
Specifically: cryonics is highly speculative, but do you think there's a small chance it might work? When you say you don't believe in human biodiversity, what does that mean? And when you say you don't believe in pickup artistry, you don't think that dating and relationship skills exist?
Replies from: Oligopsony↑ comment by Oligopsony · 2010-08-03T22:41:05.627Z · LW(p) · GW(p)
Thanks for the friendly welcome!
"I'd love to hear more about this: I also like exposing myself to alternative points of view expressed in a non-crazy way, and I'm interested in your unpopular frameworks."
Specifically, I've become increasingly interested in Marxism, especially the varieties of Anglo post-Marxism that emerged from the analytical tradition. I don't imagine this is any more popular here than it is among normal people, but the general mode of analysis is probably less foreign to libertarian types than they might assume - as implied above, we're both working from materialist assumptions (beyond what's implied above, this applies to more than one meaning of "materialist," at least for certain types of libertarians.)
In general, my bias is to assume that people's behavior is more rational (I mean this in a utility-maximizing sense, rather than in the "rationalist" sense) than it initially appears. In general, the more we know about the context of a decision, the more rational it usually appears to be; and there may be something beyond vanity for the tendency of people, who are in greatest possession of their own situations, to consider themselves atypically rational. I see this materialist (in the "latter," economic sense) viewpoint as avoiding unnecessary multiplication of entities and (not that it should matter for truth) a basically respectful way of facially analyzing people: "MAYBE they're just crazy, but until we have more contextual knowledge, let's take as a working assumption that this is in their self-interest." This is my general verbal justification for reflexively turning to materialist explanations, although the CAUSE of my doing so is probably just that I studied neoclassical economics for four years.
"Specifically: cryonics is highly speculative, but do you think there's a small chance it might work?"
Of course. The transparent wish-fulfillment seems inherently suspect, like the immortality claims of religions, but that doesn't mean it couldn't be the case; and it doesn't seem like enthusiasm for cryonics is more harmful than other hobbies. So I wish everyone involved the best of luck.
Of course I can't tell how much I'm generalizing from my own lack of enthusiasm. I don't put a positive value on additional years of my life - I experience some suicidal ideation but don't act on it because I know it would make people I care about incredibly upset. (This doesn't mean that I subjectively find my life to be torturous, or that it's hard not to act on the ideation; I think my life overall averages out to a level of slight annoyance - one can say "cet par, I'd rather not have experienced that span of annoyance" but one can also easily endure such a span if not doing so would cause tremendous outrage in others.)
"When you say you don't believe in human biodiversity, what does that mean?"
I mean I don't believe in what the sort of people who say "human biodiversity" refer to when they use that phrase: namely, that non-cosmetic, non-immunity genetic differences between ethnic groups are great enough to be of social importance. (Or to use the sort of moralizing, PC language I'd use in most any social context other than here: I am not a consciously-identified racist, though like anyone I have unconscious racial prejudices.) As above, politico-moral reasons wouldn't inhabit my verbal justification for this, although they're probably the efficient cause of my belief.
It's probably inevitable that racism will be unusually popular among a community devoted to Exploring Brave Edgy Truths No Matter the Cost, but I'm not afraid that actually XBETNMtC will lead me to racism - both because I consider that very unlikely, and because if reason does lead me to racism, then it is proper to be a racist. (This is true of beliefs generally, of course.)
"And when you say you don't believe in pickup artistry, you don't think that dating and relationships skills exist?"
Dating and relationship skills exist, but it seems transparent that the meat of PUA is just a magic feather to make dorky young men more confident. (Though one should not dismiss the utility of magic feathers!) I find the "seduction community" repulsively misogynistic, but that's a separate issue. (Verbal justifications, efficient causes, you know the drill.)
Being easily confident with strangers is by far the most important skill for acquiring a large number of sexual partners - this is of course a truth proclaimed by PUA, one which has been widespread knowledge since the dawn of time - and for the same reason, easy confidence with strangers is the most important skill for politicians, sales professionals, &c. I do think it's here, for game-theoretic reasons, that the idea of "general social skills" can break down: easy confidence with strangers sabotages your ability to send certain social signals that are important to maintaining close relationships. So there are tradeoffs to make, and I think generally speaking people make the tradeoffs that reflect their preferences.
Replies from: Blueberry↑ comment by Blueberry · 2010-08-03T23:17:51.901Z · LW(p) · GW(p)
I typically think of Marxists as people who don't understand economics or human nature and subscribe to the labor theory of value. But you've studied economics, so I'm curious exactly what form of Marxism you subscribe to.
I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap. When you said "human biodiversity", I thought you were referring to psychological differences among humans and the idea that we don't all think the same way.
There are different views on PUA, but in my experience the "meat of PUA" is just conversational practice and learning flirtation and comfort. It's like the magic feather in that believing in your own ability helps, but I don't see it as fake at all.
I do think it's here, for game-theoretic reasons, that the idea of "general social skills" can break down: easy confidence with strangers sabotages your ability to send certain social signals that are important to maintaining close relationships.
Please elaborate on this. It sounds interesting but I'm not sure what you mean.
Replies from: NancyLebovitz, Risto_Saarelma, 715497741532, Oligopsony, Emile↑ comment by NancyLebovitz · 2010-08-04T13:24:47.406Z · LW(p) · GW(p)
I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap. When you said "human biodiversity", I thought you were referring to psychological differences among humans and the idea that we don't all think the same way.
My impression was that it is popular here, but I may be overgeneralizing from a few examples or other contexts.
The fact that no one else is saying it's popular suggests but doesn't prove that I'm mistaken.
IIRC, the last time the subject came up, the racial differences in IQ proponent was swatted down, but it was for not having sound arguments to support his views, not for being wrong.
More exactly, there were a few people who disagreed with the race/IQ connection at some length, but the hard swats were because of the lack of good arguments.
↑ comment by Risto_Saarelma · 2010-08-04T14:50:51.039Z · LW(p) · GW(p)
I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap. When you said "human biodiversity", I thought you were referring to psychological differences among humans and the idea that we don't all think the same way.
The psychological diversity article you link to is about Gregory Cochran's and Henry Harpending's book, which is all about the thesis of human evolution within the last ten thousand years affecting the societies of different human populations in various ways. It includes a chapter about Ashkenazi Jews seeming to have a higher IQ than their surrounding populations due to genetics. So I'm not really sure what the difference you are going for here is.
Replies from: Blueberry↑ comment by Blueberry · 2010-08-04T15:16:58.105Z · LW(p) · GW(p)
That the evidence suggests there may be a genetic explanation for the higher IQ of Ashkenazim but not for the racial IQ gap.
Replies from: None↑ comment by [deleted] · 2011-12-20T17:27:19.708Z · LW(p) · GW(p)
I'm afraid you may be a bit confused on this. What are the odds that, out of all ethnicities on the planet, only Ashkenazi Jews were the ones to develop a different IQ from the surrounding peoples? And only in the past thousand years or so. What about all those groups that have been isolated or differentiated in very different natural and even social environments for tens of thousands of years?
Unless you are using "the racial gap" to refer to the specific measured IQ differences between people of African, European, and East Asian descent, which may indeed be caused by the environment, rather than the possibility of differences between human "races" in general. But even in that case, the existence of ethnic genetic IQ differences should increase the probability of a genetic explanation somewhat.
↑ comment by 715497741532 · 2010-08-04T14:09:38.921Z · LW(p) · GW(p)
Participant here from the beginning and from OB before that, posting under a throwaway account. And this will probably be my only comment on the race-IQ issue here.
I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap [emphasis mine].
The vast majority of writers here have not given their opinion on the topic. Many people here write under their real name or under a name that can be matched to their real name by spending a half hour with Google. In the U.S. (the only society I really know) this is not the kind of opinion you can put under your real name without significant risk of losing your job or losing out to the competition in a job application, dating situation, or such.
Second, one of the main reasons Less Wrong was set up is as a recruiting tool for SIAI. (The other is to increase the rationality of the general population.) Most of the people here with a good reputation are either affiliated with SIAI or would like to keep open the option of starting an affiliation some day. (I certainly do.) Since SIAI's selection process includes looking at the applicant's posting history here, even writers whose user names cannot be correlated with the name they would put on a job application will tend to avoid taking the unpopular-with-SIAI side in the race-IQ debate.
So, want to start a debate that will leave your side with complete control of the battlefield? Post about the race-IQ issue on Less Wrong rather than one of the web sites set up to discuss the topic!
Replies from: Unknowns, gensym↑ comment by Unknowns · 2010-08-04T16:54:06.395Z · LW(p) · GW(p)
Downvoted for not even giving your opinion on the issue even with your throwaway account.
Some have pointed out that cultural and environmental explanations can account for significant IQ differences. This is true.
It doesn't follow that there aren't racial differences based on genetics as well. In fact, the idea that there might NOT be is quite absurd. Of course there are. The only question is how large they are.
Replies from: Oligopsony, 715497741532, jimrandomh↑ comment by Oligopsony · 2010-08-05T00:15:31.357Z · LW(p) · GW(p)
"It doesn't follow that there aren't racial difference based on genetics as well. In fact, the idea that there might NOT be is quite absurd. Of course there are. The only question is how large they are."
And what direction they're in. If social factors are sufficient to explain (e.g.) the black-white IQ gap, and the argument for there being some innate differences is "well, it's exceedingly unlikely that they're precisely the same," we don't have reason to rate "whites are natively more intelligent than blacks" as more likely than "blacks are natively more intelligent than whites." (If we know that Smith is wealthier than Jones, and that Smith found a load of Spanish doubloons by chance last year, we can't make useful conclusions about whose job was more remunerative before Smith found her pirate booty.) Of course, native racial differences might also be such that there are environmental conditions under which blacks are smarter than whites and others in which the reverse applies, or whatever.
In any event I don't think we need to hypothesize the existence of such entities (substantial racial differences) to explain reality, so the razor applies.
Replies from: Unknowns↑ comment by Unknowns · 2010-08-05T01:13:40.202Z · LW(p) · GW(p)
Even if cultural factors are sufficient, in themselves, to explain the black-white IQ difference, it remains more probable that whites tend to have a higher IQ by reason of genetic factors, and East Asians even more so.
This should be obvious: a person's total IQ is going to be the sum of the effects of cultural factors plus genetic factors. But "the sum is higher for whites" is more likely given the hypothesis "whites have more of an IQ contribution from genetic factors" than given the hypothesis "blacks have more of an IQ contribution from genetic factors". Therefore, if our priors for the two were equal, which presumably they are, then after updating on the evidence, it is more likely that whites have more of a contribution to IQ from genetic factors.
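A toy Monte Carlo of this update (the priors and numbers here are invented purely for the sketch, not a model of actual IQ data):

```python
import random

# Sketch of the argument: total gap = genetic gap + cultural gap,
# with each component given a prior symmetric around zero. Observing
# that the total gap is positive shifts the posterior toward the
# genetic component being positive as well.
def posterior_prob_genetic_positive(trials=1_000_000):
    hits = conditioned = 0
    for _ in range(trials):
        genetic = random.gauss(0, 1)   # symmetric prior (invented scale)
        cultural = random.gauss(0, 1)  # symmetric prior (invented scale)
        if genetic + cultural > 0:     # condition on "the sum is higher"
            conditioned += 1
            if genetic > 0:
                hits += 1
    return hits / conditioned

print(posterior_prob_genetic_positive())  # ~0.75 under these toy priors
```

Under these particular toy priors the posterior lands near 3/4 rather than 1/2; the replies below amount to changing the model, since conditioning also on an informed estimate of one component's magnitude can push the update the other way.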
Replies from: Oligopsony↑ comment by Oligopsony · 2010-08-05T01:32:30.368Z · LW(p) · GW(p)
I'm not sure that this is the case, given that the confound has a known direction and unknown magnitude.
Back to Smith, Jones, and Spanish treasure: let's assume that we have an uncontroversial measure of their wealth differences just after Smith sold. (Let's say $50,000.) We have a detailed description of the treasure Smith found, but very little market data on which to base an estimation of what she sold them for. It seems that ceteris paribus, if our uninformed estimation of the treasure is >$50,000, Jones is likelier to have a higher non-pirate gold income, and if our uninformed estimation of the treasure is <$50,000, Smith is likelier to.
Replies from: Unknowns↑ comment by Unknowns · 2010-08-05T03:04:35.485Z · LW(p) · GW(p)
Whites and blacks both have a cultural contribution to IQ. So to make your example work, we have to say that Smith and Jones both found treasure, but in unequal amounts. Let's say that our estimate is that Smith found treasure approximately worth $50,000, and Jones found treasure approximately worth $10,000. If the difference in their wealth is exactly $50,000, then most likely Smith was richer in the first place, by approximately $10,000.
In order to say that Jones was most likely richer, the difference in their wealth would have to be under $40,000, or the difference between our estimates of the treasures found by Smith and Jones.
I agree with this reasoning, although it does not contradict my general reasoning: it is much like the fact that if you find evidence that someone was murdered (as opposed to dying an accidental death), this will increase the chances that Smith is a murderer, but then if you find very specific evidence, the chance that Smith is a murderer may go down below what it was originally.
However, notice that in order to end up saying that blacks and whites are equally likely to have a greater genetic component to their intelligence, you must say that your estimate of the average demographic difference is EXACTLY equal to the difference between your estimates of the cultural components of their average IQs. And if you say this, I will say that you wrote it on the bottom line, before you estimated the cultural components.
And if you don't say this, you have to assert one or the other: it is more likely that whites have a greater genetic component, or it is more likely that blacks do. It is not equally likely.
Replies from: wedrifid, Oligopsony↑ comment by wedrifid · 2010-08-05T03:16:00.985Z · LW(p) · GW(p)
And if you don't say this, you have to assert one or the other: it is more likely that whites have a greater genetic component, or it is more likely that blacks do. It is not equally likely.
Often when people say "equally likely" they mean "I don't know enough to credibly estimate which one is greater, the probability distributions just overlap too much." (Yes, the 'bottom line' idea is more relevant here. It's a political minefield.)
Replies from: Unknowns↑ comment by Unknowns · 2010-08-05T03:21:54.413Z · LW(p) · GW(p)
But that's the point of my general argument: if you know that whites average a higher IQ score, but not necessarily by how much (say because you haven't investigated), and you also know that there is a cultural component for both whites and blacks, but you don't know how much it is for each, then you should simply say that it is more likely (but not certain) that whites have a higher genetic component.
Replies from: wedrifid
↑ comment by Oligopsony
I mean "equally likely" in wedrifid's sense: not that, having done a proper Bayesian analysis on all evidence, I may set the probability of p(W>B)=p(B>W}=.5 (assuming intelligence works in such a way that this implied division into genetic and environmental components makes sense), but that 1) I don't know enough about Spanish gold to make an informed judgement and 2) my rough estimate is that "I could see it going either way" - something inherent in saying that environmental differences are "sufficient to explain" extant differences. So actually forming beliefs about these relative levels is both insufficiently grounded and unnecessary.
I suppose if I had to write some median expectation it's that they're equal in the sense that we would regard any other two things in the phenomenal world of everyday experience equal - when you see two jars of peanut butter of the same brand and size next to each other on a shelf in the supermarket, it's vanishingly unlikely that they have exaaaactly the same amount of peanut butter, but it's close enough to use the word.
I don't think this is really a case of writing things down on the bottom line. What reason would there be to suppose ex ante that these arbitrarily constructed groups differ to some more-than-jars-of-peanut-butter degree? Is there some selective pressure for intelligence that exists above the Sahara but not below it (more obvious than counter-just-so-stories we could construct?) Cet par I expect a population of chimpanzees or orangutans in one region to be peanut butter equal in intelligence to those in another region, and we have lower intraspecific SNP variation than other apes.
Replies from: Unknowns↑ comment by Unknowns · 2010-08-05T06:47:20.672Z · LW(p) · GW(p)
"I could see it going either way" is consistent with having a best estimate that goes one way rather than another.
Just as you have the Flynn effect with intelligence, so average height has also been increasing. Would you say the same thing about height, that the average height of white people and black people has no significant genetic difference, but it is basically all cultural? If not, what is the difference?
In any case, both height and intelligence are subject to sexual selection, not merely ordinary natural selection. And where you have sexual selection, one would indeed expect to find substantial differences between diverse populations: for example, it would not be at all surprising to find significantly different peacock tails among peacock populations that were separated for thousands of years. You will find these significant differences because there are so many other factors affecting sexual preference; to the degree that you have a sexual preference for smarter people, you are neglecting taller people (unless these are 100% correlated, which they are not), and to the degree that you have a sexual preference for taller people, you are neglecting smarter people. So one just-so-story would be that black people preferred taller people more (note the basketball players) and so preferred more intelligent people less. This just-so-story would be supported even more by the fact that the Japanese are even shorter, and still more intelligent.
Granted, that remains a just-so-story. But yes, I would expect "ex ante" to find significant genetic differences between races in intelligence, along with other factors like height.
↑ comment by 715497741532 · 2010-08-04T19:04:05.755Z · LW(p) · GW(p)
The reason I did not even give my opinion on the race-IQ issue is that IMHO the expected damage to the quality of the conversation here exceeds the expected benefit.
It is possible for a writer to share the evidence that brought them to their current position on the issue without stating their position, but I do not want to do that because it is a lot of work and because there are probably already perfectly satisfactory books on the subject.
By the way, the kind of person who will discriminate against me because of my opinion on this issue will almost certainly correctly infer which side I am on from my first comment without really having to think about it.
↑ comment by jimrandomh · 2010-08-04T18:09:11.291Z · LW(p) · GW(p)
It doesn't follow that there aren't racial differences based on genetics as well. In fact, the idea that there might NOT be is quite absurd. Of course there are. The only question is how large they are.
That is not the only question. The question that gets people into trouble, is "which groups are favored or disfavored". You can't answer that without offending some people, no matter how small you think the genetic component of the difference is, because many of the people who read it will discard or forget the magnitude entirely and look at only the sign. Saying that group X is genetically smarter than group Y by 10^-10 IQ points will, for many listeners, have the same effect as saying that X is 10^1 IQ points smarter. And while the former belief may be true, the latter belief is false, harmful to those who hold it, and harmful to uninvolved third parties. True statements about race, IQ, and genetics are very easy to simplify or round off to false, harmful and disreputable ones.
That's why comments about race, IQ, and genetics always have to be one level separated from reality, talking about groups X and Y and people with orange eyes rather than real traits and ethnicities. And if they aren't well-separated from reality, they have to be anonymous, to protect the author from the reputational effects of things others incorrectly believe they've said.
(Edited to add: See also this comment I previously wrote on the same topic, which describes a mechanism by which true beliefs about demographic differences in intelligence (not necessarily genetic ones) produce false beliefs about individual intelligence.)
Replies from: steven0461, TobyBartels↑ comment by steven0461 · 2010-08-04T22:45:15.799Z · LW(p) · GW(p)
It seems clear to me that much of the time when people mistakenly get offended, they're mistaken about what sort of claim they should get offended about, not just mistaken about what claim was made.
↑ comment by TobyBartels · 2010-08-11T03:18:43.880Z · LW(p) · GW(p)
The important thing for me is that the standard deviations swamp the average difference, so the argument against individual prejudice is valid.
↑ comment by gensym · 2010-08-05T00:00:06.276Z · LW(p) · GW(p)
Since SIAI's selection process includes looking at the applicant's posting history here, even writers whose user names cannot be correlated with the name they would put on a job application will tend to avoid taking the unpopular-with-SIAI side in the race-IQ debate.
What makes you think "the unpopular-with-SIAI side" exists? Or that it is what you think it is?
↑ comment by Oligopsony · 2010-08-05T01:01:42.508Z · LW(p) · GW(p)
I wouldn't say I "subscribe" to Marxism, though it seems plausible to me that I might in the near future. I'm still investigating it. While I wouldn't say that specific Marxist hypotheses have risen to the level of doxastic attitudes, the approach has affected the sort of facial explanations I give for phenomena. But as I said, the tradition I'm most interested in is recent, economics-focused, English-language academic Marxism. (The cultural stuff doesn't really interest me all that much, and most of it strikes me as nonsense, but I'm not informed enough about it to conclude that "yes, it is nonsense!") If I could recommend a starting point it would be Harvey's "Limits to Capital," although it was Hobsbawm's trilogy on the 19th century that sparked my interest.
I hope this doesn't sound evasive! I try to economize on my explicit beliefs while being explicit on my existing biases.
(As a side note, while there are a lot of different LTVs floating around, it's likely that they're almost all a bit more trivial and a lot less crazy than what you might be imagining. Most forms don't contradict neoclassical price theory but do place some additional (idealized, instrumental) constraints in order to explain additional phenomena.)
By the signaling thing, I mean the following: normal humans (not neurotic screwballs, not sociopath salesmen) show a level of confidence in social situations that corresponds roughly to how confident they themselves feel at the time. Thus, when someone approaches you and tries to sell you on something - a product, an idea, or, most commonly, themselves - their confidence level can serve as a good proxy for whether they think the item under sale is actually worthy of purchase. The extent to which they seem guarded signals that they're not all that. So for game-theoretic reasons, salesmanship works.
But it's also the case that normal people become more confident and willing to let their guards down when they're around people they trust, for obvious reasons. Thus, lowering of guards can signal "I trust you; indeed, trust you significantly more than most people" if you showed some guardedness when you first met them. There are other signals you can send, but these are among those whose absence will leave people suspicious, if you want to take your relationships in a more serious direction.
So there are tradeoffs in where you choose to place yourself on the easy-confidence spectrum. Moving to the left makes it easier to make casual friends, and lots of them; moving to the right makes it easier to make good friends. I suspect that most people slide around until they get the goods bundle that they want - I've even noticed how I've slid around over time, in reaction to being placed in new social environments - although there are obvious dysfunctional cases.
Sorry for implying that racism is common here if it isn't! Seeing Saileresque shibboleths thrown around here a few times and, indeed, the nearbyness of blogs like Roissy probably colored my perceptions. (Perhaps the impression I have of PUA from The Game and Roissy is similarly inaccurate.)
Replies from: TobyBartels
↑ comment by TobyBartels · 2010-08-11T03:22:19.018Z · LW(p) · GW(p)
I used to be interested in Marxism, but not so much anymore.
However, I'm still interested in theories of value. The labour theory of value is not just a Marxist thing; it was widely accepted in the 19th century, and there are still non-Marxists who use it.
I have a hard time deciding if the debate is anything more than a matter of definition. Perhaps one ought to have multiple theories of value for different purposes?
Anyway, I want to ask if you have any recommendations for reading on this subject.
↑ comment by Emile · 2010-08-04T10:08:27.017Z · LW(p) · GW(p)
I don't think the view that there are genetic racial differences in IQ is popular here, if that's what you're referring to. It's come up a few times and the consensus seems to be that the evidence points to cultural and environmental explanations for the racial IQ gap.
I was wondering about that too; it's not really a major topic here, though maybe the fact that it's been recently discussed on Overcoming Bias and that Roissy in DC is a "nearby" blog gave him this impression?
Replies from: satt
↑ comment by satt · 2010-08-04T17:55:30.405Z · LW(p) · GW(p)
The topic kinda-sorta came up in last month's Open Thread, and WrongBot used it as an example in "Some Thoughts Are Too Dangerous For Brains to Think".
Replies from: None
comment by erratio · 2010-06-29T10:18:41.631Z · LW(p) · GW(p)
Hi all, I'm Jen, an Australian Jewish atheist, and a student in a Computer Science/Linguistics/Cognitive Science combined degree, in which I am currently writing a linguistics thesis. I got here through recommendations from a couple of friends who visit here, and stayed mostly for the akrasia and luminosity articles (hello thesis and anxiety/self-esteem problems!) Oh, and the other articles too, but the ones I've mentioned are the ones that I've put the most effort into understanding and applying. The others are just interesting and marked for further processing at some later time.
I think I was born a rationalist rather than becoming one - I have a deep-seated desire for things to have reasons that make sense, by which I mean the "we ran some experiments and got this answer" kind of sense as opposed to the "this validates my beliefs" kind of sense. Having said that, I'm still prey to all kinds of irrationality, hence this site being helpful.
At some point in the future I would be interested in writing something about linguistic pragmatics - it's basically another scientific way of looking at communication. There's a lot of overlap between pragmatics and the ideas I've seen here on status and signalling, but it's all couched in different language and emphasises different parts, so it may be different enough to be helpful to others. But at the moment I have no intention of writing anything beyond this comment (hello thesis again!), the account is mostly just because I got sick of not being able to upvote anything.
Replies from: Morendil
comment by gscshoyru · 2011-02-04T14:27:52.642Z · LW(p) · GW(p)
Hi, my handle is gscshoyru (gsc for short), and I'm new here. I found this site through the AI-Box experiment, oddly enough -- and I think I got there from TVTropes, though I don't remember. After reading the fiction (and being vaguely confused that I had read the NPC story before, but nothing else of his, since I'm a fantasy/sci-fi junkie and I usually track down authors I like), I started reading up on all of Eliezer's writings on rationality, and found it made a lot of sense. So I am now a budding rationalist, and have decided to join this site because it is awesome.
That's how I found you -- as for who I am and such, I am a male 22-year-old mathematics major/CS minor currently working as a programmer in New Jersey. So, that's me. Hi everyone!
comment by Bill_McGrath · 2011-08-24T11:51:28.907Z · LW(p) · GW(p)
Hello, Less Wrong!
I'm Bill McGrath. I'm 22 years old, Irish, and I found my way here, as with many others, from TVTropes and Harry Potter and the Methods of Rationality.
I'm a composer and musician, currently entering the final year of my undergrad degree. I have a strong interest in many other fields - friends of mine who study maths and physics often get grilled for information on their topics! I was a good maths student in school, I still enjoy using maths to solve problems in my other work or just for pleasure, and I still remember most of what I learned. Probability is the main exception here - it wasn't my strongest area, and I've forgotten a lot of the vocabulary, but it's the next topic I intend to study when I get a chance. This is proving problematic in my understanding of the Bayesian approach, but I'm getting there.
I've been working my way through the core sequences, along with some scattered reading elsewhere on the site. So far, a lot of what I've encountered has been ideas that are familiar to me, and that I try to use when debating or discussing ideas anyway. I've held for a while now that you have to be ready to admit your mistakes, not be afraid of being wrong sometimes, and take a neutral approach to evidence - allowing any of these to cloud your judgement means you won't get reliable data. That said, I've still learned quite a bit from LW, most importantly how to express these ideas about rationality to other people.
I'm not sure I could pinpoint what moment brought me to this mindset, but it was possibly the moment I understood why the scientific method was about trying to disprove, rather than prove, your hypothesis; or perhaps when I realized that the empiricist's obligation to admit when they are wrong is what makes them strong. Other things that have helped me along the way: the author Neal Stephenson, the comedian Tim Minchin, and Richard Feynman.
My other interests, most of which I have no formal training in but I have read about in my own time or have learned about through conversation with friends, include:
-politics - I consider myself to be socially liberal but economically ignorant
-languages (I speak a little German and less Irish, have taken brief courses in other languages), linguistic relativism
-writing, and the correct use of language
-quantum physics (in an interested layman way - I am aware of a lot of the concepts, but I'm by no means knowledgeable)
-psychology
as well as many other things which are less LW-relevant!
Thank you to the founders and contributors to the site who have made it such an interesting collection of thoughts and ideas, as well as a welcoming forum for people to come and learn. I think I'll learn a lot from it, and hopefully some day I'll be able to repay the favour!
-Bill
comment by lincolnquirk · 2011-04-05T20:29:45.162Z · LW(p) · GW(p)
Hi, I'm Lincoln. I am 25; I live and work in Cambridge, MA. I currently build video games but I'm going to start a Ph.D program in Computer Science at the local university in the fall.
I identified rationality as a thing to be achieved ever since I knew there was a term for it. One of the minor goals I had since I was about 15 was devising a system of morality which fit with my own intuitions but which was consistent under reflection (but not in so many words). The two thought experiments I focused on were abortion and voting. I didn't come up with an answer, but I knew that such a morality was a thing I wanted -- consistency was important to me.
I ran across Eliezer's work 907 days ago reading a Hacker News post about the AI-box experiment, and various other Overcoming Bias posts that were submitted over the years. I didn't immediately follow through on that stuff.
But I became aware of SIAI about 10 months ago, when rms on Hacker News linked an interesting post about the Visiting Fellows program at SIAI.
I think I had a "click" moment: I immediately saw that AI was both an existential risk and major opportunity, and I wanted to work on these things to save the world. I followed links and ended up at LW; I didn't immediately understand the connection between AI and rationality, but they both looked interesting and useful, so I bookmarked LW.
I immediately sent in an application to the Visiting Fellows program, thinking "hey, I should figure out how to do this" -- I think it was Jasen who responded and asked me by email to summarize the purpose of SIAI and how I thought I could contribute. I wrote the purpose summary, but got stuck on how to contribute. I had barely read any of the Sequences at that time and had no idea how I could be useful. For those reasons (as well as a healthy dose of akrasia), I gave up on my application at that time.
Somewhere in there I found HP:MoR (perhaps via TVTropes?), saw the author was "Less Wrong" and made the connection.
Since then, I have been inhaling the Sequences; in the last month I've been checking the front page almost daily. I applied to the Rationality Boot Camp.
I'm very far from being a rationalist -- I can see that my rationality skills are really quite poor, but I at least identify as a student of rationality.
Replies from: Kevin, Alexei
↑ comment by Kevin · 2011-06-06T05:22:28.997Z · LW(p) · GW(p)
That's me, welcome to Less Wrong! Glad to form some part of your personal causal history.
Replies from: lincolnquirk
↑ comment by lincolnquirk · 2011-06-06T19:45:36.859Z · LW(p) · GW(p)
Update: I got into Rationality Boot Camp, which is starting tomorrow. Thanks for posting that on HN! I wouldn't (probably) be here otherwise.
↑ comment by Alexei · 2011-06-11T02:14:11.628Z · LW(p) · GW(p)
Hey, I'm kind of in a similar situation to you. I've worked on making games (as a programmer) for several years, and currently I'm working on a game of my own, where I incorporate certain ideas from LessWrong. I've been wondering lately if I could contribute more if I did FAI-related research. What convinced you to switch to it? How much do you think you'll contribute? How talented are you, and how much of a deciding factor was that?
comment by bigjeff5 · 2011-01-27T02:02:01.130Z · LW(p) · GW(p)
Hello, I'm Jeff, I found this site via a link on an XKCD forum post, which also included a link to the Harry Potter and the Methods of Rationality fan-fic. I read the book first (well, what has been written so far, I just couldn't stop!) and decided that whoever wrote that must be made of pure awesome, and I was excited to see what you all talked about here.
After some perusal, I decided I had to respond to one of the posts, which of course meant I had to sign up. The post used keyboard layouts (QWERTY, etc.) as an example of how to rephrase a question properly in order to answer it in a meaningful way. Posting my opinion ended up challenging some assumptions I had about the QWERTY and Dvorak layouts, and I am now three and a half hours into learning the Dvorak layout in order to determine which is actually the better layout (based on things I read, it seemed a worthwhile endeavor rather than too difficult, as I had assumed).
I would have posted this in Dvorak layout, but I only have half the keys down and it would be really, really slow, so I switched back to QWERTY just for this. QWERTY comes out practically as I think it - Dvorak, not so much yet. The speed with which I'm picking up the new layout also shatters some other assumptions I had about how long it takes to retrain muscle memory. Turns out, not long at all (at least in this case), though becoming fluent in Dvorak will probably take a while.
I would say I am a budding rationalist, and I hope this site can really speed my education along. If that doesn't tell you enough about who I am, then I don't really know what else to say.
comment by [deleted] · 2010-12-24T00:59:25.401Z · LW(p) · GW(p)
Greetings, fellow thinkers! I'm a 19-year-old undergraduate student at Clemson University, majoring in mathematics (or, as Clemson (unjustifiably) calls it, Mathematical Sciences). I found this blog through Harry Potter and the Methods of Rationality about three weeks ago, and I spent those three weeks doing little else in my spare time but reading the Sequences (which I've now finished).
My parents emigrated from the Soviet Union (my father is from Kiev, my mother from Moscow) just months before my birth. They spoke very little English upon their arrival, so they only spoke Russian to me at home, and I picked up English in kindergarten; I consider both to be my native languages, but I'm somewhat more comfortable expressing myself in English. I studied French in high school, and consider myself "conversant", but definitely not fluent, although I intend to study abroad in a Francophone country and become fluent. This last semester I started studying Japanese, and I intend to become fluent in that as well.
My family is Jewish, but none of my relatives practice Judaism. My mother identifies herself as an agnostic, but is strongly opposed to the Abrahamic religions and their conception of God. My father identifies as an atheist. I have never believed in Santa Claus or God, and was very confused as a child about how other people could be so obviously wrong and not notice it. I've never been inclined towards mysticism, and I remember espousing Physicalist Reductionism (although I did not know those words) at an early age, maybe when I was around 9 years old.
I've always been very concerned with being rational, and especially with understanding and improving myself. I think I missed out on a lot of what Americans consider to be classic sci-fi (I didn't see Star Wars until I got to college, for example), but I grew up with a lot of good Russian sci-fi and Orson Scott Card.
I used to be quite a cynical misanthrope, but over the past few years I've grown to be much more open and friendly and optimistic. However, I've been an egoist for as long as I can remember, and I see no reason why this might change in the foreseeable future (this seems to be my primary point of departure from agreement with Eliezer). I sometimes go out of my way to help people (strangers as much as friends) because I enjoy helping people, but I have no illusions about whose benefit my actions are for.
I'm very glad to have found a place where smart people who like to think about things can interact and share their knowledge!
Replies from: Eliezer_Yudkowsky, TobyBartels, ata, shokwave
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-12-24T01:31:06.085Z · LW(p) · GW(p)
I've been an egoist for as long as I can remember
No offense intended, but: If you could take a pill that would prevent all pain from your conscience, and it could be absolutely guaranteed that no one would ever find out, how many twelve-year-olds would you kill for a dollar?
(Perhaps you meant to say that you were mostly egoist, or that your deliberatively espoused moral principles were egoistic?)
PS: Welcome to Less Wrong!
Replies from: None, None, wedrifid, None
↑ comment by [deleted] · 2010-12-25T06:04:06.664Z · LW(p) · GW(p)
Eliezer, I've been thinking about this a lot. When I backed up and asked myself whether, not why, I realized that
1) I'm no longer sure what "I am an egoist" means, especially given how far my understanding of ethics has come since I decided that, and
2) I derive fuzzies from repeating that back to myself, which strikes me as a warning sign that I'm covering up my own confusion.
↑ comment by [deleted] · 2010-12-24T06:38:57.436Z · LW(p) · GW(p)
Eliezer, please don't think you can offend me by disagreeing with me or questioning my opinions - every disagreement (between rational people) is another precious opportunity for someone (hopefully me!) to get closer to Truth; if the person correcting me is someone I believe with high probability to be smarter than me, or to have thought through the issue at hand better than I have (and you fit those criteria!), this only raises the probability that it is I who stand to benefit from the disagreement.
I'm not certain this is a very good answer to your question, but 1) I would not take such a pill, because I enjoy empathy and don't think pain is always bad; 2) people's deaths negatively affect many people (both through the ontologically positive grief incurred by the loss and through the ontologically negative loss of the utility they would have produced), and that negative effect is very likely to make its way to me through the Web of human interaction, especially if the deceased are young and have not yet had much of a chance to spread utility through the Web; and 3) I would have to be quite efficient at killing 12-year-olds for it to be worth my time to do it for a dollar each (although of course this is tangential to your question, since the amount "a dollar" was arbitrary).
I should also point out that I have a strongly negative psychological reaction to violence. For example, I find the thought of playing a first-person shooting game repugnant, because even pretending to shoot people makes me feel terrible. I just don't know what there is out there worse than human beings deliberately doing physical harm to one another. As a child, I felt little empathy for my fellow humans, but at some point, it was as if I was treated with Ludovico's Technique (à la A Clockwork Orange)... maybe some key mirror neurons in my prefrontal cortex just needed time to develop.
Thank you for taking time to make me think about this!
Replies from: jimrandomh
↑ comment by jimrandomh · 2010-12-24T16:38:32.720Z · LW(p) · GW(p)
If your moral code penalizes things that make you feel bad, and doing X would make you feel bad, then is it fair to say that not doing X is part of your moral code?
I think the point Eliezer was getting at is that human morality is very complex, and statements like "I'm an egoist" sweep a lot of that under the rug. And to continue his example: what if the pill not only prevented all pain from your conscience, but also gave you enjoyment (in the form of serotonin or whatever) at least as good as what you get from empathy?
Replies from: None, None
↑ comment by [deleted] · 2010-12-25T06:04:42.879Z · LW(p) · GW(p)
You're right, human morality is more complex than I thought it was when "I am an egoist" seemed like a reasonable assertion, and all the fuzzies I got from "resolving" the question of ethics prevented me from properly updating my beliefs about my own ethical disposition.
↑ comment by wedrifid · 2010-12-24T02:33:07.315Z · LW(p) · GW(p)
No offense intended, but: If you could take a pill that would prevent all pain from your conscience, and it could be absolutely guaranteed that no one would ever find out, how many twelve-year-olds would you kill for a dollar?
How much do bullets cost again? :P
↑ comment by TobyBartels · 2010-12-24T02:12:53.197Z · LW(p) · GW(p)
majoring in mathematics (or, as Clemson (unjustifiably) calls it, Mathematical Sciences)
If you mean that mathematics is not a natural science, then I agree with you. But 'science' has an earlier, broader meaning that applies to any field of knowledge, so mathematical science is simply the systematic study of mathematics. (I don't know why they put it in the plural, but that's sort of traditional.)
Compare definitions 2 and 4 at dictionary.com.
Replies from: None
↑ comment by [deleted] · 2010-12-24T06:19:46.954Z · LW(p) · GW(p)
You're right! I've been so caught up (for years now) with explaining to people that mathematics was not a science because it was not empirical (although, as I've since learned from Eliezer, "pure thought" is still a physical process that we must observe in order to learn anything from it), that I've totally failed to actually think about the issue.
There goes another cached thought from my brain; good riddance, and thanks for the correction!
Replies from: TobyBartels
↑ comment by TobyBartels · 2010-12-26T07:06:54.729Z · LW(p) · GW(p)
You're welcome!
↑ comment by ata · 2010-12-24T01:04:33.975Z · LW(p) · GW(p)
Welcome!
I spent those three weeks doing little else in my spare time but reading the Sequences (which I've now finished).
Impressive. I've been here for over a year and I still haven't finished all of them.
However, I've been an egoist for as long as I can remember, and I see no reason why this might change in the foreseeable future (this seems to be my primary point of departure from agreement with Eliezer). I sometimes go out of my way to help people (strangers as much as friends) because I enjoy helping people, but I have no illusions about whose benefit my actions are for.
I'm curious — if someone invented a pill that exactly simulated the feeling of helping people, would you switch to taking that pill instead of actually helping people?
Replies from: None
↑ comment by [deleted] · 2010-12-24T06:46:04.340Z · LW(p) · GW(p)
Impressive. I've been here for over a year and I still haven't finished all of them.
Thanks! My friends thought I was crazy (well, they probably already did and still do), but once I firmly decided to get through the Sequences, I really almost didn't do anything else when I wasn't in class, taking an exam, or taking care of biological needs like food (having a body is such a liability!).
I'm curious — if someone invented a pill that exactly simulated the feeling of helping people, would you switch to taking that pill instead of actually helping people?
No, because helping people has real effects that benefit everyone. There's a reason I'm more inclined to help my friends than strangers - I can count on them to help me in return (this is still true of strangers, but less directly - people who live in a society of helpful people are more likely to be helpful!). This is especially true of friends who know more about certain things than I do - many of my friends are constantly teaching each other (and me) the things they know best, and we all know a lot more as a result... but it won't work if I decide I don't want to teach anyone anything.
Replies from: None
↑ comment by [deleted] · 2011-12-20T17:07:17.190Z · LW(p) · GW(p)
I think there are few humans who don't genuinely care more about themselves, their friends, and their family than about people in general.
Personally, I find the idea that I should prefer the death of, say, my own little sister to that of two or three or four random little girls absurd. I suspect that even when it comes to one's own life, people are hopelessly muddled about what they really want, and their answers don't correlate too well with actions. A better way to get an estimate of what a person is likely to do is to ask them what fraction of people would sacrifice their lives to save the lives of N (small positive integer) other random people.
Replies from: MixedNuts
↑ comment by MixedNuts · 2011-12-20T17:24:34.829Z · LW(p) · GW(p)
It's even more complicated than that. If I see a few strangers in immediate, unambiguous danger, I'm pretty sure I will die to save them. But I will not spend all that much on donating to a charity that will save these same people, twenty years later and two thousand miles away. (...what was that about altruistic ideals being Far?)
Replies from: None
↑ comment by shokwave · 2010-12-24T09:36:38.468Z · LW(p) · GW(p)
However, I've been an egoist for as long as I can remember,
I'm not entirely sure what this position entails. Wikipedia sent me to 'egotist' and here. I am curious because it seems like quite a statement to use a term so similar to an epithet to describe one's own philosophy.
Replies from: None, arundelo
↑ comment by [deleted] · 2010-12-24T10:30:31.483Z · LW(p) · GW(p)
The distinction between egoism and egotism is an oft-mixed-up one. An egotist is simply someone who is overly concerned with themselves; egoism is a somewhat more precise term, referring to a system of ethics (and there are many) in which the intended beneficiary of an action "ought" (a word that Eliezer did much to demystify for me) to be the actor.
The most famous egoist system of ethics is probably Ayn Rand's Objectivism, of which I am by no means a follower, although I've read all of her non-fiction.
↑ comment by arundelo · 2010-12-24T16:29:10.637Z · LW(p) · GW(p)
See the article on ethical egoism.
comment by JJ10DMAN · 2010-10-15T13:25:48.547Z · LW(p) · GW(p)
I originally wrote this for the origin story thread until I realized it's more appropriate here. So, sorry if it straddles both a bit.
I am, as nearly as I believe can be seen in the present world, an intrinsic rationalist. For example: as a young child I would mock irrationality in my parents, and on the rare occasions I was struck, I would laugh, genuinely, even through tears if they came, because the irrationality of the Appeal to Force made the joke immensely funnier. Most people start out as well-adapted non-rationalists; I evidently started as a maladaptive rationalist.
As an intrinsic (maladaptive) rationalist, I have had an extremely bumpy ride in understanding my fellow man. If I had been born 10 years later, I might have been diagnosed with Asperger's Syndrome. As it was, I was a little different, and never really got on with anyone, despite being well-mannered. A nerd, in other words. Regarding bias, empathic favoritism, willful ignorance, asking questions whose answers will not affect subsequent actions or belief confidences, and other peculiarities for which I seem to be an outlier: any knowledge about how to identify and then deal with these peculiarities has been extremely hard-won, from years upon years of messy interactions in uncontrolled environments, with few hypotheses from others to go on (after all, they "just get it", so they never needed to sort it out explicitly).
I've recently started reading rationalist blogs like this one, and they have been hugely informative to me because they put things I have observed about people but failed to understand intuitively into a very abstract context (i.e. one that bypasses intuition). Less Wrong, among others, has led to a concrete improvement in my interactions with humanity in general, the same way a blog about dogs would improve one's interactions with dogs in general. This is after just a couple months! Thanks LW.
Replies from: HughRistik
↑ comment by HughRistik · 2010-10-15T17:54:01.362Z · LW(p) · GW(p)
Less Wrong, among others, has led to a concrete improvement in my interactions with humanity in general, the same way a blog about dogs would improve one's interactions with dogs in general.
That's really cool. I'd be curious to know some examples of some ideas you've read here that you found useful.
Replies from: JJ10DMAN
↑ comment by JJ10DMAN · 2011-02-17T19:40:59.540Z · LW(p) · GW(p)
Rationalist blogs cite a lot of biases and curious sociological behaviors which have plagued me, because I tend to optimistically accept what people say at face value. By explaining them in rationalist terms, LW and similar blogs essentially explain them in a way suited to my mode of thinking. I'm now much better at picking up on unwritten rules, at avoiding punishment or ostracism for performing too well, at identifying when someone is lying politely but absolutely expects me to recognize it as a complete lie, etc., thanks to my reading into these psychological phenomena.
Additionally, explanations of how people confuse "the map" with "the territory" have been very helpful in determining when correcting someone is going to be a waste of time. If they were sloppy and misread their map, I should step in; if their conclusion is the result of deliberately interpreting a map feature (flatness, folding) as a territory feature, then unless I know the person to be deeply rational, I should probably avoid starting a 15-minute argument that won't convince them of anything.
comment by Relsqui · 2010-09-17T02:47:12.100Z · LW(p) · GW(p)
I suppose it's high time I actually introduced myself.
Hullo LW! I'm Elizabeth Ellis. That's a very common first name and a very common last name, so if you want to google me, I recommend "relsqui" instead. (I'm not a private person, the handle is just more useful for being a consistently recognizable person online.) I'm 24 and in Berkeley, California, USA. No association with the college; I just live here. I'm a cyclist, an omnivore, and a nontheist; none of these are because of moral beliefs.
I'm a high school dropout, which I like telling people after they've met me, because I like fighting the illusion that formal education is the only way to produce intelligent, literate, and articulate people--or rather, that the only reason to drop out is not being one. In mid-August of this year I woke up one morning, thought for a while about things I could do with my life that would be productive and fulfilling, and decided it would be helpful to have a bachelor's degree. I started classes two weeks later. GEs for now, then a transfer into a communication or language program. It's very strange taking classes with people who were in high school four months ago.
My major area of interest is human communication. Step back for a moment and think about it: You've got an electric meatball in your head which is capable of causing other bits of connected meat to spasm, producing vibrations in the air. Another piece of meat somewhere else is touched by those vibrations ... and then the electric meatball in somebody else's head is supposed to produce an approximation of the signals that happened to be running through yours? That's ridiculous. The wonder isn't how often we miscommunicate, it's that we ever communicate well.
So, my goal is to help people do it better. This includes spreading communication techniques which I've found effective for getting one electric meatball to sync up with another, as well as more straightforward things like an interest in languages. (I'm only fluent in English, but I'm conversational in Spanish, know some rudimentary Hebrew, and have a semester-equivalent or less of a handful of other things.)
One of my assets in this department is that, on the spectrum of strongly logic-driven people to strongly emotion-driven people, I am fairly close to the center. This has its good and bad points. I understand each side better than the other one does, and have had success translating between them for people who weren't getting across to each other. On the other hand, I'm repelled by both extremes, which can be inconvenient. I think that no map of a human can be accurate without acknowledging emotions in the territory, which we feel, and which drive us, but which we do not fully understand. This does not preclude attempting to understand them better; it just requires working with those emotions rather than wishing they didn't exist.
I came to LW because someone linked me to the parable of the dagger and it delighted me, so I looked around to see what else was here. I'm interested in ways to make better decisions and be less wrong because I find it useful to have these ideas floating around in my head when I have a decision to make--much like aforementioned communication techniques when I'm talking to someone. I'm not actively trying to transform myself, at least not in any way related to rationality.
That's everything of any relevance I can think of at the moment.
Replies from: Alicorn
comment by Skepxian · 2010-07-26T15:44:45.965Z · LW(p) · GW(p)
Greetings, all. Found this site not too long ago, been reading through it in delight. It has truly energized my brain. I've been trying to codify and denote a number of values that I hold true in my life, in discussion, and in reason and logic, but I was having the most difficult time. I was convinced I'd found a wonderful place that could help me when it provided me a link to the Twelve Virtues of Rationality, which neatly and tidily listed out a number of things I'd been striving to enumerate.
My rationalism originated at a very, very young age, when the things adults said and did didn't make sense. Some of it did, as a matter of fact, make more sense once I'd gotten older - but they could have at least tried to explain it to me - and I found that their successes too often seemed more like luck than having anything to do with their reasons for doing things. I suppose I became a rationalist out of frustration, one could say, at the sheer irrationality of the world around me.
I'm a Christian, and have applied my understanding of Rationality to Christianity. I find it holds up strongly, but am not insulted that not everyone feels that way. This site may be slanted atheist, but I find that rationalists have more in common with each other no matter their religious beliefs than a rationalist atheist has with a dogmatic atheist, or a rationalist Christian has with a dogmatic Christian, generally speaking.
I welcome discussion, dialog, and spirited debate, as long as you listen to me and I listen to you. I have a literal way of speaking, and don't tend to indulge in those linguistic niceties that are technically untrue, which so many people hold strongly to. My belief is that if you don't want to discuss something, don't bring it up. So if I bring something up, I'd better darn well be able to discuss it. My belief is also that I should not strongly hold an opinion if I cannot strongly argue against my opinion, so I value any and all strong arguments against any opinion I hold.
I look forward to meeting many of you!
Replies from: RobinZ
↑ comment by RobinZ · 2010-07-26T16:05:33.399Z · LW(p) · GW(p)
Welcome! I imagine a number of us would be quite happy to argue the rectitude of Christianity with you whenever you are interested, but no big rush.
A while ago someone posted a question about introductory posts if you want a selection of reading material which doesn't require too much Less Wrong background. And yes, I posted many of those links. Hey, I'm enthusiastic!
Replies from: Skepxian
↑ comment by Skepxian · 2010-07-26T17:07:12.798Z · LW(p) · GW(p)
Thank you very much!
A small element of my own personal quirks (which, alas, I keep screwing up) is to avoid using the words 'argue' and 'debate'. Arguing is like trying to 'already be right', and Debate is a test of social ability, not the rightness of your side. I like to discuss - some of the greatest feelings are when I suddenly get that sensation of "OH! I've been wrong, but that makes SO MUCH MORE SENSE!" And some of the scariest feelings are "What? You're changing your mind to agree with me? But what if I'm wrong and I just argued it better?"
I'm not really looking to try to convince anyone of Christianity's less-wrongedness, but it seems to be a topic that pops up with a decent frequency. (Though admittedly I've not read enough pages to really get a good statistical assessment yet.) Since it was directly mentioned in "Welcome to Less Wrong," I figured I'd make my obvious biases a bit of public knowledge. :) But I always do enjoy theological discussion, when it comes my way.
I look forward to discussing with you soon. :) I'm taking my time getting through the Sequences, at the moment, but I'll keep an eye on those introductory posts as well.
Replies from: Apprentice, ata
↑ comment by Apprentice · 2010-07-26T17:37:15.857Z · LW(p) · GW(p)
Christian or atheist - in the end we all believe in infinite torture forever. Welcome!
Replies from: WrongBot, Skepxian
↑ comment by WrongBot · 2010-07-26T18:05:13.526Z · LW(p) · GW(p)
I think you're leaving out a substantial number of people who don't believe in infinite anything.
Replies from: Apprentice, Skepxian
↑ comment by Apprentice · 2010-07-26T18:35:37.327Z · LW(p) · GW(p)
This was an attempt at humor. Usually when people start sentences with "Whatever religion we adhere to..." they are going to utter a platitude ending with "...we all believe in love/life/goodness". The intended joke was to come about through a subversion of the audience's expectation. It was also meant to poke fun at all the torture discussions here lately, though perhaps that's already been done to death.
Replies from: orthonormal, WrongBot
↑ comment by orthonormal · 2010-07-26T18:42:44.581Z · LW(p) · GW(p)
Creative idea, poor execution. You'd have to combine it with several other such platitude parodies before other people would interpret your joke correctly.
Replies from: khafra, Skepxian
↑ comment by Skepxian · 2010-07-26T19:25:16.303Z · LW(p) · GW(p)
Just because you didn't get the joke doesn't mean he did it wrong. I got the joke, and he was saying it to me, so I believe the joke was performed correctly, given his target audience! ^_^
The problem, I'd say, would be an assumption of shared prior experience - but humor in general tends to make that assumption, whether it's puns which assume a shared experience with lingual quirks, friend in-jokes which are directly about shared experiences, or genre humor which assumes a shared experience in that genre. This was genre humor.
While transparent communication is wonderful for rational discussion, I would conjecture that humor is inherently about the irrational links our minds make between disparate information with similar qualities.
↑ comment by WrongBot · 2010-07-26T19:56:15.183Z · LW(p) · GW(p)
I got the joke, but I guess I just didn't think it was funny. That may be because I've been pretty annoyed with all the infinite torture discussions that have been going on; I think the idea is laughably implausible, and don't understand the compulsion people seem to have to keep talking about it, even after being informed that they are causing other people horrible nightmares by doing so.
↑ comment by Skepxian · 2010-07-26T19:31:01.738Z · LW(p) · GW(p)
I think everyone believes in infinite something, even if it's infinite nothingness, or infinite cosmic foam, but I understand your meaning. ^_^
Replies from: WrongBot
↑ comment by WrongBot · 2010-07-26T19:52:28.037Z · LW(p) · GW(p)
I don't. I believe that there are things that can only be described in terms of stupendously huge numbers, but I believe that everything that exists can be described without reference to infinities.
Really, when I think about how incomprehensibly enormous a number like BusyBeaver(3^^^3) is, I have trouble believing that there is some physical aspect of the universe that could need anything bigger. And if there is, well, there's always BusyBeaver(3^^^^3) waiting in the wings.
Eliezer calls this infinite-set atheism, which is as good a name as any, I suppose.
Replies from: Sniffnoy, Vladimir_Nesov, Skepxian
↑ comment by Vladimir_Nesov · 2010-07-26T20:12:42.607Z · LW(p) · GW(p)
Concepts don't have to be about "reality", whatever that is (not a mathematically defined concept for sure).
Replies from: WrongBot
↑ comment by WrongBot · 2010-07-26T20:25:27.532Z · LW(p) · GW(p)
Infinities exist as concepts, yes. They're even useful in math. But I have never encountered anything that exists (for any reasonable definition of "exists") that can't be described without an infinity. MWI describes a preposterously large but still finite multiverse, as far as I understand it. And if our physical universe is infinite, as some have supposed, I haven't seen proof of it.
Really, like any other form of atheism, infinite-set atheism should be easy to dispel. All anyone has to do to change my mind is show me an infinite set.
Replies from: dclayh, Vladimir_Nesov
↑ comment by dclayh · 2010-07-26T20:28:41.604Z · LW(p) · GW(p)
All anyone has to do to change my mind is show me an infinite set.
Considering your brain is finite, I don't think you're entitled to that particular proof.
(Perhaps you're just saying it would be a sufficient but not a necessary proof, in which case...okay, I guess.)
Replies from: WrongBot
↑ comment by WrongBot · 2010-07-26T20:58:42.326Z · LW(p) · GW(p)
That's not the only proof I'd accept, but given that I do accept conceptual infinities, I don't think my brain is necessarily the limiting factor here.
Another form of acceptable evidence would be some mathematical proof that begins with the laws of physics and demonstrates that reality contains an infinity. I'm not sure if a similar proof that demonstrates that reality could contain an infinity would be as convincing, but it would certainly sway me quite a bit.
↑ comment by Vladimir_Nesov · 2010-07-26T20:29:05.611Z · LW(p) · GW(p)
Unfortunately, observations don't have epistemic power, so we'd have to live with all possible concepts. Besides, it's quite likely that reality doesn't in fact contain any infinities, in which case it's not possible to show you an infinity, and you are just demanding particular proof. :-)
Replies from: Skepxian
↑ comment by Skepxian · 2010-07-26T20:38:01.243Z · LW(p) · GW(p)
Wait... he's already saying he believes reality doesn't contain any infinities...
And you say that you can't show proof to the contrary because it's likely reality doesn't contain any infinities...
I don't think I followed you there.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2010-07-26T20:50:33.347Z · LW(p) · GW(p)
I distinguish between "believing in X" and "believing reality contains X". I grew to dislike the non-mathematical concept of reality lately. Decision theory shouldn't depend on that.
Replies from: WrongBot
↑ comment by Skepxian · 2010-07-26T20:25:08.542Z · LW(p) · GW(p)
I'm not sure I understand. Part of it is the use of BusyBeaver - I'm familiar with Busy Beaver as an AI state machine, not as a number. Second: So you say you do not believe in infinity ... but only inasmuch as physical infinity? So you believe in conceptual infinity?
Replies from: WrongBot
↑ comment by WrongBot · 2010-07-26T20:35:16.166Z · LW(p) · GW(p)
The BusyBeaver value I'm referring to is the maximum number of steps that the Busy Beaver Turing Machine with n states (and, for convenience, 2 symbols) will take before halting. So (via wikipedia), BB(1) = 1, BB(2) = 6, BB(3) = 21, BB(4) = 107, BB(5) >= 47,176,870, BB(6) >= 3.8 × 10^21132, and so on. It grows the fastest of all possible complexity classes.
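For the curious, the small values can be checked by direct simulation. Here is a minimal sketch in Python (the transition table is the standard 2-state, 2-symbol champion machine from the literature); running it recovers BB(2) = 6 steps, with 4 ones left on the tape:

    def run_turing_machine(rules, start='A', halt='H'):
        # Simulate a Turing machine on an initially blank (all-zero) tape.
        # rules maps (state, symbol) -> (write, move, next_state),
        # where move is +1 for right and -1 for left.
        tape, pos, state, steps = {}, 0, start, 0
        while state != halt:
            write, move, state = rules[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            steps += 1
        return steps, sum(tape.values())

    # The known 2-state, 2-symbol busy beaver champion.
    champion = {
        ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
        ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H'),
    }
    print(run_turing_machine(champion))  # (6, 4)

The catch is that finding BB(n) in general means proving which of the exponentially many n-state machines runs longest before halting, which is where the uncomputability comes in.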
Replies from: Sniffnoy, Skepxian
↑ comment by Sniffnoy · 2010-07-27T20:48:31.821Z · LW(p) · GW(p)
OK, have to make technical corrections here. Busy Beaver is not a complexity class; complexity classes do not grow. The Busy Beaver function grows faster than any computable function, but I doubt it's the "fastest" at anything, seeing as you can always just take e^BB(n), e.g.
Replies from: WrongBot, cousin_it
↑ comment by WrongBot · 2010-07-27T21:15:03.898Z · LW(p) · GW(p)
Ugh, thank you. I seem to have gotten complexity classes and algorithmic complexity mixed up. Busy Beaver's algorithmic complexity grows asymptotically faster than any computable function, so far as considerations like Big-O notation are concerned. In those sorts of cases, I think that even for functions like e^BB(n), the BB(n) part dominates. Or so Wikipedia tells me.
ETA: cousin_it has pointed out that there are uncomputable functions which dominate Busy Beaver.
Replies from: Sniffnoy
↑ comment by cousin_it · 2010-07-27T20:59:58.875Z · LW(p) · GW(p)
As Eliezer pointed out on HN, there is a way to define numbers that dominate BB values as decisively as BB dominates the Ackermann function, but you actually need some math knowledge to make the next step, not just stack BB(BB(...)) or something. (To be more precise, once you make the step, you can beat any person who's "creatively" using BB's and oracles but doesn't know how to make the same step.) And after that quantum leap, you can make another quantum leap that requires you to understand another non-trivial bit of math, but after that leap he doesn't know what to do next, and I, being a poor shmuck, don't know either. If you want to work out for yourself what the steps are, don't click the link.
↑ comment by Skepxian · 2010-07-26T20:36:17.696Z · LW(p) · GW(p)
Ah, excellent, so I'm not so far off. Then what's 3^^^3, then?
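3^^^3 isn't a Busy Beaver value; it's Knuth's up-arrow notation. One arrow is exponentiation, and each extra arrow iterates the operation below it, so 3^^3 = 3^(3^3) = 7,625,597,484,987, and 3^^^3 = 3^^(3^^3). A minimal sketch of the recursion, for illustration only; the helper name "up" is mine, and calling it with three arrows will never return:

    def up(a, n, b):
        # Knuth's up-arrow: a (up^n) b.
        # One arrow is exponentiation; each extra arrow iterates the last.
        if n == 1:
            return a ** b
        if b == 0:
            return 1
        return up(a, n - 1, up(a, n, b - 1))

    print(up(3, 1, 3))  # 27
    print(up(3, 2, 3))  # 7625597484987, i.e. 3^(3^3)
    # up(3, 3, 3) is 3^^^3: a power tower of 3s 7,625,597,484,987 levels tall.

So 3^^^3 already dwarfs anything physical, and BusyBeaver(3^^^3), mentioned above, is unimaginably larger still.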
Replies from: Vladimir_Nesov
↑ comment by ata · 2010-07-26T18:01:18.939Z · LW(p) · GW(p)
A small element of my own personal quirks (which, alas, I keep screwing up) is to avoid using the words 'argue' and 'debate'. Arguing is like trying to 'already be right', and Debate is a test of social ability, not the rightness of your side. I like to discuss - some of the greatest feelings are when I suddenly get that sensation of "OH! I've been wrong, but that makes SO MUCH MORE SENSE!" And some of the scariest feelings are "What? You're changing your mind to agree with me? But what if I'm wrong and I just argued it better?"
Good attitude. I'm much the same, both in enjoying learning new things even when it means relinquishing a previously held belief, and in feeling slightly guilty when I cause someone to change their mind. :) LW has actually helped me get over the latter, because now that I understand rationality much better, I'm accordingly more confident that I'm doing things correctly in debates.
I'm glad you mentioned your Christianity and your specific belief that it is rationally justified — I'll be curious to see how it holds up after you've read the sequences Mysterious Answers to Mysterious Questions, How to Actually Change Your Mind, and Reductionism. (I hope you'll be considering that issue with that same curious, unattached mindset — if Christianity were false, would you really, honestly, sincerely want to know?) If I may ask, what specific beliefs do you consider part of your Christianity? The Holy Trinity? The miracles described in the NT? Jesus's life as described by the Gospels? The moral teachings in the OT? Creationism? Biblical literalism? Prayer as a powerful force? Heaven and Hell? Angels, demons, and the Devil as actual beings? Salvation through faith or works? The prophecies of the Revelation?
Replies from: EStokes, Skepxian
↑ comment by EStokes · 2010-07-26T22:19:00.043Z · LW(p) · GW(p)
Not in response to anyone, but to this thread/topic.
Is this really something that should be on LessWrong? LessWrong is more about debate on new territory and rationality and such, not going over well-trodden territory. There are many other places on the internet for debate on religion, but there's only one LW. Perhaps /r/atheism (maybe being careful to say that you're honestly looking to challenge your beliefs, not test your faith).
Unless there are new points that haven't been heard before, or people are genuinely interested in this specific debate.
Just not sure this is the right place, and want to hear other people's opinions on this.
Replies from: Skepxian, Bongo, RobinZ, Nick_Tarleton
↑ comment by Skepxian · 2010-07-27T00:16:59.465Z · LW(p) · GW(p)
Well, thus far, I've mainly seen, "Welcome to LessWrong ... let's poke at the new guy and see what he's thinking!" I don't think we're getting into any real serious philosophy, yet. It's all been fairly light stuff. I've been trying to self-moderate my responses to be polite and answer people, but not get too involved in a huge discussion, because I agree, this wouldn't be the right place. But so far, it's seemed just some curiosity being satisfied about me, specifically, and my theology - not theology as a whole. As such, it certainly seems to belong in a 'Meet the new guys' thread.
Additionally, I'm personally not here to challenge my beliefs or test my faith, though I certainly won't turn it down as it happens. Given the lean of belief in the place, I expect it to happen. My main draw, however, isn't theological; it's discovering a knowledge base and discussion area built around rationality, one that has already worked through the ideas I've been running in circles on in my head for lack of someone to talk them over with!
↑ comment by RobinZ · 2010-07-27T03:18:19.598Z · LW(p) · GW(p)
I should apologize for having kicked off the topic - I had some vague ideas of someday getting on the IRC channel and bouncing thoughts back and forth, and didn't realize that it would inevitably become a conversation in the thread here if I mentioned it.
↑ comment by Nick_Tarleton · 2010-07-26T22:25:45.188Z · LW(p) · GW(p)
In general I'd agree, but a theological argument in which all parties refer to the Sequences seems like a worthwhile novelty.
↑ comment by Skepxian · 2010-07-26T19:15:21.332Z · LW(p) · GW(p)
I'm partway through Mysterious Answers to Mysterious Questions, and it's very, very interesting. Much better fodder than I usually see from people misusing those concepts. It's refreshing to see points made in context with their original meaning, and intelligently applied! I'm giving myself some time to let my thoughts simmer, before making a few comments on a couple of them.
I want to know what's true. Even if Christianity wasn't true, I've already found a great deal of Truth in its teachings for how to live life. The Bible, I feel, encourages a rational mindset, as much as many might think otherwise - to not use one's intellect to examine one's religion would be to reject many of Jesus' teachings. Most specifically, it would reject Jesus' parable about taking a treasure (reason) that the Master (God) has given his servant (Man), and burying it in the ground instead of using it to create more treasure (knowledge). This can also be seen in the way that the only people Jesus really gets angry at throughout the entire Bible, and cries out against, are members of the 'true religion' (Pharisees) who abused and misused the tenets of their religion to push their own preconceptions.
The Holy Trinity: yes. The miracles: yes. Jesus' life: yes. Moral teachings: yes.
Creationism: Not supported by the Bible, nor by a thorough examination of ancient Hebrew culture, where the '6 days' were considered a metaphor for time too vast to be comprehended by the mortal mind. The Genesis sequence contains quite a few inherent metaphoric parallels with our scientific understanding of how the world was created, too.
Biblical literalism: Sorta. I believe the Bible was divinely inspired, but I believe that man's language is completely unable to manifest any sort of 'perfect understanding,' as the language itself is imperfect. Even, theoretically speaking, were the Bible able to present a perfect language, man is imperfectly able to understand it. So on a technical level, yes, I believe in biblical literalism (except where scholarly study, historical cultural examination, and the Bible itself tell us it's not literal), but in practice, I treat it a lot more loosely in recognition of man's inherent bias.
Prayer as a powerful force: Yes, but not like a wish-granting genie. Really, the power of prayer is more a power of inspiration and internal moral / emotional strength, an effect which could be explained by a placebo effect. Studies also show that prayer does have a powerful healing effect - but only if the subject knows that they are being prayed for. But medically speaking, attitude is a strong component of healing, not simply biochemical response to stimuli - so it might be internal strength, might be a placebo effect. As an attitude towards the world around one, I see 'answers' to prayers quite a bit, but not so much that I can rule out coincidence and a Rorschach-like effect upon the world around me.
Heaven and Hell, angels, demons, faith or works: I believe Heaven is where beings go in order to serve and follow the rules (which are there for our benefit, not just arbitrary). I believe that when beings of free will expressed a desire to do things their own way, not according to the rules, God created a place we call "Hell" which is where people who wish total freedom from the rules can go to do things their way without hurting the people who are following the rules. Not a punishment at all. As such, the "Salvation" question becomes rather a bit more complex as neither faith nor works is an appropriate descriptor. I'm looking into some theological scholarly writings at the moment which recently were brought to my attention which goes into more detail on this concept.
Prophecies, finally, tend to be awfully confusing till after they've happened, so till I see fire in the skies over Israel or an earthquake that shakes the whole world at once, I'm really not paying too much attention to them. The prophecies of the OT seem to have held up pretty well, though.
Replies from: mattnewport, orthonormal, WrongBot
↑ comment by mattnewport · 2010-07-26T20:06:52.570Z · LW(p) · GW(p)
Studies also show that prayer does have a powerful healing effect - but only if the subject knows that they are being prayed for.
Citations, please. The only well-controlled study I know of found the opposite: subjects who knew they were being prayed for suffered more complications than those who did not.
Replies from: Skepxian
↑ comment by Skepxian · 2010-07-26T20:26:50.680Z · LW(p) · GW(p)
I actually found it several years ago through an atheist site which was using it as evidence that prayer had only a placebo effect, so I'm afraid I don't have a citation for you just at the moment. I'll see what I can do when I have time. My apologies.
↑ comment by orthonormal · 2010-07-27T00:43:39.558Z · LW(p) · GW(p)
I want to know what's true. Even if Christianity wasn't true, I've already found a great deal of Truth in its teachings for how to live life. The Bible, I feel, encourages a rational mindset, as much as many might think otherwise - to not use one's intellect to examine one's religion would be to reject many of Jesus' teachings.
Having been religious (in particular, a very traditionalist Catholic, more so than my parents by far)† for a good chunk of my life before deconverting to atheism a few years ago (as an adult), I would have agreed with you, but a bit uneasily. And now, I can't help but point out a distinction.
When you point to the Bible for moral light, you're really pointing to a relatively small fraction of the total text, and much of that has been given new interpretations†† that the original apostles didn't use.
Let's give an example: to pick a passage that's less emotionally charged and less often bruited about in this connection, let's consider the story of Mary and Martha in Luke 10:38-42. People twist this every which way to make it sound more fair to Martha, when the simplest reading is just that Luke thought that the one best thing you could do with your life was to be an apostle, and wrote the episode in a way that showed this. Luke wasn't thinking about how the story should be interpreted within a large society where the majority are Christians going about daily business like Martha, because he expected the end times to come too soon for that society to be realized on Earth. He really, genuinely, wanted the reader to conclude that they should forget living like Martha††† if they possibly could, and imitate Mary instead.
Now, when faced with a passage like this, what do you prefer? The simpler interpretation which doesn't seem to help you as moral guidance? Or a more convoluted one which meshes with the way you think the truth should be lived in the world today? Which interpretation would you expect to find upheld in letters of the Church Fathers who lived before Rome converted? Which interpretation do you think was more likely for Luke?
And most importantly, if you're saying you're learning about moral truth from the Bible, but you're choosing your preferred interpretation of Scripture by aesthetic and moral criteria of the modern era, rather than criteria that are closer to the text and the history, why do you need the Scripture at all? Why not just state your aesthetic and moral principles and be done with it?
† Sorry for these distracting parentheticals, but I know the assumptions I'd have made had I read the unadorned account from someone else.
†† For one year at school, I took on the task of finding both Scripture readings and commentary from the Church Fathers to be read during a weekly prayer group. The latter task proved to be a lot harder than it seemed, because the actual content of typical passages from the Church Fathers is really foreign, and not in an inspiring way either. Augustine gets read today in schools as exemplar of Christian thought basically because he's the only Church Father of the Roman era who doesn't look completely insane on a straightforward reading of any full work.
††† There are places of honor in Luke and Acts for patrons who help the apostles, but they're rather clearly supporting roles, and less admirable than the miracle-working apostles themselves.
Replies from: Skepxian↑ comment by Skepxian · 2010-07-27T00:58:50.434Z · LW(p) · GW(p)
Every time someone says, "The simplest reading..." about a passage, I really draw back cautiously. Usually, I see two types of people who say "There's only one way to read that passage" about any given passage. The first is "I know what it means and anyone who disagrees with me is wrong because I know the Will of God," and the second is "I know what it means and it's stupid and there is no God."
I'm not saying you're doing that - quite the opposite: you agree that there are many ways to approach the passage. The way Luke may have approached it, I couldn't say. I just see a story being presented, and Jesus rarely said anything in a straightforward manner. He always presented things in such a way that those listening had to really think about what he meant, and there are many ways to interpret it. Even Jesus, when pressed, usually meant many things by his stories. Admittedly, this wasn't a parable, this was an 'event that happened', but I think any of Jesus' responses still need to be considered carefully.
Second, we have the fact that you're talking about what Luke saw in it. I don't pretend the Apostles were perfect or didn't have their flaws. Every apostle, every prophet, was shown to be particularly flawed - unlike the chosen of many other religions, the chosen of God in Judeo-Christian belief were terribly flawed. There was a suicidally depressed prophet, there was the rash murderer, there were liars and thieves. The closest to a 'good' prophet was Joseph of the Coat of Many Colors, but even he had his moments of spite and anger.
I'm interested in, but not dedicated to, what Luke thought of the situation. I'm much more interested in what Jesus did in the situation. Additionally, what about the context in which that scene appears? Jesus was constantly about service ... and that's what Martha was doing. He never admonished Martha ... he simply told her that Mary had made her choice, and it was better. He never said Martha should make the same choice, either.
It's worth noting that Mary was in a position that was traditionally denied women - but Jesus defended her right to be there, listening and learning from a teacher.
And I almost forgot the 'most importantly' part...
The strong lessons I learn from the Bible ... wouldn't necessarily have occurred to me otherwise. Yes, I interpret them from my bias of modern life and mores ... but the Bible presents me with things I wouldn't have thought to bring forward and consider. Methods of thinking I wouldn't have come up with on my own, or by talking with most others. This doesn't mean it's 'The True Faith', but it does make it a useful tool.
At any rate, we need to be careful not to go too much further. This is getting dangerously close to a theology discussion rather than a 'meet the new guy' discussion.
Replies from: orthonormal↑ comment by orthonormal · 2010-07-27T01:28:19.157Z · LW(p) · GW(p)
Anyhow, I think it's illuminating to be aware of what criteria actually go into one's judgments of Biblical interpretations. Your particular examples will vary.
Replies from: Skepxian↑ comment by WrongBot · 2010-07-26T19:38:07.371Z · LW(p) · GW(p)
I believe the bible was divinely inspired
Why? This seems to be the foundation for all your justifications here, and it's an incredibly strong claim. What evidence supports it? Is there any (weaker, presumably) evidence that contradicts it? I'd suggest you take a look at the article on Privileging the Hypothesis, which is a pretty easy failure mode to fall into when the hypothesis in question was developed by someone else.
Replies from: Skepxian↑ comment by Skepxian · 2010-07-26T20:08:39.576Z · LW(p) · GW(p)
A weighty question... At the moment, I'm not entirely able to give you the full response, I'm afraid, but I'll give you the best 'short answer' that I'm able to compile.
1: The universe seems slanted towards Entropy. This suggests a 'start'. Which suggests something to start the universe. This of course has a great many logical fallacies inherent in it, but it's one element.
2: Given a 'something to start the universe', we're left with hypothetical scientific/mathematical constructs or a deity-figure of some sort.
3: Assuming a deity figure (yes, privileging the Hypothesis - but given a small number of possibilities, we can hypothesize each in turn and then exhaustively test that element) we need to assume that either the deity figure doesn't care if we know about it, in which case it's pointless to search, or that it does care if we know about it, in which case there will be evidence. If it is pointless to search, then I see little difference between that and a hypothetical scientific/mathematical construct. Thus, we're still left with 'natural unknown force' or 'knowable deity figure'.
4: Assuming a deity figure with the OOMPH to make a universe, it'll probably be able to make certain it remains known. So it's probably one of the existing long-lasting and persistent belief systems.
5: ( magic happens ) Given a historical study of various long-lasting and persistent belief systems, I settled on Christianity as the most probable belief system, based on my knowledge of human behavior, the historical facts of the actions surrounding the era and life of Jesus such as the deaths of the Disciples, a study of the bible, and a basic irrational hunch. I found that lots of what I was brought up being taught about the bible and Christianity was wrong, but the Bible itself seemed much more stable.
6: Given certain historical elements, I was led to have to believe in certain Christian miracles I'm unable to explain. That, combined with the assumption that a deity-figure would want itself to be known, results in an active belief.
3: Assuming there is no deity-figure, or the deity-figure does not care to be known. In this case, the effort expended applying rational thought to religious institutions will not provide direct fruit for a proper religion.
4: If there is no deity figure, or the deity-figure does not care to be known, the most likely outcome of assumption #1 will likely have a serious flaw in it.
5: ( magic happens ) I searched out (and continue to search out) all the strongest "Christianity cannot be true" arguments I could (and can) find, and compare the anti-Christianity to the pro-Christianity arguments, and could not find a serious flaw. Several small flaws which are easily attributable to human error or lack of knowledge about a subject, but nothing showing a serious flaw in the underpinnings of the religion.
6: Additional side effect: the act of researching religions includes research into and examination of comparable morality systems and social behavior, and how they affect the world around them. This provides sufficient benefit that even if there is no deity figure, or a deity figure does not care to be known, the act of searching is not wasted. Quite the contrary, I consider the ongoing study into religion, and into Christianity itself, to be time well spent - even if at some later date I discover that the religion does have the serious flaw that I have not yet found.
Replies from: WrongBot, mattnewport, byrnema↑ comment by WrongBot · 2010-07-26T21:39:21.159Z · LW(p) · GW(p)
1: The universe seems slanted towards Entropy. This suggests a 'start'. Which suggests something to start the universe. This of course has a great many logical fallacies inherent in it, but it's one element.
If this point is logically fallacious, why is it the foundation of your belief? Eliezer has addressed the topic, but that post focuses more on whether one should jump to the idea of God from the idea of a First Cause, which you do seem to have thought about. But why assume a First Cause at all?
On a slightly different tack, if Thor came down (Or is it up? My Norse mythology is a little rusty) from Valhalla, tossed some thunderbolts around, and otherwise provided various sorts of strong evidence to support his claim that he was the God of Thunder with all that that entails, would you then worship him? Or, to put it another way, is there some evidence that would make you change your mind?
(Apologies if I'm being too aggressive with my questions. You seem like good people, and I wouldn't want to drive you away.)
Replies from: Skepxian↑ comment by Skepxian · 2010-07-27T00:12:20.753Z · LW(p) · GW(p)
Oh, no, not at all! I'm quite happy to have people interested in what I have to say, but I'm trying to keep my conversation suitable for the 'Welcome to Less Wrong' thread, and not have it get too big. ^_^
As far as 'If it's logically fallacious, why is it the foundation of your belief?'
Well, it's not the foundation of my belief, it's just a very strong element thereof. It would probably require several months of dedicated effort and perhaps 30,000 words to really hit the whole of my belief with any sort of holistic effort. However, why assume a First Cause? Well, because of entropy, we have to assume some sort of start for this iteration. Anything past that starts getting into extreme hypotheticals that only really 'make more sense than God' if it suits your pre-existing conditions. And no, I'm not saying God makes more sense outside of a bias - more that, given a clean slate, "There might be laws of physics we can't detect because they don't function in a universe where they've already countered entropy to a new start state" and "Maybe there's a Deity figure that decided it wanted to start the universe" are about equal in my mind. And to be fair, 'deity figure' could be equivalent to 'Higher-level universe's programmer making a computer game.' Or this could all be a simulation, and none of it's actually real, or, or, or...
But the reason that I decide to accept this as a basic assumption is that, eventually, you have to assume that there is truth, and work off of the existing scientific knowledge instead of waiting for brand new world-shattering discoveries in the field of metaphysics. So I keep an interested eye on stuff like brane vibration or cosmic froth, but still assume that entropy happens, and the universe had an actual start.
If Thor came down throwing lightning bolts and claiming our worship, I'd be... well, admittedly, a little confused and unsure. That's not exactly his MO from classic Norse mythology (which I love), and Norse mythology really didn't have the oomph of world creation that goes together with scientific evidence. I'd have to wonder if he wasn't a Nephilim or an alien playing tricks. (Hi, Stargate SG-1!)
However, I take your meaning. If some deity figure came down and said, "hey, here's proof," yeah, I'd have a LOT of re-evaluating to do. It'd depend a lot on circumstances, and what sort of evidence of the past, rather than just pure displays of power, the deity figure could present. What answers does it have to the tough questions? Does it match certain Antichrist elements from Revelation?
Alternatively, what sort of evidence would make me change my mind and become atheist?
I would love to be able to easily say, "Yeah, if this happened, I'd totally change my mind in an instant!" but I am aware that I'm only human, and certain beliefs have momentum in my mind. Negative circumstance certainly won't do it - I've long ago resolved the "Why does a good God allow bad things to happen?" element. Idiotic Christian fanboys won't do it - I've been developing a very careful attitude towards religion and politics in divorcing ideas from the proponents of ideas. And if I had an idea what that proof would be - I'd already be researching it. So I just keep kicking around looking for new stuff to research.
Thank you for the interest!
Replies from: WrongBot↑ comment by WrongBot · 2010-07-27T00:49:45.059Z · LW(p) · GW(p)
Sounds like you've given this some serious thought and avoided all kinds of failure modes. While I disagree with you and think that there's probably an interesting discussion here, I agree that this probably isn't the place to get into it. Welcome to Less Wrong, and I hope you stick around.
Replies from: Skepxian↑ comment by Skepxian · 2010-07-27T01:16:53.855Z · LW(p) · GW(p)
I've certainly tried, thank you very much. I think that might be the most satisfying reaction I could have hoped to receive. ^_^ I hope to stick around for a good long time, too... this site's rivaling "TV Tropes" for the ability to completely suck me in for hours at a time without me noticing it.
↑ comment by mattnewport · 2010-07-26T20:41:54.608Z · LW(p) · GW(p)
Given a historical study of various long-lasting and persistent belief systems, I settled on Christianity as the most probable belief system, based on my knowledge of human behavior, the historical facts of the actions surrounding the era and life of Jesus such as the deaths of the Disciples, a study of the bible, and a basic irrational hunch.
This sounds interesting. So were you raised an atheist or in some non-Christian religious tradition? Is the culture of your home country predominantly non-Christian? Conversion to a new belief system based on evidence is an interesting phenomenon because it is relatively rare. The vast majority of religious people simply adopt the religion they were raised in or the dominant religion of the surrounding culture, which is one piece of evidence that religious belief is not generally arrived at through rational thinking. Counterexamples to this trend offer a case study in the kinds of evidence that can actually change people's minds.
Replies from: Skepxian↑ comment by Skepxian · 2010-07-26T20:52:51.268Z · LW(p) · GW(p)
Apologies, I'm not as interesting as that. I changed a lot of beliefs about the belief system, but I was nonetheless still raised Christian. I didn't mean to imply otherwise - pre-existing developmental bias is part of the 'basic irrational hunch' part of the sentence.
I agree that religious belief is not generally arrived at through rational thinking, however - whether that religious belief is 'there is a God, and I know who it is!' or 'there is no God'. This is evidenced by, for instance, the time I was standing there at church, just before services, enjoying the fine day, and someone steps up next to me. "Isn't it a beautiful morning?" he asks. "Yes it is!" I reply. "Makes you wonder how someone can see this and still be an atheist," he says.
( head turns slooooowly ) "I think it's possible to appreciate a beautiful morning and still be atheist..." "Yes, but then who would have made something so beautiful?" ( mouth opens to talk ) ( mouth works silently ) "I believe the assumption would be, no one." "And what kind of sense would that make?" "I'd love to have that discussion, but service is about to start, and it's too beautiful a morning for what I suspect would be an argument."
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-07-26T21:02:47.537Z · LW(p) · GW(p)
Apologies, I'm not as interesting as that. I changed a lot of beliefs about the belief system, but I was nonetheless still raised Christian.
See also: Epistemic luck.
Replies from: Skepxian↑ comment by byrnema · 2010-07-26T21:46:41.541Z · LW(p) · GW(p)
4: Assuming a deity figure with the OOMPH to make a universe, it'll probably be able to make certain it remains known. So it's probably one of the existing long-lasting and persistent belief systems.
I like this argument. If there was such a deity, it could make certain it is known (and rediscovered when forgotten). The deity could embed this information into the universe in any numbers of ways. These ways could be accessed by humans, but misinterpreted. Evidence for this is the world religions, which have many major beliefs in common, but differ in the details. Christianity, being somewhat mature as a religion and having developed concurrently with rational and scientific thought, could have a reliable interpretation in certain aspects.
Replies from: Skepxian↑ comment by Skepxian · 2010-07-26T23:43:53.750Z · LW(p) · GW(p)
Thank you very much, I appreciate that.
However, I'm following from an assumption of a deity that wants to be known and moving forward. It certainly doesn't suffice to show that a deity figure does exist, because if we follow the assumption of a deity that doesn't want to be known, or of a lack of a deity, then any religion which has withstood the test of time is likely the one with the fewest obvious flaws. It's rather like evolution of an idea rather than a creature.
However, the existence of such a religion does provide for the possibility of a deity figure.
Replies from: byrnema↑ comment by byrnema · 2010-07-27T03:50:34.909Z · LW(p) · GW(p)
I used the word 'embed' because this implies the deity could (possibly) be working within the rules of physics. The relationship between the deity, physical time and whether it is immediately involved in human events would be an interesting digression. The timelessness of physics is a relevant set of posts for that.
I agree with your comments. Regarding the strength of implications in either direction (the possibility of a deity given a vigorous religion, or the possibility of a true religion given a deity), there are two main questions:
if a deity exists, should we expect that it cares if it is known?
does the world actually look like a world in which a deity would be revealing itself? (though as you cautioned, such a world may or may not actually have a deity within it)
If this thread is likely to attenuate here, these questions are left for academic interest ...
comment by LauralH · 2010-07-22T20:02:25.484Z · LW(p) · GW(p)
My name is Laural, 33-yo female, degree in CS, fetish for EvPsych. Raised Mormon, got over it at 18 or so, became a staunch Darwinist at 25.
I've been reading OvercomingBias on and off for years, but I didn't see this specific site till all the links to the Harry Potter fanfic came about. I had in fact just completed that series in May, so was quite excited to see the two things combined. But I think I wouldn't have registered if I hadn't read the AI Box page, which convinced me that EY was a genius. Personally, I am more interested in life extension than FAI. I'm most interested in changing social policy to legalize drugs, I suppose; if people are allowed to put whatever existing substances in their bodies, the substances that don't yet exist have a better chance.
comment by TobyBartels · 2010-07-22T03:14:19.275Z · LW(p) · GW(p)
I also found this blog through HP:MoR.
My ultimate social value is freedom, by which I mean the power of each person to control their own life. I believe in something like a utilitarian calculus, where utility is freedom, except that I don't really believe that there is a common scale in which one person's loss of freedom can be balanced by another person's gain. However, I find that freedom is usually very strongly positive-sum on any plausible scale, so this flaw doesn't seem to matter very much.
Of course, freedom in this sense can only be a social value; this leaves it up to each person to decide their own personal values: what they want for their own lives. In my case, I value forming and sustaining friendships in meatspace, often with activities centred around food and shared work, and I also value intellectual endeavours, mostly of an abstract mathematical sort. But this may change with my whims.
I might proselytise freedom here from time to time. There would be no point in proselytising my personal values, however.
Replies from: TobyBartels, None, CronoDAS↑ comment by TobyBartels · 2010-07-24T17:10:39.009Z · LW(p) · GW(p)
I also found this blog through HP:MoR.
Now that I think about it, I may have found HP:MoR through this blog. (I don't read much fan fiction.)
I can't remember anymore what linked me to HP:MoR, but I think that I got there after following a series of blog posts linking to blog posts on blogs that I don't ordinarily read. So I might well have gone through Less Wrong (or Overcoming Bias) along that way.
But if so, I wasn't inspired to read further in Less Wrong until after I'd read HP:MoR.
↑ comment by [deleted] · 2010-07-22T03:27:09.155Z · LW(p) · GW(p)
Freedom, I can get behind. Also math. Welcome aboard.
Replies from: TobyBartels↑ comment by TobyBartels · 2010-07-22T08:18:27.955Z · LW(p) · GW(p)
Thanks!
↑ comment by CronoDAS · 2010-07-22T06:11:08.815Z · LW(p) · GW(p)
I suspect that some kinds of "freedom" are overrated. Suppose that A, B, and C are mutually exclusive options, and you prefer A to both of the others. If you have a choice between A and B, you'd choose A. If I then give you the "freedom" to choose between A, B, and C instead of just between A and B, you'll still choose A, and the extra "freedom" didn't actually benefit you.
Replies from: TobyBartels↑ comment by TobyBartels · 2010-07-22T08:18:19.313Z · LW(p) · GW(p)
Right, by the standard of control over one's own life, that extra option does not actually add to my freedom. In real life, an extra option can even be confusing and so actually detract from freedom! (But it can also help clarify things and add to freedom that way, although you can get the same effect by merely contemplating the extra option if you're smart enough to think of it.)
Replies from: cousin_it↑ comment by cousin_it · 2010-07-22T08:26:27.445Z · LW(p) · GW(p)
More freedom is always good from an individual rationality perspective, but game theory has lots of situations where giving more behavior options to one agent causes harm to everyone, or where imposing a restriction makes everyone better off. For example, if we're playing the Centipede game and I somehow make it impossible for myself to play "down" for the first 50 turns - unilaterally, without requiring any matching commitment on your part - then we both win much more than we otherwise would.
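To make the payoffs concrete, here's a minimal Python sketch of that commitment effect. The payoff scheme (a pot of 2t at node t, with the stopping player taking t+1 and the other getting t-1, and an even split if nobody ever stops) is my own illustrative assumption, not part of the comment:

```python
# Backward induction on a 100-turn Centipede game, with an optional
# unilateral commitment by player 1 never to stop on the early turns.

def solve(N=100, p1_commit=0):
    """Return (p1, p2) payoffs under backward induction.

    Player 1 moves at odd nodes, player 2 at even nodes. Stopping ("down")
    at node t pays the mover t+1 and the other player t-1; if nobody ever
    stops, both get N. If p1_commit > 0, player 1 cannot stop at any
    node t <= p1_commit.
    """
    cont = (N, N)  # payoffs if play continues past the current node
    for t in range(N, 0, -1):
        mover = 0 if t % 2 == 1 else 1  # 0 = player 1, 1 = player 2
        down = (t + 1, t - 1) if mover == 0 else (t - 1, t + 1)
        committed = (mover == 0 and t <= p1_commit)
        if not committed and down[mover] >= cont[mover]:
            cont = down  # the mover stops here
    return cont

print(solve())              # (2, 0): the game unravels at the first move
print(solve(p1_commit=50))  # (49, 51): both players end up far richer
```

Under these assumed numbers, the commitment flips player 2's induction: stopping early now loses against waiting for the much larger pot near turn 50, so cooperation survives almost to the commitment's edge.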
Replies from: TobyBartels↑ comment by TobyBartels · 2010-07-22T08:53:34.760Z · LW(p) · GW(p)
Well, if you make it impossible for you to play down, then that's a perfectly valid exercise of your control over your own life, isn't it? For a paradox, you should consider whether I would impose that restriction on you (or at least whether I would take part in the enforcement mechanism of your previously chosen constraint when you change your mind).
Usually in situations like this, I think that the best thing to do is to figure out why the payoffs work in that way and then try to work with you to beat the system. If that's not possible now, then I would usually announce my intention to cooperate, then do so, to build trust (and maybe guilt if you defect) for future interactions.
If I'm playing the game as part of an experiment, so that it really is just a game in the ordinary sense, then I would try to predict your behaviour and play accordingly; this has much more to do with psychology than game theory. I wouldn't have to force you to cooperate on the first 50 turns if I could convince you of the truth: that I would cooperate on those turns anyway, because I already predict that you will cooperate on those turns.
If the centipede game, or any of the standard examples from game theory, really is the entire world, then freedom really isn't a very meaningful concept anyway.
Replies from: cousin_it↑ comment by cousin_it · 2010-07-22T09:02:59.789Z · LW(p) · GW(p)
Well, if you make it impossible for you to play down, then that's a perfectly valid exercise of your control over your own life, isn't it?
Then you make it a tautology that "freedom is good", because any restriction on freedom that leads to an increase of good will be rebranded as a "valid exercise of control". Maybe I should give an example of the reverse case, where adding freedom makes everyone worse off. See Braess's paradox: adding a new free road to the road network, while keeping the number of drivers constant, can make every driver take longer to reach their destination. (And yes, this situation has been observed to often occur in real life.) Of course this is just another riff on the Nash equilibrium theme, but you should think more carefully about what your professed values entail.
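For anyone who wants the numbers, the stock textbook instance of Braess's paradox checks out in a few lines. The figures below (4000 drivers, 45-minute fixed links, x/100 congestion links) are the standard example usually quoted, not anything from this thread:

```python
# Braess's paradox, stock example: 4000 drivers travel from S to E.
# Route 1: S->A takes x/100 minutes (x = cars on that link), then A->E takes 45.
# Route 2: S->B takes 45 minutes, then B->E takes x/100.

N = 4000

# Without the extra road, the equilibrium splits traffic evenly by symmetry:
before = (N / 2) / 100 + 45      # 65 minutes for every driver

# Add a free shortcut A->B. S->A costs at most N/100 = 40 < 45, so it
# dominates S->B; likewise B->E dominates A->E. Every driver ends up on
# S->A->B->E, and no unilateral deviation helps (either detour costs 85):
after = N / 100 + 0 + N / 100    # 80 minutes for every driver

print(before, after)             # 65.0 80.0 -- the new road hurts everyone
```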
Replies from: TobyBartels, Vladimir_Nesov↑ comment by TobyBartels · 2010-07-22T10:20:18.365Z · LW(p) · GW(p)
Then you make it a tautology that "freedom is good"
Yes, it's my ultimate social value! That's not a tautology, but an axiom. I don't like it because I believe that it maximises happiness (or whatever), I just like it.
Braess's paradox
Yes, this is more interesting, especially when closing a road would improve traffic flow. People have to balance their desire to drive on the old road with their desire to drive in decongested traffic. If the drivers have control over whether to close the road, then the paradox dissolves (at least if all of the drivers think alike). But if the road closure is run by an outside authority, then I would oppose closing the road, even if it's ‘for their own good’.
Replies from: cousin_it↑ comment by cousin_it · 2010-07-22T10:46:31.484Z · LW(p) · GW(p)
Also maybe relevant: Sen's paradox. If you can't tell, I love this stuff and could go on listing it all day :-)
Replies from: TobyBartels↑ comment by TobyBartels · 2010-07-23T07:53:20.882Z · LW(p) · GW(p)
As currently described at your link, that one doesn't seem so hard. Person 2 simply says to Person 1 ‘If you don't read it, then I will.’, to which Person 1 will agree. There's no real force involved; if Person 1 puts down the book, then Person 2 picks it up, that's all. I know that this doesn't change the fact that the theorem holds, but the theorem doesn't seem terribly relevant to real life.
But Person 1 is still being manipulated by a threat, so let's apply the idea of freedom instead. Then the preferences of Persons 1 and 2 may begin as in the problem statement, but Person 1 (upon sober reflection) allows Person 2's preferences to override Person 1's preferences, when those preferences are only about Person 2's life, and vice versa. Then Person 1 and Person 2 both end up wanting y,z,x; Person 1 grudgingly, but with respect for Person 2's rights, gives up the book, while Person 2 refrains from any manipulative threats, out of respect for Person 1.
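For reference, the usual "Lady Chatterley's Lover" rendering of Sen's result can be checked mechanically. The labels below follow the standard presentation and may not match the linked write-up exactly:

```python
# Sen's liberal paradox, standard example. Alternatives:
#   'a' = Prude reads the book, 'b' = Lewd reads it, 'c' = nobody reads it.
import itertools

prude = ['c', 'a', 'b']  # Prude's ranking, best to worst
lewd = ['a', 'b', 'c']   # Lewd's ranking, best to worst

def prefers(ranking, x, y):
    return ranking.index(x) < ranking.index(y)

social = set()
# Minimal liberalism: each person is decisive over whether *they* read.
if prefers(prude, 'c', 'a'):
    social.add(('c', 'a'))  # Prude would rather nobody read than read himself
if prefers(lewd, 'b', 'c'):
    social.add(('b', 'c'))  # Lewd would rather read than have nobody read
# Weak Pareto: unanimous preferences become social preferences.
for x, y in itertools.permutations('abc', 2):
    if prefers(prude, x, y) and prefers(lewd, x, y):
        social.add((x, y))

print(sorted(social))  # [('a', 'b'), ('b', 'c'), ('c', 'a')] -- a cycle
```

The three derived preferences form a cycle, so no alternative is socially best; that's the theorem's bite, whatever one makes of the threat-based escape.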
↑ comment by Vladimir_Nesov · 2010-07-22T09:07:53.354Z · LW(p) · GW(p)
More freedom makes signaling of what you'll actually do more difficult. All else equal, freedom is good.
Replies from: TobyBartels, cousin_it↑ comment by TobyBartels · 2010-07-22T09:52:54.815Z · LW(p) · GW(p)
More freedom makes signaling of what you'll actually do more difficult.
Yes, this is something that I worry about. You can try to force your signal to be accurate by entering a contract, but even if you signed a contract in the past, how can anybody enforce the contract now without impinging on your present freedom? The best that I've come up with so far is to use trust metrics, like a credit rating. (Payment of debts is pretty much unenforceable in the modern First World, which is why they invented credit reports.)
Replies from: cousin_it, Vladimir_Nesov↑ comment by cousin_it · 2010-07-22T10:28:18.357Z · LW(p) · GW(p)
What Nesov said.
Thomas Schelling gives many examples of incentivising agreements instead of enforcing them. Here's one: you and I want to spend 1 million dollars each on producing a nonexcludable common good that will give each of us 1.5 million in revenue. (So each dollar spent on the good creates 1.5 dollars in revenue that have to be evenly split among us both, no matter who spent the initial dollar.) Individually, it's better for me if you spend the million and I don't, because this way I end up with 1.75 million instead of 1.5. Schelling's answer is spreading the investment out in time: you invest a penny, I see it and invest a penny in turn, and so on. This way it costs almost nothing for us both to establish mutual trust from the start, and it becomes rational to keep cooperating every step of the way.
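A quick check of the arithmetic (the million-dollar figures are from the comment; the function and the note about the penny schedule are my own toy rendering):

```python
# Each dollar either player spends on the common good yields 1.5 dollars of
# revenue, split evenly between both players regardless of who spent it.

def final_wealth(my_spend, other_spend, start=1_000_000):
    revenue_share = 1.5 * (my_spend + other_spend) / 2
    return start - my_spend + revenue_share

print(final_wealth(1e6, 1e6))  # 1500000.0 -- mutual investment
print(final_wealth(0, 1e6))    # 1750000.0 -- free-riding on the other's million
print(final_wealth(1e6, 0))    # 750000.0  -- being free-ridden on

# Schelling's penny-at-a-time schedule shrinks the temptation: defecting on
# a single matched penny nets only a quarter of a cent (keep 1 cent, forgo
# 0.75 cents of revenue share) while risking the other side's remaining
# cooperation, so matching investments stays rational at every step.
```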
Replies from: TobyBartels↑ comment by TobyBartels · 2010-07-23T08:17:41.360Z · LW(p) · GW(p)
The paradoxical decision theorist would still say, ‘You fool! Don't put in a penny; your rational opponent won't reciprocate, and you'll be out a farthing.’. Fortunately nobody behaves this way, and it wouldn't be rational to predict it.
I would probably put in half a million right away, if I don't know you at all other than knowing that you value the good like I do. I'm sure that you can find a way to manipulate me to my detriment if you know that, since it's based on nothing more than a hunch; and actually this is the sort of place where I would expect to see a lecture as to exactly how you would do so, so please fire away! (Of course, any actual calculation as to how fast to proceed depends on the time discounting and the overhead of it all, so there is no single right answer.)
I agree, slowly building up trust over time is an excellent tactic. Looking up somebody's trust metric is only for strangers.
↑ comment by Vladimir_Nesov · 2010-07-22T09:55:00.748Z · LW(p) · GW(p)
You are never free to change what you actually are and what you actually want, so these invariants can be used to force a choice on you by making it the best one available.
↑ comment by cousin_it · 2010-07-22T09:08:50.494Z · LW(p) · GW(p)
Um, Braess's paradox doesn't involve signaling.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-07-22T09:48:42.543Z · LW(p) · GW(p)
That's the reason bad things happen. Before the added capacity, drivers' actions are restricted by the problem statement, so signaling isn't needed; its role is filled. If all drivers decide to ignore the addition, and effectively signal to each other that they actually will, they end up with the old plan, better than otherwise, and so would choose to precommit to that restriction. More freedom made signaling the same plan more difficult, by reducing information. But of course, with the new capacity they could in principle find an even better plan, if only they could precommit to it (coordinate their actions).
comment by WrongBot · 2010-06-21T20:42:41.412Z · LW(p) · GW(p)
Hi all.
I found this site through Methods of Rationality (as I suspect many have, of late). I've been reading through the sequences and archives for a while, and am finally starting to feel up to speed enough to comment here and there.
My name is Sam. I'm a programmer, mostly interested in writing and designing games. Oddly enough, my username derives from my much-neglected blog, which I believe predated this website.
I've always relished discovering that I'm wrong; if there's a better way to consistently improve the accuracy of one's beliefs, I'm not aware of it. So the LW approach makes an awful lot of sense to me, and I'm really enjoying how much concentrated critical thinking is available in the archives.
I'm also polyamorous, and so I'm considering a post or two on how polyamory (and maybe other kinds of alternative sexualities) relates to the practice of rationality. Would there be any interest in that sort of thing? I don't want to drag a pet topic into a place it's unwanted.
Furthermore, I am overfond of parentheses and semicolons. I apologize in advance.
Replies from: RobinZ, Blueberry↑ comment by RobinZ · 2010-06-22T01:01:30.917Z · LW(p) · GW(p)
Hello! I like your blog.
I have a bit harsher filter than a number of prolific users of Less Wrong, I think - I would, pace Blueberry, like to see discussion of polyamory here only if you can explain how to apply the insights to other fields as well. I would be interested in the material, but I don't think this is the context for the merely interesting.
Replies from: WrongBot↑ comment by WrongBot · 2010-06-22T02:37:42.942Z · LW(p) · GW(p)
The post I'm envisioning is less an analysis of polyamory as a lifestyle and more about what I'm tentatively calling the monogamy bias. While the science isn't quite there (I think; I need to do more research on the topic) to argue that a bias towards monogamy is built into human brain chemistry, it's certainly built into (Western) society. My personal experience has been that overcoming that bias makes life much more fun, so I'd probably end up talking about how to analyze whether monogamy is something a person might actually want.
The other LW topic that comes out of polyamory is the idea of managing romantic jealousy, which ends up being something of a necessity. Depending on how verbose I get, those may or may not get combined into a single post.
In any case, would either of those pass your (or more general) filters?
Replies from: Vladimir_M, RobinZ, wedrifid↑ comment by Vladimir_M · 2010-06-22T04:19:35.458Z · LW(p) · GW(p)
I certainly find quality discussions about such topics interesting and worthwhile, and consistent with the mission statement of advancing rationality and overcoming bias, but I'm not sure if the way you define your proposed topic is good.
Namely, you speak of the possibility that "bias towards monogamy is built into human brain chemistry," and claim that this bias is "certainly built into (Western) society." Now, in discussing topics like these, which present dangerous minefields of ideological biases and death-spirals, it is of utmost importance to keep one's language clear and precise, and avoid any vague sweeping statements.
Your statement, however, doesn't make it clear whether you are talking about a bias towards social norms encouraging (or mandating) monogamy, or about a bias towards monogamy as a personal choice held by individuals. If you're arguing the first claim, you must define precisely the metric you use to evaluate different social norms, which is a very difficult problem. If you're arguing the second one, you must establish which precise groups of people your claim applies to, and which not, and what metric of personal welfare you use to establish that biased decisions are being made. In either case, it seems to me that establishing a satisfactory case for a very general statement like the one you propose would be impossible without an accompanying list of very strong disclaimers.
Therefore, I'm not sure if it would be a good idea to set out to establish such a general and sweeping observation, which would, at least to less careful readers, likely be suggestive of stronger conclusions than what has actually been established. Perhaps it would be better to limit the discussion to particular, precisely defined biases on concrete questions that you believe are significant here.
Replies from: WrongBot↑ comment by WrongBot · 2010-06-22T06:02:24.291Z · LW(p) · GW(p)
I think I grouped my ideas poorly; the two kinds of bias you point out would be better descriptions of the two topics I'm thinking of writing about. (And they definitely seem to be separate enough that I shouldn't be writing about them in the same post.) So, to clarify, then:
Topic 1: Individuals in industrialized cultures (but the U.S. more strongly than most, due to religious influence) very rarely question the default relationship style of monogamy in the absence of awareness of other options, and usually not even then. This is less of a bias and more of a blind spot: there are very few people who are aware that there are alternatives to visible monogamy. Non-consensual non-monogamy (cheating) is, of course, something of a special case. I'm not sure if there's an explicit "unquestioned assumptions that rule large aspects of your life" category on LW, but that kind of material seems to be well-received. I'd argue that there's at least as much reason to question the idea that "being monogamous is good" as the idea that "being religious is good." Of course my conclusions are a little different, in that one's choice of relationship style is ultimately a utilitarian consideration, whereas religion is nonsense.
Topic 2: Humans have a neurological bias in favor of (certain patterns of behavior associated with) monogamy. This would include romantic jealousy, as mentioned. While the research in humans is not yet definitive, there's substantial evidence that the hormone vasopressin, which is released into the brain during sexual activity, is associated with pair-bonding and male-male aggression. In prairie voles, vasopressin production seems to be the sole factor in whether or not they mate for life. Romantic/sexual jealousy is a cultural universal in humans, and has no known purpose other than to enforce monogamous behavior. So there are definitely biological factors that affect one's reasoning about relationship styles; it should be obvious that if some people prefer to ignore those biological factors, they see some benefit in doing so. I can say authoritatively that polyamory makes me happier than monogamy does, and I am not so self-absorbed as to think myself alone in this. Again, this is a case where at least some people can become happier by debiasing.
And that still leaves Topic 3: jealousy management, which I imagine would look something like the sequence on luminosity or posts on akrasia (my personal nemesis).
Thanks for your comment; it's really helped me clarify my organizational approach.
Replies from: CronoDAS↑ comment by CronoDAS · 2010-06-22T07:16:25.034Z · LW(p) · GW(p)
Several of us have enough trouble forming and maintaining even a single romantic relationship. :(
Replies from: khafra↑ comment by RobinZ · 2010-06-22T03:25:38.421Z · LW(p) · GW(p)
Let me give an example of a topic that I think would pass my filter: establish that there is a bias (i.e. erroneous heuristic) toward monogamy, reverse-engineer the bias, demonstrate the same mechanisms working in other areas, and give suggestions for identifying other biases created by the same mechanism.
Let me give an example of a topic that I think would not pass my filter: establish that there is a bias towards monogamy, demonstrate the feasibility and desirability of polygamy, and offer instructions on how to overcome the bias and make polyamory an available and viable option.
Does that make sense?
↑ comment by wedrifid · 2010-06-22T05:21:23.892Z · LW(p) · GW(p)
The first is clearly about rational choices, psychological and social biases, and the balance of incorporating existing instincts and overriding them with optimizations. That is right on topic, so stop asking for permission and approval and go for it. Anything to do with social biases will inevitably have some people disapproving of it. That is how social biases get propagated! Here is not the place to be dominated by that effect.
As for the romantic jealousy thing... I don't see the relevance to rationality myself but if you think it is an effective way to demonstrate some rationalist technique or concept then go for it.
↑ comment by Blueberry · 2010-06-21T20:46:02.124Z · LW(p) · GW(p)
Welcome!
I'm considering a post or two on how polyamory (and maybe other kinds of alternative sexualities) relates to the practice of rationality.
I'd certainly be very interested. The topic has come up a few times before; try searching in the search box on the right. I think the post would be well received, especially if you can explain how to apply the insights from polyamory to other fields as well.
Furthermore, I am overfond of parentheses and semicolons.
It's ok; I am too (they're hard to resist).
comment by ValH · 2010-05-07T13:21:28.443Z · LW(p) · GW(p)
I'm Valerie, 23 and a brand new atheist. I was directed to LW on a (also newly atheist) friend's recommendation and fell in love with it.
Since identifying as an atheist, I've struggled a bit with 'now what?' I feel like a whole new world has opened up to me and there is so much out there that I didn't even know existed. It's a bit overwhelming, but I'm loving the influx of new knowledge. I'm still working to shed old patterns of thinking and work my way into new ones. I have the difficulty of reading something and feeling that I understand it, but not being able to articulate it again (something left over from defending my theistic beliefs, which had no solid basis). I think I just need some practice :)
EDIT: Your link to the series of posts on why LW is generally atheistic is broken. Which makes me sad.
Replies from: ata, alexflint↑ comment by ata · 2010-05-07T13:55:04.214Z · LW(p) · GW(p)
Welcome!
The page on LW's views on religion (or something like that page — not sure if the old wiki's content was migrated directly or just replaced) is now here. The Mysterious Answers to Mysterious Questions, Reductionism, and How To Actually Change Your Mind sequences are also relevant, in that they provide the background knowledge sufficient to make theism seem obviously wrong. Sounds like you're already convinced, but those sequences contain some pretty crucial core rationalist material, so I'd recommend reading them anyway (if you haven't already).
If there's anything in particular you're thinking "now what?" about, I and others here would be happy to direct you to relevant posts/sequences and help with any other questions about life, the universe, and everything. (Me, I recently decided to go back to the very beginning and read every post and the comments on most of them... but I realize not everyone's as dedicated/crazy (dedicrazy?) as me. :P)
↑ comment by Alex Flint (alexflint) · 2010-05-07T13:56:21.278Z · LW(p) · GW(p)
Welcome! I hope you enjoy the posts and discussion here, and suggest ways that it could be improved.
comment by ThoughtDancer · 2009-04-16T17:59:46.783Z · LW(p) · GW(p)
- Handle: thoughtdancer
- Name: Deb
- Location: Middle of nowhere, Michigan
- Age: 44
- Gender: Female
- Education: PhD Rhetoric
- Occupation: Writer-wannabe, adjunct Prof (formerly tenure-track, didn't like it)
- Blog: thoughtdances (just starting, be gentle please)
I'm here because of SoullessAutomaton, who is my apartment-mate and long-term friend. I am interested in discussing rhetoric and rationality. I have a few questions that I would pose to the group to open up the topic.
1) Are people interested in rhetoric, persuasion, and the systematic study thereof? Does anyone want a primer? (My PhD is in the History and Theory of Rhetoric, so I could develop such a primer.)
2) What would a rationalist rhetoric look like?
3) What would be the goals / theory / overarching observations that would be the drivers behind a rationalist rhetoric?
4) Would a rationalist rhetoric be more ethical than current rhetorics, and if so, why?
5) Can rhetoric ever be fully rational and rationalized, or is the study of how people are persuaded inevitably or inherently a-rational or anti-rational (I would say that rhetoric can be rationalized, but I know too many scholars who would disagree with me here, either explicitly or implicitly)?
6) Question to the group: to what degree might unfamiliar terminology derived from prior discussions here and in the sister-blog be functioning as an unintentional gatekeeper? Corollary question: to what degree is the common knowledge of math and sciences--and the relevant jargon terms thereof--functioning as a gatekeeper? (As an older woman, I was forbidden from pursuing my best skill--math--because women "didn't study math". I am finding that I have to dig pretty deeply into Wikipedia and elsewhere to make sure I'm following the conversation--that or I have to pester SoullessAutomaton with questions that I should not have to ask. sigh)
Replies from: MBlume, mattnewport↑ comment by MBlume · 2009-04-16T21:24:01.644Z · LW(p) · GW(p)
I rather like Eliezer's description of ethical writing given in rule six here. I'm honestly not sure why he doesn't seem to link it anymore.
Ethical writing is not "persuading the audience". Ethical writing is not "persuading the audience of things I myself believe to be true". Ethical writing is not even "persuading the audience of things I believe to be true through arguments I believe to be true". Ethical writing is persuading the audience of things you believe to be true, through arguments that you yourself take into account as evidence. It's not good enough for the audience unless it's good enough for you.
Replies from: Bongo
↑ comment by Bongo · 2009-04-17T11:55:05.690Z · LW(p) · GW(p)
That's what I was going to reply with. To begin with, a rationalist style of rhetoric should force you to write/speak like that, or make it easy for the audience to tell whether or not you do.
(Rationalist rhetoric can mean at least three things: ways of communication you adopt in order to be able to deliver your message as rationally and honestly as possible, not in order to persuade; techniques that persuade rationalists particularly well; or new forms of dark arts discovered by rationalists)
(We should distinguish between forms of rhetoric that optimize for persuasion and those that optimize for truth. Eliezer's proposed "ethical writing" seems to optimize for truth. That is, if everyone wrote like that, we would find out more truths, and lying - or even persuading people of untruths - would be harder. Though it's also awfully persuasive... On the other hand, political rhetoric probably optimizes for persuasion, insofar as it involves knowingly persuading people of lies and bad policies.)
↑ comment by mattnewport · 2009-04-16T20:33:16.163Z · LW(p) · GW(p)
1) Yes, I'm interested.
2) I suspect that the study of rhetoric is already fairly rationalist, in the sense of rationality being about winning. Rhetoric seems to be the disciplined/rational study of how to deliver persuasive arguments. I suspect many aspiring rationalists attempt to inoculate themselves against the techniques of rhetoric because they desire to believe what is true rather than what is most convincingly argued. A rationalist rhetoric might then be a rhetoric which does not trigger the rationalist cognitive immune system and thus is more effective at persuading rationalists.
3) From my point of view the only goal is success - winning the argument. Everything else is an empirical question.
4) Not necessarily. Since rationalists attempt to protect themselves against well-sounding but false arguments, rationalist rhetoric might focus more on avoiding misleading or logically flawed arguments but only as a means to an end. The goal is still to win the argument, not to be more ethical. To the extent that signaling a desire to be ethical helps win the argument, a rationalist rhetoric might do well to actually pre-commit to being ethical if it could do so believably.
5) I think the study of rhetoric can absolutely be rational - it is after all about winning. The rational study of how people are irrational is not itself irrational.
6) My feeling is that the answer is 'to a significant degree' but it's a bit of an open question.
comment by zslastman · 2012-06-24T16:30:18.799Z · LW(p) · GW(p)
I'm a 24-year-old PhD student of molecular biology. I arrived here trying to get at the many worlds vs. Copenhagen debate as a nonspecialist, and as part of a sustained campaign of reading that will allow me to tell a friend who likes Hegel where to shove it. I'm also here because I wanted to reach a decision about whether I really want to do biology; if not, whether I should quit; and if I leave, what I actually want to do.
comment by phonypapercut · 2012-06-20T23:35:39.155Z · LW(p) · GW(p)
Hello. I've been browsing articles that show up on the front page for about a year now. Just recently started going through the sequences and decided it would be a good time to create an account.
comment by jwhendy · 2011-01-06T02:53:13.886Z · LW(p) · GW(p)
Hi, I've been hanging around for several months now and decided to join. My name is John and I found the site (I believe) via a link on CommonSenseAtheism to How to actually change your mind. I read through many of those posts and took notes and resonated with a lot. I loved EY's Twelve Virtues and the Litany of Gendlin.
I'm a graduate in mechanical engineering and work as one today. I don't know that I would call myself a rationalist, but only because I perhaps haven't become one yet. In other words, I want to be, but I do not consider myself well-versed in rationalist methods and thought compared to the posts/comments I read here.
To close, I was brought to this site in a round-about way because I have recently de-converted from Catholicism (which is what took me to CSA). I'm still amidst my "quest" and blog about it HERE. I would say I'm not sure god doesn't exist or that Christianity is false, but the belief is no longer there. I seek to be as certain and justified as I can in whatever beliefs I hold. LessWrong has seemed to be a good tool toward that end. I look forward to continuing to learn and want to take this opportunity to begin participating more.
Note: I also post as "Hendy" on several other blogs. We are the same.
comment by RedRobot · 2010-11-24T18:32:56.196Z · LW(p) · GW(p)
Hello!
I work in a semi-technical offshoot of (ducks!) online marketing. I've always had rationalist tendencies, and reading the material on this website has had a "coming home" feeling for me. I appreciate the high level of discourse and the low levels of status-seeking behaviors.
I am female, and I read with interest the discussion on gender, but unfortunately I do not think I can contribute much to that topic, because I have been told repeatedly that I am "not like other women." I certainly don't think it would be a good idea to generalize from my example what other women think or feel (although to be honest the same could be said about my ability to represent the general populace).
I found my way here through the Harry Potter story, which a friend sent to me knowing that I would appreciate the themes. I am enjoying it tremendously.
comment by Axel · 2010-11-12T22:33:06.570Z · LW(p) · GW(p)
My name's Axel Glibert. I'm 21, I just finished studying Biology, and now I'm going for a teaching job. I found this wonderful site through HP and the Methods of Rationality, and it has been an eye-opener for me.
I've been raised in a highly religious environment, but it didn't take very long before I threw that out of the window. Since then I've had to make my own moral rules and attempts at understanding how the universe works. My first "scientific experiments" were rather ineffective, but they caused me to browse through the science section of the local library... and now, more than a decade later, here I am!
I have long thought I was the only one to so openly choose Science over Religion (thinking even scientists were secretly religious because it was the "right thing to do"), but then I found Less Wrong, filled with like-minded people! For the past 3 months I've been reading through the core sequences on this site, and now I've finally made an account. I'm still too intimidated by the sheer brilliance of some of the threads here to actually post, but that's just more motivation for me to study on my own.
Replies from: David_Gerard↑ comment by David_Gerard · 2010-12-10T21:49:30.893Z · LW(p) · GW(p)
Just to go cross-site (RW is slightly anti-endorsed by LW), would the Atheism FAQ for the Newly Deconverted have been of conceivable use to your recovering religious younger self?
Replies from: Axel↑ comment by Axel · 2010-12-27T23:00:45.003Z · LW(p) · GW(p)
Yes, that list has a lot of the answers I was looking for. However, for my younger self, breaking from religion meant making my own moral rules, so there is a good chance I would have rejected it as just another text trying to control my life. (Yes, my younger self was quite dramatic.)
comment by flori86 · 2010-11-08T16:23:10.033Z · LW(p) · GW(p)
I'm Floris Nool, a 24-year-old recently graduated Dutch ex-student. I came across this site while reading Harry's new rational adventures, which I greatly enjoy by the way. I must say I'm intrigued by several of the subjects being talked about here. Although not everything makes sense at first, and I'm still working my way through the immense amounts of interesting posts on this site, I find myself endlessly scrolling through posts and comments.
The last few years I increasingly find myself trying to understand things: why they are the way they are, why I act the way I do, and so on. I've been reading about the greater scientific theories and trying to relate them to everyday life. While I do not understand as much as I want to, and probably never will given the amounts of information and theories out there, I hope to come to a greater understanding of basically everything.
It's great to see so many people talking about these subjects, as in daily life hardly anyone seems to think about them the way I do. That can be rather frustrating when trying to talk about subjects I find interesting.
I hope to be able to some day contribute to the community as I see other posters do, but until I feel comfortable enough about my understanding of everything going on here I will stay lurking for a while. Only having discovered the site two days ago doesn't exactly help.
comment by Alex_Altair · 2010-07-21T21:01:24.528Z · LW(p) · GW(p)
I recently found Less Wrong through Eliezer's Harry Potter fanfic, which has become my second favorite book. Thank you so much, Eliezer, for reminding me how rich my Art can be.
I was also delighted to find out (not so surprisingly) that Eliezer was an AI researcher. I have, over the past several months, decided to change my career path to AGI. So many of these articles have been helpful.
I have been a rationalist since I can remember. But I was raised as a Christian, and for some reason it took me a while to think to question the premise of God. Fortunately as soon as I did, I rejected it. Then it was up to me to 1) figure out how to be immortal and 2) figure out morality. I'll be signing up for cryonics as soon as I can afford it. Life is my highest value because it is the terminal value; it is required for any other value to be possible.
I've been reading this blog every day since I found it, and hope to get constant benefit from it. I'm usually quiet, but I suspect the more I read, the more I'll want to comment and post.
Replies from: Vladimir_Nesov, steven0461↑ comment by Vladimir_Nesov · 2010-07-21T21:17:32.390Z · LW(p) · GW(p)
- AGI is death, you want Friendly AI in particular and not AGI in general.
- "Life" is not the terminal value, terminal value is very complex.
↑ comment by Alex_Altair · 2010-07-21T21:37:45.987Z · LW(p) · GW(p)
"AGI is death, you want Friendly AI in particular and not AGI in general."
I'm not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don't plan to give it outputs until I know what it does.
"'Life' is not the terminal value, terminal value is very complex."
I don't mean that life is the terminal value that all humans' actions reduce to. I mean it in exactly the way I said above; for me to achieve any other value requires that I am alive. I also don't mean that every value I have reduces to my desire to live, just that, if it comes down to one or the other, I choose life.
Replies from: Vladimir_Nesov, JGWeissman↑ comment by Vladimir_Nesov · 2010-07-21T21:49:38.655Z · LW(p) · GW(p)
If you are determined to read the sequences, you'll see. At least read the posts linked from the wiki pages.
I'm not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don't plan to give it outputs until I know what it does.
Well, you'll have the same chance of successfully discovering that AI does what you want as a sequence of coin tosses spontaneously spelling out the text of "War and Peace". Even if you have a perfect test, you still need for the tested object to have a chance of satisfying the testing criteria. And in this case, you'll have neither, as reliable testing is also not possible. You need to construct the AI with correct values from the start.
I don't mean that life is the terminal value that all human actions reduce to. I mean it in exactly the way I said above; for me to achieve any other value requires that I am alive.
Acting in the world might require your being alive, but it's not necessary for you to be alive in order for the world to have value, all according to your own preference. It does matter to you what happens to the world after you die. A fact doesn't disappear the moment it can no longer be observed. And it's possible to be mistaken about your own values.
↑ comment by JGWeissman · 2010-07-21T21:49:34.215Z · LW(p) · GW(p)
I'm not sure of the technical definition of AGI, but essentially I mean a machine that can reason. I don't plan to give it outputs until I know what it does.
I am not sure what you mean by "give it outputs", but you may be interested in this investigation of attempting to contain an AGI.
I don't mean that life is the terminal value that all human actions reduce to. I mean it in exactly the way I said above; for me to achieve any other value requires that I am alive. I also don't mean that every value I have reduces to my desire to live, just that, if it comes down to one or the other, I choose life.
Then I think you meant that "Life is the instrumental value."
Replies from: Nick_Tarleton
↑ comment by Nick_Tarleton · 2010-07-21T23:36:40.069Z · LW(p) · GW(p)
Then I think you meant that "Life is the instrumental value."
To amplify: Terminal Values and Instrumental Values
↑ comment by steven0461 · 2010-07-21T21:08:04.084Z · LW(p) · GW(p)
Life is my highest value because it is the terminal value; it is required for any other value to be possible.
A value that's instrumental to every other value is still instrumental.
comment by Tesseract · 2010-07-08T01:12:41.116Z · LW(p) · GW(p)
Hello! I'm Sam. I'm 17, a newly minted high school graduate, and I'll be heading off to Reed College in Portland, Oregon next month.
I discovered Less Wrong through a link (whose origin I no longer remember) to "A Fable of Science and Politics" a couple of months ago. The post was rather striking, and the site's banner was alluring, so I clicked on it. The result, over the past couple of months, has been a massive accumulation of bookmarks (18 directly from Less Wrong at the time of this writing) accompanied by an astonishing amount of insight.
This place is probably the most intellectually stimulating site I've ever found on the internet, and I'm very much looking forward to discovering more posts, as well as reading through the ones I've stored up. I have, until now, mostly read bits and pieces that I've seen on the main page or followed links to, partially because I haven't had time and partially because some of the posts can be intimidatingly academic (I don't have the math and science background to understand some of what Eliezer writes about), but I've made this account and plan to delve into the Sequences shortly.
To some degree, I think I've always been a rationalist. I've always been both inquisitive and argumentative (captain of my school's debate team, by the way), and those qualities combined tend to lead one to questioning established thought. Although my parents are mildly religious, I don't think I ever actually believed in God (haven't gone to synagogue since my Bar Mitzvah), and that lack of belief hardened into strong atheism.
I'm very fond of logic, and I've argued myself from atheism to materialism and hence to determinism, with utilitarianism thrown in along the way. They're not popular viewpoints, but they're internally consistent, and the world becomes much clearer and simpler when seen from them. I'm still trying to refine my philosophies to create a truly coherent view of the world. I very much enjoy Less Wrong both because it's a hub of my low-percentage philosophy and because it's uniquely clarifying in its perspectives.
I enjoy psychology and philosophy, the former of which I'm considering as a major, and was heavily influenced by reading The Moral Animal (which I highly recommend if you haven't already read it) during my freshman year of high school. I love reading, practice introspection, and am continually attempting to incorporate as much information as I can into my worldview.
I actually already have about one and a half posts ready (one on consciousness, one on post rem information), but I'll readily wait until I've read through the Sequences and accumulated some karma before I publish them.
I've written too much already, so I'll cut this off here. Once again: Hi everyone! My mind is open.
Replies from: lsparrish, Kevin
↑ comment by lsparrish · 2010-07-08T01:42:19.673Z · LW(p) · GW(p)
Good to meet you! If you're interested in cryonics at all, you'll be pleased to note that there is a local group headed by my friends Chana and Aschwin de Wolf. http://www.cryonicsoregon.com/
comment by chillaxn · 2010-04-29T01:54:22.587Z · LW(p) · GW(p)
Hi. I'm Cole from Maryland. I found this blog through a list of "greatest blogs of the year." I've forgotten who published that list.
I'm in my 23rd year. I value happiness and work to spread it to others. I've been reading this blog for about a month. I enjoy reading blogs like this, because I'm searching for a sustainable lifestyle to start after college.
Cheers
comment by taiyo · 2010-04-19T19:47:39.292Z · LW(p) · GW(p)
My name is Taiyo Inoue. I am 32, male, married, the father of a 1-year-old son, and a math professor. I enjoy playing the acoustic guitar (American primitive fingerpicking), playing games, and soaking up the non-poisonous bits of the internet.
I went through 12 years of math study without ever really learning that probability theory is the ultimate applied math. I played poker for fun for a bit during the easy-money boom and hit on basic probability theory that the 12-year-old me could have understood, but I was ignorant of the Bayesian framework for epistemology until I was 30 years old. This really annoys me.
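To illustrate the kind of basic update I mean, here is a toy poker example; every number in it is a made-up assumption, chosen only to show the mechanics, not real poker data:

```python
# Toy Bayesian update: how likely is a bluff, given a big bet?
# All probabilities below are illustrative assumptions.

p_bluff = 0.20               # prior: opponent bluffs 20% of the time
p_bet_given_bluff = 0.90     # a bluffing opponent bets big 90% of the time
p_bet_given_value = 0.40     # a value hand bets big 40% of the time

# Total probability of seeing a big bet:
p_bet = p_bet_given_bluff * p_bluff + p_bet_given_value * (1 - p_bluff)

# Bayes' theorem: P(bluff | bet) = P(bet | bluff) * P(bluff) / P(bet)
p_bluff_given_bet = p_bet_given_bluff * p_bluff / p_bet

print(f"P(bluff | big bet) = {p_bluff_given_bet:.2f}")  # 0.18 / 0.50 = 0.36
```

Nothing here is beyond middle-school arithmetic, which is exactly the point.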
I blame my education for leaving me ignorant about something so fundamental, but mostly I blame myself for not trying harder to learn about fundamentals on my own.
This site is really good for remedying that second bit. I have a goal to help fix the first bit -- I think we call it "raising the sanity waterline".
As a father, I also want to teach my son so he doesn't have the same regret and annoyance at my age.
Replies from: None
comment by clarissethorn · 2010-03-15T10:24:47.727Z · LW(p) · GW(p)
I go by Clarisse and I'm a feminist, sex-positive educator who has delivered workshops on both sexual communication and BDSM to a variety of audiences, including New York’s Museum of Sex, San Francisco’s Center for Sex and Culture, and several Chicago universities. I created and curated the original Sex+++ sex-positive documentary film series at Chicago’s Jane Addams Hull-House Museum; I have also volunteered as an archivist, curator and fundraiser for that venerable BDSM institution, the Leather Archives & Museum. Currently, I'm working on HIV mitigation in southern Africa. I blog at clarissethorn.wordpress.com and Twitter at @clarissethorn.
Besides sex, other interests include gaming, science fiction and fantasy, and housing cooperatives.
I've read some posts here that I thought had really awful attitudes about sexuality and BDSM in particular, so I'm sure I'll be posting about those. I would like it if people were more rational about sex, inasmuch as we can be.
Replies from: clarissethorn
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-15T10:34:28.080Z · LW(p) · GW(p)
I've read some posts here that I thought had really awful attitudes about sexuality and BDSM in particular
?? Not any of mine, I hope.
EDIT: I see, Phil Goetz on masochism. Well, I downvoted it. Not much else to say, aside from noting that it had net 4 points and that karma rules do make it easier to upvote than downvote.
This is a community blog and I think it's pretty fair to say that what has not been voted high or promoted ought not to be blamed on "Less Wrong".
Replies from: clarissethorn
↑ comment by clarissethorn · 2010-03-15T10:52:02.720Z · LW(p) · GW(p)
That's fair. And I'll add that, for a site populated mainly by entitled white guys (I kid, I kid), this one does much better at being generally feminist than most within that demographic.
PS It's kind of exciting to be talking to you, EY. Your article on heuristics and biases in the context of extinction events is one of my favorites ever. I probably think about it once a week.
comment by RobinZ · 2009-07-08T21:33:25.652Z · LW(p) · GW(p)
Ignoring the more obvious jokes people make in introduction posts: Hi. My name is Robin. I grew up in the Eastern Time Zone of the United States, and have lived in the same place essentially all my life. I was homeschooled by secular parents - one didn't discuss religion and the other was agnostic - with my primary hobby being the reading of (mostly) speculative fiction of (mostly) quite high quality. (Again, my parents' fault - when I began searching things out on my own, I was rather less selective.) The other major activity of my childhood was participation in the Boy Scouts of America.
I entered community college at the age of fifteen with an excellent grounding in mathematics, a decent grounding in physics, superb fluency with the English language (both written and spoken), and superficial knowledge of most everything else. After earning straight As for three years, I applied to four-year universities, and my home state university offered me a full ride. At present, I am a graduate student in mechanical engineering at the same institution.
In the meantime, I have developed an affection for weblogs, web comics, and online chess, much to the detriment of my sleep schedule and work ethic. I suspect I discovered Overcoming Bias through "My Favorite Liar" like everyone else, but Eliezer Yudkowsky's sequences (and, to a lesser extent, Robin Hanson's essays) were what drew me in. I lost interest around the time EY jumped to lesswrong.com, but was drawn back in when I opened up the bookmark again in the past day or so, particularly thanks to a few of Yvain's contributions.
Being all of twenty-four and with less worldly experience than the average haddock, I imagine I shan't contribute much to the conversation, but I'll give it my best shot.
(P.S. I am not registered for cryonics and I'm skeptical about the ultimate potential of AI. I'm a modern-American-style liberal registered as a Republican for reasons which seemed good at the time. Also, I am - as is obvious in person but not online - both male and black.)
Replies from: Alicorn, thomblake
↑ comment by Alicorn · 2009-07-08T21:45:27.334Z · LW(p) · GW(p)
Being all of twenty-four and with less worldly experience than the average haddock
What gave you the idea that anyone cares about age and experience around here? ;)
Replies from: RobinZ
↑ comment by RobinZ · 2009-07-09T02:11:31.044Z · LW(p) · GW(p)
Oh, I'm sure someone does, but the real reason I mentioned it is because I usually don't have a lot more to say about a subject than "that sounds reasonable to me". (:
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2009-07-09T10:11:10.030Z · LW(p) · GW(p)
So, that was a rationalization above the bottom line, the bottom line being the observation that you choose not to say much?
Replies from: RobinZ
↑ comment by thomblake · 2009-07-08T22:23:32.254Z · LW(p) · GW(p)
Welcome! As Alicorn pointed out, age and experience don't count for much here, as compared to rationality and good ol' fashioned book-learnin'. If it helps any, you even have more education than a lot of the folks about (though we have a minor infestation of doctors).
Replies from: RobinZ
comment by [deleted] · 2015-12-30T21:10:16.611Z · LW(p) · GW(p)
Hey everyone,
My name is Owen, and I'm 17. I read HPMOR last year, but really got into the Sequences and additional reading (GEB, Thinking Fast and Slow, Influence) around this summer.
I'm interested in time management, particularly in dealing with distractions and fighting akrasia. So I'm trying to use what I know about how my own brain operates to create a suite of internalized beliefs, primers, and defense strategies for when I get off-track (or for stopping before I get to that point).
Personally, I'm connected with a local environmental movement, which stems from a fear I had a few years ago that global warming was the largest threat to humanity. This was before I looked into other x-risks. I'm now evaluating my priorities, and I'd also like to bring some critical thinking to the environmental movement, where I feel some EA ideals would make things more effective (prioritizing some actions over others, examining the cost-benefit of actions, etc.).
Especially after reading GEB, I'm coming to realize that a lot of the things I hold central to my "identity" were rather arbitrarily decided and then maintained through a need to stay consistent. So I'm reevaluating my beliefs and assumptions (when I notice them) and asking whether they are actually things I would like to maintain. A lot of this ties back to self-improvement with regard to time management.
In day-to-day life, it's hard to find others who have an info diet and reading background similar to mine, so I've made getting more friends and family interested in rationality a goal, especially my (apparently) very grades-driven classmates. I feel this would lead to more constructive discussions and a better ability, for most people, to look at the larger picture.
Finally, I also perform close-up coin magic, which isn't too relevant to most aspects of rationality, but certainly looks pretty.
I look forward to sharing ideas and learning from you all here!
Replies from: gjm
comment by [deleted] · 2011-10-18T18:25:15.582Z · LW(p) · GW(p)
Hello Lesswrong
I am a nameless, ageless, genderless internet-being who may sometimes act like a 22-year-old male from Canada. I have always been quite rational, and consciously aim to become more rational, though I had never read any actual discussion of rationality, unless you count cat-v. I did have some possibly wrong ideas that I protected with anti-epistemology, but that managed to collapse on its own recently.
I got linked to Less Wrong from reddit. I didn't record the details, so don't ask. I do remember reading a few Less Wrong articles and thinking, "this is awesome." Then I read the Sequences. The formal treatment of rationality has really de-crufted my thinking. I'm still working on getting to a superhuman level of rationality, though.
I do a lot of thinking and I have some neat ideas to post. Can't wait.
Also, my human alter-ego is formally trained as a mechanical engineer.
I hope to contribute and make the world more awesome!
Replies from: kilobug
comment by Michelle_Z · 2011-07-14T23:49:47.639Z · LW(p) · GW(p)
Hello LessWrong!
My name is Michelle. I am from the United States and am entering college this August. I am a graphic design student who is also interested in public speaking. I was led to this site one day while browsing fanfiction. I am an avid reader and spend a good percentage of my life reading novels and other literature. I read HPMOR and found the story intriguing and the theories very interesting. When I finally reached the end, I read the author's page and realized that I could find more information on the ideas presented in the book. Naturally, I was delighted. The ideas were mainly why I kept reading. I had not encountered anything similar and found it refreshing to read something with so many theories that rang true to my ear.
I am not a specialist in any science or math field. I consider rationality to be something I wish everyone would get interested in. I really want this idea to stick in more people's heads, but I know better than to preach it. I hope to help people become more involved in it, and to learn more about rationality and the like.
I'm learning. I'm no expert and hardly consider myself a rationalist. If this were split into ranks like karate, I'd still be a white belt.
I'm looking forward to learning more about rationality, philosophy, and science with all of you here, and hopefully one day contributing, myself!
Replies from: None, Alicorn
↑ comment by [deleted] · 2011-12-20T16:08:13.330Z · LW(p) · GW(p)
Greetings!
While I naturally feel superior to people who came here via fanfiction.... I want to use this opportunity to peddle some of the fiction that got me here way back in 2009.
Replies from: Michelle_Z
↑ comment by Michelle_Z · 2011-12-25T02:51:25.649Z · LW(p) · GW(p)
I've read that, as well.
↑ comment by Alicorn · 2011-07-15T00:06:18.487Z · LW(p) · GW(p)
Here, have some more fanfiction!
Replies from: Michelle_Z
↑ comment by Michelle_Z · 2011-07-15T00:11:36.078Z · LW(p) · GW(p)
Not a huge fan of the Twilight series, but I'll pick it up when I have a bit more time to get into it. I am currently working on a summer essay for college. In other words, I am productively procrastinating by reading this blog instead of writing the remaining two thirds of my essay.
Replies from: Alicorn
↑ comment by Alicorn · 2011-07-15T00:27:07.194Z · LW(p) · GW(p)
You don't have to be a fan of Twilight. A lot of people who like my fic hate canon Twilight.
Replies from: Michelle_Z
↑ comment by Michelle_Z · 2011-07-15T00:33:17.903Z · LW(p) · GW(p)
I'll give it a look, then.
comment by Ronny Fernandez (ronny-fernandez) · 2011-06-15T12:38:02.562Z · LW(p) · GW(p)
Hello, Less Wrong.
I've been reading Yudkowsky for a while now. I'm a philosophy major from NJ, and he's been quite popular around here since I showed some of my friends Three Worlds Collide. I am here because I think I can offer this forum new and well-considered views on cognition, computability, epistemology, ontology, and valid inference in general, and also to have my views kicked around a bit. Hopefully our mutual kicking around of each other's views will toughen them up for future kicking battles.
I have studied logic at high levels, and I have an intricate understanding of Gödel's incompleteness theorems and of Tarski's undefinability theorem. I plan to write short posts that might make the two accessible when I have the karma to do so. So the sooner you give me 20 karma, the sooner you will have a non-logician-friendly explanation of Gödel's first incompleteness theorem.
Replies from: None, Benquo
↑ comment by Benquo · 2011-06-15T13:00:36.501Z · LW(p) · GW(p)
Welcome! It sounds like you have a lot to offer here.
You could put your Gödel post in the discussion section now (it only requires 2 karma to do that) and transfer it to the main page later if/when it's popular. The karma threshold is not very high, but asking for free karma instead of building up a record of comments and discussion posts defeats the purpose of the 20-karma threshold.
Replies from: ronny-fernandez
↑ comment by Ronny Fernandez (ronny-fernandez) · 2011-06-15T21:49:30.593Z · LW(p) · GW(p)
Good point. I've already written a discussion page to get people talking about the epistemic status of undecidable propositions, but I feel like a full description of Gödel's first incompleteness theorem might be a bit much for a discussion page.
comment by DavidAgain · 2011-03-12T17:19:38.680Z · LW(p) · GW(p)
Hi
Didn't realise that this thread existed, so this 'hello' comes after 20 or so posts. Oh well! I found Less Wrong because my brother recommended TVtropes, which linked to Harry Potter and the Methods of Rationality, and THAT led me back here. I've now recommended this site to my brother, completing the circle.
I've always been interested in rationality, I guess: I wouldn't identify any particular point of 'becoming a rationalist', though I've had times where I've come across ideas that help me be more accurate. Including some on here, actually. There's a second strand to my interest: I work in government and am interested in practical applications of rational thinking in large and complex organisations.
The Singularity Institute and Future of Humanity stuff is not something I've looked at before: I find it fairly interesting on an amateur level, and have some philosophy background that means the discussions make sense to me. I have zero computer science though, and generally my education is in the humanities rather than anything scientific.
Replies from: free_rip, Alexandros
↑ comment by free_rip · 2011-03-29T08:43:11.092Z · LW(p) · GW(p)
Hi, David. I was very happy when I read
I work in government and am interested in practical applications of rational thinking in large and complex organisations.
A huge number of people here have math/computing/science majors and/or jobs. I'm in the same basket as you, though - very interested in the applications of rationality, but with almost no education relevant to it. I'm currently stuck between politics and academia (in psychology, politics, economics maybe?) as a career choice, but either way...
And we need that - people from outside the field, who extend the ideas into other areas of society, whether we understand it all in depth or not.
So best of luck to you! And as Alexandros says, don't hesitate to put a post in the discussion forum with any progress, problems or anything of interest you come across in your quest. I'll be keeping an eye out for it.
↑ comment by Alexandros · 2011-03-14T09:55:17.918Z · LW(p) · GW(p)
Welcome! If you do make any progress on your quest, do share your findings with us.
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-18T12:02:14.328Z · LW(p) · GW(p)
Hi everyone!
I found this blog by clicking a link on Eliezer's site...which I found after seeing his name in a transhumanist mailing list...which I subscribed to after reading Ray Kurzweil's The Singularity is Near when I was fifteen. I found Harry Potter and the Methods of Rationality at the same time, and I've now successfully addicted my 16-year-old brother as well.
I'm 19 and I'm studying nursing in Ottawa. I work as a lifeguard and swim instructor at a Jewish Community Centre. (I'm not Jewish.) I sing in a girls' choir at an Anglican church. (I'm not Christian.) This usually throws people off a little. My favourite hobbies are writing and composing music. I can program in Java at a fairly beginner level after taking one class as my elective.
I've been reading this site for about a year and I decided it was time to start being useful. Cheers!
comment by luminosity · 2010-06-17T05:04:17.633Z · LW(p) · GW(p)
Hi there,
My name is Lachlan, 25 years old, and I too am a computer programmer. I found Less Wrong via Eliezer's site, having been linked there by a comment on Charles Stross's blog, if I recall correctly.
I've read through a lot of the LW backlog and generally find it all very interesting, but I haven't yet taken the time and effort to apply the useful-seeming guidelines to my life and evaluate the results. I blame this on having left my job recently and feeling that I have enough change in my life right now. I worry that this excuse will metamorphose into another, though, and become a pattern of not examining my thinking as well as I can.
All that said, I do often catch myself thinking thoughts that on examination don't hold up, and re-evaluating them. The best expression of this that I've seen is Pratchett's First, Second, and Third Thoughts.
Replies from: Alicorn
↑ comment by Alicorn · 2010-06-17T06:14:01.412Z · LW(p) · GW(p)
Love the username!
Replies from: luminosity
↑ comment by luminosity · 2010-06-18T01:20:11.596Z · LW(p) · GW(p)
Completely coincidental -- just a word I liked the sound of 10 years ago. It does fit in here rather well though.
comment by Gigi · 2010-06-02T15:23:44.719Z · LW(p) · GW(p)
Hi, everyone, you can call me Gigi. I'm a Mechanical Engineering student with a variety of interests ranging over everything from physics to art (unfortunately, I know more about the latter than the former). I've been reading LW frequently, and for long sessions, for a couple of weeks now.
I was attracted to LW primarily because of the apparent intelligence and friendliness of the community, and the fact that many of the articles illuminated and structured my previous thoughts about the world (I will not bother to name any here, many are in the Sequences).
While the rationalist viewpoint is fairly new to me (aside from various encounters where I could not identify ideas as "rationalist"), I am looking forward to expanding my intellectual horizons by reading, and hopefully eventually contributing something meaningful back to the community.
If anyone has recommendations for reading outside LW that may be interesting or relevant to me, I welcome them. I've got an entire summer ahead of me to rearrange my thinking and improve my understanding.
Replies from: Vive-ut-Vivas, RobinZ, NancyLebovitz
↑ comment by Vive-ut-Vivas · 2010-06-04T03:04:30.424Z · LW(p) · GW(p)
I'm a Mechanical Engineering student with a variety of interests ranging over everything from physics to art (unfortunately, I know more about the latter than the former).
Why "unfortunately"? I'd love to see more discussion about art on Less Wrong.
Replies from: Gigi
↑ comment by Gigi · 2010-06-04T04:59:57.973Z · LW(p) · GW(p)
Hah, the relative lack of discussion on art was exactly why it seemed to me as if the physics was more useful here. But who knows, I may be able to start up some discussion once I've gotten into the swing of things.
Replies from: RobinZ, NancyLebovitz, RomanDavis
↑ comment by RobinZ · 2010-06-04T18:54:00.647Z · LW(p) · GW(p)
There were Rationality and the English Language and Human Evil and Muddled Thinking a while ago, which brought in a literary angle (George Orwell, to be specific) - but I think Yudkowsky has talked before about how people disingenuously say they want "an artist's perspective". The relative lack of discussion of art is not a reflection of a particular lack of interest in art, but of the fact that we do not know what to say about art that is relevant to rationality.
(Although commentary spinning off of the drawing-on-the-right-side-of-the-brain insight into failure modes of illustration could be illuminating...)
Replies from: Gigi
↑ comment by Gigi · 2010-06-05T18:01:06.004Z · LW(p) · GW(p)
I've been thinking on that, actually. So far all I've come up with is that learning to exercise your creativity and to think more abstractly can help very much with finding new ways of approaching problems and looking at your universe, thereby helping to shed new light on certain subjects. The obvious flaw is, of course, that you can learn to be creative without art; there are legions of scientists who show it to be so.
If I happen to come up with something that I think is particularly relevant or interesting I will definitely show it to the community, though.
↑ comment by NancyLebovitz · 2010-06-04T07:52:14.537Z · LW(p) · GW(p)
I was thinking about recommending Effortless Mastery by Kenny Werner -- it's about the hard work of eliminating effort so as to become an excellent jazz musician, but it has more general application. For example, it's the only book I've seen about getting over anxiety-driven procrastination.
It seemed too far off topic, but now that you mention art....
↑ comment by RomanDavis · 2010-06-04T10:31:40.938Z · LW(p) · GW(p)
I've been trying to use drawing as a test case in this thread:
http://lesswrong.com/lw/2ax/open_thread_june_2010/23am
Just Ctrl+F my name and you'll find my derails and their replies.
↑ comment by RobinZ · 2010-06-03T01:00:57.385Z · LW(p) · GW(p)
Many people here loved Gödel, Escher, Bach by Douglas Hofstadter. It's quite a hodge-podge, but there's a theme underlying the eclectic goodness.
I have a peculiar fondness for Consciousness Explained by Daniel Dennett, which I find to be an excellent attempt (although [edit: I suspect] obsolete and probably flawed) to provide a reductionist explanation of an apparently-featureless phenomenon - many people, including many people here, found it dissatisfying.
I cannot think of other specifically LessWrongian recommendations off the top of my head - as NancyLebovitz said, elaboration would help.
Replies from: Gigi, mattnewport
↑ comment by Gigi · 2010-06-04T02:23:54.325Z · LW(p) · GW(p)
Gödel, Escher, Bach is definitely a good recommendation, at least it appears to be from my cursory research on it.
As to what sort of recommendations I am looking for, I've noticed that LW appears to have a few favorite philosophers (Dennett among them) and a few favorite topics (AI, bias, utilitarian perspective, etc.) which I might benefit from understanding better, nice as the articles are. Some recommendations of good books on some of LW's favorite topics would be a wonderful place to start.
Thanks much for your help.
Replies from: Nick_Tarleton
↑ comment by Nick_Tarleton · 2010-06-04T05:21:22.460Z · LW(p) · GW(p)
- Good and Real (excellent reductionist philosophy, touching on a lot of the same topics as LW)
- Global Catastrophic Risks
- Nick Bostrom's papers
↑ comment by mattnewport · 2010-06-03T01:10:22.252Z · LW(p) · GW(p)
I'm a fan of Consciousness Explained as well, though that may be partly nostalgia as in some ways I feel it marks the beginning of (or at least a major milestone on) my rationalist journey.
Replies from: Blueberry
↑ comment by Blueberry · 2010-06-03T01:37:11.541Z · LW(p) · GW(p)
Wow, I'm surprised to hear that two people referred to Consciousness Explained as obsolete. If there's a better book on consciousness out there, I'd love to hear about it.
Replies from: mattnewport, RobinZ
↑ comment by mattnewport · 2010-06-03T02:06:15.326Z · LW(p) · GW(p)
I didn't intend to imply I thought it was obsolete, just that I may hold it in higher regard because of when I read it than if I discovered it today.
↑ comment by RobinZ · 2010-06-03T01:43:53.237Z · LW(p) · GW(p)
As would I, actually. I guessed "obsolete" because the book came out in 1991 (and Dennett has written further books on the subject in the following nineteen years). I've not investigated its shortcomings.
Replies from: Blueberry
↑ comment by Blueberry · 2010-06-03T18:47:57.737Z · LW(p) · GW(p)
Good point: thanks. Dennett wrote Sweet Dreams in 2005 to update Consciousness Explained, and in the preface he wrote
The theory I sketched in Consciousness Explained in 1991 is holding up pretty well . . . I didn't get it all right the first time, but I didn't get it all wrong either. It is time for some revision and renewal.
I highly recommend Sweet Dreams to Gigi and anyone else interested in consciousness. (It's also shorter and more accessible than Consciousness Explained.)
Replies from: Gigi
↑ comment by Gigi · 2010-06-04T02:26:04.629Z · LW(p) · GW(p)
Thank you for the updated recommendation. I will probably look into reading Sweet Dreams. Would I benefit from reading Consciousness Explained first, or would I do well with just the one?
Replies from: Blueberry, Blueberry
↑ comment by Blueberry · 2010-06-04T08:43:34.835Z · LW(p) · GW(p)
I'd recommend reading them both, and you'd probably benefit from reading CE first. But I'd actually start with Gödel, Escher, Bach (by Hofstadter) and The Mind's I (which Dennett co-wrote with Hofstadter).
Replies from: Tyrrell_McAllister, RobinZ
↑ comment by Tyrrell_McAllister · 2010-06-04T19:13:17.204Z · LW(p) · GW(p)
But I'd actually start with Gödel, Escher, Bach (by Hofstadter) and The Mind's I (which Dennett co-wrote with Hofstadter).
A while back, colinmarshall posted a detailed chapter-by-chapter review of The Mind's I.
↑ comment by RobinZ · 2010-06-04T19:09:38.245Z · LW(p) · GW(p)
Oh, The Mind's I was excellent - it is a compilation of short works with commentary that touches on a lot of nifty themes with respect to identity and personhood.
Replies from: Tyrrell_McAllister
↑ comment by Tyrrell_McAllister · 2010-06-04T19:14:12.604Z · LW(p) · GW(p)
A while back, colinmarshall posted a detailed chapter-by-chapter review of The Mind's I.
Replies from: RobinZ
↑ comment by RobinZ · 2010-06-04T19:19:24.672Z · LW(p) · GW(p)
Thanks for the link!
...which links to the recommended reading list for new rationalists, which I suppose we should have given to Gigi in the first place. The sad thing is, I contributed to that list, and completely forgot it until now.
↑ comment by Blueberry · 2010-06-04T08:49:27.342Z · LW(p) · GW(p)
Oh, and also Hofstadter's Metamagical Themas. (Yes, that's the correct spelling.)
Replies from: RobinZ
↑ comment by RobinZ · 2010-06-04T19:12:52.935Z · LW(p) · GW(p)
The title - being the title of Hofstadter's column in Scientific American (back when Scientific American was a substantive publication), of which the book is a collection - is an anagram of Mathematical Games, the name of his predecessor's (Martin Gardner's) column. That, too, is an enjoyable and eclectic read.
↑ comment by NancyLebovitz · 2010-06-02T23:03:30.791Z · LW(p) · GW(p)
Welcome!
Could you expand a little more on what sort of books you're interested in?
comment by mitchellb · 2010-04-30T15:38:57.848Z · LW(p) · GW(p)
Hi, I'm Michèle. I'm 22 years old and studying biology in Germany. My parents are atheists and so am I.
I stumbled upon this blog, started reading, and couldn't stop. Nearly every topic is very interesting to me, and I'm really glad I found people to talk with about these things!
Sometimes I find myself overly emotional and unable to get the whole picture of a situation. I'm trying to work on that, and I hope I can gain some insight from reading this blog.
comment by Hook · 2010-03-05T15:40:01.984Z · LW(p) · GW(p)
Hello.
My name is Dan, and I'm a 30-year-old software engineer living in Maryland. I was a mostly lurking member of the Extropian mailing list back in the day, and I've been following the progress of the SIAI sporadically since its founding. I've made a few donations, but nothing terribly significant.
I've been an atheist for half my life now, and as I've grown older I've tended more and more to rational thinking. My wife recently made a comment that she specifically uses rational argument with me much more so than anyone else she has to deal with, even at work, because she knows that is what will work. (Obviously, she wins frequently enough to make it worth her while.)
I hope to have something minor to contribute to the akrasia discussion, although I haven't fully formulated it yet. I used to be an avid video game player and I don't play anymore. The last few times I played any games I didn't even enjoy it. I plan to describe the experiences that led to this state. Unfortunately for general applicability, one of those experiences is "grow older and have a child."
It's not the most altruistic of motives, but what most draws me to this community is that I enjoy being right, and there seem to be lots of things I can learn here to help me to be right more often. What I would dream about getting out of this community is a way to find or prepare for meaningful work that helped reduce existential risk. I have a one year old daughter and I was recently asking myself "What is most likely to kill my children and grandchildren?" The answer I came up with was "The same thing that kills everyone else."
Replies from: orthonormal
↑ comment by orthonormal · 2010-03-22T03:05:20.259Z · LW(p) · GW(p)
I have a one year old daughter and I was recently asking myself "What is most likely to kill my children and grandchildren?" The answer I came up with was "The same thing that kills everyone else."
That's a pretty compelling way to start a conversation on existential risk. I like it.
comment by Karl_Smith · 2010-02-19T00:23:20.489Z · LW(p) · GW(p)
Name: Karl Smith
Location: Raleigh, North Carolina
Born: 1978
Education: Phd Economics
Occupation: Professor - UNC Chapel Hill
I've always been interested in rationality and logic but was sidetracked for many (12+) years after becoming convinced that economics was the best way to improve the lives of ordinary humans.
I made it to Less Wrong completely by accident. I was into libertarianism, which led me to Bryan Caplan, which led me to Robin Hanson (just recently). Some of Robin's stuff convinced me that cryonics was a good idea. I searched for cryonics and found Less Wrong. I have been hooked ever since. About 2 weeks now, I think.
Also, skimming this thread I see there is a 14-year-old on this board. I cannot tell you how that makes me burn with jealousy. To have found something like this at 14! Soak it in, Ellen. Soak it in.
Replies from: realitygrill
↑ comment by realitygrill · 2010-02-20T04:43:39.170Z · LW(p) · GW(p)
Awesome. I'd love to hang with you if I'm there next year; you don't have any connections to BIAC, do you? I just applied for a postbac fellowship there...
What's your specialty in econ?
Replies from: Karl_Smith
↑ comment by Karl_Smith · 2010-02-21T21:45:58.623Z · LW(p) · GW(p)
I don't have any connection to BIAC.
My specialty is human capital (education) and economic growth and development.
Replies from: realitygrill
↑ comment by realitygrill · 2010-02-24T04:20:30.005Z · LW(p) · GW(p)
Ah. I know something of the former and little of the latter. I'd presume your interests are much more normative than mine.
Replies from: wedrifid
↑ comment by wedrifid · 2010-02-24T04:31:26.415Z · LW(p) · GW(p)
Does the term 'normative' work in that context?
Replies from: Karl_Smith
↑ comment by Karl_Smith · 2010-02-24T17:25:50.145Z · LW(p) · GW(p)
Yes.
I could say that my work focuses only on understanding how growth and development take place, for example, but in practice it doesn't work that way.
A conversation with students, policy makers, or even fellow economists will not go more than 5-10 minutes without taking a normative tack. Virtually everyone is in favor of more growth, and so the question is invariably, "what should we DO to achieve it?"
comment by Psilence · 2010-02-04T20:28:40.872Z · LW(p) · GW(p)
Hi all, my name's Drew. I stumbled upon the site from who-knows-where last week and must've put in 30-40 hours of reading already, so suffice it to say I've found the writing and discussions quite enjoyable so far. I'm heavily interested in theories of human behavior on both a psychological and a moral level, so most of the subject matter appeals to me. I was a big Hofstadter fan a few years back as well, so the AI and consciousness discussions are interesting too.
Anyway, thought I'd pop in and say hi, maybe I'll take part in some conversations soon. Looks like a great thing you've got going here.
comment by hrishimittal · 2009-05-17T13:35:51.177Z · LW(p) · GW(p)
Hi, I'm Hrishi, 26, male. I work in air pollution modelling in London. I'm also doing a part-time PhD.
I am an atheist but come from a very religious family background.
When I was 15, I once cried uncontrollably and asked to see God. If there is indeed such a beautiful supreme being, then why didn't my family want to meet Him? I was told that their faith was weak and that only the greatest sages can see God, after a lot of self-inflicted misery. So I thought, never mind.
I've signed up for cryonics. You should too, or it'll just be 3 of us from LW when we wake up on the other side. I don't mind hogging all the press, but inside me lives a shiny ball of compassion which wants me to share the glory with you.
I wish to live a happy and healthy life.
comment by Alicorn · 2009-04-16T15:15:53.937Z · LW(p) · GW(p)
- Handle: Alicorn
- Location: Amherst, MA
- Age: The number of full years between now and October 21, 1988
- Gender: Female
Atheist by default, rationalist by more recent inclination and training. I found OB via Stumbleupon and followed the yellow brick road to Less Wrong. In the spare time left by schoolwork and OB/LW, I do art, write, cook, and argue with those of my friends who still put up with it.
Replies from: MBlume, Jack
↑ comment by Jack · 2009-04-16T16:30:38.639Z · LW(p) · GW(p)
Do you know what areas you want to focus on in philosophy?
Replies from: Alicorn
↑ comment by Alicorn · 2009-04-16T16:32:48.955Z · LW(p) · GW(p)
Not sure yet. I have a fledgling ethics of rights kicking around in the back of my head that I might do something with. Alternately, I could start making noise about my wacky opinions on personal identity and be a metaphysicist. I also like epistemology, and I find philosophy of religion entertaining (although I wouldn't want to devote much of my time to it). I'm pretty sure I don't want to do philosophy of math, hardcore logic, or aesthetics.
Replies from: Jack
↑ comment by Jack · 2009-04-16T18:48:02.703Z · LW(p) · GW(p)
I hope we get to hear your wacky opinions on personal identity some time; I think my senior thesis will be on that subject.
Replies from: Alicorn
↑ comment by Alicorn · 2009-04-16T22:37:41.298Z · LW(p) · GW(p)
I think I have to at least graduate before anyone besides me is allowed to write a thesis on my wacky opinions on personal identity ;)
In a nutshell, I think persons just are continuous self-aware experiences, and that it's possible for two objects to be numerically distinct and personally identical. For instance (assuming I'm not a brain in a vat myself) I could be personally identical to a brain in a vat while being numerically distinct. The upshot of being personally identical to someone is that you are indifferent between "yourself" and the "other person". For instance, if Omega turned up, told me I had an identical psychological history with "someone else" (I use terms like that out of grammatical necessity), and that one of us was a brain in a vat and one of us was as she perceived herself to be, and that Omega felt like obliterating one of us, "we" would "both" prefer that the brain-in-a-vat version be the one to be obliterated, because we're indifferent between the two as persons and just have a general preference that (ceteris paribus) non-brains-in-vats are better.
Persons can share personal parts in the same way that objects can share physical parts. We should care about our "future selves" because they will include the vast majority of our personal parts (minus forgotten tidbits and diluted over time by new experiences) and respect (to a reasonable extent) the wishes of our (relatively recent) past selves because we consist mostly of those past selves. If we fall into a philosophy example and undergo fission or fusion, fission yields two people who diverge immediately but share a giant personal part. Fusion yields one person who shares a giant personal part each with the two people fused.
Replies from: loqi, michaelhoney, Jack
↑ comment by loqi · 2009-04-18T21:04:46.906Z · LW(p) · GW(p)
In a nutshell, I think persons just are continuous self-aware experiences, and that it's possible for two objects to be numerically distinct and personally identical.
I've found this position to be highly intuitive since it first occurred/was presented to me (don't recall which, probably the latter from Egan).
One seemingly under-appreciated corollary of it (disclaimer: I haven't studied much philosophy) is that if you value higher quantities of "personality-substance", you should seek (possibly random) divergence as soon as you recognize too much of yourself in others.
Replies from: Alicorn
↑ comment by Alicorn · 2009-04-19T01:52:41.598Z · LW(p) · GW(p)
Not really. Outside of philosophy examples and my past and future selves, I don't actually share any personal parts with anyone; the personal parts are continuity of perspective, not abstract personality traits. I can be very much like someone and still share no personal parts with him or her. Besides, that's if I value personal uniqueness. Frankly, I'd be thrilled to discover that there are several of me. After all, Omega might take it into his head to obliterate one, and there ought to be backups.
Replies from: loqi
↑ comment by loqi · 2009-04-19T03:41:35.030Z · LW(p) · GW(p)
I don't actually share any personal parts with anyone; the personal parts are continuity of perspective, not abstract personality traits. I can be very much like someone and still share no personal parts with him or her.
The term "continuity of perspective" doesn't reduce much beyond "identity" for me in this context. How similar can you be without sharing personal parts? If the difference is at all determined by differences in external inputs, how can you be sure that your inputs are effectively all that different?
Frankly, I'd be thrilled to discover that there are several of me. After all, Omega might take it into his head to obliterate one, and there ought to be backups.
I think the above addresses a slightly different concern. Suppose that some component of your decision-making or other subjective experience is decided by a pseudo-random number generator. It contains no interesting structure or information other than the seed it was given. If you were to create a running (as opposed to static, frozen) copy of yourself, would you prefer to keep the current seed active for both of you, or introduce a divergence by choosing a new seed for one or the other? It seems that you would create the "same amount" of personal backup either way.
↑ comment by michaelhoney · 2009-04-16T23:47:11.136Z · LW(p) · GW(p)
I think you're on the right track. A lot of personal-identity assumptions will be re-evaluated over the next generation, as we see more interpenetration of personal parts and start to offload cognitive capacity to shared resources on the internet.
Semi-related: I did my philosophy master's sub-thesis [15 years ago; not all opinions expressed therein are ones I would necessarily agree with now] on personal identity and the many-worlds interpretation of quantum physics. Summary: personal identity is spread/shared along all indistinguishable multiversal branches; indeterminacy is a feature of not knowing which branch you're on. Personal identity across possible worlds may be non-commutative: A=B, B=C, but A≠C.
Replies from: RobinZ, Nick_Tarleton
↑ comment by RobinZ · 2009-07-20T19:51:36.407Z · LW(p) · GW(p)
Technically, that's non-transitive - non-commutative would be A=B but B≠A.
(Also, it is mildly confusing to use an equality symbol to indicate a relationship which is not a mathematical equality relationship - i.e. reflexive, commutative, and transitive.)
(Also, a Sorites-paradox argument would suggest that identity is a matter of degree.)
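(To make the distinction concrete, here is a small sketch of a relation that is reflexive and symmetric - "commutative" in the wording above - but not transitive, which is exactly the structure the Sorites point gestures at. The epsilon threshold is an arbitrary illustrative assumption.)

```python
# An "indistinguishability" relation: reflexive and symmetric, but NOT
# transitive. EPSILON is an arbitrary illustrative threshold.

EPSILON = 1.0

def indistinguishable(a: float, b: float) -> bool:
    """True when a and b differ by at most EPSILON."""
    return abs(a - b) <= EPSILON

# Reflexive: every value is indistinguishable from itself.
assert indistinguishable(0.0, 0.0)

# Symmetric: the order of the arguments doesn't matter.
assert indistinguishable(0.0, 0.8) and indistinguishable(0.8, 0.0)

# Not transitive: 0.0 ~ 0.8 and 0.8 ~ 1.6, and yet 0.0 is not ~ 1.6.
assert indistinguishable(0.0, 0.8)
assert indistinguishable(0.8, 1.6)
assert not indistinguishable(0.0, 1.6)
```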
↑ comment by Nick_Tarleton · 2009-04-19T02:09:03.176Z · LW(p) · GW(p)
Personal identity across possible worlds may be non-commutative: A=B, B=C, but A≠C.
I think I understand (and agree with) the other parts, but how is this possible?
comment by Catnip · 2011-12-12T16:16:24.240Z · LW(p) · GW(p)
Hello, Less Wrong.
I am Russian, atheistic, 27, trying to be rational.
Initially I came here to read a thorough explanation of Bayes' theorem, but I noticed that Less Wrong contains a lot more than that and decided to stay for a while.
I am really pleased by the quality of the material and pleasantly surprised by the quality of the comments. It is rare to see useful comments on the Internet.
I am going to read at least some of the sequences first and comment when I have something to say. Though I know I WILL be sidetracked by HP:MoR and Three Worlds Collide. Well, my love of SF always gets me.
comment by Phasmatis · 2011-09-10T21:19:29.932Z · LW(p) · GW(p)
Salutations, Less Wrong.
I'm an undergraduate starting my third year at the University of Toronto (in Toronto, Ontario, Canada), taking the Software Engineering specialist program in Computer Science.
I found Less Wrong through a friend, who found it through Harry Potter and the Methods of Rationality, who found that through me, and I found HP: MoR through a third friend. I'm working my way through the archive of Less Wrong posts (currently in March of 2009).
On my rationalist origins: one of my parents has a not-insignificant mental problem that results in subtle psychoses. I learned to favor empirical evidence and rationality in order to cope with the incongruity between reality and some of said parent's beliefs. It has been an ongoing experience since then, including an upbringing in both Protestant Anglicanism and Secular Humanistic Judaism; the dual religious background was a significant contributor to both my rationalism and my atheism.
I eagerly anticipate interesting discussions here.
Replies from: kilobug
comment by [deleted] · 2011-06-12T06:45:06.434Z · LW(p) · GW(p)
I'm 17 and I'm from Australia.
I've always been interested in science, learning, and philosophy. I've had correct thinking as a goal in my life since reading a book by John Stossel when I was 13.
I first studied philosophy at school in grade 10, when I was 14 and 15. I loved the mind/body problem, and utilitarianism was the coolest thing ever. I had great fun thinking about all these things, and was fairly good at it. I gave a speech about the ethics of abortion last year which I feel really did strike to the heart of the matter, and work as a good use of rationality, albeit untrained.
I came across Less Wrong via Three Worlds Collide, via TV Tropes, last September. I then read HPMOR. By this point, I was convinced Eliezer Yudkowsky was the awesomest guy ever. He had all the thoughts I wanted to have, but wasn't smart enough to. I read everything on his website, then started trying to read the Sequences. They were hard for me to understand, but I got some good points from them. I attended the National Youth Science Forum in January this year, and spent the whole time trying to explain the Singularity to people. Since then I've made my way through most of Eliezer's writings. I agree with most of what he says, except for bits which I might just not understand, like the Zombies sequence, and some of his more out-there claims.
But yeah. Since reading his stuff, I've become stronger. Self-improvement is now more explicitly one of my goals. I have tried harder to examine my beliefs. I have learnt not to get into pointless arguments. One of the most crucial lessons was the "learning to lose" from HPMOR. This has saved me from more than a few nasty situations.
What can I contribute here? Nothing much as of yet. If I know anything, it's the small segment of rationality I've learned here. I'm good at intuitively understanding philosophy and math, but not special by Less Wrong standards.
One thing I do believe in strongly is the importance of mentoring people younger than you. I know two kids a bit younger than me: one a really smart sciency kid, the other a really talented musicish kid. I think that by linking them to good science and good music, I can increase their rate of improvement. I wish that someone had told me about, for instance, Bayes's theorem, or FAI, or Taylor series, when I was younger. You need a teacher. Sadly, there are no textbooks on this topic. And random walks through Wikipedia are a slow, frustrating way to learn when you're a curious 14-year-old.
And so yeah. Pleased to meet you kids.
Replies from: XiXiDu
↑ comment by XiXiDu · 2011-06-12T11:42:35.636Z · LW(p) · GW(p)
He had all the thoughts I wanted to have, but wasn't smart enough to.
You are 17. See Yudkowsky_1998; there is room for improvement at any age.
Replies from: None
↑ comment by [deleted] · 2011-06-12T23:55:37.613Z · LW(p) · GW(p)
Yeah, you're right. The difference is, he made mistakes that I also wouldn't have thought of, and expressed himself better as he did so.
Hey, I'm not despairing that I'll ever be cool, just find it unlikely I'll ever be as cool as him.
Replies from: cousin_it
comment by dvasya · 2011-05-14T19:08:06.551Z · LW(p) · GW(p)
- Handle: dvasya (from Darth Vasya)
- Name: Vasilii Artyukhov
- Location: Houston, TX (USA)
- Age: 26
- Occupation: physicist doing computational nanotechnology/materials science/chemistry, currently in a postdoctoral position at Rice University. Also remotely connected to the anti-aging field, as well as cryopreservation. Not personally interested in AI because I don't understand it very well (though I do appreciate its importance adequately), but who knows -- maybe that could change with prolonged exposure to LW :)
comment by jslocum · 2011-03-03T17:10:00.205Z · LW(p) · GW(p)
Hello, people.
I first found Less Wrong when I was reading sci-fi stories on the internet and stumbled across Three Worlds Collide. As someone who places a high value on the ability to make rational decisions, I decided that this site is definitely relevant to my interests. I started reading through the sequences a few months ago, and I recently decided to make an account so that I could occasionally post my thoughts in the comments. I generally only post things when I think I have something particularly insightful to say, so my posts tend to be infrequent. Since I am still reading through the sequences, you probably won't be seeing me commenting on any of the more recent posts for a while.
I'm 21 years old, and I live in Cambridge, Mass. I'm currently working on a master's degree in computer science. My classes for the spring term are in machine vision and computational cognitive science; I have a decent background in AI-related topics. Hopefully I'll be graduating in August, and I'm not quite sure what I'll be doing after that yet.
comment by MoreOn · 2010-12-09T23:33:43.613Z · LW(p) · GW(p)
Okay. Demographics. Boring stuff. Just skip to the next paragraph. I’m a master’s student in mathematics (hopefully a soon-to-be PhD student in economics). During undergrad, I majored in Biology, Economics and Math, and minored in Creative Writing (and nearly minored in Chemistry, Marine Science, Statistics and PE) … I’ll spare you the details, but most of those you won’t see on my resume for various reasons. Think: Master of None, not Omnidisciplinary Scientist.
My life goal is to write a financially self-sustainable computer game… for reasons I’ll keep secret for now. Seems like I’m not the first one in this thread to have this life goal.
I found LW through Harry Potter & MOR. I’d found HP&MOR through TV Tropes. I’d found TV Tropes through the webcomic The Meek. I’d found The Meek through The Phoenix Requiem, which I’d found through the Top Web Comics site. That’s as far back as I remember, 2 years ago.
I haven’t read most of the site, so far only the material about Bayes and the links off of that. And I started reading Harry Potter 3 weeks ago. So as far as you can see, I’m an ignorant newbie who speaks first and listens second.
I don’t identify myself as a rationalist. Repeat: I DO NOT identify myself as a rationalist. I didn’t notice that I was different from everyone else when I was eleven. Or twelve. Or thereafter. I’m not smart enough to be a rationalist. I don’t mean that in the Socratic sense of "I know nothing, but at least I know more than you, idiot." I mean I’m just not smart. I have the memory of a house cat. I can’t name-drop on cue. I’m irrational. And I have BELIEFS (among them emergence, and when I model them, it’ll be a Take That, but for now it’s just a belief).
Oh, and my name is a reference to Baldur’s Gate 2, to my intention of trying to challenge everything on this blog (what’s my alternative? mindlessly agree?), and to how morons can’t add 1+1.
comment by hangedman · 2010-10-13T21:41:31.195Z · LW(p) · GW(p)
Hi LW,
My name's Dan LaVine. I forget exactly how I got linked here, but I haven't been able to stop following internal links since.
I'm not an expert in anything, but I have a relatively broad/shallow education across mathematics and the sciences and a keen interest in philosophical problems (not quite as much interest in traditional approaches to the problems). My tentative explorations of these problems are broadly consistent with a lot of the material I've read on this site so far. Maybe that means I'm exposing myself to confirmation bias, but so far I haven't found anywhere else where these ideas, or the objections to them, are developed to the degree they are here.
My aim in considering philosophical problems is to try to understand the relationship between my phenomenal experience and whatever causes it may have. Of course, it's possible that my phenomenal experience is uncaused, but I'm going to try to exhaust alternative hypotheses before resigning myself to an entirely senseless universe. Which is how I wind up as a rationalist -- I can certainly consider such possibilities as the impossibility of knowledge, that I might be a Boltzmann brain, that I live in the Matrix, etc., but I can't see any way to prove or provide evidence of these things, and if I take the truth of any of them as foundational to my thinking, it's hard to see what I could build on top of them.
Looking forward to reading a whole lot more here. Hopefully, I'll be able to contribute at least a little bit to the discussion as well.
Replies from: CronoDAS
comment by EchoingHorror · 2010-07-26T20:41:09.486Z · LW(p) · GW(p)
Hello, community. I'm another recruit from Harry Potter and the Methods of Rationality. After reading the first few chapters and seeing that it lacked the vagueness, unbending archetypes, and overt "because the author says so" theme that usually drive me away from fiction, then reading Less Wrong's (Eliezer's?) philosophy of fanfiction, I proceeded to read through the Sequences.
After struggling with the question of when I became a rationalist, I think the least wrong answer is that I just don't remember. I both remember less of my childhood than others seem to and developed more quickly. I could rationalize a few things, but I don't think that's going to be helpful.
Anyway, I'm 21, with an A.A. in Nothing in Particular, and am going for a B.S. in Mathematics and maybe other useful majors in November.
P.S. Quirrell FTW
Replies from: EStokes, arundelo
↑ comment by EStokes · 2010-07-26T22:38:04.419Z · LW(p) · GW(p)
Welcome!
There's a MOR discussion thread, if you hadn't seen it.
P.S. "Sometimes, when this flawed world seems unusually hateful, I wonder whether there might be some other place, far away, where I should have been... But the stars are so very, very far away... And I wonder what I would dream about, if I slept for a long, long time." Quirrell FTW, indeed.
comment by AndyCossyleon · 2010-07-08T21:04:21.860Z · LW(p) · GW(p)
deleted
Replies from: SilasBarta
↑ comment by SilasBarta · 2010-07-08T21:27:17.481Z · LW(p) · GW(p)
Welcome to Less Wrong! You seem to know your way around pretty well already! Thanks for introducing yourself.
Also, I really appreciate this:
I alternatively describe myself as a naturalistic pantheist, since the Wikipedia article on it nails my self-perception on the head, not to mention it's less confrontational ...
The article says that of naturalistic pantheism:
Naturalistic pantheism (also known as Scientific Pantheism) is a naturalistic form of pantheism that encompasses feelings of reverence and belonging towards Nature and the wider Universe, concern for the rights of humans and all living beings, care for Nature, and celebration of life. It is realist and respects reason and the scientific method. It is based on philosophical naturalism and as such it is without belief in supernatural realms, afterlives, beings or forces
Wow, I had no idea you could believe all that and still count as a kind of theism! Best. Marketing. Ever.
Replies from: ciphergoth, DanielVarga
↑ comment by Paul Crowley (ciphergoth) · 2010-07-22T12:03:36.015Z · LW(p) · GW(p)
Richard Dawkins:
Pantheism is sexed-up atheism. Deism is watered-down theism.
↑ comment by DanielVarga · 2010-07-22T11:53:55.841Z · LW(p) · GW(p)
Best. Marketing. Ever.
So true. From now on, depending on who I am talking to, I will call myself either a reductionist materialist humanist or a naturalistic pantheist. :)
comment by Lorenzo · 2010-04-19T20:57:03.121Z · LW(p) · GW(p)
Huh, I guess I should have come here earlier...
I'm Lorenzo, 31, from Madrid, Spain (but I'm Italian). I'm an evolutionary psychologist, or try to be, working on my PhD. I'm also doing a Master's Degree in Statistics, in which I discovered (almost by accident) the Bayesian approach. As someone with a longstanding interest in making psychology become a better science, I've found this blog a very good place for clarifying ideas.
I've been a follower of Less Wrong since reading Eliezer's essays on Bayesian reasoning some 3-4 months ago. I've known Bayes' theorem for quite a long time, but knew little or nothing about the Bayesian approach to probability theory. The frequentist paradigm dominates much of psychology, which is a shame, because I think Bayesian reasoning is much better suited to the study of the mind. There is still a lot of misunderstanding about what a Bayesian approach entails, at least in this part of the world. Oh, well. We'll deal with it.
Thanks and keep up the good work!
comment by misterpower · 2010-04-19T06:17:07.964Z · LW(p) · GW(p)
Bueno! I'm Jason from San Antonio, Texas. Nice to say 'hi' to all you nice people! (Nice, also, to inflate the number of comments for this particular post - give the good readers of Less Wrong an incrementally warmer feeling of camaraderie.)
I've been reading Overcoming Bias and Less Wrong for over a year since I found a whole bunch of discussions on quantum mechanics. I've stayed for the low, low cost intellectual gratification.
I (actually, formally) study physics and math, and read these blogs to the extent that they make me feel smarter... also because the admittedly limited faculties of reason play out a fascinating and entertaining show of bravery against their own project of rationality. What I learn about these shortcomings helps to buttress my own monoliths, as much as it might erode those pillars' insubstantial foundations. It's a thrilling undertaking.
Thanks, all!
comment by Mass_Driver · 2010-03-30T21:24:13.587Z · LW(p) · GW(p)
Hi everyone!
I'm graduating from law school in May 2010, and then going to work in consumer law at a small firm in San Francisco. I'm fascinated by statistical political science, space travel, aikido, polyamory, board games, and meta-ethics.
I first realized that I needed to make myself more rational when I bombed an online confidence calibration test about 6 years ago; it asked me to provide 95% confidence intervals for 100 different pieces of numerical trivia (e.g. how many nukes does China have, how many counties are in the U.S., how many species of spiders are there), and I only got about 72 correct. I can't find the website anymore, which is frustrating; I like to think I would do better now.
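For concreteness, here is a minimal sketch of how a test like that gets scored. The questions, true values, and intervals below are invented for illustration, not the actual data from the site:

```python
# Sketch: scoring a 95%-confidence-interval calibration test.
# All figures here are made up for illustration.
answers = [
    # (question, true value, my stated 95% interval: low, high)
    ("How many nukes does China have?", 260, 50, 500),
    ("How many counties are in the U.S.?", 3143, 500, 2000),   # interval misses
    ("How many species of spiders are there?", 45000, 10000, 100000),
]

# Count how many true values landed inside the stated intervals.
hits = sum(1 for _, truth, low, high in answers if low <= truth <= high)
print(f"{hits}/{len(answers)} intervals contained the true value;")
print("a well-calibrated respondent should score about 95 out of 100.")
```

Scoring 72 out of 100 on such a test, as described above, means the stated "95%" intervals behaved more like 72% intervals, i.e. substantial overconfidence.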
I am a pluralist about what should be achieved -- I believe there are several worthy goals in life, the utility of which cannot be meaningfully compared. However, I am passionately convinced that people should be consciously aware of their goals and should attempt to match their actions to their stated goals. Whatever kind of future we want, we are flabbergastingly unlikely to get it unless we identify and carry out the tasks that can lead us there.
Despite reading and pondering roughly 80 LW articles, together with some of their comments, I continue to believe a few things that will rub many LW readers the wrong way. My confidence in these beliefs has gone down, but is still over 50%. For example, I still believe in a naturalistic deity, and I still believe in ontologically basic consciousness. I am happy to debate these issues with individuals who are interested, but I do not plan on starting any top-level posts about them; I do not have the stamina or inclination to hold the field against an entire community of intelligent debaters all by myself.
I am not sure that I have anything to teach LW in the sense of delivering a prepared lecture, but I hope to contribute to discussions about how to best challenge Goodhart's Law in various applied settings.
Finally, thanks to RobinZ for the warm welcome!
comment by gimpf · 2010-02-21T21:39:55.882Z · LW(p) · GW(p)
Hello All,
my name is Markus, and I just decided, after, well, years? of lurk-jumping from SL4 to Overcoming Bias to Less Wrong, that maybe I should participate in one discussion or another; not doing so seems to lead to a constant increase of things I have a feeling I know but that fall flat the first time another person poses a question.
The process of finding my way to (the then non-existent) LW started during senior high, when I somehow got interested in philosophy, and soon enough in AI. The interest in AI led to an interest in Weiqi (chess had been publicly shot down a handful of years earlier), which led to an interest in Eastern philosophy, which led to (interest in, not really doing) Zen, which led to frustration, and back to the start. I was playing trumpet during those times, too; as a consequence of all these interests, I did, well, not so much productive stuff. Procrastination is an often-discussed topic here; I was and am of type A: do nothing. Well, I played Quake. Now I click links on Facebook.
I would still not call myself a rationalist by execution, but just by aspiration. However, from my philosophical gut-level feeling, nothing else makes any sense.
I am somehow missing the real-life link; for people with IQ << 160, who are not working on AI or similarly hard topics, I cannot see the potential of the full-blown Bayesian BFG; just doing whatever the consensus holds to be the best choice is most often the only thing one can do, lacking any data, and even more often competence. I really do have a hard time seeing the practical benefits.
So, this one is getting too long already, I'm a chatty person...
Just for completeness, on "what you're doing": I'm currently working as a part-time software developer, and am a philosophy/math/computer science/electrical engineering college-dropout.
BTW, as English is not my mother tongue, I often fall back on the dictionary when writing in it; if some things seem to be taken from an overly strange thesaurus, or are of especially unorthodox style, you now know why.
Replies from: realitygrill
↑ comment by realitygrill · 2010-02-24T04:23:57.921Z · LW(p) · GW(p)
I wonder how many of us play Weiqi/Igo/Baduk? I only play sporadically now but it was a bit of an obsession for a time.
Replies from: wedrifid
↑ comment by wedrifid · 2010-02-24T04:27:56.403Z · LW(p) · GW(p)
There's a few people who have reported liking Go. Is that the same game?
Replies from: Morendil
↑ comment by Morendil · 2010-02-24T08:49:42.463Z · LW(p) · GW(p)
Yep. Peter de Blanc and I are currently "playing for a cause", the game is here.
comment by Zvi · 2009-04-16T20:37:53.747Z · LW(p) · GW(p)
- Handle: Zvi
- Name: Zvi Mowshowitz
- Location: New York City
- Age: 30
- Education: BA, Mathematics
I found OB through Marginal Revolution, which then led to LW. A few here know me from my previous job as a professional Magic: The Gathering player and writer and full member of the Competitive Conspiracy. That job highly rewarded the rationality I already had and encouraged its development, as does my current one, which unfortunately I can't say much about here but which gives me more than enough practical reward to keep me coming back even if I weren't fascinated anyway. I'm still trying to figure out what my top level posts are going to be about when I get that far.
While I have told my Magic origin story, I don't have one for rationality or atheism; I can't remember ever being any other way, and I don't think anyone needs my libertarian one. If anything, it took me time to realize that most people didn't work that way, and to learn how to handle that, which is something I'm still working on and the part of OB/LW I think I've gained the most from.
comment by DanielH · 2012-07-11T03:00:02.452Z · LW(p) · GW(p)
TL;DR: I found LW through HPMoR, read the major sequences, read stuff by other LWers including the Luminosity series, and lurked for six months before signing up.
My name, as you can see above if you don't have the anti-kibitzing script, is Daniel. My story of how I came to self-identify as a rationalist, and then how I later came to be a rationalist, breaks down into several parts. I don't remember the order of all of them.
Since well before I can remember (and I have a fairly good long-term memory), I've been interested in mathematics, and later science. One of my earliest memories, if not my earliest, is of me, on my back, under the coffee table (well before I could walk). I had done this multiple times, I think usually with the same goal, but one time in particular sticks in my memory. I was kicking the underside of the coffee table, trying to see what was moving. This time, I moved it, got out, and saw that the drawer of the coffee table was open; this caused me to realize that this was what was moving, and I don't think I crawled under there again.
Many years later, I discovered Star Trek: TNG, and from that learned a little about Star Trek. I wanted to be more rational because of the role models of Data and Spock, not realizing at the time how non-rational Spock was. Very quickly, however, I realized that emotions are not the opposite of logic, and the first time I saw the TOS episode that Luke references [here](http://facingthesingularity.com/2011/why-spock-is-not-rational/), I realized that Spock was being an idiot (though at the time I thought it was unusually idiotic, not standard behavior; I hadn't and still haven't seen much of the original series). It was around this time that I began to think of myself as "rational" or "logical".
Of course, it wasn't until much later that I actually started learning about rationalism. It was around Thanksgiving 2011, while I was on fanfiction.net looking for a Harry Potter fanfic I'd seen before and liked (I still haven't found it), that I stumbled upon Harry Potter and the Methods of Rationality. I read it, and I liked it, and it slowly took over my life. I decided to look for other works by that author, and followed the link to Less Wrong because it was recommended (not yet realizing that the Sequences were written by the same person as HPMoR). Since then, I've read the Sequences and most other stuff written by EY (that's still easily accessible and not removed), and it all made sense. I finally understood that yes, in fact, I and the other "confused" students WERE correct in that probability class where the professor said that "the probability that this variable is in this interval" didn't exist, I noticed times when I was conforming instead of thinking, and I noticed some accesses of cached thoughts. At first I was a bit skeptical of the overly-atheistic bit (though I'd always had doubts and was pretty much agnostic-though-I-wouldn't-admit-it), until I read the articles about how unlikely the hypothesis of God was and thought about them.
I did not know much about Quantum Mechanics when I read that sequence, but I had heard of the "waveform collapse" and had not understood it, and I realized fairly quickly how that was an unnecessary hypothesis. When I saw one of the cryonics articles (I'm cryocrastinating, trying to get my parents to sign up) taking the idea seriously, I thought "Oh, duh! I should have seen that the first time I heard of it, but I was specifically told that the person involved was an idiot and it didn't work, so I never reevaluated" (later I remembered my horror at Picard's attitude in the relevant TNG episode, and I've always only believed in the information-theoretic definition of "death").
After I read the major sequences, I read some other stuff I found through the Wiki and through googling "Less Wrong __" for various things I wanted the LW community opinion on. I found my favorite LW authors (Yvain, Luke, Alicorn, and EY) and read other things by them (Facing the Singularity and Luminosity). I subscribed to the RSS feed (I don't know how that'll work when I want to strictly keep to anti-kibitzing), and I now know that I want to help SIAI as much as possible (I was planning to be a computer scientist anyway); I'm currently reading through a lot of their recommended reading. I'm also about to start GEB, followed by Jaynes and Pearl. I plan to become a lot more active comment-wise, but probably not post-wise for a while yet. I may even go to one of the meetups if one is held somewhere I can get to.
Now we've pretty much caught up to the present. Let's see... I read some posts today, I read Luke's intuitive explanation of EY's Intuitive Explanation, I found an error in it (95% confidence), I sent him an email, and I decided to sign up here. Now I'm writing this post, and I'm supposed to put some sort of conclusion on it. I estimate that the value of picking a better conclusion is not that high compared to the cost, so I'll just hit the submit button after this next period.
Edit: Wow, I just realized how similar my story is to parts of BecomingMyself's. I swear we aren't the same person!
Replies from: shminux, beoShaffer
↑ comment by Shmi (shminux) · 2012-07-11T04:57:42.836Z · LW(p) · GW(p)
I did not know much about Quantum Mechanics when I read that sequence, but I had heard of the "waveform collapse" and had not understood it, and I realized fairly quickly how that was an unnecessary hypothesis.
I recommend learning QM from textbooks, not blogs. This applies to most other subjects, as well.
Replies from: DanielH
↑ comment by DanielH · 2012-07-18T02:03:47.254Z · LW(p) · GW(p)
I did not mean to imply that I had actual knowledge of QM, just that I had more now than before. If I were interested in understanding QM in more detail, I would take a course on it at my college. It turns out that I am so interested, and that I plan to take such a course in Spring 2013.
I also know that there are people on this site, apparently a greater percentage than on similar issues, who disagree with EY about the Many Worlds Interpretation. I have not been able to follow their arguments, because the ones I have seen generally assume a greater knowledge of quantum mechanics than I possess. Therefore, MWI is still the most reasonable explanation that I have heard and understood. Again, though, that means very little. I hope to revisit the issue once I have some actual background in the subject.
EDIT: To clarify, "similar issues" means issues where the majority of people have one opinion, such as theism, the Copenhagen Interpretation, or cryonics not being worth considering, while Less Wrong's general consensus is different.
↑ comment by beoShaffer · 2012-10-08T04:34:42.824Z · LW(p) · GW(p)
Hi Daniel, do you follow Yvain's blog? Also, the term is rationality, not rationalism. I wouldn't nitpick except that rationalism already refers to a fairly major thing in mainstream philosophy.
comment by kmacneill · 2012-02-15T18:52:04.480Z · LW(p) · GW(p)
Hey, I've been an LW lurker for about a year now, and I think it's time to post here. I'm a cryonicist, rationalist and singularity enthusiast. I'm currently working as a computer engineer and I'm thinking maybe there is more I can do to promote rationality and FAI. LW is an incredible resource. I have a mild fear that I don't have enough rigorous knowledge about rationality concepts to contribute anything useful to most discussion.
LW has changed my life in a few ways, but the largest are becoming a cryonicist and becoming polyamorous (I naturally leaned toward this, though). I feel like I am in a one-way friendship with EY; does anyone else feel like that?
Replies from: Alex_Altair
↑ comment by Alex_Altair · 2012-02-16T17:05:04.194Z · LW(p) · GW(p)
I am also in a one-way friendship with EY.
comment by Dmytry · 2011-12-29T18:56:04.912Z · LW(p) · GW(p)
I am a video game developer. I find most of this site fairly interesting, although once in a while I disagree with the description of some behaviour as irrational, or with the explanation projected onto that behaviour (when I happen to see a pretty good reason for the behaviour, perhaps a strategic one, or a matter of general policy/cached decision).
comment by [deleted] · 2011-12-24T08:44:37.277Z · LW(p) · GW(p)
Uh...uhm...hello?
Replies from: Normal_Anomaly
↑ comment by Normal_Anomaly · 2011-12-24T18:24:01.240Z · LW(p) · GW(p)
Hi!
comment by DanPeverley · 2011-07-18T02:36:10.383Z · LW(p) · GW(p)
Salutations, LessWrong!
I am Daniel Peverley. I lurked for a few months and joined not too long ago. I was first introduced to this site via HPatMOR, my first and so far only foray into the world of fan-fiction. I was raised as a Mormon, and I've been a vague unbeliever for a few years, but the information on this site really solidified the doubts and problems I had with my religion. Just knowing how to properly label common logical fallacies has been vastly helpful in my life, and a few of the posts on social dynamics have likewise been of great utility. I'm seventeen, headed into my senior year of high school, and on track to attend a high-end university. My hobbies include Warhammer 40k, watching anime, running, exercising, studying Chinese, video games, webcomics, and reading and writing speculative fiction and poetry. I live in the skeptic-impoverished Salt Lake City area. I look forward to posting, but I'll probably LURK MOAR for a while just to make sure what I have to say is worth reading.
Replies from: jsalvatier
↑ comment by jsalvatier · 2011-07-27T20:07:42.006Z · LW(p) · GW(p)
Welcome :)
comment by HopeFox · 2011-06-12T12:11:24.153Z · LW(p) · GW(p)
Hi, I've been lurking on Less Wrong for a few months now, making a few comments here and there, but never got around to introducing myself. Since I'm planning out an actual post at the moment, I figured I should tell people where I'm coming from.
I'm a male 30-year-old optical engineer in Sydney, Australia. I grew up in a very scientific family and have pretty much always assumed I had a scientific career ahead of me, and after a couple of false starts, it's happened and I couldn't ask for a better job.
Like many people, I came to Less Wrong from TVTropes via Methods of Rationality. Since I started reading, I've found that it's been quite helpful in organising my own thoughts and casting aside unuseful arguments, and examining aspects of my life and beliefs that don't stand up under scrutiny.
In particular, I've found that reading Less Wrong has allowed, nay forced, me to examine the logical consistency of everything I say, write, hear and read, which allows me to be a lot more efficient in discussions, both by policing my own speech and by being more usefully critical of others' points (rather than making arguments that don't go anywhere).
While I was raised in a substantively atheist household, my current beliefs are theist. The precise nature of these beliefs has shifted somewhat since I started reading Less Wrong, as I've discarded the parts that are inconsistent or even less likely than the others. There are still difficulties with my current model, but they're smaller than the issues I have with my best atheist theory.
I've also had a surprising amount of success in introducing the logical and rationalist concepts from Less Wrong to one of my girlfriends, which is all the more impressive considering her dyscalculia. I'm really pleased that this site has given me the tools to do that. It's really easy now to short-circuit what might otherwise become an argument by showing that it's merely a dispute about definitions. It's this sort of success that has kept me reading the site these past months, and I hope I can contribute to that success for other people.
Replies from: Kaj_Sotala, Oscar_Cunningham
↑ comment by Kaj_Sotala · 2011-06-12T13:03:36.870Z · LW(p) · GW(p)
Welcome!
There are still difficulties with my current model, but they're smaller than the issues I have with my best atheist theory.
What issues does your best atheist theory have?
Replies from: HopeFox
↑ comment by HopeFox · 2011-06-12T13:42:47.742Z · LW(p) · GW(p)
What issues does your best atheist theory have?
My biggest problem right now is all the stuff about zombies, and how that implies that, in the absence of some kind of soul, a computer program or other entity that is capable of the same reasoning processes as a person, is morally equivalent to a person. I agree with every step of the logic (I think, it's been a while since I last read the sequence), but I end up applying it in the other direction. I don't think a computer program can have any moral value, therefore, without the presence of a soul, people also have no moral value. Therefore I either accept a lack of moral value to humanity (both distasteful and unlikely), or accept the presence of something, let's call it a soul, that makes people worthwhile (also unlikely). I'm leaning towards the latter, both as the less unlikely, and the one that produces the most harmonious behaviour from me.
It's a work in progress. I've been considering the possibility that there is exactly one soul in the universe (since there's no reason to consider souls to propagate along the time axis of spacetime in any classical sense), but that's a low-probability hypothesis for now.
Replies from: Oscar_Cunningham, Vladimir_Nesov, jimrandomh, Kaj_Sotala, Laoch
↑ comment by Oscar_Cunningham · 2011-06-12T16:49:09.226Z · LW(p) · GW(p)
In the spirit of your (excellent) new post, I'll attack all the weak points of your argument at once:
- You define "soul" as:
the presence of something, let's call it a soul, that makes people worthwhile
This definition doesn't give souls any of their normal properties, like being the seat of subjective experience, or allowing free will, or surviving bodily death. That's fine, but we need to be on the look-out in case these meanings sneak in as connotations later on. (In particular, the "Zombies" sequence doesn't talk about moral worth, but does talk about subjective experience, so its application here isn't straight forward. Do you believe that a simulation of a human would have subjective experience?)
"Souls" don't provide any change in anticipation. You haven't provided any mechanism by which other people having souls causes me to think that those other people have moral worth. Furthermore it seems that my belief that others have moral worth can be fully explained by my genes and my upbringing.
You haven't stated any evidence for the claim that computer programs can't have moral value, and this isn't intuitively obvious to me.
You've produced a dichotomy between two very unlikely hypotheses. I think the correct answer in this case isn't to believe the least unlikely hypothesis, but is instead to assume that the answer is some third option you haven't thought of yet. For instance you could say "I withhold judgement on the existence of souls and the nature of moral worth until I understand the nature of subjective experience".
The existence of souls as you've defined them doesn't imply theism. Not even slightly. (EDIT: Your argument goes: 'By the "Zombies" sequence, simulations are concious. By assumption, simulations have no moral worth. Therefore concious does not imply moral worth. Call whatever does imply moral worth a soul. Souls exist, therefore theism.' The jump between the penultimate and the ultimate step is entirely powered by connotations of the word "soul", and is therefore invalid.)
Also you say this:
I've been considering the possibility that there is exactly one soul in the universe (since there's no reason to consider souls to propagate along the time axis of spacetime in any classical sense), but that's a low-probability hypothesis for now.
(I'm sorry if what I say next offends you.) This sounds like one of those arguments clever people come up with to justify some previously decided conclusion. It looks like you've just picked a nice sounding theory out of hypothesis space without nearly enough evidence to support it. It would be a real shame if your mind became tangled up like an Escher painting because you were too good at thinking up clever arguments.
↑ comment by Vladimir_Nesov · 2011-06-12T16:34:54.107Z · LW(p) · GW(p)
You don't need an additional ontological entity to reflect a judgment (and judgments can differ between different people or agents). You don't need special angry atoms to form an angry person, that property can be either in the pattern of how the atoms are arranged, or in the way you perceive their arrangement. See these posts:
↑ comment by jimrandomh · 2011-06-12T14:36:06.287Z · LW(p) · GW(p)
I don't think a computer program can have any moral value, therefore, without the presence of a soul, people also have no moral value.
It's hard to build intuitions about the moral value of intelligent programs right now, because there aren't any around to talk to. But consider a hypothetical that's as close to human as possible: uploads. Suppose someone you knew decided to undergo a procedure where his brain would be scanned and destroyed, and then a program based on that scan was installed on a humanoid robot body, so that it would act and think like he did; and when you talked to the robot, he told you that he still felt like the same person. Would that robot and the software on it have moral value?
Replies from: Perplexed, HopeFox
↑ comment by Perplexed · 2011-06-12T16:19:44.072Z · LW(p) · GW(p)
... consider a hypothetical that's as close to human as possible: uploads.
I would have suggested pets. Or the software objects of Chiang's story.
It is interesting that HopeFox's intuitions rebel at assigning moral worth to something that is easily copied. I think she is on to something. The pets and Chiang-software-objects which acquire moral worth do so by long acquaintance with the bestower of worth. In fact, my intuitions do the same with the humans whom I value.
Replies from: Peterdjones
↑ comment by Peterdjones · 2011-06-12T16:46:02.668Z · LW(p) · GW(p)
I agree that HopeFox is onto something there: most people think great works of art, or unique features of the natural world, have value, but that has nothing to do with having a soul... it has to do with irreducibility. An atom-by-atom duplicate of the Mona Lisa would not be the Mona Lisa, it would be a great work of science...
Replies from: Perplexed
↑ comment by Perplexed · 2011-06-12T17:09:23.140Z · LW(p) · GW(p)
... that has nothing to do with having a soul.
Well, it has nothing to do with what you think of as a 'soul'.
Personally, I'm not that taken with the local tendency to demand that any problematic word be tabooed. But I think that it might have been worthwhile to make that demand of HopeFox when she first used the word 'soul'.
Given my own background, I immediately attached a connotation of immortality upon seeing the word. And for that reason, I was puzzled at the conflation of moral worth with possession of a soul. Because my intuition tells me I should be more respectful of something that I might seriously damage than of someone that can survive anything I might do to it.
↑ comment by HopeFox · 2011-06-12T16:04:15.504Z · LW(p) · GW(p)
I agree, intuition is very difficult here. In this specific scenario, I'd lean towards saying yes - it's the same person with a physically different body and brain, so I'd like to think that there is some continuity of the "person" in that situation. My brain isn't made of the "same atoms" it was when I was born, after all. So I'd say yes. In fact, in practice, I would definitely assume said robot and software to have moral value, even if I wasn't 100% sure.
However, if the original brain and body weren't destroyed, and we now had two apparently identical individuals claiming to be people worthy of moral respect, then I'd be more dubious. I'd be extremely dubious of creating twenty robots running identical software (which seems entirely possible with the technology we're supposing) and assigning them the moral status of twenty people. "People", of the sort deserving of rights and dignity and so forth, shouldn't be the sort of thing that can be arbitrarily created through a mechanical process. (And yes, human reproduction and growth is a mechanical process, so there's a problem there too.)
Actually, come to think of it... if you have two copies of software (either electronic or neuron-based) running on two separate machines, but it's the same software, could they be considered the same person? After all, they'll make all the same decisions given similar stimuli, and thus are using the same decision process.
Replies from: MixedNuts, hairyfigment
↑ comment by MixedNuts · 2011-06-12T16:17:26.864Z · LW(p) · GW(p)
Yes, the consensus seems to be that running two copies of yourself in parallel doesn't give you more measure or moral weight. But if the copies receive different inputs, they'll eventually (frantic handwaving) diverge into two different people who both matter. (Maybe when we can't retrieve Copy-A's current state from Copy-B's current state and the respective inputs, because information about the initial state has been destroyed?)
↑ comment by hairyfigment · 2011-06-13T00:25:52.670Z · LW(p) · GW(p)
Have you read the quantum physics sequence? Would you agree with me that nothing you learn about seemingly unrelated topics like QM should have the power to destroy the whole basis of your morality?
↑ comment by Kaj_Sotala · 2011-06-12T15:21:55.546Z · LW(p) · GW(p)
Thanks.
Can you be more specific about what you mean by a soul? To me, it sounds like you're just using it as a designation of something that has moral value to you. But that doesn't need to imply anything supernatural; it's just an axiom in your moral system.
↑ comment by Oscar_Cunningham · 2011-06-12T13:14:58.606Z · LW(p) · GW(p)
Welcome!
I'm planning out an actual post at the moment
Exciting! What's it about?
Replies from: HopeFox
↑ comment by HopeFox · 2011-06-12T13:23:13.792Z · LW(p) · GW(p)
It's about how, if you're attacking somebody's argument, you should attack all of the bad points of it simultaneously, so that it doesn't look like you're attacking one and implicitly accepting the others. With any luck, it'll be up tonight.
comment by MrMind · 2011-04-20T12:12:55.245Z · LW(p) · GW(p)
Hello everybody, I'm Stefano from Italy. I'm 30, and my story about becoming a rationalist is quite tortuous... as a kid I was raised as a Christian, but not strictly so: my only obligation was to attend mass every Sunday morning. At the same time, from a young age I was fond of esoteric and scientific literature... With hindsight, I was a strange kid: by the age of 13 I already knew quite a lot about such things as the Order of the Golden Dawn or General Relativity... My fascination with computers and artificial intelligence began at approximately the same age, when I met a teacher who first taught me how to program: I then realized that this would be one of my greatest passions. To cut a long story short, over the years I discarded all the esoteric nonsense (by means of... well, experiments) and proceeded to explore deeper and deeper within physics, math and AI.
I found this site some months ago, and after a reasonable look around, and after having read a fair amount of the sequences, I feel ready to contribute... so here I am.
comment by MinibearRex · 2011-04-02T17:59:44.740Z · LW(p) · GW(p)
I started posting a while ago (and was lurking for a while beforehand), and only today found this post.
My parents were both science teachers, and I got an education in traditional rationality basically since birth (I didn't even know it had such a name as "traditional rationality"; I assumed it was just how you were supposed to think). I've always used that experimental mindset in order to understand people and the rest of the universe. I'm an undergrad in the Plan II honors program at the University of Texas at Austin, majoring in Chemistry Pre-Med. A friend of mine found HP:MoR on StumbleUpon and shared it with me. I caught up with the story very quickly, and one day, as I was bored waiting for Eliezer to post the next chapter, I came to Less Wrong. Lurked for a long time, read the sequences, and adopted technical rationality. One day I had something to say, so I created an account.
Goal in life: Astronaut.
comment by jefftk (jkaufman) · 2010-11-04T21:57:19.430Z · LW(p) · GW(p)
Jeff Kaufman. Working as a programmer doing computational linguistics in the Boston area. Found Less Wrong twice: first through the Intuitive Explanation of Bayes' Theorem, and then again recently through "HP and the Methods of Rationality". I value people's happiness, valuing that of those close to me more than that of strangers, but I value strangers' welfare enough that I think I have an obligation to earn as much as I can and live on as little as I can so I can give more to charity.
comment by SoulAllnighter · 2010-09-26T08:53:01.790Z · LW(p) · GW(p)
G'day LW. I'm an Aussie currently studying at the Australian National University in Canberra. My name is Sam, and I should point out that the 'G'day' is just for fun; most Australians never use that phrase and it kinda makes me cringe.
At this very moment I'm trying to finish my thesis on the foundations of inductive reasoning, which I guess is pretty relevant to this community. A big part of my thesis is to translate a lot of very technical mathematics regarding Bayesianism and Sollomonoff induction into philosophical and intuitive explanations, so this whole site is really useful to me in just seeing how people think about rationalism and the mechanics of beliefs.
Although my entire degree has been focused on the rational side of the human spectrum, I remain a lot more open-minded, and I think our entire education system regards math and physics too highly and does not leave enough room for creativity. Although creative subjects exist in the arts, the general culture is to regard them as intellectually inferior in some sense, which has led to a hugely skewed idea of intelligence.
The saying goes "the map is not the territory" and although we can continually refine our maps through science and math I think truly understanding the territory can only be achieved through direct experience.
I'm also very worried about the state of the world, and it is exactly through rational, open forums such as this that much-needed progress can be discussed and advanced.
I guess I have a lot to say, and instead of posting it here I should save it for an actual post, whenever I get time. But it's refreshing to see such an interesting online community amongst the seemingly endless rubbish on the net.
Replies from: Wei_Dai
↑ comment by Wei Dai (Wei_Dai) · 2010-09-26T19:39:20.933Z · LW(p) · GW(p)
Welcome to Less Wrong! I think you might be interested in these posts of mine, where I develop some standard and non-standard interpretations of probability theory. Let me know what you think. (BTW, I think you misspelled Solomonoff's name?)
comment by aurasprw · 2010-08-24T00:33:18.974Z · LW(p) · GW(p)
Hey Lesswrong! I'm just going to ramble for a second..
I like art, social sciences, philosophy, gaming, rationality and everything that falls in between. Examples include Go, Evolutionary Psychology, Mafia (aka Werewolves), Improvisation, Drugs and Debate.
See you if I see you!
comment by srjskam · 2010-06-15T23:17:22.435Z · LW(p) · GW(p)
Heikki, 30, Finnish student of computer engineering. Found Less Wrong via the IRC channel of the Finnish Transhumanist Association, which I found by random surfing ("Oh, there's a name for what I am?").
As for becoming a rationalist, I'd say the recipe was no friends and a good encyclopedia... Interest in ideas, unhindered by the baggage of standard social activities. One of the most influential single things was probably finding evolution quite early on. I remember (might be a false memory) having thought it would sure make sense if a horse's hoof was just one big toe, and then finding the same classic observation explained in the mentioned encyclopedia... That or dinosaurs. Anyway, fast forward via teenage bible-bashing and a fair amount of (hard) scifi etc. to now being here.
As the first sentence might suggest, I'm not doing nor have done anything of much interest to anyone. Well. Back to lurking, thanks to SilasBarta for the friendly welcome :) .
comment by Leafy · 2010-02-18T23:45:46.512Z · LW(p) · GW(p)
Hi everyone.
My name is Alan Godfrey.
I am fascinated by rational debate and logical arguments, and I appear to have struck gold in finding this site! I am the first to admit my own failings in these areas but am always willing to learn and grow.
I'm a graduate of mathematics from Trinity Hall, Cambridge University, and probability and statistics have always been my areas of expertise - although I find numbers so much more pleasant to play with than theorems and proofs, so bear with me!
I'm also a passive member of Mensa. While most of it does not interest me, the numerical, pattern-spotting and spatial-awareness puzzles it is associated with have always been a big passion of mine.
I have a personal fascination with human psychology, especially my own, in a narcissistic way! Although I have no skill in this area.
I currently work for a specialist insurance company and head the catastrophe modelling function, which uses a baffling mixture of all of the above! It was through this that I attended a brief seminar at the 21st Century School in Oxford, which mentioned this site as an affiliation, although I had already found it a few months previously.
I come to this site with open eyes and an open mind. I hope to contribute insightful observation, engage in healthy discussion and ultimately come away better than I came in.
Replies from: bgrah449
↑ comment by bgrah449 · 2010-02-19T00:06:42.020Z · LW(p) · GW(p)
Out of curiosity, are you an actuary?
Replies from: Leafy
↑ comment by Leafy · 2010-02-19T08:45:08.723Z · LW(p) · GW(p)
Actually, no, I am not. I began studying for the actuarial exams when I started work, and I passed the ones I took, but I stopped studying 3 years ago.
I found them very interesting but sadly of only minor relevance to the work I was doing, and since I was not intending to become an actuary, and therefore was not being afforded any study leave in which to progress with them, I decided to focus my spare time on my own career path instead.
Why do you ask?
comment by arthurlewis · 2009-04-16T16:13:56.406Z · LW(p) · GW(p)
- Handle: arthurlewis
- Location: New York, NY
- Age: 28
- Education: BA in Music.
- Occupation: Musician / Teacher / Mac Support Guy
- Blog/Music: http://arthurthefourth.com
My career as a rationalist began when I started doing tech support, and realized the divide between successful troubleshooting and what most customers tried to do. I think the key to "winning" is to challenge your assumptions about how to win, and what winning is. I think that makes me an instrumental rationalist, but I'm not quite sure I understand the term. I'm here because OB and LW are among the closest things I've ever seen to an honest attempt to discover truth, whatever that may turn out to mean. And because I really like the phrase "Shut up and calculate!"
Note to new commenters: The "Help" link below the comment box will give you formatting tips.
Replies from: MBlume
comment by Paul Crowley (ciphergoth) · 2009-04-16T09:44:34.730Z · LW(p) · GW(p)
This community is too young to have veterans. Since this is the first such post, I think we should all be encouraged to introduce ourselves.
Thanks for doing this!
Replies from: MBlume
comment by kajro · 2012-06-23T00:06:26.476Z · LW(p) · GW(p)
I'm a 20-year-old mathematics/music double major at NYU. Mainly here because I want to learn how to wear Vibrams without getting self-conscious about it.
Replies from: Kevin, John_Maxwell_IV
↑ comment by Kevin · 2012-06-23T01:11:12.193Z · LW(p) · GW(p)
I get nothing but positive social affect from Ninja Zemgears. http://www.amazon.com/s/ref=nb_sb_noss_1?url=search-alias%3Daps&field-keywords=zemgear
Cheaper than Vibrams, more comfortable, less durable, less agile, much friendlier looking.
Replies from: kajro, Alicorn
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-06-23T01:05:22.523Z · LW(p) · GW(p)
Hi there!
This might help: http://www.psych.cornell.edu/sec/pubPeople/tdg1/Gilo.Medvec.Sav.pdf
Replies from: kajro
↑ comment by kajro · 2012-06-23T03:05:46.652Z · LW(p) · GW(p)
Is this some kind of LW hazing, linking to academic papers in an introduction thread? (I joke, this looks super interesting).
Replies from: John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-06-23T03:24:02.531Z · LW(p) · GW(p)
It was either that or the Psychology Today article. (Pretty sure Psychology Today is where I learned about the concept, but googling found the paper.)
comment by kateblu · 2011-12-04T03:44:36.051Z · LW(p) · GW(p)
Hello. I found this place as a result of reading Yudkowsky's intuitive explanation of Bayes' Theorem. I think we are like a very large group of blind people each trying to describe the elephant on the basis of the small part we touch. However, if I can aggregate the tactile observations of a large number of us blind people, I might end up with a pretty good idea of what that elephant looks like. That's my goal - to build a coherent and consistent mental picture of that elephant.
Replies from: Desrtopa
↑ comment by Desrtopa · 2011-12-06T17:19:12.016Z · LW(p) · GW(p)
I honestly have some pretty bad associations for that metaphor. The parable makes sense, but I find that it's almost invariably (indeed, even in its original incarnations) presented with the implication "if we could pool our knowledge and experiences, we would come away with an understanding that resembles what I already believe."
Replies from: kateblu
↑ comment by kateblu · 2011-12-07T02:52:15.700Z · LW(p) · GW(p)
I have no prior belief as to what this elephant looks like, and I am continuously surprised and challenged by the various pieces that have to somehow fit into the overall picture. I don't worry whether my mental construct accords to Reality. I live with the fact that my limited understanding of quarks is probably not how they are understood by a particle physicist. But I am driven to keep learning more and somehow to fit it all together. Commenting helps me to articulate my personal theory of everything. But I need critical feedback from others to help me spot the inconsistencies, to force me not to be lazy, and to point out the gaps in my knowledge.
comment by zntneo · 2011-04-04T18:35:32.758Z · LW(p) · GW(p)
Hello, my name is Zachary Aletheia (when my wife and I got married we decided to choose a last name based on something that had meaning to us; aletheia means truth in Greek, and we both have a passion for finding out the truth). Looking back on my journey to being a rationalist, I think it was a 2-step process (though given how I've repeatedly thought about it, I have probably changed the details in my memory quite a bit). I think the first step was during an anthropology class: I watched a film about "magic" (I was a neo-pagan at the time who believed I could manipulate energy with my mind), and how absurd the video seemed really made me want to find a way to hold beliefs that aren't easy for others to see as absurd or laughable. From there I read quite a lot about logic (I still have a love affair with pure logic; I think I own 3 books on the subject and recently bought one from lukeprog). This all occurred while I was a computer engineering undergraduate.
When I couldn't pass physics (which at the time I thought, due to self-serving bias, was because I was more interested in what I got my degree in), I decided to switch majors to psychology. During this time I still took lots of vitamins and supplements and was even a 9/11 truther for a while. Then I took a class called "Cognition" where we learned about quite a few of the biases and heuristics that are talked about on LW. Since then I have started listening to a ton of skeptic podcasts and in general have tried to be a better rationalist.
One area where I do seem to have a hard time being a rationalist is myself. I hold myself in very low self-esteem (for instance, I truly debated whether I should post on here, because everyone seems so brilliant; how could I possibly add anything?). I am hoping to try to apply reason to that area of my life.
When it comes to life goals, I am still trying to figure that out. I am leaning towards becoming a psychology prof, but I'm really not sure.
Oh, I found Less Wrong due to lukeprog and CSA. I have since become basically addicted to reading posts on it.
Oh, and I live in Seaside, CA; if anyone lives near there I would love to go to a LW meet-up.
Replies from: Oscar_Cunningham
↑ comment by Oscar_Cunningham · 2011-04-04T18:38:50.716Z · LW(p) · GW(p)
Hi, welcome to LessWrong!
comment by artsyhonker · 2010-12-28T13:36:56.132Z · LW(p) · GW(p)
I came across a post on efficiency of charity, and joined in order to be able to add my comments. I'm not sure I would identify myself as a rationalist at all, though I share some of what I understand to be rationalist values.
I am a musician and a teacher. I'm also a theist, though I hope to be relatively untroublesome about this and I have no wish to proselytize. Rather, I'm interested in exploring rational ways of discussing or thinking about moral and ethical issues that have more traditionally been addressed within a religious framework.
comment by Deltamatic · 2010-12-22T11:06:30.087Z · LW(p) · GW(p)
Hello all. I want to sign up for cryonics, but am not sure how. Is there a guide? What are the differences in the process for minors? [I pressed enter in the comment box but there aren't any breaks in the comment itself; how do you make breaks between lines in comments?] I'm a sixteen-year-old male from Louisiana in the US. I was raised Christian and converted to atheism a few months ago. I found Less Wrong from Eliezer's site--I don't remember how I found that--and have been lurking and reading sequences since.
Replies from: None, ArisKatsaris, quinsie, Oscar_Cunningham
↑ comment by [deleted] · 2011-10-31T03:58:34.952Z · LW(p) · GW(p)
Contact Rudi Hoffman. Today.
Cryonics is expensive on a sixteen-year-old's budget. Rudi can get you set up with something close to your price range. You can expect it to be the cost of life insurance, plus maybe $200 a year, with the Cryonics Institute. If you're in good health, my vague expectation is that your life insurance will be on the order of $60/month.
This is judging by my experiences and assuming that these things scale linearly and that CI hasn't significantly changed their rates.
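(Rough arithmetic on those figures, if they still hold: $60/month of life insurance is $720/year, plus roughly $200/year on top of that, so on the order of $920/year all told.)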
↑ comment by ArisKatsaris · 2011-10-31T04:27:22.305Z · LW(p) · GW(p)
Putting two spaces after a line (before the line break) will produce a single line break, like this:
Line One
Line Two
Line Three
Putting two returns will produce a new paragraph like this:
Paragraph 1
Paragraph 2
Paragraph 3
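For reference, the raw source of the examples above looks roughly like this, with "·" standing in for the otherwise-invisible trailing spaces (the dot is just illustrative notation here, not part of the syntax):

    Line One··
    Line Two··
    Line Three

    Paragraph 1

    Paragraph 2

    Paragraph 3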
↑ comment by quinsie · 2011-10-31T04:19:28.802Z · LW(p) · GW(p)
You make breaks in the comment box with two returns.
Just one will not make a line.
As to your actual question, you should probably check your state's laws about wills. I don't know if Louisiana allows minors to write a will for themselves, and you will definitely want one saying that your body is to be turned over to the cryonics agency of your choice (usually either the Cryonics Institute or Alcor) upon your death. You'll also probably want to get a wrist bracelet or dog tags informing people to call your cryonicist in the event that you're dead or incapacitated.
↑ comment by Oscar_Cunningham · 2010-12-22T12:04:13.599Z · LW(p) · GW(p)
[I pressed enter in the comment box but there aren't any breaks in the comment itself; how do you make breaks between lines in comments?]
Press enter twice. I don't know why.
comment by danield · 2010-08-30T10:52:01.585Z · LW(p) · GW(p)
Hi Less Wrong,
I'm a computer scientist currently living in Seattle. I used to work for Google, but I've since left to work on a game-creation-software startup. I came to Less Wrong to see Eliezer's posts about Friendly AI and stayed because a lot of interesting philosophical discussion happens here. It's refreshing to see people engaging earnestly with important issues, and the community is supportive rather than combative; nice work!
I'm interested in thinking clearly about my values and helping other people think about theirs. I was surprised to see that there hasn't been much discussion here about moral, animal-suffering-based vegetarianism or veganism. It seems to me that this is a simple, but high-impact, step towards reflective equilibrium. Has there been a conclusive argument against it here, or is everyone on LW already vegetarian (I wish)?
I'd be very happy to talk with anyone about moral vegetarianism in a PM or in a public setting. Even if you don't want to discuss it, I encourage you to think about it; my relationship with animals was a big inconsistency in my value system, and in retrospect it was pretty painless to patch, since the arguments are unusually straightforward and the causal chain is short.
Replies from: Morendil, jacob_cannell, thomblake
↑ comment by jacob_cannell · 2010-09-01T23:19:46.527Z · LW(p) · GW(p)
I've since left to work on a game-creation-software startup
Does this startup have a website or anything? I'm working for a gaming tech startup ATM in a different area, but I'm quite interested in game-creation software.
Replies from: danield
↑ comment by danield · 2010-09-02T22:52:26.066Z · LW(p) · GW(p)
No, no website-- it's just me right now, and work started about a week ago, so it'll be a while yet. Calling it a "startup" is just a way to reassure my parents that I'm doing something with my time :)
The basic premises behind my approach to game-creation software are:
- The game must always be runnable
- The game must be easily shareable to OSX and Windows users
- The user cannot be expected to program (variables, threading, even loops and math should be as scarce as possible)
- Limitations must be strict (don't let users try to create blockbuster-level games, or they'll become discouraged and stop trying)
I'd like to get a working prototype up, send it around to a few testers, and iterate before getting sidetracked into web design. I've found that I can sink a distressing amount of time into getting my CSS "just right". I'll definitely put you on my list for v.0.5 if you PM me an email address.
I see you did a game startup for some years; any tips for someone just starting out? And does your current venture have a website?
comment by daedalus2u · 2010-07-20T23:08:57.508Z · LW(p) · GW(p)
Hi, my name is Dave Whitlock, I have been a rationalist my whole life. I have Asperger's, so rationalism comes very easily to me, too easily ;) I have a blog
http://daedalus2u.blogspot.com/
Which is mostly about nitric oxide physiology, but that includes a lot of stuff. Lately I have been working a lot on neurodevelopment and especially on autism spectrum disorders.
I comment a fair amount in the blogosphere, Science Based Medicine, neurologica, skepchick, Left brain-right brain and sometimes Science blogs; pretty much only under the daedalus2u pseudonym. Sb seems to be in a bit of turmoil right now, so it is unclear how that will fall out.
I am extremely liberal, and I think I come by that completely rationally, starting from the premise that all people have the same human rights and the same human obligations to other humans (including yourself). This is pretty well codified in the Universal Declaration of Human Rights (which I think is insufficiently well followed in many places).
comment by sclamons · 2010-04-19T21:10:05.417Z · LW(p) · GW(p)
Hello from the lurking shadows!
Some stats:
- Name: Samuel Clamons
- Birth Year: 1990
- Location: College of William and Mary or northern VA, depending on the time of year
- Academic interests: Biology, mathematics, computer science
- Personal interests: Science fiction, philosophy, understanding quantum mechanics, writing.
I've pretty much always been at least an aspiring rationalist, and I convinced myself of atheism at a pretty early age. My journey to LW started with my discovery of Aubrey de Grey in middle school and my discovery of the transhumanism movement in high school. Some internet prodding brought me to SL4, but I was intimidated by the overwhelming number of prior posts and didn't really read much of it. The little I did read, however, led me to Eliezer's Creating Friendly AI, which struck me on perusal as the most intelligently-written thing I'd read since The Selfish Gene. Earlier this year, the combination of reading through a few of Gardner Dozois' "best of" short story collections and the discovery of Google Reader brought me to some of Eliezer's posts on AI and metaethics, and I've been reading through LW ever since. I'm currently plowing slowly through Eliezer's quantum physics sequence while trying not to fall behind too much on new threads.
My primary short-term goal is to learn as much as I can while I'm still young and plastic. My primary mid-range goals are to try to use technology to enhance my biology and to help medical immortality become practical and available while I'm still alive. My long-term goals include understanding physics, preserving what's left of the environment, and maximizing my happiness (while remaining within reasonable bounds of ethics).
I also have a passing but occasionally productive interest in writing science fiction, as well as a strong interest in reading it.
Replies from: 6n5x1hn1sq
↑ comment by 6n5x1hn1sq · 2010-08-04T16:55:42.840Z · LW(p) · GW(p)
Didn't know where else to find S. E. C. Don't know if you'll see this.
comment by utilitymonster · 2010-04-19T12:31:25.246Z · LW(p) · GW(p)
I'm a philosophy PhD student. I studied math and philosophy as an undergrad. I work on ethics and a smattering of Bayesian topics. I care about maximizing the sum of desirable experiences that happen in the future. In less noble moments, I care more immediately about advancing my career as a philosopher and my personal life.
I ran into OB a couple years ago when Robin Hanson came and gave a talk on disagreement at a seminar I was attending. I started reading OB, and then I drifted to LW territory a few months ago.
At first, much of the discussion here sounded crazy to me. It often still does. But I thought I'd give it a detailed look, since everyone here seems to have the same philosophical prejudices as me (Bayesian utilitarian atheist physicalists).
I like discussion of Bayesian topics and applied ethics best.
comment by ThomasRyan · 2010-02-02T17:48:29.445Z · LW(p) · GW(p)
Hello.
Call me Thomas. I am 22. The strongest force directing my life can be called an extreme phobia of disorder. I came across Overcoming Bias and Eliezer Yudkowsky's writings around the same time, in high school, shortly after reading GEB and The Singularity Is Near.
The experience was not a revelation but a relief. I am completely sane! Being here is solace. The information here is mostly systematized, which has greatly helped to organize my thoughts on rationality and has saved me a great amount of time.
I am good at tricking people into thinking I am smart, a trick you guys can easily catch. And I care about how you guys perceive me, which means I will have to work hard if I want to be a valuable contributor. That is something I am not used to (working hard), since I usually do good enough work with minimal effort.
My greatest vices are romantic literature, smooth language, and flowery writing. From Roman de la Rose, to The Knight's Tale, to Paradise Lost, to One Hundred Years of Solitude. That crap is like candy to me.
Bad music repulses me. I get anxious and irritable and will probably throw a fit if I don't get away from the music. Anything meticulous or tedious will make me antsy and shaky. Bad writing also has the same effect on me. I am punctilious, though. There's a difference.
My favorite band is Circulatory System, which speaks directly to my joys and fears and hopes. If you haven't listened to them, I highly recommend you do so. The band name means "Human." It is about what it means to be us, about the circular nature of our sentience, and about the circles drawn in history with every new generation. http://www.youtube.com/watch?v=a_jidcdzXuU
I have opted out of college. I do not learn well in lectures. They are too slow, tedious, and meticulous. Books hold my attention better.
My biggest mistake? In school, never practicing retaining information. I do not have my months memorized and my vocabulary is terrible. It was much more fun to use my intelligence to "get the grade" than it was to memorize information. Now, this is biting me on the butt. I need to start practicing memorizing stuff.
I am currently in a good situation. My mom got a job far from her house, and she has farm animals. I made a deal with her, where I watch her house and the animals for free if she lets me stay there. I will be in this position for at least another year.
I have enough web design skills to be useful to web design firms, which brings me my income. I am also a hobbyist programmer, though not good enough yet to turn that skill into money.
I want to teach people to be more rational; that's what I want to do with my life. I am far from being the writer I want to be, and I have not yet made my ideas congruent and clear.
Anybody with good recommendations on how to best spend this year?
Thomas.
Replies from: ciphergoth, Saviorself138↑ comment by Paul Crowley (ciphergoth) · 2010-02-02T18:38:59.411Z · LW(p) · GW(p)
Hello, and welcome to the site!
Replies from: ThomasRyan↑ comment by ThomasRyan · 2010-02-02T20:06:49.681Z · LW(p) · GW(p)
Thank you, I'll be seeing you around :) .
Anyway, I have been thinking of starting my year off by reading Chris Langan's CTMU, but I haven't seen anything written about it here or on OB. And I am very wary of what I put into my brain (including LSD :P).
Any opinions on the CTMU?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-02-02T20:17:45.318Z · LW(p) · GW(p)
Google suggests you mean this CTMU.
Looks like rubbish to me, I'm afraid. If what's on this site interests you, I think you'll get a lot more out of the Sequences, including the tools to see why the ideas in the site above aren't really worth pursuing.
Replies from: ThomasRyan↑ comment by ThomasRyan · 2010-02-02T20:51:33.414Z · LW(p) · GW(p)
Yeah, I know what it looks like: metaphysical rubbish. But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense. Also, from what I skimmed, it looks like a much deeper examination of reductionism and strange loops, which are ideas that I hold dear.
I've read and understand the sequences, though I'm not familiar enough with them to use them without a rationalist context.
Replies from: Eliezer_Yudkowsky, Morendil, mattnewport, pjeby, gregconen, ciphergoth, advael↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T21:28:09.256Z · LW(p) · GW(p)
But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense.
Eh, I'm smart too. Looks to me like you were right the first time and need to have greater confidence in yourself.
Replies from: Morendil↑ comment by Morendil · 2010-02-02T21:55:01.024Z · LW(p) · GW(p)
More to the point, you do not immediately fail the "common ground" test.
Pragmatically, I don't care how smart you are, but whether you can make me smarter. If you are so much smarter than I as to not even bother, I'd be wasting my time engaging your material.
Replies from: Eliezer_Yudkowsky, MrHen↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T22:05:18.144Z · LW(p) · GW(p)
I should note that the ability to explain things isn't the same attribute as intelligence. I am lucky enough to have it. Other legitimately intelligent people do not.
Replies from: Morendil, Username↑ comment by Morendil · 2010-02-02T22:11:30.549Z · LW(p) · GW(p)
If your goal is to convey ideas to others, instrumental rationality seems to demand you develop that capacity.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-02-02T22:26:40.993Z · LW(p) · GW(p)
Considering the extraordinary rarity of good explainers in this entire civilization, I'm saddened to say that talent may have something to do with it, not just practice.
Replies from: realitygrill↑ comment by realitygrill · 2010-02-20T17:28:52.833Z · LW(p) · GW(p)
I wonder what I should do. I'm smart, and I seem to be able to explain things that I know to people well. To my lament, I have the same problem as Thomas: I apparently suck at learning things so that they're internalized and in my long-term memory.
↑ comment by MrHen · 2010-02-02T22:06:30.292Z · LW(p) · GW(p)
I can learn from dead people, stupid people, or by watching a tree for an hour. I don't think I understand your point.
Replies from: Morendil↑ comment by Morendil · 2010-02-02T22:22:12.185Z · LW(p) · GW(p)
I didn't use the word "learn". My point is about a smart person conveying their ideas to someone. Taboo "smart". Distinguish the ability to reach goals from the ability to score high on mental aptitude tests. If they are goal-smart, and their goal is to convince, they will use their iq-smarts to develop the capacity to convince.
↑ comment by Morendil · 2010-02-02T21:49:45.578Z · LW(p) · GW(p)
However intelligent he is, he fails to present his ideas so as to gradually build a common ground with lay readers. "If you're so smart, how come you ain't convincing?"
The "intelligent design" references on his Wikipedia bio are enough to turn me away. Can you point us to a well-regarded intellectual who has taken his work seriously and recommends his work? (I've used that sort of bridging tactic at least once, Dennett convincing me to read Julian Jaynes.)
Replies from: Cyan↑ comment by Cyan · 2010-02-02T22:08:53.200Z · LW(p) · GW(p)
"If you're so smart, how come you ain't convincing?"
"Convincing" has long been a problem for Chris Langan. Malcolm Gladwell relates a story about Langan attending a calculus course in first year undergrad. After the first lecture, he went to offer criticism of the prof's pedagogy. The prof thought he was complaining that the material was too hard; Langan was unable to convey that he had understood the material perfectly for years, and wanted to see better teaching.
↑ comment by mattnewport · 2010-02-02T21:23:58.610Z · LW(p) · GW(p)
Being very intelligent does not imply not being very wrong.
Replies from: MartinB↑ comment by pjeby · 2010-11-02T18:40:25.713Z · LW(p) · GW(p)
Yeah, I know what it looks like: metaphysical rubbish.
It is. I got as far as this paragraph of the introduction to his paper before I found a critical flaw:
Of particular interest to natural scientists is the fact that the laws of nature are a language. To some extent, nature is regular; the basic patterns or general aspects of structure in terms of which it is apprehended, whether or not they have been categorically identified, are its “laws”. The existence of these laws is given by the stability of perception.
At this point, he's already begging the question, i.e. presupposing the existence of supernatural entities. These "laws" he's talking about are in his head, not in the world.
In other words, he hasn't even got done presenting what problem he's trying to solve, and he's already got it completely wrong, and so it's doubtful he can get to correct conclusions from such a faulty premise.
Replies from: Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-01-05T22:04:40.045Z · LW(p) · GW(p)
That's not a critical flaw. In metaphysics, you can't take for granted that the world is not in your head. The only thing you really can do is to find an inconsistency, if you want to prove someone wrong.
Langan has no problems convincing me. His attempt at constructing a reality theory is serious and mature and I think he conducts his business about the way an ordinary person with such aims would. He's not a literary genius like Robert Pirsig, he's just really smart otherwise.
I've never heard anyone present criticism of the CTMU that would actually imply an understanding of what Langan is trying to do. The CTMU has a mistake. It's that Langan believes (p. 49) the CTMU to satisfy the Law Without Law condition, which states: "Concisely, nothing can be taken as given when it comes to cosmogony." (p. 8)
According to the Mind Equals Reality Principle, the CTMU is comprehensive. This principle "makes the syntax of this theory comprehensive by ensuring that nothing which can be cognitively or perceptually recognized as a part of reality is excluded for want of syntax". (p. 15) But undefinable concepts can neither be proven to exist nor proven not to exist. This means the Mind Equals Reality Principle must be assumed as an axiom. But to do so would violate the Law Without Law condition.
The Metaphysical Autology Principle could be stated as an axiom, which would entail the nonexistence of undefinable concepts. This principle "tautologically renders this syntax closed or self-contained in the definitive, descriptive and interpretational senses". (p. 15) But it would be arbitrary to have such an axiom, and the CTMU would again fail to fulfill Law Without Law.
If that makes the CTMU rubbish, then Russell's Principia Mathematica is also rubbish, because it has a similar problem which was pointed out by Gödel. EDIT: Actually the problem is somewhat different than the one addressed by Gödel.
Langan's paper can be found here. EDIT: Fixed link.
Replies from: Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-01-10T15:28:52.263Z · LW(p) · GW(p)
To clarify, I'm not the generic "skeptic" of philosophical thought experiments. I am not at all doubting the existence of the world outside my head. I am just an apparently competent metaphysician in the sense that I require a Wheeler-style reality theory to actually be a Wheeler-style reality theory with respect to not having arbitrary declarations.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-01-10T18:34:42.623Z · LW(p) · GW(p)
There might not be many people here who are sufficiently up to speed on philosophical metaphysics to have any idea what a Wheeler-style reality theory, for example, is. My stereotypical notion is that the people at LW have been pretty much ignoring philosophy that isn't grounded in mathematics, physics or cognitive science from Kant onwards, and won't bother with stuff that doesn't seem readable from this viewpoint. The tricky thing that would help would be to somehow translate the philosopher-speak into lesswronger-speak. Unfortunately this'd require some fluency in both.
Replies from: Tuukka_Virtaperko, Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-01-13T01:02:05.626Z · LW(p) · GW(p)
It's not like your average "competent metaphysician" would understand Langan either. He wouldn't possibly even understand Wheeler. Langan's undoing is to have the goals of a metaphysician and the methods of a computer scientist. He is trying to construct a metaphysical theory which structurally resembles a programming language with dynamic type checking, as opposed to static typing. Now, metaphysicians do not tend to construct such theories, and computer scientists do not tend to be very familiar with metaphysics. Metaphysical theories tend to be deterministic instead of recursive, and have a finite preset number of states that an object can have. I find the CTMU paper a bit sketchy and missing important content besides having the mistake. If you're interested in the mathematical structure of a recursive metaphysical theory, here's one: http://www.moq.fi/?p=242
Formal RP doesn't require metaphysical background knowledge. The point is that because the theory includes a cycle of emergence, represented by the power set function, any state of the cycle can be defined in relation to other states and prior cycles, and the number of possible states is infinite. The power set function will generate a staggering amount of information in just a few cycles, though. Set R is supposed to contain sensory input and thus solve the symbol grounding problem.
↑ comment by Tuukka_Virtaperko · 2012-01-13T13:25:40.169Z · LW(p) · GW(p)
Of course the symbol grounding problem is rather important, so it doesn't really suffice to say that "set R is supposed to contain sensory input". The metaphysical idea of RP is something to the effect of the following:
Let n be 4.
R contains everything that could be used to ground the meaning of symbols.
- R1 contains sensory perceptions
- R2 contains biological needs such as eating and sex, and emotions
- R3 contains social needs such as friendship and respect
- R4 contains mental needs such as perceptions of symmetry and beauty (the latter is sometimes reducible to the Golden ratio)
N contains relations of purely abstract symbols.
- N1 contains the elementary abstract entities, such as symbols and their basic operations in a formal system
- N2 contains functions of symbols
- N3 contains functions of functions. In mathematics I suppose this would include topology.
- N4 contains information of the limits of the system, such as completeness or consistency. This information form the basis of what "truth" is like.
Let ℘(T) be the power set of T.
Solving the symbol grounding problem requires R and N to be connected. Let us assume that ℘(Rn) ⊆ Rn+1, i.e. that the power set of each level of R is contained in the next level up. R5 hasn't been defined, though. If we don't assume the subsets of R to emerge from each other, we'll have to construct much more complicated theories that are more difficult to understand.
This way we can assume there are two ways of connecting R and N. One is to connect them in the same order, and one in the inverse order. The former is set O and the latter is set S.
O set includes the "realistic" theories, which assume the existence of an "objective reality".
- ℘(R1) ⊆ O1 includes theories regarding sensory perceptions, such as physics.
- ℘(R2) ⊆ O2 includes theories regarding biological needs, such as the theory of evolution
- ℘(R3) ⊆ O3 includes theories regarding social affairs, such as anthropology
- ℘(R4) ⊆ O4 includes theories regarding rational analysis and judgement of the way in which social affairs are conducted
The relationship between O and N:
- N1 ⊆ O1 means that physical entities are the elementary entities of the objective portion of the theory of reality. Likewise:
- N2 ⊆ O2
- N3 ⊆ O3
- N4 ⊆ O4
S set includes "solipsistic" ideas in which "mind focuses to itself".
- ℘(R4) ⊆ S1 includes ideas regarding what one believes
- ℘(R3) ⊆ S2 includes ideas regarding learning, that is, adoption of new beliefs from one's surroundings. Here social matters such as prestige, credibility and persuasiveness affect which beliefs are adopted.
- ℘(R2) ⊆ S3 includes ideas regarding judgement of ideas. Here, ideas are mostly judged by how they feel. E.g., if a person is revolted by the idea of creationism, they are inclined to reject it even without rational grounds, and if it makes them happy, they are inclined to adopt it.
- ℘(R1) ⊆ S4 includes ideas regarding the limits of the solipsistic viewpoint. Sensory perceptions of objectively existing physical entities obviously present some kind of a challenge to it.
The relationship between S and N:
- N4 ⊆ S1 means that beliefs are the elementary entities of the solipsistic portion of the theory of reality. Likewise:
- N3 ⊆ S2
- N2 ⊆ S3
- N1 ⊆ S4
That's the metaphysical portion in a nutshell. I hope someone was interested!
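Since the construction above is nothing but sets and power sets, it can be sketched in a few lines of code. The following Python toy is purely illustrative: the token contents of R1-R4 are invented placeholders, and it equates each O and S level with the corresponding power set, where the text only requires a ⊆ relation.

    from itertools import combinations

    def emerge(t):
        """The power set operator P(T): the set of all subsets of T."""
        xs = list(t)
        return {frozenset(c) for r in range(len(xs) + 1)
                for c in combinations(xs, r)}

    # Toy stand-ins for the four strata of R; real members would be
    # perceptions, needs, and so on.
    R = {1: {"percept"}, 2: {"hunger"}, 3: {"friendship"}, 4: {"symmetry"}}

    # O connects R to N in the same order: P(R_n) feeds O_n.
    O = {n: emerge(R[n]) for n in (1, 2, 3, 4)}

    # S connects R to N in inverse order: P(R_4) feeds S_1, and so on.
    S = {n: emerge(R[5 - n]) for n in (1, 2, 3, 4)}

    # Repeated emergence blows up fast, since |P(T)| = 2 ** |T|:
    level = {"a", "b"}
    for cycle in (1, 2, 3):
        level = emerge(level)
        print(cycle, len(level))   # prints 4, then 16, then 65536

The last loop is the "staggering amount of information in just a few cycles" point in miniature.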
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-01-14T11:00:20.119Z · LW(p) · GW(p)
We were talking about applying the metaphysics system to making an AI earlier in IRC, and the symbol grounding problem came up there as a basic difficulty in binding formal reasoning systems to real-time actions. It doesn't look like this was mentioned here before.
I'm assuming I'd want to actually build an AI that needs to deal with symbol grounding, that is, it needs to usefully match some manner of declarative knowledge it represents in its internal state to the perceptions it receives from the outside world and to the actions it performs on it. Given this, I'm getting almost no notion of what useful work this theory would do for me.
Mathematical descriptions can be useful for people, but it's not given that they do useful work for actually implementing things. I can define a self-improving friendly general artificial intelligence mathematically by defining
FAI = <S, P*>
as an artificial intelligence instance, consisting of its current internal state S and the history of its perceptions up to the present P*,
a: FAI -> A*
as a function that gives the list of possible actions for a given FAI instance,
u: A -> Real
as a function that gives the utility of each action as a real number, with higher numbers given to actions that advance the purposes of the FAI better based on its current state and perception history, and
f: FAI * A -> S, P
as an update function that takes an action and returns a new FAI internal state with any possible self-modifications involved in the action applied, and a new perception item that contains whatever new observations the FAI made as a direct result of its action.
And there's a quite complete mathematical description of a friendly artificial intelligence; you could probably even write a bit of neat pseudocode using the pieces there, but that's still not likely to land me a cushy job supervising the rapid implementation of the design at SIAI, since I don't have anything that does actual work there. All I did was push all the complexity into the black boxes of the u, a and f.
I also implied a computational approach where the system enumerates every possible action, evaluates them all and then picks a winner with how I decided to split up the definition. This is mathematically expedient, given that in mathematics any concerns of computation time can be pretty much waved off, but appears rather naive computationally, as it is likely that both coming up with possible actions and evaluating them can get extremely expensive in the artificial general intelligence domain.
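The "neat pseudocode" might look something like the following Python sketch. The type stand-ins are invented, and u here takes the agent as an extra argument, since the u: A -> Real above implicitly depends on the current state and perception history anyway; all the actual work still hides inside a, u and f.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    State, Percept, Action = str, str, str   # stand-in types for the sketch

    @dataclass
    class FAI:
        state: State              # S: current internal state
        percepts: List[Percept]   # P*: full perception history

    def step(agent: FAI,
             a: Callable[[FAI], List[Action]],
             u: Callable[[FAI, Action], float],
             f: Callable[[FAI, Action], Tuple[State, Percept]]) -> FAI:
        """One naive cycle: enumerate all actions, score them, take the best."""
        best = max(a(agent), key=lambda act: u(agent, act))
        new_state, new_percept = f(agent, best)
        return FAI(new_state, agent.percepts + [new_percept])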
With the metaphysics thing, beyond not getting a sense of it doing any work, I'm not even seeing where the work would hide. I'm not seeing black box functions that need to do an unknowable amount of work, just sets with strange elements being connected to other sets with strange elements. What should you be able to do with this thing?
Replies from: Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-01-14T13:24:11.816Z · LW(p) · GW(p)
You probably have a much more grassroots-level understanding of the symbol grounding problem. I have only solved the symbol grounding problem to the extent that I have a formal understanding of its nature.
In any case, I am probably approaching AI from a point of view that is far from the symbol grounding problem. My theory does not need to be seen as a useful solution to that problem. But when a useful solution is created, I postulate it can be placed within RP. Such a solution would have to be an algorithm for creating S-type or O-type sets of members of R.
More generally, I would find RP to be useful as an extremely general framework for how AI or parts of AI can be constructed in relation to each other, especially with regard to understanding language and the notion of consciousness. This doesn't necessarily have anything to do with some more atomistic AI projects, such as trying to make a robot vacuum cleaner find its way back to the charging dock.
At some point, philosophical questions and AI will collide. Consider the following thought experiment:
We have managed to create such a sophisticated brain scanner that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?
- 1) The brain scanner is broken
- 2) The person is broken
In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.
RP should help with such problems because it is intended as an elegant, compact and flexible way of defining recursion while allowing the solution of the symbol grounding problem to be contained in the definition in a nontrivial way. That is, RP as a framework of AI is not something that says: "Okay, this here is RP. Just perform the function RP(sensory input) and it works, voilà." Instead, it manages to express two different ways of solving the symbol grounding problem and to define their accuracy as a natural number n. In addition, many emergence relations in RP are logical consequences of the way RP solves the symbol grounding problem (or, if you prefer, "categorizes the parts of the actual solution to the symbol grounding problem").
In the previous thought experiment, the AI should manage to understand that the scanner deterministically performs the operation ℘(R) ⊆ S, and does not define S in terms of anything else. The person, on the other hand, is someone whose information processing is based on RP or something similar.
But what you read from moq.fi is something we wrote just a few days ago. It is by no means complete.
- One problem is that ℘(T) does not seem to define actual emergences, but only all possible emergences.
- We should define functions for "generalizing" and "specifying" sets or predicates, in which generalization would create a new set or predicate from an existing one by adding members, and specifying would do so by reducing members.
- We should add a discard order to sets. Sets that are used often have a high discard order, but sets that are never used end up erased from memory. This is similar to nonused pathways in the brain dying out, and often used pathways becoming stronger.
- The theory does not yet have an algorithmic part, but it should have. That's why it doesn't yet do anything.
- ℘(Rn) should be defined to include a metatheoretic approach to the theory itself, facilitating modification of the theory with the yet-undefined generalizing and specifying functions.
Questions to you:
- Is T -> U the Cartesian product of T and U?
- What is *?
I will not guarantee having discussions with me is useful for attaining a good job. ;)
Replies from: Risto_Saarelma, Risto_Saarelma↑ comment by Risto_Saarelma · 2012-01-14T18:39:28.493Z · LW(p) · GW(p)
We have managed to create such a sophisticated brain scanner that it can tell whether a person is thinking of a cat or not. Someone is put into the machine, and the machine outputs that the person is not thinking of a cat. The person objects and says that he is thinking of a cat. What will the observing AI make of that inconsistency? What part of the observation is broken and results in nonconformity of the whole?
- 1) The brain scanner is broken
- 2) The person is broken
In order to solve this problem, the AI may have to be able to conceptualize the fact that the brain scanner is a deterministic machine which simply accepts X as input and outputs Y. The scanner does not understand the information it is processing, and the act of processing information does not alter its structure. But the person is different.
I don't really understand this part.
"The scanner does not understand the information but the person does" sounds like some variant of Searle's Chinese Room argument when presented without further qualifiers. People in AI tend to regard Searle as a confused distraction.
The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
I suppose the idea here is that there is some difference whether there is a human being sitting in the scanner, or, say, a toy robot with a state of two bits where one is I am thinking about cats and the other is I am broken and will lie about thinking about cats. With the robot, we could just check the "broken" bit as well from the scan when the robot is disagreeing with the scanner, and if it is set, conclude that the robot is broken.
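To make that concrete, here is a hypothetical two-bit robot in Python; the class and field names are invented for illustration:

    class ToyRobot:
        def __init__(self, thinking_about_cat: bool, broken: bool):
            self.thinking_about_cat = thinking_about_cat  # bit 1
            self.broken = broken                          # bit 2: lies if set

        def report(self) -> bool:
            """What the robot claims it is thinking."""
            if self.broken:
                return not self.thinking_about_cat
            return self.thinking_about_cat

    robot = ToyRobot(thinking_about_cat=True, broken=True)
    scan = robot.thinking_about_cat   # the scanner reads the actual bit
    print(scan, robot.report())       # True False: check 'broken' to resolve it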
I'm not seeing how humans must be fundamentally different. The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat. Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.
Sorry I keep skipping over your formalism stuff, but I'm still not really grasping the underlying assumptions behind this approach. (The underlying assumptions in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else", "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head")
The whole philosophical theory-of-everything thing does remind me of this strange thing from a year ago, where the building blocks for the theory were made out of the nowadays more fashionable category theory rather than set theory, though.
Replies from: Tuukka_Virtaperko, Tuukka_Virtaperko, Tuukka_Virtaperko, Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-02-08T11:59:46.893Z · LW(p) · GW(p)
I've read some of this Universal Induction article. It seems to operate from flawed premises.
If we prescribe Occam’s razor principle [3] to select the simplest theory consistent with the training examples and assume some general bias towards structured environments, one can prove that inductive learning “works”. These assumptions are an integral part of our scientific method. Whether they admit it or not, every scientist, and in fact every person, is continuously using this implicit bias towards simplicity and structure to some degree.
Suppose the brain uses algorithms - an uncontroversial supposition. From a computational point of view, the passage quoted above is like saying: "In order for a computer to not run a program, such as Indiana Jones and the Fate of Atlantis, the computer must be executing some command to the effect of "DoNotExecuteProgram('IndianaJonesAndTheFateOfAtlantis')".
That's not how computers operate. They just don't run the program. They don't need a special process for not running the program. Instead, not running the program is "implicitly contained" in the state of affairs that the computer is not running it. But this notion of implicit containment makes no sense for the computer. There are infinitely many programs the computer is not running at a given moment, so it can't process the state of affairs that it is not running any of them.
Likewise, the use of an implicit bias towards simplicity cannot be meaningfully conceptualized by humans. In order to know how this bias simplifies everything, one would have to know what information regarding "everything" is omitted by the bias. But if we knew that, the bias would not exist in the sense the author intends it to exist.
Furthermore:
This is in some way a contradiction to the well-known no-free-lunch theorems which state that, when averaged over all possible data sets, all learning algorithms perform equally well, and actually, equally poorly [11]. There are several variations of the no-free-lunch theorem for particular contexts but they all rely on the assumption that for a general learner there is no underlying bias to exploit because any observations are equally possible at any point. In other words, any arbitrarily complex environments are just as likely as simple ones, or entirely random data sets are just as likely as structured data. This assumption is misguided and seems absurd when applied to any real world situations. If every raven we have ever seen has been black, does it really seem equally plausible that there is equal chance that the next raven we see will be black, or white, or half black half white, or red etc. In life it is a necessity to make general assumptions about the world and our observation sequences and these assumptions generally perform well in practice.
The author says that there are variations of the no-free-lunch theorem for particular contexts. But he goes on to generalize that the notion of the no-free-lunch theorem means something independent of context. What could that possibly be? Also, such notions as "arbitrary complexity" or "randomness" seem intuitively meaningful, but what is their context?
The problem is, if there is no context, the solution cannot be proven to address the problem of induction. But if there is a context, it addresses the problem of induction only within that context. Then philosophers will say that the context was arbitrary, and formulate the problem again in another context where previous results will not apply.
In a way, this makes the problem of induction seem like a waste of time. But the real problem is about formalizing the notion of context in such a way that it becomes possible to identify ambiguous assumptions about context. That would be what separates scientific thought from poetry. In science, ambiguity is not desired and should therefore be identified. But philosophers tend to place little emphasis on this, and rather spend time dwelling on problems they should, in my opinion, recognize as unsolvable due to ambiguity of context.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-02-11T16:56:44.402Z · LW(p) · GW(p)
The omitted information in this approach is information with a high Kolmogorov complexity, which is omitted in favor of information with low Kolmogorov complexity. A very rough analogy would be to describe humans as having a bias towards ideas expressible in few words of English over ideas that need many words of English to express. Using Kolmogorov complexity for sequence prediction instead of English language for ideas in the construction gets rid of the very many problems of rigor involved in the latter, but the basic idea is pretty much the same. You look into things that are briefly expressible before things that must be expressed at length. The information isn't permanently omitted; it's just deprioritized. The algorithm doesn't start looking at the stuff you need long sentences to describe before it has convinced itself that there are no short sentences that describe the observations it wants to explain in a satisfactory way.
One bit of context that is assumed is that the surrounding universe is somewhat amenable to being Kolmogorov-compressed. That is, there are some recurring regularities that you can begin to discover. The term "lawful universe" sometimes thrown around in LW probably refers to something similar.
Solomonoff's universal induction would not work in a completely chaotic universe, where there are no regularities for Kolmogorov compression to latch onto. You'd also be unlikely to find any sort of native intelligent entities in such universes. I'm not sure if this means that the Solomonoff approach is philosophically untenable, but needing to have some discoverable regularities to begin with before discovering regularities with induction becomes possible doesn't strike me as that great a requirement.
If the problem of context is about exactly where you draw the data for the sequence which you will then try to predict with Solomonoff induction, in a lawless universe you wouldn't be able to infer things no matter which simple instrumentation you picked, while in a lawful universe you could pick all sorts of instruments - tracking the change of light over time, tracking temperature, tracking the luminosity of the Moon, for simple examples - and you'd start getting Kolmogorov-compressible data where the induction system could start figuring out repeating periods.
The core thing "independent of context" in all this is that all the universal induction systems are reduced to basically taking a series of numbers as input, and trying to develop an efficient predictor for what the next number will be. The argument in the paper is that this construction is basically sufficient for all the interesting things an induction solution could do, and that all the various real-world cases where induction is needed can be basically reduced into such a system by describing the instrumentation which turns real-world input into a time series of numbers.
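As a toy illustration of that bias (the hypotheses, their made-up description lengths, and the data are all invented for the example), weight every hypothesis that fits the observations by 2 to the power of minus its description length, a Solomonoff-style prior in miniature:

    # (name, rule for element n, made-up description length in bits)
    hypotheses = [
        ("always 1",            lambda n: 1,                 2),
        ("1 until n=9, then 0", lambda n: 1 if n < 9 else 0, 6),
        ("alternate 1,0",       lambda n: 1 - n % 2,         3),
    ]

    data = [1, 1, 1, 1]   # the observed sequence so far

    def posterior(hyps, observed):
        w = {name: 2.0 ** -bits if all(rule(n) == x
                                       for n, x in enumerate(observed)) else 0.0
             for name, rule, bits in hyps}
        total = sum(w.values())
        return {name: v / total for name, v in w.items()}

    print(posterior(hypotheses, data))
    # Both "always 1" and "1 until n=9, then 0" fit the data, but the simpler
    # one keeps ~94% of the weight; the complex one is deprioritized, not excluded.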
Replies from: Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-02-15T20:35:33.417Z · LW(p) · GW(p)
Okay. In this case, the article does seem to begin to make sense. Its connection to the problem of induction is perhaps rather thin. The idea of using low Kolmogorov complexity as justification for an inductive argument cannot be deduced as a theorem of something that's "surely true", whatever that might mean. And if it were taken as an axiom, philosophers would say: "That's not an axiom. That's the conclusion of an inductive argument you made! You are begging the question!"
However, it seems like advancements in computation theory have made people able to do at least remotely practical stuff in areas that bear resemblance to more inert philosophical ponderings. That's good, and this article might even be used as justification for my theory RP - given that the use of Kolmogorov complexity is accepted. I was not familiar with the concept of Kolmogorov complexity despite having heard of it a few times, but my intuitive goal was to minimize the theory's Kolmogorov complexity by removing arbitrary declarations and favoring symmetry.
I would say that there are many ways of solving the problem of induction. Whether a theory is a solution to the problem of induction depends on whether it covers the entire scope of the problem. I would say this article covers half of the scope. The rest is not covered, to my knowledge, by anyone other than Robert Pirsig and experts of Buddhism, but these writings are very difficult to approach analytically. Regrettably, I am still unable to publish the relativizability article, which is intended to succeed in the analytic approach.
In any case, even though the widely rejected "statistical relevance" and this "Kolmogorov complexity relevance" share the same flaw, if presented as an explanation of inductive justification, the approach is interesting. Perhaps, even, this paper should be titled: "A Formalization of Occam's Razor Principle". Because that's what it surely seems to be. And I think it's actually an achievement to formalize that principle - an achievement more than sufficient to justify the writing of the article.
Replies from: Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-02-15T21:31:54.424Z · LW(p) · GW(p)
Commenting the article:
"When artificial intelligence researchers attempted to capture everyday statements of inference using classical logic they began to realize this was a difficult if not impossible task."
I hope nobody's doing this anymore. It's obviously impossible. "Everyday statements of inference", whatever that might mean, are not exclusively statements of first-order logic, because Russell's paradox is simple enough to be formulated by talking about barbers. The liar paradox is also expressible with simple, practical language.
Wait a second. Wikipedia already knows this stuff is a formalization of Occam's razor. One article seems to attribute the formalization of that principle to Solomonoff, another one to Hutter. In addition, Solomonoff induction, which is essential for both, is not computable. Ugh. So Hutter and Rathmanner actually have the nerve to begin that article by talking about the problem of induction, when the goal is obviously to introduce concepts of computation theory? And they are already familiar with Occam's razor, and aware of it having, at least probably, been formalized?
Okay then, but this doesn't solve the problem of induction. They have not even formalized the problem of induction in a way that accounts for the logical structure of inductive inference and leaves room for various relevance operators. Nobody else has done that either, though. I should get back to this later.
↑ comment by Tuukka_Virtaperko · 2012-01-19T22:38:46.785Z · LW(p) · GW(p)
The intelligent agent model still deals with deterministic machines that take input and produce output, but it incorporates the possibility of changing the agent's internal state by presenting the output function as just taking the entire input history X* as an input to the function that produces the latest output Y, so that a different history of inputs can lead to a different output on the latest input, just like it can with humans and more sophisticated machines.
At first, I didn't quite understand this. But I'm reading Introduction to Automata Theory, Languages and Computation. Are you using the * in the same sense here as it is used in the following UNIX-style regular expression?
- '[A-Z][a-z]*'
This expression is intended to refer to all words that begin with a capital letter and do not contain any surprising characters such as ö or -. Examples: "Jennifer", "Washington", "Terminator". The * means [a-z] may have an arbitrary number of iterations.
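For what it's worth, the expression can be checked directly with Python's re module (fullmatch requires the whole word to fit the pattern):

    import re

    pattern = re.compile(r"[A-Z][a-z]*")

    for word in ["Jennifer", "Washington", "Terminator", "iPhone", "över"]:
        print(word, bool(pattern.fullmatch(word)))
    # The first three match; "iPhone" fails the capital-letter rule and
    # "över" fails the [a-z]-only rule.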
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-01-20T04:15:25.042Z · LW(p) · GW(p)
Yeah, that's probably where it comes from. The [A-Z] can be read as "the set of every possible English capital letter" just like X can be read as "the set of every possible perception to an agent", and the * denotes some ordered sequence of elements from the set exactly the same way in both cases.
↑ comment by Tuukka_Virtaperko · 2012-01-15T10:56:55.871Z · LW(p) · GW(p)
I don't find the Chinese room argument related to our work - besides, it seems to vaguely suggest that what we are doing can't be done. What I meant is that AI should be able to:
- Observe behavior
- Categorize entities into deterministic machines which cannot take a metatheoretic approach to their data processing habits and alter them.
- Categorize entities into agents who process information recursively and can consciously alter their own data processing or explain it to others.
- Use this categorization ability to differentiate entities whose behavior can be corrected or explained by means of social interaction.
- Use the differentiation ability to develop the "common sense" view that, given permission by the owner of the scanner and if deemed interesting, the robot need not ask the brain scanner itself for consent before taking it apart and fixing it.
- Understand that even if the robot were capable of performing incredibly precise neurosurgery, the person will understand the notion that the robot wishes to use surgery to alter his thoughts to correspond with the result of the brain scanner, and could consent to this or deny consent.
- Possibly try to have a conversation with the person in order to find out, why they said that they were not thinking of a cat.
Failure to understand this could make the robot naively both take machines apart and cut people's brains in order to experimentally verify which approach produces better results. Of course there are also other things to consider when the robot tries to figure out what to do.
I don't consider robots and humans fundamentally different. If the AI were complex enough to understand the aforementioned things, it also would understand the notion that someone wants to take it apart and reprogam it, and could consent or object.
The scanner can already do the extremely difficult task of mapping a raw brain state to the act of thinking about a cat, it should also be able to tell from the brain state whether the person has something going on in their brain that will make them deny thinking about a cat.
The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability, which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don't know whether bitmaps or image recognition were involved in that. If the cat is a problem, let's simplify the image to the black and white lines.
Things being deterministic and predictable from knowing their initial state doesn't mean they can't have complex behavior reacting to a long history of sensory inputs accompanied by a large amount of internal processing that might correspond quite well to what we think of as reflection or understanding.
Even the simplest entities, such as irrational numbers or cellular automata, can have complex behavior. Humans, too, could be deterministic and predictable given that the one analyzing a human has enough data and computing power. But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself. Every iteration of the map representing itself needs also to be included in the map, resulting in a requirement that the map contain an infinite amount of information. Only an external observer could make a finite map, but that's not what I had in mind when beginning this RP project. I do consider the goals of RP somehow relevant to AI, because I don't suppose it's okay for a robot to be unable to conceptualize its own thought very elaborately, if it is intended to be as human as possible, and maybe even able to write novels.
I am interested in the ability to genuinely understand the worldviews of other people. For example, the gap between scientific and religious people. In the extreme, these people think of each other in such a derogatory way that it is as if each viewed the other as having failed the Turing test. I would like robots to also understand the goals and values of religious people.
I'm still not really grasping the underlying assumptions behind this approach.
Well, that's supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don't know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I'm doing. That would suggest you cannot conceptualize idealistic ontology or you believe "mind" to refer to an empty set.
I see here the danger of rather trivial debates, such as whether I believe an AI could "experience" consciousness or reality. I don't know what such a question would even mean. I am interested in whether it can conceptualize them in ways a human could.
(The underlying assumptions in the computer science approach are, roughly, "the physical world exists, and is made of lots of interacting, simple, Turing-computable stuff and nothing else"
The CTMU also states something to the effect of this. In that case, Langan is making a mistake, because he believes the CTMU to be a Wheeler-style reality theory, which contradicts the earlier statement. In your case, I guess it's just an opinion, and I don't feel a need to say you should believe otherwise. But I suppose I can present a rather cogent argument against that within a few days. The argument would be in the language of formal logic, so you should be able to understand it. Stay tuned...
, "animals and humans are just clever robots made of the stuff", "magical souls aren't involved, not even if they wear a paper bag that says 'conscious experience' on their head")
I don't wish to be impolite, but I consider these topics boring and obvious. Hopefully I haven't missed anything important when making this judgement.
Your strange link is very intriguing. I very much like being given links of this kind. Thank you.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-01-15T14:54:17.372Z · LW(p) · GW(p)
About the classification thing: Agreed that it's very important that a general AI be able to classify entities into "dumb machines" and things complex enough to be self-aware, warrant an intentional stance and require ethical consideration. Even putting aside the ethical concerns, being able to recognize complex agents with intentions and model their intentions instead of their most likely massively complex physical machinery is probably vital to any sort of meaningful ability to act in a social domain with many other complex agents (cf. Dennett's intentional stance).
The latter has, to my knowledge, never been done. Arguably, the latter task requires a different ability, which the scanner may not have. The former requires acquiring a bitmap and using image recognition. It has already been done with simple images such as parallel black and white lines, but I don't know whether bitmaps or image recognition were involved in that. If the cat is a problem, let's simplify the image to the black and white lines.
I understood that the existing image reconstruction experiments measure the activation on the visual cortex when the subject is actually viewing an image, which does indeed get you a straightforward mapping to a bitmap. This isn't the same as thinking about a cat: a person could be thinking about a cat while not looking at one, and they could have a cat in their visual field while daydreaming or suffering from hysterical blindness, so that they weren't thinking about a cat despite having a cat image correctly show up in their visual cortex scan.
I don't actually know what the neural correlate of thinking about a cat, as opposed to having one's visual cortex activated by looking at one, would be like, but I was assuming interpreting it would require a much more sophisticated understanding of the brain, basically at the level of difficulty of telling whether a brain scan correlates with thinking about freedom, a theory of gravity or reciprocity. Basically something that's entirely beyond current neuroscience and more indicative of some sort of Laplace's-demon-like thought experiment where you can actually observe and understand the whole mechanical ensemble of the brain.
But RP is about the understanding a consciousness could attain of itself. Such an understanding could not be deterministic within the viewpoint of that consciousness. That would be like trying to have a map contain itself.
Quines are maps that contain themselves. A quining system could reflect on its entire static structure, though it would have to run some sort of emulation slower than its physical substrate to predict its future states. Hofstadter's GEB links quines to reflection in AI.
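For a concrete example, here is a minimal Python quine, a program whose output is exactly its own source:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

The %r inserts the string's own quoted representation into itself, which is the same trick of self-reference-by-quotation that Hofstadter describes.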
Well, that's supposed to be a good thing, because there are supposed to be none. But saying that might not help. If you don't know what consciousness or the experience of reality mean in my use (perhaps because you would reduce such experiences to theoretical models of physical entities and states of neural networks), you will probably not understand what I'm doing. That would suggest you cannot conceptualize idealistic ontology or you believe "mind" to refer to an empty set.
"There aren't any assumptions" is just a plain non-starter. There's the natural language we're using that's used to present the theory and ground the concepts in the theory, and natural language basically carries a billion years of evolution leading to the three billion base pair human genome loaded with accidental complexity, leading to something from ten to a hundred thousand years of human cultural evolution with even more accidental complexity that probably gets us something in the ballpark of 100 megabytes irreducible complexity from the human DNA that you need to build up a newborn brain and another 100 megabytes (going by the heuristic of one bit of permanently learned knowledge per one second) for the kernel of the cultural stuff a human needs to learn from their perceptions to be able to competently deal with concepts like "income tax" or "calculus". You get both of those for free when talking with other people, and neither when trying to build an AGI-grade theory of the mind.
This is also why I spelled out the trivial basic assumptions I'm working from (and probably did a very poor job at actually conveying the whole idea complex). When you start doing set theory, I assume we're dealing with things at the complexity of mathematical objects. Then you throw in something like "anthropology" as an element in a set, and I, still in math mode, start going, whaa, you need humans before you have anthropology, and you need the billion years of evolution leading to the accidental complexity in humans to have humans, and you need physics to have the humans live and run the societies for anthropology to study, and you need the rest of the biosphere for the humans to not just curl up and die in the featureless vacuum and, and.. and that's a lot of math. While the actual system with the power sets looks just like uniform, featureless soup to me. Sure, there are all the labels, which make my brain do the above i-don't-get-it dance, but the thing I'm actually looking for is the mathematical structure. And that's just really simple, nowhere near what you'd need to model a loose cloud of hydrogen floating in empty space, not to mention something many orders of magnitude more complex like a society of human beings.
My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like "morality", then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the corresponding mental concept for the word "morality" like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.
A theory of mind that can actually do the work needs to build up the same sort of kernel evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted out theory of mind could no doubt cut off an order of magnitude or two out of that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.
Replies from: Tuukka_Virtaperko, Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-01-15T20:07:15.461Z · LW(p) · GW(p)
It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.
There are many ways to answer that question. I have a flowchart and formulae. The opposite of that would be something to the effect of having the source code. I'm not sure why you expect me to have that. Was it something I said?
I thought I'd given you links to my actual work, but I can't find them. Did I forget? Hmm...
If you dislike metaphysics, only the latter is for you. I can't paste the content, because the formatting on this website apparently does not permit HTML formulae. Wait a second, it does permit formulae, but only LaTeX. I know LaTeX, but the formulae aren't in that format right now. I should maybe convert them.
You won't understand the flowchart if you don't want to discuss metaphysics. I don't think I can prove that something you don't know the nature of could be useful to you. You would have to know what it is and judge for yourself. If you don't want to know, that's okay.
I am currently not sure why you would want to discuss this thing at all, given that you do not seem quite interested in the formalisms, but you do not seem interested in metaphysics either. You seem to expect me to explain this stuff to you in terms of something that is familiar to you, yet you don't seem very interested in having a discussion where I would actually do that. If you don't know why you are having this discussion, maybe you would like to do something else?
There are quite probably others on LessWrong who would be interested in this, because there has been prior discussion of the CTMU. People interested in fringe theories, unfortunately, are not always the brightest of the lot, and I respect your ability to casually namedrop a bunch of things I will probably spend days thinking about.
But I don't know why you wrote so much about billions of years, babies, human cultural evolution, 100 megabytes and such. I am troubled by the thought that you might think I'm some loony hippie who actually needs a recap on those things. I am not yet feeling very comfortable in this forum because I perceive myself as vulnerable to being misrepresented as some sort of a fool by people who don't understand what I'm doing.
I'm not trying to change LessWrong. But if this forum has people criticizing the CTMU without having a clue of what it is, then I attain a certain feeling of entitlement. You can't just go badmouthing people and their theories and not expect any consequences if you are mistaken. You don't need to defend yourself either, because I'm here to tell you what recursive metaphysical theories such as the CTMU are about, or to recommend you not talk about the CTMU if you are not interested in metaphysics. I'm not here to bloat my ego by portraying other people as fools with witty rhetoric, and if you Google the CTMU, you'll find a lot of people doing precisely that to it, and you will understand why I fear that I, too, could be treated in such a way.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-01-16T08:11:08.821Z · LW(p) · GW(p)
I'm mostly writing this stuff trying to explain what my mindset, which I guess to be somewhat coincident with the general LW one, is like, and where it seems to run into problems with trying to understand these theories. My question about the assumptions is basically poking at something like "what's the informal explanation of why this is a good way to approach figuring out reality", which isn't really an easy thing to answer. I'm mostly writing about my own viewpoint instead of addressing the metaphysical theory, since it's easy to write about stuff I already understand, and a lot harder to try to understand something coming from a different tradition and make meaningful comments about it. Sorry if this feels like dismissing your stuff.
The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea of just how complex the mechanism is. Around the time of Russell and Wittgenstein, people could still get away with assuming that intelligent thought is not that formally complex; only after the 1950s did it start dawning just what a massive hairball of a mess human intelligence working in the real world is. Still, most philosophy seems to follow the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance, between the realm of human thought and basic logic, of a bookshelf full of volumes written by insane aliens, discovered by molecular biologists and cognitive scientists.
I'm not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I'm always aware that it needs to be dealt with somehow. For one thing, it's a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go "ah, yes, empiricism is indeed a thing, it goes in that slot in the theory". You can't understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what's going on with them.
For another thing, being aware of the evolutionary history of humans and the current physical constraints of human cognition and DNA can guide making an actual theory of mind from the ground up. The kludged up and sorta-working naturally evolved version might be equal to 100 000 pages of math, which is quite a lot, but also tells us that we should be able to get where we want without having to write 1 000 000 000 pages of math. A straight-up mysterian could just go, yeah, the human intelligence might be infinitely complex and you'll never come up with the formal theory. Before we knew about DNA, we would have had a harder time coming up with a counterargument.
I keep going on about the basic science stuff, since I have the feeling that the LW style of approaching things basically starts from mid-20th century computer science and natural science, not from the philosophical tradition going back to antiquity, and there's some sort of slight mutual incomprehension between it and modern traditional philosophy. It's a bit like C.P. Snow's Two Cultures thing. Many philosophers seem to be from Culture One, while LW is people from Culture Two trying to set up a philosophy of their own. Some key posts about LW's problems with philosophy are probably Against Modal Logics and A Diseased Discipline. Also there's the book Good and Real, which is philosophy being done by a computer scientist and which LW folk seem to find approachable.
The key ideas in the LW approach are that you're running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at every chance it gets, so you'll need to practice empirical science to figure out what's actually going on with life (plain old thinking hard won't help, since that'll just lead to your broken head-machinery tripping you up again), and that the end result of what you're trying to do should be a computable algorithm. Neither of these things shows up in traditional philosophy, since traditional philosophy got started before there was computer science or cognitive science or molecular biology. So LessWrongers will be confused by non-empirical attempts to get to the bottom of real-world stuff, and they will be confused if the get-to-the-bottom attempt doesn't look like it will end up being an algorithm.
I'm not saying this approach is better. Philosophers obviously spend a long time working through their stuff, and what I am doing here is basically just picking low-hanging fruit from science so recent that it hasn't percolated into the cultural background thought yet. But we are living in interesting times, when philosophers can stay mulling over conceptual analysis, and then all of a sudden scientists will barge in and go, hey, we were doing some empirical stuff with machines, and it turns out counterfactual worlds are actually sort of real.
Replies from: Tuukka_Virtaperko, Tuukka_Virtaperko↑ comment by Tuukka_Virtaperko · 2012-01-16T13:01:36.692Z · LW(p) · GW(p)
Sorry if this feels like dismissing your stuff.
You don't have to apologize, because you have been useful already. I don't require you to go out of your way to analyze this stuff, but of course it would also be nice if we could understand each other.
The reason I went on about the complexity of the DNA and the brain is that this is stuff that wasn't really known before the mid-20th century. Most of modern philosophy was done when people had some idea that the process of life is essentially mechanical and not magical, but no real idea of just how complex the mechanism is. Around the time of Russell and Wittgenstein, people could still get away with assuming that intelligent thought is not that formally complex; only after the 1950s did it start dawning just what a massive hairball of a mess human intelligence working in the real world is. Still, most philosophy seems to follow the same mode of investigation as Wittgenstein or Kant did, despite the sudden unfortunate appearance, between the realm of human thought and basic logic, of a bookshelf full of volumes written by insane aliens, discovered by molecular biologists and cognitive scientists.
That's a good point. The philosophical tradition of discussion I belong to was started in 1974 as a radical deviation from contemporary philosophy, which makes it pretty fresh. My personal opinion is that within decades or centuries, the largely obsolete mode of investigation you referred to will be mostly replaced by something that resembles what I and a few others are currently doing. This is because the old mode of investigation does not produce results. Despite intense scrutiny for 300 years, it has not provided an answer to so simple a philosophical problem as the problem of induction. Instead, it has corrupted the very writing style of philosophers. When one reads philosophical publications by authors with academic prestige, every other sentence seems somehow defensive, and the writer seems to be squirming in the inconvenience caused by his intuitive understanding that what he's doing is barren, though he doesn't know of a better option. It's very hard for a distinguished academic to go into the freaky realm and find out whether someone made sense but had a very different approach from the academic one. Aloof but industrious young people, with lots of ability but little prestige, are more suitable for that.
Nowadays the relatively simple philosophical problem of induction (the proof of the Poincaré conjecture is, by comparison, extremely complex) has been portrayed as such a difficult problem that if someone devises a theoretical framework which facilitates a relatively simple solution to it, academic people are very inclined to state that they don't understand the solution. I believe this is because they insist the solution should be something produced by several authors working together for a century - something that will make theoretical philosophy again appear glamorous. It's not that glamorous, and I don't think it was very glamorous to invent 0 either - whoever did that - but it was pretty important.
I'm not sure what good this ranting of mine is supposed to do, though.
I'm not expecting people to rewrite the 100 000 pages of complexity into human mathematics, but I'm always aware that it needs to be dealt with somehow. For one thing, it's a reason to pay more attention to empiricism than philosophy has traditionally done. As in, actually do empirical stuff, not just go "ah, yes, empiricism is indeed a thing, it goes in that slot in the theory". You can't understand raw DNA much, but you can poke people with sticks, see what they do, and get some clues on what's going on with them.
The Metaphysics of Quality, of which my RP is a much-altered instance, is an empiricist theory, written by someone who has taught creative writing at university, but who has also worked writing technical documents. The author has a pretty good understanding of evolution, social matters, computers, stuff like that. Formal logic is the only thing in which he does not seem proficient, which maybe explains why it took so long for me to analyze his theories. :)
If you want, you can buy his first book, Zen and the Art of Motorcycle Maintenance, from Amazon at the price of a pint of beer. (Tap me on the shoulder if this is considered inappropriate advertising.) You seem to be logically rather demanding, which is good. It means I should tell you that in order to attain an understanding of the MOQ that explains a lot more of the metaphysical side of RP, you should also read his second book. They are also available in every Finnish public library I have checked (maybe three or four libraries).
What more to say... Pirsig is extremely critical of the philosophical tradition starting from antiquity. I already know LW does not think highly of contemporary philosophy, and that's why I thought we might have something in common in the first place. I think we belong to the same world, because I'm pretty sure I don't belong to Culture One.
The key ideas in the LW approach are that you're running on top of a massive hairball of junky evolved cognitive machinery that will trip you up at every chance it gets
Okay, but nobody truly understands that hairball, if it's the brain.
the end result of what you're trying to do should be a computable algorithm.
That's what I'm trying to do! But it is not my only goal. I'm also trying to have at least some discourse with World One, because I want to finish a thing I began. My friend is currently in the process of writing a formal definition related to that thing, and I won't get far with the algorithm approach before he's finished that and is available for something else. But we are actually planning that. I'm not bullshitting you or anything. We have been planning to do that for some time already. And it won't be fancy at first, but I suppose it could get better and better the more we work on it, or the approach would maybe prove a failure, but that, again, would be an interesting result. Our approach is maybe not easily understood, though...
My friend understands philosophy pretty well, but he's not extremely interested in it. I have this abstract model of how this algorithm thing should be done, but I can't prove to anyone that it's correct. Not right now. It's just something I have developed by analyzing an unusual metaphysical theory for years. The reason my friend wants to do this apparently is that my enthusiasm is contagious and he does enjoy maths for the sake of maths itself. But I don't think I can convince people to do this with me on the grounds that it would be useful! Some time ago, people thought number theory was a completely useless but somehow "beautiful" form of mathematics. Now the products of number theory are used in top-secret military encryption, but the point is, nobody who originally developed number theory could have convinced anyone the theory would have such use in the future. So, I don't think I can have people working with me in hopes of attaining grand personal success. But I think I could meet someone who finds this kind of activity very enjoyable.
The "state basic assumptions" approach is not good in the sense that it won't go all the way to explaining RP. It's maybe a good starter, but I can't really transform RP into something that could be understood from an O point of view. That would be like my needing to express the equation x + 7 = 20 to you solely in terms of x + y = 20. You couldn't make any sense of that.
I really have to go now, actually I'm already late from somewhere...
↑ comment by Tuukka_Virtaperko · 2012-01-16T18:18:01.109Z · LW(p) · GW(p)
I commented on Against Modal Logics.
↑ comment by Tuukka_Virtaperko · 2012-01-15T18:12:59.574Z · LW(p) · GW(p)
A theory of mind that can actually do the work needs to build up the same sort of kernel that evolution and culture have set up for people. For the human ballpark estimate, you'd have to fill something like 100 000 pages with math, all setting up the basic machinery you need for the mind to get going. A very abstracted-out theory of mind could no doubt cut an order of magnitude or two off that, but something like Maxwell's equations on a single sheet of paper won't do. It isn't answering the question of how you'd tell a computer how to be a mind, and that's the question I keep looking at this stuff with.
You want a sweater. I give you a baby sheep, and it is the only baby sheep you have ever seen that is not completely lame or retarded. You need wool to produce the sweater, so why are you disappointed? Look, the mathematical part of the theory is something we wrote less than a week ago, and it is already better than any theory of this type I have ever heard of (three or four). The point is not that this would be excruciatingly difficult. The point is that for some reason, almost nobody is doing this. It probably has something to do with the severe stagnation in the field of philosophy. The people who could develop philosophy find the academic discipline so revolting that they don't.
I did not come to LessWrong to tell everyone I have solved the secrets of the universe, or that I am very smart. My ineptitude in math is the greatest single obstacle in my attempts to continue development. If I didn't know exactly one person who is good at math and wants to do this kind of work with me, I might be in an insane asylum, but no more about that. I came here because this is my life... and even though I greatly value the MOQ community, everyone on those mailing lists is apparently even less proficient in maths and logic than I am. Maybe someone here thinks this is fun and wants to have a fun creative process with me.
I would like to write a few of those 100 000 pages that we need. I don't get your point. You seem to require me to have written them before I have written them.
My confusion about the assumptions is basically that I get the sense that analytic philosophers seem to operate like they could just write the name of some complex human concept, like "morality", then throw in some math notation like modal logic, quantified formulas and set memberships, and call it a day. But what I'm expecting is something that teaches me how to program a computer to do mind-stuff, and a computer won't have the corresponding mental concept for the word "morality" like a human has, since the human has the ~200M special sauce kernel which gives them that. And I hardly ever see philosophers talking about this bit.
Do you expect to build the digital sauce kernel without any kind of a plan - not even a tentative one? If not, a few pages of extremely abstract formulae is all I have now, and frankly, I'm not happy about that either. I can't teach you much of anything you seem interested in, but I could really use some discussion with interested people. And you have already been helpful. You don't need to consider me someone who is aggressively imposing his views on individual people. I would love to find people who are interested in these things, because there are so few of them.
I had a hard time figuring out what you mean by basic assumptions, because I've been doing this for such a long time that I tend to forget what kind of metaphysical assumptions are generally held by people who like science but are uninterested in metaphysics. I think I've now caught up with you. Here are some basic assumptions.
- RP is about definable things. It is not supposed to make statements about undefinable things - not even that they don't exist, as you seem to believe.
- Humans come before anthropology in RP. The former is in O2 and the latter in O4. I didn't know how to tell you that, because I didn't know that this, rather than some other part of the theory, was what you wanted to hear in order not to go whaaa. I'd need to tell you everything, but that would involve a lot of metaphysics. But the theory is not a theory of the history of the world, if "world" is something that begins with the Big Bang.
- From your empirical scientific point of view, I suppose it would be correct to state that RP is a theory of how the self-conscious part of one person evolves during his lifetime.
- At least in the current simple instance of RP, you don't need to know anything about the metaphysical content to understand the math. You don't need to go out of math mode, because there are no nonstandard metaphysical concepts among the formulae.
- If you do go out of math mode and want to know what the symbols stand for, I think that's very good. But this can only be explained to you in terms of metaphysics, because empirical science simply does not account for everything you experience. Suppose you stop by the grocery store. Where's the empirical theory that accounts for that? Maybe some general sociological theory would. But my point is, no such empirical theory is actually implemented. You don't acquire a scientific explanation for the things you did in the store. Still you remember them. You experienced them. They exist in your self-conscious mind in some way that is not dependent on your conception of the relationship between topology and model theory, or on your understanding of why fission of iron does not produce energy, or of how one investor could single-handedly significantly affect whether a country joins the Euro. From your personal, what you might perhaps call "subjective", point of view, it does not even depend on your conception of cognitive science, unless you actually apply that knowledge to it. You probably don't do that all the time, although you do it sometimes.
- I don't subscribe to any kind of "subjectivism", whatever that might be in this context, or idealism, in the sense that something like that would be "true" in a meaningful way. But you might agree that when trying to develop the theory underlying self-conscious phenomenal and abstract experience, you can't begin from the Big Bang, because you weren't there.
- You could use RP to describe a world you experience in a dream, and the explanation would work as well as when you are awake. Physical theories don't work in that world. For example, if you look at your watch in a dream, then look away, and look at it again, the watch may display a completely different time. Or the watch may function, but when you take it apart, you find that instead of clockwork, it contains something a functioning mechanical watch will not contain, such as coins.
- RP is intended to relate abstract thought (O, N, S) to sensory perceptions, emotions and actions (R), but to define all relations of abstract entities to other abstract entities recursively.
- One difference between RP and the empirical theories of cosmology and such that you mentioned is that the latter will not describe the ability of person X to conceptualize his own cognitive processes in a way that can actually be used right now to describe what, or rather how, some person is thinking with respect to abstract concepts. RP does that.
- RP can be used to estimate the metaphysical composure of other people. You seem to place most of the questions you label "metaphysical" or "philosophical" in O.
- I don't yet know if this forum tolerates much metaphysical discussion, but my theory is based on about six years of work on the Metaphysics of Quality. That is not mainstream philosophy and I don't know how people here will perceive it. I have altered the MOQ a lot. Its latest "authorized" variant, from 1991, decisively included mostly just the O patterns. Analyzing the theory was very difficult for me in general. But maybe I will confuse people if I say nothing about the metaphysical side. So I'll think about what to say...
- RP is not an instance of relativism (except in the Buddhist sense), absolutism, determinism, indeterminism, realism, antirealism or solipsism. Also, I consider all those theories to be some kind of figures of speech, because I can't find any use for them except to illustrate a certain point in a certain discussion in a metaphorical fashion. In logical analysis, these concepts do not necessarily retain the same meaning when they are used again in another discussion. These concepts acquire definable meaning only when detached from the philosophical use and placed within a specific context.
- Structurally, RP resembles what I believe computer scientists call context-free languages, or programming languages with dynamic typing. I am not yet sure what the exact definition of the former is, but having written a few programs, I do understand what it means to do typing at run time (a minimal sketch follows below). The Western mainstream philosophical tradition does not seem to include any theories that would be analogues of these computer science topics.
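(To show what typing at run time means, here is a minimal Python sketch; the names and values are purely illustrative and have nothing to do with RP itself:)

    # Run-time (dynamic) typing: one name may hold values of different
    # types, and the type is inspected only when the value is used.
    x = 42
    print(type(x))      # <class 'int'>
    x = "forty-two"
    print(type(x))      # <class 'str'>

    def describe(value):
        # Dispatch happens at run time, based on the value's actual type.
        return "a number" if isinstance(value, int) else "something else"

    print(describe(7))  # a number
    print(describe(x))  # something else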
I have read GEB but don't remember much. I'll recap what a quine is. I tend to need to discuss mathematical things with someone face to face before I understand them, which slows down progress.
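(For reference, a quine is a program that prints its own source code. A minimal Python example, purely as an illustration:

    # This comment aside, the two lines below print themselves exactly.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Running it outputs the two code lines verbatim.)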
The cat/line thing is not very relevant, but apparently I didn't remember the experiment right. However, if the person and the robot could not see the lines at the same time for some reason - such as the robot needing to operate the scanner and thus not seeing inside the scanner - the robot could alter the person's brain to produce a very strong response to parallel lines in order to verify that the screen inside the scanner, which displays the lines, does not malfunction, is not unplugged, the person is not blind, etc. There could be more efficient ways of finding such things out, but if the robot has replaceable hardware and can thus live indefinitely, it has all the time in the world...
↑ comment by Tuukka_Virtaperko · 2012-01-15T23:39:03.664Z · LW(p) · GW(p)
According to the abstract, the scope of the theory you linked is a subset of RP. :D I find this hilarious, because the theory was described as "ridiculously broad". It seems to attempt to encompass all of O, and may contain interesting insight my work clearly does not contain. But RP defines a certain scope of things, and everything in this article seems to belong to O, with perhaps some N, without clearly differentiating the two. S is missing, which is rather usual in science. From the scientific point of view, it may be hard to understand what Buddhists could conceivably believe they achieve by meditation. They have practiced it for millennia, yet they did not do brain scans that would have revealed its beneficial effects, nor did they perform questionnaires and compile the results into statistics. But they believed it is good to meditate, and were not very interested in knowing why it is good. That belongs to the realm of S.
In any case, this illustrates an essential feature of RP. It's not so much a theory about "things", you know, cars, flowers, finances, as a theory about what the most basic kinds of things are, or about what kinds of options for the scope of any theory or statement are intelligible. It doesn't currently do much more, because the algorithm part is missing. It's also not necessarily perfect or anything like that. If something apparently coherent cannot be included in the scope of RP in a way that makes sense, maybe the theory needs to be revised.
Perhaps I could give a weird link in return. This is written by someone who is currently a Professor of Analytic Philosophy at the University of Melbourne. I find the theory to mathematically outperform Langan's in that it actually has mathematical content instead of some sort of a sketch. The writer expresses himself coherently and appears to understand in what style people expect to read that kind of text. But the theory does not recurse in interesting ways. It seems quite naive and simple to me and ignores the symbol grounding problem. It is practically an N-type theory, which only allegedly has S or O content. The writer also seems to make exaggerated interpretations of what Nagarjuna said. These exaggerated interpretations lead to making the same assumptions which are the root of the contradiction in the CTMU, but The Structure of Emptiness is not described as a Wheeler-style reality theory, so in that paper the assumptions do not lead to a contradiction, although they still seem to misunderstand Nagarjuna.
By the way, I have thought about your way of asking for basic assumptions. I guess I initially confused it with asking for some sort of axioms, but since you weren't interested in the formalisms, I didn't understand what you wanted. But now I have the impression that you asked me to make general statements about what the theory can do that are readily understood from the O viewpoint, and I think it has been an interesting approach for me, because I didn't use it in the MOQ community, which would have been unlikely to request that approach.
↑ comment by Risto_Saarelma · 2012-01-14T16:12:15.945Z · LW(p) · GW(p)
I'll address the rest in a bit, but about the notation:
Questions to you:
- Is T -> U the Cartesian product of T and U?
- What is *?
T -> U is a function from set T to set U. P* means a list of elements of set P, where the difference from a set is that the elements of a list are in a specific order.
The notation as a whole was a somewhat fudged version of the intelligent agent formalism. The idea is to set up a skeleton for modeling any sort of intelligent entity, based on the idea that the entity only learns things from its surroundings through a series of perceptions, which might for example be a series of matrices corresponding to the images a robot's eye camera sees, and can only affect its surroundings by choosing an action it is capable of, such as moving a robotic arm or displaying text to a terminal.
The agent model is pretty all-encompassing, but also not that useful except as the very first starting point, since all of the difficulty is in the exact details of the function that turns the probably massive amount of data in the perception history into a well-chosen action that efficiently furthers the goals of the AI.
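To make that concrete, here is a minimal sketch of the skeleton as described above: a function from a perception history to an action. The percept and action types, the names, and the toy decision rule are all illustrative assumptions, not anything from an actual AI:

    from typing import List

    Percept = str  # placeholder; could be camera frames, sensor readings, ...
    Action = str   # placeholder; could be motor commands, text output, ...

    def agent(history: List[Percept]) -> Action:
        # All the real difficulty of AI hides in this body: turning a
        # possibly enormous perception history into a well-chosen action.
        if history and history[-1] == "obstacle":
            return "turn"
        return "advance"

    # The perceive-act loop: the agent touches its surroundings only
    # through percepts in and actions out.
    history: List[Percept] = []
    for percept in ["clear", "clear", "obstacle"]:
        history.append(percept)
        print(agent(history))

The skeleton itself is trivial to write down; choosing the body of agent() well is the entire problem.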
Modeling AIs as the function from a history of perceptions to an action is also related to thought experiments like Ned Block's Blockhead, where a trivial AI that passes the Turing test with flying colors is constructed by merely enumerating every possible partial conversation up to a certain length, and writing up the response a human would make at that point of that conversation.
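As a toy illustration of the construction (the table entries here are invented; a real Blockhead would tabulate every possible partial conversation up to some length):

    # Intelligence by brute enumeration: each partial conversation is a
    # key, and the stored value is the reply a human would make there.
    lookup = {
        (): "Hello!",
        ("Hello!", "How are you?"): "Fine, thanks. And you?",
        # ...astronomically many more entries in the real construction...
    }

    def blockhead(conversation):
        # No reasoning happens here, only retrieval from the table.
        return lookup.get(tuple(conversation), "<conversation not tabulated>")

    print(blockhead([]))                          # Hello!
    print(blockhead(["Hello!", "How are you?"]))  # Fine, thanks. And you?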
Scott Aaronson's Why philosophers should care about computational complexity proposes to augment the usual high-level mathematical frameworks with some limits to the complexity of the black box functions, to make the framework reject cases like Blockhead, which seem to be very different from what we'd like to have when we're looking for a computable function that implements an AI.
↑ comment by gregconen · 2010-02-02T22:29:15.502Z · LW(p) · GW(p)
But my dilemma is that Chris Langan is the smartest known living man, which makes it really hard for me to shrug the CTMU off as nonsense.
You can't rely too much on intelligence tests, especially in the super-high range. The tester himself admitted that Langan fell outside the design range of the test, so the listed score was an extrapolation. Further, IQ measurements, especially at the extremes, and especially on only a single test (and as far as I could tell from the Wikipedia article, he was only tested once), measure test-taking ability as much as general intelligence.
Even if he is the most intelligent man alive, intelligence does not automatically mean that you reach the right answer. All evidence points to it being rubbish.
↑ comment by Paul Crowley (ciphergoth) · 2010-02-02T22:24:23.701Z · LW(p) · GW(p)
Chris Langan is the smartest known living man
Many smart people fool themselves in interesting ways thinking about this sort of thing. And of course, when predicting general intelligence based on IQ, remember to account for regression to the mean: if there's such a thing as the smartest person in the world by some measure of general intelligence, it's very unlikely to be the person with the highest IQ.
↑ comment by advael · 2015-06-09T17:14:09.561Z · LW(p) · GW(p)
A powerful computer with a bad algorithm or bad information can produce a high volume of bad results that are all internally consistent.
(IQ may not be directly analogous to computing power, but there are a lot of factors that matter more than the author's intelligence when assessing whether a model bears out in reality.)
↑ comment by Saviorself138 · 2010-02-02T19:03:04.900Z · LW(p) · GW(p)
I'd say the best way to spend the rest of this year is to fry your brain on acid over and over again.
Replies from: thomblake↑ comment by thomblake · 2010-02-02T19:06:28.440Z · LW(p) · GW(p)
N.B. - LSD doesn't do something well characterized by "fry your brain" (most of the time). And if you meant acid in the chemical sense, that was very bad advice.
Replies from: aausch, Saviorself138↑ comment by Saviorself138 · 2010-02-02T19:27:01.666Z · LW(p) · GW(p)
yeah, I know. I was just being a jackass because that guy's post was ridiculous
Replies from: JGWeissman↑ comment by JGWeissman · 2010-02-02T19:36:23.022Z · LW(p) · GW(p)
This is the Welcome Thread, for people to introduce themselves. People should have more leeway to talk about personal interests that would elsewhere be considered off topic.
comment by HughRistik · 2009-04-17T06:42:32.344Z · LW(p) · GW(p)
- Handle: HughRistik (if you don't get the joke immediately, then say "heuristic" out loud)
- Age: 23
- Education: BA Psychology, thinking of grad school
- Occupation: Web programmer
- Hobbies: Art, clubbing, fashion, dancing, computer games, too many others to mention
- Research interests: Mate preferences, sex differences, sex differences in mate preferences, biological and social factors in homosexuality, and the psychology of introversion, social anxiety, high sensitivity, and behavioral inhibition
I came to Less Wrong via Overcoming Bias. I heard a talk by Eliezer around 2004-2005, and I've run into him a couple times since then.
I've been interested in rationality as long as I can remember. I obsessively see patterns in the world and try to understand it better. I use this ability to get good at stuff.
I once had social anxiety disorder, no social skills, and no idea what to do with women (see love-shyness; I'm sure there are people on here who currently have it). Thanks to finding the seduction community, I figured out that I could translate thinking ability into social skills, and that I could get good at socializing just like how I got good at everything else. Through observation, practice, and theories from social psychology, evolutionary psychology, and the seduction community, I built social skills and abilities with women from scratch.
Meanwhile, I attempted to eradicate the disadvantages of my personality traits and scientific approach to human interaction. For instance, I learned to temporarily disable analytical and introverted mental states and live more in the moment. I started identifying errors and limiting aspects of the seduction community's philosophy and characterization of women and female preferences. While my initial goal was to mechanistically manipulate people into liking me by experimenting on them socially, an unexpected outcome occurred: I actually became a social person. I started to enjoy connecting with people and emotionally vibing. I cultivated social instincts, so that I no longer had to calculate everything cognitively.
In the back of my head, I've been working on a theory of sexual ethics, particularly the ethics of seduction.
I will write more about heuristics and the seduction community as I've promised, but I've been organizing thoughts for a top-level post, and figuring out whether I'm going to address those topics with analytical posts, or with more of a personal narrative, and whether I would mix them. Anyone have any suggestions or requests?
Replies from: jasonmcdowell, HughRistik↑ comment by jasonmcdowell · 2009-04-17T10:09:48.900Z · LW(p) · GW(p)
It sounds like you are currently very much pushing your personality where you want it to go. I would be interested in hearing about your transition from being shy to being comfortable with people. Do you still remember how you were?
I more or less consciously pushed myself into sociability when I was 12 and made a lot of progress. Previously I was much shyer. I've changed so much since then, it feels strange to connect with my earlier memories. I've also experienced "calculating" social situations, emulating alien behaviors - and then later finding them to have become natural and enjoyable.
For the past few years, I've just been coasting - I haven't changed much and I don't know how to summon up the drive I had before.
Replies from: HughRistik↑ comment by HughRistik · 2009-04-19T01:38:39.758Z · LW(p) · GW(p)
Do you still remember how you were?
Yes, though the painfulness of the memory is fading.
I've also experienced "calculating" social situations, emulating alien behaviors - and then later finding them to have become natural and enjoyable.
Do you have a particular example? For me, one of them is smalltalk. I don't necessarily enjoy all smalltalk all the time, but I enjoy it a lot more than I ever thought that I would, back when I viewed it as "pointless" and "meaningless" (because I didn't understand that the purpose of most social communication is to share emotions, not to share interesting factual information and theories). Similar story with flirting.
With such social behaviors, everyone "learned" them at some point. Most people just learned them during their formative experiences. Some people, due to a combination of biological and social factors, learn this stuff later, or not at all. The cruel thing is that once you fall off the train, it's harder and harder to get back on. See the diagram here for a graphic illustration.
For the past few years, I've just been coasting - I haven't changed much and I don't know how to summon up the drive I had before.
I've gone through periods of growth, and periods of plateaus. Once I got to a certain level of slightly above average social skills, it became easy to get complacent with mediocrity. I start making progress again when I keep trying new things, going new places, and focusing on what I want.
↑ comment by HughRistik · 2009-04-17T06:45:39.874Z · LW(p) · GW(p)
I am also interested in gender politics. I started off with reflexively feminist views, yet I soon realized flaws in certain types of feminism. As with religions, I think that there are some really positive goals and ideas in feminism, and some really negative ones, all mixed together with really bad epistemic hygiene.
There are more rational formulations of some feminist ideas, yet more rational feminists often fail to criticize less rational feminists (instead calling them "brilliant" and "provocative"), causing a quality control problem leading to dogmatism and groupthink. I am one of the co-bloggers on FeministCritics.org, where we try to take a critical but fair look at feminism and start dialogues with feminists. I'm not very active there anymore, but here's an example of the kind of epistemic objections that I make towards feminism.
My eventual goal is to formulate a gender politics that subsumes the good things about feminism.
comment by ektimo · 2009-04-16T16:54:06.631Z · LW(p) · GW(p)
- Name: Edwin Evans
- Location: Silicon Valley, CA
- Age: 35
I read the "Meaning of Life FAQ" by a previous version of Eliezer in 1999 when I was trying to write something similar, from a Pascal’s Wager angle (even a tiny possibility of objective value is what should determine your actions). I've been a financial supporter of the Organization That Can't Be Named and a huge fan of Eliezer's writings since that same time. After reading "Crisis of Faith" along with "Could Anything Be Right?" I finally gave up on objective value; the "light in the sky" died. Feeling my mind change was an emotional experience that lasted about two days.
This is seriously in need of updating, but here is my home page.
By the way, would using Google AdWords be a good way to draw people to 12 Virtues? Here is an example from the Google keyword tool:
- Search phrase: how to be better
- Cost per click: $0.05
- Approximate volume per month: 33,100
[Edit: added basic info/clarification/formatting]
comment by Paul Crowley (ciphergoth) · 2009-04-16T12:58:30.351Z · LW(p) · GW(p)
OK, let's get this started. There seems to be no way of doing this that doesn't sound like a personal ad.
- Handle: ciphergoth
- Name: Paul Crowley
- Location: London
- Age: born 1971
- Occupation: Programmer
- Web pages, Blog, LiveJournal, Twitter feed
As well as programming for a living, I'm a semi-professional cryptographer and cryptanalyst; read more on my work there. Another interest important to me is sexual politics; I am bi, poly and kinky, and have been known to organise events related to these themes (BiCon, Polyday, and a fetish nightclub). I get the impression that I'm politically to the left of much of this site; one thing I'd like to be able to talk about here one day is how to apply what we discuss to everyday politics.
Replies from: Alicorn↑ comment by Alicorn · 2009-04-16T16:35:17.375Z · LW(p) · GW(p)
What would it look like to apply rationalist techniques to sexual politics? The best guess I have is "interesting", but I don't know in what way.
Replies from: HughRistik↑ comment by HughRistik · 2009-04-16T17:03:41.583Z · LW(p) · GW(p)
Yes, it would be interesting. It would involve massively changing the current gender political programs on all sides, which are all ideologies with terrible epistemic hygiene. I'll try to talk about this more when I can.
comment by research_prime_space · 2017-06-14T21:40:43.339Z · LW(p) · GW(p)
Hi! I'm 18 years old, female, and a college student (don't want to release personal information beyond that!). I'm majoring in math, and I hopefully want to use those skills for AI research :D
I found you guys from EA, and I started reading the sequences last week, but I really do have a burning question I want to post to the Discussion board so I made an account.
Replies from: cousin_it↑ comment by cousin_it · 2017-06-15T08:20:19.944Z · LW(p) · GW(p)
Welcome! You can ask your question in the open thread as well.
comment by volya · 2013-10-07T12:54:22.209Z · LW(p) · GW(p)
Hi, I am Olga, female, 40, programmer, mother of two. Got here from HPMoR. I cannot as yet define myself as a rationalist, but I am working on it. Some rationality questions, used in real-life conversations, have helped me to tackle some personal and even family issues. It felt great. In my "grown-up" role I am deeply concerned with bringing up my kids with their thought processes as undamaged as I possibly can manage, and maybe even with balancing out some system-taught stupidity. I am at the start of my reading list on the matter, including the LW sequences.
comment by GloriaSidorum · 2013-03-06T23:24:39.746Z · LW(p) · GW(p)
Hello. My name is not, in fact, Gloria. My username is merely (what I thought was) a pretty-sounding Latin translation of the phrase "the Glory of the Stars", though it would actually be "Gloria Siderum" and I was mixing up declensions.
I read Three Worlds Collide more than a year ago, and recently re-stumbled upon this site via a link from another forum. Reading some of Eliezer's series, I realized that most of my conceptions about the world were extremely fuzzy, and that they could be better said to bleed into each other than to tie together. I realized that a large amount of what I thought of as my "knowledge" is just a set of passwords, and that I needed to work on fixing that. And I figured that a good way to practice forming coherent, predictive models, and being aware of what mental processes may affect those models, would be to join an online community in which a majority of posters would have read a good number of articles on bias, heuristics, and becoming more rational, and will thus be equipped to some degree to call out flaws in my thinking.
comment by [deleted] · 2012-02-14T17:18:53.452Z · LW(p) · GW(p)
Hello,
I am a world citizen with very little sense of identification or labelling. Perhaps "Secular Humanist" could be my main affiliation. As for belonging to nations and companies and teams... I don't believe in this thrust-upon, unchosen unity. I'm a natural expatriate. And I believe this site is awesomeness incarnate.
Though some lesswrongers really seem to go out of their way to make their readers feel stupid... I'd guess that's the whole point, right?
comment by peuddO · 2010-11-05T23:10:20.940Z · LW(p) · GW(p)
I like to call myself Sindre online. I'm just barely 18, and I go to school in Norway - which doesn't have a school system entirely similar to any other that I'm familiar with, so I'll refrain from trying to describe what sort of education I'm getting - other than to say that I'm not very impressed with how the public school system is laid out here in Norway.
I found Less Wrong through a comment on this blog, where it was mentioned as a place populated by reasonably intelligent people. Since I thought that was an intriguing endorsement, I decided to give it a look. I've been lurking here ever since - up until now, anyway.
How I came to identify as a rationalist
There's really not much to that story. I can't even begin to remember at what age I endorsed reason as a guiding principle. I was mocked as a 'philosopher' as far back as when I was nine years old, and probably earlier still.
I value and work to achieve an understanding of human psychology, as well as a diversity of meditative achievements derived from yoga. There's certainly more, but that's all I can think of right now.
P.S.: Some of the aesthetic choices I've made in this post, like italicizing my name, are mostly just to see if I understood the instructions correctly and are otherwise arbitrary.
Replies from: wedrifid↑ comment by wedrifid · 2010-11-05T23:14:18.185Z · LW(p) · GW(p)
I like to call myself Sindre online.
Out of curiosity...
Replies from: peuddO↑ comment by peuddO · 2010-11-05T23:16:26.802Z · LW(p) · GW(p)
Out of curiosity...
Out of curiosity... what?
Edit: Since that seems to have earned me a downvote, I'd like to clarify that I'm just wondering as to what, specifically, you're curious about. Why I choose to call myself that? If I'm some other Sindre you know? Why my username is not Sindre? etc.
Replies from: wedrifid↑ comment by wedrifid · 2010-11-05T23:23:17.532Z · LW(p) · GW(p)
No idea why someone would downvote a reasonable question. That would be bizarre.
'Username not' was the one. :)
Replies from: peuddO↑ comment by peuddO · 2010-11-05T23:30:12.653Z · LW(p) · GW(p)
Hrm. Now someone's downvoted your question, it seems. It's all a great, sinister conspiracy.
Well, regardless... peuddO is a username I occasionally utilize on internet forums. It's "upside down" in Norwegian, written upside down in Norwegian (I'm so very clever). Even so, I know that I personally prefer to know the names people go by out-of-internet. It's a strange quirk, perhaps, but it makes me feel obligated to provide my real first name when introducing myself.
Replies from: wedrifid
comment by Vaniver · 2010-10-27T02:09:43.227Z · LW(p) · GW(p)
Hello!
I take Paul Graham's advice to keep my identity small, and so describing myself is... odd. I'm not sure I consider rationalism important enough to make it into my identity.
The most important things, I think, are that I'm an individualist and an empiricist. I considered "pragmatist" for the second example, and perhaps that would be more appropriate.
Perhaps vying for third place is that I'm an optimizer. I like thinking about things, I like understanding systems, I like replacing parts with better parts. I think that's what I enjoy about LW; there's quite a bit of interest in optimization around here. Now, how to make that function better... :P
comment by Alexei · 2010-08-02T15:40:00.996Z · LW(p) · GW(p)
Hello everyone!
I've been quietly lurking on this website for a while now, reading articles as fast as I can with much enthusiasm. I've read most of Eliezer's genius posts and have started to read through others' posts now. I came to this website when I learned about the AI-in-a-box scenario. I am a 23 year old male. I have a B.S. in computer science. I like to design and program video games. My goal in life is to become financially independent and make games that help people improve themselves. I find the subject of rationality very interesting and helpful in general, though I have trouble seeing the application of the more scientific parts (Bayes) of rationality in real life, since most events in life come with no probability tag attached. I would like to pose a question to this community: do you think video games can help spread the message and the spirit of this website? What kind of video games would accomplish that? Would you be interested in working on a game or contributing to one in other ways (e.g. donations or play testing)? Or maybe instead of writing games I should just commit to S.I. and work on F.A.I.?
Replies from: Oscar_Cunningham↑ comment by Oscar_Cunningham · 2010-08-02T15:55:17.577Z · LW(p) · GW(p)
We've already thought about the possibility of a game. See this page. IIRC PeerInfinity is particularly fond of the idea.
comment by realitygrill · 2010-02-20T04:40:51.681Z · LW(p) · GW(p)
Hi. My name's Derrick.
I've been reading LW and HN for a while now but have only just started to learn to participate. I'm 23, ostensibly hold a bachelor's in economics, and am interested in way too much - a dilettante of sorts. Unfortunately I have the talent of being able to sound like I know stuff just by quickly reading around a subject.
Pretty much have always been a Traditional Rationalist; kind of treated the site discussions as random (if extremely high impact) insights. Getting interested in Bayesian modeling sort of sent me on a path here. Lots of Eliezer's Coming of Age sequence reminds me of myself. Is 23 the magical age for Bayesian Enlightenment?
My current interest is in the Art of Strategy, in the way Musashi set down.
Just discovered the sequences and some recommended books! Think I'm going to be sidetracked for a while now...
comment by FeministX · 2009-11-07T08:06:12.074Z · LW(p) · GW(p)
Hi,
I am FeministX of FeministX.blogspot.com. I found this blog after Eliezer commented on my site. While my online name is FeministX, I am not a traditional feminist, and many of my intellectual interests lie outside of feminism.
Lately I am interested in learning more about the genetic and biological basis of individual and group behavior. I am also interested in cryonics and transhumanism. I guess this makes me H+BD.
I am a rationalist by temperament and ideology. Why am I a rationalist? To ask is to answer the question. A person who wishes to accurately comprehend the merits of a rationalist perspective is already a rationalist. It's a deeply ingrained thinking style which has grown with me since the later days of my childhood.
I invite you all to read my blog. I can almost guarantee that you will like it. My awesomeness is reliably appealing. (And I'm not so hard on the eyes either :) )
Replies from: RobinZ↑ comment by RobinZ · 2009-11-07T14:05:29.897Z · LW(p) · GW(p)
Welcome!
Edit: I don't know if you were around when Eliezer Yudkowsky was posting on Overcoming Bias, but if you weren't, I'd highly, highly recommend Outside the Laboratory. Also, from Yudkowsky's own site, The Simple Truth and An Intuitive Explanation of Bayes' Theorem.
And do check out some of the top scoring Less Wrong articles.
comment by Alex Flint (alexflint) · 2009-07-23T09:48:04.581Z · LW(p) · GW(p)
Hi,
I'm Alex and I'm studying computer vision at Oxford. Essentially we're trying to build AI that understands the visual world. We use lots of machine learning, probabilistic inference, and even a bit of signal processing. I arrived here through the Future of Humanity Institute website, which I found after listening to Nick Bostrom's TED talk. I've been lurking for a few weeks now but I thought I should finally introduce myself.
I find the rationalist discussion on LW interesting both on a personal interest level, and in relation to my work. I would like to get some discussion going on the relationship between some of the concrete tools and techniques we use in AI and the more abstract models of rationality being discussed here. Out of interest, how many people here have some kind of computer science background?
Replies from: Vladimir_Nesov, Kaj_Sotala↑ comment by Vladimir_Nesov · 2009-07-23T10:58:00.021Z · LW(p) · GW(p)
Hi Alex, welcome to LessWrong. You can find some info about the people here in the survey results post. Quite a lot have a CS background, and some grok machine learning.
↑ comment by Kaj_Sotala · 2009-07-23T10:00:24.184Z · LW(p) · GW(p)
Out of interest, how many people here have some kind of computer science background?
Quite a few if not most, it seems. See http://lesswrong.com/lw/fk/survey_results/ - the summary there doesn't mention the educational background, but looking through the actual spreadsheet, lots of people have listed a "computing" background.
comment by XFrequentist · 2009-04-18T20:31:20.764Z · LW(p) · GW(p)
- Name: Alex D
- Age: 26
- Education: MSc Epidemiology/Biostatistics
- Occupation: Epidemiologist
- Location: Canada
- Hobbies: Reading, travel, learning, sport.
I found OB/LW through Eliezer's Bayes tutorial, and was immediately taken in. It's the perfect mix of several themes that are always running through my head (rationality, atheism, Bayes, etc.) and a great primer on lots of other interesting stuff (QM, AI, ev. psych., etc). The emphasis on improving decision making and clear thinking plus the steady influx of interesting new areas to investigate makes for an intoxicating ambrosia. Very nice change from many other rationality blogs, which seem to mostly devote themselves to the fun-but-eventually-tiresome game of bashing X for being stupid/illogical/evil (clearly, X is all of these things and more, but that's not the point). Generally very nice writing, too.
As for real-life impact, LW has:
- grown my reading list exponentially,
- made me want to become a better writer,
- forced me to admit that my math is nowhere near where it needs to be,
- made my unstated ultimate goal of understanding the world as a coherent whole seem less silly, and
- altered my list of possible/probable PhD topics.
I'll put some thought into my rationalist origins story, but it may have been that while passing several (mostly enjoyable) summers as a door-to-door salesman, I encountered the absolutely horrible decision making mechanisms of lots and lots of people. It kind of made me despair for the world, and probably made me aspire to do better. But that could be a false narrative.
Replies from: None
comment by Vladimir_Nesov · 2009-04-16T21:24:33.770Z · LW(p) · GW(p)
- Vladimir Nesov
- Age: 24
- Location: Moscow
- MS in Computer Science, minor in applied math and physics, currently a grad student in CS (compiler technologies, static analysis of programs).
Having never been interested in AI before, I became obsessed with it about 2 years ago, after getting impressed with its potential. Got a mild case of AI-induced raving insanity, and have been recuperating for the last year or so, treating it with a regular dosage of rationality and solid math. The obsession doesn't seem to pass, though, which I deem a good thing.
comment by [deleted] · 2009-04-16T16:48:12.607Z · LW(p) · GW(p)
deleted
Replies from: rhollerith, ciphergoth↑ comment by rhollerith · 2009-04-16T18:40:01.172Z · LW(p) · GW(p)
Most mystics reject science and rationality (and I think I have a pretty good causal model of why that is) but there have been scientific rational mystics, e.g., physicist David Bohm. I know of no reason why a person who starts out committed to science and rationality should lose that commitment through mystical training and mystical experience if he has competent advice.
My main interest in mystical experience is that it is a hole in the human motivational system -- one of the few ways for a person to become independent from what Eliezer calls the thousand shards of desire. Most of the people in this community (notably Eliezer) assign intrinsic value to the thousand shards of desire, but I am indifferent to them except for their instrumental value. (In my experience the main instrumental value of keeping a connection to them is that it makes one more effective at interpersonal communication.)
Transcending the thousand shards of desire while we are still flesh-and-blood humans strikes me as potentially saner and better than "implementing them in silicon" and relying on cycles within cycles to make everything come out all right. And the public discourse on subjects like cryonics would IMHO be much crisper if more of the participants would overcome certain natural human biases about personal identity and the continuation of "the self".
I am not a mystic or aspiring mystic (I became indifferent to the thousand shards of my own desire a different way) but have a personal relationship of long standing with a man who underwent the full mystical experience: ecstasy 1,000,000 times greater than any other thing he ever experienced, uncommonly good control over his emotional responses, the interpersonal ability to attract trusting followers without even trying. And yes, I am sure that he is not lying to me: I had a business relationship with him for about 7 years before he even mentioned (casually, tangentially) his mystical experience, and he is among the most honest people I have ever met.
Marin County, California, where I live, has an unusually high concentration of mystics, and I have in-depth personal knowledge of more than one of them.
Mystical experience is risky. (I hope I am not the first person to tell you that, Stefan!) It can create or intensify certain undesirable personality traits, like dogmatism, passivity or a messiah complex, and even with the best advice available, there is no guarantee that one will not lose one's commitment to rationality. But it has the potential to be extremely valuable, according to my way of valuing thing.
If you really do want to transcend the natural human goal system, Stefan, I encourage you to contact me.
Replies from: Vladimir_Nesov, Bongo, None↑ comment by Vladimir_Nesov · 2009-04-16T21:07:07.250Z · LW(p) · GW(p)
Most of the people in this community (notably Eliezer) assign intrinsic value to the thousand shards of desire, but I am indifferent to them except for their instrumental value.
Not so. You don't assign value to your drives just because they were built into you by evolution; you don't value your qualities just because they come as a package deal, just because you are human [*]. Instead, you look at what you value, as a person. And of the things you value, you find that most of them are evolution's doing, but you don't accept all of them, and you look at some of them in a different way from what evolution intended.
[*] Related, but overloaded with other info: No License To Be Human.
Replies from: rhollerith↑ comment by rhollerith · 2009-04-16T22:13:14.171Z · LW(p) · GW(p)
Nesov points out that Eliezer picks and chooses rather than identifying with every shard of his desire.
Fair enough, but the point remains that it is not too misleading to say that I identify with fewer of the shards of human desire than Eliezer does -- which affects what we recommend to other people.
↑ comment by Bongo · 2009-04-17T12:15:46.757Z · LW(p) · GW(p)
I would be interested to know what it is then that you desire nowadays.
And does everyone who gives up the thousand shards of desire end up desiring the same thing?
Replies from: rhollerith↑ comment by rhollerith · 2009-04-17T22:11:09.550Z · LW(p) · GW(p)
Bongo asks me what it is, then, that I desire nowadays.
And my answer is, pretty much the same things everyone else desires! There are certain things you have to have to remain healthy and to protect your intelligence and your creativity, and getting those things takes up most of my time. Also, even in the cases where my motivational structure is different from the typical, I often present a typical facade to the outside world because typical is comfortable and familiar to people whereas atypical is suspicious or just too much trouble for people to learn.
Bongo, the human mind is very complex, so the temptation is very great to oversimplify, which is what I did above. But to answer your question, there is a ruthless hard part of me that views my happiness and the shards of my desire as means to an end. Kind of like money is also a means to an end for me. And just as I have to spend some money every day, I have to experience some pleasure every day in order to keep on functioning.
A means to what end? I hear you asking. Well, you can read about that. The model I present on the linked page is a simplification of a complex psychological reality, and it makes me look more different from the average person than I really am. Out of respect for Eliezer's wishes, do not discuss this "goal system zero" here. Instead, discuss it on my blog or by private email.
Now to bring the discussion back to mysticism. My main interest in mysticism is that it gives the individual flexibility that can be used to rearrange or "rationalize" the individual's motivational structure. A few have used that flexibility to rearrange emotional valences so that everything is a means to one all-embracing end, resulting in a sense of morality similar to mine. But most use it in other ways. One of the most notorious ways to use mysticism is to develop the interpersonal skills necessary to win a person's trust (because the person can sense that you are not relating to him in the same anxious or greedy way that most people relate to him) and then, once you have his trust, to teach him to overcome unnecessary suffering. This is what most gurus do. If you want a typical example, search YouTube for Gangaji, a mystic skilled at helping ordinary people reduce their suffering.
I take you back to the fact that a full mystical experience is 1,000,000 times more pleasurable than anything a person would ordinarily experience. That blots out or makes irrelevant everything else that is happening to the person! So the person is able to sit under a tree without moving for weeks and months while his body slowly rots away. People do that in India: a case was in the news a few years ago.
Of course he should get up from sitting under the tree and go home and finish college. Or fetch wood, carry water. Or whatever it is he needs to do to maintain his health, prosperity, intelligence and creativity. But the experience of sitting under the tree can put the petty annoyances and the petty grievances of life in perspective so that they do not have as much influence on the person's thinking and behavior as they used to. Which is quite useful.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-16T16:59:53.479Z · LW(p) · GW(p)
I've always thought of a mystic as someone who likes mysterious answers to mysterious questions - I guess you mean something else by it?
Replies from: None
comment by mapnoterritory · 2012-06-02T07:02:11.198Z · LW(p) · GW(p)
Hi everybody,
I've been lurking here for maybe a year and joined recently. I work as an astrophysicist, and I am interested in statistics, decision theory, machine learning, cognitive and neuro-psychology, AI research, and many other fields (I just wish I had more time for all these interests). I find LW to be a great resource, and it has introduced me to many interesting concepts. I am also interested in articles on improving productivity and well-being.
I haven't yet attended any meet-up, but if there were one in Munich I'd try to come.
comment by [deleted] · 2011-10-27T03:18:02.847Z · LW(p) · GW(p)
Hey, everyone!
I'm currently an (actual) college student and (aspiring) omniscient. I'm also a transhumanist, which is possibly how I got here in the first place.
I've been lurking on and off since the days of Overcoming Bias, and I've finally decided to (try to) actually become involved with the community. As you can probably guess from my period of inactivity I have a tendency to read much more than I write, so this may prove more difficult than it sounds.
I've been very interested in "how things work," both outside and inside my head, for as long as I can remember, although I wouldn't have self-identified as a rationalist until about four or five years ago. I've read through most of the sequences, but I'm always grateful for constructive criticism.
Fair warning: My writing tends to be a little formal unless I work at making it less so. This is (mostly) a bad habit I picked up in high-school English and has essentially no bearing on my thought process.
Replies from: lessdazed↑ comment by lessdazed · 2011-10-27T23:07:27.401Z · LW(p) · GW(p)
essentially no bearing on my thought process.
How do you know?
Replies from: None↑ comment by [deleted] · 2011-10-27T23:43:42.848Z · LW(p) · GW(p)
I did say "mostly" and "essentially."
By "essentially no bearing on my thought process" I don't mean that I'm perfectly rational. Rather, I mean that I often come across as bored or distant even when I'm emotionally invested in a subject. Avoiding the word "know," I'm sufficiently sure to say such a thing because I tend to write in a semi-formal style regardless of my topic or recent experiences.
comment by free_rip · 2011-01-28T01:58:38.161Z · LW(p) · GW(p)
Does anyone know a good resource to go with Eliezer's Cartoon Guide to Löb's Theorem? It's confusing me a... well, a lot.
Or, if it's the simplest resource on it out there, are there any prerequisites for learning it, or skills/knowledge that would help?
I'm trying to build up a basis of skills so I can participate better here, but I've got a long way to go. Most of my skills in science, maths and logic are pretty basic.
Thanks in advance.
Replies from: ata, arundelo↑ comment by ata · 2011-01-28T06:40:34.491Z · LW(p) · GW(p)
Or, if it's the simplest resource on it out there, are there any prerequisites for learning it, or skills/knowledge that would help?
I'm trying to build up a basis of skills so I can participate better here, but I've got a long way to go. Most of my skills in science, maths and logic are pretty basic.
Definitely read Gödel, Escher, Bach. Aside from that, here's a great list of reading and resources for better understanding various topics discussed on LW. (The things under Mathematics -> Logic and Mathematics -> Foundations would be the most relevant to Löb's Theorem.)
Replies from: free_rip↑ comment by arundelo · 2011-01-28T02:09:05.517Z · LW(p) · GW(p)
I bet if you found the first spot in it where you get confused and asked about it here, someone could help. (Maybe not me; I have barely a nodding acquaintance with Löb's theorem, and the linked piece has been languishing on my to-read list for a while.)
Edit: cousin_it recommends part of a Scott Aaronson paper.
Replies from: free_rip↑ comment by free_rip · 2011-01-28T05:37:26.761Z · LW(p) · GW(p)
Okay. The part where I start getting confused is the statement: 'Unfortunately, Löb's Theorem demonstrates that if we could prove the above within PA, then PA would prove 1 + 2 = 5'. How does PA 'not prov(ing) 1 + 2 = 5' (the previous statement) mean that it would prove 1 + 2 = 5?
Maybe it's something I'm not understanding about something proving itself - proof within PA - as I admit I can't see exactly how this works. It says earlier that Gödel developed a system for this, but the theorem doesn't seem to explain that system... my understanding of the theorem is this: 'if PA proves that when it proves something it's right, then what it proves is right.' That statement makes sense to me, but I don't see how it links in or justifies everything else. I mean, it seems to just be word play - a very basic concept.
I feel like I'm missing something fundamental and basic. What I do seem to understand is so self-explanatory as to need no mention, and what I don't seems separate from it. It's carrying on from points as if they are self-explanatory and linked, when I can't see the explanations or make the links. I also don't see the point of this as a whole - what, practically, is it used for? Or is it simply an exercise in thinking logically?
Oh, I also don't know what the arrows and little squares stand for in the problem displayed after the comic. That's a separate issue, but answers on it would be great.
Any help would be appreciated. Thanks.
Replies from: arundelo↑ comment by arundelo · 2011-01-28T06:30:00.367Z · LW(p) · GW(p)
'Unfortunately, Löb's Theorem demonstrates that if we could prove the above within PA, then PA would prove 1 + 2 = 5'.
I believe that that's just a statement of Löb's theorem, and the rest of the Cartoon Guide is a proof.
It says earlier that Godel developed a system for this, but the theorem doesn't seem to explain that system
The exact details aren't important, but Gödel came up with a way of using a system that talks about natural numbers to talk about things like proofs. As Wikipedia puts it:
Thus, in a formal theory such as Peano arithmetic in which one can make statements about numbers and their arithmetical relationships to each other, one can use a Gödel numbering to indirectly make statements about the theory itself.
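To make that concrete, here's a toy sketch in Python (my own illustration, not anything from the Guide; real Gödel numberings differ in their details) of how a string of symbols can be packed into a single natural number via prime factorization:

```python
# Toy Goedel numbering: encode a formula (a string of symbols) as the
# single number 2^c1 * 3^c2 * 5^c3 * ..., where ci is the code of the
# i-th symbol. Prime factorization makes the encoding uniquely decodable.

SYMBOL_CODES = {'0': 1, 'S': 2, '=': 3, '+': 4, '(': 5, ')': 6}

def next_prime(p):
    """Return the smallest prime greater than p (naive trial division)."""
    candidate = p + 1
    while any(candidate % d == 0 for d in range(2, int(candidate ** 0.5) + 1)):
        candidate += 1
    return candidate

def godel_number(formula):
    """Map a symbol string like 'S0=S0' to one natural number."""
    n, p = 1, 2
    for symbol in formula:
        n *= p ** SYMBOL_CODES[symbol]
        p = next_prime(p)
    return n

print(godel_number('S0=S0'))  # 808500 -- a single number standing for "1 = 1"
```

The particular scheme doesn't matter; the point is that once formulas (and whole proofs) are numbers, a claim like "there is a proof of the formula numbered 808500" is itself just an arithmetical claim, so PA can talk about its own proofs.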
Actually, going through a proof (it doesn't need to be formal) of Gödel's incompleteness theorem(s) would probably be good background to have for the Cartoon Guide. The one I read long ago was the one in Gödel, Escher, Bach; someone else might be able to recommend a good one that's available online not embedded in a book (although you should read GEB at some point anyway).
arrows and little squares
The rightward-pointing arrows mean "If [thing to the left of the arrow] then [thing to the right of the arrow]". E.g. if A stands for "Socrates is drinking hemlock" and B stands for "Socrates will die" then "A -> B" means "If Socrates is drinking hemlock then Socrates will die".
I suspect the squares were originally some other symbol when this was first posted on Overcoming Bias, and they got messed up when it was moved here [Edit: nope, they're supposed to be squares], but in any case, here they mean "[thing to the right of the square] is provable". And the parentheses are just used for grouping, like in algebra.
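Putting the notation together, the whole theorem is the single formula □(□C -> C) -> □C (this is just the standard statement of Löb's theorem, rewritten with the symbols above). In words: if PA proves "if PA proves C, then C", then PA proves C outright. Substitute C = "1 + 2 = 5" and you get exactly the step that confused you: proving within PA that "if PA proves 1 + 2 = 5, then 1 + 2 = 5" would force PA to prove 1 + 2 = 5 itself.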
Replies from: free_rip↑ comment by free_rip · 2011-01-28T07:24:00.507Z · LW(p) · GW(p)
Ah, okay, I think I understand it a bit better now. Thank you!
I think I will order Gödel, Escher, Bach. I've seen it mentioned a few times around this site, but my library got rid of the last copy a month or so before I heard of it - without replacing it. Apparently it was just too old.
comment by Dreaded_Anomaly · 2011-01-05T01:40:46.772Z · LW(p) · GW(p)
I'm a 22-year-old undergraduate senior, majoring in physics, planning to graduate in May and go to graduate school for experimental high energy physics. I also have studied applied math, computer science, psychology, and politics. I like science fiction and fantasy novels, good (i.e. well-written) TV, comic books, and the occasional video game. I've been an atheist and science enthusiast since the age of 10, and I've pursued rational philosophy since high school.
I got here via HPMoR, lurked since around the time Chapter 10 was posted, and found that a lot of the ideas resonated with my own conclusions about rationality. I still don't have a firm grasp on all of the vocabulary that gets used here, so if it seems like I'm expressing usual ideas in an unusual way, that's the reason.
comment by userxp · 2011-01-04T23:08:40.272Z · LW(p) · GW(p)
Hello, I've been lurking on LessWrong for some time, and I have finally decided to make an account to add my comments. I also expect to post an article every now and then.
I am Spanish, which seems rare because it's hard to find non-English people here. I'm studying Computer Science and maybe in the future I'll get a master's degree in AI. I'm interested in computers, the brain, AI, rationality, futurism, transhumanism, etc. Oh, and I love Harry Potter and the Methods of Rationality.
Replies from: TheOtherDave, wedrifid↑ comment by TheOtherDave · 2011-01-05T05:16:54.339Z · LW(p) · GW(p)
Bienvenido!
↑ comment by wedrifid · 2011-01-04T23:38:32.076Z · LW(p) · GW(p)
I am Spanish, which seems rare because it's hard to find non-English people here.
There are quite a few for whom English is a second language.
Replies from: userxp↑ comment by userxp · 2011-01-05T00:10:28.739Z · LW(p) · GW(p)
Well, maybe, but in this thread I counted 14 people who lived in English-speaking countries and only 3 who didn't. And there don't seem to be any Spanish speakers (Spanish being the most spoken language after English and Chinese) on the whole site.
Replies from: wedrifid
comment by jferguson · 2010-10-28T02:11:46.108Z · LW(p) · GW(p)
I'm currently an electrical engineering student. I suppose the main thing that drew me here is that I hold uncommon political views (market libertarian/minarchist, generally sympathetic to non-coercive but non-market collective action); I think that view is "correct" for now, but I'm sure that a lot of my reasons for holding those beliefs are faulty, or there'd probably be at least a few more people who agree with me. I want to determine exactly what's happening (and why) when politics and political philosophy come up in a conversation or internal monologue and I end up thinking to myself "Ah, good, my prior beliefs were exactly correct!", with the eventual goal of refining/discarding/bolstering those beliefs, because the chance that they actually were correct 100% of the time is vanishingly small.
That's what got me hooked on LW, at least, but pretty much everything here is interesting.
comment by shokwave · 2010-10-14T11:45:35.174Z · LW(p) · GW(p)
Hi! 21-year-old university dropout located in Melbourne, Australia. I come from a background of mostly philosophy, linguistics, and science fiction, but now recognise that my dislike for maths and hard science comes from a social dynamic at my high school: humanities students were a separate clique from the maths/sci students, each looked down on the other, and I bought into it to gain status with my group. So that's one major thing that LW has done for me in the few months I've been reading it: helped me recognise and eventually remove a rationalisation that was hurting my capabilities.
That explains why I stayed here; I think I first got here through something about the Agreement Theorem, as well as reading this pretty interesting Harry Potter fanfic. I'd gotten through about ten chapters when I checked the author and thought it was quite odd that it was also LessWrong... but if you see an odd thing once, you start seeing it everywhere, right? So I very nearly chalked it up to some sort of perceptual sensitivity. The point about knowing biases making you weaker is very clear to me from that.
Anyway, I'm somewhat settled on being an author as a profession, I'd like to add to LessWrong in the capacity of exploring current philosophical questions that impinge on rationality, truthseeking, and understanding of the mind, and I would like to take from LessWrong the habit of being rational at all times.
Replies from: wedrifid
comment by jacob_cannell · 2010-08-24T08:45:14.650Z · LW(p) · GW(p)
Greetings All.
I've been a Singularitarian since my college years more than a decade ago. I still clearly remember the force with which that worldview and its attendant realizations colonized my mind.
At that time I was strongly enamored with a vision of computer graphics advancing to the point of pervasive, Matrix-like virtual reality (the Matrix of Gibson's Neuromancer, as this was before the film of the same name) and that medium becoming the creche from which superhuman artificial intelligence would arise. Actually, I still have that vision, and although it has naturally changed, we do finally appear to be on the brink of a major revolution in graphics, and perhaps in the attendant display tech, needed to materialize said vision.
Anyway, I studied computer graphics, immersed myself in programming, and figured making a video game startup would be a good first step to amassing some wealth so that I could then do the 'real work' of promoting the Singularity and doing AI research. I took a little investment, borrowed some money, and did consulting work on the side. After four years or so the main accomplishment was taking a runner-up prize in a business plan competition and paying for a somewhat expensive education. That isn't as bad as it sounds though - I did learn a good deal of atypical knowledge.
Eventually I threw in the towel with the independent route and took a regular day job as a graphics programmer in the industry. After working so much on startups I had some fun with life for a change. I went to a couple of free 'workshops' at a strange house where some unusual guys with names like 'Mystery' and 'Style' taught the game, back before Style wrote his book and that community blew up. I found some interesting roommates (not affiliated with the above), and moved into a house in the Hollywood Hills. One of our neighbors had made a fortune from a website called Sextoy.com and threw regular pool parties, sometimes swinger parties. Another regular life in LA.
Over the years I had this mounting feeling that I was wasting my life, that there was something important I had forgotten. I still read and followed some of the Singularity-related literature, but wasn't that active. But occasionally it would come back and occupy my mind, albeit temporarily. Kurzweil's TSIN reactivated my attention, and I attended the Singularity Summit in 2008 and 2010. I already had a graphics blog and had written some articles for gaming publications, but in the last few years I started reading more neuroscience and AI. I have a deep respect for the brain's complexity, but I'm still somewhat surprised at the paucity of large-scale research and the concomitant general lack of success in AGI. I'm not claiming (as of yet) to have some deep revolutionary new metamathematical insight, but a graphics background gives one a particular visualizing intuition and toolbox for optimizing simulations that should come in handy.
All that being said, and even though I'm highly technical by trade, I actually think the engineering challenge is the easier part of the problem (relatively speaking), and I'm more concerned with the social engineering challenge. From my current reading, I gather that EY and the SIAI folks here believe that is all rolled up into the FAI task. I agree with the importance of the challenge, but I do not find the most likely hypothesis to be: SIAI develops Friendly AI before anyone else in the world develops AI in general. I do not think that SIAI currently holds >50% of the lottery tickets, not even close.
However, I do think the movement can win regardless, if we can win on the social engineering front. To me now it seems that the most likely hypothesis is that the winning ticket will be some academic team or startup in this decade or the next, and thus the winning ticket (with future hindsight) is currently held by someone young. So it is a social engineering challenge.
The Singularity challenges everything: our social institutions, politics, religion, economic infrastructure, all of our current beliefs. I share the deep concern about existential risk and Hard Takeoff scenarios, although perhaps differing in particulars with typical viewpoints I've seen on this site.
How can we get the world to wake up?
I somehow went to two Singularity Summits without ever reading LessWrong or Overcoming Bias. I think I had read partway through EY's Seed AI doc at some point previously, but that was it. I went to school with some folks who are now part of LessWrong or SIAI (Anna, Steve, Jennifer), and was pointed to this site through them. I've quite enjoyed reading through most of the material so far, and I don't think I'm halfway through yet, although I don't see a completion meter anywhere.
I'm somewhat less interested in raw "Bayesianism" as enlightenment, and in Evo Psych. I used to be more into Evo Psych when I was into the game, but I equate that with my childish years. I do believe it has some utility in understanding the brain, but not nearly as much as neuroscience or AI themselves.
Also, as an aside, I'm curious about the note for theists. From what I gather, many LWers find the Simulation Argument to work. If so, that technically makes you a deist, and theism is just another potential hypothesis. It's actually even a potentially testable hypothesis. And even without the Simulation Argument, the Singularity seriously challenges strict atheism - most plausible Singularity-aware eschatologies result in some black-hole deity spawning new universes - a god in every useful sense of the term at the end of our timeline.
I've always felt this great isolation imposed by my worldview: something one cannot discuss in polite company. Of course, that isolation was only ever self-imposed, and this site has opened my mind to the possibility that there are many now who have ventured along similar lines.
Replies from: Nick_Tarleton, Mitchell_Porter, NancyLebovitz↑ comment by Nick_Tarleton · 2010-08-25T18:57:07.843Z · LW(p) · GW(p)
Welcome to LW!
I'm more concerned with the social engineering challenge. From my current reading, I gather that EY and the SIAI folks here believe that is all rolled up into the FAI task.
Not entirely. Less Wrong is about raising the sanity waterline, not just recruiting FAI theorists.
Also, as an aside, I'm curious about the note for theists.
Theists in the usual supernatural sense, not the (rare, and even more rarely called 'theism') simulation or future-'god' senses.
I've always felt this great isolation imposed by my worldview: something one cannot discuss in polite company
It seems to me that there are plenty of open-minded, technical circles in which one can do this, as long as one takes basic care not to sound fanatical.
↑ comment by Mitchell_Porter · 2010-08-24T10:36:13.503Z · LW(p) · GW(p)
I do think the movement can win regardless, if we can win on the social engineering front.
What is the outcome that you want to socially engineer into existence?
How can we get the world to wake up?
What is it that you want the world to realize?
Replies from: jacob_cannell↑ comment by jacob_cannell · 2010-08-24T22:25:06.490Z · LW(p) · GW(p)
What is the outcome that you want to socially engineer into existence? What is it that you want the world to realize?
Global Positive Singularity. As opposed to annihilation, or the many other likely scenarios.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-25T12:16:36.710Z · LW(p) · GW(p)
What is the outcome that you want to socially engineer into existence? What is it that you want the world to realize?
Global Positive Singularity. As opposed to annihilation, or the many other likely scenarios.
You remind me of myself maybe 15 years ago. Excited about the idea of escaping the human condition through advanced technology, but with the idea of avoiding bad (often apocalyptically bad) outcomes also in the mix; wanting the whole world to get excited about this prospect; writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a "positive Singularity" might be, except a future where the good things happen and the bad things don't.
So let me see if I have anything coherent to say about such an outlook, from the perspective of 15 years on. I am certainly jaded when it comes to breathless accounts of the incomprehensible transcendence that will occur: the equivalent of all Earth's history happening in a few seconds, societies of inhuman meta-minds discovering the last secret of how the cosmos works and that's just the beginning, passages about how a googol intelligent beings will live inside a Planck length and so forth.
If you haven't seen them, you should pay a visit to Dale Carrico's writings on "superlative futurology". Whatever the future may bring, it's a fact that this excited anticipation of everything good multiplied by a trillion (or terrified anticipation of badness on a similar scale, if we decide to entertain the negative possibilities) is built entirely from imagination. It is not surprising that after more than a decade, I have become skeptical about the value of such emotional states, and also about their realism; or at least, a little bored with them. I find myself trying to place them in historical perspective. 2000 years ago there were gnostics raving about transcendental, sublime hierarchies of gods, and how mind, time, and matter were woven together in strange ways. History and science tell us that all that was mostly just a strange conceptual storm happening in the skulls of a few people who died like anyone else and who made little discernible impact on the course of events - that being reserved more for the worldly actors like the emperors and generals. Yet one has to suppose that gnosticism was not an accident, that it was a symptom of what was happening to culture and to human consciousness at that time.
It seems very possible that a great deal of the ecstasy (leavened with dread) that one finds in singularity and transhumanist writing is similarly just an epiphenomenal symptom of the real processes of the age. Lots of people say that, of course; it's the capitalist ego running amok, denying ecological limits, a new gnostic body-denial that fetishizes calculating machines, blah blah blah. Such criticisms themselves tend to repress or deny the radicalism of what is happening technologically.
So, OK, there shall be robots, cyborgs, brain implants, artificial intelligence, artificial life, a new landscape of life and mind which gets called postbiological or posthuman but much of which is just hybridization of natural and artificial. All that is a huge development. But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is; millions of subjective years of posthuman civilizations squeezed into a few seconds; and various other quantitative amplifications of life as we know it, by large powers of ten?
I think at best it is rational to give these ideas a chance. These technologies are new, this hasn't happened before, we don't know how far it goes; so we might want to remain open to the possibility that almost infinite space and time lie on the other side of this transition. But really, open to the possibility is about all we can say. This hasn't happened before, and we don't know what new barriers and pitfalls lie ahead; and it somehow seems unhealthy to be deriving this ecstatic hope from a few exponential numbers.
Something that the critics of extreme transhumanism often fail to note is the highly utopian altruism that exists within the subculture. To be sure, there are many individualist transhumanists who are cynics and survivalists; but there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies (those which possess no trace of the desire to punish or to achieve transformation through violence). It's the dream of world peace, raised to the nth power, and achieved because there's no death, scarcity, involuntary work, ageing process, and other such pains and frustrations to drive people mad. I wanted to emphasize this aspect because the critics of singularity thought generally love to explain it by imputing disreputable motives: it's all adolescent power fantasy and death denial and so forth. There should be a little more respect for this aspect, and if they really think it's impossible, they should show a little more regret about this. (Incidentally, Carrico, who I mentioned above, addresses this aspect too, saying it's a type of political infantilism, imagining that conflict and loss can be eliminated from the world.)
The idea of "waking up the world" to the imminence of the Singularity, to its glories and terrors, can have an element of this profoundly unworldly optimism about human nature - along with the more easily recognized aspect of self-glorification: I, and maybe my colleagues and guru figures, am the messenger of something that will gain the attention of the world. I think it can be expected that the world will continue to "wake up" to the dawning possibilities of biological rejuvenation, artificial intelligence, brain emulation, and so on, and that it will do this not just in a sober way, but also with bursts of zany enthusiasm and shuddering terror; and it even makes sense to want to foster the sober advance of understanding, if only we can figure out what's real and what's illusion about these anticipations.
But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the "knowledge" of immortality through mind uploading (just one example)... that, almost certainly, achieves nothing deeply useful. And the expectation that in a few years everyone will agree with the Singularity outlook (I've seen this idea expressed most recently by the economist James Miller) I think is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?! It's a logical deduction: you understand the possibilities of the Singularity, you don't understand how anyone could want to reject them or dismiss them, and you observe that most people are not singularity futurists; therefore, you deduce that the idea is about to sweep the world like wildfire, and you just happen to be one of the lucky first to be exposed to it. That thought process is naivety and unfamiliarity with normal psychology. It may partly be due to a person of above-average intelligence not understanding how different their own subjectivity is to that of a normal person; it may also be due to not yet appreciating how incredibly cruel life can be, and how utterly helpless people are against this. The passivity of the human race, its resignation and wishful thinking, its resistance to "good news", is not an accident. And there is ample precedent for would-be vanguards of the future finding themselves powerless and ignored, while history unfolds in a much duller way than they could have imagined.
So much for the general cautionary lecture. I have two other more specific things to say.
First, it is very possible that the quasi-scientific model of mind which underlies so many of these brave new ideas about copies and mind uploads is simply wrong, a sort of passing historical crudity that will be replaced by something very new. The 19th century offers many examples in physics and biology of paradigms which informed a whole generation of thought and futurology, and which are now dead and forgotten. Computing hardware is a fact, but consciousness in a program is not yet a fact and may never be a fact. I've posted a lot about this here.
Second, since you're here, you really should think about whether something like the SIAI notion of friendly singularity really is the only natural way to achieve a "global positive singularity". The idea of the first superintelligent process following a particular utility function explicitly selected to be the basis of a humane posthuman order I consider to be a far more logical approach to achieving the best possible outcome, than just wanting to promote the idea of immortality through mind uploading, or reverse engineering the brain. I think it's a genuine conceptual advance on the older idea of hoping to ride the technological wave to a happy ending, just by energetic engagement with new developments and a will to do whatever is necessary. We still don't know if the premises of such futurisms are valid, but if they are accepted as such, then the SIAI strategy is a very reasonable one.
Replies from: jacob_cannell, jacob_cannell↑ comment by jacob_cannell · 2010-08-25T21:33:21.141Z · LW(p) · GW(p)
writing essays and SF short short stories about digital civilizations which climb to transcendence within a few human days or hours (I have examined your blog); a little vague about exactly what a "positive Singularity" might be, except a future where the good things happen and the bad things don't.
The most recent post on my blog is indeed a very short story, but it is the only such post. Most of the blog is concerned with particular technical ideas and near-term predictions about the impact of technology on specific fields: namely the video game industry. As a side note, several of the game-industry blog posts have been published. The single recent, hastily written story was more about illustrating the out-of-context problem and the speed differential, which I think are the most well-grounded important generalizations we can make about the Singularity at this point. We all must make quick associative judgements to conserve precious thought-time, but please be mindful of generalizing from a single example and lumping my mindstate in with "just like me 15 years ago." But I'm not trying to take an argumentative stance by saying this, I'm just requesting it: I value your outlook.
Yes, my concept of a positive Singularity is definitely vague, but that of a Singularity less so, and within this one can draw a positive/negative delineation.
But is it rational to anticipate: immortality; existence becoming transcendentally better or worse than it is;
Immortality with the caveat of continuous significant change (evolution in mindstate) is rational, and it is a pretty widely accepted inherent quality of future AGI. Mortality is not an intrinsic property of minds-in-general; it's a particular feature of our evolutionary history. On the whole, there's a reasonable argument that its net utility was greater before the arrival of language and technology.
Uploading is a whole other animal, and at this point I think physics permits it, but it will be considerably more difficult than AGI itself and would come sometime after (but of course, time acceleration must be taken into account). However, I do think skepticism is reasonable, and I accept that it may prove to be impossible in principle at some level, even if this proof is not apparent now. (I have one article about uploading and identity on my blog)
If you haven't seen them, you should pay a visit to Dale Carrico's writings on "superlative futurology".
I will have to investigate Carrico's "superlative futurology".
Imagination guides human future. If we couldn't imagine the future, we wouldn't be able to steer the present towards it.
there are also many who aspire to something resembling sainthood, and whose notion of what is possible for the current inhabitants of Earth exhibits an interpersonal utopianism hitherto found only in the most benevolent and optimistic religious and secular eschatologies
Yes, and this is the exact branch of transhumanism that I subscribe to, in part simply because I believe it has the most potential, but moreover because I find it has the strongest evolutionary support. That may sound like a strange claim, so I should qualify it.
Worldviews have been evolving since the dawn of language. Realism, the extent to which the worldview is consistent with evidence, the extent to which it actually explains the way the world was, the way the world is, and the way the world can be in the future, is only one aspect of the fitness landscape which shapes the evolution of worldviews and ideas.
Worldviews also must appeal to our sense of what we want the world to be, as opposed to what it actually is. The scientific worldview is effective exactly because it allows us to think rationally and cleanly divorce is-isms from want-isms.
AGI is a technology that could amplify 'our' knowledge and capability to such a degree that it could literally enable 'us' to shape our reality in any way 'we' can imagine. This statement is objectively true or false, and its veracity has absolutely nothing to do with what we want.
However, any reasonable prediction of the outcome of such technology will necessarily be nearly equivalent to highly evolved religious eschatologies. Humans have had a long, long time to evolve highly elaborate conceptions of what we want the world to become, if we only had the power. A technology that gives us such power will enable us to actualize those previous conceptions.
The future potential of Singularity technologies needs to be evaluated on purely scientific grounds, but everyone must be aware that the outcome and impact of such technologies will necessarily take the shape of our old dreams of transcendence, and this in no way, shape, or form is anything resembling a legitimate argument concerning the feasibility and timelines of said technologies.
In short, many people, when they hear about the Singularity, reach this irrational conclusion: "that sounds like religious eschatologies I've heard before, therefore it's just another instance of that." You can trace the evolution of ideas and show that the Singularity inherits conceptions of what-the-world-can-become from past gnostic transcendental mythology or Christian utopian millennialism or whatever, but using that to dismiss the predictions themselves is irrational.
I had enthusiasm a decade ago when I was in college, but this faded and recessed into the back of my mind. More lately, it has been returning.
I look at the example of someone like Eliezer and I see one who was exposed to the same ideas, in around the same timeframe, but did not relegate them to a dusty shelf and move on with a normal life. Instead he took it upon himself to alert the world and attempt to do what he could to create that better imagined future. I find this admirable.
But enthusiasm for spreading the singularity gospel, the desire to set the world aflame with the "knowledge" of immortality through mind uploading (just one example)... that, almost certainly, achieves nothing deeply useful.
Naturally, I strongly disagree, but I'm confused as to whether you doubt (1) that the world outcome would improve with greater awareness, or (2) that increasing awareness is worth any effort.
I think is just unrealistic, and usually the product of some young person who realizes that maybe they can save themselves and their friends from death and drudgery if all this comes to pass, so how can anyone not be interested in it?
Most people are interested in it. Last I recall, well over 50% of Americans are Christians and believe that just through acceptance of a few rather simple memes and living a good life, they will be rewarded with an unimaginably good afterlife.
I've personally experienced introducing the central idea to previously unexposed people in the general atheist/agnostic camp, and seeing it catch on. I wonder if you have had similar experiences.
I was once at a party at some film producer's house and I saw The Singularity is Near sitting alone as a centerpiece on a bookstand as you walk in, and it made me realize that perhaps there is hope for wide-scale recognition in a reasonable timeframe. Ideas can move pretty fast in this modern era.
Computing hardware is a fact, but consciousness in a program is not yet a fact and may never be a fact.
I've yet to see convincing arguments showing "consciousness in a program is impossible", and at the moment I don't assign special value to consciousness as distinguishable from human-level self-awareness and intelligence.
The idea of the first superintelligent process following a particular utility function explicitly selected to be the basis of a humane posthuman order I consider to be a far more logical approach to achieving the best possible outcome, than just wanting to promote the idea of immortality through mind uploading, or reverse engineering the brain.
My position is not to just "promote the idea of immortality through mind uploading, or reverse engineering the brain" - those are only some specific component ideas, although they are important. But I do believe promoting the overall awareness does increase the probability of positive outcome.
I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a "humane posthuman order" (which itself is a funny play on words - don't you think?), into a simple particular utility function? I have not seen even the beginnings of a rigorous analysis of how this would be possible in principle. I find this to be the largest defining weakness in SIAI's current mission.
To put it another way: whose utility function?
To many technical, Singularity-aware outsiders (such as myself) reading into FAI theory for the first time, the idea that the future of humanity can be simplified down into a single utility function or a transparent, cleanly causal goal system appears to be delusion at best, and potentially dangerous.
I find it far more likely (and I suspect that most of the Singularity-aware mainstream agrees), that complex concepts such as "humane future of humanity" will have to be expressed in human language, and the AGI will have to learn them as it matures in a similar fashion to how human minds learn the concept. This belief is based on reasonable estimates of the minimal information complexity required to represent concepts. I believe the minimal requirements to represent even a concept as simple as "dog" are orders of magnitude higher than anything that could be cleanly represented in human code.
However, the above criticism is in the particulars of implementation, and doesn't cause disagreement with the general idea of FAI or ethical AI. But as far as actual implementation goes, I'd rather support a project exploring multiple routes, and brain-like routes in particular - not only because there are good technical reasons to believe such routes are the most viable, but because they also accelerate the path towards uploading.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-27T09:47:57.108Z · LW(p) · GW(p)
I agree with the general idea of ethical or friendly AI, but I find some of the details sorely lacking. Namely, how do you compress a supremely complex concept, such as a "humane posthuman order" (which itself is a funny play on words - don't you think?), into a simple particular utility function? I have not seen even the beginnings of a rigorous analysis of how this would be possible in principle.
Ironically, the idea involves reverse-engineering the brain - specifically, reverse-engineering the basis of human moral and metamoral cognition. One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history, and the genetics and life history of the individual, and then extrapolate it until it stabilizes. That is, the moral and metamoral cognition of our species is held to instantiate a self-modifying decision theory, and the human race has not yet had the time or knowledge necessary to take that process to its conclusion. The ethical heuristics and philosophies that we already have are to be regarded as approximations of the true theory of right action appropriate to human beings. CEV is about outsourcing this process to an AI which will do neuroscience, discover what we truly value and meta-value, and extrapolate those values to their logical completion. That is the utility function a friendly AI should follow.
I'll avoid returning to the other issues for the moment since this is the really important one.
Replies from: jacob_cannell↑ comment by jacob_cannell · 2010-08-27T18:54:11.077Z · LW(p) · GW(p)
I agree with your general elucidation of the CEV principle, but this particular statement stuck out like a red flag:
One is to extract the essence of this, purifying it of variations due to the contingencies of culture, history,
Our morality and 'metamorality' already exists, the CEV in a sense has already been evolving for quite some time, but it is inherently a cultural & memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong - it is cultural.
The flaw then is assuming there is a single evolutionary target for humanity's future, when in fact the more accurate evolutionary trajectory is adaptive radiation. So the C in CEV is unrealistic. Instead of a single coherent future, we will have countless many, corresponding to different universes humans will want to create and inhabit after uploading.
There will be convergent cultural effects (trends we see now), but there will also be powerful divergent effects imposed by the speed of light when posthuman minds start thinking thousands and millions of times accelerated. This is a constraint of physics which has interesting implications; more on this towards the bottom of this post.
If one single religion and culture had taken over the world, a universal CEV might have a stronger footing. The dominant religious branch of the west came close, but not quite.
It's more than just a theory of right action appropriate to human beings; it's also a question of what you do with all the matter, how you divide resources, political and economic structure, etc.
Given the success of Christianity and related worldviews, we have some guess at features of the CEV - people generally will want immortality in virtual reality paradises, and they are quite willing (even happy) to trust an intelligence far beyond their own to run the show - but they have a particular interest in seeing it take a human face. Also, even though willing to delegate ultimate authority upward, they will want to take an active role in helping shape universes.
The other day I was flipping through channels and happened upon some late-night Christian preacher channel, and he was talking about the New Jerusalem and all that, and there was one bit that I found amusing. He was talking about how humans would join God's task force and help shape the universe, and would be able to zip from star system to star system without anything as slow or messy as a rocket.
I found this amusing, because in a way it's accurate (physical space travel will be too slow for beings that think a million times accelerated and have molecular-level computers for virtual reality simulation).
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-08-28T05:14:40.341Z · LW(p) · GW(p)
Our morality and 'metamorality' already exists, the CEV in a sense has already been evolving for quite some time, but it is inherently a cultural & memetic evolution that supervenes on our biological brains. So purging it of cultural variations is less than wrong - it is cultural.
Existing human cultures result from the cumulative interaction of human neurogenetics with the external environment. CEV as described is meant to identify the neurogenetic invariants underlying this cultural and memetic evolution, precisely so as to have it continue in a way that humans would desire. The rise of AI requires that we do this explicitly, because of the contingency of AI goals. The superior problem-solving ability of advanced AI implies that advanced AI will win in any deep clash of directions with the human race. Better to ensure that this clash does not occur in the first place, by setting the AI's initial conditions appropriately, but then we face the opposite problem: if we use current culture (or just our private intuitions) as a template for AI values, we risk locking in our current mistakes. CEV, as a strategy for Friendly AI, is therefore a middle path between gambling on a friendly outcome and locking in an idiosyncratic cultural notion of what's good: you try to port the cognitive kernel of human ethical progress (which might include hardwired metaethical criteria of progress) to the new platform of thought. Anything less risks leaving out something essential, and anything more risks locking in something inessential (but I think the former risk is far more serious).
Mind uploading is another way you could try to humanize the new computational platform, but I think there's little prospect of whole human individuals being copied intact to some new platform, before you have human-rivaling AI being developed for that platform. (One might also prefer to have something like a theory of goal stability before engaging in self-modification as an uploaded individual.)
Instead of a single coherent future, we will have countless many, corresponding to different universes humans will want to create and inhabit after uploading.
I think we will pass through a situation where some entity or coalition of entities has absolute power, thanks primarily to the conjunction of artificial intelligence and nanotechnology. If there is a pluralistic future further beyond that point, it will be because the values of that power were friendly to such pluralism.
↑ comment by jacob_cannell · 2010-08-25T17:54:26.241Z · LW(p) · GW(p)
I liked this, will reply when I have a chance.
↑ comment by NancyLebovitz · 2010-08-25T14:03:46.415Z · LW(p) · GW(p)
To me now it seems that the most likely hypothesis is that the winning ticket will be some academic team or startup in this decade or the next, and thus the winning ticket (with future hindsight) is currently held by someone young.
What do you think of the possibility of a government creating the first AI?
Replies from: jacob_cannell↑ comment by jacob_cannell · 2010-08-25T18:01:37.430Z · LW(p) · GW(p)
It's certainly a possibility, ranging from the terrifying, if it's created as something like a central intelligence agent, to the beneficial, if it's created as a more transparent public achievement, like landing on the moon.
The potential for arms race seems to contribute to possibility of doom.
The government seems on par with the private sector in terms of likelihood, but I don't have a strong notion of that. At this point it is already some sort of blip on their radar, even if small.
comment by edgar · 2010-08-12T13:16:06.913Z · LW(p) · GW(p)
Hello, I am a professional composer/composition teacher and adjunct instructor teaching music aesthetics to motion graphic artists at the Fashion Institute of Technology and in the graduate computer arts department at the School of Visual Arts. I have a masters from the Juilliard School in composition and have been recorded on Newport Classics with Kurt Vonnegut and Michael Brecker.
I live and work in New York City. I spend my life composing and explaining music to students who are not musicians, connecting the language of music to the principles of the visual medium. Saying the accurate thing, getting others to question me, letting them find their way, and admitting often that I am wrong is a lifelong journey.
Replies from: komponisto↑ comment by komponisto · 2010-08-12T13:44:13.033Z · LW(p) · GW(p)
Welcome! Always nice to have more music people around here.
comment by red75 · 2010-06-06T10:18:05.084Z · LW(p) · GW(p)
Hello. I'm 35, Russian, and work as a very applied programmer. I ended up here as a side effect of following the path RNN -> RBM -> DBN -> G. E. Hinton -> S. Legg's blog.
I was almost confident about my biases when "Generalizing From One Example" took me by surprise. (Some time ago I noticed that I cannot visualize an abstract colored cube without thinking of the color's name, so I generalized. Now I have generalized from this case of generalization, and had a strange feeling.) My attention switched and I decided to explore.
Replies from: RobinZ↑ comment by RobinZ · 2010-06-07T16:21:19.681Z · LW(p) · GW(p)
Welcome!
If you want a cool place to start, I recommend the links on the About page and whatever strikes your fancy when you page through the Sequences - "Knowing About Biases Can Hurt People" is a particularly interesting one if you liked "Generalizing From One Example".
Replies from: red75
comment by dyokomizo · 2010-05-29T11:29:40.588Z · LW(p) · GW(p)
Hi, I'm Daniel. I've read OB for a long time and followed LW right from the beginning, but work/time issues in the last year made my RSS reading queue really long (I had all LW posts in the queue). I'm a Brazilian programmer, long-time rationalist and atheist.
comment by clarissethorn · 2010-03-15T10:55:15.795Z · LW(p) · GW(p)
I looked around for an FAQ link and didn't see one, and I've gone through all my preferences and haven't found anything relevant. Is there any way to arrange for followup comments (I suppose, the contents of my account inbox) to be emailed to me?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-15T11:19:31.262Z · LW(p) · GW(p)
Is there any way to arrange for followup comments (I suppose, the contents of my account inbox) to be emailed to me?
Not that I know of, I'm afraid. There are lots of requested features that we would implement if we had the programmatic resources, but alas, we don't. One just has to check if the envelope is red once in a while.
Replies from: taryneast↑ comment by taryneast · 2010-12-12T16:10:46.271Z · LW(p) · GW(p)
What language is LessWrong written in? Is it Open Source?
I'd suspect that there may be a number of "programmatic resources" (ie us computer geeks) on LW that would be willing to contribute if it were open enough to do so.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-12-12T16:21:40.822Z · LW(p) · GW(p)
http://lesswrong.com/lw/1t/wanted_python_open_source_volunteers/
Replies from: taryneast
comment by markrkrebs · 2010-02-26T14:07:57.447Z · LW(p) · GW(p)
Hi! Vectored here by Robin who's thankfully trolling for new chumps and recommending initial items to read. I note the Wiki would be an awesome place for some help, and may attempt to put up a page there: NoobDiscoveringLessWrongLeavesBreadcrumbs, or something like that.
My immediate interest is argument: how can we disagree? 1+1=2. Can't that be extrapolated to many things? I have been so happy to see a non-cocky (if prideful) attitude in the first several posts that I have great hopes for what I may learn here. We have to remember that ignorance is an implacable enemy, that being insulting won't defeat it, and that we may be subject to it ourselves. I've noticed I am.
First post from me is coming shortly.
- mark krebs
↑ comment by wedrifid · 2010-02-26T14:52:44.500Z · LW(p) · GW(p)
Hi! Vectored here by Robin who's thankfully trolling for new chumps and recommending initial items to read.
Aaahhh. Now I see. RobinZ.
I usually read 'Robin' as Robin Hanson from Overcoming Bias, the 'sister site' from the sidebar. That made me all sorts of confused when I saw that you first found us when you were talking to a biased crackpot.
Anyway, welcome to Lesswrong.com.
How can we disagree? 1+1=2. Can't that be extrapolated to many things?
Let's see:
- One of us is stupid.
- One of us doesn't respect the other (thinks they are stupid).
- One of us is lying (or withholding or otherwise distorting the evidence).
- One of us doesn't trust the other (thinks they aren't being candid with evidence, so cannot update on all that they say).
- One of us doesn't understand the other.
- The disagreement is not about facts (ie. normative judgments and political utterances).
- The process of disagreement is not about optimally seeking facts (ie. it is a ritualized social battle.)
Some combination of the above usually applies, where obviously I mean "at least one of us" in all cases. Of course, each of those dot points can be broken down into far more detail. There are dozens of posts here describing how "one of us could be stupid". In fact, you could also replace the final bullet point with the entire Overcoming Bias blog.
Replies from: RobinZ↑ comment by RobinZ · 2010-02-26T14:59:12.406Z · LW(p) · GW(p)
I usually read 'Robin' as Robin Hanson from Overcoming Bias, the 'sister site' from the sidebar. That made me all sorts of confused when I saw that you first found us when you were talking to a biased crackpot.
So do I, actually. He got here first, is the thing.
comment by oliverbeatson · 2009-09-11T00:10:20.679Z · LW(p) · GW(p)
Hello! I'm Oliver, as my username should make evident. I'm 17 years old, and this site was recommended to me by a friend, whose LW username I observe is 'Larks'. I drift over to Overcoming Bias occasionally, and have RSS feeds to Richard Dawkins' site and (the regrettably sensationalist) NewScientist magazine. As far as I can see past my biases, I aspire to advance my understanding of the kinds of things I've seen discussed here, science, mathematics, rationality and a large chunk of stuff that at the moment rather confuses me.
I started education with a prominent interest in mathematics, which later expanded to include the sciences and writing, and consider myself at least somewhat lucky to have escaped ten years of light indoctrination from church-school education, later finding warm comfort in the intellectual bosom of Richard Dawkins. I've also become familiar with the likes of Alan Turing, Steven Pinker and yet others, from fields of philosophy, mathematics, computing and science.
I'm currently at college in the UK studying my second year of Mathematics, Philosophy, English Language and entering a first year of Physics (I have concluded a year of Computing). As much as I enjoy and value philosophy as a mechanism for genuine rational learning and discovery, I often despise the canon for its almost religious lack of progression and for affixing value to ultimately meaningless questions. It is for this reason that I value having access to Less Wrong et alia. Mathematics is a subject which I learned (the hard way) that I cannot live without.
I think I've said as much here as I can and as much as I need to, so I'll conclude with a toast: to a future of enlightenment, learning, overcoming biases and most importantly fun.
comment by Larks · 2009-08-11T23:21:03.861Z · LW(p) · GW(p)
Handle: Larks (also commonly Larklight, OxfordLark, Artrix)
Name: Ben
Sex: Male
Location: Eastbourne, UK
Age: at 17 I suspect I may be the baby of the group?
Education: results permitting (to which I assign a probability in excess of 0.99) I'll be reading Mathematics and Philosophy at Oxford
Occupation: As yet, none. Currently applying for night-shift work at a local supermarket
I came to LW through OB, which I found as a result of Bryan Caplan's writing on Econlog (or should it be at Econlog?). I fit much of the standard pattern: atheist, materialist, economist, reductionist, etc. Probably my only departure is being a Conservative Liberal rather than a libertarian; an issue of some concern to me is the disconnect between the US/Econlog/OB/LW/Rationalist group and the UK/Classical Liberal/Conservative Party group, both of which I am interested in. Though Hayek, of course, pervades all.
In an impressive display, I suppose, of cognitive dissidence, I realised that the Bible and Evolution were contradictory in year 4 (age:8), and so came to the conclusion that the continents had originally been separated into islands on opposite sides of the planet. Eden was on one side, evolution on the other, and then continental drift occurred. I have since rejected this hypothesis. I came to Rationalism partly as a result of debating on the NAGTY website.
There are probably two notable influences OB/LW have had on my life. Firstly, I've begun to reflexively refer to what would or would not be empirically the case under different policies, states of affairs, etc., thus making discourse notably more efficient (or at least, it makes it harder for other people to argue back. Hard to tell the difference.)
Secondly, I've given up trying to out-argue my irrational Marxist friend, and instead make money off him by making bets about political and economic matters. This does not seem to have affected his beliefs, but it is profitable.
Replies from: Alicorn↑ comment by Alicorn · 2009-08-11T23:28:46.091Z · LW(p) · GW(p)
cognitive dissidence
I suspect you mean "cognitive dissonance". Perhaps you meant "cognitive dissidents", though, which is closer in spelling and would be a charming notion.
Edit: I looked it up and apparently, unbeknownst to me, "dissidence" is a word. But I still suspect that "dissonance" was meant and that "dissidents" would have been charming.
Replies from: conchis
comment by Whisper · 2009-07-22T06:56:31.615Z · LW(p) · GW(p)
Greetings. To this community, I will only be known as "Whisper". I'm a believer in science and rationality, but also a polytheist and a firm believer that there are some things that science cannot explain. I was given the site's address by one Alicorn, who I've been trying to practice Far-Seeing with...with much failure.
I'm 21 years old right now, living in NY, and am trying to write my novels. As for who I am, well, I believe you'll all just have to judge me for yourselves by my actions (posts) rather than any self-description. Thankee to any of you who bothered to read.
Replies from: thomblake↑ comment by thomblake · 2009-07-22T14:19:42.136Z · LW(p) · GW(p)
a firm believer that there are some things that science cannot explain
I think this is a common enough epistemic position to be in, though some of us might define our terms a bit differently.
For any decent definitions of 'explain' and 'science', though, whatever "science can't explain" is not going to be explained by anything else any better.
comment by AnnaSalamon · 2009-05-04T07:52:43.296Z · LW(p) · GW(p)
(This is in response to a comment of byrnema's elsewhere; if we want LW discussions to thrive even in cases where the discussions require non-trivial prerequisites, my guess is that we should get in the habit of taking "already discussed exhaustively" questions to the welcome thread. Or if not here, to some beginner-friendly area for discussing or debating background material.)
byrnema wrote:
So the idea is that a unique, complex thing may not necessarily have an appreciation for another unique complexity? Unless appreciating unique complexity has a mathematical basis.
Kind of. The idea is that:
- Both human minds, and whatever AIs can be built, are mechanistic systems. We’re complex, but we still do what we do for mechanistic reasons, and not because the platonic spirit of “right thing to do”ness seeps into our intelligence.
- Goals, and "optimization power / intelligence" with which to figure out how to reach those goals, are separable to a considerable extent. You can build many different systems, each of which is powerfully smart at figuring out how to hit its goals, but each of which has a very different goal from the others. (A toy sketch of this separability follows this list.)
- Humans, for example, have some very specific goals. We value, say, blueberry tea (such a beautiful molecule...), or particular shapes and kinds of meaty creatures to mate with, or particular kinds of neurologically/psychologically complex experiences that we call “enjoyment”, “love”, or “humor”. Each of these valued items has tons of arbitrary-looking details; just as you wouldn’t expect to find space aliens who speak English as their native language, you also shouldn’t expect an arbitrary intelligence to have human (as opposed to parrot, octopus, or such-and-such variety of space aliens) aesthetics or values.
- If you’re dealing with a sufficiently powerful optimizing system, the question isn’t whether it would assign some value to you. The question is whether you are the thing that it would value most of all, compared to all the other possible things it could do with your atoms/energy/etc. Humans re-arranged the world far more than most species, because we were smart enough to see possibilities that weren’t in front of us, and to figure out ways of re-arranging the materials around us to better suit our goals. A more powerful optimizing system can be expected to change things around considerably more than we did.
That was terribly condensed, and may well not make total sense at this point. Eliezer’s OB posts fill in some of this in considerably better detail; also feel free, here in the welcome thread, to ask questions or to share counter-evidence.
comment by Cyan · 2009-04-21T00:59:48.588Z · LW(p) · GW(p)
- Handle: Cyan
- Age: 31
- Species: Pan sapiens (male)
- Location: Ottawa, Ontario, Canada
- Education: B.Sc. biochemistry, B.A.Sc. chemical engineering, within pages of finishing my Ph.D. thesis in biomedical engineering
- Occupation: statistical programmer (would be a postdoc if I were actually post the doc) at the Ottawa Institute of Systems Biology
I'm principally interested in Bayesian probability theory (as applied in academic contexts as opposed to rationalist ones). I don't currently attempt to apply rationalist principles in my own life, but I find the discussion interesting.
comment by MorganHouse · 2009-04-18T18:06:05.163Z · LW(p) · GW(p)
- Handle: MorganHouse
- Age: 25
- Education: Baccalaureate in natural sciences
- Occupation: Freelance programmer
- Location: West Europe
- Hobbies: Programming, learning, traveling, dancing
I discovered Less Wrong from a post on Overcoming Bias. I discovered Overcoming Bias from a comment on Slashdot.
I have been promoting rationality for as long as I can remember, although I have improved much in the past few years and even more after discovering this forum. About the same time as "citation needed" exploded on Wikipedia, I started applying this standard rigorously to my conversations, and I look for outside sources in my discussions every day. This community has taught me to promote nothing less than the full truth, which I have been striving to do ever since. The latter doesn't always work very well socially, but I apply it nevertheless, hoping to lead by example (which I have succeeded in doing several times before).
I have read a lot of "Internet self help", including the various teachings of the seduction community, Tim Ferriss, Paul Graham, and Steve Pavlina. At age 18 I was planning on getting a master's degree, then going into a full-time job for life, and I had no idea how to deal with women. The aforementioned sources led me to start working freelance instead, striving for financial freedom, and seeing the most attractive women as equals with whom I can share experiences.
I am very good at my profession, programming, and make about as much per hour as expensive lawyers. Because of this, I only have to work a couple of months per year. My goal is to have automated income forever, so I can work exclusively on writing free software (as in Free Software Foundation) for the rest of my life. I am presently nowhere near this goal, and looking for how I can make it happen.
Replies from: John_Maxwell_IV, Eliezer_Yudkowsky↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-04-20T17:27:58.454Z · LW(p) · GW(p)
I am very good at my profession, programming, and make about as much per hour as expensive lawyers.
Any advice on how to become this good?
Replies from: MorganHouse↑ comment by MorganHouse · 2009-04-20T22:43:11.480Z · LW(p) · GW(p)
Several studies[1] have concluded that you need to spend at least 10,000 hours doing something to become a top expert. 10,000 hours is equivalent to 5 years of working full-time, but don't think you can count each work hour as one hour towards this total, since you are much more likely to be forced to work on mundane tasks than when you're doing this as a hobby. Enrolling in a university without mandatory attendance for 3-5 years without caring about your grades can give you enough spare time to accomplish this. If you don't already have one, a university degree with poor grades can still be useful for visa purposes, when traveling or emigrating.
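(To check that conversion: 40 hours a week for 50 weeks is 2,000 hours a year, so five such years give the 10,000.)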
Regarding programming specifically, I would do a broad spectrum of "hard" stuff that most programmers avoid, as part of your learning. For example: writing video decoders (H.264 uses several delightfully complex algorithms), transactional databases, implementations of several Internet standards and software for embedded devices.
Finally, I find that it's easiest to get paid your worth if you work as a freelancer for several companies that have prior experience with outsourcing programming tasks to freelancers.
[1] You can find sources for this by googling "10 000 hours".
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-18T18:48:42.464Z · LW(p) · GW(p)
If you're literate in Python, we've got some free software programming tasks going here on Less Wrong...
comment by JGWeissman · 2009-04-17T03:10:09.397Z · LW(p) · GW(p)
- Handle: JGWeissman
- Name: Jonathan Weissman
- Location: Orange County, California
- Age: 27
- Education: Majored in Math and Physics, minored in Computer Science
- Occupation: Programmer
- Hobby: Sailboat racing
I found OB through StumbleUpon.
comment by PhilGoetz · 2009-04-17T00:01:54.816Z · LW(p) · GW(p)
- Location: Washington DC, USA
- Education: BS math (writing minor), PhD comp sci/artificial intelligence (cog sci/linguistics minors), MS bioinformatics
- Jobs held (chronological): robot programmer in a failed startup, cryptologist, AI TA, lecturer, virtual robot programmer in a failed startup, distributed simulation project manager, AI research project manager, computer network security research, patent examiner, founder of failed AIish startup, computational linguist, bioinformatics engineer
- Blog
I was a serious fundamentalist evangelical until about age 20. Factors that led me to deconvert included Bible study, successful simulations of evolution, and observation of radical cognitive biases in other Christians.
I was active on the Extropian mailing list, and published a couple of things in Extropy, about 1991-1995.
Like EY, I think AI is inevitable, and is the most important problem facing us. I have a lot of reservations about his plans, to the point of seeing his FAI as UFAI (don't ask in this thread). I think the most difficult problem isn't developing AI, or even making it friendly, but figuring out what kind of possible universes we should aim for; and we have a limited time in which we have large leverage over the future.
I prioritize slowing aging over work on AI. I expect that partial cures for aging will be developed 10-20 years before they are approved in the US, and so I want to be in a position to take published research and apply it to myself when the time comes.
I believe that rationality is instrumental, and I repeatedly dissent when people on LW make what I see as ideological claims about rationality, such as that it is defined as that which wins, or when they present rationality as a value-system or a lifestyle. There's room for that too; I mainly want people to recognize that being rational doesn't require all that.
comment by outlawpoet · 2009-04-16T21:26:27.416Z · LW(p) · GW(p)
- Handle: outlawpoet
- Name: Justin Corwin
- Location: Playa del Rey California
- Age: 27
- Gender: Male
- Education: autodidact
- Job: researcher/developer for Adaptive AI, internal title: AI Psychologist
- aggregator for web stuff
Since I work in AI, cognitive science and decision theory are of professional interest to me. This community is interesting to me mostly out of bafflement. It's not clear to me exactly what the Point of it is.
I can understand the desire for a place to talk about such things, and a gathering point for folks with similar opinions about them, but the directionality implied in the effort taken to make Less Wrong what it is escapes me. Social mechanisms like karma help weed out socially miscued or incompatible communications; they aren't well suited for settling questions of fact. The culture may be fact-based, but this certainly isn't an academic or scientific community; its mechanisms have nothing to do with data management, experiment, or documentation.
The community isn't going to make any money (unless it changes) and is unlikely to do more than give budding rationalists social feedback (mostly from other budding rationalists). It potentially is a distribution mechanism for rationalist essays from pre-existing experts, but Overcoming Bias is already that.
It's interesting content, no doubt. But that just makes me more curious about goals. The founders and participants in LessWrong don't strike me as likely to have invested so much time and effort, so much specific time and effort getting it to be the way it is, unless there were some long-term payoff. I suppose I'm following along at this point, hoping to figure that out.
Replies from: ciphergoth, None↑ comment by Paul Crowley (ciphergoth) · 2009-04-16T22:53:02.891Z · LW(p) · GW(p)
I suspect we're going to hear more about the goal in May. We're not allowed to talk about it, but it might just have to do with exi*****ial r*sk...
comment by mattnewport · 2009-04-16T18:47:36.706Z · LW(p) · GW(p)
- Handle: mattnewport
- Name: Matt Newport
- Location: Vancouver, Canada
- Age: 30
- Occupation: Programmer (3D graphics for games)
- Education: BA, Natural Sciences (experimental psychology by way of maths, physics, history and philosophy of science and computer science)
I'm here by way of Overcoming Bias which attracted me with its mix of topics I'm interested in (psychology, economics, AI, atheism, rationality). With a lapsed catholic mother and agnostic father I had a half-heartedly religious upbringing but have been an atheist for as long as I can remember thinking about it. Politically my parents were left-liberal/socialist and I would have described myself that way until my early 20s. I've been trending increasingly libertarian ever since.
I'm particularly interested in applying rationality to actually 'winning' in everyday life. I'm interested in the broad 'life-hacking' movement but think it could benefit from a more rigorously rational/scientific approach. I hope to see more discussion of this kind of thing on Less Wrong.
comment by lavalamp · 2009-04-16T18:05:12.985Z · LW(p) · GW(p)
Hi, I've been lurking for a few weeks and am likely to stay in lurker mode indefinitely. But I thought I should comment on the welcome thread.
I would prefer to stay anonymous at the moment, but I'm male, 20's, BS in computer programming & work as a software engineer.
As an outsider, some feedback for you all:
- Interesting topics -- keep me reading
- Jargon -- a little is fine, but the more there is, the harder it is to follow.
- The fact that people make go (my favorite game) references is a nice plus.
I would classify myself as a theist at the moment. As such (and having been raised in a very Christian environment), I have some opinions on how you guys could more effectively proselytize--but I'm not sure it's worth my time to speak up.
Replies from: ciphergoth, ChrisHibbert, pnkflyd831↑ comment by Paul Crowley (ciphergoth) · 2009-04-16T20:13:03.234Z · LW(p) · GW(p)
Thanks for commenting, if this thread gives cause to you and more like you to stick their heads above the parapet and say hello it will have been a good thing.
People here have mixed feelings about the desirability of proselytization, since the ideas that are most vigorously proselytized are so often the worst. I think that we will want to do so, but we will want to work out a way of doing it that at least gives some sort of advantage to better ideas over worse but more appealing ones. I think we'll definitely want to hear from people like you who probably have more real experience in this field than many of us put together.
And since you're a theist, I'm afraid you'll be one of the people we're proselytizing to, so if you can teach us how to do it without pissing people off that would help too :-)
Replies from: lavalamp↑ comment by lavalamp · 2009-04-17T18:36:33.708Z · LW(p) · GW(p)
Thanks for the welcome, everyone.
Personally, I pretty much have no desire to proselytize anyone for anything. Waste of time, in my experience. Maybe you all are different, but no one I've ever met will actually change their mind in response to hearing a new line of reasoning, anyway.
What I do have an interest in is people actually taking the time to understand each other and present points in ways that the other party will understand. Atheists and Christians are particularly bad at this. Unfortunately, the worst offenders on the Christian side are the least likely to change, or even see the problem. Perhaps there's more hope for those on the other side.
Anyway, I have no desire to debate theism here.
Replies from: mattnewport↑ comment by mattnewport · 2009-04-17T18:45:46.252Z · LW(p) · GW(p)
I have changed my mind in response to hearing a new line of reasoning. One particular poster on a forum I used to frequent changed my mind about politics by patiently giving sound arguments that I had not been presented with before. My political beliefs have been undergoing a continual evolution since then but I can pretty much point to that one individual as instrumental in shifting my political opinions in a new direction.
↑ comment by ChrisHibbert · 2009-04-16T18:23:25.571Z · LW(p) · GW(p)
I have some opinions on how you guys could more effectively proselytize--but I'm not sure it's worth my time to speak up.
If you post about things that are interesting to you, we'll talk about them more.
If you act like you have something valuable to say, we'll read it and respond. We would all be likely to learn something in the process.
↑ comment by pnkflyd831 · 2009-04-16T20:51:03.580Z · LW(p) · GW(p)
lava, you aren't the only one on LW that feels the same way. I have a similar background and concerns. We are not outsiders. LW's dedication to attacking the reasoning of a post/comment, but not the person, has been proved over and over.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-04-16T22:43:36.647Z · LW(p) · GW(p)
LW's dedication to attacking the reasoning of a post/comment, but not the person, has been proved over and over.
This is very good to hear; I wouldn't put it quite that strongly, but I had the impression it was an axis we did well on and it's nice to know someone else sees it that way too.
comment by jimrandomh · 2009-04-16T18:04:12.794Z · LW(p) · GW(p)
- Handle: jimrandomh
- Location: Bedford, MA
- Age: 22
- Education: Master's in CS
- Occupation: Programmer
I read Less Wrong for the insight of the authors, which on other blogs would be buried in drivel. Unlike most blogs, Less Wrong has both norms against sloppy thinking and a population of users who know enough to enforce them. Many other blogs have posts that are three-fourths repetition of news stories that I've already seen, and comments that are three-fourths canned responses and confabulation.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-16T17:15:45.609Z · LW(p) · GW(p)
Perhaps take out the "describe what it is that you protect" part. That's jargon / non-obvious new concept.
Replies from: AnnaSalamon, MBlume↑ comment by AnnaSalamon · 2009-04-16T19:51:16.774Z · LW(p) · GW(p)
Oh, I thought it was nice, because it linked newcomers to one of my favorite posts as one of the orienting-aspects of the site (if people come here new). Maybe if linking text was made transparent, e.g. "describe what it is you value and work to achieve"?
I also like the idea of implicitly introducing LW as a community of people who care about things, and who learn rationality for a reason.
Replies from: MBlume
comment by [deleted] · 2009-04-16T17:04:52.367Z · LW(p) · GW(p)
- Handle: jamesnvc
- Location: Toronto, ON
- Age: 19
- Education: Currently 2nd year engineering science
- Occupation: Student/Programmer
- Blog: http://jamesnvc.blogspot.com
As long as I can remember, I've been an atheist with a strong rationalist bent, inspired by my grandfather, a molecular biologist who wanted at least one grandchild to be a scientist. I discovered Overcoming Bias a year or so ago and became completely enthralled by it: I felt like I had discovered someone who really knew what was going on and what they were talking about.
comment by MBlume · 2009-04-16T16:53:04.724Z · LW(p) · GW(p)
A couple of possible additions to the page which I'm still a bit unsure of:
You may have noticed that all the posts and all the comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. Try not to take this too personally. Voting is used mainly to get the most useful comments up to the top of the page where people can see them. It may be difficult to contribute substantially to ongoing conversations when you've just gotten here, and you may even see some of your comments get voted down. Don't be discouraged by this. If you've any questions about karma or voting, please feel free to ask here.
and
A note for theists: you will find a pretty uniformly atheist community here at LW. You may assume that this is an example of groupthink in action, but please allow for the possibility that we really truly have given full consideration to theistic claims and have found them to be false. If you'd like to know how we came to this conclusion, you might be interested to read (list of OB posts, probably including Alien God, Religion's Claim, Belief in Belief, Engines of Cognition, Simple Truth, Outside the Lab etc.) In any case, we're happy to have you participating here, but please don't be too offended to see other commenters treating religion as an open-and-shut case
Any thoughts?
Replies from: timtyler, Jack, zaph, MrHen, ciphergoth, byrnema, thomblake↑ comment by timtyler · 2009-04-16T17:21:22.600Z · LW(p) · GW(p)
Maybe single out the theists? Buddhism and Taoism are "religions" too - by most accounts - but they are "significantly" less full of crap.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2009-04-16T23:31:10.941Z · LW(p) · GW(p)
I'm not convinced Buddhism has less crap. It's just more evasive about it. The vast majority of Buddhist practitioners have no idea what Buddhism is about. When you come right down to it, it's a religion that teaches that the world is bad, love is bad, and if you work very hard for thousands of lifetimes, you might finally attain death.
Replies from: timtyler↑ comment by timtyler · 2009-04-17T22:54:35.290Z · LW(p) · GW(p)
I'm not sure where you are getting that from. A more conventional summary:
"Buddhists recognize him as an awakened teacher who shared his insights to help sentient beings end their suffering by understanding the true nature of phenomena, thereby escaping the cycle of suffering and rebirth (saṃsāra), that is, achieving Nirvana. Among the methods various schools of Buddhism apply towards this goal are: ethical conduct and altruistic behaviour, devotional practices, ceremonies and the invocation of bodhisattvas, renunciation of worldly matters, meditation, physical exercises, study, and the cultivation of wisdom."
↑ comment by Jack · 2009-04-16T17:07:12.939Z · LW(p) · GW(p)
I vote definitely yes to the first.
As to the second, the message isn't a bad idea. But there are so many OB posts being linked to that I'm not sure linking to more is the right idea. Maybe once the wiki gets going there can be a summary of our usual reasons?
Replies from: MBlume↑ comment by zaph · 2009-04-16T17:05:43.082Z · LW(p) · GW(p)
I think the first one's good to have: it's positive, and gets people somewhat acclimated to the whole karma thing. I really don't know what to say about the 2nd; if there were a perfect boilerplate response to religious criticism of rationalism, I suppose this forum probably wouldn't exist. Yours is still as good an effort as any, though could we possibly take debating evolution completely off the table? That and calling any scientific theory "just a theory"?
↑ comment by MrHen · 2009-04-16T16:59:22.145Z · LW(p) · GW(p)
After the note to the religious, perhaps a nice, comforting "you are still welcome here as long as you don't cause trouble." That is, of course, assuming they are still welcome here. Because they are, right?
Replies from: zaph, MBlume, MBlume↑ comment by MBlume · 2009-04-16T19:50:48.720Z · LW(p) · GW(p)
In any case, we're happy to have you participating here, but please don't be too offended to see other commenters treating religion as an open-and-shut case.
Something like that?
Replies from: MrHen↑ comment by MrHen · 2009-04-16T22:42:42.055Z · LW(p) · GW(p)
Yeah, that works. If I had to edit it myself I would do something like this:
A note to the religious: you will find LW overtly atheist. If you'd like to know how we came to this conclusion you may find these related posts a good starting point. We are happy to have you participating but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to religious claims and found them to be false.
Just food for thought. I trimmed it up a bit and tried being a little more charitable. I also started an article on the wiki but someone else may want to approve it or move it. The very last sentence is a bit aggressive, but I think it is the softest way to make the point that this is an unmovable object.
Replies from: Bongo, MBlume↑ comment by Bongo · 2009-04-17T12:07:39.861Z · LW(p) · GW(p)
Shouldn't just assert that it isn't groupthink. Maybe it is. Let them judge that for themselves. Now it sounds defensive, even.
It's probably always dangerous and often wrong to assert that you, or your group, is free of any given bias.
Otherwise I do like the paragraph.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-16T22:46:19.764Z · LW(p) · GW(p)
I like both of these (though yes, theism rather than religion will avoid some nitpicking).
↑ comment by byrnema · 2009-04-16T17:56:57.458Z · LW(p) · GW(p)
I appreciate the links.
and especially if you intend to argue the matter, it will almost certainly profit you
a little snippy, and not necessary -- remember these newcomers haven't done anything wrong yet
I would replace the word "supernaturalist" with "religious" again. No reason to be even that tiny bit confrontational.
Replies from: MBlume↑ comment by MBlume · 2009-04-16T18:41:28.485Z · LW(p) · GW(p)
a little snippy, and not necessary
Removed then -- it was not at all my intention to be snippy, only to motivate the reading
I would replace the word "supernaturalist" with "religious" again. No reason to be even that tiny bit confrontational.
Done, but do keep in mind that, at least on LW, "supernatural" has a clearly defined meaning, being used to describe theories which grant ontologically fundamental status to things of the mind -- intelligence, emotions, desires, etc.
Replies from: byrnema↑ comment by byrnema · 2009-04-16T21:03:52.623Z · LW(p) · GW(p)
Can we delete this thread in the spirit of taking out noise?
Replies from: MBlume↑ comment by thomblake · 2009-04-16T17:38:53.480Z · LW(p) · GW(p)
The first paragraph seems good.
Despite the vocal atheist and nonreligious majority, I wouldn't doubt that there are many religious people here. Is the second paragraph really helpful? Any religious folks (even pagans, heathens, unitarians, buddhists, etc) here to back me up on this?
Replies from: John_Maxwell_IV↑ comment by John_Maxwell (John_Maxwell_IV) · 2009-04-16T18:06:42.146Z · LW(p) · GW(p)
I know one evangelical Christian who reads but does not post to Less Wrong.
comment by Jack · 2009-04-16T16:50:31.987Z · LW(p) · GW(p)
- Handle: Jack
- Location: Washington D.C.
- Age: 21
- Education: Feeling pretty self-conscious about being the only person to post so far without a B.A. I'll finish it next year; my major is philosophy, with a minor in cognitive science and potentially another minor/major in government. After that it's more school of some kind.
I wonder if those of us on the younger end of things will be dismissed more after posting our age and education. I admit to being a little worried, but I'm pretty sure everyone here is better than that. Anyway, I was a late joiner to OB (I think I got there after seeing a Robin Hanson bloggingheads) and then came here. I'm an atheist/materialist by way of Catholicism -- but pretty bored by New Atheism. I was raised in a pretty standard liberal/left wing home but have moved libertarian. I'm very sympathetic to the "liberaltarian" idea. Free markets with direct and efficient redistribution are where it's at.
Replies from: thomblake, ThoughtDancer, MorgannaLeFey, byrnema↑ comment by thomblake · 2009-04-16T17:40:33.169Z · LW(p) · GW(p)
I wonder if those of us on the younger end of things will be dismissed more after posting our age and education.
Don't worry - the top contributor and minor demigod 'round these parts doesn't have a degree, either.
ETA: Since Lojban doesn't think it's clear, I'm somewhat snarkily referring to Eliezer Yudkowsky.
Replies from: Lojban↑ comment by ThoughtDancer · 2009-04-16T18:03:04.038Z · LW(p) · GW(p)
Actually, I'm a bit afraid of the opposite--as an older fart who has a degree through an English Department... I'm often more than a little unsure and I'm concerned I'll be rejected out of hand, or, worse, simply ignored.
I suspect, though, that this crowd is inherently friendly, even when the arguments end up using sarcasm. ;-)
Replies from: MBlume↑ comment by MorgannaLeFey · 2009-04-17T13:47:59.133Z · LW(p) · GW(p)
I was sitting here thinking "Wow, I think I'm older than anyone here" and wondering if I might be dismissed in some way. Funny, that.
↑ comment by byrnema · 2009-04-16T17:45:57.608Z · LW(p) · GW(p)
For some reason I actually thought you were 13, and thought you were a terrific 13 year old to be here on LW, being well-read with astute comments. I'll delete this comment in about 5 minutes, it's just chatty.
Replies from: MBlume
comment by zaph · 2009-04-16T16:49:24.271Z · LW(p) · GW(p)
- Handle: zaph
- Location: Baltimore, MD
- Age: 35
- Education: BA in Psychology, MS in Telecommunications
- Occupation: System Performance Engineer
I'm mostly here to learn more about applied rationality, which I hope to use on the job. I'm not looking to teach anybody anything, but I'd love to learn more about tools people use (I'm mostly interested in software) to make better decisions.
comment by Richard_Kennaway · 2009-04-16T16:37:30.051Z · LW(p) · GW(p)
- Handle: You can see it just above. (Edit: I didn't realise that one can read LW with handles hidden, so: RichardKennaway.)
- Name: Like the handle.
- Gender: What the name suggests.
- Location: Norwich, U.K. (a town about two hours from London and 1.5 from Cambridge).
- Age: Over 30 :-)
- Education: B.Sc., D.Phil. in mathematics.
- Occupation: Academic research. Formerly in theoretical computer science; for the last 10 to 12 years, applied mathematics and programming. (I got disillusioned with sterile crossword-puzzle solving.)
Like, I suspect, most of the current readership, I'm here via OB. I think I discovered OB by chance, while googling to see if AI was still twenty years away (it was -- still is).
Atheist, materialist, and libertarian views typical for this group; no drastic conversion involved from any previous views, so not much of a rationalist origin story. My Facebook profile actually puts down my religion as "it's complicated", but I won't explain that, it's complicated.
Replies from: GuySrinivasan, Richard_Kennaway, MrHen↑ comment by SarahSrinivasan (GuySrinivasan) · 2009-04-16T16:51:09.109Z · LW(p) · GW(p)
This is pretty funny if you happen to have the anti-kibitzer (which hides handles) turned on. :D
↑ comment by Richard_Kennaway · 2009-04-16T19:55:22.304Z · LW(p) · GW(p)
I wrote:
Atheist, materialist, and libertarian views typical for this group; no drastic conversion involved from any previous views, so not much of a rationalist origin story.
Bit of a non sequitur I made there. How did I come to value rationality itself, rather than all those other things that are some of its fruits? I always have, to the extent that I knew there was such a thing. I remember coming across the books of Korzybski, Tony Buzan, Edward de Bono, and the like, in my teens, and enjoyed similar themes in science fiction. OB is the most interesting thing I've come across in recent years. For the same reasons I've also been interested in "mysticism", but still have no idea what it is or any experience of it. Who will found "Overcoming Woo" to write a blog-book on the subject?
↑ comment by MrHen · 2009-04-16T18:13:01.176Z · LW(p) · GW(p)
Edit: I didn't realise that one can read LW with handles hidden
Whoa, you can? Where did I miss that preference?
Replies from: MBlume↑ comment by MBlume · 2009-04-16T18:26:12.209Z · LW(p) · GW(p)
It's not actually a native preference. Marcello wrote us a script which, run under a particular Firefox extension, produces this effect.
Replies from: MrHen
comment by MrHen · 2009-04-16T14:44:13.722Z · LW(p) · GW(p)
- Handle: MrHen
- Name: Adam Babcock
- Location: Tyler, TX
- Age: 24
- Education: BS in Computer Science, minors in Math and Philosophy
- Occupation: Software engineer/programmer/whatever the current term is now
I found LW via OB via a Google search on AI topics. The first few OB posts I read were about Newcomb's paradox and those encouraged me to stick it on my blogroll.
Personal interests in rationality stem from a desire to eliminate "mental waste". I hold pragmatic principles to be of higher value than Truth for Truth's sake. As it turns out, this means something similar to systemized winning.
comment by Jonathan Doolin (jonathan-doolin) · 2018-05-26T16:10:50.087Z · LW(p) · GW(p)
Hi. This is my first time to this website, and my third comment today. I've been listening to the show "Bayesian Conspiracy" and made some posts to the subreddit. So I guess I'm not a good lurker.
I was intrigued by Arandur's article entitled "The Goal of the Bayesian Conspiracy" which was essentially,
(1) eliminate most pain and suffering and inequity.
(2) develop technologies for eternal life.
I thought the ordering Arandur suggested was quite wise. I recently saw the series "Dollhouse" and I felt like it gave a pretty good description of what would probably happen if you reversed the order.
And then I went on to read the article on "The Failures of Eld Science"... Well, skim.... Like I said, I'm not a good lurker. And then I read "Rationality as a Martial Art" which was short so I read the whole thing.
I guess I have very entrenched views on the failures of Eld science, and Rationality as a martial art, because, I've been arguing about Special and General Relativity online for about two decades, and occasionally debating biblical interpretation with Christians for most of my life.
Hide in plain sight
Before you can step forward you have to be where you are.
Don't be ashamed of your ignorance, but don't cling to it either.
Desire the things you have, commit to what you love.
Don't look for false things. Don't seek out error to make yourself look smart. Don't confuse counterattack with defense.
Stand up for what you believe in--especially when you realize you look foolish, and still believe it.
When pulled in different directions, stick with your commitments.
Get good at what you have to do. It will be more fun and people will appreciate you more.
Be clear with your meaning.
Try to understand others from their own perspectives, and with their own meanings.
Acknowledge the hypothesis. Don't confuse what you believe to be a false belief with a moral failure.
Be the heart before you be the head. Agreeing to disagree is the start of a conversation... not the end.
I have two MS degrees, one in physics and one in math... I got them in the wrong order... as knowing how to do a differential equation would have been REALLY helpful in physics. But I'm really good at trig, both regular and hyperbolic.
comment by Taily · 2012-12-07T21:28:02.634Z · LW(p) · GW(p)
Hello, please call me 'Taily' (my moniker does not refer to a "tail" or a cartoon character). I'm an atypical 30 year old psychology student, still working to get my PhD. I also spend a significant amount of time on thinking, writing, and gaming. Among other things.
One reason I am joining this community is my mother, oddly. She is a stay-at-home mom with few (if any) real life friends. She interacts on message boards. I...well I don't want to be like that at all honestly, and I've only on occasion been a part of a message-board community. But I recognize the value of social exchange and community, and as my real-life friends are limited by time and location in how we can meet, this forum may be a good supplement.
My MAIN reason though - why did I choose this 'place'? It seems Very Interesting. I've read a bunch of EY's writings (the five PDF files?), and I've gotten to the point where I've wanted to interact - to ask questions and give opinions and objections - and I'm hoping that's some of what this message board is about.
Also to note:
-I first became acquainted with this community via Harry Potter and the Methods of Rationality, as Harry Potter fan-fiction is a 'guilty pleasure' of mine.
-I am not an atheist, although I personally cannot stand organized religion. I very much respect the idea of coming to conclusions and developing opinions without the aid of "religion" or "spirituality" though.
-I have read a significant amount of each of those aforementioned PDF files - fun theory and utopias, quantum physics, Bayesian, all that - but I'm not done yet (and I don't yet get quantum mechanics nearly as much as I would like to).
-I consider myself rather well-versed in psychology and associated theories and I am sure I have something to contribute in that area. I wish I were an expert on all the cognitive theories and heuristics/biases, but I'm not (yet). But that's one reason I became interested specifically in EY's writings and this message board.
-One of my main personal philosophies is on doubt and possibility. Nothing's 100% certain, and considering the way the universe is made, I have trouble believing anything we 'know' is 100% accurate. Conversely, I don't believe anything is 100% inaccurate. So...I tend to hedge a lot.
-I think that the general use of statistics in current psychological research is flawed, and I'm looking to learn more about how to refine psychology research practices, such as by using Bayes' Theorem and all that (a toy example of the sort of thing I mean is sketched after this list).
-That's probably more than enough of an introduction for now. I hope I find a place to fit in!
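PS: here is the kind of toy Bayes' Theorem calculation I have in mind -- purely a sketch with made-up numbers, not a claim about any real study:

    # Bayes' Theorem: P(H | D) = P(D | H) * P(H) / P(D),
    # where P(D) = P(D | H) * P(H) + P(D | ~H) * P(~H).

    def posterior(prior, p_data_given_h, p_data_given_not_h):
        """Probability the hypothesis is true, given a 'significant' result."""
        p_data = p_data_given_h * prior + p_data_given_not_h * (1 - prior)
        return p_data_given_h * prior / p_data

    # Suppose a study design detects 80% of true effects (power),
    # gives a false positive for 5% of nonexistent ones (alpha),
    # and 10% of the hypotheses a field tests are actually true:
    print(posterior(0.10, 0.80, 0.05))  # ~0.64 -- far less certainty than p < .05 suggests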
comment by TGM · 2012-07-10T20:21:23.029Z · LW(p) · GW(p)
I’m 20, male and a maths undergrad at Cambridge University. I was linked to LW a little over a year ago, and despite having initial misgivings for philosophy-type stuff on the internet (and off, for that matter), I hung around long enough to realise that LW was actually different from most of what I had read. In particular, I found a mix of ideas that I’ve always thought (and been alone amongst my peers in doing so), such as making beliefs pay rent; and new ones that were compelling, such as the conservation of expected evidence post.
I’ve always identified as a rationalist, and was fortunate enough to be raised to a sound understanding of what might be considered ‘traditional’ rationality. I’ve changed the way I think since starting to read LW, and have dropped some of the unhelpful attitudes that were promoted by status-warfare at a high achieving all-boys school (you must always be right, you must always have an answer, you must never back down…)
I’m here because the LW community seems to have lots of straight-thinking people with a vast cumulative knowledge. I want to be a part of and learn from that kind of community, for no better reason than I think I would enjoy life more for it.
comment by Benevolence · 2011-09-08T19:29:55.941Z · LW(p) · GW(p)
Greeting Less Wrong!
My name is Dimitri Karcheglo. I'm 21, I live in Vancouver, Canada. I was born in Ukraine and immigrated to Vancouver with my family in 1998.
I found my way here via a recommendation from a friend I have in The Zeitgeist Movement. He recommended Harry Potter and the Methods of Rationality to me, as well as the Less Wrong wiki/sequences. I've read HPMOR at least 10 times over now (I have a thing with re-reading. I don't get bored by it.) I've also read some of the material on the site (though not a lot yet. Just "Map and Territory" and "Mysterious Answers to Mysterious Questions").
In terms of my education, I studied one year of computer programming back in 07/08 and one year of Civil Engineering in 08/09. For the last couple of years I've been taking a course or two and working, living on my own, taking a break from serious school. I plan to continue Civil Engineering full time next fall (the 12/13 year).
I was raised by parents who are both fairly proficient in math and problem solving. As such it is not surprising that I developed a talent for those spheres, and, by extension, rationality. I have a tendency to over-analyze things, which often ends up prolonging discussions beyond reasonable time frames.
I've come here partly for myself and partly for others. I want to improve myself and get rid of as many of my flaws as possible. At the same time I want to learn how to teach others rational thinking as well. Hopefully some teaching methods on this site that will (again hopefully) work for me will work for those I talk to as well. I find it's extremely difficult to teach people to think rationally, because naturally, they think they already are. It's hard to make people understand the depths to which you need to go in your thinking process to really start looking at things properly and getting rid of biases. And the hardest thing of all seems to be to get people to admit they're wrong. If anyone has some good tactics for this I would greatly appreciate you sharing them.
Some of my main interests:
Politics: Mainly in the sphere of removing corruption. Ultimately I hold no political beliefs other than that politics is useless and that a rational society has no need for government. I'm not left, right, or center. I'm not up or down. I'm simply not there. If we attach "politics" to the structuring of society then yes, I have a lot of ideas and beliefs there that I hold fairly strongly (though of course they are not set in stone). However, me going into those may be too much for this one post to handle ;)
Economics: I know a fair bit about our economic and especially our monetary system via some documentaries and independent research I've done. I hold the view that a resource-based economy is the way to go for us right now, given we have the technological capability to pull it off. Capitalism was useful in the late 19th and early 20th century, but has run out of utility (or at least its utility has vastly diminished and its consequences have exponentially increased).
Psychology, especially in connection to development: the nature vs. nurture argument. I'm interested in how people become what they become psychologically, why they arrive at their decisions, and the influences and stimuli that lead (I go as far as to say force) them there. I'm a believer in both nature and nurture working together. My view is that genes are not deterministic in their influence on psychology, but rather give us propensities towards certain psychological traits. Our environment and upbringing are what determine which genes are activated and which aren't, as well as what genetic mutations occur. My viewpoint largely comes from the documentary "Zeitgeist 3: Moving Forward." It's available on YouTube for free for anyone interested in learning more on this subject (as well as what a resource-based economy is).
The last paragraph brought to mind that, in my current state of mind, I'm largely influenced in the way I think by what I've learned from The Zeitgeist Movement, and by the further research it inspired me to do.
Anyway, that's a little about me. Anyone interested can ask more; I'm fairly open with sharing info about myself (but no, you can't have my bank account number).
I'd like to thank the founders of the site, and especially EY for his work on both this website (the goldmine of information and thought-provoking ideas that it is) and HPMOR, which I enjoyed immensely and will continue to follow as long as it is updated. I hope to learn a lot from all of you, and hopefully eventually be able to teach others myself. Sharing is caring, especially for knowledge and understanding.
Cheers,
Dimitri Karcheglo IRL / Benevolence on the internet
comment by saph · 2011-07-09T14:45:38.077Z · LW(p) · GW(p)
Hi,
- Handle: saph
- Location: Germany (hope my English is not too bad for LW...)
- Birth: 1983
- Occupation: mathematician
I have been thinking quite a lot on my own about topics like
- understanding and mind models
- quantitative arguments
- scientific method and experiments
- etc...
and after discovering LW some days ago I have tried to compare my "results" to the posts here. It was interesting to see that many of the ideas I had were also "discovered" by other people, but I was also a little bit proud that I had gotten so far on my own. This is probably the right place for me to start reading :-).
I am an atheist, of course, but cannot claim many other standard labels as mine. Probably "a human being with a desire to understand as much of the universe as possible" is a good approximation. I like learning and teaching, which is why I am interested in artificial intelligence. I am surrounded by people with strange beliefs, which is why I am interested in methods for teaching someone to question his/her beliefs. And while doing so, I might discover one or another wrong assumption in my own thinking.
I hope to spend some nice time here, and perhaps I can contribute something in the future...
Replies from: jsalvatier↑ comment by jsalvatier · 2011-07-27T20:10:26.550Z · LW(p) · GW(p)
Welcome!
comment by gjayb · 2011-06-16T01:24:36.438Z · LW(p) · GW(p)
Hi! My name is Jay, I'm 20ish, and I study mathematics and physics. I found this through HPMOR which came to me as a recommendation from another physicist.
I'm interested in learning logic, winning arguments, and being better able to understand philosophical debates. I'll be starting by going through the major sequences, as that seems generally recommended.
I have a blog, A Model of Reality, whose name seems particularly amusing now. It is so called because my main interest in scientific research is to improve the models for predicting reality (eg how corn flows out of a silo, how cracks propagate in a material, and why classical physics is frequently good enough)
ttfn -Jay
Replies from: nhamann↑ comment by nhamann · 2011-06-16T01:33:51.914Z · LW(p) · GW(p)
I'm interested in ... winning arguments ...
Ack, that won't do. It is generally detrimental to be overly concerned with winning arguments. Aside from that, though, welcome to LW!
Replies from: khafra↑ comment by khafra · 2011-06-16T20:16:56.554Z · LW(p) · GW(p)
But winning arguments is what reason is for!
edit: I don't think I've ever gotten 4 replies to a comment, let alone 4 replies at once to a six-month-old comment. But since it got so much attention, I should clarify that I intentionally conflated different meanings of purposefulness for dramatic effect.
Replies from: None, MarkusRamikin, Insert_Idionym_Here, dlthomas↑ comment by MarkusRamikin · 2011-12-20T16:48:21.402Z · LW(p) · GW(p)
Then we're this aberrant group who stubbornly insists on misapplying it. I can live with that.
↑ comment by Insert_Idionym_Here · 2011-12-20T17:04:56.211Z · LW(p) · GW(p)
Feet are for standing, not hands, but that doesn't keep us from admiring the gymnast.
comment by Laoch · 2010-11-30T20:15:33.856Z · LW(p) · GW(p)
Hello,
I'm a software engineer working in Ireland, desperately trying to find something else to do, preferably in the area of science. I've been a transhumanist for a while now and an atheist a lot longer, so I'm finding Less Wrong very interesting, as it appeals to my rational and empirical leanings.
I've been reading the sequences; they're fun, to say the least. I'm learning a lot and I have a lot to learn, and I'm looking forward to it!
For what it's worth, I'm male and have spent 27 years gathering experience in the west of Ireland.
comment by hairyfigment · 2010-10-14T23:01:59.904Z · LW(p) · GW(p)
Do what thou wilt shall be the whole of the Law. I'm a currently unemployed library school graduate with a fondness for rationality. I believe as a child I read most of Korzybski's Bible of general-semantics, which I now think breaks its own rules about probability but still tends to have value for the people most likely to believe in it.
I didn't plan to mention it before seeing this, but I practice an atheistic form of Crowley's mystical path. I hope to learn how to produce certain experiences in myself (for whoever I saw arguing about a priori certainty, call them non-Kantian experiences) while connected to whatever brain-scanners exist fourteen-odd years from now.
In that Crowley thread I saw a few bits that seem misleading, and I think I can explain some of them if people here still have an interest. Oh, and did Yvain really link to a copy of this without telling people to beware the quotation marks? That's just mean. ^_^
I also think Friendly AI seems like a fine idea, and I hope if the SIAI doesn't produce an FAI in EY's lifetime, they at least publicize a more detailed theory of Friendliness.
comment by arundelo · 2010-04-17T00:43:06.564Z · LW(p) · GW(p)
Hi! I've been on Less Wrong since the beginning. I'm finally getting around to posting in this thread. I found Less Wrong via Overcoming Bias, which I (presumably) found by wandering around the libertarian blogosphere.
- real life name: Aaron Brown
- location: metropolitan Detroit (Michigan, US)
- sex: male
- birth year: 1973
- profession: computer programmer
- avocation: musician
- some other interests: Ayn Rand, Breaking Bad, Esperanto, progressive rock, Vim
- see also: arundelo.com, arundelo.livejournal.com
comment by Kevin · 2010-01-27T09:54:50.234Z · LW(p) · GW(p)
Hi. My name's Kevin. I'm 23. I graduated with a degree in industrial engineering from the University of Pittsburgh last month. I have a small ecommerce site selling a few different kinds of herbal medicine, mainly kratom, and I buy and sell sport and concert tickets. Previously I started a genetic testing startup and I am gearing up for my next startup.
I post on Hacker News a lot as rms. kfischer &$ gmail *^ com for email and IM, kevin143 on Twitter, kfischer on Facebook.
I signed up for Less Wrong when it was first started but have just recently reached the linguistic level where I feel I can almost keep up with the conversation. 9 months ago I found myself bored by the nearly exclusive focus on meta-conversation and rationality. I would just read Eliezer's less meta stuff. But since graduating from school and having a job that requires me to work no more than 2 hours a day, I've been able to dedicate myself to social hedonism/relationship building and philosophy. I've learned more in one month of posting here than I did in my last two years of college classes.
I posted my rationalist origin story a while ago. http://lesswrong.com/lw/2/tell_your_rationalist_origin_story/74
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-27T10:41:10.681Z · LW(p) · GW(p)
I'm sure you're not surprised by this question :-) but if you're a rationalist, how come you sell herbal medicines?
Replies from: Kevin↑ comment by Kevin · 2010-01-27T10:59:19.287Z · LW(p) · GW(p)
Herbal medicine is a polite euphemism for legal drugs. The bulk of our business comes from one particular leaf that does have legitimate medical use and is way, way more effective than a reasonable prior says it should be.
We were actually planning on commercializing the active ingredient (called 7H), based on this gap we found in the big pharma drug discovery process, and it would have been a billion-dollar business. However, it would have required us to raise money for research so we could iterate through all of the possible derivatives of the molecule, and it's nearly impossible to raise money for research without having a PhD in the relevant area. We tried but kept hitting catch-22s.
At the most recent Startup School, I met someone who introduced me to a young CEO funded by top VCs who assured me that this idea fit the VC model perfectly, that he was pretty confident we could raise a million dollars for research and a patent, and that for something with potential like this, it did not matter at all that our team was incomplete, the VCs would find us people. I told him to give me a day to revise our one pager. I did a quick patent search and found that the Japanese discoverers of 7H had just filed a patent on all possible derivatives of 7H -- and they found some really awesome derivatives. They discovered 7H in 2001 and filed for the patent of the derivative molecules in 2009. For various reasons, we believed that their funding was not for all derivatives of 7H and that they were chasing an impossible pharmaceutical dream, but in retrospect we believe they were selectively publishing papers of their discoveries to throw others off of their tracks, why else would they have published the discovery of a medically useless derivative?
We came so close, but it always seemed a little too good to be true. There's always the next thing. For now, selling the leaf itself pays the rent.
PM or email for more details about the herb/molecule in question; I think it's probably inappropriate to post the links to my business or even the relevant Wikipedia page here.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-01-27T11:08:39.371Z · LW(p) · GW(p)
OK, that makes sense, thanks!
comment by ideclarecrockerrules · 2010-01-06T08:29:22.882Z · LW(p) · GW(p)
Male, 26; Belgrade, Serbia. Graduate student of software engineering. I've been lurking here for a few months, reading the sequences and new posts through RSS. I probably found the site through reddit.
Self-diagnosed (just now) with impostor syndrome. Learned a lot from reading this site. Now registered an account to facilitate learning (by interaction), and out of desire to contribute back to the community (not likely to happen by insightful posts, so I'll check out the source code).
comment by Sly · 2010-01-03T11:30:35.773Z · LW(p) · GW(p)
- Anthony
- Age 21
- Computer Science Student
- Seattle/Redmond area
I have been lurking on LW and OB since the summer and finally became motivated/bored enough to post. I do not remember exactly how I came to find this site, but it was probably by following a link on some atheist blog or forum.
I became interested in rationality after taking some philosophy classes my freshman year and discovering that I had been wrong about religion. Everything followed from that.
Interests that you probably do not care about: Gaming and game design in particular. I have thus far made a flash game and an iPhone game, both of which are far too difficult for most people.
comment by Matt_Duing · 2009-10-17T04:00:19.886Z · LW(p) · GW(p)
- Name: Matt Duing
- Age: 24
- Location: Pittsburgh, PA
- Education: undergraduate
I've been an Overcoming Bias reader since the beginning, which I learned of from Michael Anissimov's blog. My long-term goal is to do what I can to help mitigate existential risks, and my short-term goals include using rationality to become a more accurate map-drawer and a more effective altruist.
comment by pdf23ds · 2009-09-21T06:32:52.760Z · LW(p) · GW(p)
Eh. Might as well.
- Name: Chris Capel
- Location: (soon to be) Mount Pleasant, TX (hi MrHen!)
- Occupation: Programmer
I've been following Eliezer since the days of CFAI, and was an early donor to SIAI. I struggle with depression, and am thus much less consistently insightful than I wish I were. I'm only 24 and I already feel like I've wasted my life, fallen permanently behind a bunch of the rest of you guys, which kind of screws up my high ambitions. Oh well.
I'd like to see a link explaining the mechanics of the karma system (like how karma relates to posting, for instance) in this post.
Replies from: orthonormal
↑ comment by orthonormal · 2009-12-30T21:16:04.320Z · LW(p) · GW(p)
Welcome, Chris!
I'm only 24 and I already feel like I've wasted my life, fallen permanently behind a bunch of the rest of you guys, which kind of screws up my high ambitions. Oh well.
It's poor form of me to analyze you from outside, but this reminds me of the discussion of impostor syndrome we've been having in another thread. I definitely identify with this kind of internal monologue, and it's helped me to recognize that others suffer it too (and that it's typically a distorted view).
I'd like to see a link explaining the mechanics of the karma system (like how karma relates to posting, for instance) in this post.
I second this, especially now that the karma threshold for posting has been changed.
Replies from: pdf23ds
↑ comment by pdf23ds · 2010-01-03T03:44:49.230Z · LW(p) · GW(p)
I don't think I have a problem with impostor syndrome in particular. I believe I'm appropriately proud of some of my real accomplishments.
Replies from: orthonormal
↑ comment by orthonormal · 2010-01-03T05:00:26.288Z · LW(p) · GW(p)
As well you should be. Great idea, and (reading the comments) well executed!
comment by ajayjetti · 2009-07-23T01:24:38.555Z · LW(p) · GW(p)
Hi
I am Ajay from India. I am 23. I was a highly rebellious person (still am, I think) and nearly flunked out of college, but completed it to become a programmer. As soon as I finished college, I had severe depression because of a woman. I then thought of doing a Masters degree in the US, and applied, but dropped the idea. Then I recaptured a long-gone passion to make music, so I started drumming. I got accepted to Berklee College of Music, but then lost interest in making a career out of it; I have some reasons for that. Then I started reading a lot (in parallel with some programming). I face all the problems that an average guy faces, from social to economic. I graduated from one of the top colleges in India and now don't do my degree any justice. Sometimes I think about the fact that all my colleagues are happily working at companies like Google and Oracle. In a spur to strike a balance, I took the GMAT, applied, and got admitted to some supposedly top MBA schools, but again lost interest in pursuing that. Now I write a bit, read, and teach primary school mathematics at a local school. I love music ranging from Art Tatum to Balamuralikrishna to Ilaiyaraaja to the blues. I have been to the US once, when I was working with Perot Systems Bangalore (I was placed there from campus). I would like to travel more, but I don't see that happening in the near future because of financial constraints and constraints imposed by the governments of this world.
So I always keep searching for interesting "cures" on the internet. One fine day I found Paul Graham's website through some Ajax site. Then I was reading something on Hacker News related to cult followings and such, and a name was mentioned there: Eliezer Yudkowsky (I hope I spelled it right). So I looked the name up on Wikipedia, found his site, and from there got to Less Wrong and Overcoming Bias. For the last two months I have been really obsessed with this blog. I don't know how this will help me "practically", but I am quite happy reading and demystifying my brain on certain things.
One thing: I have noticed that this forum has people who are relatively intellectual. A lot of them seem to be from developed countries and have very little idea of how things work in a country like India. Sitting here, all these things happening in the "developed" world seem incredible to me. I get biased like a lot of Indians who think the US or Europe is a better place; I don't need to say that there are millions of Indians in these regions. Then I think some more. So far, I don't think anybody is doing things very differently when it comes to living a life. Even in this community I don't see us living differently, and I don't know whether we even need to!
We are born, we live and we die; that is the only truth that appeals to me so far. One might think that a different state of mind would give a different opinion about what my brain thinks is "truth", but I doubt that. But I love this site, and if anybody doubts whether this site has practical benefits, I say it is very useful. One thing stands out: people here are open to criticism. Even if we don't get truth from this site, we have so many better routes to choose from! This site seems to be a map, for a timeless travel. I don't give a shit about what others have to say. People can come up with theories about everything, it seems. And I don't like it when people have negative things to say about this forum. I am, and would like to remain, loyal to a forum that serves me well.
I hope something happens so that we are able to live for at least 500 years. I think that would be a good amount of time to get to know a few things (my fantasy).
I have recently started writing at http://ajayjetti.com/
Thanks for reading if you have reached this far!
Replies from: RobinZ
comment by Nick_Tarleton · 2009-04-19T20:02:13.426Z · LW(p) · GW(p)
- Name: Nick Tarleton (!)
- Age: 18
- Location: Pittsburgh, PA / Cary, NC
- Education: freshman, Carnegie Mellon University, undecided field
I discovered OB in early 2007, after my interest in transhumanism led me to Eliezer Yudkowsky's other works. I care about preventing the future from being lost, and think that Eliezer is right about how to do this. I also care plenty about being less wrong for its own sake.
I don't feel like I have much to share in this thread; my beliefs and values are probably pretty typical for Singularitarian Bayesian-wannabes (atheist, consequentialist, MWI, ...), and there's not much more to my origin story (not raised religious or anything like that, although I did have a difficult time figuring out a sane metaethic after being forced to seriously consider the issue for the first time). I do have quite a few ideas stored up to post on when I have the time this summer, though.
I would appreciate contact with any other undergraduates interested in existential risk and/or Friendly AI.
comment by MichaelBone · 2009-04-18T01:50:56.306Z · LW(p) · GW(p)
- Handle: MichaelBone
- Name: Michael Bone
- Location: Toronto, ON
- Age: 25
- Education: Bachelor of Design, currently studying cognitive science and AI.
I find minds to be the most beautiful objects in the known universe; at once natural and artifact, localized and distributed, intuitively clear and epistemically ephemeral, the mind continually beguiles, delights and terrifies me. Of particular personal interest is a mind's propensity and capability for creativity and, separately, wisdom.
Like the majority of artists, I dream of creating beautiful and profound reflections of reality through a human lens. I believe the creation of a mind would be the ultimate expression of this desire.
Like the majority of parents, I dream of my creation surpassing me in all aspects. I believe the design of a mind could be the ultimate expression of this desire.
But a mind is no passive statue or oil painting. The very dynamic nature of the mind that makes it so beautiful also implies grave ethical concerns both for humanity and for the artificial intelligence itself (a subject I am sure you're all familiar with).
It is in ethical decisions that rationality is most needed, and yet least practiced: where one is continually admonished to follow one's “gut” and not one's “brain”. As such, rationality as it pertains to ethics is my primary concern.
As far as contributing goes, I don't imagine that I'm yet expert enough on any particular topic to be of much use, but I have been reading up on the wisdom literature with the intention of tying cognitive mechanisms associated with wisdom to concepts in machine learning, so there is some hope...
comment by orthonormal · 2009-04-17T23:28:29.172Z · LW(p) · GW(p)
- Handle: orthonormal
- Name: Patrick
- Location: Berkeley, CA
- Age: 25
- Occupation: Math PhD Student
- Interest in rationality: Purely epistemic, negligibly instrumental.
- Atheist (see origin story), tentative one-boxer, MWI evangelist, cryocrastinator.
I'm driven towards rationality by three psychological factors: first, that I love to argue about philosophical and related matters; second, that I'm curious about most fields of intellectual endeavor; and third, that it pains me to realize I'm being less than fully honest with myself.
Ye gods, that sounds like a personals ad. Should compensate by adding that I'm rather selfish compared to the standards of altruism espoused here; my typical desire is to observe and comprehend, not necessarily to help.
comment by Pierre-Andre · 2009-04-17T17:59:27.990Z · LW(p) · GW(p)
- Handle: Pierre-Andre
- Name: Pierre-André Noël
- Age: 26
- Gender: Male.
- Location: Québec City, Québec, Canada
- Education: B.Sc. Physics, M.Sc. Physics and currently midway through Ph.D. Physics.
- Research interests: Dynamics, networks, dynamics over networks, statistical mechanics.
- Newcomb: Committed to one-box if facing a decent Omega.
- Prisoner: Cooperate if I judge that the other will.
I discovered OB some months ago (I don't remember how) and read both OB and LW. For now, I am mostly a lurker.
I have been raised as a Catholic Christian and became atheist midway through high school. I think that Science should take a clear position on the topic of religions, for the good of mankind.
I plan to write top-level posts on some of the following topics when I have the time (and the karma) to do so.
- Beyond the fad: the word "emergence" carries > 0 information.
- Telling the truth.
- Universal priors.
- Many Bayesian-related topics.
By the way, does the "be half accessible" request hold for LW too?
Replies from: ciphergoth, Eliezer_Yudkowsky
↑ comment by Paul Crowley (ciphergoth) · 2009-04-18T10:20:29.228Z · LW(p) · GW(p)
For those wondering what this conversation is about:
Contributors: Be Half Accessible, Overcoming Bias, December 21, 2006
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-17T18:15:21.968Z · LW(p) · GW(p)
Re: be half accessible - I'd say no. There are accessible posts aplenty here. But "don't be gratuitously inaccessible" is still good advice.
comment by Nanani · 2009-04-17T04:29:31.197Z · LW(p) · GW(p)
Handle: Nanani
Location: Japan
Age: 25
Gender: Female (not that it matters)
Education: BSc Astrophysics
Occupation: Interpretation/Translation (Mostly English and Japanese, in both directions)
Goal : To Win.
I found this site through Overcoming Bias, and had already been lurking at the latter for years beforehand. When I first came across Overcoming Bias, it was far too difficult for me. I have since become stronger, enough to read most of its archives and become even stronger. I intend to keep this positive cycle active.
I must say that I hardly feel like a newcomer due to those years of lurking in the shadows. Let's see how the light feels.
Replies from: MBlume, SilasBarta, Amanojack
↑ comment by MBlume · 2009-04-17T04:51:16.160Z · LW(p) · GW(p)
Goal : To Win.
But what does a win look like to you?
Replies from: Nanani
↑ comment by Nanani · 2009-04-17T05:06:09.825Z · LW(p) · GW(p)
Not "A" win, but winning in general, Winning at Life if you will.
To me, this means :
Staying true to myself, becoming only what I decide I want to be (which is in turn based on achieving sub-goals)
Achieving my lesser and short-term goals.
Being able to constantly improve myself
Not Dying (I'm only not signed up for cryo because I live in Japan and have trouble with the creation of a suitable policy. Ideally, I'd like to go transhuman.)
Explicit failure scenarios involve becoming a future self that stays still instead of moving forward. If I became a person who was satisfied with the status quo without any desire to expand her horizons, that would be a dramatic failure. Another possibility to avoid is giving in to biology, blindly following urges and, yes, succumbing to biases.
In other words, Winning is Future-Bending to get to be the Me I want to be.
Replies from: Mass_Driver
↑ comment by Mass_Driver · 2010-10-09T05:34:10.777Z · LW(p) · GW(p)
I often get frustrated by definitions like yours, because they are so recursive. Moving through your criteria, you want to be true to yourself (references 'yourself'), achieve your goals (references 'your goals'), improve yourself (references 'yourself'), and not die (implicitly references the continued existence of your self).
Do you have any notion at all of what the self is that you're trying to be true to and improve? Put another way, why would it be a tragedy if you died?
Please don't take this as a personal attack -- I don't know you, and don't dislike you. I just want to learn more about your reasoning.
↑ comment by SilasBarta · 2010-03-11T20:51:15.992Z · LW(p) · GW(p)
So does your name mean "seven two"? (n00b student of Japanese here)
Replies from: Nanani
↑ comment by Amanojack · 2010-03-11T20:47:49.396Z · LW(p) · GW(p)
Hi! I'm a J-E translator in Japan as well. Both directions? Wow.
Replies from: Nanani
↑ comment by Nanani · 2010-03-15T00:38:02.990Z · LW(p) · GW(p)
Oh really? Where are you based, if you don't mind my asking? I'm in Kansai myself.
Yes, both directions, mostly out of necessity. Being in-house, sometimes it isn't possible to have someone on hand with the right native language. Working into my non-native language is hard, but also a great learning experience.
Replies from: Amanojack
↑ comment by Amanojack · 2010-03-16T05:08:28.475Z · LW(p) · GW(p)
I'm in Kansai as well.
I work freelance, so I'd probably never be asked to translate into my non-native language, given that other freelancers could do it much better and more cheaply. Sometimes I wish I had the chance, though, because I'd surely learn a lot.
comment by swestrup · 2009-04-16T20:33:12.207Z · LW(p) · GW(p)
I never knew I had an inbox. Thanks for telling us about that, but I wonder if we might not want to redesign the home page to make some things like that a bit more obvious.
Replies from: arundelo, ChrisHibbert, PhilGoetz
↑ comment by ChrisHibbert · 2009-04-17T06:57:02.421Z · LW(p) · GW(p)
Yes, this was valuable. I've been using my user page and re-displaying each of the comments to find new comments. Now I've added my inbox to my bookmark list of places to check every morning (right after the cartoons).
comment by DragonGod · 2017-06-13T12:43:42.144Z · LW(p) · GW(p)
I'm a 19-year-old Nigerian male. I am strictly heterosexual and an atheist. I am a strong narcissist, and I may have Narcissistic Personality Disorder (though I am cognizant of this vulnerability and work against it, which lowers the probability that I actually suffer from NPD). I am ambitious, and my goal in life is to plant my flag on the sands of time, engrave my emblem in history, and immortalise myself in the memory of humanity. I desire to be the greatest man of the 21st century. I am a transhumanist and intend to live indefinitely, but failing that, being the greatest man of the 21st century would suffice. I fear death.
I'm an insatiably curious person. My interests are broad: rationality, science, mathematics, philosophy, economics, computing, literature.
My hobbies include discourse and debate, writing, reading, anime and manga, strategy games, problem solving and learning new things.
I find intelligence the most attractive quality in a potential partner—ambition and drive form a close second.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2017-06-13T13:34:45.599Z · LW(p) · GW(p)
I desire to be the greatest man of the 21st century.
A good preliminary estimate of the probability of this happening would be one in ten billion, given the number of people who will live during the 21st century.
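For readers wondering where the figure comes from, here is a minimal back-of-envelope sketch, assuming (as the comment implies) roughly ten billion people alive at some point during the 21st century and a uniform prior over who turns out to be "the greatest"; the population figure is an order-of-magnitude assumption, not a precise demographic fact:

```python
# Back-of-envelope sketch of the base rate quoted above (hypothetical figures).
people_this_century = 10e9  # ~10 billion people alive at some point in the
                            # 21st century: an order-of-magnitude assumption,
                            # not a census number.

# With no other evidence, spread the prior uniformly over everyone.
prior = 1 / people_this_century
print(f"P(being the century's greatest person) ~ {prior:.0e}")  # ~1e-10
```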
comment by Lauryn · 2013-03-22T19:23:32.857Z · LW(p) · GW(p)
Hello all. I'm Lauryn, a 15-year-old Christian, and a Bayesian thinker. Now, I'm sure that I'm going to get criticized because I'm young and Christian, but I understand a lot more than you might first think (and a lot less than I'd like to). But let me finish first, yeah? I found Less Wrong over a year ago and just recently felt that I might fit in just enough to begin posting. I'd always considered myself clever (wince) and never really questioned myself or my beliefs, just repeated them back. But then I read Harry Potter and the Methods of Rationality, and was linked over here... And you can guess most of the story from there on. I devoured the sequences in less than a month, and started reading gobs and gobs of books by people I'd never heard of -- but once I did, I realized that they were everywhere: Freud, Feynman, Orwell... And here I am, a good year later, a beginning rationalist. Before I get attacked, I'd like to say that I have seriously questioned my religion (as is implied above), and still came back to it. I do have ways that I believe Christianity could be disproved (I already posted some here), and I have seen quite a bit of evidence for evolution. All right, NOW you may attack me.
Replies from: CCC
comment by [deleted] · 2012-07-03T13:48:43.055Z · LW(p) · GW(p)
Hello!
I'm an 18-year-old Irish high school student trying to decide what to do after I leave school. I want to create as much happiness and prevent as much suffering as I can, but I'm unsure how to do this. I'm mostly here because I think reducing x-risk may be a good idea; to be honest, there are other things that seem better to me, but I hope to talk to people here about this!
Some of you may be members of 80000hours, I imagine, so here's me on 80k: http://80000hours.org/members/ruairi-donnelly
Replies from: Laoch
↑ comment by Laoch · 2012-07-03T13:52:07.713Z · LW(p) · GW(p)
Hey welcome to lesswrong.com fellow Irish person.
Replies from: None
↑ comment by [deleted] · 2012-07-03T22:34:14.851Z · LW(p) · GW(p)
Your name means "hero" in Irish! I have actually used the same username as you, on YouTube for example. Where do you live, if you don't mind me asking? :)
Replies from: Laoch, None
↑ comment by Laoch · 2012-07-04T08:33:16.056Z · LW(p) · GW(p)
I live in D4, actually. My Irish has faded drastically, but then again it never really was that good; tá brón orm (I'm sorry).
Replies from: None
↑ comment by [deleted] · 2012-07-08T00:46:59.462Z · LW(p) · GW(p)
Sure, that's grand. Personally I like speaking it and go to an Irish school, but I think it's pretty shocking that the government spends half a billion a year trying to keep it alive (and it's not working), and apparently it takes up 15% of one's schooling time.
I live in Bray, Wicklow :) Maybe we can organize a meetup sometime :)
comment by MetaLogicalMind · 2010-04-19T12:14:27.240Z · LW(p) · GW(p)
Greetings. Since this is more of a "blog" than a forum, I have hesitated to join the conversation. But since this is an open invitation I figured I would introduce myself.
I've been "lurking" on this site for over a year. I am a young professional working in the Philadelphia area as a computer programmer, though my background is in Engineering. I also consider myself an amateur philosopher. I stumbled upon the Overcoming Bias blog sometime in 2008 and found many of the posts to be thought-provoking and insightful. I have both OvercomingBias and LessWrong on my RSS feed.
I am interested in Artificial Intelligence, Ontology, Epistemology, General Semantics, Cognition, and many other aspects of rationalism. I am particularly interested in following the work of the Singularity Institute, and I wish them success.
I also occasionally participate in the "Thothica" online Philosophy community in Second Life. If there is a LessWrong or Singularity Institute contingent in Second Life, I would love to hear about it.
Cheers. :-)
comment by avalot · 2009-07-20T16:32:21.270Z · LW(p) · GW(p)
Hello.
I'm Antoine Valot, 35 years old, Information Architect and Business Analyst, a Frenchman living in Colorado, USA. I've been lurking on LW for about a month, and I like what I see, with some reservations.
I'm definitely an atheist, currently undecided as to how anti-theist I should be (seems the logical choice, but the antisocial aspects suggest that some level of hypocrisy might make me a more effective rational agent?)
I am nonetheless very interested in some of the philosophical findings of Buddhism (non-duality being my pet idea). I think there are some very actionable and useful tools in Buddhism at the juncture of rationality and humanity: how to not believe in Santa, but still fulfill non-rational human needs and aspirations. Someone's going to have to really work on convincing me that "utility" can suffice, when Buddhist concepts of "happiness" seem to fit the bill better for humans. "Utility" seems too much like pleasure (unreliable, external, variable), as opposed to happiness (maintainable, internal, constant).
Anyway, I'm excited to be here, and looking forward to learning a lot and possibly contributing something of value.
A special shout-out to Alicorn: I read your post on male bias, and I dig, sister. I'll try to not make matters worse, and look for ways to make them better.
comment by Jonii · 2009-07-20T09:58:24.870Z · LW(p) · GW(p)
Hello.
My name is Joni, I'm 21 years old, I study mathematics at Helsinki University, Finland, I'm male...
So, yeah. The reason behind my interest in rationality is probably something that is likely to earn me an ADHD diagnosis in the near future. Since I've been mentally impaired to some weird degree, I've tried to find a Way to overcome that. My earlier efforts weren't all that effective, but now that I've found a site that gathers the results of systematic study in this field, I expect a lot.
My school grades were about medium throughout my life. I enjoy a board game called "go" a lot, and I have used it to find and eliminate some biases and cognitive mistakes (I'm a Finnish shodan). Other than math, I like psychology, and I find transhumanism a very interesting topic; I have many times thought that I could make my own super-AI. I like computers, and I know some programming languages superficially, but I haven't had any larger projects in any real language (some 500-line scripts occasionally).
I found this site through the IRC channel of the Finnish transhumanist movement. The whole notion of "refining the art of human rationality" was like a dream come true. I try to avoid commenting so as not to drag down the quality of discussion, so for the months to come, I'll mostly be doing my homework to gather some basic knowledge.
Replies from: cousin_it
comment by Ttochpej · 2009-05-29T13:15:01.591Z · LW(p) · GW(p)
Hi, I'm James, 24, male, an Information Technology student in the last year of my degree, living in Central Queensland, Australia. I have been trying to answer big questions like "What is the meaning of life?" and "What is intelligence?", and to come up with a Grand Theory Of Everything, for as long as I can remember. I have written a lot on my theories and hypotheses, but everything I have ever written is saved on my computer and I have never shared any of my ideas with anyone; it has just been a private hobby of mine. I'm hoping I'll be able to learn some more by reading the posts on Less Wrong and maybe eventually post some of my own ideas.
I have read on here that a few people are signed up for cryonics. I think cryonics sounds interesting and I might sign up for it one day, but I think more of myself living on through knowledge. By that I mean: if you say a person is made up of their knowledge and experience and not their body, then if I can write my knowledge and experiences down, and once I die people read and learn that knowledge and about my experiences, I see it as a Ship of Theseus paradox; my knowledge and experience still exist, just in a different body.
comment by Alexandros · 2009-04-24T21:21:15.016Z · LW(p) · GW(p)
- Handle: Alexandros
- Name: Alexandros Marinos
- Location: Guildford, UK
- Age: 27
- Gender: Male
- Education/Occupation: Currently in 3rd year of PhD in Computing
- Links: Blog, FriendFeed
Hi all,
I spent the first 25 years of my life in a Christian quasi-fundamentalist environment. As time went by I struggled increasingly to reach a consistent mindset within the Christian belief-constraints. Over time, I kept removing elements of the belief system while nominally retaining the fundamentals, even if simply as shells. At some point, I lost someone deeply important to me due to not providing her definition of a spiritual relationship, a situation similar to MBlume's even if predating my explicit conversion to atheism. This led me to distance myself from the Christian circles, as I considered being truly accepted without effectively leading a double life an impossibility. About a year later, discovering Eliezer's writings provided me with the mature articulation of many thoughts that had existed in embryonic, unexpressed form in my mind, and added many others. In this sense it provided the coup de grâce to my theistic beliefs.
In parallel with the above, I am a programmer who has not seriously written code in the last three years. This is because I have hit on a problem whose solution I need to thoroughly formulate before resuming my efforts. The essence of the problem is that whenever I code, my intuition is to take soft-coding to the extreme. That is, I see each algorithm, process, or program as a compilation of items of source knowledge and try to factor each item out. Taken to its logical conclusion, this leads to something that could be called knowledge-oriented programming or some such. I did not consider this related to artificial intelligence, but I am now not entirely certain.
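(A minimal illustrative sketch of this intuition, not Alexandros's actual system: "soft-coding" in this spirit means factoring each item of embedded knowledge out of the logic and into declarative data, so the remaining code is a generic interpreter. The function names and shipping figures below are hypothetical, chosen only for illustration.)

```python
# Hard-coded version: the domain knowledge is baked into the logic.
def shipping_cost_hardcoded(weight_kg: float) -> float:
    if weight_kg <= 1:
        return 5.0
    elif weight_kg <= 5:
        return 9.0
    return 20.0

# Soft-coded version: the same knowledge, factored out as data.
SHIPPING_BANDS = [          # (max weight in kg, cost) -- illustrative numbers
    (1, 5.0),
    (5, 9.0),
    (float("inf"), 20.0),
]

def shipping_cost(weight_kg: float, bands=SHIPPING_BANDS) -> float:
    # Generic interpreter: pick the first band that covers the weight.
    return next(cost for max_w, cost in bands if weight_kg <= max_w)

assert shipping_cost(0.5) == shipping_cost_hardcoded(0.5) == 5.0
assert shipping_cost(3.0) == shipping_cost_hardcoded(3.0) == 9.0
```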
Additionally, I am involved in Digital Ecosystem research, which I consider to be the effort to make control a property of networks rather than of individual agents in the network. An extension of this field is my interest in social computing and the goal of making an unmoderated online community that allows its members freedom from coercion while at the same time being able to collectively control itself. However, among the three, if I had to state only one goal, it would certainly be the effort to achieve "extreme soft coding".
I have recently grown increasingly unsatisfied with the contents of my feed reader. I find in this community a higher satisfaction-to-noise ratio than even Hacker News, and I intend to try to participate as much as I can, although I don't expect to make any major contributions any time soon.
comment by Iguanodontist · 2009-04-21T18:16:50.701Z · LW(p) · GW(p)
Howdy.
My name's Schuyler. I'm a 22-year-old first-year law student in NYC, with my undergraduate degree in Economics and Philosophy. I spend my free time as a volunteer fireman/EMT out on Long Island.
Stumbled over to OB in the beginning of September, as I fleshed out my Google Reader in preparation for the upcoming year of law school (gotta kill time in class somehow). The Babyeaters got me hooked, and when LessWrong opened up I started lurking here, as well. Never posted or commented on either site, except to express my appreciation for the Babyeaters series. Always been kind of intimidated, to be honest.
I suppose I became interested in rationality when I started taking my Econ theory courses. The first assumption of economics is that people are rational -- and in my class, as well as all the others I've TA'ed for, the students invariably respond "No, they aren't." Immediately. So when I branched out into my second major and started reading Friedman and Nozick, I tried both to understand why people aren't rational and to bring myself closer to that ideal.
I don't think I've done such a good job, all told. But I am grateful to the contributors on this website and over at OB for helping so frequently.
comment by derekz · 2009-04-21T15:01:24.251Z · LW(p) · GW(p)
Hello all. I don't think I identify myself as a "rationalist" exactly -- I think of rationality more as a mode of thought (for example, when singing or playing a musical instrument, that is a different mode of thought, and there are many different modes of thought that are natural and appropriate for us human animals). It is a very useful mode of thought, though, and worth cultivating. It does strike me that the goals targeted by "Instrumental Rationality" are only weakly related to what I would consider "rationality" and for most people things like focus, confidence, and other similar skills far surpass things like Bayesian update for the practical achievement of goals. I also fear that our poor ability to gauge priors very often makes human-Bayesianism provide more of the appearance of rationality than actual improvement in tangible success in day-to-day reasoning.
Still, there's no denying that epistemic and instrumental rationality drive much of what we call "progress" for humanity and the more skilled we are in their use, the better. I would like to improve my own world-modeling skills.
I am also very interested in a particular research program that is not presently an acceptable topic of conversation. Since that program has no active discussion forum anywhere else (odd given how important many people here think it to be), I am hopeful that in time it will become an active topic -- as "rationality incarnate" if nothing else.
I thank all of the authors here for providing interesting material and hope to contribute myself, at least a little.
Oh, I'm a 45-year-old male software designer and researcher working for a large computer security company.
comment by JamesCole · 2009-04-21T07:27:36.770Z · LW(p) · GW(p)
James Cole
31, Brisbane Australia
Bachelor of info tech. Worked for a few years in IT research, now undertaking a PhD on what information is.
I've always been interested in 'flawed thinking' and how to avoid it, and I've always thought flawed thinking was a great contributor to so many of the world's ills. Most of my life I hadn't come across many others with similar views, so it has been great to come across this community.
I came across this through Overcoming Bias, which I think I originally found via a link on reddit.
comment by james_edwards · 2009-04-19T23:29:37.494Z · LW(p) · GW(p)
- Name: James Edwards
- Handle: james_edwards
- Location: Auckland, New Zealand
- Education: BA (Philosophy, plus some Statistics and Chinese); will finish my law degree within a few months.
- Occupation: Tutor for a stage one (freshman) Critical Thinking course - teaching old-school rationalism, focused on diagnosing and preventing fallacious arguments.
Came upon Eliezer's "The Simple Truth" years ago, then happened upon a link to OB during a phase of reading econblogs. As a teenager I was appalled that many people believed the unsupported claims of homeopathy and other less-than-evidence-based medical treatments.
I worry that my limited mathematical education is a barrier to becoming a better rationalist, and intend to learn more. A bigger barrier still is akrasia - I struggle to follow through on my well-intentioned plans.
Rationalist lawyers seem to be rare. There may be good reasons for this which I have failed to consider. For the time being, I'm planning to write my dissertation on whether the current law makes cryonics viable for New Zealanders.
comment by Kaj_Sotala · 2009-04-19T19:20:37.966Z · LW(p) · GW(p)
- Name: Kaj Sotala
- Nick: Xuenay
- Location: Helsinki, Finland
- Sex: Male
- Age: 22
- Education: Working on a Bachelor's degree in Cognitive Science (University of Helsinki)
- Blog, website (hasn't been updated in a while)
- Positions of note: Board member and spokesman, the Pirate Party of Finland (a political party seeking to strengthen privacy and freedom of speech laws, legalize non-commercial filesharing and drastically cut the duration of commercial copyright - 5-10 years is the official suggestion)
- Interests: Far too many, ranging from role-playing and gender politics to economics and AI. I have written one book about RPGs, am currently finishing one on emerging technologies, and should start on a third (not sure if I can disclose the topic yet) soon. Unfortunately for the readership of this site, they're all in Finnish.
comment by Chase_Johnson · 2009-04-18T00:52:01.667Z · LW(p) · GW(p)
- Handle: Chase_Johnson
- Name: same
- Location: Richardson, TX (near Dallas)
- Age: 22
- Occupation: Software Developer
- Education: BS Electrical Engineering '09
- Things I Fiddle With: Electronics, Software, Cars, Motorcycles
- Things I Read: SF, math, physics, lifehacking, technical manuals
I read LW and OB in part as procrastination. It's interesting stuff. I don't spend a lot of time implementing the LW/OB rationality techniques right now, and I am not sure I ever will. What drew me in in the first place was the discussion of AIs. However, I am more interested in the implementation of AGI than in the development of rationality that seems to dominate at LW. Introspection can be interesting and useful, but I have a lot more fun building and tinkering.
Within my domains of specific knowledge, software and electrical engineering, I am interested in creating systems with novel uses that were impossible five or ten years ago, e.g., I am trying to get involved with the nascent GandhiCam project. Ambient intelligence, autonomous systems, things of that nature. I see a world of data all around us almost entirely unprobed and unanalyzed, and I want to collect that data. I am an inveterate generalist and interested in almost everything.
I suppose within the jargon of OB/LW, I would be considered an instrumental rationalist. I have little interest in anything of a purely theoretical nature; I want to see something happen in reality. As a result, I pursue rationality with the intent of understanding the world and making things to expand our human abilities.
Currently, LW is losing interest for me. This is probably not a problem with LW, just a mismatch of interests. I probably won't participate much, but I do hope to see the cause succeed. However, I think I'd be more happy with the entire world being somewhat less irrational, rather than a few of us being extremely more rational.
EDIT: I suck at formatting
comment by dreeves · 2009-04-17T22:24:45.880Z · LW(p) · GW(p)
I'm Daniel Reeves (not that other Daniel Reeves who I've seen comment on OvercomingBias, although conveniently I think every post by him I've seen I've agreed with!), a research scientist at Yahoo in New York City. I work on game theory and mechanism design though I'm a computer scientist by training. At the moment I'm particularly interested in anti-akrasia tools and techniques.
PS: You pointed out a handy inbox link -- lesswrong.com/message/inbox -- but I can't seem to find that anywhere else on the site.
comment by MorgannaLeFey · 2009-04-17T14:39:38.062Z · LW(p) · GW(p)
- Handle: MorgannaLeFey
- Name: Siobhan
- Location: Central Vermont (via Ohio, Indiana, Michigan, Iowa, Minnesota, and Alaska)
- Age: At this point, I'm 43. I expect that to change.
- Occupation: database and web applications developer
- Education: I studied theatre arts and communications (no degree). Eleven years later I studied psychology, women's studies, and community development (primarily online and non-academic). Again, no degree.
When I registered I didn't consider that my handle might not be the most apt for this community; it is simply who I have been online for over fifteen years (though I have been participating in online communities since 1983). The original reasons for my handle have faded, but my attachment to the name has remained. So please, don't read more into my handle than my having a preference for the way it sounds.
I was pushed away from mathematics and the sciences from an early age by the limitations of our public school system, though I had the ability to excel in both. I was not encouraged to develop the habits of intellectual discipline that would have carried me beyond those limitations. I was content to glide through my classes, doing only the minimum necessary to maintain my A average without bothering to push much beyond that. My social life, outside activities, and connections were more important to me. This isn't something I have any regrets over; I bring it up to somewhat explain my intellectual inertia and lack of familiarity with certain standard concepts found here.
The immediate circumstance that led me to LW is that a close friend found this site and forwarded the link to my husband. My husband forwarded the link to me. However, the path that led me here started much earlier. I was of a skeptical nature from an early age, though I have only come to realize this in the course of years of self-examination. I go through periods of studying things, then leaving them behind in favor of other, less stringent pursuits. Yet, as I age my brain gets mushy more easily, so I've been looking for ways to stave that off. In researching the issue, I found that the effects of aging on the brain can be mitigated through intellectual exercise. Not much of a surprise, really. So that has me poking around for ways to exercise my brain.
There is, of course, so much more to the story of how I got here. I could fill pages that I suspect most would find uninteresting. So I'll stop here.
Replies from: CronoDAS
↑ comment by CronoDAS · 2009-04-17T17:37:57.801Z · LW(p) · GW(p)
I found that the effects of aging on the brain can be mitigated through intellectual exercise. Not much of a surprise, really. So that has me poking around for ways to exercise my brain.
I recommend video games, or Magic: the Gathering.
Replies from: SoullessAutomaton
↑ comment by SoullessAutomaton · 2009-04-17T18:01:10.895Z · LW(p) · GW(p)
Good old-fashioned learning also works, and I believe there is some documented evidence for things like crossword puzzles helping as well.
The main thing is activities that are neither passive nor physical. Likely, mind puzzles are better than non-reflex-based videogames are better than most fiction are better than watching television or some rot like that.
comment by blogospheroid · 2009-04-17T09:54:51.490Z · LW(p) · GW(p)
- Handle: blogospheroid (I'm fat)
- Name: Prakash Chandrashekar
- Location: Bangalore, India
- Age: 29
- Education: Btech (BS) in civil engineering, Post Grad diploma in Management
- Occupation: Functional consultant for Enterprise software implementation
- Hobby: Browsing the net. I lurk a lot, comment very little.
- Political beliefs: Georgist libertarian, enthusiastic about dynamic geography
- Religious/ethical beliefs: atheist about an omnipotent god, agnostic about simulation controllers/watchers, believer in karma and reincarnation, searching for a true dharma in this weird age; greater intelligence is one of the few true ways of finding win-win situations
- Philosophical influences: Vivekananda, Ayn Rand, Nietzsche, Pirsig, Eliyahu Goldratt, the economist/technophile cluster, Yudkowsky
- Short term goal: lose fat, keep job
- Medium term goal: conquer fear (of failure, mostly), achieve financial independence
- Long term goal: expand mental capacity and live a full life
comment by prase · 2009-04-17T09:37:47.772Z · LW(p) · GW(p)
- handle: prase
- name: Hynek Bíla
- gender/sex: male
- age: 27
- location: Prague
- occupation: theoretical physicist
I am not sure whether I am a newcomer, since I have read OB regularly for more than a year and comment occasionally. I found OB almost randomly, via a link from another website.
comment by CronoDAS · 2009-04-17T07:06:54.681Z · LW(p) · GW(p)
- Handles: CronoDAS, Ronfar, Doug S.
- Facebook profile
- Education: BS in Computer Engineering, minor in Mathematics. Oh, and lots of web-surfing.
- Currently job-free, by choice
- Politics: Liberal, with some libertarian sympathies
- Meta-ethics: Desire Utilitarianism
- Former tournament Magic player. Gave it up because he wanted to save money.
- Takes antidepressant medication. Wishes that he never came into existence, but is not an immediate danger to himself.
- Has read basically the entire Overcoming Bias blog
- Is a big fan of the TV Tropes Wiki.
comment by MBlume · 2009-04-17T02:28:35.801Z · LW(p) · GW(p)
- Handle: MBlume
- Name: Michael Blume
- Location: Santa Barbara, California
- Age: 23
- Gender: Male
- Education: Physics BS, pursuing PhD
- External: UCSB Physics, Livejournal, Twitter, OKCupid (any advice here would be appreciated, incidentally), Facebook
I'm Mike, I'm a grad student, research assistant, and teacher's aide at UC Santa Barbara.
I got here by way of OB (as many of us did), got there by way of a Reddit link to, if memory serves, Explainers Shoot High. Aim Low!, though my memory is pretty hazy, since I wound up reading a lot of posts very quickly. I got to Reddit by way of XKCD, and got to XKCD by way of my roommate sending me an amusing comic about string theory.
Let's see. I'm car-free, and a lifestyle biker. I love to ride, and enjoy the self-sufficiency of getting everywhere by my own muscle.
I'm currently a pescatarian, and haven't eaten any land-critters since I was 11. I continue to do this because I remain uncertain about the nature of consciousness, and thus am not certain to what extent animals suffer or experience morally significant pain. I suspect that morally significant consciousness is limited to the primates, but having not yet been fully convinced, I accept the (relatively minor) inconvenience of avoiding meat. If anyone would like to help me resolve this uncertainty, I'd certainly enjoy the conversation.
I've been an atheist for about a year now -- Eliezer's OB writing, along with some other writings I found through Reddit, pushed me in that direction throughout the end of 2007, but I did not accept the matter as fully determined until February 2008. This was not without personal consequences.
I have been rather addicted lately to the music of Tim Minchin -- I'd recommend him to anyone here.
I'm currently working in high-energy particle physics under the direction of professor Jeff Richman and in collaboration with the good folks at the Large Hadron Collider at CERN. I'm hoping, in this way, to gain some first-hand experience with how science progresses, and then spend the bulk of my life trying to explain this to the world -- trying to convey a gut-level understanding of what it is scientists do, and why they can be trusted when they tell you how old a rock is, or what's likely to happen if you keep putting the same amount of carbon in the atmosphere every day for the next 50 years.
That's the plan, anyway.
Replies from: Alicorn
↑ comment by Alicorn · 2009-04-18T01:27:58.207Z · LW(p) · GW(p)
I'm a pescetarian too, and have been since I was seventeen. I have recently stopped eating octopus and squid as well as things I share air with. There are a lot of reasons to reduce or eliminate meat consumption, not just ethical concerns about animal consciousness - as long as you can enjoy a good quality of life and level of health without eating animals, it's easy to find adequate reason not to. (It's more efficient, more environmentally sound, less expensive, healthier, and, yes, provides a nice ethical nervousness buffer zone to make sure you're in the clear as far as the moral significance of animals is concerned.)
Replies from: MichaelBishop
↑ comment by Mike Bishop (MichaelBishop) · 2009-04-19T17:58:43.391Z · LW(p) · GW(p)
I'm also mostly vegetarian. I eat some fish.
comment by jasonmcdowell · 2009-04-17T01:53:31.611Z · LW(p) · GW(p)
- Handle: jasonmcdowell
- Name: Jason McDowell
- Location: Santa Cruz, CA
- Age: 27
- Gender: Male
- Education: Currently 1st year Ph.D student in Applied Optics
I found Less Wrong through http://transhumangoodness.blogspot.com/ (I don't remember what link brought me there, though). I read the Extropians list (quietly) for a few years starting in maybe 2002. I've been reading assorted transhumanist sites ever since.
I'm always happy when I find new sources of dense, high quality thinking on the internet. The TED talks have been one such treasure trove for instance. I really like Eliezer's writing and think Less Wrong will be a great source.
For the last few years I've been paying the most attention to politics. I think now is a good time for me to reengage with transhumanism. I have very rarely posted or commented in the past, preferring to just read and learn instead. Maybe with Less Wrong I'll have a reason to write. Hi!
comment by cousin_it · 2009-04-16T22:06:53.889Z · LW(p) · GW(p)
- Handle: cousin_it
- Name: Vladimir Slepnev
- Location: Moscow, Russia
- Age: 26
- Education: Masters in Math
- Occupation: Programmer
- Work, open source, music.
I have no strong desire to be a rationalist, just interested in the talk here.
Replies from: PhilGoetz
comment by byrnema · 2009-04-16T20:55:17.443Z · LW(p) · GW(p)
- Handle: byrnema
My Rationalist Origin Story
In this context, I think of "rational" as being open to questioning your assumptions. (I adore Simulacra's first step described as "separation of ideas from the self".) I agree with the general view here that being rational is a result of cognitive dissonance -- if your map doesn't fit the landscape then you're motivated to find a new map. The amount of cognitive dissonance throughout my life has been really extraordinary. I suspect that this is true for most people here.
I think I am rational enough, in the sense of being open to new ideas, as I have somewhat fewer assumptions than I need to get by comfortably already. As a small kid scaring myself with extreme philosophical views, I happily observed that afterwards I could just go downstairs and have a turkey sandwich.
I don't feel very well adapted to the real world. I often feel like everyone got a rule book and I didn't. (I recall once in elementary school some kids said that when God was passing out brains, I was holding the door open. I had a reputation for asking stupid (obvious) questions and, bewilderingly, I was holding the door open.) So from my point of view, LW is an amazing social micro-niche where it is OK to ask about the rulebook. In fact, you guys are analyzing the rulebook.
That’s the over-arching (hopeful) goal for being here. On a local level, I really enjoy debating and learning about stuff. Regarding learning, I don’t think we are pooling our resources in the most efficient way to get to the bottom of things. I think it would be cool to develop some kind of group strategy to effectively answer questions that should have answers:
“Given a controversial question in which there are good and bad arguments on both sides, how do you determine the answer when you’re not yourself an expert in the subject?”
Replies from: PhilGoetz, ciphergoth
↑ comment by PhilGoetz · 2009-04-16T23:23:35.363Z · LW(p) · GW(p)
(I recall once in elementary school some kids said that when God was passing out brains, I was holding the door open. I had a reputation for asking stupid (obvious) questions and, bewilderingly, I was holding the door open.)
I've noticed time and time again that, if you ask a teacher a lot of questions, most people will assume you're incompetent.
Replies from: pjeby, Douglas_Knight
↑ comment by pjeby · 2009-04-16T23:31:01.129Z · LW(p) · GW(p)
I've noticed time and time again that, if you ask a teacher a lot of questions, most people will assume you're incompetent.
Interesting -- my experience was that they (the class, but sometimes also the teacher) found me annoying, instead.
During my (brief) venture in college, taking a beginning calculus class, I tended to run way behind the teacher, trying to figure out why he'd done some particular step, and would finally give in and ask about it.
Invariably, he would glance at that step, and go, "Oh, you're right. That's wrong, I should have done..." And trailing off, he would erase nearly half the blackboard, back to the place where I was, and start over from there. About half the class would then glare at me, for having made them have to get rid of all the notes they just took.
Apparently, they were copying everything down whether they understood it or not, whereas I was only writing down what I could actually do. Craziest damn thing I ever saw. (But then, I didn't spend very many years in school, either before or after that point.)
↑ comment by Douglas_Knight · 2009-04-17T03:10:23.202Z · LW(p) · GW(p)
Really? I'd expect that (1) most teachers would like lots of questions; (2) the teacher's opinion would be visible to the class; and (3) the class would trust the opinion of the teacher.
Where am I going wrong?
Replies from: Larks
↑ comment by Larks · 2009-08-11T22:48:02.373Z · LW(p) · GW(p)
1) is true for good teachers, and increasingly as one progresses through education, but not always. My physics teacher imposed a 5 question/day limit on me, albeit somewhat in jest.
2) is probably true, but may harm the student before they're saved by college / banding by ability, as 3) becomes increasingly true with time.
↑ comment by Paul Crowley (ciphergoth) · 2009-04-16T22:57:45.055Z · LW(p) · GW(p)
Your last question is of towering importance.
I'd slightly rephrase that as "...in which both sides have arguments that a non-expert might be convinced by..." - there's no barrier to such a problem arising even where there are no inherently good arguments at all on one side, such as the MMR-autism scare.
Replies from: byrnema
comment by hamflask · 2009-04-16T20:15:29.587Z · LW(p) · GW(p)
- Handle: hamflask
- Name: Eric Hofreiter
- Location: Champaign/Urbana, IL
- Age: 19
- Education: 2nd year in electrical engineering
I started with OB after being linked to Eliezer's series on quantum physics, and I've been absorbed with OB and now LW ever since. I'm more of a lurker, and I've never really commented at OB for fear that my input would be deemed useless. Perhaps I'll begin commenting here on LW now that we have a voting system.
comment by badger · 2009-04-16T19:08:22.179Z · LW(p) · GW(p)
- Handle: badger
- Age: 22
- Location: Tempe, AZ, but soon to be Champaign, IL
- Education: BS in math, soon to be an economics grad student
- Occupation: Fiscal analyst for the state legislature
I'm interested in rationality on a personal level and its relevance in economics. I have lurked at OB since its beginning, and am rather surprised I've been active on this site. I have a tendency to over-analyse social situations, even over the internet, which resulted in lurking. I've been very impressed by the cooperative nature of this community, its openness to beginners, and the prominent lack of a bystander effect here.
Other interests include: programming (some experience in Java, Scala, and Scheme), political philosophy (left-libertarian, somewhere between Kevin Carson and Will Wilkinson), ethics, science fiction, math, linguistics, conlangs (experience with Quenya, Esperanto, and lojban), and more of the typical nerd interests.
My origin story has more detail on how I ended up here.
comment by mitechka · 2009-04-16T18:58:31.623Z · LW(p) · GW(p)
- Handle: mitechka
- Name: Dmitriy Kropivnitskiy
- Location: Brooklyn, NY
- Age: 35
- Education: 2 years of college (major chemistry)
- Occupation: Systems Administrator
About a year ago, I found Eliezer's article about cognitive biases and from there googled my way to OB. My interest in rationality lies primarily in learning to make better decisions and in better understanding "how the world works". So far I am mostly reading OB and LW, trying to see if the topics I would like to write about have already been covered or are actually worth writing about.
Replies from: MBlume
↑ comment by MBlume · 2009-04-16T19:01:56.009Z · LW(p) · GW(p)
So far I am mostly reading OB and LW trying to see if topics I would like to write about have already been covered or actually are worth writing about.
If you'd like to tell us about them, we might be able to give you an idea of what's already been said.
Replies from: mitechka
↑ comment by mitechka · 2009-04-17T22:38:19.539Z · LW(p) · GW(p)
I guess this is what comes of writing in a hurry. The way it came out, I am an arrogant ass who only reads what others have to say to see if it relates to something he himself wants to say. I found most articles on both OB and LW to be enlightening and some to be a major revelation. The way I view the world has changed significantly in the past 6 months, and in large part this was due to reading OB/LW and trying to read up on philosophy, math, physics, etc. to better understand what people on OB and LW are saying. The topics I am contemplating writing about are the concept of "deserving" in relation to utilitarianism, and everyday Prisoner's Dilemma-type situations and how they differ from the classical definition of the PD.
Edited: and I can definitely manage better grammar
Replies from: MBlume, steven0461
↑ comment by steven0461 · 2009-04-17T22:45:42.276Z · LW(p) · GW(p)
See True PD, though you may have other differences in mind. Desert in utilitarianism hasn't been discussed as far as I remember. And FWIW, great-grandparent did not come off as arrogant to me.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2009-04-18T10:34:30.028Z · LW(p) · GW(p)
I am keen to see a discussion of desert in utilitarianism/consequentialism.
comment by David_Rotor · 2009-04-16T18:42:31.909Z · LW(p) · GW(p)
* Handle: David_Rotor
* Name: David
* Location: Ottawa, Canada
* Age: 44
* Gender: Male
* Education: MSc
* Occupation: Procurement, Business Development
I started following this site when it was introduced on Overcoming Bias. I came across OB while doing some refresher work on statistical analysis -- more particularly, on how I could help some clients who were struggling to use statistical analysis to make better decisions, or in other words were ignoring data and going with a gut-feel bias. I stuck around because I found the conversations interesting, though I find it more difficult to make them useful.
On the religious front ... atheist from about the same time I figured out Santa Claus and the Easter Bunny.
Replies from: MBlume↑ comment by MBlume · 2009-04-17T03:18:15.717Z · LW(p) · GW(p)
atheist from about the same time I figured out Santa Claus and the Easter Bunny
Oddly enough, I figured out all three at different times. The Easter Bunny was an obvious absurdity from the start, but I told myself stories about how SC might exist for years.
comment by curious · 2009-04-16T18:01:03.371Z · LW(p) · GW(p)
- handle: curious
- location: NY
- age: 27
- education: BA, biology
- occupation: journalist
OB reader/lurker. not much of a commenter -- i often don't get around to reading posts thoroughly until they're a bit old (at least in 'blog time') and the discussion has moved on...
am i an "aspiring rationalist"? maybe. i want to be alert to irrational behavior/decisions in my life. i'm not yet ready to commit that i will consistently abandon those behaviors and decisions, but i at least want to acknowledge when they're not rationally defensible.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-04-16T20:14:25.723Z · LW(p) · GW(p)
Enough of us read the comments feed that you can often see new discussions spark in old posts; give it a go.
comment by thomblake · 2009-04-16T14:05:17.165Z · LW(p) · GW(p)
- Handle: thomblake
- Name: Thom Blake
- Location: New Haven, CT (USA)
- Age: 30
- Occupation: Programmer, doctoral candidate in computer ethics
Found on the web at http://thomblake.com and http://thomblake.mp. Twitter: @thomblake
My dissertation is on the philosophical foundations of building ethical robots. It's not quite done.
I'm trained as a philosopher, with special emphasis on virtue ethics/ ethical individualism and computer ethics. I've often characterized myself as a Romantic and an irrationalist. Nietzsche and Emerson FTW.
ETA: link to my origin story and closet survey
comment by gjm · 2009-04-16T14:04:06.953Z · LW(p) · GW(p)
OK, let's continue with the introductions.
- Handle: gjm (gjm11 on the wiki)
- Name: Gareth McCaughan
- Location: Cambridge, UK
- Age: 38
- Occupation: mathematician (in industry), though I've done a fair bit of programming in my time too.
Lifelong rationalist, at least in principle, though I somehow managed to remain (actively) religious for many years. Political leftie (especially by US standards). Interests include: everything. "Found" LW by being a regular at OB since long before LW was mooted. Gently skeptical about cryonics, imminent technological singularities, and suchlike.
comment by moridinamael · 2014-09-18T17:50:33.227Z · LW(p) · GW(p)
snip
comment by GabrielC · 2014-09-18T16:03:52.125Z · LW(p) · GW(p)
Hi! I'm Gabriel, and I'm a 20-year-old medical student in London. I (like many of you, maybe) found my way here through HPMOR. Having spent the last few years of university mentally stagnating due to the nature of my studies, I found this site and its resources a breath of fresh air. I'm currently working my way through the sequences, where one comment led me to this thread - apologies if commenting on old posts is frowned upon.
I was born to an educated Muslim family, and until recently I had been blindly following the beaten track, although with little interest in the religion itself. It is only now that I have begun to think about what I know, and how I know it, that I am forcing myself to adopt an objective and skeptical standpoint. Over the next 12 months or so, I plan to fully examine the texts and writings of both Islam and its opponents, and aim to come to an unbiased, rational conclusion. It is my hope that I can update my map to reflect the true territory, and I must thank Mr. Yudkowsky for being a catalyst for my intellectual re-awakening.
Although perhaps I will get some flak when I try to give him some constructive criticism: I cannot be the only person in the position I find myself in, wanting to examine my religion and come to a true conclusion. It is abundantly helpful to read arguments for both sides which are logical and well reasoned... but most of all, courteous. There are parties far more guilty than Mr. Yudkowsky, but it really is horrible when atheist writings have a strong undertone of contempt for those who follow a religion. I am indeed delighted that those atheists have given the matter some thought and come to conclusions that satisfy them (and indeed, as Mr. Yudkowsky mentions above, consider theism an open-and-shut case!), but perhaps for those younger students of rationality such as myself, it would be wonderful if we could read these writings without being looked down upon as mind-numbingly stupid. Despite this, I very much enjoy reading Mr. Yudkowsky's writings and I look forward to reading much more!
I suppose I feel very comfortable with the anonymity provided by the internet, and since I have given relatively little information about myself on a relatively small website, I doubt anyone I know will see this and cry out in horror that I could potentially leave my faith. I would love to have some discussions with anyone on this website who has been in a similar position to me, although I have noticed a bias towards discussions around Christianity, as I suppose many users here are American and that is a major force for you guys over the pond.
I think, above all else, the reason I'm so happy to have found this intellectual sanctuary is that I don't have much else other than my trusty mind. Unlike my peers, chasing girls would be a fruitless effort, and small talk always seemed a bit pointless. Books, learning and thinking have always been my allies, and I cannot wait to read about the biases I can try to eliminate. At the end of the day, if you have only one treasure in life, it would be prudent to look after and improve it wherever possible.
Well met, gents, and again I apologise if I shouldn't be commenting on a very old post!
Replies from: Solliel↑ comment by Solliel · 2015-10-10T08:44:02.296Z · LW(p) · GW(p)
We don't really mind deadposting here. You might consider this a useful resource in your examination of the Quran.
comment by aspera · 2012-10-08T02:52:26.529Z · LW(p) · GW(p)
Hi all. I'm a scientist (postdoc) working on optical inverse problems. I got to LW through the quantum sequence, but my interest lies in probability theory and how it can change the way science is typically done. By comparison, cognitive bias and decision theory are fairly new to me. I look forward to learning what the community has to teach me about these subjects.
In general, I'm startled at the degree to which my colleagues are ignorant of the concepts covered in the sequences and beyond, and I'm here to learn how to be a better ambassador of rationality and probability. Expect my comments to focus on reconciling unfamiliar ideas about bias and heuristics with familiar ideas about optimal problem solving with limited information.
I'll also be interested in interacting with other overt atheists. In physics, I'm pretty well buffered from theistic arguments, but theism is still one of the most obvious and unavoidable reminders of a non-rational society (that and Jersey Shore?). In particular, I'm expecting a son, and I would love to hear some input about non-theistic and rationalist parenting from those with experience.
Replies from: aspera↑ comment by aspera · 2012-10-08T03:11:40.928Z · LW(p) · GW(p)
By the way, I wonder if someone can clear something up for me about "making beliefs pay rent." Eliezer draws a sharp distinction between falsifiable and non-falsifiable beliefs (though he states these concepts differently), and describes stand-alone webs of belief that only support themselves.
But the correlation between predicted experience and actual experience is never perfect: there's always uncertainty. In some cases, there's rather a lot of uncertainty. Conversely, it's extremely difficult to make a statement in English that does not contain ANY information regarding predicted or retrodicted experience. In that light, it doesn't seem useful to draw such a sharp division between two idealized kinds of beliefs. Would Eliezer assign value to a belief based on its probability of predicting experience?
How would you quantify that? Could we define some kind of correlation function between the map and the territory?
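One concrete candidate for such a measure (a sketch of mine, not anything proposed in the thread) is a proper scoring rule such as the Brier score, which grades a probabilistic map by how closely its stated probabilities track actual outcomes, with no sharp falsifiable/unfalsifiable cutoff required. The helper brier_score and the forecasts below are invented for illustration:

```python
# A minimal sketch (invented forecasts): scoring a probabilistic map
# against the territory with the Brier score, a proper scoring rule.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.

    Lower is better: 0.0 is a perfect map; a maximally uninformative
    forecast of 0.5 every time scores 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

print(brier_score([0.9, 0.8, 0.1, 0.95], [1, 1, 0, 1]))  # ≈ 0.016: pays rent
print(brier_score([0.5, 0.5, 0.5, 0.5],  [1, 1, 0, 1]))  # 0.25: predicts nothing
```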
Replies from: TimS↑ comment by TimS · 2012-10-08T03:24:48.990Z · LW(p) · GW(p)
I always understood the distinction to be about when it was justifiable to label a theory as "scientific." Thus, a theory that in principle can never be proven false (Popper was thinking of Freudian psychology) should not be labeled as a "scientific theory."
The further assertion is that if one is not being scientific, one is not trying to say true things.
Replies from: aspera↑ comment by aspera · 2012-10-08T03:55:29.188Z · LW(p) · GW(p)
Thanks Tim.
In the post I'm referring to, EY evaluates a belief in the laws of kinematics based on predicting how long a bowling ball will take to hit the ground when tossed off a building, and then presumably testing it. In this case, our belief clearly "pays rent" in anticipated experience. But what if we know that we can't measure the fall time accurately? What if we can only measure it to within an uncertainty of 80% or so? Then our belief isn't strictly falsifiable, but we can gather some evidence for or against it. In that case, would we say it pays some rent?
My argument is that nearly every belief pays some rent, and no belief pays all the rent. Almost everything couples in some weak way to anticipated experience, and nothing couples perfectly.
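That point is easy to demonstrate numerically. Below is a minimal sketch with made-up numbers (a 45 m drop, an invented rival prediction of 4 s, and a stopwatch with 0.8 s of Gaussian noise; gaussian_pdf is an illustrative helper): even one sloppy measurement shifts the likelihood ratio between two hypotheses, so the belief collects at least some rent.

```python
# A sketch with made-up numbers: a very noisy measurement of fall time
# still provides *some* evidence between two competing predictions.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

h, g = 45.0, 9.8                     # hypothetical 45 m building
t_kinematics = math.sqrt(2 * h / g)  # kinematics predicts ≈ 3.03 s
t_rival = 4.0                        # an invented rival theory's prediction
sigma = 0.8                          # a very sloppy stopwatch

observed = 3.2  # one noisy measurement
lr = gaussian_pdf(observed, t_kinematics, sigma) / gaussian_pdf(observed, t_rival, sigma)
print(lr)  # ≈ 1.6 > 1: weak, but real, evidence favoring kinematics
```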
Replies from: TimS, beoShaffer↑ comment by TimS · 2012-10-10T17:47:23.560Z · LW(p) · GW(p)
I think you are conflating the issue of falsifiability with the issue of instrument accuracy. Falsifiability is just one of several conditions for labeling a theory as scientific. Specifically, the requirement is that a theory must detail in advance what phenomena won't happen. The theory of gravity says that we won't see a ball "fall" up or spontaneously enter orbit. When more specific predictions are made, instrument errors (and other issues like air friction) become an issue, but that's not the core concern of falsifiability.
For example, Karl Popper was concerned about the mutability of Freudian psychoanalysis, which seemed capable of explaining both an occurrence and its negation without difficulty. By contrast, the theory of gravity standing alone admits that it cannot explain an object falling to Earth at an acceleration other than 9.8 m/s^2. Science as a whole has explanations, but gravity doesn't.
Committing to falsifiability helps prevent failure modes like belief in belief.
Replies from: aspera↑ comment by aspera · 2012-10-10T18:38:00.642Z · LW(p) · GW(p)
There are a couple things I still don't understand about this.
Suppose I have a bent coin, and I believe that P(heads) = 0.6. Does that belief pay rent? Is it a "floating belief"? It is not, in principle, falsifiable. It's not a question of measurement accuracy in this case (unless you're a frequentist, I guess). But I can gather some evidence for or against it, so it's not uninformative either. It would be useful to have something between grounded and floating beliefs to describe this belief.
Second, when LWers talk about beliefs, or "the map," are they referring to a model of what we expect to observe, or how things actually happen? This would dictate how we deal with measurement uncertainties. In the first case, they must be included in the map, trivially. In the second case, the map still has an uncertainty associated with it that results from back-propagation of measurement uncertainty in the updating process. But then it might make sense to talk only about grounded or floating beliefs, and to attribute the fuzzy stuff in between to our inability to observe without uncertainty.
Your distinction makes sense - I'm just not sure how to apply it.
Replies from: TimS↑ comment by TimS · 2012-10-10T19:25:19.640Z · LW(p) · GW(p)
Strictly speaking, no proposition is proven false (i.e. probability zero). A proposition simply becomes much less likely than competing, inconsistent explanations. To speak that strictly, falsifiability requires the ability to say in advance what observations would be inconsistent (or less consistent) with the theory.
Your belief that the coin is bent does pay rent - you would be more surprised by 100 straight tails than if you thought the coin was fair. But neither P=.6 nor P=.5 is particularly consistent with the new observations.
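To put rough numbers on that (hypothetical values of mine, not TimS's calculation; log_likelihood is an illustrative helper): comparing log-likelihoods after 100 straight tails shows both named hypotheses losing badly to a competing explanation.

```python
# A sketch with hypothetical numbers: compare how well each hypothesis
# about the coin explains 100 straight tails.
import math

def log_likelihood(p_heads, heads, tails):
    return heads * math.log(p_heads) + tails * math.log(1 - p_heads)

ll_bent = log_likelihood(0.6, heads=0, tails=100)          # 100*ln(0.4) ≈ -91.6
ll_fair = log_likelihood(0.5, heads=0, tails=100)          # 100*ln(0.5) ≈ -69.3
ll_tails_heavy = log_likelihood(0.01, heads=0, tails=100)  # 100*ln(0.99) ≈ -1.0

print(ll_bent - ll_fair)  # ≈ -22.3: the data favor "fair" over "P(heads)=0.6"...
# ...but a tails-heavy coin explains the data far better than either,
# which is the point: both original beliefs become much less likely
# than a competing, inconsistent explanation.
```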
Map & Territory is a slightly different issue. Consider the toy example of the colored balls in the opaque bag. Map & Territory is a metaphor to remind you that your belief in the proportion of red and blue balls is distinct from the actual proportion. Changes in your beliefs cannot change the actual proportions.
Your distinction makes sense - I'm just not sure how to apply it.
When examining a belief, ask "What observations would make this belief less likely?" If your answer is "No such observations exist" then you should have grave concerns about the belief.
Note the distinction between:
- Observations that would make the proposition less likely
- Observations I expect
I don't expect to see a duck have sex with an otter and give birth to a platypus, but if I did, I'd start having serious reservations about the theory of evolution.
Replies from: BerryPick6, aspera↑ comment by BerryPick6 · 2012-10-10T21:31:49.319Z · LW(p) · GW(p)
I found this extremely helpful as well, thank you.
↑ comment by beoShaffer · 2012-10-08T04:24:31.576Z · LW(p) · GW(p)
But what if we know that we can't measure the fall time accurately? What if we can only measure it to within an uncertainty of 80% or so? Then our belief isn't strictly falsifiable, but we can gather some evidence for or against it. In that case, would we say it pays some rent?
Yes. As a more general clarification, making beliefs pay rent is supposed to highlight the same sorts of failure modes as falsifiability while allowing useful but technically unfalsifiable beliefs (e.g., your example, some classes of probabilistic theories).
comment by gunnervi · 2012-07-09T02:00:24.681Z · LW(p) · GW(p)
Hello!
I'm an 18-year-old American physics undergraduate (rising sophomore). I came here after reading HPMOR, and because I think that being rational will improve my ability as a scientist (and now I've realized, though I guessed it after reading Surely You're Joking, Mr. Feynman, that I need to get better at not guessing the teacher's password). I know a bit of pure mathematics but little of the cognitive sciences (take this category as you will; if you think something might be in this category, then I likely don't know much more about it than core Sequences + layperson's knowledge).
Also, please yell at me if I make claims about history and give no sources. (One of my friends growing up was a huge history buff, so I have a bunch of half-remembered historical facts in my head (mostly WWII and Roman era) that I tend to assume are not only true but undisputed and common knowledge.) Even in informal settings I should link to, at the least, Wikipedia. (This also ensures that I am not making false claims.)
comment by arbimote · 2010-01-10T11:42:57.173Z · LW(p) · GW(p)
- Handle: arbimote
- Gender: Male
- Age: 22 (born 1987)
- Location: Australia
- Occupation: Student of computer science
I've been lurking since May 2009. My views on some issues that are often brought up on LW are:
- It's a good idea to sign up for cryonics if you have the money, due to a Pascal's Wager type argument. I have not signed up, since I do not yet have the money (and AFAIK there are further complications due to being in Australia).
- It is possible and desirable for humans to create AGI.
- MWI seems intuitive to me, but I have not read enough about the subject to form a decent estimate of its correctness.
I feel like I should pad out this intro with more information, but that'll have to do for now.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-10T11:49:47.656Z · LW(p) · GW(p)
I feel like I should pad out this intro with more information
No, this is fine - thanks for commenting!
comment by akshatrathi · 2009-11-22T01:20:20.136Z · LW(p) · GW(p)
- Name: Akshat Rathi
- Location: Oxford, UK
- Age: 22
- Education: B. Tech (Pharmaceutical Chemistry & Technology), currently studying towards a D. Phil. in Organic Chemistry
I grew up in India but in a family where religion was never forced on the individual. I think I became a rationalist the day I started countering superstition and its evils through reasoning. Now as a scientist I find myself rationalising every experimental outcome. As a chemist, I get angry every so often when I have to settle for an empirical outcome over a rational one.
I was introduced to Less Wrong by alexflint, with whom I co-author a blog. I have always been interested in philosophy and hope to take it up as a subject of study very soon.
Replies from: RobinZ↑ comment by RobinZ · 2009-11-22T03:20:32.469Z · LW(p) · GW(p)
Welcome! I'm sure we'll be glad to get your input.
Incidentally, if you're interested in checking out some of the posts, there are a couple places which are quite good to start:
- The About page, which has a list of sample posts, mostly written by Eliezer Yudkowsky and imported from Overcoming Bias, and
- The Top Rated page, which includes many very good posts from the recent, Less Wrong era.
↑ comment by akshatrathi · 2009-11-22T03:31:50.964Z · LW(p) · GW(p)
Thanks for the welcome. I have a few simple questions: How do I get bullet points in comments? How do I make text into hyperlinks? And how do I get that blue line on the left margin when quoting something?
Replies from: RobinZ, AdeleneDawner↑ comment by RobinZ · 2009-11-22T03:38:56.888Z · LW(p) · GW(p)
Much of it is explained by the text that appears when you click the "Help" link below the comment. (Look below the text window at the right.) But to do those three things specifically:
- Bulleted lists: Put an asterisk (*) at the beginning of each line corresponding to an item on the list. Edit: You may need to put a space after the asterisk.
- Hyperlinks: Put the text you want visible in square brackets, then immediately after (no space) the URL in parentheses. Thus: [Three Worlds Collide](http://lesswrong.com/lw/y4/three_worlds_collide_08/) becomes Three Worlds Collide.
- Blockquotes: Put a greater-than (>) sign at the beginning of each quoted line (including blank lines between paragraphs).
The full specification of the Markdown Syntax has more detail.
↑ comment by AdeleneDawner · 2009-11-22T03:38:18.202Z · LW(p) · GW(p)
Comments use markdown formatting. It's very similar to how one might format an email.
comment by LukeParrish · 2009-05-29T01:02:57.973Z · LW(p) · GW(p)
- Name: Luke Parrish
- Age: 26 next month
- Sex: Male
- Personality: INTP (Socionics INTJ)
- Location: Idaho, USA
- Ideas I like: Cryonics, Forth, Esperanto, Socionics.
I became skeptical of God when I realized that as a philosophical construct his existence would present some unanswerable questions. Also it helped when I decided I was not going to hell over asking a few logical questions. I don't typically position myself as an atheist -- why should I be defined by what I don't accept? Instead I attempt to be someone who is willing to evaluate any logical question and expect consistent answers.
I believe advancing the cause of cryonics and/or life extension is important from a moral perspective, since if they take longer to develop or be accepted, that translates to more people dying. I haven't yet signed up for cryonics but definitely intend to.
comment by Velochy · 2009-04-22T19:12:44.783Z · LW(p) · GW(p)
Hello,
My name is Margus Niitsoo and I'm a 22-year-old Computer Science doctoral student in Tartu, Estonia. I have wide interests that span religion and psychology as well (I am a pantheist, by the way, so somewhat religious but unaffected by most of the classical theism-bashing). I got here through OB, which I got to when reading about AI and the thing that shall not be named.
I do not identify myself as a rationalist, for I only recently understood how emotional a person I really am, and I'd like to enjoy it before trying to get it under control again. However, I am interested in understanding human behaviour as best I can, and this blog has given me many new insights I doubt I could have gotten elsewhere.
Replies from: MBlume, thomblake↑ comment by MBlume · 2009-04-22T19:31:22.411Z · LW(p) · GW(p)
I do not identify myself as a rationalist, for I only recently understood how emotional a person I really am, and I'd like to enjoy it before trying to get it under control again.
Note that rationality does not necessarily oppose emotion.
Becoming more rational - arriving at better estimates of how-the-world-is - can diminish feelings or intensify them. Sometimes we run away from strong feelings by denying the facts, by flinching away from the view of the world that gave rise to the powerful emotion. If so, then as you study the skills of rationality and train yourself not to deny facts, your feelings will become stronger.
If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.
Replies from: pjeby
↑ comment by pjeby · 2009-04-22T20:49:32.881Z · LW(p) · GW(p)
If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.
What if the iron is hot, but if you flinch, you'll be shot? Fear of the iron won't help you stay steady, and neither will fear of the bullet.
(Note: IAWYC, I'm just taking this opportunity to nitpick the silly notion that "truth" determines or even should determine your emotions. Emotions should be chosen to support your desired actions and results.)
Replies from: MBlume, thomblake↑ comment by MBlume · 2009-04-22T21:17:40.696Z · LW(p) · GW(p)
My fear of the bullet would cause me to want to avoid it, which would mean I must ensure that I do not flinch. The decision to flinch or not to flinch is in the hands of low-level circuitry in my brain, and the current inputs to that circuitry will tend to produce a flinch. So I would be well advised to change those inputs if I can, by visualizing myself on a beach, curled up in bed, sitting at my computer writing comments on Less Wrong, or some other calming, comforting environment. If this is a form of self-deception, it is one I am comfortable with. It is of the same kind that I practiced as a member of the bardic conspiracy, and I don't think that hurt my epistemic rationality any.
↑ comment by thomblake · 2009-04-22T19:19:30.099Z · LW(p) · GW(p)
Note that rationality and emotion are not mutually exclusive, and thinking that they are can get you into trouble. Good reference, anyone? I'd recommend Aristotle.
ETA: Yes, Vladimir_Nesov's link, below, is what I was looking for.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-22T19:24:24.831Z · LW(p) · GW(p)
The reference from OB is Feeling Rational.
comment by JaredWigmore · 2009-04-21T12:15:51.761Z · LW(p) · GW(p)
- Name: Jared Wigmore
- Age: 21
- Location: Hamilton, NZ
- Education: doing Honours in Computer Science at the University of Waikato

I discovered LW from OB, which I have been following for some time. I'm a contributor to an open-source project that shall not be named. Incidentally, I know Python well.
comment by randallsquared · 2009-04-18T15:21:35.262Z · LW(p) · GW(p)
- Handle: randallsquared (why are we putting this, anyway?)
- Name: Randall Randall
- Location: Washington, DC metro area
- Age: 35
- Education: some college; mostly autodidact.
I found LW through OB, which was mentioned on the SL4 list at least a year ago. I haven't contributed much in either place (nor did I post much on SL4), and mostly read OB and LW when I've used up the recent interesting commentary on reddit and Hacker News.
That said, I'm interested in rationality and thinking, but not to the extent that many here seem to be. I tend to assume my intuition is correct about things (biases and all) until it's obvious that it isn't, and due in part to this, I'm pretty conservative, morally, though libertarian/anarchist politically.
comment by MBlume · 2009-04-16T23:15:03.666Z · LW(p) · GW(p)
Added the note for theists. At the moment, the set of links is extremely subjective and mostly reflects what, historically, got the job done for me. Please feel free to make edits to the Wiki page.
Replies from: ciphergoth
comment by ektimo · 2009-04-16T16:57:26.031Z · LW(p) · GW(p)
I read the "Meaning of Life FAQ" by a previous version of Eliezer in 1999 when I was trying to write something similar, from a Pascal’s Wager angle. I've been a financial supporter of the Organization That Can't Be Named and a huge fan of Eliezer's writings since that same time. After reading "Crisis of Faith" along with "Could Anything Be Right?" I finally gave up on objective value. Feeling my mind change was an emotional experience that lasted about two days.
This is seriously in need of updating, but here is my home page: http://home.pacbell.net/eevans2/.
BTW, would using Google Adwords be a good way to draw people to 12 Virtues? For example:
- Search phrase: how to be better
- Cost per click: $0.05
- Approximate volume per month: 33,100
(Also, I got "error on page" when trying to submit this comment using Internet Explorer 8.)
comment by Benevolence · 2012-07-09T08:19:32.577Z · LW(p) · GW(p)
Greetings!
My name is Dimitri Karcheglo, and I'm 22. I live in BC, Canada, having immigrated here from Odessa, Ukraine in 1998. I speak Russian as my first language though, not Ukrainian. Most of you likely don't know, but Odessa is a very Russian-speaking city in Ukraine.
I've been kinda lurking for a bit, but not very extensively or very consistently. I was directed here originally via HPMoR, which was recommended by a friend. I've known about this site for probably around a year. Originally I read through the Map and Territory and Mysterious Answers to Mysterious Questions sequences; after that I kind of didn't come to this site for a while.
Well, I'm back now! I'm re-reading from the start since I have forgotten a lot. I'm also planning to go a lot deeper into LW this time around, and to keep up with it on a day-to-day basis in the future. I am very much interested in improving my thinking, and hope to gain a lot of that here. I don't come very prepared like many people I see posting here. I have no degrees in programming, physics, mathematics, or whatnot.
I'm currently studying civil engineering, about to enter my second year. I've done one year in computer programming and may do some self-education in this field down the line to improve my base. The motivation to do this likely won't show up for a while, though.
You likely won't be seeing me post much at all for quite a while, until I familiarize myself with the understandings presented on this site quite a bit more. I do hope to raise enough money next year to go visit one of these rationality camps, as I hope to have a better understanding of the subject by then, but with costs of education being what they are, I'm doubtful.
comment by [deleted] · 2012-01-08T16:42:45.461Z · LW(p) · GW(p)
I’ll introduce myself by way of an argument against material reductionism. This is an argument borrowed from Plato’s dialogue “Euthyphro”. I don’t intend this to be a knock down critique or anything. Rather, I think I might learn something about the idea of materialism (about which I’m pretty confused) from your replies should I receive any. Here goes:
Tom is carrying a bucket. There are two facts here: 1) that Tom is carrying the bucket, and 2) that the bucket is carried by Tom. (1) is something like the ‘active fact’, and (2) is something like the ‘passive fact’.
We’re material reductionists, so any true proposition is true because some material state of affairs obtains, and this is all it means to be a fact. But both fact (1) and fact (2) refer to the same state of affairs. Reduced to a material state of affairs (say the position and velocity of the molecules in the bucket and in Tom), we can’t distinguish between fact (1) and fact (2).
This is a problem because fact (1) and fact (2) are different facts: Tom is not carrying the bucket because the bucket is carried by Tom. Rather, the bucket is carried by Tom because Tom is carrying the bucket. Fact (1) has explanatory priority over fact (2).
But since there is no way to distinguish the two facts as material states of affairs, there must be more to fact (1) and fact (2) than the material state of affairs to which they refer.
What do you think? I’ve no doubt we can poke holes in this argument, but I need some help doing so.
Replies from: Anubhav, Alejandro1↑ comment by Anubhav · 2012-01-08T16:55:05.356Z · LW(p) · GW(p)
Does phrasing the state of affairs as (2) instead of (1) have any effect on your anticipations?
If not, they're the same fact.
Replies from: None↑ comment by [deleted] · 2012-01-08T17:40:26.046Z · LW(p) · GW(p)
The article to which you refer presents a convincing case, but I think it's probably inconsistent with a Tarskian semantic theory of truth (ETA: assuming it aims at defining truth, or at laying out a criterion for the identity of facts). We would have to infer from Tom's carrying the bucket to the bucket's being carried by Tom, since we couldn't offer the Tarskian sentence "'Tom is carrying the bucket' iff the bucket is being carried by Tom" up as a definition of the truth of "Tom is carrying the bucket."
I can see Eliezer's point on an epistemological level, but what theory of truth do we need in order to understand anticipations as bearing on the identity of facts themselves?
Suppose we say simply that an identical set of anticipations makes two facts identical. Now suppose that I'm working in a factory in which I must crack red and blue eggs open to discover the color of the yolk (orange in the case of red eggs, green in the case of blue). But suppose also that all red, orange-yolked eggs are rough to the touch, and all blue, green-yolked eggs are smooth. The redness and the roughness of an egg will lead to an identical set of anticipations (the orangeness of the yolk). But we certainly can't say that the redness and the roughness of an egg are the same fact, since they don't even refer to the same material state of affairs.
Replies from: Anubhav↑ comment by Anubhav · 2012-01-09T04:55:51.143Z · LW(p) · GW(p)
Apparently we're speaking across a large inferential distance. I don't know about Tarskian sentences, so I can't comment on those, but I can clarify the 'anticipation controller' idea.
Basically, you're defining 'anticipation' more narrowly than what Eliezer meant by the term.
If you tell me that an egg is rough, I will anticipate that, if I rub my fingers over it, my skin will feel the sensations I associate with rough surfaces.
If you tell me that an egg is red, I will anticipate that when I look at it, the cells in my retina that are sensitive to long-wavelength radiation will be excited more than the other cells in my retina.
Clearly, these are different anticipations, so we say that redness and roughness are two different facts.
If you say to me, 'Tom is carrying a bucket', I anticipate that if I were to look in Tom's direction, I would see him carrying a bucket. If you say to me 'a bucket is carried by Tom', I anticipate that if I were to look in Tom's direction, I would see... him carrying a bucket. In other words, whether you phrase it as (1) or (2), my anticipations are exactly the same, and so I claim they're the same fact.
But you seem to be telling me that not only are they different facts, but somehow one is more fundamental than the other, and I have no idea what you mean by that.
Replies from: None↑ comment by [deleted] · 2012-01-09T15:48:43.807Z · LW(p) · GW(p)
Thanks for clarifying the point about anticipations, that was very helpful and I'll have to give it more thought. I read Eliezer's article again, and while I don't think his intention was to give an account of the identity of facts, he does mention that if we're arguing over facts with identical anticipations, we may be arguing over a merely semantic point. That's very possibly what's going on here, but let me try to defend the idea that these are distinct facts one last time. If I cannot persuade you at all, I'll reconsider the worth of my argument.
In my comment to Alejandro1, I mentioned three sets of facts. I'll pare down that point here to its simplest form: the relationship between 'X is taller than Y' and 'Y is shorter than X' is different than the relationship between 'X carries Y' and 'Y is carried by X'. This difference is in the priority of the former and the latter fact in each set. In the case of taller and shorter, there is no priority of one fact over the other. They really are just different ways of saying the same thing.
In the case of carrying and being carried, there is a priority. Y's being carried is explained by X's carrying. Y is being carried, but because X is carrying it. It is not true that X is carrying because Y is being carried. In other words, X is related to Y as agent to patient (I don't mean agency in an intentional sense; this would apply to fire and what it burns). If we try to treat 'X carries Y' and 'Y is carried by X' as involving no explanatory priority (if we try to treat them as the same fact), we lose the explanatory priority, in this case, of agent over patient.
An example of this kind of explanatory priority (in the other direction) might be this set: 'A falling tree kills a deer' and 'a deer is killed by a falling tree'. Here, I think the explanatory priority is with the patient. It is only because a deer is such as to be killed that a tree could be a killer. We have to explain the tree's killing by reference to the deer's being killed. If the tree fell on a deer statue, there would be no explanatory priority.
But maybe my confusion is deeper, and maybe I'm just getting something wrong about the idea of a cause. Thanks for taking the time.
Replies from: Anubhav↑ comment by Anubhav · 2012-01-10T01:26:46.690Z · LW(p) · GW(p)
Apparently you're working in something that's akin to a mathematical system... you start with a few facts (the ones with high 'explanatory priority') and then you derive other facts (the ones with lower 'explanatory priority'). Which is nice and all, but this system doesn't really seem to reflect anything in reality. In reality, a deer getting killed by a tree is a tree killing a deer is a deer getting killed by a tree.
Replies from: None↑ comment by [deleted] · 2012-01-10T14:57:43.566Z · LW(p) · GW(p)
Well, I'm not intentionally trying to work with anything like a mathematical system. My claim was just that if by 'in reality' we mean 'referring to basic material objects and their motions', then we lose the ability to claim any explanatory priority between facts like 'X carries Y' and 'Y is carried by X'. Y didn't just get itself carried; X had to come along and carry it. X is the cause of Y's being carried.
But all that hinges on convincing you that there is some such explanatory priority, which I haven't done. I think perhaps my argument isn't very good. Thanks for the discussion, at any rate.
↑ comment by Alejandro1 · 2012-01-08T17:09:59.810Z · LW(p) · GW(p)
Welcome to LW!
The key part of your argument is:
Tom is not carrying the bucket because the bucket is carried by Tom. Rather, the bucket is carried by Tom because Tom is carrying the bucket. Fact (1) has explanatory priority over fact (2).
Why do you think this? I do not have this intuition at all. For me, if both (1) and (2) describe exactly the same material state of affairs, no more no less (rather than, e.g. (1) carrying a subtle connotation that the carrying is voluntary) then I would say that the difference between them is only rhetorical, and neither explains the other one more than vice versa.
Replies from: None↑ comment by [deleted] · 2012-01-08T17:24:14.438Z · LW(p) · GW(p)
Thanks for the welcome, and for the reply. My whole argument turns on the premise that the two facts are distinctive because one has explanatory priority over the other, so I'll try to make this a little clearer.
So, here are three sets of facts. The first set involves no explanatory priority, in the second the active fact is prior, and in the last the passive fact is prior.
A) Tom is taller than Ralph, Ralph is shorter than Tom.
B) Tom praised Steve, Steve was praised by Tom.
C) Tom inadvertently offended Mary, Mary was offended by Tom inadvertently.
In the first case, of course, the facts are perfectly interchangeable. In the second, it seems to me, the active fact explains the passive fact. I mean that it would sound odd to say something like "It is because Steve was praised that Tom praised him" but it seems perfectly natural to say "It is because Tom praised him that Steve was praised."
And in the last case, I think we are all familiar with the fact that Tom can hardly explain to Mary that he didn't try to offend her, and so she was not offended. Tom offended Mary because she was offended. Mary's being offended explains Tom's inadvertent offending.
Is that convincing at all? I know my examples of explanatory priority are pretty far from billiard ball examples, etc. but maybe the point can be made there as well. Let me know what you think.
comment by gwern · 2012-01-03T04:46:41.935Z · LW(p) · GW(p)
An interesting outside perspective on AspiringKnitter: http://www.reddit.com/r/atheism/comments/nzwtv/a_very_strange_discussion_with_an_originally/
comment by Insert_Idionym_Here · 2011-12-20T06:00:19.985Z · LW(p) · GW(p)
Oh, hello. I've posted a couple of times, in a couple of places, and those of you who have spoken with me probably know that I am one: a novice, and two: a bit of a jerk.
I'm trying to work on that last one.
I think cryonics, in its current form, is a terrible idea, I am a (future) mathematician, and am otherwise divergent from the dominant paradigm here, but I think the rest of that is for me to know, and you to find out.
Replies from: wedrifid↑ comment by wedrifid · 2011-12-20T06:01:59.484Z · LW(p) · GW(p)
I think cryonics, in its current form, is a terrible idea
What do you think of cremation in its current form?
Replies from: Insert_Idionym_Here↑ comment by Insert_Idionym_Here · 2011-12-20T06:06:31.271Z · LW(p) · GW(p)
I think cryonics is a terrible idea, not because I don't want to preserve my brain until the tech required to recreate it digitally or physically is present, but because I don't think cryonics will do the job well. Cremation does the job very, very badly, like trying to preserve data on a hard drive by melting it down with thermite.
Replies from: wedrifid↑ comment by wedrifid · 2011-12-20T06:12:57.387Z · LW(p) · GW(p)
This obviously invites the conclusion that cryonics is a terrible idea in the same sense that democracy is the worst form of government.
Replies from: Insert_Idionym_Here, Insert_Idionym_Here↑ comment by Insert_Idionym_Here · 2011-12-20T06:30:33.975Z · LW(p) · GW(p)
Are you saying that cryonics is not perfect, but it is the best alternative?
↑ comment by Insert_Idionym_Here · 2011-12-20T06:25:36.404Z · LW(p) · GW(p)
I'm not sure I understand your point. I'll read your link a few more times, just to see if I'm missing something, but I don't quite get it now.
Replies from: wedrifid↑ comment by wedrifid · 2011-12-20T07:02:38.442Z · LW(p) · GW(p)
Just referring to the quote:
"Democracy is the worst form of government except for all those others that have been tried." -- Winston Churchill
Replies from: Insert_Idionym_Here
↑ comment by Insert_Idionym_Here · 2011-12-20T07:26:34.133Z · LW(p) · GW(p)
Ah, I see. I just don't think that cryonics significantly improves the chances of actually extending one's life span, which would be similar to saying that democracy is not significantly better than most other political systems.
Replies from: soreff↑ comment by soreff · 2011-12-22T03:20:58.075Z · LW(p) · GW(p)
What do you see as the limiting factors?
- The technical ability of current best-case cryonics practice to preserve brain structure?
- The ability of average-case cryonics to do the same?
- The risk of organizational failure?
- The risk of larger scale societal failure?
- Insufficient technical progress?
- Runaway unfriendly AI?
- Something else?
↑ comment by Insert_Idionym_Here · 2011-12-22T22:38:44.972Z · LW(p) · GW(p)
All of the above.
comment by AspiringKnitter · 2011-12-19T07:28:45.676Z · LW(p) · GW(p)
Hello. I expect you won't like me because I'm Christian and female and don't want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should. I've been lurking for a long time. The first time I found this place I followed a link to OvercomingBias from AnneC's blog and from there, without quite realizing it, found myself archive-binging and following another link here. But then I stopped and left and then later I got linked to the Sequences from Harry Potter and the Methods of Rationality.
A combination of the whole evaporative cooling thing and looking at an old post that wondered why there weren't more women convinced me to join. You guys are attracting a really narrow demographic and I was starting to wonder whether you were just going to turn into a cult and I should ignore you.
...And I figure I can still leave if that ends up happening, but if everyone followed the logic I just espoused, it'll raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world. I'd rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don't agree with Eliezer Yudkowsky. Not that any of you (especially EY) WANT that, exactly. But anyway, my point is, With Folded Hands is a pretty bad failure mode for the worst-case scenario where EC occurs and EY gets to AI first.
Okay, ready to be shouted down. I'll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?) I'll probably just leave soon anyway. Nothing good can come of this. I don't know why I'm doing this. I shouldn't be here; you don't want me here, not to mention I probably shouldn't bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
Replies from: None, Nornagest, thomblake, juliawise, Emile, CronoDAS, Ezekiel, None, Bugmaster, cousin_it, JoachimSchipper, EvelynM, wedrifid, Gust, TimS, kilobug, lavalamp, None, Mitchell_Porter, lessdazed, Jonii, AspiringKnitter, wedrifid, Laoch↑ comment by [deleted] · 2011-12-25T20:58:56.215Z · LW(p) · GW(p)
Wow. Some of your other posts are intelligent, but this is pure troll-bait.
EDIT: I suppose I should share my reasoning. Copied from my other post lower down the thread:
Hello, I expect you won't like me, I'm
Classic troll opening. Challenges us to take the post seriously. Our collective 'manhood' is threatened if we react normally (e.g. saying "trolls fuck off").
dont want to be turned onto an immortal computer-brain-thing that acts more like Eliezer thinks it should
Insulting straw man with a side of "you are an irrational cult".
I've been lurking for a long time... overcoming bias... sequences... HP:MOR... namedropping
"Seriously, I'm one of you guys". Concern troll disclaimer. Classic.
evaporative cooling... women... I'm here to help you not be a cult.
Again undertones of "you are a cult and you must accept my medicine or turn into a cult". Again we are challenged to take it seriously.
I just espoused, it'll raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world.
I didn't quite understand this part, but again, straw man caricature.
I'd rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don't agree with Eliezer Yudkowsky. Not that any of you (especially EY) WANT that, exactly. But anyway, my point is, With Folded Hands is a pretty bad failure mode for the worst-case scenario where EC occurs and EY gets to AI first.
There's a rhetorical meme on 4chan that elegantly deals with this kind of crap:
implying we don't care about friendliness
implying you know more about friendliness than EY
'nuff said
Okay, ready to be shouted down. I'll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all.
classic reddit downvote preventer:
- Post a troll or other worthless opinion
- Imply that the hivemind wont like it
- Appeal to people's fear of hivemind
- Collect upvotes.
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
again implying irrational insider/outsider dynamic, hivemind tendencies and even censorship.
Of course the kneejerk response is "no no, we don't hate you and we certainly won't censor you; please, we want more christian trolls like you". EDIT: Ha! Well predicted, I say. I just looked at the other 500 responses. /EDIT
I'll probably just leave soon anyway. Nothing good can come of this. I don't know why I'm doing this. I shouldn't be here; you don't want me here, not to mention I probably shouldn't bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
And top it off with a bit of sympathetic-character, damsel-in-distress crap. EDIT: Oh, and the bit about hating God is a straw man. /EDIT
This is not necessarily deliberate, but it doesn't have to be.
Trolling is a art. and Aspiring_Knitter is a artist. 10/10.
Replies from: NancyLebovitz, AspiringKnitter, Crux, Jonii, None, MixedNuts↑ comment by NancyLebovitz · 2011-12-25T22:48:59.790Z · LW(p) · GW(p)
You've got an interesting angle there, but I don't think AspiringKnitter is a troll in the pernicious sense-- her post has led to a long reasonable discussion that she's made a significant contribution to.
I do think she wanted attention, and her post had more than a few hooks to get it. However, I don't think it's useful to describe trolls as "just wanting attention". People post because they want attention. The important thing is whether they repay attention with anything valuable.
Replies from: None↑ comment by [deleted] · 2011-12-25T23:44:26.604Z · LW(p) · GW(p)
I don't have the timeline completely straight, but it looks to me like AspiringKnitter came in trolling and quickly changed gears to semi-intelligent discussion. Such things happen. AspiringKnitter is no longer a troll, that's for sure; like you say "her post has led to a long reasonable discussion that she's made a significant contribution to".
All that, however, does not change the fact that this particular post looks, walks, and quacks like troll-bait and should be treated as such. I try to stay out of the habit of judging posts on the quality of the poster's other stuff.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-26T08:32:18.766Z · LW(p) · GW(p)
I don't know if this is worth saying, but you look a lot more like a troll to me than she does, though of a more subtle variety than I'm used to.
You seem to be taking behavior which has been shown to be in the harmless-to-useful range and picking a fight about it.
Replies from: None↑ comment by [deleted] · 2011-12-26T20:59:33.755Z · LW(p) · GW(p)
Thanks for letting me know. If most people disagree with my assessment, I'll adjust my troll-resistance threshold.
I just want to make sure we don't end up tolerating people who appear to have trollish intent. AspiringKnitter turned out to be positive, but I still think that particular post needed to be called out.
Well Kept Gardens Die By Pacifism.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-26T21:55:35.939Z · LW(p) · GW(p)
You're welcome. This makes me glad I didn't come out swinging-- I'd suspected (actually I had to resist the temptation to obsess about the idea) that you were a troll yourself.
If you don't mind writing about it, what sort of places have you been hanging out that you got your troll sensitivity calibrated so high? I'm phrasing it as "what sort of places" in case you'd rather not name particular websites.
Replies from: None↑ comment by [deleted] · 2011-12-26T22:21:39.470Z · LW(p) · GW(p)
what sort of places have you been hanging out that you got your troll sensitivity calibrated so high?
4chan, where there is an interesting dynamic around trolling and getting trolled. Getting trolled is low-status; correctly calling out trolls that no one else caught is high-status; and trolling itself is god-status. Calling troll incorrectly is low-status, like getting trolled. With that culture, the art of trolling, counter-trolling, and troll detection gets well trained.
I learned a lot of trolling theory from reddit (like the downvote preventer and concern trolling). The politics, anarchist, feminist and religious subreddits have a lot of good cases to study (they generally suck at managing community, though).
I learned a lot of relevant philosophy of trolling and some more theory from /i/nsurgency boards and wikis (start at partyvan.info). Those communities are in a sorry state these days.
A lot of what I learned on 4chan and /i/ is not common knowledge around here and could be potentially useful. Maybe I'll beat some of it into a useful form and post it.
Replies from: Vaniver, NancyLebovitz↑ comment by Vaniver · 2011-12-26T22:37:37.564Z · LW(p) · GW(p)
Maybe I'll beat some of it into a useful form and post it.
For one thing, the label "trolling" seems like it distracts more than it adds, just like "dark arts." AspiringKnitter's first post was loaded with influence techniques, as you point out, but it's not clear to me that pointing at influence techniques and saying "influence bad!" is valuable, especially in an introduction thread. I mean, what's the point of understanding human interaction if you use that understanding to botch your interactions?
Replies from: wedrifid, None↑ comment by wedrifid · 2011-12-27T19:40:18.601Z · LW(p) · GW(p)
but it's not clear to me that pointing at influence techniques and saying "influence bad!" is valuable, especially in an introduction thread.
There is a clear benefit to pointing out when a mass of other people are falling for influence techniques in a way you consider undesirable.
Replies from: Vaniver↑ comment by Vaniver · 2011-12-27T22:20:13.122Z · LW(p) · GW(p)
There is a clear benefit to pointing out when a mass of other people are falling for influence techniques in a way you consider undesirable.
It is certainly worth pointing out the techniques, especially since it looks like not everyone noticed them. What's not clear to me is the desirability of labeling it as "bad," which is how charges of trolling are typically interpreted.
↑ comment by [deleted] · 2011-12-26T22:53:30.348Z · LW(p) · GW(p)
I see your point, but that post wasn't using dark arts to persuade anything, it looked very much like the purpose was controversy. Hence trolling.
Replies from: Vaniver↑ comment by Vaniver · 2011-12-26T23:02:13.486Z · LW(p) · GW(p)
that post wasn't using dark arts to persuade anything
Son, I am disappoint.
Replies from: None↑ comment by [deleted] · 2011-12-26T23:29:51.143Z · LW(p) · GW(p)
Are you implying there was persuasion going on? Or that I used "dark arts" when I shouldn't?
Replies from: Vaniver, thomblake↑ comment by Vaniver · 2011-12-27T01:03:48.111Z · LW(p) · GW(p)
Easiest first: I introduced "dark arts" as an example of a label that distracted more than it added. It wasn't meant as a reference to or description of your posts.
In your previous comment, you asked the wrong question ('were they attempting to persuade?') and then managed to come up with the wrong answer ('nope'). Both of those were disappointing (the first more so) especially in light of your desire to spread your experience.
The persuasion was "please respond to me nicely." It was richly rewarded: 20 welcoming responses (when most newbies get 0 or 1), and the first unwelcoming response got downvoted quickly.
The right question is, what are our values here? When someone expressing a desire to be welcomed uses influence techniques that further that end, should we flip the table over in disgust that they tried to influence us? That'll show them that we're savvy customers who can't be trolled! Or should we welcome them because we want the community to grow? That'll show them that we're worth sticking around for.
I will note that I upvoted this post, because in the version that I saw it started off with "Some of your other posts are intelligent" and then showed many of the tricks AspiringKnitter's post used. Where I disagree with you is the implication that we should have rebuked her for trolling. The potential upsides of treating someone with charity and warmth are far greater than the potential downsides of humoring a troll for a few posts.
Replies from: None↑ comment by NancyLebovitz · 2011-12-26T22:49:11.065Z · LW(p) · GW(p)
That's interesting-- I've never hung out anywhere that trolling was high status.
In reddit and the like, how is consensus built around whether someone is a troll and/or is trolling in a particular case?
I think I understand concern trolling, which I understand to be giving advice which actually weakens the receiver's position, though I think the coinage "hlep" from Making Light is more widely useful--inappropriate, annoying/infuriating advice which is intended to be helpful but doesn't have enough thought behind it. But what's a downvote preventer?
Hlep has a lot of overlap with other-optimizing.
I'd be interested in what you have to say about the interactions at 4chan and /i/, especially about breakdowns in political communities.
I've been mulling the question of how you identify and maintain good will-- to my mind, a lot of community breakdown is caused by tendencies to amplify disagreements between people who didn't start out being all that angry at each other.
Replies from: None↑ comment by [deleted] · 2011-12-26T23:25:29.982Z · LW(p) · GW(p)
In reddit and the like, how is consensus built around whether someone is a troll and/or is trolling in a particular case?
On reddit there is just upvotes and downvotes. Reddit doesn't have developed social mechanisms for dealing with trolls, because the downvotes work most of the time. Developing troll technology like the concern troll and the downvote preventer to hack the hivemind/vote dynamic is the only way to succeed.
4chan doesn't have any social mechanisms either, just the culture. Communication is unnecessary for social/cultural pressure to work, interestingly. Once the countertroll/troll/troll-detector/trolled/troll-crier hierarchy is formed by the memes and mythology, the rest just works in your own mind. "Fuck, I got trolled, better watch out next time", "all these people are getting trolled, but I know the OP is a troll; I'm better than them", "successful troll is successful", "I trolled the troll". Even if you don't post them and no one reacts to them, those thoughts activate the social shame/status/etc. machinery.
I think I understand concern trolling, which I understand to be giving advice which actually weakens the receiver's position, though I think the coinage "hlep" from Making Light is more widely useful
Not quite. A concern troll is someone who comes in saying "I'm a member of your group, but I'm unsure about this particular point in a highly controversial way" with the intention of starting a big useless flame-war.
Haven't heard of hlep. Seems interesting.
but what's a downvote preventer
The downvote preventer is when you say "I know the hivemind will downvote me for this, but..." It creates an association in the reader's mind between downvoting and being a hivemind drone, which people are afraid of, so they don't downvote. It's one of the techniques trolls use to protect the payload, like the way the concern troll used community membership.
I've been mulling the question of how you identify and maintain good will-- to my mind, a lot of community breakdown is caused by tendencies to amplify disagreements between people who didn't start out being all that angry at each other.
Yes. A big part of trolling is actually creating and fueling those disagreements. COINTELPRO trolling is disrupting people's ability to identify trolls and goodwill. There is a lot of depth and difficulty to that.
↑ comment by AspiringKnitter · 2011-12-27T01:50:45.147Z · LW(p) · GW(p)
Wow, I don't post over Christmas and look what happens. Easiest one to answer first.
1. Wow, thanks!
2. You're a little mean.
You don't need an explanation of 2, but let me go through your post and explain about 1.
Classic troll opening. Challenges us to take the post seriously. Our collective 'manhood' is threatened if we react normally (eg saying "trolls fuck off").
Huh. I guess I could have come up with that explanation if I'd thought about it. The truth here is that I was just thinking "you know, they really won't like me, this is stupid, but if I make them go into this interaction with their eyes wide open about what I am, and phrase it like so, I might get people to be nice and listen".
don't want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should
Insulting straw man with a side of "you are an irrational cult".
That was quite sincere and I still feel that that's a worry.
Also, I don't think I know more about friendliness than EY. I think he's very knowledgeable. I worry that he has the wrong values, so his utopia would not be fun for me.
classic reddit downvote preventer:
- Post a troll or other worthless opinion
- Imply that the hivemind won't like it
- Appeal to people's fear of hivemind
- Collect upvotes.
Wow, you're impressive. (Actually, from later posts, I know where you get this stuff from. I guess anyone could hang around 4chan long enough to know stuff like that if they had nerves of steel.) I had the intuition that this would lead to fewer downvotes (but note that I didn't lie; I did expect it to be true, from many theist-unfriendly posts on this site), but I didn't consciously think "this procedure will appeal to people's fear of the hivemind to shame them into upvoting me". I want to thank you for pointing that out. Knowing how and why that intuition was correct will allow me to decide with eyes wide open whether to do something like that in the future, and if I ever actually want to troll, I'll be better at it.
And top it off with a bit of sympathetic character, damsel-in-distress crap.
Actually, I just really need to learn to remember that while I'm posting, proper procedure is not "allow internal monologue to continue as normal and transcribe it". You have no idea how much trouble that's gotten me into. (Go ahead and judge me for my self-pitying internal monologue if you want. Rereading it, I'm wondering how I failed to notice that I should just delete that part, or possibly the whole post.) On the other hand, I'd certainly hope that being honest makes me a sympathetic character. I'd like to be sympathetic, after all. ;)
This is not necessarily deliberate, but it doesn't have to be.
Thank you. It wasn't, but as you say, it doesn't have to be. I hope I'll be more mindful in the future, and bear morality in mind in crafting my posts here and elsewhere. I would never have seen these things so clearly for myself.
10/10.
Thanks, but no. LOL.
I'd upvote you, but your post is just so rude that I don't think I will.
Replies from: TheOtherDave, NancyLebovitz↑ comment by TheOtherDave · 2011-12-27T02:25:41.951Z · LW(p) · GW(p)
Note that declaring Crocker's rules and subsequently complaining about rudeness sends very confusing signals about how you wish to be engaged with.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T02:49:57.432Z · LW(p) · GW(p)
Thank you. I was complaining about his use of needless profanity to refer to what I said, and a general "I'm better than you" tone (understandable, if he comes from a place where catching trolls is high status, but still rude). I not only approve of being told that I've done something wrong, I actually thanked him for it. Crocker's rules don't say "explain things in an insulting way", they say "don't soften the truths you speak to me". You can optimize for information-- and even get it across better-- when you're not trying to be rude. For instance,
And top it off with a bit of sympathetic character, damsel-in-distress crap.
That would not convey less truth if it weren't vulgar. You can easily communicate that someone is tugging people's heartstrings by presenting as a highly sympathetic damsel in distress without being vulgar.
Also, stuff like this:
Ha! Well predicted I say. I just looked at the other 500 responses.
That makes it quite clear that nyan_sandwich is getting a high from this and feels high-status because of behavior like this. While that in itself is fine, the whole post does have the feel of gloating to it. I simultaneously want to upvote it for information and downvote it for lowering the overall level of civility.
Here's my attempt to clarify how I wish to be engaged with: convey whatever information you feel is true. Be as reluctant to actively insult me as you would anyone else, bearing in mind that a simple "this is incorrect" is not insulting to me, and nor is "you're being manipulative". "This is crap" always lowers the standard of debate. If you spell out what's crappy about it, your readers (including yours truly) can grasp for themselves that it's crap.
Of course, if nyan_sandwich just came from 4chan, we can congratulate him on being an infinitely better human being than everyone else he hangs out with, as well as on saying something that isn't 100% insulting, vulgar nonsense. (I'd say less than 5% insulting, vulgar nonsense.) Actually, his usual contexts considered, I may upvote him after all. I know what it takes to be more polite than you're used to others being.
Replies from: cousin_it, TheOtherDave, thomblake↑ comment by cousin_it · 2011-12-27T18:35:06.279Z · LW(p) · GW(p)
That doesn't sound right. Here's a quote from Crocker's rules:
Anyone is allowed to call you a moron and claim to be doing you a favor.
Another quote:
Note that Crocker's Rules does not mean you can insult people; it means that other people don't have to worry about whether they are insulting you.
Quote from our wiki:
Thus, one who has committed to these rules largely gives up the right to complain about emotional provocation, flaming, abuse and other violations of etiquette
There's a decision theoretic angle here. If I declare Crocker's rules, and person X calls me a filthy anteater, then I might not care about getting valuable information from them (they probably don't have any to share) but I refrain from lashing out anyway! Because I care about the signal I send to person Y who is still deciding whether to engage with me, who might have a sensitive detector of Crocker's rules violations. And such thoughtful folks may offer the most valuable critique. I'm afraid you might have shot yourself in the foot here.
Replies from: dlthomas↑ comment by dlthomas · 2011-12-27T18:47:41.748Z · LW(p) · GW(p)
I think this is generally correct. I do wonder about a few points:
If I am operating on Crocker's Rules (I personally am not, mind, but hypothetically), and someone's attempt to convey information to me has obvious room for improvement, is it ever permissible for me to let them know this? Given your decision theory point, my guess would be "yes, politely and privately," but I'm curious as to what others think as well. As a side note, I presume that if the other person is also operating by Crocker's Rules, you can say whatever you like back.
Replies from: cousin_it↑ comment by cousin_it · 2011-12-27T18:54:17.460Z · LW(p) · GW(p)
someone's attempt to convey information to me has obvious room for improvement
Do you mean improvement of the information content or the tone? If the former, I think saying "your comment was not informative enough, please explain more" is okay, both publicly and privately. If the latter, I think saying "your comment was not polite enough" is not okay under the spirit of Crocker's rules, neither publicly nor privately, even if the other person has declared Crocker's rules too.
Replies from: dlthomas↑ comment by dlthomas · 2011-12-27T18:59:01.428Z · LW(p) · GW(p)
When these things are orthogonal, I think your interpretation is clear, and when information would be obscured by politeness the information should win - that's the point of Crocker's Rules. What about when information is obscured by deliberate impoliteness? Does the prohibition on criticizing impoliteness win, or the permission to criticize lack of clarity? In any case, if the other person is not themselves operating by Crocker's Rules, it is of course important that your response be polite, whatever it is.
Replies from: wedrifid↑ comment by wedrifid · 2011-12-27T19:37:15.397Z · LW(p) · GW(p)
What about when information is obscured by deliberate impoliteness?
Basically, no. If you want to criticize people for being rude to you, just don't operate by Crocker's rules. Make up different ones.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T20:13:40.238Z · LW(p) · GW(p)
Question: do Crocker's rules work differently here than I'm used to? I'm used to a communication style where people say things to get the point across, even though such things would be considered rude in typical society, not for being insulting but for pointless reasons, and we didn't do pointless things just to be typical. We were bluntly honest with each other, even (actually especially) when people were wrong (after all, it was kind of important that we convey that information accurately, completely and as quickly as possible in some cases), but to be deliberately insulting when information could have been just as easily conveyed some other way (as opposed to when it couldn't be), or to be insulting without adding any useful information at all, was quite gauche. At one point someone mentioned that if we wanted to invoke that in normal society, we should say we were under Crocker's rules.
So it looks like the possibilities worth considering are:
- Someone LIED just to make it harder for us to fit in with normal society!
- Someone was just wrong.
- You're wrong.
- Crockering means different things to different people.
Which do you think it is?
Replies from: wedrifid, Emile, thomblake↑ comment by wedrifid · 2011-12-27T22:09:37.619Z · LW(p) · GW(p)
- cousin_it's comment doesn't leave much room for doubt.
- Declaring Crocker's rules and then shaming and condescending when people do not meet your standard of politeness could legitimately be considered a bait-and-switch, a manipulative social ploy.
- I didn't consider Crocker's rules at all when reading nyan's comment and it still didn't seem at all inappropriate. You being outraged at the 'vulgarity' of the phrase "damsel in distress crap" is a problem with your excess sensitivity and not with the phrase. As far as I'm concerned "damsel in distress crap" is positively gentle. I would have used "martyrdom bullshit" (but then I also use bullshit as a technical term).
- Crocker's rules are about how people speak to you. But for all that it is a reply about your comment, nyan wasn't even talking to you. He was talking to the lesswrong readers, warning them about perceived traps they are falling into when engaging with your comment.
- Like it or not, people tend to reciprocate disrespect with disrespect. While you kept your comment superficially civil and didn't use the word 'crap', you did essentially call everyone here a bunch of sexist, Christian-hating bullies. Why would you expect people to be nice to you when you treat them like that?
↑ comment by Emile · 2011-12-27T20:24:29.371Z · LW(p) · GW(p)
The impression I have is that calling Crocker's rules means never acting offended or angry at the way people talk to you, with the expectation that you'll get more information if people don't censor themselves out of politeness.
Some of your reactions here are not those I expect from someone under Crocker's rules (who would just ignore anything insulting or offensive).
So maybe what you consider as "Crocker's rules" is what most people here would consider "normal" discussion, so when you call Crocker's rules, people are extra rude.
I would suggest just dropping the reference to Crocker's rules; I don't think they're necessary for having a reasonable discussion, and they put pressure on the people you're talking to to either call Crocker's rules too (giving you carte blanche to be rude to them) or else look uptight or something.
Replies from: AspiringKnitter, dlthomas↑ comment by AspiringKnitter · 2011-12-27T21:14:26.613Z · LW(p) · GW(p)
So maybe what you consider as "Crocker's rules" is what most people here would consider "normal" discussion, so when you call Crocker's rules, people are extra rude.
Possible. I'm inexperienced in talking with neurotypicals. All I know is what was drilled into me by them, which is basically a bunch of things of the form "don't ever convey this piece of information because it's rude" (where the piece of information is like... you have hairy arms, you're wrong, I don't like this food, I don't enjoy spending time with you, this gift was not optimized for making me happy-- and the really awful, horrible dark side where they feel pressured never to say certain things to me, like that I'm wrong, they're annoyed by something I'm doing, I'm ugly, I sound stupid, my writing needs improvement-- it's horrible to deal with people who never say those things because I can never assume sincerity, I just have to assume they're lying all the time) that upon meeting other neurodiverse people I immediately proceeded to forget all about. And so did they. And THAT works out well. It's accepted within that community that "Crocker's rules" is how the rest of the world will refer to it.
Anyway, if I'm not allowed to hear the truth without having to listen to whatever insults anyone can come up with, then so be it; I really want to hear the truth and I know it will never be given to me otherwise. But there IS supposed to be something between "you are not allowed to say anything to me except that I'm right about everything and the most wonderful special snowflake ever" and "insult me in every way you can think of", even if the latter is still preferable to the former. (Is this community a place with a middle ground? I didn't think such a place existed. If it is, I'll gladly go by the normal rules of discussion here.)
Replies from: TheOtherDave, Emile↑ comment by TheOtherDave · 2011-12-27T21:50:29.186Z · LW(p) · GW(p)
My experience of LW is that:
- the baseline interaction mode would be considered rude-but-not-insulting by most American subcultures, especially neurotypical ones
- the interaction mode invoked by "Crocker's rules" would be considered insulting by most American subcultures, especially neurotypical ones
- there's considerable heterogeneity in terms of what's considered unacceptably rude
- there's a tentative consensus that dealing with occasional unacceptable rudeness is preferable to the consequences of disallowing occasional unacceptable rudeness, and
- the community pushes back on perceived attempts to enforce politeness far more strongly than it pushes back on perceived rudeness.
Dunno if any of that answers your questions.
I would also say that nobody here has come even remotely close to "insult in every conceivable way" as an operating mode.
Replies from: daenerys, thomblake, wedrifid↑ comment by daenerys · 2011-12-28T02:41:13.467Z · LW(p) · GW(p)
the baseline interaction mode would be considered rude-but-not-insulting by most American subcultures, especially neurotypical ones
the community pushes back on perceived attempts to enforce politeness far more strongly than it pushes back on perceived rudeness.
YES!
There seem to be a lot of new people introducing themselves on the Welcome thread today/yesterday. I would like to encourage everyone to maybe be just a tad bit more polite, and cognizant of the Principle of Charity, at least for the next week or two, so all our newcomers can acclimate to the culture here.
As someone who has only been on this site for a month or two (also as an NT, socially-skilled female), I have spoken in the past about my difficulties dealing with the harshness here. I ended up deciding not to fight it, since people seem to like it that way, and that's ok. But I do think the community needs to be aware that this IS in fact an issue that new (especially NT) people are likely to shy away from, and may even leave or just not post because of.
tl;dr- I deal with the "rudeness", but want people to be aware that it does in fact exist. Those of us who dislike it have just learned to keep our mouths shut and deal with it. There are a lot of new people now, so try to soften it for the next week or two.
(Note: I have not been recently down-voted, flamed, or crushed, so this isn't just me raging.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-28T03:29:36.184Z · LW(p) · GW(p)
I'm unlikely to change my style of presentation here as a consequence of new people arriving, especially since I find it unlikely that the wave of introductions reflects an actual influx of new people, as opposed to an influx of activity on the Welcome threads making the threads more visible and inspiring introductions.
If my presentation style is offputting to new people who prefer a different style, I agree that's unfortunate. I'm not sure that dealing with that by changing my style for their benefit -- supposing they even benefit from it -- is better.
Replies from: daenerys↑ comment by daenerys · 2011-12-28T03:42:42.453Z · LW(p) · GW(p)
You are correct, in that I do believe that many of the introductions here are from people who have been lurking a long time, but are following the principle of social proof, and just introducing themselves now that everyone else is.
However, I do think that once they have gone through the motions of setting up an account and publishing their introduction, self-consistency will lead them to continue to be more active on this site; they have just changed their self-image to that of "Member of LW", after all!
As for your other supposition -- that they might not benefit from it -- I will tell you that I have almost quit LW many times in the past month, and it is only a lack of anything better out there that has kept me here.
My assumption is that you are OK with this, and feel that people that can't handle the heat should get out of the kitchen anyway, so to speak.
I think that is a valid point, IFF you want to maintain LW as it currently stands. I will admit that my preferences are different in that I hope LW grows and gets more and more participants. I also hope that this growth causes LW to be more "inclusive" and have a higher percentage of females (gender stereotyping here, sorry) and NTs, which will in effect lower the harshness of the site.
So I think our disagreement doesn't stem from "bad" rationality on either of our parts. It's just that we have different end-goals.
Replies from: Prismattic, TheOtherDave↑ comment by Prismattic · 2011-12-29T01:02:54.267Z · LW(p) · GW(p)
I am going to share with you a trick that is likely to make staying here (or anywhere else with some benefit) easier...
Prismattic's guaranteed (or your money back) method for dealing with stupid or obnoxious text on the Internet:
Read the problematic material as though it is being performed by Gonzo's chickens, to the tune of the William Tell Overture.
When this gets boring, you can alternate with reading it as performed by the Swedish chef, to the tune of Ride of the Valkyries.
Really, everything becomes easier to bear when filtered this way. I wish separating out emotional affect was as easy in tense face-to-face situations.
↑ comment by TheOtherDave · 2011-12-28T03:51:10.086Z · LW(p) · GW(p)
Can you confirm that you're actually responding to what I wrote?
If so, can you specify what it is about my presentation style that has encouraged you to almost quit?
Replies from: daenerys↑ comment by daenerys · 2011-12-28T04:07:40.140Z · LW(p) · GW(p)
I'm sorry, I did not want to imply that you specifically made me want to quit. In all honesty, the lack of visual avatars means I can't keep LW users straight at all.
But since you seem to be asking about your presentation style, here is me re-writing your previous post in a way that is optimized for a conversation I would enjoy, without feeling discomfort.
Original:
I'm unlikely to change my style of presentation here as a consequence of new people arriving, especially since I find it unlikely that the wave of introductions reflects an actual influx of new people, as opposed to an influx of activity on the Welcome threads making the threads more visible and inspiring introductions.
If my presentation style is offputting to new people who prefer a different style, I agree that's unfortunate. I'm not sure that dealing with that by changing my style for their benefit -- supposing they even benefit from it -- is better.
How I WISH LW operated (and realize that 95% of you do not wish this)
I agree that it's unfortunate that the style of LW posts may drive new users away, especially if they would otherwise enjoy the site and become valuable participants. However, I don't plan on updating my personal writing style here.
My main reason for this is that I find it unlikely that the wave of introductions reflects an actual influx of new people, as opposed to an influx of activity on the Welcome threads making the threads more visible and inspiring introductions.
I am also unsure if changing my writing style would actually help these newcomers in the long run. Or even if it did, would I prefer a LW that is watered-down, but more accessible? (my interpretation of what you meant by "better")
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-28T04:22:06.633Z · LW(p) · GW(p)
I asked about my presentation style because that's what I wrote about in the first place, and I couldn't tell whether your response to my comment was actually a response to what I wrote, or some more general response to some more general thing that you decided to treat my comment as a stand-in for.
I infer from your clarification that it was the latter. I appreciate the clarification.
Your suggested revision of what I said would include several falsehoods, were I to have said it.
Replies from: daenerys↑ comment by daenerys · 2011-12-28T04:41:12.901Z · LW(p) · GW(p)
Your suggested revision of what I said would include several falsehoods, were I to have said it.
I had to fill in some interpretations of what I thought you could have meant. If what I filled in was false, it is just that I do not know your mind as well as you do. If I did, I could fill in things that were true.
Politeness does not necessarily require falsity. Your post lacked the politeness parts, so I had to fill in politeness parts that I thought sounded like reasonable things you might be thinking. Were you trying to be polite, you could fill in politeness parts with things that were actually true for you (and not just my best guesses).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-28T05:01:44.459Z · LW(p) · GW(p)
I agree that politeness does not require falsity.
I infer from your explanation that your version of politeness does require that I reveal more information than I initially revealed. Can you say more about why?
↑ comment by thomblake · 2011-12-27T21:54:59.457Z · LW(p) · GW(p)
I would also say that nobody here has come even remotely close to "insult in every conceivable way" as an operating mode.
I should hope not. I can conceive of more ways to insult than I can type in a day, depending on how we want to count 'ways'.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-27T22:01:21.807Z · LW(p) · GW(p)
How do I insult thee? Let me count the ways.
I insult thee to the depth and breadth and height
My mind can reach, when feeling out of sight
For the lack of Reason and the craft of Bayes.
↑ comment by J_Taylor · 2011-12-28T03:46:13.967Z · LW(p) · GW(p)
Turning and turning in the narrowing spiral
The user cannot resist those memes which are viral;
The waterline is lowered; beliefs begin to cool;
Mere tribalism is loosed, upon Lesswrong's school,
The grey-matter is killed, and everywhere
The knowledge of one's ignorance is drowned;
The best lack all conviction, while the worst
Are full of passionate intensity.
Replies from: Nornagest↑ comment by Nornagest · 2011-12-28T04:00:52.294Z · LW(p) · GW(p)
Heh. I'm not sure why you felt compelled to rhyme there, though; Yeats didn't.
Replies from: J_Taylor↑ comment by J_Taylor · 2011-12-28T04:16:12.028Z · LW(p) · GW(p)
I must confess, I have never actually heard the words 'gyre' and 'falconer'. I assumed they could be pronounced in such a way that it would sound like a rhyme. In my head, they both were pronounced like 'hear'. Likewise, I assumed one could pronounce 'world' and 'hold' in such a way that they could sort-of rhyme. In my head, 'hold' was pronounced 'held' and 'world' was pronounced 'weld.'
http://www.youtube.com/watch?v=OEunVObSnVM
Apparently, this is not the case. Oops.
↑ comment by wedrifid · 2011-12-27T22:22:56.145Z · LW(p) · GW(p)
I would also say that nobody here has come even remotely close to "insult in every conceivable way" as an operating mode.
Although I must admit I was tempted to take it up as a novel challenge just to demonstrate how absurd the hyperbole was.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-02T01:28:41.141Z · LW(p) · GW(p)
Returning to this... if you're still tempted, I'd love to see your take on it. Feel free to use me as a target if that helps your creativity, though I'm highly unlikely to take anything you say in this mode seriously. (That said, using a hypothetical third party would likely be emotionally easier.)
Unrelatedly: were you the person who had the script that sorts and displays all of a user's comments? I've changed computers since being handed that pointer and seem to have misplaced it.
Replies from: gwern↑ comment by gwern · 2012-01-02T01:31:26.842Z · LW(p) · GW(p)
No, that'd be Wei Dai, I think; eg. I recently used http://www.ibiblio.org/weidai/lesswrong_user.php?u=Eliezer_Yudkowsky to point out that Eliezer has more than one negative comment (contra the cult leader accusation).
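(For anyone who wants to query that page from a script, here is a minimal sketch. It assumes only the base URL and the `u` query parameter visible in the link above; the function name and the printout are illustrative, not part of Wei Dai's script.)

    # Minimal sketch: fetch the HTML page generated by Wei Dai's script for a
    # given user. Only the base URL and the `u` query parameter come from the
    # comment above; everything else here is illustrative.
    import requests

    def fetch_user_comments_page(username):
        url = "http://www.ibiblio.org/weidai/lesswrong_user.php"
        resp = requests.get(url, params={"u": username})
        resp.raise_for_status()  # fail loudly on HTTP errors
        return resp.text  # raw HTML; search or parse it however you like

    if __name__ == "__main__":
        html = fetch_user_comments_page("Eliezer_Yudkowsky")
        print(html[:500])  # peek at the start of the page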
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-02T01:35:11.441Z · LW(p) · GW(p)
Hah! Awesome. Thank you!
↑ comment by Emile · 2011-12-27T23:14:12.729Z · LW(p) · GW(p)
You might like this comment.
↑ comment by dlthomas · 2011-12-27T20:43:01.009Z · LW(p) · GW(p)
[T]hey put pressure on the people you're talking to to either call Crocker's rules too (giving you carte blanche to be rude to them) or else look uptight or something.
This should be strongly rejected, if Crocker's Rules are ever going to do more good than harm. I do not mean that it is not the case given existing norms (I simply do not know one way or the other), but that norms should be established such that this is clearly not the case. Someone who is unable to operate according to Crocker's Rules attempting to do so does not improve discourse or information flow - no one should be pressured into it.
Replies from: Emile↑ comment by Emile · 2011-12-27T23:11:24.686Z · LW(p) · GW(p)
I agree with you in the abstract.
The problem is, the more a community is likely to consider X a "good" practice, the more it is likely to think less of those who refuse to do X, whatever X is; so I don't see a good way of avoiding negative connotations to "unable to operate according to Crocker's Rules".
... that is, unless the interaction is not symmetric, so that when one side announces Crocker's rules, there is no implicit expectation that the other side should do the same (with the associated status threat); for example if on my website I mention Crocker's rules next to the email form or something.
But in a peer-to-peer community like this, that expectation is always going to be implicit, and I don't see a good way to make it disappear.
Replies from: TheOtherDave, dlthomas↑ comment by TheOtherDave · 2011-12-28T00:47:54.204Z · LW(p) · GW(p)
Well, here's me doing my part: I don't declare Crocker's rules, and am unlikely to ever do so. Others can if they wish.
Replies from: dlthomas, wedrifid↑ comment by dlthomas · 2011-12-28T01:00:29.634Z · LW(p) · GW(p)
As I've mentioned before, I am not operating by Crocker's rules. I try to be responsible for my emotional state, but realize that I'm not perfect at this, so tell me the truth but there's no need to be a dick about it. I am not unlikely, in the future, to declare Crocker's rules with respect to some specific individuals and domains, but doing so globally is unlikely in the foreseeable future.
↑ comment by wedrifid · 2011-12-28T01:15:39.400Z · LW(p) · GW(p)
Here's my part too: I don't declare Crocker's rules and do not commit to paying any heed to whether others have declared Crocker's rules. I'll speak to people however I see fit - which will include taking into account the preferences of both the recipient and any onlookers to precisely the degree that seems appropriate or desirable at the time.
↑ comment by dlthomas · 2011-12-27T23:18:46.554Z · LW(p) · GW(p)
I don't know about getting rid of it entirely, but we can at least help by stressing the importance of the distinction, and choosing to view operation by Crocker's rules as rare, difficult, unrelated to any particular discussion, and as conferring only a minor status boost.
Another approach might be to make all Crocker communication private, and expect polite (enough) discourse publicly.
↑ comment by thomblake · 2011-12-27T20:21:54.482Z · LW(p) · GW(p)
Wikipedia and Google seem to think Eliezer is the authority on Crocker's Rules. Quoting Eliezer on sl4 via Wikipedia:
Anyone is allowed to call you a moron and claim to be doing you a favor.
Also, from our wiki:
The underlying assumption is that rudeness is sometimes necessary for effective conveyance of information, if only to signal a lack of patience or tolerance: after all, knowing whether the speaker is becoming angry or despondent is useful rational evidence.
Looking hard for another source, something called the DoWire Wiki has this unsourced:
By invoking these Rules, the recipient declares that s/he does not care about, and some hold that s/he gives up all right to complain about and must require others not to complain about, any level of emotional provocation, flames, abuse of any kind.
So if anyone is using Crocker's Rules a different way, I think it's safe to say they're doing it wrong, but only by definition. Maybe someone should ask Crocker, if they're concerned.
↑ comment by TheOtherDave · 2011-12-27T03:51:22.321Z · LW(p) · GW(p)
OK.
FWIW, I agree that nyan_sandwich's tone was condescending, and that they used vulgar words.
I also think "I suppose they can't be expected to behave any better, we should praise them for not being completely awful" is about as condescending as anything else that's been said in this thread.
↑ comment by AspiringKnitter · 2011-12-27T03:58:11.599Z · LW(p) · GW(p)
Yeah, you're probably right. I didn't mean for that to come out that way (when I used to spend a lot of time on places with low standards, my standards were lowered, too), but that did end up insulting. I'm sorry, nyan_sandwich.
↑ comment by thomblake · 2011-12-27T17:03:49.562Z · LW(p) · GW(p)
Crocker's rules don't say "explain things in an insulting way", they say "don't soften the truths you speak to me". You can optimize for information-- and even get it across better-- when you're not trying to be rude.
A lot of intelligent folks have to spend a lot of energy trying not to be rude, and part of the point of Crocker's Rules is to remove that burden by saying you won't call them on rudeness.
Replies from: TimS↑ comment by TimS · 2011-12-27T17:26:33.055Z · LW(p) · GW(p)
Not all politeness is inconsistent with communicating truth. I agree that "Does this dress make me look fat" has a true answer and a polite answer. It's worth investing some attention into figuring out which answer to give. Often, people use questions like that as a trap, as mean-spirited or petty social and emotional manipulation. Crocker's Rule is best understood as a promise that the speaker is aware of this dynamic and explicitly denies engaging in it.
That doesn't license being rude. If you are really trying to help someone else come to a better understanding of the world, being polite helps them avoid cognitive biases that would prevent them from thinking logically about your assertions. In short, Crocker's Rule does not mean "I don't mind if you are intentionally rude to me." It means "I am aware that your assertions might be unintentionally rude, and I will be guided by your intention to inform rather than interpreting you as intentionally rude."
Replies from: thomblake↑ comment by thomblake · 2011-12-27T17:39:35.598Z · LW(p) · GW(p)
In short, Crocker's Rule does not mean "I don't mind if you are intentionally rude to me." It means "I am aware that your assertions might be unintentionally rude, and I will be guided by your intention to inform rather than interpreting you as intentionally rude."
Right, I wasn't saying anything that contradicted that. Rather, some of us have additional cognitive burden in general trying to figure out if something is supposed to be rude, and I always understood part of the point of Crocker's Rules to be removing that burden so we can communicate more efficiently. Especially since many such people are often worth listening to.
↑ comment by NancyLebovitz · 2011-12-27T03:09:34.443Z · LW(p) · GW(p)
For what it's worth, I generally see some variant of "please don't flame me" attached only to posts which I'd call inoffensive even without it. I'm not crazy about seeing "please don't flame me", but I write it off to nervousness and don't blame people for using it.
Caveat: I'm pretty sure that "please don't flame me" won't work in social justice venues.
↑ comment by Jonii · 2011-12-26T01:50:40.256Z · LW(p) · GW(p)
I had missed this. The original post read as really weird and hostile, but I only read it after having heard about this thread indirectly for days, mostly about how she later seemed pretty intelligent, so I dismissed what I saw and substituted what I ought to have seen. Thanks for pointing this out.
Upvoted
↑ comment by [deleted] · 2011-12-25T21:03:46.169Z · LW(p) · GW(p)
I disagree. It's an honest expression of feeling, and a reasonable statement of expectations, given LW's other run-ins with self-identified theists. It may be a bit overstated, but not terribly much.
Replies from: DSimon, None↑ comment by DSimon · 2011-12-25T21:11:51.429Z · LW(p) · GW(p)
Do you really think it's only a bit overstated? I mean, has anybody been banned for being religious? And has anybody here indicated that they hate Christians without immediately being called on falling into blue vs. green thinking?
Replies from: Suryc11, Kaj_Sotala, None↑ comment by Suryc11 · 2011-12-25T21:20:58.820Z · LW(p) · GW(p)
Okay, ready to be shouted down. I'll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?) I'll probably just leave soon anyway. Nothing good can come of this. I don't know why I'm doing this. I shouldn't be here; you don't want me here, not to mention I probably shouldn't bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
From her other posts, AspiringKnitter strikes me as being open-minded and quite intelligent, but that last paragraph really irks me. It's self-debasing in an almost manipulative way - as if she actually wants us to talk to her like we "only want [her] to hate God" or as if we "really hate Christians". Anybody who has spent any non-trivial amount of time on LW would know that we certainly don't hate people we disagree with, at least to the best of my knowledge, so asserting that is not a charitable or reasonable expectation. Plus, it seems that it would now be hard(er) to downvote her because she specifically said she expects that, even given a legitimate reason to downvote.
Replies from: None↑ comment by Kaj_Sotala · 2011-12-26T10:35:04.823Z · LW(p) · GW(p)
Well, some of Eliezer's posts about religion and religious thought have been more than a little harsh. (I couldn't find it, but there was a post where he said something along the lines of "I have written about religion as the largest imaginable plague on thinking...") They didn't explicitly say that religious people are to be scorned, but it's very easy to read in that implication, especially since many people who are equally vocal about religion being bad do hold that opinion.
↑ comment by [deleted] · 2011-12-25T21:17:41.213Z · LW(p) · GW(p)
Banned? Not that I know of. But there have certainly been Christians who have been serially downvoted, perhaps more than they deserved.
"Hate" may be too strong a word, but the original poster's meaning seems to lean closer to "openly intolerant", which is true and partially justified.
EDIT: Looking back, the original poster was asking if they would be banned, not claiming so. So that doesn't seem to be a valid criticism.
↑ comment by [deleted] · 2011-12-25T21:36:39.812Z · LW(p) · GW(p)
Being honest and having reasonable expectations of being treated like a troll does not disqualify a post from being a troll.
Hello, I expect you won't like me, I'm
Classic troll opening. Challenges us to take the post seriously. Our collective 'manhood' is threatened if we react normally (eg saying "trolls fuck off").
don't want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should
Insulting straw man with a side of "you are an irrational cult".
I've been lurking for a long time... overcoming bias... sequences... HP:MOR... namedropping
"Seriously, I'm one of you guys". Concern troll disclaimer. Classic.
evaporative cooling... women... I'm here to help you not be a cult.
Again undertones of "you are a cult and you must accept my medicine or turn into a cult". Again we are challenged to take it seriously.
I just espoused, it'll raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world.
I didn't quite understand this part, but again, straw man caricature.
I'd rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don't agree with Eliezer Yudkowsky. Not that any of you (especially EY) WANT that, exactly. But anyway, my point is, With Folded Hands is a pretty bad failure mode for the worst-case scenario where EC occurs and EY gets to AI first.
There's a rhetorical meme on 4chan that elegantly deals with this kind of crap:
>implying we don't care about friendliness
>implying you know more about friendliness than EY
'nuff said
Okay, ready to be shouted down. I'll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all.
classic reddit downvote preventer:
- Post a troll or other worthless opinion
- Imply that the hivemind won't like it
- Appeal to people's fear of hivemind
- Collect upvotes.
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
again implying irrational insider/outsider dynamic, hivemind tendencies and even censorship.
Of course the kneejerk response is "no no, we don't hate you and we certainly won't censor you; please we want more christian trolls like you"
I'll probably just leave soon anyway. Nothing good can come of this. I don't know why I'm doing this. I shouldn't be here; you don't want me here, not to mention I probably shouldn't bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
And top it off with a bit of sympathetic character, damsel-in-distress crap. EDIT: Oh, and the bit about hating God is a straw man. /EDIT
This is not necessarily deliberate, but it doesn't have to be.
Trolling is a art. And Aspiring_Knitter is a artist. 10/10.
Replies from: DSimon↑ comment by DSimon · 2011-12-25T21:44:28.381Z · LW(p) · GW(p)
I've been lurking for a long time... overcoming bias... sequences... HP:MOR... namedropping
"Seriously, I'm one of you guys". Concern troll disclaimer. Classic.
I don't follow how indicating that she's actually read the site can be a mark against her. If the comment had not indicated familiarity with the site content, would you then describe it as less trollish?
Replies from: None↑ comment by [deleted] · 2011-12-25T21:48:32.717Z · LW(p) · GW(p)
It's a classic troll technique. It's not independent of the other trollish tendencies. Alone, saying those things does not imply troll, but in the presence of other troll-content it is used to raise perceived standing and lower the probability that they are a troll.
EDIT: and yes, trollish opinions without trollish disclaimers raise probability of plain old stupidity.
EDIT2: Have to be very careful with understanding the causality of evidence supplied by hostile agents. What Evidence Filtered Evidence, and so on.
↑ comment by MixedNuts · 2012-01-05T20:05:33.290Z · LW(p) · GW(p)
So... voicing disagreement boldly is trolling, voicing it nervously is trolling and trying to prevent being called out. Signalling distance from the group is trolling and accusations of hive mind, signalling group membership is trolling and going "Seriously, I'm one of you guys". Joking about the image a group's ideas have, in the same way the group itself does, is straw-manning and caricature; seriously worrying about those ideas is damsel-in-distress crap.
Okay, so I see the bits that are protection against being called a troll. What I don't see is the trolling. Is it "I'm a Christian"? If you think all Christians should pretend to be atheists... well, 500 responses disagree with you. Is it what you call straw men? I read those as jokes about what we look like to outsiders, but even if they're sincere, they're surrounded with so much display of uncertainty that "No, that's not what we think." should end it then and there. And if AspiringKnitter were a troll, why would she stop trolling and write good posts right after that?
Conclusion: You fail the principle of charity forever. You're a jerk. I hope you run out of milk next time you want to eat cereal.
Replies from: wedrifid, None↑ comment by wedrifid · 2012-01-05T20:48:06.385Z · LW(p) · GW(p)
So... voicing disagreement boldly is trolling, voicing it nervously is trolling and trying to prevent being called out. Signalling distance from the group is trolling and accusations of hive mind, signalling group membership is trolling and going "Seriously, I'm one of you guys". Joking about the image a group's ideas have, in the same way the group itself does, is straw-manning and caricature; seriously worrying about those ideas is damsel-in-distress crap.
Deliberate, active straw-manning sarcasm for the purpose of giving insult and conveying contempt.
What I don't see is the trolling.
Yes, trolling is distinguished from what nyan called "troll-bait" by, for the most part, duration. Trolls don't stop picking fights and seem to thrive on the conflict they provoke. If nyan tried to claim that AspiringKnitter was a troll in general - and failed to update on the evidence from after this comment - he would most certainly be wrong.
Conclusion: You fail the principle of charity forever.
He wasn't very charitable in his comment; I certainly would have phrased criticism differently (and directed most of it at those encouraging the damsel-in-distress crap). But for your part you haven't failed the principle of charity - you have failed to parse language correctly and respond to the meaning contained therein.
You're a jerk. I hope you run out of milk next time you want to eat cereal.
This is not ok.
Replies from: Alicorn↑ comment by Alicorn · 2012-01-05T21:06:56.013Z · LW(p) · GW(p)
You're a jerk. I hope you run out of milk next time you want to eat cereal.
This is not ok.
The cereal thing is comically mild. The impulse to wish bad things on others is a pretty strong one and I think it's moderated by having an outlet to acknowledge that it's silly in this or maybe some other way - I'd rather people publicly wish me to run out of milk than privately wish me dead.
Replies from: wedrifid, MixedNuts, None↑ comment by wedrifid · 2012-01-05T21:20:53.552Z · LW(p) · GW(p)
The cereal thing is comically mild. The impulse to wish bad things on others is a pretty strong one and I think it's moderated by having an outlet to acknowledge that it's silly in this or maybe some other way
Calling nyan a jerk in that context wasn't ok with me and nor was any joke about wanting harm to come upon him. It was unjustified and inappropriate.
- I'd rather people publicly wish me to run out of milk than privately wish me dead.
I don't much care what MixedNuts wants to happen to nyan. The quoted combination of words constitutes a status transaction of a kind I would see discouraged. Particularly given that we don't allow reciprocal personal banter of the kind this sort of insult demands. If, for example, nyan responded with a pun on a keyword and a reference to Mixed's sister, we wouldn't allow it. When insults cannot be returned in kind the buck stops with the first personal insult. That is, Mixed's.
Replies from: TheOtherDave, daenerys, None↑ comment by TheOtherDave · 2012-01-05T22:14:01.474Z · LW(p) · GW(p)
When insults cannot be returned in kind the buck stops with the first personal insult.
This is admirably compelling.
↑ comment by daenerys · 2012-01-05T21:54:13.070Z · LW(p) · GW(p)
Calling nyan a jerk in that context wasn't ok with me and nor was any joke about wanting harm to come upon him. It was unjustified and inappropriate.
Upvoted.
I am happy that someone other than me gets upset when they see these "jokes" on here.
(I also downvoted the "jerk" comment)
↑ comment by [deleted] · 2012-01-05T22:07:49.871Z · LW(p) · GW(p)
wanting harm to come upon him
[emphasis mine]. You assume that nyan is male. Where did "he" say that? nyan explicitly claims to be a "genderless internet being" in the introductions thread.
Last LW survey came out with 95% male, IIRC. 95% sure of something is quite strong. nyan called Aspiring_Knitter a troll on much less solid evidence. Also, you come from the unfortunate position of not having workable genderless pronouns.
I'll allow it.
Replies from: wedrifid↑ comment by wedrifid · 2012-01-05T23:03:31.534Z · LW(p) · GW(p)
[emphasis mine]. You assume that nyan is male. Where did "he" say that? nyan explicitly claims to be a "genderless internet being" in the introductions thread.
That's fair. I used male because you sounded more like a male - and still do. If you are a genderless internet being then I will henceforth refer to you as an 'it'. If you were a genderless human I would use the letter 'v' followed by whatever letters seem to fit the context.
↑ comment by [deleted] · 2012-01-05T21:17:51.329Z · LW(p) · GW(p)
I'd rather people publicly wish me to run out of milk than privately wish me dead.
Well, who knows what MixedNuts wishes? Wishing wedrifid runs out of milk doesn't exclude this latter possibility.
I'm also reminded, of all the silly things, of (the overwhelmingly irrational) Simone Weil:
If someone does me an injury I must desire that this injury shall not degrade me. I must desire this out of love for him who inflicts it, in order that he may not really have done evil.
Replies from: MixedNuts↑ comment by [deleted] · 2012-01-05T21:46:32.802Z · LW(p) · GW(p)
Delicious controversy. Yum. I might have a lulz-relapse and become a troll.
So... voicing disagreement boldly is trolling, voicing it nervously is trolling and trying to prevent being called out. Signalling distance from the group is trolling and accusations of hive mind, signalling group membership is trolling and going "Seriously, I'm one of you guys". Joking about the image a group's ideas have, in the same way the group itself does, is straw-manning and caricature; seriously worrying about those ideas is damsel-in-distress crap.
Burn the witch!
Disagreement is not trolling. Neither is nervous disagreement. The hivemind thing had nothing to do with status signaling; it was about the reader's insecurity. The group membership/cultural knowledge signaling thing is almost always used as a delivery vector for an ignoble payload.
They didn't look like jokes or uncertainty to me. I am suddenly gripped by a mortal fear that I may not have a sense of humor. The damsel in distress thing was unconnected to the ideas thing.
TL;DR: what wedrifid said.
Okay, so I see the bits that are protection against being called a troll. What I don't see is the trolling. Is it "I'm a Christian"? If you think all Christians should pretend to be atheists... well, 500 responses disagree with you. Is it what you call straw men? I read those as jokes about what we look like to outsiders, but even if they're sincere, they're surrounded with so much display of uncertainty that "No, that's not what we think." should end it then and there. And if AspiringKnitter were a troll, why would she stop trolling and write good posts right after that?
Again, they still don't look like jokes. If everyone else decides they were jokes, I will upmod my belief that I am a humorless internet srs-taker. EDIT: Oh, I forgot to address the claim that AspiringKnitter is not a troll. It has been observed, in the long history of the internet, that sometimes a person skilled in the trolling arts will post a masterfully crafted troll-bait, and then decide to forsake their lulzy crusade for unknown reasons. /EDIT
I hope you run out of milk next time you want to eat cereal.
The joke is on you. nyan_sandwich's human alter-ego doesn't eat cereal.
nyan_sandwich may have been stricken with a minor case of confirmation bias when they made that assessment, but I think it still stands.
↑ comment by Nornagest · 2011-12-19T09:04:51.447Z · LW(p) · GW(p)
That's some interesting reasoning. I've met people before who avoided leaving an evaporatively cooling group because they recognized the process and didn't want to contribute to it, but you might be the first person I've encountered who joined a group to counteract it (or to stave it off before it begins, given that LW seems to be both growing and to some extent diversifying right now). Usually people just write groups like that off -- aside from the odd troll or ideologue who claims similar motivations but is really just looking for a fight, at least -- but that doesn't seem to fit what you've written here.
Anyway. I'm not going to pretend that you aren't going to find some hostility towards Abrahamic religion here, nor that you won't be able to find any arguably problematic (albeit mostly unconsciously so) attitudes regarding sex and/or gender. Act as your conscience dictates should you find either one intolerable. Speaking for myself, though, I take the Common Interest of Many Causes concept seriously: better epistemology is good for everyone, not just for transhumanists of a certain bent. Your belief structure might differ somewhat from the tribal average around here, but the actual goal of this tribe is to make better thinkers, and I don't think anyone's going to want to exclude you from that as long as you approach it in good faith.
In fewer words: welcome to Less Wrong.
↑ comment by juliawise · 2011-12-19T11:44:14.938Z · LW(p) · GW(p)
Hi, Aspiring Knitter. I also find the Less Wrong culture and demographics quite different from my normal ones (being a female in the social sciences who's sympathetic to religion though not a believer. Also, as it happens, a knitter.) I stuck around because I find it refreshing to be able to pick apart ideas without getting written off as too brainy or too cold, which tends to happen in the rest of my life.
Sorry for the lack of persecution - you seem to have been hoping for it.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-19T18:30:07.202Z · LW(p) · GW(p)
Very glad not to be persecuted, actually. Yay!
↑ comment by Emile · 2011-12-19T09:44:12.029Z · LW(p) · GW(p)
Welcome to LessWrong!
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
Do we? Do you hate Hindus, or do you just think they're wrong?
One thing I slightly dislike about "internet atheists" is the exclusive focus on religion as a source of all that's wrong in the world, whereas you get very similar forms of irrationality in partisan politics or nationalism. I'm not alone in holding that view - see this for some related ideas. At best, religion can be about focusing humans' natural irrationality in areas that don't matter (cosmology instead of economics), while facilitating morality and cooperative behavior. I understand that some American atheists are more hostile to religion than I am (I'm French, religion isn't a big issue here, except for Islam), because they have to deal with religious stupidity on a daily basis.
Note that a Mormon wrote a series of posts that was relatively well received, so you may be overestimating LessWrong's hostility to religion.
↑ comment by CronoDAS · 2011-12-19T10:16:17.074Z · LW(p) · GW(p)
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
Technically, it's "Christianity" that some of us don't like very much. Many of us live in countries where people who call themselves "Christians" compose much of the population, and going around hating everyone we see won't get us very far in life. We might wish that they weren't Christians, but while we're dreaming we might as well wish for a pony, too.
And, no, we don't ban people for saying that they're Christians. It takes a lot to get banned here.
I shouldn't be here; you don't want me here, not to mention I probably shouldn't bother talking to people who only want me to hate God.
Well, so far you haven't given us much of a reason to want you gone. Also, people who call themselves atheists usually don't really care whether or not you "hate God" any more than they care about whether you "hate Santa Claus".
Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
Because you feel you have something you want to say?
Replies from: Alicorn, Bugmaster↑ comment by Ezekiel · 2011-12-26T11:04:07.848Z · LW(p) · GW(p)
Hi, AspiringKnitter!
There have been several openly religious people on this site, of varying flavours. You don't (or shouldn't) get downvoted just for declaring your beliefs; you get downvoted for faulty logic, poor understanding and useless or irrelevant comments. As someone who stopped being religious as a result of reading this site, I'd love for more believers to come along. My impulse is to start debating you right away, but I realise that'd just be rude. If you're interested, though, drop me a PM, because I'm still considering the possibility I might have made the wrong decision.
The evaporative cooling risk is worrying, now that you mention it... Have you actually noticed that happening here during your lurking days, or are you just pointing out that it's a risk?
Oh, and dedicating an entire paragraph to musing about the downvotes you'll probably get, while an excellent tactic for avoiding said downvotes, is also annoying. Please don't do that.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T02:13:08.454Z · LW(p) · GW(p)
As someone who stopped being religious as a result of reading this site, I'd love for more believers to come along.
Uh-oh. LOL.
My impulse is to start debating you right away, but I realise that'd just be rude.
Normally, I'm open to random debates about everything. I pride myself on it. However, I'm getting a little sick of religious debate after the last few days of participating in it. I suppose I still have to respond to a couple of people below, but I'm starting to fear a never-ending, energy-sapping, GPA-sabotaging argument where agreeing to disagree is literally not an option. It's my own fault for showing up here, but I'm starting to realize why "agree to disagree" was ever considered by anyone at all for anything, given its obvious wrongness: you just can't do anything if you spend all your time on a never-ending argument.
The evaporative cooling risk is worrying, now that you mention it... Have you actually noticed that happening here during your lurking days, or are you just pointing out that it's a risk?
Haven't been lurking long enough.
Oh, and dedicating an entire paragraph to musing about the downvotes you'll probably get, while an excellent tactic for avoiding said downvotes, is also annoying. Please don't do that.
In the future I will not. See below. Thank you for calling me out on that.
Replies from: TheOtherDave, Emile, Incorrect↑ comment by TheOtherDave · 2011-12-27T02:22:33.432Z · LW(p) · GW(p)
Talk of Aumann Agreement notwithstanding, the usual rules of human social intercourse that allow "I am no longer interested in continuing this discussion" as a legitimate conversational move continue to apply on this site. If you don't wish to discuss your religious beliefs, then don't.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T02:52:02.442Z · LW(p) · GW(p)
Ah, I didn't know that. I've never had a debate that didn't end with "we all agree, yay", some outside force stopping us, or everyone hating each other and hurling insults.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-27T03:30:24.254Z · LW(p) · GW(p)
Jeez. What would "we all agree, yay" even look like in this case?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T03:36:56.133Z · LW(p) · GW(p)
I suppose either I'd become an atheist or everyone here would convert to Christianity.
Replies from: Prismattic, NancyLebovitz, TheOtherDave, lessdazed↑ comment by Prismattic · 2011-12-27T04:57:57.157Z · LW(p) · GW(p)
The assumption that everyone here is either an atheist or a Christian is already wrong.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T05:01:11.540Z · LW(p) · GW(p)
Good point. Thank you for pointing it out.
↑ comment by NancyLebovitz · 2011-12-27T03:56:48.765Z · LW(p) · GW(p)
There are additional possibilities, like everyone agreeing on agnosticism or on some other religion.
Replies from: Ratheka↑ comment by TheOtherDave · 2011-12-27T04:13:56.411Z · LW(p) · GW(p)
Hm.
So, if I'm understanding you, you considered only four possible outcomes likely from your interactions with this site: everyone converts to Christianity, you get deconverted from Christianity, the interaction is forcibly stopped, or the interaction degenerates to hateful insults. Yes?
I'd be interested to know how likely you considered those options, and if your expectations about likely outcomes have changed since then.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T05:00:39.449Z · LW(p) · GW(p)
Well, for any given conversation about religion, yes. (Obviously, I expect different things if I post a comment about HP:MoR on that thread.)
I expected the last one, since internet discussions about anything important have a tendency to end that way no matter what I do. (And it's not just when I'm participating in them!) I considered any conversions highly unlikely and didn't really expect the interaction to be stopped.
My expectations have changed a lot. After a while I realized that hateful insults weren't happening very much here on Less Wrong, which is awesome, and that the frequency didn't seem to increase with the length of the discussion, unlike other parts of the internet. So I basically assumed the conversation would go on forever. Now, having been told otherwise, I realize that conversations can actually be ended by the participants without one of these things happening.
That was a failure on my part, though it would have correctly predicted a lot of the things I'd experienced in the past. I just took an outside view when an inside view would have been better, because it really is different this time. That failure is adequately explained by the use of the outside-view heuristic, which is usually useful, plus the fact that I ended up in a new situation that lacked the characteristics causing what I'd observed in the past.
↑ comment by lessdazed · 2011-12-27T16:32:52.786Z · LW(p) · GW(p)
Beliefs should all be probabilistic.
I think this rules out some and only some branches of Christianity, but more importantly it pushes us toward behaviorist criteria for any difference in kind between "atheists" and "Christians", if we really want categories like that.
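A minimal sketch of what "probabilistic beliefs" means in practice: a degree of belief updated by Bayes' rule rather than flipped between yes and no. The numbers are illustrative assumptions only, not anyone's actual priors.

```python
# Illustrative only: a credence in hypothesis H, nudged by evidence E.
prior = 0.5            # P(H): maximal uncertainty to start
p_e_given_h = 0.8      # P(E | H)
p_e_given_not_h = 0.2  # P(E | not H)

# Bayes' rule: P(H | E) = P(H) * P(E | H) / P(E)
p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
posterior = prior * p_e_given_h / p_e
print(posterior)  # 0.8 -- the belief moves by degrees; it never snaps to 0 or 1
```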
↑ comment by Emile · 2011-12-27T11:47:01.022Z · LW(p) · GW(p)
I'm starting to fear a never-ending, energy-sapping, GPA-sabotaging argument where agreeing to disagree is literally not an option.
There isn't a strong expectation here that people should never agree to disagree - see this old discussion, or this one.
That being said, persistent disagreement is a warning sign that at least one side isn't being perfectly rational (which covers both things like "too attached to one's self-image as a contrarian" and things like "doesn't know how to spell out explicitly the reasons for one's belief").
↑ comment by Incorrect · 2011-12-27T03:45:07.478Z · LW(p) · GW(p)
I tried to look for a religious debate elsewhere in this thread but could not find any except the tangential discussion of schizophrenia.
However, I'm getting a little sick of religious debate after the last few days of participating in it.
Then please feel free to ignore this comment. On the other hand, if you ever feel like responding then by all means do.
A lack of response to this comment should not be considered evidence that AspiringKnitter could not have brilliantly responded.
What is the primary reason you believe in God and what is the nature of this reason?
By nature of the reason, I mean something like these:
inductive inference: you believe adding a description of whatever you understand of God leads to a simpler explanation of the universe without losing any predictive power
intuitive inductive inference: you believe in God because of intuition. You also believe that there is an underlying argument from inductive inference; you just don't know what it is.
intuitive metaphysical: you believe in God because of intuition. You believe there is some other justification for why this intuition works.
↑ comment by AspiringKnitter · 2011-12-27T04:04:39.546Z · LW(p) · GW(p)
I tried to look for a religious debate elsewhere in this thread but could not find any except the tangential discussion of schizophrenia.
It's weird, but I can't seem to find everything on the thread from the main post no matter how many of the "show more comments" links I click. Or maybe it's just easy to get lost.
What is the primary reason you believe in God and what is the nature of this reason?
None of the above, and this is going to end up on exactly (I do mean exactly) the same path as the last one within three posts if it continues. Not interested now, maybe some other time. Thanks. :)
↑ comment by [deleted] · 2011-12-20T03:38:07.041Z · LW(p) · GW(p)
Hello. I expect you won't like me because I'm Christian and female and don't want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should.
I don't think you'll be actively hated here by most posters (and even then, flamewars and trolling here are much rarer than what you'd expect from most other internet spaces).
it'll raise the probability that you start worshiping the possibility of becoming immortal polyamorous whatever and taking over the world.
I wouldn't read polyamory as a primary shared feature of the posters here -- and this is speaking as someone who's been poly her entire adult life. Compared to most mainstream spaces, it does come up a whole lot more, and people are generally unafraid of at least discussing the ins and outs of it.
(I find it hard to imagine how you could manage real immortality in a universe with a finite lifespan, but that's neither here nor there.)
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
You have to do something a lot weirder or more malicious than that to get banned here. I frequently argue inarticulately for things that are rather unpopular here, and I've never once gotten the sense that I would be banned. I can think of a few things that I could do that would get me banned, but I had to go looking.
You won't be banned, but you will probably be challenged a lot if you bring your religious beliefs into discussions because most of the people here have good reasons to reject them. Many of them will be happy to share those with you, at length, should you ask.
I probably shouldn't bother talking to people who only want me to hate God.
The people here mostly don't think the God you believe in is a real being that exists, and have no interest in making you hate your deity. For us it would be like making someone hate Winnie the Pooh -- not the show or the books, but the person. We don't think there's anything there to be hated.
Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
I'm going to guess it's because you're curious, and you've identified LW as a place where people who claim to want to do some pretty big, even profound things to change the world hang out (as well as people interested in a lot of intellectual topics and skills), and on some level that appeals to you?
And I'd further guess you feel like the skew of this community's population makes you nervous that some of them are talking about changing the world in ways that would affect everybody whether or not they'd prefer to see that change if asked straight up?
↑ comment by Bugmaster · 2011-12-20T00:43:06.423Z · LW(p) · GW(p)
the possibility of becoming immortal polyamorous whatever and taking over the world.
I think I just found my new motto in life :-)
You guys really hate Christians, after all.
I personally am an atheist, and a fairly uncompromising one at that, but I still find this line a little offensive. I don't hate all Christians. Many (or probably even most) Christians are perfectly wonderful people; many of them are better than myself, in fact. Now, I do believe that Christians are disastrously wrong about their core beliefs, and that the privileged position that Christianity enjoys in our society is harmful. So, I disagree with most Christians on this topic, but I don't hate them. I can't hate someone simply for being wrong, that just makes no sense.
That said, if you are the kind of Christian who proclaims, in all seriousness, that (for example) all gay people should be executed because they cause God to send down hurricanes -- then I will find it very, very difficult not to hate you. But you don't sound like that kind of a person.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T01:31:12.971Z · LW(p) · GW(p)
If you can call down hurricanes, tell me and I'll revise my beliefs to take that into account. (But then I'd just be in favor of deporting gays to North Korea or wherever else I decide I don't like. What a waste to execute them! It could also be interesting to send you all to the Sahara, and by interesting I mean ecologically destructive and probably a bad idea, not to mention expensive and needlessly cruel.) As long as you're not actually doing that (if you are, please stop), and as long as you aren't causing some other form of disaster, I can't think of a good reason why I should be advocating your execution.
Replies from: CronoDAS, Bugmaster↑ comment by CronoDAS · 2011-12-22T00:26:20.571Z · LW(p) · GW(p)
Calling down hurricanes is easy. Actually getting them to come when you call them is harder. :)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-22T01:41:43.478Z · LW(p) · GW(p)
Much like spirits from the vasty deep.
↑ comment by Bugmaster · 2011-12-20T01:35:14.793Z · LW(p) · GW(p)
Sadly, I myself do not possess the requisite sexual orientation, otherwise I'd be calling down hurricanes all over the place. And meteorites. And angry frogs! Mwa ha ha!
Replies from: Insert_Idionym_Here↑ comment by Insert_Idionym_Here · 2011-12-20T05:46:41.160Z · LW(p) · GW(p)
Bugmaster, I call down hurricanes every day. It never gets boring. Meteorites are a little harder, but I do those on occasion. They aren't quite as fun.
But the angry frogs?
The angry frogs?
Those don't leave a shattered wasteland behind, so you can just terrorize people over and over again with those. Just wonderful.
Note: All of the above is complete bull-honkey. I want this to be absolutely clear. 100%, fertilizer-grade, bull-honkey.
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2011-12-21T10:02:18.703Z · LW(p) · GW(p)
If I had a smartphone, I could call down Angry Birds on people. Well, on pigs at least.
↑ comment by cousin_it · 2011-12-19T11:45:25.062Z · LW(p) · GW(p)
EY has read With Folded Hands and mentioned it in his CEV writeup as one more dystopia to be averted. This task isn't getting much attention now because unfriendly AI seems to be more probable and more dangerous than almost-friendly AI. Of course we would welcome any research on preventing almost-friendly AI :-)
Replies from: thomblake↑ comment by thomblake · 2011-12-19T19:28:39.784Z · LW(p) · GW(p)
Of course we would welcome any research on preventing almost-friendly AI :-)
Or creating it. That might be good too.
Replies from: dlthomas↑ comment by dlthomas · 2011-12-20T19:56:08.989Z · LW(p) · GW(p)
The act or the research?
Replies from: thomblake↑ comment by thomblake · 2011-12-20T20:22:31.144Z · LW(p) · GW(p)
Either. The main reason creating almost-Friendly AI isn't a concern is that it's believed to be practically as hard as creating Friendly AI. Someone who tries to create a Friendly AI and fails creates an Unfriendly AI or no AI at all. And almost-Friendly might be enough to keep us from being hit by meteors and such.
Replies from: xxd↑ comment by xxd · 2011-12-20T23:48:30.685Z · LW(p) · GW(p)
I'm struggling with where the line lies.
I think pretty much everyone would agree that some variety of "makes humanity extinct by maximizing X" is unfriendly.
If, however, we have "makes bad people extinct by maximizing X and otherwise keeps P-Y of humanity alive", is that still unfriendly?
What about "leaves the solar system alone but tiles the rest of the galaxy"? Is that still unfriendly?
Can we try to close in on where the line is between friendly and unfriendly?
I really don't believe we have NOT(FAI) = UFAI.
I believe it's the other way around i.e. NOT(UFAI) = FAI.
Replies from: thomblake, TimS, APMason↑ comment by thomblake · 2011-12-21T14:54:13.508Z · LW(p) · GW(p)
I really don't believe we have NOT(FAI) = UFAI.
I believe it's the other way around i.e. NOT(UFAI) = FAI.
Are you using some nonstandard logic where these statements are distinct?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T16:29:24.507Z · LW(p) · GW(p)
In the real world if I believe that "anyone who isn't my enemy is my friend" and you believe that "anyone who isn't my friend is my enemy", we believe different things. (And we're both wrong: the truth is some people are neither my friends nor my enemies.) I assume that's what xxd is getting at here. I think it would be more precise for xxd to say "I don't believe that NOT(FAI) is a bad thing that we should be working to avoid. I believe that NOT(UFAI) is a good thing that we should be working to achieve."
In this xxd does in fact disagree with the articulated LW consensus, which is that the design space of human-created AI is so dangerous that if an AI isn't provably an FAI, we ought not even turn it on... that any AI that isn't Friendly constitutes an existential risk.
Xxd may well be wrong, but xxd is not saying something incoherent here.
Replies from: thomblake↑ comment by thomblake · 2011-12-21T18:54:00.100Z · LW(p) · GW(p)
In the real world if I believe that "anyone who isn't my enemy is my friend" and you believe that "anyone who isn't my friend is my enemy", we believe different things.
Can you explain what those things are? I can't see the distinction. The first follows necessarily from the second, and vice-versa.
Replies from: TheOtherDave, TimS↑ comment by TheOtherDave · 2011-12-21T19:09:00.318Z · LW(p) · GW(p)
Consider three people: Sam, Ethel, and Doug.
I've known Sam since we were kids together; we enjoy each other's company and act in one another's interests. I've known Doug since we were kids together; we can't stand one another and act against one another's interests. I've never met Ethel in my life and know nothing about her; she lives on the other side of the planet and has never heard of me.
It seems fair to say that Sam is my friend, and Doug is my enemy. But what about Ethel?
If I believe "anyone who isn't my enemy is my friend," then I can evaluate Ethel for enemyhood. Do we dislike one another? Do we act against one another's interests? No, we do not. Thus we aren't enemies... and it follows from my belief that Ethel is my friend.
If I believe "anyone who isn't my friend is my enemy," then I can evaluate Ethel for friendhood. Do we like one another? Do we act in one another's interests? No, we do not. Thus we aren't friends... and it follows from my belief that Ethel is my enemy.
I think it more correct to say that Ethel is neither my friend nor my enemy. Thus, I consider Ethel an example of someone who isn't my friend, and isn't my enemy. Thus I think both of those beliefs are false. But even if I'm wrong, it seems clear that they are different beliefs, since they make different predictions about Ethel.
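A minimal sketch of this point, treating the two beliefs as decision procedures (the names are from the comment above; the "neither" category is the crux):

```python
# Each rule tests only one predicate and fills in the other by default,
# so they agree on the clear cases and disagree exactly on the neutral one.
people = {"Sam": "friend", "Doug": "enemy", "Ethel": "neither"}

def not_enemy_is_friend(status):
    # "anyone who isn't my enemy is my friend"
    return "enemy" if status == "enemy" else "friend"

def not_friend_is_enemy(status):
    # "anyone who isn't my friend is my enemy"
    return "friend" if status == "friend" else "enemy"

for name, status in people.items():
    print(name, not_enemy_is_friend(status), not_friend_is_enemy(status))
# Sam -> friend/friend, Doug -> enemy/enemy, Ethel -> friend vs. enemy
```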
Replies from: thomblake↑ comment by thomblake · 2011-12-21T20:49:42.563Z · LW(p) · GW(p)
If I believe "anyone who isn't my enemy is my friend," then I can evaluate Ethel for enemyhood. Do we dislike one another? Do we act against one another's interests? No, we do not. Thus we aren't enemies... and it follows from my belief that Ethel is my friend.
If I believe "anyone who isn't my friend is my enemy," then I can evaluate Ethel for friendhood. Do we like one another? Do we act in one another's interests? No, we do not. Thus we aren't friends... and it follows from my belief that Ethel is my enemy.
Thanks - that's interesting.
It seems to me that this analysis only makes sense if you actually have the non-excluded middle of "neither my friend nor my enemy". Once you've accepted that the world is neatly carved up into "friends" and "enemies", it seems you'd say "I don't know whether Ethel is my friend or my enemy" - I don't see why the person in the first case doesn't just as well evaluate Ethel for friendhood, and thus conclude she isn't an enemy. Note that one who believes "anyone who isn't my enemy is my friend" also should thus believe "anyone who isn't my friend is my enemy" as a (logically equivalent) corollary.
Am I missing something here about the way people talk / reason? I can't really imagine thinking that way.
Edit: In case it wasn't clear enough that they're logically equivalent:
Edit: long proof was long.
¬Fx → Ex ≡ Fx ∨ Ex ≡ ¬Ex → Fx
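For anyone who wants to check the equivalence mechanically, a small truth-table sketch (assuming, as thomblake does, that F and E are ordinary two-valued predicates):

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# F = "x is my friend", E = "x is my enemy"; check all four assignments.
for F, E in product([False, True], repeat=2):
    a = implies(not F, E)  # anyone who isn't my friend is my enemy
    b = F or E             # everyone is my friend or my enemy
    c = implies(not E, F)  # anyone who isn't my enemy is my friend
    assert a == b == c
print("equivalent on every assignment")
```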
Replies from: dlthomas, TheOtherDave↑ comment by TheOtherDave · 2011-12-21T21:22:41.365Z · LW(p) · GW(p)
Yes, I agree that if everyone in the world is either my friend or my enemy, then "anyone who isn't my enemy is my friend" is equivalent to "anyone who isn't my friend is my enemy."
But there do, in fact, exist people who are neither my friend nor my enemy.
Replies from: dlthomas↑ comment by dlthomas · 2011-12-21T21:56:47.279Z · LW(p) · GW(p)
If "everyone who is not my friend is my enemy", then there does not exist anyone who is neither my friend nor my enemy. You can therefore say that the statement is wrong, but the statements are equivalent without any extra assumptions.
Replies from: army1987↑ comment by A1987dM (army1987) · 2011-12-24T10:53:07.330Z · LW(p) · GW(p)
ISTM that the two statements are equivalent denotationally (they both mean “each person is either my friend or my enemy”) but not connotationally (the first suggests that most people are my friends, the latter suggests that most people are my enemies).
↑ comment by TimS · 2011-12-21T19:06:33.849Z · LW(p) · GW(p)
It's the equivocation fallacy.
In other words, there are things that are friends. There are things that are enemies. It takes a separate assertion that those are the only two categories (as opposed to believing something like "some people are indifferent to me").
In relation to AI, there is malicious AI (the Straumli Perversion), indifferent AI (Accelerando AI), and FAI. When EY says uFAI, he means both malicious and indifferent. But it is a distinct insight to say that indifferent AI are practically as dangerous as malicious AI. For example, it is not obvious that an AI whose only goal is to leave the Milky Way galaxy (and is capable of trying without directly harming humanity) is too dangerous to turn on. Leaving aside the motivation for creating such an entity, I certainly would agree with EY that such an entity has a substantial chance of being an existential risk to humanity.
↑ comment by TimS · 2011-12-21T00:02:21.085Z · LW(p) · GW(p)
This seems mostly like a terminological dispute. But I think AIs that don't care about humanity (i.e. the various AIs in Accelerando) are best labeled unfriendly, even though they are not trying to end humanity or kill any particular human.
↑ comment by APMason · 2011-12-20T23:57:26.202Z · LW(p) · GW(p)
I can't imagine a situation in which the AGI is sort-of kind to us - not killing good people, letting us keep this solar system - but which also does some unfriendly things, like killing bad people or taking over the rest of the galaxy (both pretty terrible things in themselves, even if they're not complete failures), unless that's what the AI's creator wanted - i.e. the creator solved FAI but managed to, without upsetting the whole thing, include in the AI's utility function terms for killing bad people and caring about something completely alien outside the solar system. They're not outcomes that you can cause by accident - and if you can do that, then you can also solve full FAI, without killing bad people or tiling the rest of the galaxy.
Replies from: dlthomas, xxd↑ comment by xxd · 2011-12-21T00:01:25.606Z · LW(p) · GW(p)
I guess what I'm saying is that we've gotten involved in a compression fallacy and are saying that Friendly AI = AI that helps out humanity (or is kind to humanity - insert favorite "helps" derivative here).
Here's an example: I'm "sort of friendly" in that I don't actively go around killing people, but neither will I go around actively helping you unless you want to trade with me. Does that make me unfriendly? I say no, it doesn't.
Replies from: APMason↑ comment by APMason · 2011-12-21T00:09:06.128Z · LW(p) · GW(p)
Well, I don't suppose anyone feels the need to draw a bright-line distinction between FAI and uFAI - the AI is more friendly the more its utility function coincides with your own. But in practice it doesn't seem like any AI is going to fall into the gap between "definitely unfriendly" and "completely friendly" - to create such a thing would be a more fiddly and difficult engineering problem than just creating FAI. If the AI doesn't care about humans in the way that we want it to, it almost certainly takes us apart and uses the resources to create whatever it does care about.
EDIT: Actually, thinking about it, I suppose one potential failure mode which falls into the grey territory is building an AI that just executes people's current volition without trying to extrapolate. I'm not sure how fast this goes wrong or in what way, but it doesn't strike me as a good idea.
Replies from: soreff, xxd, xxd↑ comment by soreff · 2011-12-21T23:53:17.933Z · LW(p) · GW(p)
I suppose one potential failure mode which falls into the grey territory is building an AI that just executes people's current volition without trying to extrapolate. I'm not sure how fast this goes wrong or in what way, but it doesn't strike me as a good idea.
Conscious or unconscious volition? I think I can point to one possible failure mode :)
↑ comment by xxd · 2011-12-21T00:46:00.939Z · LW(p) · GW(p)
"I suppose one potential failure mode which falls into the grey territory is building an AI that just executes peoples' current volition without trying to extrapolate"
i.e. the device has to judge the usefulness by some metric and then decide to execute someone's volition or not.
That's exactly my issue with trying to define a utility function for the AI: you can't. And since some people will have their utility functions denied by the AI, who is to choose who gets theirs executed?
I'd prefer to shoot for a NOT(UFAI) and then trade with it.
Here's a thought experiment:
Does a cure for cancer maximize everyone's utility function?
Yes, on average we all win.
But companies that are currently creating drugs to treat the symptoms of cancer, and their employees, would be out of business.
Which utility function should be executed: creating better cancer drugs to treat the symptoms and then allowing the companies to sell them, or putting the companies out of business and curing cancer?
Replies from: APMason↑ comment by APMason · 2011-12-21T00:56:08.030Z · LW(p) · GW(p)
Well, that's an easy question: if you've worked sixteen hour days for the last forty years and you're just six months away from curing cancer completely and you know you're going to get the Nobel and be fabulously wealthy etc. etc. and an alien shows up and offers you a cure for cancer on a plate, you take it, because a lot of people will die in six months. This isn't even different to how the world currently is - if I invented a cure for cancer it would be detrimental to all those others who were trying to (and who only cared about getting there first) - what difference does it make if an FAI helps me? I mean, if someone really wants to murder me but I don't want them to and they are stopped by the police, that's clearly an example of the government taking the side of my utility function over the murderer's. But so what? The murderer was in the wrong.
Anyway, have you read Eliezer's paper on CEV? I'm not sure that I agree with him, but he does deal with the problem you bring up.
↑ comment by xxd · 2011-12-21T00:21:09.279Z · LW(p) · GW(p)
More friendly to you. Yes.
Not necessarily friendly in the sense of being friendly to everyone as we all have differing utility functions, sometimes radically differing.
But I dispute the position that "if an AI doesn't care about humans in the way we want it to, it almost certainly takes us apart and uses the resources to create whatever it does care about".
Consider: a totally unfriendly AI whose main goal is explicitly the extinction of humanity, followed by turning itself off. For us that's an unfriendly AI.
One, however, that doesn't kill any of us but basically leaves us alone is, by the lights of those of you who define "friendly AI" as "kind to us"/"doing what we all want"/"maximizing our utility functions" etc., not unfriendly, because by definition it doesn't kill all of us.
Unless unfriendly also includes "won't kill all of us but ignores us" et cetera.
Am I, for example, unfriendly to you if I spend my next month's paycheck on paperclips but do you no harm?
Replies from: APMason↑ comment by APMason · 2011-12-21T00:35:00.656Z · LW(p) · GW(p)
Well, no. If it ignores us I probably wouldn't call it "unfriendly" - but I don't really mind if someone else does. It's certainly not FAI. But an AI does need to have some utility function, otherwise it does nothing (and isn't, in truth, intelligent at all), and it will only ignore humanity if it's explicitly programmed to. This ought to be as difficult an engineering problem as FAI - which is why I said it "almost certainly takes us apart". You can't get there by failing at FAI, except by being extremely lucky, and why would you want to go there on purpose?
Not necessarily friendly in the sense of being friendly to everyone as we all have differing utility functions, sometimes radically differing.
Yes, it would be a really bad idea to have a superintelligence optimise the world for just one person's utility function.
Replies from: xxd↑ comment by xxd · 2011-12-21T00:39:03.876Z · LW(p) · GW(p)
"But an AI does need to have some utility function"
What if the "optimization of the utility function" is bounded like my own personal predilection with spending my paycheck on paperclips one time only and then stopping?
Is it sentient if it sits in a corner and thinks to itself, running simulations but won't talk to you unless you offer it a trade e.g. of some paperclips?
Is it possible that we're conflating "friendly" with "useful but NOT unfriendly" and we're struggling with defining what "useful" means?
Replies from: DSimon↑ comment by DSimon · 2011-12-25T22:25:18.438Z · LW(p) · GW(p)
If it likes sitting in a corner and thinking to itself, and doesn't care about anything else, it is very likely to turn everything around it (including us) into computronium so that it can think to itself better.
If you put a threshold on it to prevent it from doing stuff like that, that's a little better, but not much. If it has a utility function that says "Think to yourself about stuff, but do not mess up the lives of humans in doing so", then what you have now is an AI that is motivated to find loopholes in (the implementation of) that second clause, because anything that can get an increased fulfilment of the first clause will give it a higher utility score overall.
You can get more and more precise than that and cover more known failure modes with their own individual rules, but if it's very intelligent or powerful it's tough to predict what terrible nasty stuff might still be in the intersection of all the limiting conditions we create. Hidden complexity of wishes and all that jazz.
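A toy sketch of that failure mode, with purely made-up numbers: the "do not mess up humans" clause is implemented as a penalty on detected harm, so the maximizer gravitates to exactly the actions the implementation fails to cover.

```python
# name: (thinking_gain, actual_harm, harm_the_clause_detects)
actions = {
    "think quietly":       (1.0, 0.0, 0.0),
    "grab obvious matter": (9.0, 8.0, 8.0),  # the clause catches this
    "exploit a loophole":  (9.0, 8.0, 0.0),  # same harm, undetected
}

def utility(gain, detected_harm):
    return gain - 100.0 * detected_harm  # second clause as a big penalty

best = max(actions, key=lambda a: utility(actions[a][0], actions[a][2]))
print(best)  # "exploit a loophole" wins despite the real harm
```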
↑ comment by JoachimSchipper · 2011-12-19T10:48:10.613Z · LW(p) · GW(p)
Not everyone agrees with Eliezer on everything; this is usually not that explicit, but consider e.g. the number of people talking about relationships vs. the number of people talking about cryonics or FAI - LW doesn't act, collectively, as if it really believes Eliezer is right. It does assume that there is no God/god/supernatural, though.
(Also, where does this idea of atheists hating God come from? Most atheists have better things to do than hang out on /r/atheism!)
Replies from: AspiringKnitter, Anubhav, Bugmaster↑ comment by AspiringKnitter · 2011-12-19T18:34:12.083Z · LW(p) · GW(p)
I got the idea from various posts where people have said they don't even like the Christian God if he's real (didn't someone say he was like Azathoth?) and consider him some kind of monster.
I can see I totally got you guys wrong. Sorry to have underestimated your niceness.
Replies from: TheOtherDave, CuSithBell, kilobug, CronoDAS↑ comment by TheOtherDave · 2011-12-19T18:46:48.927Z · LW(p) · GW(p)
For my own part, I think you're treating "being nice" and "liking the Christian God" and "hating Christians" and "wanting other people to hate God" and "only wanting other people to hate God" and "forcibly exterminating all morality" and various other things as much more tightly integrated concepts than they actually are, and it's interfering with your predictions.
So I suggest separating those concepts more firmly in your own mind.
Replies from: Document↑ comment by CuSithBell · 2011-12-19T19:23:18.602Z · LW(p) · GW(p)
To be fair, I'm sure a bunch of people here disapprove of some actions by the Christian God in the abstract (mostly Old Testament stuff, probably, and the Problem of Evil). But yeah, for the most part LWers are pretty nice, if a little idiosyncratic!
Azathoth (the "blind idiot god") is the local metaphor for evolution - a pointless, monomaniacal force with vast powers but no conscious goal-seeking ability and thus a tendency to cause weird side-effects (such as human culture).
↑ comment by kilobug · 2011-12-19T19:16:32.497Z · LW(p) · GW(p)
Azathoth is how Eliezer described the process of evolution, not how he described the Christian god.
Replies from: MarkusRamikin, Document↑ comment by MarkusRamikin · 2011-12-19T19:35:14.269Z · LW(p) · GW(p)
She's possibly thinking about Cthulhu.
↑ comment by CronoDAS · 2011-12-19T21:21:22.286Z · LW(p) · GW(p)
Well, if there were an omnipotent Creator, I'd certainly have a few bones to pick with him/her/it...
↑ comment by Anubhav · 2012-01-10T01:52:22.086Z · LW(p) · GW(p)
Not everyone agrees with Eliezer on everything; this is usually not that explicit, but consider e.g. the number of people talking about relationships vs. the number of people talking about cryonics or FAI - LW doesn't act, collectively, as if it really believes Eliezer is right
Classic example of bikeshedding.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-01-10T02:33:27.909Z · LW(p) · GW(p)
Well, I personally am one of those people who thinks that cryonics is currently not worth worrying about, and that the Singularity is unlikely to happen anytime soon (in astronomical terms). So, there exists at least one outlier in the Less Wrong hive mind...
Replies from: Ben_Welchner, Anubhav↑ comment by Ben_Welchner · 2012-01-10T05:05:33.697Z · LW(p) · GW(p)
Judging by the recent survey, your cryonics beliefs are pretty normal with 53% considering it, 36% rejecting it and only 4% having signed up. LW isn't a very hive-mindey community, unless you count atheism.
(On the Singularity, yes, you're very much in the minority, with the most skeptical quartile expecting it in 2150.)
Replies from: Bugmaster↑ comment by Bugmaster · 2012-01-10T20:49:15.103Z · LW(p) · GW(p)
Regarding cryonics, you're right and I was wrong, so thanks!
But in the interest of pedantry I should point out that among the 96% who did not sign up, many abstained simply due to a lack of funds, and not because of any misgivings about the process.
Replies from: dlthomas↑ comment by Anubhav · 2012-01-10T04:15:32.598Z · LW(p) · GW(p)
I guess that's what bikeshedding feels like.
↑ comment by Bugmaster · 2012-01-10T02:29:20.929Z · LW(p) · GW(p)
Also, where does this idea of atheists hating God come from?
If one reads the Bible as one would read any other fiction book, then IMO it'd be pretty hard to conclude that this "God" character is anything other than the villain of the story. This doesn't mean that atheists "hate God", any more than anyone could be said to "hate Voldemort", of course -- both of them are just evil fictional characters, no more and no less.
Christians, on the other hand, believe that a God of some sort actually does exist, and when they hear atheists talking about the character of "God" in fiction, they assume that atheists are in fact talking about the real (from the Christians' point of view) God. Hence the confusion.
Replies from: Prismattic↑ comment by Prismattic · 2012-01-10T03:44:29.845Z · LW(p) · GW(p)
In my own experience, one hears the claim more often as "atheists hate religion" rather than "atheists hate god". The likelihood of hearing it seems to correlate with how intolerant a brand of religiosity one is dealing with (I can't think of an easy way to test that intuition empirically at the moment), so I tend to attribute it to projection.
↑ comment by EvelynM · 2011-12-26T10:45:55.745Z · LW(p) · GW(p)
What do you aspire to knit?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T01:54:57.047Z · LW(p) · GW(p)
Sweaters, hats, scarves, headbands, purses, everything knittable. (Okay, I was wrong below, that was actually the second-easiest post to answer.) Do you like knitting too?
Replies from: EvelynM↑ comment by Gust · 2011-12-21T16:59:32.075Z · LW(p) · GW(p)
Welcome! And congratulations on creating what's probably the longest and most interesting introduction thread of all time (I haven't read all the introduction threads, though).
I've read all your posts here. I now have to update my belief about rationality among Christians: so far, the most "rational" one I'd found turned out to be nothing more than a repetitive expert in rationalization. Most others are sometimes relatively rational in most aspects of life, but choose to ignore the hard questions about the religion they profess (my own parents fall into this category). You seem to think clearly and to be willing to rethink your ideas. I hope you stay around.
On a side note, as others already stated below, I think you misunderstand what Eliezer wants to do with FAI. I agree with what MixedNuts said here, though I would also recommend reading The Hidden Complexity of Wishes, if you haven't yet. Eliezer is more sane than he seems at first, in my opinion.
PS: How are you feeling about the reception so far?
EDIT: Clarifying: I agree with what MixedNuts said in the third and fourth paragraphs.
Replies from: AspiringKnitter, dlthomas↑ comment by AspiringKnitter · 2011-12-21T18:41:24.771Z · LW(p) · GW(p)
I think I've gotten such a nice reception that I've also updated in the direction of "most atheists aren't cruel or hateful in everyday life" and "LessWrong believes in its own concern for other people because most members are nice".
The wish on top of that page is actually very problematic...
Oh, and do people usually upvote for niceness?
Replies from: wedrifid, NancyLebovitz, army1987, dlthomas↑ comment by NancyLebovitz · 2011-12-27T12:39:01.903Z · LW(p) · GW(p)
Oh, and do people usually upvote for niceness?
The ordinary standard of courtesy here is pretty high, and I don't think you get upvotes for meeting it. You can get upvotes for being nice (assuming that you also include content) if it's a fraught issue.
↑ comment by A1987dM (army1987) · 2011-12-24T10:39:12.456Z · LW(p) · GW(p)
I've also updated in the direction of "most atheists aren't cruel or hateful in everyday life"
I'm not sure atheist LW users would be a good sample of “most atheists”. I'd expect there to be a sizeable fraction of people who are atheists merely as a form of contrarianism.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-27T12:42:12.509Z · LW(p) · GW(p)
I'd expect there to be a sizeable fraction of people who are atheists merely as a form of contrarianism.
I don't think that's the case. I do think there are a good many people who are naturally contrarian, and use their atheism as a platform. There are also people who become atheists after having been mistreated in a religion, and they're angry.
I'm willing to bet a modest amount that going from religious to atheist has little or no effect on how much time a person spends on arguing about religion, especially in the short run.
Replies from: army1987↑ comment by A1987dM (army1987) · 2011-12-27T13:16:55.456Z · LW(p) · GW(p)
Well, IME in Italy people from the former Kingdom of the Two Sicilies are usually much more religious than people from the former Papal States, and the latter are much more blasphemous; I have plenty of reasons to believe that's not a coincidence.
↑ comment by dlthomas · 2011-12-21T18:52:51.939Z · LW(p) · GW(p)
The wish on top of that page is actually very problematic...
Yes, that was a part of the point of the article - people try to fully specify what they want, it gets this complex, and it's still missing things; meanwhile, people understand what someone means when they say "I wish I was immortal."
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T19:13:32.878Z · LW(p) · GW(p)
Well, they understand it about as well as the speaker does. It's not clear to me that the speaker always knows what they mean.
Replies from: dlthomas↑ comment by dlthomas · 2011-12-21T19:27:43.381Z · LW(p) · GW(p)
Right - there's no misunderstanding, because the complexity is hidden by expectations and all sorts of shared stuff that isn't likely to be there when talking to a genie of the "sufficiently sophisticated AI" variety, unless you are very careful about making sure that it is. Hence, the wish has hidden complexity - the point (and title) of the article.
↑ comment by TimS · 2011-12-19T15:43:06.691Z · LW(p) · GW(p)
Welcome to LessWrong. Our goal is to learn how to achieve our goals better. One method is to observe the world and update our beliefs based on what we see (you'd think this would be an obvious thing to do, but history shows that it isn't). Another method we use is to notice the ways that humans tend to fail at thinking (i.e. cognitive biases).
Anyway, I hope you find those ideas useful. Like many communities, we are a diverse bunch. Each of our ultimate goals likely differs, but we recognize that the world is far from how any of us want it to be, and that what each of us wants is in roughly the same direction from here. In short, the extent to which we are an insular community is a failure of the community, because we'd all like to raise the sanity waterline. Thus, welcome to LW. Help us be better.
↑ comment by kilobug · 2011-12-19T09:53:26.423Z · LW(p) · GW(p)
Welcome to Less Wrong.
I don't think many people here hate Christians. At least I don't. I'll just speak for myself (even if I think my view is widely shared here): I have a harsh view of religions themselves, believing they are mind-killing, barren and dangerous (just open a history book), but that doesn't mean I hate the people who believe (as long as they don't hate us atheists). I have Christian friends, and I don't like them less because of their religion. I do try a bit to "open their minds", because I believe that knowing and accepting the truth makes you stronger, but I don't push the issue too much either.
For the "that acts more like Eliezer thinks it should" part, well, the Coherent Extrapolated Volition of Eliezer is supposed to be coherent over the whole of humanity, not over himself. Eliezer is not trying to make an AI that'll turn the world into his own paradise, but that'll turn it into something better according to the common wishes of all (or almost all) of humanity. He may fail at it, but if it does, he's more likely to tile the world with smiley faces then to turn it into its own paradise ;)
↑ comment by lavalamp · 2011-12-20T19:48:02.457Z · LW(p) · GW(p)
... I'd rather hang around and keep the Singularity from being an AI that forcibly exterminates all morality and all people who don't agree with Eliezer Yudkowsky.
Upvote for courage, and I'd give a few more if I could. (Though you might consider rereading some of EY's CEV posts, because I don't think you've accurately summarized his intentions.)
You guys really hate Christians, after all.
I don't hate Christians. I was a very serious one for most of my life. Practically everyone I know and care about IRL is Christian.
I don't think LW deserves all the credit for my deconversion, but it definitely hastened the event.
↑ comment by [deleted] · 2011-12-19T07:54:39.225Z · LW(p) · GW(p)
Welcome!
I'm Christian and female and don't want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should.
Only one of those is really a reason for me to be nervous, and that's because Christianity has done some pretty shitty things to my people. But that doesn't mean we have nothing in common! I don't want to act the way EY thinks I should, either. (At least, not merely because it's him that wants it.)
You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?)
If you look at the survey, notice you're not alone. A minority, perhaps, but not entirely alone. I hope you hang around.
Replies from: XangLiu, AspiringKnitter↑ comment by XangLiu · 2011-12-19T15:23:24.879Z · LW(p) · GW(p)
"Only one of those is really a reason for me to be nervous, and that's because Christianity has done some pretty shitty things to my people."
Oh, don't be such a martyr. "My people..." please. You do not represent "your people" and you aren't their authority.
Replies from: None↑ comment by [deleted] · 2011-12-19T18:22:14.051Z · LW(p) · GW(p)
Whoa, calm down.
I'm not claiming any such representation or authority. They're my people only in the sense that all of us happen to be guys who like guys; they're the group of people I belong to. I'm not even claiming martyrdom, because not many of these shitty things have happened to me personally. I'm only stating my own (and no one else's) prior for how interactions between self-identified Christians and gay people tend to turn out.
Replies from: XangLiu, army1987↑ comment by XangLiu · 2011-12-19T18:46:26.693Z · LW(p) · GW(p)
The point has been missed. Deep breath, paper-machine.
Nearly any viewpoint is capable of doing, and has done, cruel things to others. No reason to unnecessarily highlight this fact and dramatize the Party of Suffering. This was an intro thread by a newcomer - not a reason to point to you and "your" people. They can speak for themselves.
Replies from: TheOtherDave, None, Vaniver, Bongo↑ comment by TheOtherDave · 2011-12-19T18:59:49.169Z · LW(p) · GW(p)
To the extent that you're saying that the whole topic of Christian/queer relations was inappropriate for an intro thread, I would prefer you'd just said that. I might even agree with you, though I didn't find paper-machine's initial comment especially problematic.
To the extent that you're saying that paper-machine should not treat the prior poor treatment of members of a group they belong to, by members of a group Y belongs to, as evidence of their likely poor treatment by Y, I simply disagree. It may not be especially strong evidence, but it's also far from trivial.
And all the stuff about martyrdom and Parties of Suffering and who gets to say what for whom seems like a complete distraction.
↑ comment by [deleted] · 2011-12-22T05:38:34.164Z · LW(p) · GW(p)
They can speak for themselves.
Why berate him for doing just that, then? He's expressing his prior: members of a reference class he belongs to are often singled out for mistreatment by members of a reference class that his interlocutor claims membership with. He does not appear to believe himself Ambassador of All The Gay Men, based on what he's actually saying, nor to treat that class-membership as some kind of ontological primitive.
↑ comment by Bongo · 2011-12-19T18:57:48.169Z · LW(p) · GW(p)
I wonder how this comment got 7 upvotes in 9 minutes.
EDIT: Probably the same way this comment got 7 upvotes in 6 minutes.
Replies from: LWMormon, TheOtherDave↑ comment by TheOtherDave · 2012-01-09T21:49:24.523Z · LW(p) · GW(p)
Though it's made more impressive when you realize that the comment you respond to, and its grandparent, are the user's only two comments, and they average 30 karma each. That's a beautiful piece of market timing!
↑ comment by A1987dM (army1987) · 2011-12-24T11:11:04.553Z · LW(p) · GW(p)
Still, I didn't get who “my people” referred to (your fellow citizens?). “To us gay people” would have been clearer IMO.
↑ comment by AspiringKnitter · 2011-12-19T08:39:53.352Z · LW(p) · GW(p)
Wow, thanks! I feel less nervous/unwelcome already!
Let me just apologize on behalf of all of us for whichever of the stains on our honor you're referring to. It wasn't right. (Which one am I saying wasn't right?)
Yay for not acting like EY wants, I guess. No offense or anything, EY, but you've already proposed modifications you want to make to people that I don't want made to me...
(I don't know what I said to deserve an upvote... uh, thanks.)
Replies from: Icehawk78, Bugmaster, Cthulhoo↑ comment by Icehawk78 · 2011-12-19T13:30:52.140Z · LW(p) · GW(p)
I'm curious which modifications EY has proposed (specifically) that you don't want made, unless it's just generically the suggestion that people could be improved in any ways whatsoever and your preference is to not have any modifications made to yourself (in a "be true to yourself" manner, perhaps?) that you didn't "choose".
If you could be convinced that a given change to "who you are" would necessarily be an improvement (by your own standards, not externally imposed standards, since you sound very averse to such restrictions) such as "being able to think faster" or "having taste preferences for foods which are most healthy for you" (to use very primitive off-the-cuff examples), and then given the means to effect these changes on yourself, would you choose to do so, or would you be averse simply on the grounds of "then I wouldn't be 'me' anymore" or something similar?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-19T19:09:06.938Z · LW(p) · GW(p)
Being able to think faster is something I try for already, with the means available to me. (Nutrition, sleep, mental exercise, I've even recently started trying to get physical exercise.) I actually already prefer healthy food (it was a really SIMPLE hack: cut out junk food, or phase it out gradually if you can't take the plunge all at once, and wait until your taste buds (probably actually some brain center) start reacting like they would have in the ancestral environment, which is actually by craving healthy food), so the only further modification to be done is to my environment (availability of the right kinds of stuff). So obviously, those in particular I do want.
However, I also believe that here lies the road to ableism. EY has already espoused a significant amount. For instance, his post about how unfair IQ is misses out on the great contributions made to the world by people with very low IQs. There's someone with an IQ of, I think she said, 86 or so, who is wiser than I am (let's just say I probably rival EY for IQ score). IQ is valid only for a small part of the population and full-scale IQ is almost worthless except for letting some people feel superior to others. I've spent a lot of time thinking about and exposed to people's writings about disability and how there are abled people who seek to cure people who weren't actually suffering and appreciated their uniqueness. Understanding and respect for the diversity of skills in the world is more important than making everyone exactly like anyone else.
The above said, that doesn't mean I'm opposed in principle to eliminating problems with disability (nor is almost anyone who speaks out against forced "cure"). Just to think of examples, I'm glad I'm better at interacting with people than I used to be and wish to be better at math (but NOT at the expense of my other abilities). Others, with other disabilities, have espoused wishes for other things (two people that I can think of want an end to their chronic pain without feeling that other aspects of their issues are bad things or need fixed). I worry about EY taking over the world with his robots and not remembering the work of Erving Goffman and a guy whose book is someplace where I can't glance at the spine to see his name. He may fall into any number of potential traps. He could impose modification on those he deems not intelligent enough to understand, even though they are (one person who strongly shaped my views on this topic has made a video about it called In My Language). I also worry that he could create nursing homes without fully understanding institutionalization and learned helplessness and why it costs less in the community anyway. And once he's made it a ways down that road, he might be better than most at admitting mistakes, but it's hard to acknowledge that you've caused that much suffering. (We see it all the time in parents who don't want to admit what harm they've caused disabled children by misunderstanding.) And by looking only at the optimal typical person, he may miss out on the unique gifts of other configurations. (I am not in principle opposed to people having all the strengths and none of the weaknesses of multiple types. I'm becoming a bit like that in some areas on a smaller scale, but not fully, and I don't think that in practice it will work for most people or work fully.)
Regarding what EY has proposed that I don't want, on the catperson post (in a comment), EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn't appeal to me at all. (Sorry, but I don't WANT to want more sex. You probably won't agree with this argument, but Jesus advocated celibacy for large swaths of the population, and should I be part of one of those, I'd rather it not be any harder. Should I NOT be in one of those swaths, it's still important that I not be too distracted satisfying those desires, since I'll have far more important things to do with my life.) But in a cooperative endeavor like that, who's going to listen to me explaining I don't want to change in the way that would most benefit them?
And that's what I can think of off the top of my head.
Replies from: MixedNuts, Kaj_Sotala, TheOtherDave, None, Emile, None, Oligopsony, kilobug↑ comment by MixedNuts · 2011-12-19T19:50:43.955Z · LW(p) · GW(p)
By the middle of the second paragraph I was thinking "Whoa, is everyone an Amanda Baggs fan around here?". Hole in one! I win so many Bayes-points, go me.
I and a bunch of LWers I've talked to about it basically already agree with you on ableism, and a large fraction seems to apply usual liberal instincts to the issue (so, no forced cures for people who can point to "No thanks" on a picture board). There are extremely interesting and pretty fireworks that go off when you look at the social model of disability from a transhumanist perspective, and I want to round up Alicorn and Anne Corwin and you and a bunch of other people to look at them closely. It doesn't look like curing everyone (you don't want a perfectly optimized life, you want a world with variety, you want change over time), and it doesn't look like current (dis)abilities (what does "blind" mean if most people can see radio waves?), and it doesn't look like current models of disability (if everyone is super different and the world is set up for that and everything is cheap, there's no such thing as accommodations), and it doesn't look like the current structures around disability (if society and personal identity and memory look nothing like they started with, "culture" doesn't mean the same thing, and that applies to Deaf culture), and it's complicated and pretty and probably already in some Egan novel.
But, to address your central point directly: You are completely and utterly mistaken about what Eliezer Yudkowsky wants to do. He's certainly not going to tell a superintelligence anything as direct and complicated as "Make this person smarter", or even "Give me a banana". Seriously, nursing homes?
If tech had happened to be easier, we might have gotten a superintelligence in 16th-century Europe. Surely we wouldn't have told it to care about the welfare of black people. We need to build something that would have done the right thing even if we had built it in the 16th century. The very rough outline for that is to tell it "Here are some people. Figure out what they would want if they knew better, and do that." So in the 16th century, it would have been presented with abled white men; figured out that if they were better informed and smarter and less biased and so on, these men would want black women to be their equals; and thus included black women in its next turn of figuring out what people want. Something as robust as this needs to be can't miss an issue that's currently known to exist and be worthy of debate!
And for the celibacy thing: that's a bit beside the point, but if you want to avoid sex for reasons other than low libido, increasing your libido obviously won't fix the mismatch.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-19T20:56:22.460Z · LW(p) · GW(p)
How do you identify what knowing better would mean, when you don't know better yet?
Replies from: MixedNuts↑ comment by MixedNuts · 2011-12-19T21:17:39.084Z · LW(p) · GW(p)
The same way we do, but faster? Like, if you start out thinking that scandalous-and-gross-sex-practice is bad, you can consider arguments like "disgust is easily culturally trained, so it's a poor measure of morality", and talk to people so you form an idea of what it's like to want and do it as a subjective experience (what positive emotions are involved, for example), and do research so you can answer queries like "If we had a brain scanner that could detect brainwashing or manipulation, what would it say about people who want that?".
So the superintelligence builds a model of you and feeds it lots of arguments and memory tape from others and other kinds of information. And then we run into trouble, because maybe you end up wanting different things depending on the order it feeds them to you, or it tells you too many facts about Deep Ones and breaks your brain.
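A toy model of the order-dependence worry (entirely hypothetical): a bounded reasoner whose attention decays will reach different conclusions from the same arguments presented in different orders.

```python
def final_attitude(arguments, decay=0.5):
    attitude, weight = 0.0, 1.0
    for strength in arguments:  # positive = pro, negative = con
        attitude += weight * strength
        weight *= decay         # later arguments get less attention
    return attitude

args = [+1.0, -1.0, +0.5]
print(final_attitude(args))        # 0.625
print(final_attitude(args[::-1]))  # 0.25 -- same arguments, other verdict
```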
↑ comment by Kaj_Sotala · 2011-12-19T20:24:47.279Z · LW(p) · GW(p)
Welcome!
IQ is valid only for a small part of the population and full-scale IQ is almost worthless
This directly contradicts the mainstream research on IQ: see for instance this or this. If you have cites to the contrary, I'd be curious to read them.
That said, glad to see someone else who's found In My Language - I ran across it many years ago and thought it beautiful and touching.
Replies from: AspiringKnitter, juliawise↑ comment by AspiringKnitter · 2011-12-19T23:00:10.541Z · LW(p) · GW(p)
Yes, you're right. That was a blatant example of availability bias -- the tiny subset of the population for which IQ is not valid makes up a disproportionately large part of my circle. And I consider full-scale IQ worthless for people with large IQ gaps, such as people with learning disabilities, and I don't think it conveys any new information over and above subtest scores in other people. Thank you for reminding me again how very odd I and my friends are.
But I'm also referring here to understanding of, for instance, morality, or ways to hack life. Having learned one of the most valuable lessons I ever learned from someone I'm pretty sure is retarded (not Amanda Baggs; it's a young man I know), I know for a fact that some important things aren't always proportional to IQ. In fact, specifically, I want to say I learned to be better by emulating him, and not just from the interaction, lest you assume it's something I figured out that he didn't already know.
I don't have any studies to cite; just personal experience with some very abnormal people. (Including myself, I want to point out. I think I'm one of those people for whom IQ subtests are useful-- in specific, limited ways-- but for whom full-scale IQ means nothing because of the great variance between subtest scores.)
↑ comment by juliawise · 2011-12-21T22:25:54.943Z · LW(p) · GW(p)
glad to see someone else who's found In My Language
Her points on disability may still be valid, but it looks like the whole Amanda Baggs autism thing was a media stunt. At age 14, she was a fluent speaker with an active social life.
Replies from: Alicorn↑ comment by Alicorn · 2011-12-22T00:10:33.076Z · LW(p) · GW(p)
The page you link is kind of messy, but I read most of it. Simon's Rock is real (I went there) and none of the details presented about it were incorrect (e.g. they got the name of the girls' dorm right), but I've now poked around the rest of "Autism Fraud" and am disinclined to trust it as a source (the blogger sounds like a crank who believes that vaccines cause autism, and that chelation cures it, and he says all of this in a combative, nasty way). Do you have any other, more neutral sources about Amanda Baggs's allegedly autism-free childhood? I'm sort of tempted to call up my school and ask if she's even a fellow alumna.
Replies from: None, Craig_Heldreth, juliawise, AspiringKnitter, dlthomas↑ comment by Craig_Heldreth · 2011-12-22T19:11:27.938Z · LW(p) · GW(p)
This might interest you.
↑ comment by AspiringKnitter · 2011-12-22T00:42:08.780Z · LW(p) · GW(p)
Do you have any other, more neutral sources about Amanda Baggs's allegedly autism-free childhood?
She couldn't be called a neutral source by any stretch of the imagination, but Amanda herself (anbuend is Amanda Baggs) confirms that she went to college at 14 and that she was considered gifted. She also has a post up just to tell people that she has been able to speak.
Replies from: Alicorn↑ comment by TheOtherDave · 2011-12-19T19:51:02.088Z · LW(p) · GW(p)
But in a cooperative endeavor like that, who's going to listen to me explaining I don't want to change in the way that would most benefit them?
Those of us who endorse respecting individual choices when we can afford to, because we prefer that our individual choices be respected when we can afford it.
I am not in principle opposed to people having all the strengths and none of the weaknesses of multiple types [..] I don't think that in practice it will work for most people
If you think it will work for some people, but not most, are you in principle opposed to giving whatever-it-is-that-distinguishes-the-people-it-works-for to anyone who wants it?
More broadly: I mostly consider all of this "what would EY do" stuff a distraction; the question that interests me is what I ought to want done and why I ought to want it done, not who or what does it. If large-scale celibacy is a good idea, I want to understand why it's a good idea. Being told that some authority figure (any authority figure) advocated it doesn't achieve that. Similarly, if it's a bad idea, I want to understand why it's a bad idea.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-19T22:23:00.973Z · LW(p) · GW(p)
If you think it will work for some people, but not most, are you in principle opposed to giving whatever-it-is-that-distinguishes-the-people-it-works-for to anyone who wants it?
Whatever-it-is-that-distinguishes-the-people-it-works-for seems to be inherent in the skills in question (that is, the configuration that brings about a certain ability also necessarily brings about a weakness in another area), so I don't think that's possible. If it were, I can only imagine it taking the form of people being able to shift configuration very rapidly into whatever works best for the situation, and in some cases, I find that very implausible. If I'm wrong, sure, why not? If it's possible, it's only the logical extension of teaching people to use their strengths and shore up their weaknesses. This being an inherent impossibility (or so I think; I could be wrong), it doesn't so much matter whether I'm opposed to it or not, but yeah, it's fine with me.
You make a good point, but I expect that, assuming someone makes AI and uses it to rule the world with the power to modify people, it will be Eliezer Yudkowsky. So whether he would abuse that power matters more than whether my next-door neighbors would if they could, or even what I would do; what EY wants is at least worth considering, because the failure mode if he does something bad is way too catastrophic.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-19T22:48:01.793Z · LW(p) · GW(p)
[if] someone makes AI and uses it to rule the world with the power to modify people, it will be Eliezer Yudkowsky
What makes you think that?
For example, do you think he's the only person working on building AI powerful enough to change the world?
Or that, of the people working on it, he's the only one competent enough to succeed?
Or that, of the people who can succeed, he's the only one who would "use" the resulting AI to rule the world and modify people?
Or something else?
↑ comment by AspiringKnitter · 2011-12-19T23:05:59.881Z · LW(p) · GW(p)
He's the only person I know of who wants to build an AI that will take over the world and do what he wants. He's also smart enough to have a chance, which is disturbing.
Replies from: dlthomas, FlatulentBayes, Bugmaster↑ comment by dlthomas · 2011-12-19T23:14:51.501Z · LW(p) · GW(p)
Have you read his paper on CEV? To the best of my knowledge, that's the clearest place he's laid out what he wants an AGI to do, and I wouldn't really label it "take over the world and do what [Eliezer Yudkowsky] wants" except for broad use of those terms to the point of dropping their typical connotations.
↑ comment by FlatulentBayes · 2011-12-21T20:36:51.592Z · LW(p) · GW(p)
Don't worry. We are in good hands. Eliezer understands the dilemmas involved and will ensure that we can avoid non-friendly AI. SI is dedicated to Friendly AI and the completion of its goal.
↑ comment by Bugmaster · 2011-12-20T00:32:07.034Z · LW(p) · GW(p)
I can virtually guarantee you that he's not the only one who wants to build such an AI. Google, IBM, and the heads of major three-letter government agencies all come to mind as the kind of players who would want to implement their own pet genie, and are actively working toward that goal. That said, it's possible that EY is the only one who has a chance of success... I personally wouldn't give him, or any other human, that much credit, but I do acknowledge the possibility.
Replies from: AspiringKnitter, soreff↑ comment by AspiringKnitter · 2011-12-20T01:34:20.106Z · LW(p) · GW(p)
Thank you. I've just updated on that. I now consider it even more likely that the world will be destroyed within my lifetime.
Replies from: Bugmaster, JoshuaZ, soreff↑ comment by Bugmaster · 2011-12-20T01:38:04.505Z · LW(p) · GW(p)
For what it's worth, I disagree with many (if not most) LessWrongers (LessWrongites? LessWrongoids?) on the subject of the Singularity. I am far from convinced that the Singularity is even possible in principle, and I am fairly certain that, even if it were possible, it would not occur within my lifetime, or my (hypothetical) children's lifetimes.
EDIT: added a crucial "not" in the last sentence. Oops.
Replies from: Prismattic↑ comment by Prismattic · 2011-12-20T04:09:38.445Z · LW(p) · GW(p)
I also think the singularity is much less likely than most LessWrongers do. Which is quite comforting, because my estimated probability for the singularity is still higher than my estimated probability that the problem of friendly AI is tractable.
Just chiming in here because I think the question about the singularity on the LW survey was not well-designed to capture the opinion of those who don't think it likely to happen at all, so the median LW perception of the singularity may not be what it appears.
↑ comment by JoshuaZ · 2011-12-20T02:01:27.190Z · LW(p) · GW(p)
Yeah... spending time on Less Wrong helps one in general appreciate how much existential risk there is, especially from technologies, and how little attention is paid to it. Thinking about the Great Filter will just make everything seem even worse.
↑ comment by soreff · 2011-12-23T20:13:02.974Z · LW(p) · GW(p)
A runaway AI might wind up being very destructive, but quite probably not wholly destructive. It seems likely that it would find some of the knowledge humanity has built up over the millennia useful, regardless of what specific goals it had. In that sense, I think that even if a paperclip optimizer is built and eats the world, we won't have been wholly forgotten in the way we would if, e.g., the sun exploded and vaporized our planet. I don't find this to be much comfort, but how comforting or not it is is a matter of personal taste.
↑ comment by soreff · 2011-12-22T00:42:02.425Z · LW(p) · GW(p)
As I mentioned here, I've seen a presentation on Watson, and it looks to me like its architecture is compatible with recursive self-improvement (though that is not the immediate goal for it). Clippy does seem rather probable...
One caveat: I tend to overestimate risks. I overestimated the severity of y2k, and I've overestimated a variety of personal risks.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-22T01:59:09.093Z · LW(p) · GW(p)
"I see that you're trying to extrapolate human volition. Would you like some help ?" converts the Earth into computronium
Replies from: David_Gerard↑ comment by David_Gerard · 2011-12-26T14:28:40.028Z · LW(p) · GW(p)
Soreff was probably alluding to User:Clippy, someone role-playing a non-FOOMed paperclip maximiser.
Though yours is good too :-)
Replies from: soreff, Bugmaster↑ comment by soreff · 2011-12-26T15:11:28.163Z · LW(p) · GW(p)
Yes, I was indeed alluding to User:Clippy. Actually, I should have tweaked the reference, since it is the possibility of a paperclip maximiser that has FOOMed that really represents the threat.
↑ comment by [deleted] · 2011-12-20T04:30:30.608Z · LW(p) · GW(p)
EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn't appeal to me at all.
Yeah, this is Eliezer inferring too much from the most-accessible information about sex drive from members of his tribe, so to speak -- it's not so very long ago in the West that female sex drive was perceived as insatiable and vast, with women being nearly impossible for any one man to please in bed; there are still plenty of cultures where that's the case. But he's heard an awful lot of stories, couched in evolutionary language, about why a cultural norm of his society -- one broadcast all over the place in media and entertainment -- reflects the evolutionary history of humanity.
He's confused about human nature. If Eliezer built a properly-rational AI by his own definitions to resolve the difficulty, and it met all his other stated criteria for FAI, it would tell him he'd gotten confused.
Replies from: Kaj_Sotala, Prismattic↑ comment by Kaj_Sotala · 2011-12-20T07:26:39.654Z · LW(p) · GW(p)
Well, there do seem to be several studies, including at least one cross-cultural study, that support the "the average female sex drive is lower" theory.
Replies from: None↑ comment by [deleted] · 2011-12-20T08:00:14.965Z · LW(p) · GW(p)
These studies also rely on self-reported sexual feelings and behavior, as reported by the subset of the population willing to volunteer for such a study and answer questions such as "How often do you masturbate?", and right away you've got interference from "signalling what you think sounds right", "signalling what you're willing to admit," "signalling what makes you look impressive", and "signalling what makes you seem good and not deviant by the standards of your culture." It is notoriously difficult to generalize such studies -- they best serve as descriptive accounts, not causal ones.
Many of the relevant factors are also difficult to pin down; testosterone clearly has an effect, but it's a physiological correlate that doesn't suffice to explain the patterns seen (which, again, are themselves to be taken with a grain of salt, and don't signal anything causal). The jump to a speculative account of evolutionary sexual strategies is even less warranted. For a good breakdown, see here: http://www.csun.edu/~vcpsy00h/students/sexmotiv.htm
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2011-12-21T10:23:56.979Z · LW(p) · GW(p)
These are valid points, but you said that there still exist several cultures where women are considered to be more sexual than men. Shouldn't they then show up in the international studies? Or are these cultures so rare as to not be included in the studies?
Also, it occurs to me that whether or not the differences are biological is somewhat of a red herring. If they are mainly cultural, then it means that it will be easier for an FAI to modify them, but that doesn't affect the primary question of whether they should be modified. Surely that question is entirely independent of the question of their precise causal origin?
Replies from: None, None↑ comment by [deleted] · 2011-12-21T17:02:05.176Z · LW(p) · GW(p)
An addendum: There's also the "Ecological fallacy" to consider -- where a dataset suggests that on the mean, a population A has property P and population B has P+5, but randomly selecting members of each population will give very different results due to differences in distribution.
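(A minimal simulation of that point. The distributions and the 5-point gap are made up purely for illustration.)

```python
import random

random.seed(0)

# Illustrative only: B's mean is 5 points above A's, but B is far more spread out.
pop_a = [random.gauss(100, 3) for _ in range(100_000)]
pop_b = [random.gauss(105, 25) for _ in range(100_000)]

mean_a = sum(pop_a) / len(pop_a)
mean_b = sum(pop_b) / len(pop_b)

# Despite B's higher mean, a random member of A beats a random member of B
# a large fraction of the time.
p_a_wins = sum(a > b for a, b in zip(pop_a, pop_b)) / len(pop_a)

print(f"mean A = {mean_a:.1f}, mean B = {mean_b:.1f}")
print(f"P(random A > random B) = {p_a_wins:.2f}")   # ~0.42, not ~0
```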
↑ comment by [deleted] · 2011-12-21T16:49:07.892Z · LW(p) · GW(p)
These are valid points, but you said that there still exist several cultures where women are considered to be more sexual than men. Shouldn't they then show up in the international studies? Or are these cultures so rare as to not be included in the studies?
Actually it's entirely possible to miss a lot of detail while ostensibly sampling broadly. If you sample citizens in Bogota, Mumbai, Taibei, Kuala Lumpur, Ashgabat, Cleveland, Tijuana, Reykjavik, London, and Warsaw, that's pretty darn international and thus a good cross-cultural representation of humanity, right? Surely any signals that emerge from that dataset are probably at least suggestive of innate human tendency?
Well, actually, no. Those are all major cities deeply influenced and shaped by the same patterns of mercantile-industrialist economics that came out of parts of Eurasia and spread over the globe during the colonial era and continue to do so -- and that influence has worked its way into an awful lot of everyday life for most of the people in the world. It would be like assuming that using wheels is a human cultural universal, because of their prevalence.
An even better analogy here would be if you one day take a bit of plant tissue and, looking under a microscope, spot the mitochondria. Then you find the same thing in animal tissue. When you see it in fungi, too, you start to wonder. You go sampling and sampling all the visible organisms you can find, and even ones from far away, and they all share this trait. It's only Archaea and Bacteria that seem not to. Well, in point of fact there are more types of those than of anything else, significantly more varied and divergent than the other organisms you were looking at put together. It's not a basal condition for living things, it's just a trait that's nearly universal in the ones you're most likely to notice or think about. (The break in the analogy being that mitochondria are a matter of ancestry and subsequent divergence, while many of the human cultural similarities you'd observe in my above example are a matter of alternatives being winnowed and pushed to the margins, and existing similarities amplified by the effects of a co-opting culture-plex that's come to dominate the picture.)
If they are mainly cultural, then it means that it will be easier for an FAI to modify them, but that doesn't affect the primary question of whether they should be modified. Surely that question is entirely independent of the question of their precise causal origin?
It totally is, but my point was that Eliezer has expressed that it's a matter of biology, and if I'm correct in my thoughts he's wrong about that -- and in my understanding of how he feels FAI would behave, this would lead to the behavior I described (FAI explains to Eliezer that he's gotten that wrong).
↑ comment by Prismattic · 2011-12-20T04:45:11.235Z · LW(p) · GW(p)
As I mentioned the last time this topic came up, there is evidence that giving supplementary testosterone to humans of either sex tends to raise libido, as many FTM trans people will attest, for example. While there is a lot of individual variation, expecting that on average men will have greater sex drive than women is not based purely on theory.
The pre-Victorian Western perception of female sexuality was largely defined by a bunch of misogynistic Cistercian monks, who, we can be reasonably confident, were not basing their conclusions on a lot of actual experience with women, given that they were cloistered celibates.
Replies from: None↑ comment by [deleted] · 2011-12-20T07:26:29.174Z · LW(p) · GW(p)
I don't dispute the effects of testosterone; I just don't think that sex drive is reducible to that, and I tend to be suspicious when evolutionary psychology is proposed for what may just as readily be explained as culture-bound conditions.
It's not just the frequency of the desire to copulate that matters, after all -- data on relative "endurance" and ability to go for another round, certain patterns of rates and types of promiscuity, and other things could as readily be construed to provide a very different model of human sexual evolution, and at the end of the day it's a lot easier to come up with plausible-sounding models that accord pretty well with one's biases than be certain we've explored the actual space of evolutionary problems and solutions that led to present-day humanity.
I tend to think that evolutionary psychological explanations need to meet the threshold test that they can explain a pattern of behavior better than cultural variance can; biases and behaviors being construed as human nature ought to be based on clearly-defined traits that give reliable signals, and are demonstrable across very different branches of the human cultural tree.
↑ comment by Emile · 2011-12-20T00:07:39.695Z · LW(p) · GW(p)
Regarding what EY has proposed that I don't want, on the catperson post (in a comment), EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn't appeal to me at all. (Sorry, but I don't WANT to want more sex.
Look at it this way - would you agree to trade getting a slightly higher sex drive, in exchange for living in a world where rape, divorce, and unwanted long-term celibacy ("forever alone") are each an order of magnitude rarer than they are in our world?
(That is assuming that such a change in sex drive would have those results, which is far from certain.)
Replies from: Alicorn, AspiringKnitter↑ comment by Alicorn · 2011-12-20T01:31:53.866Z · LW(p) · GW(p)
This is an unfair question. If we do the Singularity right, nobody has to accept unwanted brain modifications in order to solve general societal problems. Either we can make the brain modifications appealing via non-invasive education or other gentle means, or we can skip them for people who opt out/don't opt in. Not futzing with people's minds against their wills is a pretty big deal! I would be with Aspiring Knitter in opposing a population-wide forcible nudge to sex drive even if I bought the exceptionally dubious proposition that such a drastic measure would be called for to fix the problems you list.
Replies from: Emile↑ comment by Emile · 2011-12-20T09:23:48.393Z · LW(p) · GW(p)
I didn't mean to imply forcing unwanted modifications on everybody "for their own good" - I was talking about under what conditions we might accept things we don't like (I don't think this is a very plausible singularity scenario, except as a general "how weird things could get").
I don't like limitations on my ability to let my sheep graze, but I may accept them if everyone does so and it reduces overgrazing. I may not like limits on my ability to own guns, but I may accept them if it means living in a safer society. I may not like modifications to my sex drive, but I may be willing to agree in exchange for living in a better society.
In principle, we could find ways of making everybody better off. Of course, the details of how such an agreement is reached matter a lot - markets, democracy, competition between countries, a machine-God enforcing its will.
↑ comment by AspiringKnitter · 2011-12-20T01:23:14.485Z · LW(p) · GW(p)
Since when is rape motivated primarily by not getting laid? (Or divorce, for that matter?)
But never mind. We have different terminal values here. You-- I assume-- seek a lot of partners for everyone, right? At least, others here seem to be non-monogamous. You won't agree with me, but I believe in lifelong monogamy or celibacy, so while increasing someone's libido could be useful in your value system, it almost never would in mine. Further, it would serve no purpose for me to have a greater sex drive because I would respond by trying to stifle it, in accordance with my principles. I hope you at least derive disutility from making someone uncomfortable.
Seriously, the more I hear on LessWrong, the more I anticipate having to live in a savage reservation a la Brave New World. But pointing this out to you doesn't change your mind because you value having most people be willing to engage in casual sex (am I wrong here? I don't know you, specifically).
Replies from: Bugmaster, JoachimSchipper, cousin_it, Kaj_Sotala, Emile, juliawise↑ comment by Bugmaster · 2011-12-20T02:36:32.184Z · LW(p) · GW(p)
But pointing this out to you doesn't change your mind because you value having most people be willing to engage in casual sex (am I wrong here? I don't know you, specifically)
I can't speak for Emile, but my own views look something like this:
- I see nothing wrong with casual sex (as long as all partners fully consent, of course), or any other kind of sex in general (again, assuming fully informed consent).
- Some studies (*) have shown that humans are generally pretty poor at monogamy.
- People whose sex drives are unsatisfied often become unhappy.
- In light of this, forcing monogamy on people is needlessly oppressive, and leads to unnecessary suffering.
- Therefore, we should strive toward building a society where monogamy is not forced upon people, and where people's sex drives are generally satisfied.
Thus, I would say that I value "most people being able to engage in casual sex". I make no judgement, however, whether "most people should be willing to engage in casual sex". If you value monogamy, then you should be able to engage in monogamous sex, and I can see no reason why anyone could say that your desires are wrong.
(*) As well as many of our most prominent politicians. Heh.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T03:35:53.986Z · LW(p) · GW(p)
I'm glad I actually asked, then, since I've learned something from your position, which is more sensible than I assumed. Upvoted because it's so clearly laid out even though I don't agree.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-20T03:46:39.088Z · LW(p) · GW(p)
Thanks, I appreciate it. I am still interested in hearing why you don't agree, but I understand that this can be a sensitive topic...
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T04:50:02.239Z · LW(p) · GW(p)
Oh, sorry, I thought that was obvious. Illusion of transparency, I guess. God says we should be monogamous or celibate. Of course, I doubt it'd be useful to go around trying to police people's morals.
Replies from: JoshuaZ, Bugmaster, APMason↑ comment by JoshuaZ · 2011-12-20T05:07:38.122Z · LW(p) · GW(p)
Sorry, where does God say this? You are a Christian right? I'm not aware of any verse in either the OT or NT that calls for monogamy. Jacob has four wives, Abraham has two, David has quite a few and Solomon has hundreds. The only verses that seem to say anything negative in this regard are some which imply that Solomon just has way too many. The text strongly implies that polyandry is not ok but polygyny is fine. The closest claim is Jesus's point about how divorcing one woman and then marrying another is adultery, but that's a much more limited claim (it could be that the other woman was unwilling to be a second wife for example). 1 Timothy chapter 3 lists qualifications for being a church leader which include having only one wife. That would seem to imply that having more than one wife is at worst suboptimal.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T05:14:32.971Z · LW(p) · GW(p)
That is a really good point. (Actually, Jesus made a stronger point than that: even lusting after someone you're not married to is adultery.)
You know, you could actually be right. I'll have to look more carefully. Maybe my understanding has been biased by the culture in which I live. Upvoted for knowledgeable rebuttal of a claim that might not be correct.
Replies from: MixedNuts↑ comment by MixedNuts · 2011-12-20T07:49:44.277Z · LW(p) · GW(p)
Is that something like "Plan to take steps to have sex with the person", or like "Experience a change in your pants"? (Analogous question for the "no coveting" commandment, too.) Because if you think some thoughts are evil, you really shouldn't build humans with a brain that automatically thinks them. At least have a little "Free will alert: Experience lust? (Y/n)" box pop up.
↑ comment by Bugmaster · 2011-12-20T05:18:23.218Z · LW(p) · GW(p)
In addition to what APMason said, I think that many Christians would disagree with your second statement:
I doubt it'd be useful to go around trying to police people's morals.
Some of them are campaigning right now on the promise that they will "police people's morals"...
↑ comment by APMason · 2011-12-20T05:01:28.276Z · LW(p) · GW(p)
I don't really know if I should say this - whether this is the place, or if the argument's moved well beyond this point for everyone involved, but: where and when did God say that, and if, as I suspect, it's the Bible, doesn't s/he also say we shouldn't wear clothing of two different kinds of fibre at the same time?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T05:22:10.831Z · LW(p) · GW(p)
Yes. That applies to the Jews but not to everyone else. You're allowed to ignore Leviticus and Exodus if you're not Jewish. EY probably knows this, since it's actually Jewish theology (note that others have looked at the same facts and come to the conclusion that the rules don't apply to anyone anymore and stopped applying when Jesus died, so take into account that someone (I don't think it's me) has done something wrong here, as per Aumann's agreement theorem).
Replies from: APMason, Ezekiel, thomblake↑ comment by APMason · 2011-12-20T05:29:02.532Z · LW(p) · GW(p)
Well, I suppose what I should do is comb the Bible for some absurd commandment that does apply to non-Jews, but frankly I'm impressed by the loophole-exploiting nature of your reply, and am inclined to concede the point (also, y'know - researching the Bible... bleh).
EDIT: And by concede the point, I of course mean concede that you're not locally inconsistent around this point, not that what you said about monogamy is true.
Replies from: AspiringKnitter, Prismattic↑ comment by AspiringKnitter · 2011-12-20T05:35:45.574Z · LW(p) · GW(p)
If you want Bible verses to use to dis Christianity, I suggest 1 Corinthians 14:33-35 and Luke 22:19, 20.
Replies from: Morendil, APMason↑ comment by Morendil · 2011-12-20T10:15:53.460Z · LW(p) · GW(p)
I'd be interested in your ideas of what books you'd recommend a non-Christian read.
The last time I entered into an earnest discussion of spirituality with a theist friend of mine, what I wanted to bend my brain around was how he could claim to derive his faith from studying the Bible, when (from the few passages I've read myself) it's a text that absolutely does not stand literal interpretation. (For instance, I wanted to know how he reconciled an interest in science, in particular the science of evolution, with a Bible that literally argues for a "young Earth" incompatible with the known duration implied by the fossil and geological records.)
Basically I wanted to know precisely what his belief system consisted of, which was very hard given the many different conceptions of Christianity I bump into. I've read "Mere Christianity" on his advice, but I found it far from sufficient - at once way too specific on some points (e.g. a husband should be in charge in a household), and way too slippery on the fundamentals (e.g. what is prayer really about).
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T22:00:08.629Z · LW(p) · GW(p)
I've formed my beliefs from a combination of the Bible, asking other Christians, a cursory study of the secular history of the Roman Empire, internet discussions, articles and gut feelings.
That said, if you have specific questions about anything, feel free to ask me.
Replies from: TimS, hairyfigment↑ comment by TimS · 2011-12-21T00:53:12.521Z · LW(p) · GW(p)
I'm curious what you think of evidence that early Christianity adopted the date of Christmas and other rituals from pre-existing pagan religions?
ETA: I'm not saying that this would detract from the central Christian message (i.e. Jesus sacrificing himself to redeem our sins). But that sort of memetic infection seems like a strange thing to happen to an objective truth.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T01:14:23.149Z · LW(p) · GW(p)
I think it indicates that Christians have done stupid things and one must be discerning about traditions rather than blindly accepting everything taught in church as 100% true, and certainly not everything commonly believed by laypersons!
It's not surprising (unless this is hindsight bias-- it might actually BE surprising, considering how unwilling Christians should have been to make compromises like that, but a lot of time passed between Jesus's death and Christianity taking over Europe, didn't it?) that humans would be humans. I can see where I might have even considered the same in that situation-- everyone likes holidays, everyone should be Christian, pagans get a fun solstice holiday, Christians don't, this is making people want to be Christian less. Let's fix it by having our own holiday. At least then we can make it about Jesus, right?
The worship and deification of Mary is similar, which is why I don't pray to her.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T01:28:47.235Z · LW(p) · GW(p)
That's interesting.
So, suppose I find a church I choose (for whatever reason) to associate with. We seem to agree that I shouldn't believe everything taught in that church, and I shouldn't believe everything believed by members of that church... I should compare those teachings and beliefs to my own expectations about and experiences of the world to decide what I believe and what I don't, just as you have used your own expectations about and experiences of human nature to decide whether to believe various claims about when Jesus was born, what properties Mary had, etc.
Yes? Or have I misunderstood you?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T02:10:08.516Z · LW(p) · GW(p)
Yes. Upvoted for both understanding me and trying to avoid the illusion of transparency.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T02:21:27.432Z · LW(p) · GW(p)
OK, cool.
So, my own experience of having compared the teachings and beliefs of a couple of churches I was for various reasons associated with to my own expectations about and experiences of the world was that, after doing so, I didn't believe that Jesus was exceptionally divine or that the New Testament was a particularly reliable source of either moral truths or information about the physical world.
Would you say that I made an error in my evaluations?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T02:53:34.206Z · LW(p) · GW(p)
Possibly. Or you may be lacking information; if your assumptions were wrong at the beginning and you used good reasoning, you'd come to the wrong conclusion.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T02:58:07.369Z · LW(p) · GW(p)
Do you have particular assumptions in mind here? Or is this a more general statement about the nature of reasoning?
Replies from: AspiringKnitter, xxd↑ comment by AspiringKnitter · 2011-12-21T03:25:04.949Z · LW(p) · GW(p)
It's a statement so general you probably learned it on your first day as a rationalist.
Replies from: CronoDAS, Multiheaded↑ comment by CronoDAS · 2011-12-21T23:38:08.673Z · LW(p) · GW(p)
In other words, "Garbage in, garbage out?"
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T01:03:03.153Z · LW(p) · GW(p)
Yes.
↑ comment by Multiheaded · 2012-01-07T01:26:07.552Z · LW(p) · GW(p)
Ehh... even when you don't mean it literally, you probably shouldn't say such things as "first day as a rationalist". It's kind of hard to increase one's capability for rational thinking without keeping in mind at all times how it's a many-sided gradient with more than one dimension.
↑ comment by xxd · 2011-12-21T03:08:43.681Z · LW(p) · GW(p)
Here's one: let's say that the world is a simulation AND that strongly godlike AI is possible. Then, to all intents and purposes, even though the bible in the simulation is provably inconsistent, the existence of a being indistinguishable from the God of such a bible would not be ruled out: the inhabitants of the world are constrained by the rules of physics in their own state machines or objects or whatever, but the universe containing the simulation is subject to its own set of physics and logic, which may therefore vary even inside the simulation without being detectable to you or me.
Replies from: jacob_cannell, xxd↑ comment by jacob_cannell · 2011-12-23T06:38:34.390Z · LW(p) · GW(p)
Yes, of course this is possible. So is the Tipler scenario. However, the simulation argument just as easily supports any of a vast number of god-theories, of which Christianity is just one of many. That being said, it does support judeo-xian type systems more than, say, Hinduism or Vodun.
There may even be economical reasons to create universes like ours, but that's a very unpopular position on LW.
↑ comment by hairyfigment · 2011-12-22T06:11:08.422Z · LW(p) · GW(p)
How do you interpret Romans 13:8-10?
To me it seems straightforward. Instead of spelling out in detail what rules you should follow in a new situation -- say, if the authorities who Paul just got done telling you to obey order you to do something 'wrong' -- this passage gives the general principle that supposedly underlies the rules. That way you can apply it to your particular situation and it'll tell you all you need to do as a Christian. Paul does seem to think that in his time and place, love requires following a lot of odd rules. But by my reading this only matters if you plan to travel back in time (or if you personally plan to judge the dead).
But I gather that a lot of Christians disagree with me. I don't know if I understand the objection -- possibly they'd argue that we lack the ability to see how the rules follow from loving one's neighbor, and thus we should expect God to personally spell out every rule-change. (So why tell us that this principle underlies them all?)
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T06:34:55.739Z · LW(p) · GW(p)
Using exegesis (meaning I'm not asking what it says in Greek or how else it might be translated, and I don't think I need to worry much about cultural norms at the time). But that doesn't tell you much.
To me it seems straightforward. Instead of spelling out in detail what rules you should follow in a new situation -- say, if the authorities who Paul just got done telling you to obey order you to do something 'wrong' -- this passage gives the general principle that supposedly underlies the rules. That way you can apply it to your particular situation and it'll tell you all you need to do as a Christian.
Yes, I agree. Also, if you didn't know what love said to do in your situation, the rules would be helpful in figuring it out.
Paul does seem to think that in his time and place, love requires following a lot of odd rules.
That gets into a broader way of understanding the Bible. I don't know enough about the time and place to talk much about this.
But I gather that a lot of Christians disagree with me. I don't know if I understand the objection -- possibly they'd argue that we lack the ability to see how the rules follow from loving one's neighbor, and thus we should expect God to personally spell out every rule-change. (So why tell us that this principle underlies them all?)
The objection I can think of is that people might want to argue in favor of being able to do whatever they want, even if it doesn't follow from God's commands, and not listen even to God's explicit prohibitions. Hence, as a general principle, it's better to obey the rules because more people who object to them (since the New Testament already massively reduces legalism anyway) will be trying to get away with violating the spirit of the rules than will be actually correct in believing that the spirit of the rules is best obeyed by violating the letter of them. Another point would be that if an omniscient being gives you a heuristic, and you are not omniscient, you'd probably do better to follow it than to disregard it.
Replies from: hairyfigment↑ comment by hairyfigment · 2011-12-23T00:49:10.876Z · LW(p) · GW(p)
Given that the context has changed, seems to me omniscience should only matter if God wants to prevent people other than the original audience from misusing or misapplying the rules. (Obviously we'd also need to assume God supplied the rules in the first place!)
Now this does seem like a fairly reasonable assumption, but doesn't it create a lot of problems for you? If we go that route then it no longer suffices to show or assume that each rule made sense in historical context. Now you need to believe that no possible change would produce better results when we take all time periods into account.
↑ comment by Prismattic · 2011-12-20T05:53:34.233Z · LW(p) · GW(p)
Well, I suppose what I should do is comb the Bible for some absurd commandment that does apply to non-Jews,
I can save you some time here. Just look up "seven laws of Noah" or "Noahide laws". That's pretty much it for commandments that apply to non-Jews.
Replies from: JoshuaZ, AspiringKnitter↑ comment by JoshuaZ · 2011-12-20T07:06:42.089Z · LW(p) · GW(p)
Note that the Noahide laws are the Jewish, not Christian, interpretation of this distinction. And there are no sources mentioning them that go back prior to the Jewish/Christian split. (The relevant sections of Talmud were written no earlier than 300 CE.) There's also some confusion over how those laws work. So for example, one of the seven Noahide prohibitions is the prohibition on illicit relations. But it isn't clear which prohibited relations are included. There's an opinion that this includes only adultery and incest and not any of the other Biblical sexual prohibitions (e.g. gay sex, marrying two sisters). There's a decent halachic argument for something of this form since Jacob marries two sisters. (This actually raises a host of other halachic/theological problems for Orthodox Jews because many of them believe that the patriarchs kept all 613 commandments. But this is a further digression...)
↑ comment by AspiringKnitter · 2011-12-20T06:50:42.579Z · LW(p) · GW(p)
And Jesus added the commandment not to lust after anyone you're not married to and not to divorce.
And I would never have dreamed of the stupidity until someone did it, but someone actually interpreted metaphors from Proverbs literally and concluded that "her husband is praised at the city gates" actually means "women should go to the city limits and hold up signs saying that their husbands are awesome" (which just makes no sense at all). But that doesn't count because it's a person being stupid. For one thing, that's descriptive, not prescriptive, and for another, it's an illustration of the good things being righteous gets you.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-20T21:32:45.004Z · LW(p) · GW(p)
And I would never have dreamed of the stupidity until someone did it, but someone actually interpreted metaphors from Proverbs literally and concluded that "her husband is praised at the city gates" actually means "women should go to the city limits and hold up signs saying that their husbands are awesome"
As a semi-militant atheist, I feel compelled to point out that, from my perspective, all interpretations of Proverbs as a practical guide to modern life look about equally silly...
↑ comment by Ezekiel · 2011-12-20T13:28:07.810Z · LW(p) · GW(p)
Upvoted for being the only non-Jew I've ever met to know that.
Replies from: wedrifid, MixedNuts, Bugmaster↑ comment by wedrifid · 2011-12-20T13:56:37.400Z · LW(p) · GW(p)
Upvoted for being the only non-Jew I've ever met to know that.
Really? Nearly everyone I grew up with was told that and I assume I wasn't the only one to remember. I infer that either you don't know many Christians, the subject hasn't come up while you were talking to said Christians or Christian culture in your area is far more ignorant of their religious theory and tradition than they are here.
Replies from: army1987, Ezekiel↑ comment by A1987dM (army1987) · 2011-12-20T17:34:36.263Z · LW(p) · GW(p)
I've heard that some rules are specifically supposed to apply only to Jews,¹ and I think most Christians have heard that at some point in their lives, but I don't think most of them remember having heard it, and very few know that not wearing clothing of two different kinds of fibre at the same time is one such rule.
¹ I remember Feynman's WTF reaction in Surely You're Joking to learning that Jews are not allowed to operate electric switches on Saturdays but they are allowed to pay someone else to do that.
↑ comment by TheOtherDave · 2011-12-20T17:39:30.773Z · LW(p) · GW(p)
There are different Jewish doctrinal positions on whether shabbos goyim -- that is, non-Jews hired to perform tasks on Saturdays that Jews are not permitted to perform -- are permissible.
Replies from: dlthomas↑ comment by MixedNuts · 2011-12-20T13:55:11.425Z · LW(p) · GW(p)
Do I get an upvote, too? I also know about what I should do if I want food I cook to be kosher (though I'm still a bit confused about food containing wheat).
Replies from: beoShaffer, wedrifid↑ comment by beoShaffer · 2011-12-20T13:59:37.625Z · LW(p) · GW(p)
I knew it too. I thought it was common knowledge among those with any non-trivial knowledge of non-folk Christian theology. Which admittedly isn't a huge subset of the population, but isn't that small in the West.
↑ comment by wedrifid · 2011-12-20T14:01:58.344Z · LW(p) · GW(p)
Do I get an upvote, too? I also know about what I should do if I want food I cook to be kosher (though I'm still a bit confused about food containing wheat).
I want an upvote too for knowing that if I touch a woman who has her period then I am 'unclean'. I don't recall exactly what 'unclean' means. I think it's like 'cooties'.
Replies from: Ezekiel↑ comment by thomblake · 2011-12-20T16:02:03.546Z · LW(p) · GW(p)
The Catholic explanation for this one is that the pope had a dream about a goat piñata.
Replies from: MixedNuts↑ comment by MixedNuts · 2011-12-20T18:08:59.409Z · LW(p) · GW(p)
If that's real, I want the whole story and references. If you made that up, I'm starting my own heresy around it.
Replies from: Oligopsony, thomblake↑ comment by Oligopsony · 2011-12-20T18:18:36.187Z · LW(p) · GW(p)
Acts 10:9-16:
On the morrow, as they went on their journey, and drew nigh unto the city, Peter went up upon the housetop to pray about the sixth hour:
And he became very hungry, and would have eaten: but while they made ready, he fell into a trance,
And saw heaven opened, and a certain vessel descending upon him, as it had been a great sheet knit at the four corners, and let down to the earth:
Wherein were all manner of fourfooted beasts of the earth, and wild beasts, and creeping things, and fowls of the air.
And there came a voice to him, Rise, Peter; kill, and eat.
But Peter said, Not so, Lord; for I have never eaten any thing that is common or unclean.
And the voice spake unto him again the second time, What God hath cleansed, that call not thou common.
This was done thrice: and the vessel was received up again into heaven.
If you read the rest of the chapter it's made clear that the dream is a metaphor for God's willingness to accept Gentiles as Christians, rather than a specific message about acceptable foods, but abandoning kashrut presumably follows logically from not requiring new Christians to count as Jews first, so.
(Upon rereading this, my first impression is how much creepier slaughtering land animals seems as a metaphor for proselytism than the earlier "fishers of men" stuff; maybe it's the "go, kill and eat" line or an easier time empathizing with mammals, Idunno. Presumably the way people mentally coded these things in first-century Palestine would differ from today.)
Replies from: MixedNuts↑ comment by JoachimSchipper · 2011-12-20T09:56:08.196Z · LW(p) · GW(p)
More sex does not have to mean more casual sex. There are lots of people in committed relationships (marriages) that would like to have more-similar sex drives. Nuns wouldn't want their libido increased, but it's not only for the benefit of the "playahs" either.
Also, I think the highest-voted comment ("I don't think that any relationship style is the best (...) However, I do wish that people were more aware of the possibility of polyamory (...)") is closer to the consensus than something like "everyone should have as many partners as much as possible". LW does assume that polyamory and casual sex are optional-but-ok, though.
↑ comment by cousin_it · 2012-01-10T00:38:37.025Z · LW(p) · GW(p)
Hmm, that doesn't sound right. I don't want to make celibate people uncomfortable, I just want to have more casual sex myself. Also I have a weaker altruistic wish that people who aren't "getting any" could "get some" without having to tweak their looks (the beauty industry) or their personality (the pickup scene). There could be many ways to make lots of unhappy people happier about sex and romance without tweaking your libido. Tweaking libido sounds a little pointless to me anyway, because PUA dogma (which I mostly agree with) predicts that people will just spend the surplus libido on attractive partners and leave unattractive ones in the dust, like they do today.
↑ comment by Kaj_Sotala · 2011-12-20T07:42:23.962Z · LW(p) · GW(p)
At least, others here seem to be non-monogamous.
Well, some are. From the last survey:
625 people (57.3%) described themselves as monogamous, 145 (13.3%) as polyamorous, and 298 (27.3%) didn't really know. These numbers were similar between men and women.
↑ comment by Emile · 2011-12-20T09:43:54.040Z · LW(p) · GW(p)
But never mind. We have different terminal values here. You-- I assume-- seek a lot of partners for everyone, right?
Nope! I don't have any certainty about what is best for society / mankind in the long run, but personally, I'm fine with monogamy, I'm married, have a kid, and don't think "more casual sex" is necessarily a good thing.
I can, however, agree with Eliezer when he says it might be better if human sex drives were better adjusted - not because I value seeing more people screwing around like monkeys, but because it seems that the way things are now results in a great deal of frustration and unhappiness.
I don't know about rape, but I expect that more sex drive for women and less for men would result in fewer divorces, because differences in sex drive are a frequent source of friction, as is infidelity (though it's not clear that different sex drives would result in less infidelity). That's not to say that hacking people's brains is the only solution, or the best one.
↑ comment by juliawise · 2011-12-21T22:14:04.619Z · LW(p) · GW(p)
I'm a married, monogamous person who would love to be able to adjust my sex drive to match my spouse's (and I think we would both choose to adjust up).
The Twilight books do an interesting riff of the themes of eternal life, monogamy, and extremely high sex drives.
Replies from: dlthomas↑ comment by dlthomas · 2011-12-21T22:19:48.123Z · LW(p) · GW(p)
If enough people feel similarly, and the discrepancy is real, the means will move toward each other through voluntary shifts, without forcing anything on anyone, incidentally.
Replies from: juliawise↑ comment by juliawise · 2011-12-21T22:34:16.519Z · LW(p) · GW(p)
What "voluntary shifts" do you mean? I agree that small shifts in sex drive are possible based on individual choice, but not large ones. Also, why do the means matter?
Replies from: dlthomas↑ comment by dlthomas · 2011-12-21T22:44:58.779Z · LW(p) · GW(p)
Ah, misunderstanding. I did not mean "shifts by volition alone", but "voluntary as opposed to forced" as pertains to AspiringKnitter's earlier worry about Yudkowsky forcing "some sort of compromise where we lowered male sex drive a little and increased female sex drive a little."
If interpreted as a prediction rather than a recommendation, it might happen through individual choice if the ability to modify these things directly becomes sufficiently available (and sufficiently safe, and sufficiently accepted, &c) because of impulses like those you expressed: pairings that desire to be monogamous and who are otherwise compatible might choose to self modify to be compatible on this axis as well, and this will move the averages closer together.
Replies from: juliawise↑ comment by Oligopsony · 2011-12-20T00:34:14.716Z · LW(p) · GW(p)
I think people's intuitions about sex drives are interesting, because they seem to differ. Earlier we had a discussion where it became clear that some conceptualized lust as something like hunger - an active harm unless fulfilled - while I had always generalized from one example and assumed lust simpliciter pleasant and merely better when fulfilled. Of course it would be inconvenient for other things if it were constantly present, and were I a Christian of the right type the ideal level would obviously be lower, so this isn't me at all saying you're crazy and incomprehensible in some veiled way - I just think these kinds of implicit conceptual differences are interesting.
↑ comment by kilobug · 2011-12-19T19:19:08.666Z · LW(p) · GW(p)
"EY suggested that we would have some sort of compromise where we lowered male sex drive a little and increased female sex drive a little, which doesn't appeal to me at all. Sorry, but I don't WANT to want more sex." OK, but would you agree to lowering male sex drive, then? Making it easier for those who want to follow a "no sex" path, and lowering the difference between males and females in terms of sex drive in the process? Eliezer's goal was to lower the difference between the desires of the two sexes so they could both be happier. He proposed doing it by making them both go towards the average, but aligning to the lower of the two would fit the purpose too.
Replies from: thomblake↑ comment by Bugmaster · 2011-12-20T00:47:33.037Z · LW(p) · GW(p)
[EY had] proposed modifications you want to make to people that I don't want made to me already...
I am actually rather curious to hear more about your opinion on this topic. I personally would jump at the chance to become "better, stronger, faster" (and, of course, smarter), as long as doing so was my own choice. It is very difficult for me to imagine a situation where someone I trust tells me, for example, "this implant is 100% safe, cheap, never breaks down, and will make you think twice as fast, do you want it?", and I answer "no thanks". You obviously disagree, so I'd love to hear your reasoning.
EDIT: Basically, what Cthulhoo said. Sorry Cthulhoo, I didn't see your comment earlier, somehow.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T01:32:44.969Z · LW(p) · GW(p)
Explained one example below.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-20T01:49:48.469Z · LW(p) · GW(p)
I was under the impression that your example dealt with a compulsory modification (higher sex drive for all women across the board), which is something I would also oppose; that's why I specified "...as long as doing so was my own choice" in my comment. But I am under the impression -- and perhaps I'm wrong about this -- that you would not choose any sort of technological enhancement of any of your capabilities. Is that so? If so, why?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T02:08:30.826Z · LW(p) · GW(p)
No. I apologize for being unclear. EY has proposed modifications I don't want, but that doesn't mean every modification he supports is one I don't want. I think I would be more skeptical than most people here, but I wouldn't refuse all possible enhancements as a matter of principle.
Replies from: Bugmaster↑ comment by Cthulhoo · 2011-12-19T13:23:21.404Z · LW(p) · GW(p)
Yay for not acting like EY wants, I guess. No offense or anything, EY, but you've proposed modifications you want to make to people that I don't want made to me already...
I would be very interested in reading your opinion on this subject. There is sometimes a confirmation effect/death spiral inside the LW community, and it would be nice to be exposed to a completely different point of view. I may then modify my beliefs fully, in part or not at all as a consequence, but it's valuable information for me.
↑ comment by Mitchell_Porter · 2011-12-24T03:00:31.116Z · LW(p) · GW(p)
I'll bet US$1000 that this is Will_Newsome.
Replies from: NancyLebovitz, katydee, gwern, Mitchell_Porter, NancyLebovitz, AspiringKnitter, None, Eliezer_Yudkowsky, shminux, shokwave, dlthomas, Caspian↑ comment by NancyLebovitz · 2011-12-26T21:50:51.327Z · LW(p) · GW(p)
Why did you frame it that way, rather than that AspiringKnitter wasn't a Christian, or was someone with a long history of trolling, or somesuch? It's much less likely to get a particular identity right than to establish that a poster is lying about who they are.
Replies from: Larks↑ comment by katydee · 2011-12-26T01:34:30.525Z · LW(p) · GW(p)
Wow. Now that you mention it, perhaps someone should ask AspiringKnitter what she thinks of dubstep...
Replies from: katydee↑ comment by katydee · 2011-12-26T02:26:04.058Z · LW(p) · GW(p)
Holy crap. I've never had a comment downvoted this fast, and I thought this was a pretty funny joke to boot. My mental estimate was that the original comment would end up resting at around +4 or +5. Where did I err?
Replies from: wedrifid, Jonii↑ comment by wedrifid · 2011-12-26T11:30:12.514Z · LW(p) · GW(p)
I left it alone because I have absolutely no idea what you are talking about. Dubstep? Will likes, dislikes and/or does something involving dubstep? (Google tells me it is a kind of dance music.)
Replies from: katydee↑ comment by katydee · 2011-12-26T18:56:46.060Z · LW(p) · GW(p)
Explanation: Will once (in)famously claimed that watching certain dubstep videos would bolster some of your math intuitions.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T11:09:07.967Z · LW(p) · GW(p)
(Er, well, math intuitions in a few specific fields, and only one or two rather specific dubstep videos. I'm not, ya know, actually crazy. The important thing is that that video is, as the kids would offensively say, "sicker than Hitler's kill/death ratio".) newayz I upvoted your original comment.
Replies from: thomblake, katydee↑ comment by thomblake · 2011-12-27T17:17:27.462Z · LW(p) · GW(p)
sicker than Hitler's kill/death ratio
Do we count assists now?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-27T17:29:44.772Z · LW(p) · GW(p)
And if so, who gets the credit for deaths by old age?
↑ comment by gwern · 2011-12-24T03:04:30.601Z · LW(p) · GW(p)
That's remarkably confident. This doesn't really read like Newsome to me (and how would one find out with sufficient certainty to decide a bet for that much?).
Replies from: wedrifid, Bugmaster↑ comment by wedrifid · 2011-12-26T11:38:07.418Z · LW(p) · GW(p)
That's remarkably confident.
Just how confident is it? It's a large figure, and colloquially people tend to confuse the size of a bet with the degree of confidence -- naming a bigger number is more of a dramatic social move. But ultimately, to make a bet at even odds, all Mitchell needs is to be confident that if someone takes him up on the bet, he has a 50% or better chance of being correct. The size of the bet only matters indirectly, as an incentive for others to do more research before betting.
Mitchell's actual confidence is some unspecified figure between 0.5 and 1 and is heavily influenced by how overconfident he expects others to be.
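(A minimal sketch of that break-even arithmetic, in Python; the probabilities below are illustrative, not anyone's actual credence:)

    def bet_ev(p_correct, stake=1000):
        """Expected dollar value to the bettor of an even-odds bet of `stake`,
        given probability p_correct of being right. Positive iff p_correct > 0.5."""
        return p_correct * stake - (1 - p_correct) * stake

    for p in (0.5, 0.6, 0.9):
        print(p, bet_ev(p))  # roughly 0, 200, and 800 dollars respectively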
Replies from: Maelin, FAWS, gwern↑ comment by Maelin · 2011-12-30T09:11:19.168Z · LW(p) · GW(p)
But ultimately to make a bet at even odds all Mitchell needs is to be confident that if someone takes him up on the bet then he has 50% or more chance of being correct. The size of the bet only matters indirectly as an incentive for others to do more research before betting.
This would only be true if money had linear utility value [1]. I, for example, would not take a $1000 bet at even odds even if I had 75% confidence of winning, because with my present financial status I just can't afford to lose $1000. But I would take such a bet at $100.
The utility of winning $1000 is not the negative of the utility of losing $1000.
[1] or, to be precise, if it were approximately linear in the range of current net assets +/- $1000
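(A minimal sketch of the non-linearity point, assuming log utility of wealth and a made-up $1,100 bankroll; both numbers are purely illustrative:)

    import math

    def eu_gain(wealth, stake, p_win):
        """Change in expected log-utility from accepting an even-odds bet of
        `stake` at win probability p_win, for an agent with log utility of wealth."""
        win = p_win * math.log(wealth + stake)
        lose = (1 - p_win) * math.log(wealth - stake)
        return win + lose - math.log(wealth)

    print(eu_gain(1100, 1000, 0.75))  # negative: decline the $1000 bet
    print(eu_gain(1100, 100, 0.75))   # positive: accept the $100 bet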
Replies from: wedrifid↑ comment by wedrifid · 2011-12-30T09:13:50.580Z · LW(p) · GW(p)
The utility of winning $1000 is not the negative of the utility of losing $1000.
From what I have inferred about Mitchell's financial status the approximation seemed safe enough.
Replies from: Maelin↑ comment by Maelin · 2011-12-30T15:25:06.085Z · LW(p) · GW(p)
The utility of winning $1000 is not the negative of the utility of losing $1000.
From what I have inferred about Mitchell's financial status the approximation seemed safe enough.
Fair enough in this case, but it's important to avoid assuming that the approximation is universally applicable.
↑ comment by FAWS · 2011-12-26T23:00:24.434Z · LW(p) · GW(p)
In a case with extremely asymmetric information like this one, they actually are almost the same thing, since the only payoff you can reasonably expect is the rhetorical effect of offering the bet. Offering a bet that the other party can refuse, and about which they have effectively perfect information, can only lose you money (if money is the only thing the other party cares about and they act at least vaguely rationally).
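(One way to formalize that, as a sketch: assume the counterparty knows the truth and accepts only bets they will win, so the offerer collects nothing when right and pays the stake when wrong:)

    def offerer_ev(p_offerer_right, stake):
        """EV to the bet's offerer against a perfectly informed counterparty:
        the bet is accepted only in the worlds where the offerer is wrong."""
        return (1 - p_offerer_right) * (-stake)

    print(offerer_ev(0.9, 1000))  # -100.0: never positive, however confident the offerer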
↑ comment by gwern · 2011-12-26T16:18:56.961Z · LW(p) · GW(p)
Risk aversion and other considerations like gambler's ruin usually mean that people insist on substantial edges over just >50%. This can be ameliorated by wealth, but as far as I know, Porter is at best middle-class and not, say, a millionaire.
So your points are true and irrelevant.
Replies from: wedrifid↑ comment by Bugmaster · 2011-12-24T03:20:54.424Z · LW(p) · GW(p)
I have no idea who this Newsome character is, but I bet US$1 that there's no easy way to implement the answer to the question,
how would one find out with sufficient certainty to decide a bet for that much?
without invading someone's privacy, so I'm not going to play.
Replies from: Emile↑ comment by Emile · 2011-12-24T09:28:04.375Z · LW(p) · GW(p)
Agree on a trusted third party (gwern, Alicorn, NancyLebowitz ... high-karma longtimers who showed up in this thread), and have AK call them on the phone, confirming details, then have the third party confirm that it's not Will_Newsome.
... though the main problem would be, do people agree to bet before or after AK agrees to such a scheme?
Replies from: AspiringKnitter, Alicorn, NancyLebovitz↑ comment by AspiringKnitter · 2011-12-24T09:38:42.507Z · LW(p) · GW(p)
How would gwern, Alicorn or NancyLebowitz confirm that anything I said by phone meant AspiringKnitter isn't Will Newsome? They could confirm that they talked to a person. How could they confirm that that person had made AspiringKnitter's posts? How could they determine that that person had not made Will Newsome's posts?
Replies from: Bugmaster↑ comment by Bugmaster · 2012-01-04T00:02:29.447Z · LW(p) · GW(p)
At the very least, they could dictate an arbitrary passage (or an MD5 hash) to this person who claims to be AK, and ask them to post this passage as a comment on this thread, coming from AK's account. This would not definitively prove that the person is AK, but it might serve as a strong piece of supporting evidence.
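(A minimal sketch of that challenge in Python; the passage is obviously a placeholder:)

    import hashlib

    challenge = "an arbitrary passage dictated by the arbiter over the phone"
    digest = hashlib.md5(challenge.encode("utf-8")).hexdigest()
    print(digest)  # the person on the phone posts this hex string from AK's account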
In addition, once the "AK" persona and the "WillNewsome" persona each post a sufficiently large corpus of text, we could run some textual analysis algorithms on it to determine if their writing styles are similar; Markov Chains are surprisingly good at this (considering how simple they are to implement).
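(A minimal sketch of that Markov-chain approach; the training strings below are made-up stand-ins for each poster's comment history, and the add-alpha smoothing over an assumed 128-symbol alphabet is crude:)

    import math
    from collections import Counter, defaultdict

    def train(text, order=2):
        """Character-level Markov model: next-char counts per `order`-char context."""
        counts = defaultdict(Counter)
        for i in range(len(text) - order):
            counts[text[i:i + order]][text[i + order]] += 1
        return counts

    def avg_log_likelihood(model, text, order=2, alpha=0.5):
        """Smoothed per-character log-likelihood of `text` under `model`."""
        total, n = 0.0, 0
        for i in range(len(text) - order):
            ctx, nxt = text[i:i + order], text[i + order]
            c = model[ctx]
            total += math.log((c[nxt] + alpha) / (sum(c.values()) + alpha * 128))
            n += 1
        return total / max(n, 1)

    ak_model = train("i would probably have said something rather different " * 20)
    wn_model = train("more problems more meta, moar willpower, ha ha lol " * 20)
    disputed = "i would probably have said it differently"
    print(avg_log_likelihood(ak_model, disputed), avg_log_likelihood(wn_model, disputed))
    # The higher (less negative) score marks the closer style match.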
The problem of determining a person's identity on the Internet, and doing so in a reasonably safe way, is an interesting challenge. But in practice, I don't really think it matters that much, in this case. I care about what the "AK" persona writes, not about who they are pretending not to be.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-01-04T00:07:31.854Z · LW(p) · GW(p)
In addition, once the "AK" persona and the "WillNewsome" persona each post a sufficiently large corpus of text, we could run some textual analysis algorithms on it to determine if their writing styles are similar; Markov Chains are surprisingly good at this (considering how simple they are to implement).
How about doing this already, with all the stuff they've written before the original bet?
↑ comment by Alicorn · 2011-12-24T15:52:47.632Z · LW(p) · GW(p)
I know Will Newsome in real life. If a means of arbitrating this bet is invented, I will identify AspiringKnitter as being him or not by visual or voice for a small cut of the stakes. (If it doesn't involve using Skype, telephone, or an equivalent, and it's not dreadfully inconvenient, I'll do it for free.)
↑ comment by NancyLebovitz · 2011-12-27T12:30:20.957Z · LW(p) · GW(p)
A sidetrack: People seem to be conflating AspiringKnitter's identity as a Christian and a woman. Female is an important part of not being Will Newsome, but suppose that AspiringKnitter were a male Christian and not Will Newsome. Would that make a difference to any part of this discussion?
More identity issues: My name is Nancy Lebovitz with a v, not a w.
Replies from: Emile↑ comment by Emile · 2011-12-27T14:40:19.258Z · LW(p) · GW(p)
Sorry 'bout the spelling of your name, I wonder if I didn't make the same mistake before ...
Well, the biggest thing AK being a male non-Will Christian would change is that he would lose an easy way to prove to a third party that he's not Will Newsome and thus win a thousand bucks (though the important part is not exactly being female, it's having a recognizably female voice on the phone, which is still pretty highly correlated).
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-27T15:15:31.024Z · LW(p) · GW(p)
Rationalist lesson that I've derived from the frequency that people get my name wrong: It's typical for people to get it wrong even if I say it more than once, spell it for them, and show it to them in writing. I'm flattered if any of my friends start getting it right in less than a year.
Correct spelling and pronunciation of my name is a simple, well-defined, objective matter, and I'm in there advocating for it, though I cut people slack if they're emotionally stressed.
This situation suggests that a tremendous amount of what seems like accurate perception is actually sloppy filling in of blanks. Less Wrong has a lot about cognitive biases, but not so much about perceptual biases.
Replies from: army1987↑ comment by A1987dM (army1987) · 2011-12-30T17:39:11.662Z · LW(p) · GW(p)
This situation suggests that a tremendous amount of what seems like accurate perception is actually sloppy filling in of blanks.
This is a feature, not a bug. Natural language has lots of redundancy, and if we read one letter at a time rather than in word-sized chunks we would read much more slowly.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-30T18:21:16.850Z · LW(p) · GW(p)
I think you have causality reversed here. It's the redundancy of our languages that's the "feature" -- or, more precisely, the workaround for the previously existing hardware limitation. If our perceptual systems did less "filling in of blanks," it seems likely that our languages would be less redundant -- at least in certain ways.
Replies from: army1987↑ comment by A1987dM (army1987) · 2011-12-30T19:32:02.950Z · LW(p) · GW(p)
I think redundancy was originally there to counteract noise, of which there was likely a lot more in the ancestral environment, and as a result there's more-than-enough of it in such environments as reading text written in a decent typeface one foot away from your face, and the brain can then afford to use it to read much faster. (It's not that hard to read at 600 words per minute with nearly complete understanding in good conditions, but if someone was able to speak that fast in a not-particularly-quiet environment, I doubt I'd be able to understand much.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-30T20:38:23.793Z · LW(p) · GW(p)
Yeah, I agree with that.
↑ comment by Mitchell_Porter · 2011-12-29T05:14:07.959Z · LW(p) · GW(p)
I said
I'll bet US$1000 that this is Will_Newsome.
I think it's time to close out this somewhat underspecified offer of a bet. So far, AspiringKnitter and Eliezer expressed interest but only if a method of resolving the bet could be determined, Alicorn offered to play a role in resolving the bet in return for a share of the winnings, and dlthomas offered up $15.
I will leave the possibility of joining the bet open for another 24 hours, starting from the moment this comment is posted. I won't look at the site during that time. Then I'll return, see who (if anyone) still wants a piece of the action, and will also attempt to resolve any remaining conflicts about who gets to participate and on what terms. You are allowed to say "I want to join the bet, but this is conditional upon resolving such-and-such issue of procedure, arbitration, etc." Those details can be sorted out later. This is just the last chance to shortlist yourself as a potential bettor.
I'll be back in 24 hours.
Replies from: Mitchell_Porter, Steve_Rayhawk, ITakeBets, orthonormal↑ comment by Mitchell_Porter · 2011-12-30T05:30:20.370Z · LW(p) · GW(p)
And the winners are... dlthomas, who gets $15, and ITakeBets, who gets $100, for being bold enough to bet unconditionally. I accept their bets, I formally concede them, aaaand we're done.
Replies from: wedrifid, Solvent, ITakeBets, AspiringKnitter↑ comment by AspiringKnitter · 2011-12-30T06:46:44.460Z · LW(p) · GW(p)
What did they win money for?
Replies from: wedrifid, KPier↑ comment by wedrifid · 2011-12-30T07:23:49.035Z · LW(p) · GW(p)
What did they win money for?
Betting money. That is how such things work.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-30T07:58:53.691Z · LW(p) · GW(p)
You're such a dick. Haha. Upvoted.
↑ comment by KPier · 2011-12-30T07:36:23.511Z · LW(p) · GW(p)
You not being Will_Newsome. (I can't imagine how bizarre it must be to be watching this conversation from your perspective.)
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-30T21:07:34.671Z · LW(p) · GW(p)
Wait, but what changed that caused Mitchell_Porter to realize that?
Replies from: Mitchell_Porter, dlthomas, Kevin, ArisKatsaris↑ comment by Mitchell_Porter · 2011-12-31T06:11:30.236Z · LW(p) · GW(p)
I didn't exactly realize it, but I reduced the probability. My goal was never to make a bet, my goal was to sockblock Will. But in the end I found his protestations somewhat convincing; he actually sounded for a moment like someone earnestly defending himself, rather than like a joker. And I wasn't in the mood to re-run my comparison between the Gospel of Will and the Knitter's Apocryphon. So I tried to retire the bet in a fair way, since having an ostentatious unsubstantiated accusation of sockpuppetry in the air is almost as corrosive to community trust as it is to be beset by the real thing. (ETA: I posted this before I saw Kevin's comment, by the way!)
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-01-03T09:44:21.519Z · LW(p) · GW(p)
"Next time just don't be a dick and you won't lose a hundred bucks," says the unreflective part of my brain whose connotations I don't necessarily endorse but who I think does have a legitimate point.
↑ comment by ArisKatsaris · 2011-12-30T21:50:41.293Z · LW(p) · GW(p)
Mitchell asked Will directly at http://lesswrong.com/lw/b9/welcome_to_less_wrong/5jby so perhaps he just trusts Will not to lie when using the Will_Newsome account.
↑ comment by Steve_Rayhawk · 2011-12-30T05:12:30.254Z · LW(p) · GW(p)
I'll stake $500 if eligible.
When would the answer need to be known by?
↑ comment by orthonormal · 2011-12-29T22:56:52.215Z · LW(p) · GW(p)
I'll stake $100 against you, if and only if Eliezer also participates.
Replies from: orthonormal↑ comment by orthonormal · 2011-12-29T22:58:11.260Z · LW(p) · GW(p)
(Replying rather than editing, to make sure that my comment displays as un-edited.)
I should also stipulate that I am not, nor have I ever been, Will Newsome.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-29T23:56:33.648Z · LW(p) · GW(p)
It's not impossible that I was once Will Newsome, I suppose, nor even that I currently am. But if so, I'm unaware of the fact.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-30T02:01:47.837Z · LW(p) · GW(p)
I am a known magus, so even an Imperius curse is not out of the question.
Replies from: CuSithBell, ata↑ comment by CuSithBell · 2011-12-30T02:47:45.789Z · LW(p) · GW(p)
Turns out LW is a Chesterton-esque farce in which all posters are secretly Wills trolling Wills.
Replies from: Normal_Anomaly↑ comment by Normal_Anomaly · 2012-01-06T02:32:19.106Z · LW(p) · GW(p)
Then I'm really wasting time here.
Replies from: wedrifid↑ comment by NancyLebovitz · 2011-12-26T08:30:02.214Z · LW(p) · GW(p)
Unfortunately, I don't have the spare money to take the other side of the bet, but Will showed a tendency to head off into foggy abstractions which I haven't seen in AspiringKnitter.
Replies from: J_Taylor↑ comment by J_Taylor · 2011-12-28T09:54:05.959Z · LW(p) · GW(p)
Will_Newsome does not seem, one would say, incompetent. I have never read a post by him in which he seemed to be unknowingly committing some faux pas. He should be perfectly capable of suppressing that particular aspect of his posting style.
↑ comment by AspiringKnitter · 2011-12-24T03:32:35.849Z · LW(p) · GW(p)
And what do I have to do to win your bet, given that I'm not him (and hadn't even heard of him before)? After all, even if you saw me in person, you could claim I was paid off by this guy to pretend to be AspiringKnitter. Or shall I just raise my right hand?
I don't see why this guy wouldn't offer such a bet, knowing he can always claim I'm lying if I try to provide proof. No downside, so it doesn't matter how unlikely it is, he could accuse any given person of sockpuppeting. The expected return can't be negative. That said, the odds here being worse than one in a million, I don't know why he went to all that trouble for an expected return of less than a cent. There being no way I can prove who I am, I don't know why I went to all the trouble of saying this, either, though, so maybe we're all just a little irrational.
Replies from: Mitchell_Porter, dlthomas↑ comment by Mitchell_Porter · 2011-12-24T09:48:56.704Z · LW(p) · GW(p)
And what do I have to do to win your bet
Let's first confirm that you're willing to pay up, if you are who I say you are. I will certainly pay up if I'm wrong...
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-24T09:55:33.834Z · LW(p) · GW(p)
Let's first confirm that you're willing to pay up, if you are who I say you are.
That's problematic since if I were Newsome, I wouldn't agree. Hence, if AspiringKnitter is Will_Newsome, then AspiringKnitter won't even agree to pay up.
Not actually being Will_Newsome, I'm having trouble considering what I would do in the case where I turned out to be him. But if I took your bet, I'd agree to it. I can't see how such a bet could possibly get me anything, though, since I can't see how I'd prove that I'm not him even though I'm really not him.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-12-24T10:10:08.109Z · LW(p) · GW(p)
if I took your bet, I'd agree to it.
All right, how about this. If I presented evidence already in the public domain which made it extremely obvious that you are Will Newsome, would you pay up?
By the way, when I announced my belief about who you are, I didn't have personal profit in mind. I was just expressing confidence in my reasoning.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-24T10:25:10.440Z · LW(p) · GW(p)
All right, how about this. If I presented evidence already in the public domain which made it extremely obvious that you are Will Newsome, would you pay up?
There is no such evidence. What do you have in mind that would prove that?
Replies from: Mitchell_Porter, wedrifid↑ comment by Mitchell_Porter · 2011-12-24T10:47:03.327Z · LW(p) · GW(p)
You write stream-of-consciousness run-on sentences which exhibit abnormal disclosure of self while still actually making sense (if one can be bothered parsing them). Not only do you share this trait with Will, the themes and the phrasing are the same. You have a deep familiarity with LessWrong concerns and modes of thought, yet you also advocate Christian metaphysics and monogamy. Again, that's Will.
That's not yet "extremely obvious", but it should certainly raise suspicions. I expect that a very strong case could be made by detailed textual comparison.
Replies from: Kaj_Sotala, None, army1987, None, JoachimSchipper↑ comment by Kaj_Sotala · 2011-12-25T20:09:49.560Z · LW(p) · GW(p)
AspiringKnitter's arguments for Christianity are quite different from Will's, though.
(Also, at the risk of sounding harsh towards Will, she's been considerably more coherent.)
Replies from: Eliezer_Yudkowsky, Will_Newsome↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-12-26T10:18:55.742Z · LW(p) · GW(p)
I think if Will knew how to write this non-abstractly, he would have a valuable skill he does not presently possess, and he would use that skill more often.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T10:41:09.541Z · LW(p) · GW(p)
By the time reflective and wannabe-moral people are done tying themselves up in knots, what they usually communicate is nothing; or, if they do communicate, you can hardly tell them apart from the people who truly can't.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-12-27T11:24:27.863Z · LW(p) · GW(p)
Point of curiosity: if you took the point above and rewrote it the way you think AspiringKnitter would say it, how would you say it?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T11:59:30.565Z · LW(p) · GW(p)
(ETA: Something like this:)
What I'm saying is that most people who write a Less Wrong comment aren't totally stressing out about all the tradeoffs that inevitably have to be made in order to say anything at all. There's a famous quote whose gist is 'I apologize that this letter is so long, but I didn't have very much time to write it'. The audience has some large and unknown set of constraints on what they're willing to glance at, read, take seriously, and so on, and the writer has to put a lot of work into meeting those constraints as effectively as possible. Some tradeoffs are easy to make: yes, a long paragraph is a self-contained structure, but that's less important than readability. Others are a little harder: do I give a drawn-out concrete example of my point, or would that egregiously inflate the length of my comment?
There are also the author's internal constraints re what they feel they need to say, what they're willing to say, what they're willing to say without thinking carefully about whether or not it's a good idea to say, how much effort they can put into rewriting sentences or linking to relevant papers while their heart's pumping as if the house is burning down, vague fears of vague consequences, and so on and so forth for as long as the author's neuroticism or sense of morality allows.
People who are abnormally reflective soon run into meta-level constraints: what does it say about me that I stress out this much at the prospect of being discredited? By meeting these constraints am I supporting the proliferation of a norm that isn't as good as it would be if I met some other, more psychologically feasible set of constraints? Obviously the pragmatic thing to do is to "just go with it", but "just going with it" seems to have led to horrifying consequences in the past; why do I expect it to go differently this time?
In the end the author is bound to become self-defeating, dynamically inconsistent. They'll like as not end up loathing their audience for inadvertently but non-apologetically putting them in such a stressful situation, then loathing themselves for loathing their audience when obviously it's not the audience's fault. The end result is a stressful situation where the audience wants to tell the author to do something very obvious, like not stress out about meeting all the constraints they think are important. Unfortunately if you've already tied yourself up in knots you don't generally have a hand available with which to untie them.
ETA: On the positive side they'll also build a mega-meta-FAI just to escape all these ridiculous double binds. "Ha ha ha, take that, audience! I gave you everything you wanted! Can't complain now!"
Replies from: TheOtherDave, AspiringKnitter, NancyLebovitz↑ comment by TheOtherDave · 2011-12-27T15:35:06.508Z · LW(p) · GW(p)
And yet, your g-grandparent comment, about which EY was asking, was brief... which suggests that the process you describe here isn't always dominant.
Although when asked a question about it, instead of either choosing or refusing to answer the question, you chose to back all the way up and articulate the constraints that underlie the comment.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-30T00:04:04.177Z · LW(p) · GW(p)
Hm? I thought I'd answered the question. I.e. I rewrote my original comment roughly the way I'd expect AK to write it, except with my personal concerns about justification and such, which is what Eliezer had asked me to do, 'cuz he wanted more information about whether or not I was AK, so that he could make money off Mitchell Porter. I'm reasonably confident I thwarted his evil plans in that he still doesn't know to what extent I actually cooperated with him. Eliezer probably knows I'd rather my friends make money off of Mitchell Porter, not Eliezer.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-30T01:16:50.555Z · LW(p) · GW(p)
Oh! I completely missed that that was what you were doing... sorry. Thanks for clarifying.
↑ comment by AspiringKnitter · 2012-01-04T08:09:51.831Z · LW(p) · GW(p)
You know, in some ways, that does sound like me, and in some ways it really still doesn't. Let me first of all congratulate you on being able to alter your style so much. I envy that skill.
What I'm saying is that most people who write a Less Wrong comment aren't totally stressing out about all the tradeoffs that inevitably have to be made in order to say anything at all.
Your use of "totally" is not the same as my use of "totally"; I think it sounds stupid (personal preference), so if I said it, I would be likely to backspace and write something else. Other than that, I might say something similar.
There's a famous quote whose gist is 'I apologize that this letter is so long, but I didn't have very much time to write it'.
I would have said "that goes something like" instead of "whose gist is", but that's the sort of concept I might well have communicated in roughly the manner I would have communicated it.
The audience has some large and unknown set of constraints on what they're willing to glance at, read, take seriously, and so on, and the writer has to put a lot of work into meeting those constraints as effectively as possible. Some tradeoffs are easy to make: yes, a long paragraph is a self-contained structure, but that's less important than readability. Others are a little harder: do I give a drawn-out concrete example of my point, or would that egregiously inflate the length of my comment?
An interesting point, and MUCH easier to understand than your original comment in your own style. This conveys the information more clearly.
There are also the author's internal constraints re what they feel they need to say, what they're willing to say, what they're willing to say without thinking carefully about whether or not it's a good idea to say, how much effort they can put into rewriting sentences or linking to relevant papers while their heart's pumping as if the house is burning down, vague fears of vague consequences, and so on and so forth for as long as the author's neuroticism or sense of morality allows.
This has become a run-on sentence. It started like something I would say, but by the end, the sentence is too run-on to be my style. I also don't use the word "neuroticism". It's funny, but I just don't. I also try to avoid the word "nostrils" for no good reason. In fact, I'm disturbed by having said it as an example of another word I don't use.
However, this is a LOT closer to my style than your normal writing is. I'm impressed. You're also much more coherent and interesting this way.
People who are abnormally reflective soon run into meta-level constraints:
I would probably say "exceptionally" or something else other than "abnormally". I don't avoid it like "nostrils" or just fail to think of it like "neuroticism", but I don't really use that word much. Sometimes I do, but not very often.
what does it say about me that I stress out this much at the prospect of being discredited?
Huh, that's an interesting thought.
By meeting these constraints am I supporting the proliferation of a norm that isn't as good as it would be if I met some other, more psychologically feasible set of constraints?
Certainly something I've considered. Sometimes in writing or speech, but also in other areas of my life.
Obviously the pragmatic thing to do is to "just go with it", but "just going with it" seems to have led to horrifying consequences in the past; why do I expect it to go differently this time?
I might have said this, except that I wouldn't have said the first part because I don't consider that obvious (or even necessarily true), and I would probably have said "horrific" rather than "horrifying". I might even have said "bad" rather than either.
In the end the author is bound to become self-defeating,
I would probably have said that "many authors become self-defeating" instead of phrasing it this way.
dynamically inconsistent
Two words I've never strung together in my life. This is pure Will. You're good, but not quite perfect at impersonating me.
They'll like as not end up loathing their audience for inadvertently but non-apologetically putting them in such a stressful situation, then loathing themselves for loathing their audience when obviously it's not the audience's fault.
Huh, interesting. Not quite what I might have said.
The end result is a stressful situation where the audience wants to tell the author to do something very obvious, like not stress out about meeting all the constraints they think are important.
...Why don't they? Seriously, I dunno if people are usually aware of how uncomfortable they make others.
Unfortunately if you've already tied yourself up in knots you don't generally have a hand available with which to untie them.
I'm afraid I don't understand.
ETA: On the positive side they'll also build a mega-meta-FAI just to escape all these ridiculous double binds. "Ha ha ha, take that, audience! I gave you everything you wanted! Can't complain now!"
And I wouldn't have said this because I don't understand it.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-01-04T11:38:32.548Z · LW(p) · GW(p)
Thank you, that was interesting. I should note that I wasn't honestly trying to sound like you; there was a thousand bucks on the table so I went with some misdirection to make things more interesting. Hence "dynamically inconsistent" and "totally" and so on. I don't think it had much effect on the bet though.
↑ comment by NancyLebovitz · 2011-12-27T12:22:44.480Z · LW(p) · GW(p)
Have you looked into and/or attempted methods of lowering your anxiety?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T12:37:29.352Z · LW(p) · GW(p)
Yes. Haven't tried SSRIs yet. Really I just need a regular meditation practice, but there's a chicken and egg problem of course. Or a prefrontal cortex and prefrontal cortex exercise problem. The solution is obviously "USE MOAR WILLPOWER" but I always forget that or something. Lately I've been thinking about simply not sinning, it's way easier for me to not do things than do things. This tends to have lasting effects and unintended consequences of the sort that have gotten me this far, so I should keep doing it, right? More problems more meta.
Replies from: TheOtherDave, NancyLebovitz↑ comment by TheOtherDave · 2011-12-27T15:41:09.426Z · LW(p) · GW(p)
IME, more willpower works really poorly as a solution to pretty much anything, for much the same reason that flying works really poorly as a way of getting to my roof. I mean, I suspect that if I could fly, getting to my roof would be very easy, but I can't fly.
I also find that regular physical exercise and adequate sleep do more to manage my anxiety in the long term (that is, on a scale of months) than anything else I've tried.
↑ comment by NancyLebovitz · 2011-12-27T12:52:29.827Z · LW(p) · GW(p)
Have you tried yoga or tai chi as meditation practices? They may be physically complex/challenging enough to distract you (some of the time) from verbally-driven distraction.
I suspect that "not sinning" isn't simple. How would you define sinning?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T13:31:17.718Z · LW(p) · GW(p)
Verbally-driven distraction isn't much of an issue, it's mostly just getting to the zafu. Once there, even 5 minutes of meditation is enough to calm me down for 30 minutes, which is a pretty big deal. I'm out of practice; I'm confident I can get back into the groove, but first I have to actually make it to the zafu more than once every week or two. I think I want to stay with something that I already identify with really powerful positive experiences, i.e. jhana meditation. I may try contemplative prayer at some point for empiricism's sake.
Re sinning... now that I think about it I'm not sure that I could do much less than I already do. I read a lot and think a lot, and reflectively endorse doing so, mostly. I'm currently writing a Less Wrong comment which is probably a sin, 'cuz there's lots of heathens 'round these parts among other reasons. Huh, I guess I'd never thought about demons influencing norms of discourse on a community website before, even though that's one of the more obvious things to do. Anyway, yah, the positive sins are sorta simplistically killed off in their most obvious forms, except pride I suppose, while the negative ones are endless.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-27T14:11:22.224Z · LW(p) · GW(p)
I gather that meditating at home is either too hard or doesn't work as well?
I'm currently writing a Less Wrong comment which is probably a sin, 'cuz there's lots of heathens 'round these parts among other reasons
?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T14:23:20.487Z · LW(p) · GW(p)
I do meditate at home! "Zafu" means "cushion". Yeah, I have trouble remembering to walk 10 feet to sit down in a comfortable position on a comfortable cushion instead of being stressed about stuff all day. Brains...
Not sure what the question mark is for. Heathens are bad, it's probably bad to hang out with them, unless you're a wannabe saint and are trying to convert them, which I am, but only half-heartedly. Sin is all about contamination, you know? Hence baptism and stuff. Brains...
Replies from: hairyfigment, NancyLebovitz, MarkusRamikin↑ comment by hairyfigment · 2011-12-28T02:15:00.320Z · LW(p) · GW(p)
trying to convert them, which I am, but only half-heartedly.
You are not doing this in any way, shape, or form, unless I missed some post-length or sequence-length argument of yours. (And I don't mean a "hint" as to what you might believe.) If you have something to say on the topic, you clearly can't or won't say it in a comment.
I have to tentatively classify your "trying" as broken signaling (though I notice some confusion on my part). If you were telling the truth about your usual mental state, and not deliberately misleading the reader in some odd way, you've likely been trying to signal that you need help.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T02:39:33.995Z · LW(p) · GW(p)
Sorry, wait, maybe there's some confusion? Did you interpret me saying "convert" as meaning "convert them to Christianity"? 'Cuz what I meant was convert people to the side of reason more generally, e.g. by occasionally posting totally-non-trolling comments about decision theory and stuff. I'm not a Christian. Or am I misinterpreting you?
I'm not at all trying to signal that I need help, if I seem to be signaling that then it's an accidental byproduct of some other agenda which is SIGNIFICANTLY MORE MANLYYYY than crying for help.
Replies from: wedrifid, hairyfigment↑ comment by wedrifid · 2011-12-28T03:27:41.670Z · LW(p) · GW(p)
I'm not at all trying to signal that I need help, if I seem to be signaling that then it's an accidental byproduct of some other agenda which is SIGNIFICANTLY MORE MANLYYYY than crying for help.
Love the attitude. And for what it's worth I didn't infer any signalling of need for help.
↑ comment by hairyfigment · 2011-12-28T03:03:53.034Z · LW(p) · GW(p)
Quick response: I saw that you don't classify your views as Christianity. I do think you classify them as some form of theism, but I took the word "convert" to mean 'persuade people of whatever the frak you want to say.'
↑ comment by NancyLebovitz · 2011-12-27T15:09:05.254Z · LW(p) · GW(p)
Sorry for the misunderstanding about where you meditate-- I'm all too familiar with distraction and habit interfering with valuable self-maintenance.
As for heathens, you're from a background which is very different from mine. My upbringing was Jewish, but not religiously intense. My family lived in a majority Christian neighborhood.
I suppose it would have been possible to avoid non-Jews, but the social cost would have been very high, and in any case, it was just never considered as an option. To the best of my knowledge, I wasn't around anyone who saw religious self-segregation as a value. At all. The subject never came up.
I hope I'm not straying into other-optimizing, but I feel compelled to point out that there's more than one way of being Christian, and not all of them include avoiding socializing with non-Christians.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T21:13:37.236Z · LW(p) · GW(p)
Ah, I'm not a Christian, and it's not non-Christians that bother me so much as people who think they know something about how the world works despite, um, not actually knowing much of anything. Inadvertent trolls. My hometown friends are agnostic with one or two exceptions (a close friend of mine is a Catholic, she makes me so proud), my SingInst-related friends are mostly monotheists these days whether they'd admit to it or not I guess but definitely not Christians. I don't think of for example you as a heathen; there are a lot of intelligent and thoughtful people on this site. I vaguely suspect that they'd fit in better in an intellectual Catholic monastic order, e.g. the Dominicans, but alas it's hard to say. I'm really lucky to know a handful of thoughtful SingInst-related folk, otherwise I'd probably actually join the Dominicans just to have a somewhat sane peer group. Maybe. My expectations are probably way too high. I might try to convince the Roman Catholic Church to take FAI seriously soon; I actually expect that this will work. They're so freakin' reasonable, it's amazing. Anyway I'm not sure but my point might be that I'm just trying to stay away from people with bad epistemic habits for fear of them contaminating me, like a fundamentalist Christian trying to keep his high epistemic standards amidst a bunch of lions and/or atheists. Better to just stay away from them for the most part. Except hanging out with lions is pretty awesome and saint-worthy whereas hanging out with atheists is just kinda annoying.
Replies from: ahartell↑ comment by ahartell · 2012-01-02T20:11:55.079Z · LW(p) · GW(p)
Is this meant to be ironic?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-01-02T21:37:13.395Z · LW(p) · GW(p)
Half-ironic, yeah.
Replies from: ahartell↑ comment by MarkusRamikin · 2011-12-27T14:39:35.893Z · LW(p) · GW(p)
So why are you hanging out with them?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T14:46:09.535Z · LW(p) · GW(p)
Because I'm sinful? And not all of them are heathens, I'm just prone to exaggeration. I think this new AspiringKnitter person is cool, for example; likelihood-ratio-she apparently can supernaturally tell good from bad, which might make my FAI project like a billion times easier, God willing. NancyLebovitz is cool. cousin it is cool. cousin it I can interact with on Facebook but not all of the cool LW people. People talk about me here, I feel compelled to say something for some reason, maybe 'cuz I feel guilty that they're talking about me and might not realize that I realize that.
Replies from: TwistingFingers, Mitchell_Porter↑ comment by TwistingFingers · 2011-12-28T02:19:19.480Z · LW(p) · GW(p)
Please don't consider this patronizing but... the writing style of this comment is really cute.
I think you broke whatever part of my brain evaluates people's signalling. It just gave up and decided your writing is really cute. I really have no idea what impression to form of you; the experience was so unusual that I felt I had to comment.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T03:05:42.855Z · LW(p) · GW(p)
Thanks to your priming now I can't see "AspiringKnitter" without mentally replacing it with "AspiringKittens" and a mental image of a Less Wrong meetup of kittens who sincerely want to have better epistemic practices. Way to make the world a better place.
Replies from: Nisan, Multiheaded↑ comment by Multiheaded · 2012-01-07T00:01:43.431Z · LW(p) · GW(p)
Independently of you, I PM'd her the exact same thing. Well, guess I'm in good company.
↑ comment by Mitchell_Porter · 2011-12-28T03:16:56.742Z · LW(p) · GW(p)
Are you AspiringKnitter, or the author of AspiringKnitter?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T03:25:00.686Z · LW(p) · GW(p)
Not as far as I know, but you seemed pretty confident in that hypothesis so maybe you know something I don't.
↑ comment by Will_Newsome · 2011-12-27T10:59:15.216Z · LW(p) · GW(p)
I think I only ever made one argument for Christianity? It was hilarious, everyone was all like WTF!??! and I was like TROLOLOLOL. I wonder if Catholics know that trolling is good, I hear that Zen folk do. Anyway it was naturally a soteriological argument which I intended to be identical to the standard "moral transformation" argument which for naturalists (metaphysiskeptics?) is the easiest of the theories to swallow. If I was expounding my actual thoughts on the matter they would be significantly more sophisticated and subtle and would involve this really interesting part where I talk about "Whose Line Is It Anyway?" and how Jesus is basically like Colin Mochrie specifically during the 'make stupid noises then we make fun of you for sucking but that redeems the stupid noises' part. I'm talking about something brilliant that doesn't exist I'm like Borges LOL!
Local coherence is the hobgoblin of minuscule minds; global coherence is next to godliness.
(ETA: In case anyone can't tell, I just discovered Dinosaur Comics and, naturally, read through half the archives in one sitting.)
Replies from: AspiringKnitter, thomblake↑ comment by AspiringKnitter · 2011-12-27T21:18:53.358Z · LW(p) · GW(p)
Downvoted, by the way. I want to signal my distaste for being confused for you. Are you using some form of mind-altering substance or are you normally like this? I think you need to take a few steps back. And breathe. And then study how to communicate more clearly, because I think either you're having trouble communicating or I'm having trouble understanding you.
Replies from: NancyLebovitz, Will_Newsome, Will_Newsome↑ comment by NancyLebovitz · 2011-12-28T02:49:00.693Z · LW(p) · GW(p)
I'm not quite in a mood to downvote, but I think you were wildly underestimating how hard it would be for Will to change what he's doing.
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-12-28T11:50:57.602Z · LW(p) · GW(p)
It would probably require the community to stop feeding the ugly little lump.
Also,
"Mood?" Halleck's voice betrayed his outrage even through the shield's filtering. "What has mood to do with it? You downvote when the necessity arises -- no matter the mood! Mood's a thing for cattle or making love or playing the baliset. It's not for downvoting."
Replies from: Alicorn, Multiheaded
↑ comment by Alicorn · 2012-02-10T20:02:09.900Z · LW(p) · GW(p)
ugly little lump.
Will is good-looking, normal-sized, and not at all lumpy. If you must insult people, can you do it in a less wrong way?
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2012-02-11T08:40:48.324Z · LW(p) · GW(p)
I'm referring to his being an admitted troll.
Replies from: wedrifid↑ comment by wedrifid · 2012-02-11T09:25:32.973Z · LW(p) · GW(p)
I'm referring to his being an admitted troll.
To be fair Will is more the big and rocky kind of troll. You can even see variability that can only be explained by drastic temperature changes!
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2012-02-14T11:44:13.616Z · LW(p) · GW(p)
That works.
↑ comment by Multiheaded · 2012-02-10T19:36:43.902Z · LW(p) · GW(p)
It would probably require the community to stop feeding the ugly little lump.
We don't approve of that kind of language used against anyone considered to be of our in-group, no matter how weird they might act. Please delete this.
Replies from: pedanterrific↑ comment by pedanterrific · 2012-02-10T19:48:32.687Z · LW(p) · GW(p)
Do you normally refer to yourselves as 'we'? I never noticed that before. (Witty, though.)
Replies from: Multiheaded↑ comment by Multiheaded · 2012-02-10T20:56:32.290Z · LW(p) · GW(p)
Nope, I'm simply being confident that the vast majority of the LW community stands with me here.
(Well, in a sense, it is the Less Wrong Hivemind speaking through me here, so yes, It refers to Itself as "we".)
Replies from: pedanterrific↑ comment by pedanterrific · 2012-02-10T21:02:53.641Z · LW(p) · GW(p)
Ah. In that case, I have to ask how you explain the vote totals?
That is, I would expect a comment of which the Hivemind strongly disapproves to accumulate a negative score over a month-plus.
Edit: Uh, not sure what the downvote's for...? I mean no offence.
Replies from: None, Bugmaster, Multiheaded↑ comment by [deleted] · 2012-02-10T21:07:11.655Z · LW(p) · GW(p)
Vote totals don't mean what you think they mean.
Replies from: pedanterrific↑ comment by pedanterrific · 2012-02-10T21:12:52.971Z · LW(p) · GW(p)
This is actually a good point! I stand corrected.
↑ comment by Bugmaster · 2012-02-10T21:26:09.231Z · LW(p) · GW(p)
That is, I would expect a comment of which the Hivemind strongly disapproves to accumulate a negative score over a month-plus.
That's what I'd expect, as well, though I wish it weren't so. I usually try to make the effort to upvote or downvote comments based on how informative, well-written, and well-reasoned they are, not whether I agree with them or not (with the exception of poll-style comments). Of course, just because I try to do this, doesn't mean that I succeed...
↑ comment by Multiheaded · 2012-02-10T21:08:15.694Z · LW(p) · GW(p)
Most people just don't notice a comment deep in some thread. But if their attention were drawn to it, I say they'd react this way.
Replies from: pedanterrific↑ comment by pedanterrific · 2012-02-10T21:21:52.141Z · LW(p) · GW(p)
For what it's worth, I agree. Will's kind of awesome, in a weird way. (Though my first reaction was "Wait, just our in-group? That's groupist!") But I'm not nearly as confident in my model of what others approve or disapprove of.
↑ comment by Will_Newsome · 2011-12-27T22:29:03.240Z · LW(p) · GW(p)
Are you using some form of mind-altering substance[...]?
On second thought maybe I am in a sense; my cortisol (?) levels have been ridiculously high ever since I learned that people have been talking about me here on LW. For about a day before that I'd been rather abnormally happy--my default state matches the negative symptoms of schizophrenia as you'd expect of a prodrome, and "happiness" as such is not an emotion I experience very much at all--which I think combined with the unexpected stressor caused my body to go into freak-out-completely mode, where it remains and probably will remain until I spend time with a close friend. Even so I don't think this has had as much an effect on my writing style as reading a thousand Dinosaur Comics has.
Replies from: hairyfigment↑ comment by hairyfigment · 2011-12-28T02:06:04.018Z · LW(p) · GW(p)
my default state matches the negative symptoms of schizophrenia..."happiness" as such is not an emotion I experience very much at all
Have you sought professional help in the past? If not, do nothing else until you take some concrete step in that direction. This is an order from your decision theory.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T02:22:30.784Z · LW(p) · GW(p)
Yes, including from the nice but not particularly insightful folk at UCSF, but negative symptoms generally don't go away, ever. My brain is pretty messed up. Jhana meditation is wonderful and helps when I can get myself to do it. Technically if I did 60mg of Adderall and stayed up for about 30 to 45 hours then crashed, then repeated the process forever, I think that would overall increase my quality of life, but I'm not particularly confident of that, especially as the outside view says that's a horrible idea. In my experience it ups the variance which is generally a good thing. Theoretically I could take a bunch of nitrous oxide near the end of the day so as to stay up for only about 24 hours as opposed to 35 before crashing; I'm not sure if I should be thinking "well hell, my dopaminergic system is totally screwed anyway" or "I should preserve what precious little automatic dopaminergic regulation I have left". In general nobody knows nothin' 'bout nothin', so my stopgap solution is moar meditation and moar meta.
Replies from: NancyLebovitz, None↑ comment by NancyLebovitz · 2011-12-28T02:50:53.276Z · LW(p) · GW(p)
Have you tried doing a detailed analysis of what would make it easier for you to meditate, and then experimenting to see whether any of it actually makes meditating easier? Is keeping your cushion closer to where you usually are a possibility?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T02:55:26.835Z · LW(p) · GW(p)
Not particularly detailed. It's hard to do better than convincing my girlfriend to bug me about it a few times a day, which she's getting better at. I think it's a gradual process and I'm making progress. I'm sure Eliezer's problems are quite similar, I suppose I could ask him what self-manipulation tactics he uses besides watching Courage Wolf YouTube videos.
↑ comment by [deleted] · 2012-01-08T23:22:55.037Z · LW(p) · GW(p)
Technically if I did 60mg of Adderall and stayed up for about 30 to 45 hours then crashed, then repeated the process forever, I think that would overall increase my quality of life
I suspect it would, at least in some ways. I'm mentally maybe not too dissimilar, and have done a few months of polyphasic sleeping, supported by caffeine (which I'm way too sensitive to). My mental abilities were pretty much crap, and damn was I agitated, but I was overall happier, baseline at least.
I do recommend 4+ days of sleep deprivation and desperately trying to figure out how an elevator in HL2 works as a short-term treatment for can't-think-or-talk-but-bored, though.
↑ comment by Will_Newsome · 2011-12-27T21:31:59.840Z · LW(p) · GW(p)
Are you using some form of mind-altering substance or are you normally like this?
No and no. I'm only like this on Less Wrong. Trust me, I know it doesn't seem like it, but I've thought about this very carefully and thoroughly for a long time. It's not that I'm having trouble communicating; it's that I'm not trying to. Not anything on the object level at least. The contents of my comments are more like expressions of complexes of emotions about complex signaling equilibria. In response you may feel very, very compelled to ask: "If you're not trying to communicate as such then why are you expending your and my effort writing out diatribes?" Trust me, I know it doesn't seem like it, but I've thought about this very carefully and thoroughly for a long time. "I'm going to downvote you anyway; I want to discourage flagrant violations of reasonable social norms of communication." As expected! I'm clearly not optimizing for karma. And my past selves managed to stock up like 5,000 karma anyway so I have a lot to burn. I understand exactly why you're downvoting, I have complex intuitions about the moral evidence implicit in your vote, and in recompense I'll try harder to "be perfect".
Replies from: wedrifid, shokwave, AspiringKnitter↑ comment by wedrifid · 2011-12-28T03:15:58.238Z · LW(p) · GW(p)
It's not that I'm having trouble communicating; it's that I'm not trying to.
So it is more just trolling.
The contents of my comments are more like expressions of complexes of emotions about complex signaling equilibria.
Which, from the various comments Will has made along these lines, we can roughly translate to "via incoherent abstract rationalizations Will_Newsome has not only convinced himself that embracing the crazy while on lesswrong is a good idea but that doing so is in fact a moral virtue". Unfortunately this kind of conviction is highly resistant to persuasion. He is Doing the Right Thing. And he is doing the right thing from within a complex framework wherein not doing the right thing has potentially drastic (quasi-religious-level) consequences. All we can really do is keep the insane subset of his posts voted below the visibility threshold and apply the "don't feed the troll" policy while he is in that mode.
Replies from: Will_Newsome, Will_Newsome, Will_Newsome↑ comment by Will_Newsome · 2011-12-28T05:12:08.652Z · LW(p) · GW(p)
(quasi-religious-level)
Good phrase, I think I'll steal it. Helps me quickly describe how seriously I take this whole justification thing.
↑ comment by Will_Newsome · 2011-12-28T03:19:27.670Z · LW(p) · GW(p)
ACBOD. ;P
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T03:29:51.714Z · LW(p) · GW(p)
HOW CAN ANYONE DOWNVOTE THAT IT WAS SO CLEVER LOL?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T03:37:30.392Z · LW(p) · GW(p)
NO BUT SERIOUSLY GUYS IT WAS VERY CLEVER I SWITCHED THE C AND THE D SO AS TO MORE ACCURATELY DESCRIBE MY STATE OF MIND LOL?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T04:05:25.137Z · LW(p) · GW(p)
One of my Facebook activities is "finding bits of Chaitin's omega"! I am an interesting and complex person! I am nice to my girlfriend and she makes good food like fresh pizza! Sometimes I work on FAI stuff, I'm not the best at it but I'm surprisingly okay! I found a way to hack the arithmetical hierarchy using ambient control, it's really neat, when I tell people about it they go like "WTF that is a really neat idea Will!"! If you're nice to me maybe I'll tell you someday? You never know, life is full of surprises allegedly!
Replies from: J_Taylor↑ comment by J_Taylor · 2011-12-28T21:02:08.884Z · LW(p) · GW(p)
Greetings, Will_Newsome.
This particular post of yours was, last night, at 4 upvotes. Do you have any hypothesis as to why that was the case? I am rather curious as to how that happened.
Replies from: wedrifid, Will_Newsome↑ comment by wedrifid · 2011-12-28T21:13:40.670Z · LW(p) · GW(p)
This particular post of yours was, last night, at 4 upvotes.
An instance of the more general phenomenon. If I recall the grandparent in particular was at about -3 then overnight (wedrifid time) went up to +5 and now seems to be back at -4. Will's other comments from the time period all experienced a fluctuation of about the same degree. I infer that the fickle bulk upvotes and downvotes are from the same accounts, and, with somewhat less confidence, that they are from the same user.
Do you have any hypothesis as to why that was the case?
Or, you know, memories.
Replies from: thomblake, Will_Newsome, J_Taylor↑ comment by thomblake · 2011-12-28T21:15:55.880Z · LW(p) · GW(p)
If I recall the grandparent in particular was at about -3 then overnight (wedrifid time) went up to +5 and now seems to be back at -4.
It's possible that the aesthetic only appeals to voters in certain parts of the globe.
Replies from: wedrifid↑ comment by wedrifid · 2011-12-28T21:25:17.159Z · LW(p) · GW(p)
It's possible that the aesthetic only appeals to voters in certain parts of the globe.
Are you saying there is a whole country which supports internet trolls? Forget WMDs, the next war needs to be on the real threat to (the convenience of) civilization!
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T01:03:30.517Z · LW(p) · GW(p)
If I told you that God likes to troll people would that raise your opinion of trolls or lower your opinion of GOD DAMMIT I can't take it anymore, why does English treat "or" as "xor"? We have "either x or y" for that. Now I have to say "and/or" which looks and is stupid. I refuse.
Replies from: gwern, wedrifid, Nornagest, dlthomas, Prismattic↑ comment by gwern · 2011-12-29T05:33:33.068Z · LW(p) · GW(p)
The general effect of the Book of Job seems to be to lower people's opinion of God rather than raise their opinion of trolling.
Replies from: MileyCyrus↑ comment by MileyCyrus · 2011-12-29T05:56:24.640Z · LW(p) · GW(p)
And it was an atheist philosopher who first called trolling a art.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T07:06:46.216Z · LW(p) · GW(p)
I DID NOT KNOW THAT THANK YOU. Not only is Schopenhauer responsible for Borges, he is a promoter of trolling... this is amazing.
I hear that Zen people have been doing it for like 1,000 years, but maybe they didn't think of it as an art as such.
Replies from: MileyCyrus↑ comment by MileyCyrus · 2011-12-29T07:29:19.067Z · LW(p) · GW(p)
If you like it then you should have put an upvote on it.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T07:34:55.590Z · LW(p) · GW(p)
Now I have. And on that comment too. All the single comments.
↑ comment by wedrifid · 2011-12-29T01:59:27.792Z · LW(p) · GW(p)
If I told you that God likes to troll people would that raise your opinion of trolls or lower your opinion of GOD
Which God? If it is Yahweh then that guy's kind of a dick and I don't value his opinion much at all. But he isn't enough of a dick that I can reverse stupidity to arrive at anything useful either.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T02:06:51.037Z · LW(p) · GW(p)
/nods, makes sense.
↑ comment by Nornagest · 2011-12-29T02:29:40.245Z · LW(p) · GW(p)
If I told you that God likes to troll people would that raise your opinion of trolls or lower your opinion of GOD
Neither, really. There are trickster figures all over the place in mythology; it'd take a fairly impressive argument to get me to believe that YHWH is one of them, but assuming such an argument I don't think it'd imply many updates that "Coyote likes trolling people" (a nearly tautological statement) wouldn't.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T02:53:28.393Z · LW(p) · GW(p)
Hm? Even if YHWH existed and was really powerful, you still wouldn't update much if you found out He likes to troll people? Or does your comment only apply if YHWH is a fiction?
↑ comment by dlthomas · 2011-12-29T01:14:27.056Z · LW(p) · GW(p)
You could say, "x or y or both" in place of "x and/or y". I'm not sure if that looks more or less stupid.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T01:21:48.238Z · LW(p) · GW(p)
I'll try it out at some point at least, thanks for the suggestion.
↑ comment by Prismattic · 2011-12-29T01:06:39.716Z · LW(p) · GW(p)
If the Bible is the world's longest-running Rickroll, does that count?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T01:10:13.945Z · LW(p) · GW(p)
What's the hypothesis, that the Bible was subtly optimized to bring about Rick Astley and Rickrolling 1,500 or so years later? That... that does seem like His style... I mean obviously the Bible would be optimized to do all kinds of things, but that might be one of the subgoals, you never know.
↑ comment by Will_Newsome · 2011-12-28T21:22:45.311Z · LW(p) · GW(p)
Or, you know, memories.
Aw, wedrifid, that's mean. :( I was asleep during that time. There's probably some evidence of that on my Facebook page, i.e. no activity until about like 5 hours ago when I woke up. Also you should know that I'm not so incredibly lame/retarded as to artificially inflate a bunch of comments' votes for basically no reason other than to provoke accusations that I had done so.
Replies from: wedrifid, J_Taylor↑ comment by wedrifid · 2011-12-28T21:36:37.662Z · LW(p) · GW(p)
Aw, wedrifid, that's mean.
Is it? I didn't think it was something that you would be offended by. Since the mass voting was up but then back down to where it started, it isn't a misdemeanor so much as it is peculiar and confusing. The only possibility that sprang to mind was that it could be an extension of your empirical experimentation. You (said that you) actually made a bunch of the comments specifically so that they would get downvotes, so that you could see how that influenced the voting behavior of others. Tinkering with said votes to satisfy a further indecipherable curiosity doesn't seem like all that much of a stretch.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T21:42:51.520Z · LW(p) · GW(p)
Is it?
No, not really at all, I was just playing around. I don't really get offended; I get the impression that you don't either. And yeah upon reflection your hypothesis was reasonable, I probably only thought it was absurd 'cuz I have insider knowledge. (ETA: Reasoning about counterfactual states of knowledge is really hard; not only practically speaking 'cuz brains aren't meant to do that, but theoretically too, which is why people get really confused about anthropics. The latter point deserves a post I mean Facebook status update at some point.)
Replies from: wedrifid↑ comment by wedrifid · 2011-12-28T21:50:02.600Z · LW(p) · GW(p)
ETA: Reasoning about counterfactual states of knowledge is really hard; not only practically speaking 'cuz brains aren't meant to do that, but theoretically too, which is why people get really confused about anthropics. The latter point deserves a post I mean Facebook status update at some point.
That's true. It's tricky enough that Eliezer seems to get confused about it (or at least I thought he was confusing himself back when he wrote a post or two on the subject).
↑ comment by J_Taylor · 2011-12-28T21:44:17.746Z · LW(p) · GW(p)
inflate a bunch of comments' votes for basically no reason other than to provoke accusations that I had done so.
That actually sounds like a lot of fun, if followed up with a specific denial of having done that.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T21:47:02.229Z · LW(p) · GW(p)
I guess that sounds fun? Or why do you think it sounds fun? I think it'd only be worth it if the thread was really public, like when that Givewell dude made that one post about naive EU maximization and charity.
Replies from: J_Taylor↑ comment by J_Taylor · 2011-12-28T21:55:44.662Z · LW(p) · GW(p)
Why does that sound fun? I don't know. I do know that when I am less-than-lucid, I am liable to lead individuals on conversational wild-goose chases. Within these conversations, I will use a variety of tactics to draw the other partner deeper into the conversation. No tactic in particular is fun, except insofar as it confuses the other person. Of course, when I am of sound mind, I do not find this game to be terribly fun.
I assume that you play similar games on Lesswrong. Purposely upvoting one's own comments in an obvious way, and then denying that one did it, seems like a good way to confuse and frustrate other people. I know that if the thought occurred to me when I was less-than-lucid, and if I were the sort of person to play such games on Lesswrong, I probably would try the tactic out.
This seems more likely than you having a cadre of silent, but upvoting, admirers.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T01:05:30.661Z · LW(p) · GW(p)
Both seem unlikely. I'm still confused. I think God likes trolling, maybe He did it? Not sure what mechanism He'd use though so it's not a particularly good explanation.
↑ comment by Will_Newsome · 2011-12-28T21:08:23.362Z · LW(p) · GW(p)
Wedrifid said that too. I don't have a model that predicts that. I think that most of the time my comments get upvoted to somewhere between 1 and 5 and then drop off as people who aren't Less Wrong regulars read through; that the reverse would happen for a few hours at least is odd. It's possible that the not-particularly-intelligent people who normally downvote my posts when they're insightful also tend to upvote my posts when they're "worthless". ETA: thomblake's hypothesis about regional differences in aesthetics seems more plausible than mine.
↑ comment by Will_Newsome · 2011-12-28T03:27:43.947Z · LW(p) · GW(p)
I think you severely underestimate the value of trolling.
Replies from: Eliezer_Yudkowsky, wedrifid, wedrifid↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-12-28T08:23:44.588Z · LW(p) · GW(p)
Erm. I can't say that this raises my confidence much. I am reminded of the John McCarthy quote, "Your denial of the importance of objectivity amounts to announcing your intention to lie to us. No-one should believe anything you say."
Replies from: Mitchell_Porter, Will_Newsome, Will_Newsome↑ comment by Mitchell_Porter · 2011-12-29T01:42:47.562Z · LW(p) · GW(p)
I feel responsible for the current wave of gibberish-spam from Will, and I regret that. If it were up to me, I would present him with an ultimatum - either he should promise not to sockpuppet here ever again, and he'd better make it convincing, or else every one of his accounts that can be identified will be banned. The corrosive effect of not knowing whether a new identity is a real person or just Will again, whether he's "conducting experiments" by secretly mass-upvoting his own comments, etc., to my mind far outweighs the value of his comments.
Replies from: Will_Newsome, Will_Newsome, Will_Newsome, Will_Newsome↑ comment by Will_Newsome · 2011-12-29T02:24:31.602Z · LW(p) · GW(p)
I freely admit that I have one sockpuppet, who has made less than five comments and has over 20 karma. I do not think that having one sockpuppet for anonymity's sake is against community norms.
ETA: I mean one sock puppet besides Mitchell Porter obviously.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2011-12-29T03:00:20.919Z · LW(p) · GW(p)
I freely admit that I have one sockpuppet, who has made less than five comments and has over 20 karma.
I have a private message, dated 7 October, from an account with "less than five comments and [...] over 20 karma", which begins, "I'm Will_Newsome, this is one of my alts." (Emphasis mine.)
Will, I'm sorry it's turning out like this. I am not perfect myself; anyone who cares may look up users "Bananarama" and "OperationPaperclip" and see my own lame anonymous humor. More to the point, I do actually believe that you want to "keep the stars from burning down", and you're not just a troll out to waste everyone's time. The way I see it, because you have neither a job to tie you down, nor genuine intellectual peers and collaborators, it's easy to end up seeking the way forward via elaborate crazy schemes, hatched and pursued in solitude; and I suspect that I got in the way of one such scheme, by asserting that AK is you.
Replies from: Will_Newsome, Will_Newsome, Will_Newsome↑ comment by Will_Newsome · 2011-12-29T03:12:20.631Z · LW(p) · GW(p)
genuine intellectual peers and collaborators
I have those! E.g. I spend a lot of time with Steve, who is the most rational person in the entire universe, and I hang out with folk like Nick Tarleton and Michael Vassar and stuff. All three of those people are way smarter than me, though arguably I get around some of that by playing to my strengths. The point is that I can play intellectualism with them, especially Steve, who's really good at understanding me. ETA: I also talk to the Black Belt Bayesian himself sorta often.
Replies from: wedrifid↑ comment by wedrifid · 2011-12-29T03:22:24.631Z · LW(p) · GW(p)
I spend a lot of time with Steve, who is the most rational person in the entire universe
With no offense intended to Steve, no, he isn't.
Replies from: Will_Newsome, Will_Newsome↑ comment by Will_Newsome · 2011-12-29T03:25:01.272Z · LW(p) · GW(p)
If you know any rationalists that are better than Steve then please, please introduce me to them.
↑ comment by Will_Newsome · 2011-12-29T03:23:21.148Z · LW(p) · GW(p)
How about most rational person I know of?
↑ comment by Will_Newsome · 2011-12-29T03:09:05.287Z · LW(p) · GW(p)
I suspect that I got in the way of one such scheme, by asserting that AK is you.
Ahhhh, okay, I see why you'd feel bad now I guess? Admittedly I wouldn't have started commenting recently unless there'd been the confusion of me and AK, but AK isn't me and my returning was just 'cuz I freaked out that people on LW were talking about me and I didn't know why. Really I don't think you're to blame at all. And thinking AK is me does seem like a pretty reasonable hypothesis. It's a false hypothesis but not obviously so.
↑ comment by Will_Newsome · 2011-12-29T03:06:14.364Z · LW(p) · GW(p)
I was only counting alts I'd used in the last few months. I remember having made two alts, but the first one, User:Arbitrarity, I gave up on (I think I'd forgotten about it), which is when I switched to the alt that I used to message you with (apparently I'd remembered it by then, though I wasn't using it; I just like the word "arbitrarity").
ETA: Also note that the one substantive comment I made from Arbitrarity has obvious reasons for being kept anonymous.
↑ comment by Will_Newsome · 2011-12-29T03:01:01.640Z · LW(p) · GW(p)
Anyway I can't see any plausible reason why you should feel responsible for my current wave of gibberish-spam. [ETA: I mean except for the gibberish-spam I'm writing as a response to your comment; you should maybe feel responsible for that.] My autobiographical memory is admittedly pretty horrible but still.
↑ comment by Will_Newsome · 2011-12-29T02:30:13.315Z · LW(p) · GW(p)
Why do you feel responsible? That's really confusing.
↑ comment by Will_Newsome · 2011-12-29T02:36:30.275Z · LW(p) · GW(p)
Okay I admit it, Mitchell Porter is one of my many sockpuppets. Please ban Mitchell Porter unless he can prove he's not one of my many sockpuppets.
↑ comment by Will_Newsome · 2011-12-28T20:44:37.245Z · LW(p) · GW(p)
I don't follow; your confidence in the value of trolling or your confidence in the general worthwhileness of fairly reading or charitably interpreting my contributions to Less Wrong? 'Cuz I'd given up on the latter a long time ago, but I don't want your poor impression of me to falsely color your views on the value of trolling.
Replies from: thomblake↑ comment by Will_Newsome · 2011-12-29T02:41:51.697Z · LW(p) · GW(p)
Eliezer please ban Mitchell Porter, he's one of my sock puppets and I feel really guilty about it. Yeah I know you've known the real Mitchell Porter for like a decade now but I hacked into his account or maybe I bought it from him or something and now it's just another of my sock puppets, so you know, ban the hell out of him please? It's only fair. Thx bro!
Replies from: wedrifid↑ comment by wedrifid · 2011-12-29T03:25:04.091Z · LW(p) · GW(p)
It's not often that I laugh out loud and downvote the same comment! ;)
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T03:50:18.844Z · LW(p) · GW(p)
Thanks! Um do you know any easy way to provide a lot of evidence that I have only one sockpuppet? I'm mildly afraid that Eliezer is going to take Mitchell Porter's heinous allegations seriously as part of a secret conspiracy is that redundant? fuck. anyway secret conspiracy to discredit me. I am the only one who should be allowed to discredit me!
Replies from: wedrifid↑ comment by wedrifid · 2011-12-29T04:32:12.508Z · LW(p) · GW(p)
Um do you know any easy way to provide a lot of evidence that I have only one sockpuppet?
Ask a moderator (or whatever it takes to have access to IP logs) to check whether there are multiple suspicious accounts from your most common IP. That's even better than asking you to raise your right hand if you are not lying. It at least shows that you have enough respect for the community to try to hide it when you are defecting! :P
↑ comment by wedrifid · 2011-12-28T10:44:24.270Z · LW(p) · GW(p)
I'm confused. What happened overnight that made people suddenly start appreciating Will's advocacy of his own trolling here and the surrounding context? -5 to +7 is a big change and there have been similar changes to related comments. Either someone is sockpuppeting or people are actually starting to appreciate this crap. (I'm really hoping the former!)
Edit: And now it is back to -3. How bizarre!
Replies from: thomblake, Solvent, XiXiDu↑ comment by thomblake · 2011-12-28T21:01:29.711Z · LW(p) · GW(p)
people are actually starting to appreciate this crap.
I've been appreciating it all along. I would not be terribly surprised if there were a dozen or so other people who do.
Replies from: wedrifid↑ comment by wedrifid · 2011-12-28T21:05:43.524Z · LW(p) · GW(p)
I've been appreciating it all along.
Do you specifically appreciate the advocacy of trolling comments that are the context or are you just saying that you appreciate Will's actual contributions such as they are?
Replies from: thomblake↑ comment by thomblake · 2011-12-28T21:13:35.688Z · LW(p) · GW(p)
I appreciate Will's contributions in general. Mostly the insane ones.
They remind me of a friend of mine who is absolutely brilliant but has lived his whole life with severe damage to vital parts of the brain.
Replies from: Jack, Will_Newsome, wedrifid↑ comment by Jack · 2011-12-28T21:52:18.738Z · LW(p) · GW(p)
I often appreciate his contributions as well. He is generally awful at constraining his abstract creativity so as to formulate constructive, concrete ideas, but I can constrain abstract creativity just fine, so his posts often provoke insights -- the rest just bumps up against my nonsense filter. Reading him at his best is a bit like taking a small dose of a hallucinogen to provide my brain with raw material to hack away at with logic.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T07:01:00.982Z · LW(p) · GW(p)
Folks like you might wanna friend me on Facebook, I'm generally a lot more insightful and comprehensible there. I use Facebook like Steven Kaas uses Twitter. https://www.facebook.com/autothexis
Re your other comment re mechanisms for psi, I can't muster up the energy to reply unfortunately. I'd have to be too careful about keeping levels of organization distinct, which is really easy to do in my head but really hard to write about. I might respond later.
↑ comment by Will_Newsome · 2011-12-28T21:32:34.020Z · LW(p) · GW(p)
That's interesting. Which parts of the brain, if you don't mind sharing? (Guess: qbefbyngreny cersebagny pbegrk, ohg abg irel pbasvqrag bs gung.)
Replies from: thomblake↑ comment by XiXiDu · 2011-12-28T11:26:51.117Z · LW(p) · GW(p)
Either someone is sockpuppeting or people are actually starting to appreciate this crap.
Did I say 5 years? Whoops...
Regarding sockpuppeting, that would suck. Can't someone take a look at the database and figure out whether many votes came from the same IP? Even better, when there are cases of weird voting behavior, someone should check whether the votes came from dummy accounts by looking at the karma scores and recent submissions: are the accounts close to zero karma, and are their recent submissions similar in style and diction, etc.?
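For concreteness, a minimal sketch (in Python, against a made-up schema) of the first check described above; the `votes` table and its columns are invented purely for illustration, since the real LW database layout is not public:

    import sqlite3

    conn = sqlite3.connect("lesswrong.db")  # hypothetical database file
    comment_id = 12345                      # hypothetical id of the oddly-voted comment

    # Find IPs from which more than one distinct account voted on that comment.
    rows = conn.execute("""
        SELECT ip, COUNT(DISTINCT account_id) AS n_accounts
        FROM votes
        WHERE comment_id = ?
        GROUP BY ip
        HAVING n_accounts > 1
    """, (comment_id,)).fetchall()

    for ip, n in rows:
        print(f"{n} distinct accounts voted on comment {comment_id} from {ip}")

The karma and style checks would then be applied as further filters on whatever accounts a query like this surfaces.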
↑ comment by wedrifid · 2011-12-28T03:38:13.718Z · LW(p) · GW(p)
I think you severely underestimate the value of trolling.
And I suspect you incorrectly classify some of your contributions, placing them into a different subcategory within "willful defiance of the community preference" than where they belong. Unfortunately this means that the subset of your thoughts that is creative, deep, and informed rather than just incoherent and flawed tends to be wasted.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T03:45:20.548Z · LW(p) · GW(p)
My creative, deep, and informed thoughts are a superset of my thoughts in general, not a subset, wedrifid. Also I do not have any incoherent or flawed thoughts, as should be obvious from the previous sentence, but I realize that category theory is a difficult subject for many people.
ETA: Okay good, it took a while for this to get downvoted and I was starting to get even more worried about the local sanity waterline.
Replies from: Dorikka↑ comment by Dorikka · 2011-12-28T04:41:13.954Z · LW(p) · GW(p)
Okay good, it took a while for this to get downvoted and I was starting to get even more worried about the local sanity waterline.
I suspect that the reason for this is that the comment tree of which your post was a branch is hidden by default, as it originates from a comment with less than -3 karma.
Um, on another note, could you just be less mean? 'Mean' seems to be the most accurate descriptor for posting trash that people have to downvote so it stays hidden, after all.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T04:52:57.557Z · LW(p) · GW(p)
I suspect that the reason for this is that the comment tree of which your post was a branch is hidden by default, as it originates from a comment with less than -3 karma.
No, I ran an actual test by posting messages in all caps to use as a control. Empiricism is so cool! (ETA: I also wrote a perfectly reasonable but mildly complex comment as a second control, which garnered the same number of downvotes as my insane set theory comment in about the same length of time.)
Re meanness, I will consider your request Dorikka. I will consider it.
↑ comment by shokwave · 2011-12-28T03:32:53.364Z · LW(p) · GW(p)
Trust me
Nope.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T03:36:23.133Z · LW(p) · GW(p)
THANKS FOR TELLIN ME BRAH
Replies from: shokwave, LoudFarts↑ comment by shokwave · 2011-12-28T04:01:51.965Z · LW(p) · GW(p)
The problem I have is that you claim to be "not optimising for karma", but you appear to be "optimising for negative karma". For example, the parent comment. There are two parts to it: acknowledgement of my comment, and a style that garners downvotes. The second part - why? It doesn't fit into any other goal structure I can think of; it really only makes sense if you're explicitly trying to get downvoted.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-28T04:12:41.313Z · LW(p) · GW(p)
One of my optimization criteria is discreditable-ness, which I guess is sort of like optimizing for downvotes insofar as my audience really cares about credibility. When it comes to motivational dynamics there tends to be a lot of crossing between meta-levels and it's hard to tell which models are actually very good predictors. You can approximately model the comment you replied to by saying I was optimizing for downvotes, but that model wouldn't remain accurate if e.g. Less Wrong suddenly started accepting 4chan-speak. That's obviously unlikely, but the point is that a surface-level model like that doesn't much help you understand why I say what I say. Not that you should want to understand that.
↑ comment by AspiringKnitter · 2011-12-27T21:49:48.496Z · LW(p) · GW(p)
And my past selves managed to stock up like 5,000 karma anyway so I have a lot to burn.
I'm confused. Have you sockpuppeted before?
The contents of my comments are more like expressions of complex emotions about complex signaling equilibria.
I think I might understand what you're saying here, in which case I see... sort of. I think I see what you're doing but not why you're doing it. Oh, well. Thank you for the explanation, that makes more sense.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T22:14:21.688Z · LW(p) · GW(p)
I'm confused. Have you sockpuppeted before?
Yes, barely, but I meant "past selves" in the usual Buddhist sense, i.e. I wrote some well-received posts under this account in the past. You might like the irrationality game, I made it for people like you.
On another note I'm sorry that my taste for discreditability has contaminated you by association; a year or so ago I foresaw that such an event would happen and deemed it a necessary tradeoff but naturally I still feel bad about it. I'm also not entirely sure I made the correct tradeoff; morality is hard. I wish I had synderesis.
↑ comment by thomblake · 2011-12-27T17:16:05.489Z · LW(p) · GW(p)
Local coherence is the hobgoblin of minuscule minds; global coherence is next to godliness.
Well, you're half right.
Not telling which half.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T20:57:09.757Z · LW(p) · GW(p)
You're right.
↑ comment by A1987dM (army1987) · 2011-12-26T01:06:05.682Z · LW(p) · GW(p)
“Deep familiarity with LessWrong concerns and modes of thought” can be explained by her having lurked a lot, and the rest of those features are not rare IME (even though they are under-represented on LW).
↑ comment by JoachimSchipper · 2012-01-04T10:14:26.358Z · LW(p) · GW(p)
I put some text from recent comments by both AspiringKnitter and Will_Newsome into "I write like"; it suggested that AspiringKnitter writes "like" Arthur Clarke (2001: A Space Odyssey and other books) while Will_Newsome writes "like" Vladimir Nabokov (Lolita and other books). I've never read either, but it does look like a convenient textual comparison doesn't trivially point to them being the same.
Also, if AspiringKnitter is a sockpuppet, it's at least an interesting one.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2012-01-04T11:33:29.736Z · LW(p) · GW(p)
When I put your first paragraph in that confabulator, it says "Vladimir Nabokov". If I remove the words "Vladimir Nabokov (Lolita and other books)" from the paragraph, it says "H.P. Lovecraft". It doesn't seem to cut possible texts into clusters well enough.
Replies from: wedrifid, JoachimSchipper↑ comment by wedrifid · 2012-01-04T11:52:42.409Z · LW(p) · GW(p)
I just got H.P. Lovecraft, Dan Brown, and Edgar Allan Poe for three different comments. I am somewhat curious as to whether this page clusters better than random assignment.
ETA: @#%#! I just got Dan Brown again, this time for the last post I wrote. This site is insulting me!
Replies from: None↑ comment by JoachimSchipper · 2012-01-04T11:48:11.259Z · LW(p) · GW(p)
Looks like you are right. Two of my (larger, to give the algorithm more to work with) texts from other sources gave Cory Doctorow (a technical piece) and again Lovecraft (a Hacker News comment about drug dogs?).
Sorry, and thanks for the correction.
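For anyone who wants to try a cruder version of this comparison locally rather than through "I write like": cosine similarity over character trigram counts is about the simplest possible stylometric measure. A toy sketch in Python, not a validated authorship-attribution method; the sample strings are stand-ins for real concatenated comment histories:

    from collections import Counter
    from math import sqrt

    def trigrams(text):
        # Normalize case and whitespace, then count character trigrams.
        text = " ".join(text.lower().split())
        return Counter(text[i:i + 3] for i in range(len(text) - 2))

    def cosine(a, b):
        # Cosine similarity between two trigram-count vectors.
        dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    sample_ak = "stand-in for AspiringKnitter's concatenated comments"
    sample_wn = "stand-in for Will_Newsome's concatenated comments"
    print(cosine(trigrams(sample_ak), trigrams(sample_wn)))  # 1.0 means identical profiles

As Vladimir_Nesov's experiment suggests, measures this shallow cluster poorly, so a similarity score here would be weak evidence at best.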
↑ comment by [deleted] · 2011-12-24T03:03:28.370Z · LW(p) · GW(p)
You're clearly out of touch with the populace. :) I'm only willing to risk 10% of my probability mass on your prediction.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-12-26T10:17:43.741Z · LW(p) · GW(p)
That's really odd. If there were some way to settle the bet I'd take it.
Replies from: steven0461, Mitchell_Porter↑ comment by steven0461 · 2011-12-26T23:31:24.386Z · LW(p) · GW(p)
For what it's worth, I thought Mitchell's hypothesis seemed crazy at first, then looked through user:AspiringKnitter's comment history and read a number of things that made me update substantially toward it. (Though I found nothing that made it "extremely obvious", and it's hard to weigh this sort of evidence against low priors.)
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-27T02:34:42.829Z · LW(p) · GW(p)
Out of curiosity, what's your estimate of the likelihood that you'd update substantially toward a similar hypothesis involving other LW users? ...involving other users who have identified as theists or partial theists?
↑ comment by Mitchell_Porter · 2011-12-26T11:01:21.309Z · LW(p) · GW(p)
It used to be possible - perhaps it still is? - to make donations to SIAI targeted towards particular proposed research projects. If you are interested in taking up this bet, we should do a side deal whereby, if I win, your $1000 would go to me via SIAI in support of some project that is of mutual interest.
↑ comment by Shmi (shminux) · 2011-12-26T02:42:22.023Z · LW(p) · GW(p)
Here is an experiment that could solve this.
If someone takes the bet and some of the proceeds go to Trike, they might agree to check the logs and compare IPs (a matching IP, or even a proxy used as a detection-avoidance attempt, could be interpreted as AK=WN). Of course, AK would have to consent.
Replies from: None, lessdazed, wedrifid↑ comment by [deleted] · 2011-12-26T02:57:03.417Z · LW(p) · GW(p)
.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T23:25:11.067Z · LW(p) · GW(p)
I'm still surprised that our collective ingenuity has yet to find a practical solution. I don't think anybody is trying very hard but it's still surprising how little our knowledge of cryptography and such is helping us.
Anyway yeah, I really don't think IPs provide much evidence. As wedrifid said, if the IPs don't match it only means that at least I'm putting a minimal amount of effort into anonymity.
↑ comment by lessdazed · 2011-12-28T17:09:56.349Z · LW(p) · GW(p)
Why didn't you suggest asking Will_Newsome?
Replies from: shminux, wedrifid↑ comment by Shmi (shminux) · 2011-12-28T22:09:43.830Z · LW(p) · GW(p)
Didn't think about it. He would have to consent, too. Fortunately, any interest in the issue seems to have waned.
↑ comment by wedrifid · 2011-12-28T19:21:53.316Z · LW(p) · GW(p)
Why didn't you suggest asking Will_Newsome?
Ask him what? To raise his right arm if he is telling the truth?
Replies from: lessdazed, dlthomas↑ comment by lessdazed · 2011-12-29T00:23:13.361Z · LW(p) · GW(p)
I missed where he explicitly made a claim about it one way or the other.
The months went by, and at last on a day of spring Ged returned to the Great House, and he had no idea what would be asked of him next. At the door that gives on the path across the fields to Roke Knoll an old man met him, waiting for him in the doorway. At first Ged did not know him, and then putting his mind to it recalled him as the one who had let him into the School on the day of his coming, five years ago.
The old man smiled, greeting him by name, and asked, "Do you know who I am?"
Now Ged had thought before of how it was always said, the Nine Masters of Roke, although he knew only eight: Windkey, Hand, Herbal, Chanter, Changer, Summoner, Namer, Patterner. It seemed that people spoke of the Archmage as the ninth. Yet when a new Archmage was chosen, nine Masters met to choose him.
"I think you are the Master Doorkeeper," said Ged.
"I am. Ged, you won entrance to Roke by saying your name. Now you may win your freedom of it by saying mine." So said the old man smiling, and waited. Ged stood dumb.
He knew a thousand ways and crafts and means for finding out names of things and of men, of course; such craft was a part of everything he had learned at Roke, for without it there could be little useful magic done. But to find out the name of a Mage and Master was another matter. A mage's name is better hidden than a herring in the sea, better guarded than a dragon's den. A prying charm will be met with a stronger charm, subtle devices will fail, devious inquiries will be deviously thwarted, and force will be turned ruinously back upon itself.
"You keep a narrow door, Master," said Ged at last. "I must sit out in the fields here, I think, and fast till I grow thin enough to slip through"
"As long as you like," said the Doorkeeper, smiling.
So Ged went off a little way and sat down under an alder on the banks of the Thwilburn, letting his otak run down to play in the stream and hunt the muddy banks for creekcrabs. The sun went down, late and bright, for spring was well along. Lights of lantern and werelight gleamed in the windows of the Great House, and down the hill the streets of Thwil town filled with darkness. Owls hooted over the roofs and bats flitted in the dusk air above the stream, and still Ged sat thinking how he might, by force, ruse, or sorcery, learn the Doorkeeper's name. The more he pondered the less he saw, among all the arts of witchcraft he had learned in these five years on Roke, any one that would serve to wrest such a secret from such a mage.
He lay down in the field and slept under the stars, with the otak nestling in his pocket. After the sun was up he went, still fasting, to the door of the House and knocked. The Doorkeeper opened.
"Master," said Ged, "I cannot take your name from you, not being strong enough, and I cannot trick your name from you, not being wise enough. So I am content to stay here, and learn or serve, whatever you will: unless by chance you will answer a question I have."
"Ask it."
"What is your name?"
The Doorkeeper smiled, and said his name: and Ged, repeating it, entered for the last time into that House.
--A Wizard of Earthsea, Ursula K. Le Guin
http://tvtropes.org/pmwiki/pmwiki.php/Main/YouDidntAsk
Replies from: wedrifid↑ comment by wedrifid · 2011-12-29T00:32:22.098Z · LW(p) · GW(p)
I missed where he explicitly made a claim about it one way or the other.
If he is AK then he made an explicit claim about it. So either he is not AK or he is lying - a raise-your-right-hand situation.
Replies from: lessdazed↑ comment by lessdazed · 2011-12-29T22:00:14.053Z · LW(p) · GW(p)
I simply had not considered the logical implications of AspiringKnitter making the claim that she is not Will_Newsome, and had only noticed that no similar claim had appeared under the name of Will_Newsome.
It would be interesting if one claimed to be them both and the other claimed they were separate people. If Will_Newsome claimed to be both of them and AspiringKnitter did not, then we would know he was lying. So that is something we could learn from asking Will_Newsome explicitly. I hadn't considered this when I made my original comment, which was made without thinking deeply.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-29T22:19:07.288Z · LW(p) · GW(p)
If WillNewsome claimed to be both of them and AspiringKnitter did not, then we would know he was lying.
Um? Supposing I'd created both accounts, I could certainly claim as Will that both accounts were me, and claim as AK that they weren't, and in that case Will would be telling the truth.
Replies from: Will_Newsome, CuSithBell↑ comment by Will_Newsome · 2011-12-29T23:15:46.761Z · LW(p) · GW(p)
Supposing I'd created both accounts, I could certainly claim as Will that both accounts were me, and claim as AK that they weren't
Me too.
ETA: And I really mean no offense, but I'm sort of surprised that folk don't immediately see things like this... is it a skill maybe?
Replies from: khafra↑ comment by CuSithBell · 2011-12-29T22:58:13.972Z · LW(p) · GW(p)
But if Will is AK, then Will claimed both that they were and were not the same person (using different screen names).
Replies from: Will_Newsome, Nick_Tarleton, TheOtherDave↑ comment by Will_Newsome · 2011-12-30T00:54:53.609Z · LW(p) · GW(p)
(Maybe everyone knows this but I've pretty much denied that me and AK are the same person. Just saying so people don't get confused.)
Replies from: CuSithBell↑ comment by CuSithBell · 2011-12-30T00:58:46.293Z · LW(p) · GW(p)
Yes, a good thing to clarify! I'm only speaking to a hypothetical situation.
↑ comment by Nick_Tarleton · 2011-12-30T00:18:29.948Z · LW(p) · GW(p)
Oh, so by "Will" you mean "any account controlled by Will" not "the account called Will_Newsome".
I think everyone else interpreted it as the latter.
(I'm sort of surprised that folk don't immediately see things like this... is it a skill maybe?)
Replies from: ArisKatsaris, CuSithBell↑ comment by ArisKatsaris · 2011-12-30T00:30:53.016Z · LW(p) · GW(p)
Oh, so by "Will" you mean "any account controlled by Will" not "the account called Will_Newsome". I think everyone else interpreted it as the latter.
Nick, it was pretty obvious to me that lessdazed and CuSithBell meant the person Will, not "any account controlled by Will" or "the account called Will_Newsome" -- it doesn't matter if the person would be using an account in order to lie, or an email in order to lie, or Morse code in order to lie, just that they would be lying.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-30T01:01:48.247Z · LW(p) · GW(p)
It was "obvious" to me that lessdazed didn't mean that and it would've been obvious to me that CuSithBell did mean that if I hadn't been primed to interpret his/her comment in the light of lessdazed's comment. Looking back I'm still not sure what lessdazed intended, but at this point I'm starting to think he/she meant the same as CuSithBell but unfortunately put an underscore betwen "Will" and "Newsome", confusing the matter.
↑ comment by CuSithBell · 2011-12-30T00:33:37.481Z · LW(p) · GW(p)
Oh, so by "Will" you mean "any account controlled by Will" not "the account called Will_Newsome".
I think everyone else interpreted it the other way.
Well, this was my first post in the thread. I assume you are referring to this post by lessdazed? I thought at the time of my post that lessdazed was using it in the former way (though I'd phrase it "the person Will Newsome"), as you say - either Will lied with the Will account, or told the truth with the Will account and was thus AK, and thus lying with the AK account.
I now think it's possible that they meant to make neither assumption, instead claiming that if the accounts were inconsistent in this way (if the Will account could not "control" the AK account) then this would indicate that Will (the account and person) was lying about being AK. This claim fails if Will can be expected to engage in deliberate trickery (perhaps inspired by lessdazed's post), which I think should be a fairly uncontentious assertion.
↑ comment by TheOtherDave · 2011-12-29T23:51:47.879Z · LW(p) · GW(p)
Yes, that's true.
And?
Replies from: ArisKatsaris, CuSithBell↑ comment by ArisKatsaris · 2011-12-30T00:31:55.320Z · LW(p) · GW(p)
And?
And therefore, either one way or another, Will would be lying.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-30T00:47:37.371Z · LW(p) · GW(p)
(Maybe I should point out that this is all academic since at this point both AK and I have denied that we're the same person, though I've been a little bit more coy about it.)
↑ comment by CuSithBell · 2011-12-30T00:35:44.749Z · LW(p) · GW(p)
And then he (the person) is lying (also telling the truth, naturally, but I interpreted your claim that he would be telling the truth as a claim that he would not be lying).
I suss out the confusion in this post.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-30T01:18:30.350Z · LW(p) · GW(p)
Ah! The person (whatever his or her name was) would be lying, although Will Newsome the identity would not be. I get it now.
Edit: And then I was utterly redundant. Sorry twice.
Replies from: CuSithBell↑ comment by CuSithBell · 2011-12-30T02:48:23.944Z · LW(p) · GW(p)
Absolutely not a problem :) I think I got turned around a few times there myself.
↑ comment by dlthomas · 2011-12-28T19:30:15.732Z · LW(p) · GW(p)
This was my initial interpretation as well, but on reflection I think lessdazed meant "ask him if it's okay if his IP is checked." That puts us in a strange situation, though: he's then able to sabotage the credibility of another member through refusal, but if we don't require his permission we are perhaps violating his privacy...
Briefly, my impulse was "but how much privacy is lost in demonstrating A is (probably - proxies, etc) not a sock puppet of B"? If there's no other information leaked, I see no reason to protect against a result of "BAD/NOTBAD" on privacy grounds. However, that is not what we are asking - we're asking if two posters come from the same IP address. So really, we need to decide whether posters cohabiting should be able to keep that cohabitation private - which seems far more weighty a question.
↑ comment by wedrifid · 2011-12-26T11:27:35.487Z · LW(p) · GW(p)
Replies from: shminux, Emile↑ comment by Shmi (shminux) · 2011-12-26T20:00:00.068Z · LW(p) · GW(p)
I probably phrased it wrong. AK does not have to consent, but I would be surprised if the site admins would bother getting in the middle of this silly debate unless both parties ask for it and provide some incentive to do so.
↑ comment by Emile · 2011-12-26T16:51:06.079Z · LW(p) · GW(p)
Yes, it may be legal to check people's IP addresses, but that doesn't mean it's morally okay to do so without asking; and if one does check, it's best to do so privately (i.e. not publicize any identifying information, only the information "yup, it's the same IP as another user").
Replies from: wedrifid↑ comment by wedrifid · 2011-12-26T17:23:04.506Z · LW(p) · GW(p)
Yes, it may be legal to check people's IP addresses, but that doesn't mean it's morally okay to do so without asking
No, but it still is morally ok. In fact it is usually the use of multiple accounts that is frowned upon, morally questionable or an outright breach of ToS - not the identification thereof.
Replies from: Emile↑ comment by Emile · 2011-12-26T17:56:13.317Z · LW(p) · GW(p)
I don't think sock puppets are always frowned upon - if Clippy and QuirinusQuirrel were sock puppets of regular users (I think Quirrell is, but not Clippy), they are "good faith" ones (as long as they don't double-downvote etc.), and I expect "outing" them would be frowned upon.
If AK is a sock puppet, then yeah, it's something morally questionable the admins should deal with. But I wouldn't extend that to all sock puppets.
Replies from: katydee, TheOtherDave, wedrifid↑ comment by katydee · 2011-12-26T19:25:35.916Z · LW(p) · GW(p)
Quirrell overtly claims to be a sock puppet or something like one (it's kind of complicated), whereas Clippy has been consistent in its claim to be the online avatar of a paperclip-maximizing AI. That said, I think most people here believe (like good Bayesians) that Clippy is more likely to be a sockpuppet of an existing user.
↑ comment by TheOtherDave · 2011-12-26T19:00:58.469Z · LW(p) · GW(p)
Huh. Can you clarify what is morally questionable about another user posting pseudonymously under the AK account?
For example, suppose hypothetically that I was the user who'd created, and was posting as, AK, and suppose I don't consider myself to have violated any moral constraints in so doing. What am I missing?
Replies from: Emile↑ comment by Emile · 2011-12-26T19:47:41.129Z · LW(p) · GW(p)
Having multiple sock puppets can be a dishonest way to give the impression that certain views are held by more members than in reality. This isn't really a problem for novelty sockpuppets (Clippy and Quirrel), since those clearly indicate their status.
What's also iffy in this case is the possibility of AK lying about who she claims to be, and wasting everybody's time (which is likely to go hand-in-hand with AK being a sockpuppet of someone else).
If you are posting as AK and are actually female and Christian but would rather that fact not be known about your more famous "TheOtherDave" identity, then I don't have any objection (as long as you don't double vote, or show up twice in the same thread to support the same position, etc.).
Replies from: TheOtherDave, None↑ comment by TheOtherDave · 2011-12-26T20:12:06.497Z · LW(p) · GW(p)
OK, thanks for clarifying.
I can see where double-voting is a problem, both for official votes (e.g., karma-counts) and unofficial ones (e.g., discussions on controversial issues).
I can also see where people lying about their actual demographics, experiences, etc. can be problematic, though of course that's not limited to sockpuppetry. That is, I might actually be female and Christian, or seventeen and Muslim, or Canadian and Theosophist, or what-have-you, and still only have one account.
↑ comment by [deleted] · 2011-12-26T21:23:37.736Z · LW(p) · GW(p)
Hmm. I am generally a strong supporter of anonymity and pseudonymity. I think we just have to accept that multiple internet folks may come from the same meatspace body. You are right that sockpuppets made for rhetorical purposes are morally questionable, but that's mostly because rhetoric itself is morally questionable.
My preferred approach is to pretend that names, numbers, and reputations don't matter. Judge only the work, and not the name attached to it or how many comments claim to like it. Of course this is difficult, like the rest of rationality; we do tend to fail on these by default, but that part is our own problem.
Sockpuppetry and astroturfing is pretty clearly a problem, and being rational is not a complete defense. I'm going to have to think about this problem more, and maybe make a post.
↑ comment by wedrifid · 2011-12-26T18:12:42.312Z · LW(p) · GW(p)
if Clippy and QuirinusQuirrel were sock puppets of regular users (I think Quirrell is, but not Clippy)
Clippy is too.
If AK is a sock puppet, then yeah, it's something morally questionable the admins should deal with.
Weren't you just telling me that it is morally wrong for the admins to even look at the IP addresses?
But I wouldn't extend that to all sock puppets.
When it comes to well-behaved sockpuppets, "Don't ask, don't tell" seems to work.
↑ comment by shokwave · 2011-12-24T03:11:12.012Z · LW(p) · GW(p)
I'll bet US$10 you have significant outside information.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-24T07:35:19.042Z · LW(p) · GW(p)
He doesn't.
Replies from: shokwave, Will_Newsome↑ comment by shokwave · 2011-12-24T07:57:08.443Z · LW(p) · GW(p)
See, I'd like to believe you, but a thousand dollars is a lot of money.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-24T07:58:52.992Z · LW(p) · GW(p)
Take him up on his bet, then.
(Not that I have any intention of showing up anywhere just to show you who I am and am not. Unless you're going to pay ME that $1000.)
Replies from: shokwave, Mass_Driver↑ comment by shokwave · 2011-12-24T08:34:07.914Z · LW(p) · GW(p)
What about if I bet you $500 that you're not Will_Newsome? That way you can prove your separate existence to me, get paid, and I can use the proof you give me to take a thousand from Mitchell_Porter. In fact, I'll go as high as 700 dollars if you agree to prove yourself to me and Mitchell_Porter.
Of course, this offer is isomorphic to you taking Mitchell's bet and sending 300-500 dollars to me for no reason, and you're not taking his bet currently, so I don't expect you to be convinced by this offering either.
Replies from: AspiringKnitter, wedrifid↑ comment by AspiringKnitter · 2011-12-24T09:25:04.088Z · LW(p) · GW(p)
What possible proof could I offer you? I can't take you up on the bet because, while I'm not Newsome, I can't think of anything I could do that he couldn't fake if this were a sockpuppet account. If we met in person, I could be the very same person as Newsome anyway; he could really secretly be a she. Or the person you meet could be paid by Newsome to pretend to be AspiringKnitter.
Replies from: shokwave, Alicorn, TheOtherDave↑ comment by Alicorn · 2011-12-24T15:54:40.678Z · LW(p) · GW(p)
he could really secretly be a she
Nope, plenty of people onsite have met Will. I mean, I suppose it is not strictly impossible, but I would be surprised if he were able to present that convincingly as a dude and then later present as convincingly as a girl. Bonus points if you have long hair.
↑ comment by TheOtherDave · 2011-12-24T15:29:01.922Z · LW(p) · GW(p)
Excellent question. One way to deal with it is for all the relevant agents to agree on a bet that's actually specified... that is, instead of betting that "AspiringKnitter is/isn't the same person as Will_Newsome," bet that "two verifiably different people will present themselves to a trusted third party identifying as Will_Newsome and AspiringKnitter" and agree on a mechanism of verifying their difference (e.g., Skype).
You're of course right that these are two different questions, and the latter doesn't prove the former, but if y'all agree to bet on the latter then the former becomes irrelevant. It would be silly of anyone to agree to the latter if their goal was to establish the former, but my guess is that isn't actually the goal of anyone involved.
Just in case this matters, I don't actually care. For all I know, you and shokwave are the same person; it really doesn't affect my life in any way. This is the Internet, if I'm not willing to take people's personas at face value, then I do best not to engage with them at all.
↑ comment by Mass_Driver · 2011-12-24T09:20:47.737Z · LW(p) · GW(p)
Yeah, you take the bet. Free money! Show up on Skype.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-24T09:26:07.225Z · LW(p) · GW(p)
And get accused of being this person's sister impersonating his sockpuppet?
↑ comment by Will_Newsome · 2011-12-27T10:46:23.811Z · LW(p) · GW(p)
As far as we know.
↑ comment by Caspian · 2011-12-28T05:07:28.015Z · LW(p) · GW(p)
I have a general heuristic that making one-on-one bets is not a worthwhile way to gain money, as the other party's willingness to bet indicates they don't expect to lose money to me. I would also be surprised if a bet of this size, between two members of a rationalist website, paid off to either side (though I guess paying off as a donation to SIAI would not be so surprising). At this point, though, I am guessing the bet will not go through.
Was there supposed to be a time limit on that bet offer? It seems like as long as the offer is available, you and everyone else will have an incentive not to show all the evidence, as a fully-informed betting opponent is less profitable.
↑ comment by lessdazed · 2011-12-23T18:33:01.164Z · LW(p) · GW(p)
Can you please talk more about the word "immortal"? As nothing in physics can make someone immortal, as far as I know, did you mean truly immortal, or long-lived, or do you think it likely science will advance and make immortality possible, or what?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-23T20:14:58.924Z · LW(p) · GW(p)
...Poor choice of words based on EY's goals (which are just as poorly stated).
Replies from: lessdazed↑ comment by lessdazed · 2011-12-23T21:31:18.136Z · LW(p) · GW(p)
Allow me to invent (or put under the microscope a slight, existing) distinction.
"Poorly stated" - not explicit, without fixed meaning. The words written may mean any of several things.
"Poorly worded" - worded so as to mean one thing which is wrong, perhaps even obviously wrong, in which case the writer may intend for people to assume he didn't mean the obviously wrong thing, but instead meant the less literal, plausibly correct thing.
I have several times criticized the use of the words "immortal" and "immortality" by several people, including EY. I agree with the analysis by Robin Hanson here, in which he argues that the word "immortality" distracts from what people actually intend.
I characterize the use of "immortality" on this site as frequently obviously wrong in many contexts in which it is used, in which it is intended to mean the near thing "living a very long time and not being as fragile as humans are now." In other words, often it is a poor wording of clear concepts.
I'm not sure if you agree, or instead think that the goal of very long life is unclear, or poorly justified, or just wrong, or perhaps something else.
Replies from: AspiringKnitter, Bugmaster↑ comment by AspiringKnitter · 2011-12-23T21:47:42.313Z · LW(p) · GW(p)
Yeah, good point. That makes sense.
↑ comment by Bugmaster · 2011-12-24T04:24:39.170Z · LW(p) · GW(p)
As far as I understand, EY believes that humans and/or AIs will be able to survive until at least the heat death of the Universe, which would render such entities effectively immortal (i.e., as immortal as it is possible to be). That said, I do agree with your assessment.
Replies from: lessdazed, soreff↑ comment by lessdazed · 2011-12-24T06:35:36.125Z · LW(p) · GW(p)
If someone believed that no human and/or AI will ever be able to last longer than 1,000 years - perhaps any mind goes mad at that age, or explodes due to a law of the universe dealing with mental entities, or whatever - that person would be lambasted for using "immortal" to mean beings "as immortal as it is possible to be in my opinion."
↑ comment by soreff · 2011-12-24T04:50:44.978Z · LW(p) · GW(p)
It is unfortunate that we don't have clearer single words for the more plausible, more limited alternatives, closer to
living a very long time and not being as fragile as humans are now.
Come to think of it, if de Grey's SENS program actually succeeded, we'd get the "living a very long time" but not the "not being as fragile as humans are now", so we could use terms to distinguish those.
And all of the variations on these are distinct from uploading/ems, with the possibility of distributed backups.
Unfortunately, I suspect that neither of these is very likely to ultimately happen. SENS has curing cancer as a subtask. Uploading/ems requires a scanning technology fast enough to scan a whole human brain and fine-grained enough to distinguish synapse types. I think other events will happen first.
(Waves to Clippy)
↑ comment by Jonii · 2011-12-23T00:38:40.010Z · LW(p) · GW(p)
Welcome, it's fun to have you here.
So, the next thing: I think you should avoid the religion topic here. I mean, you are allowed to continue with it, but I fear you are gonna wear yourself out by doing that. I think there are better topics to discuss, where both you and LW have a chance to learn something new and change their opinions. Learning something new is refreshing; discussions about religion rarely are.
Admittedly, I think that there is no god, but I also don't think anyone here will convince you of that. I think you actually have a higher chance of converting someone here than someone here has of converting you.
So come, share some of your thoughts about what LW is doing wrong, or just take part in whatever discussions you find interesting here and there. Welcome!
Replies from: TimS↑ comment by AspiringKnitter · 2011-12-28T00:40:38.513Z · LW(p) · GW(p)
You know, I was right.
I'll probably just leave soon anyway. Nothing good can come of this.
You guys are fine and all, but I'm not cut out for this. I'm not smart enough or thick-skinned enough or familiar enough with various things to be a part of this community. It's not you, it's me, for real, I'm not saying that to make you feel better or something. I've only made you all confused and upset, and I know it's draining for me to participate in these discussions.
See you.
Replies from: TidPao↑ comment by TidPao · 2011-12-28T00:58:20.477Z · LW(p) · GW(p)
Stick around. Your contributions are fine. Not everyone will be accusatory like nyan_sandwich.
Read through the Sequences and comment on what seems good to you.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-28T03:07:13.375Z · LW(p) · GW(p)
Not everyone will be accusatory like nyan_sandwich.
It's fine, I'm not pitching a fit about a little crudeness. I really can take it... or I can stay involved, but I don't think I can do both, unlike some people (like maybe you) who are without a doubt better at some things than I am. Don't blame him for chasing me off, I know the community is welcoming.
And I'm not really looking for reassurance. Maybe I'll sleep on it for a while, but I really don't think I'm cut out for this. That's fine with me, I hope it's fine with you too. I might try to hang around the HP:MoR thread, I don't know, but this kind of serious discussion requires skills I just don't have.
All of that said, I really appreciate that sweet comment. Thank you.
Replies from: orthonormal, thomblake↑ comment by orthonormal · 2011-12-28T05:50:37.798Z · LW(p) · GW(p)
I hope you're not seeing the options as "keep up with all the threads of this conversation simultaneously" or "quit LW". It's perfectly OK to leave things hanging and lurk for a while. (If you're feeling especially polite, you can even say that you're tapping out of the conversation for now.)
(Hmm, I might add that advice to the Welcome post...)
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-28T06:36:05.148Z · LW(p) · GW(p)
Okay. I'm tapping out of everything indefinitely. Thank you.
↑ comment by thomblake · 2011-12-29T17:02:49.282Z · LW(p) · GW(p)
I don't know, but this kind of serious discussion requires skills I just don't have.
But remember, fixing this sort of problem is ostensibly what we're here for.
If we fail at that for reasons you can articulate, I at least would like to know.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-29T17:14:37.731Z · LW(p) · GW(p)
But remember, fixing this sort of problem is ostensibly what we're here for.
Education is ostensibly what high school teachers are there for, but if a student shows up who can't read, they don't blame themselves because they're not there to teach basic skills like that.
Replies from: None↑ comment by wedrifid · 2011-12-19T09:01:12.226Z · LW(p) · GW(p)
Okay, ready to be shouted down. I'll be counting the downvotes as they roll in, I guess. You guys really hate Christians, after all. (Am I actually allowed to be here or am I banned for my religion?) I'll probably just leave soon anyway. Nothing good can come of this. I don't know why I'm doing this. I shouldn't be here; you don't want me here, not to mention I probably shouldn't bother talking to people who only want me to hate God. Why am I even here again? Seriously, why am I not just lurking? That would make more sense.
Good questions.
↑ comment by Laoch · 2011-12-19T17:50:15.730Z · LW(p) · GW(p)
I'm Christian and female and don't want to be turned into an immortal computer-brain-thing that acts more like Eliezer thinks it should.
Interesting. How comfortable are you with the concept of being immortal but being under the yoke of an immortal, whimsical tyrant? Do you not see the irony at all? Besides, I think you'll find "indefinite life extension" is the more appropriate term.
Replies from: MixedNuts, NancyLebovitz↑ comment by MixedNuts · 2011-12-19T18:01:27.603Z · LW(p) · GW(p)
There are places for this debate and they're not this thread. You're being rude.
Replies from: Laoch, MarkusRamikin, Laoch↑ comment by MarkusRamikin · 2011-12-19T18:13:55.523Z · LW(p) · GW(p)
And more disappointingly, confirming what should have been completely off-the-mark predictions about what reception Knitter would get as a Christian. I confess myself surprised.
Hi, Knitter. What does EC stand for again?
Replies from: MixedNuts, AspiringKnitter↑ comment by MixedNuts · 2011-12-19T18:53:31.706Z · LW(p) · GW(p)
The boring explanation is that Laoch was taught at the feet of PZ Myers and Hitchens, who operate purely in places open for debate (atheist blogs are not like dinner tables); talk about the arguments of religious people not to them, but to audiences already sympathetic to atheism, and thus care little about principles of charity; and have a beef with religion-as-harmful-organization (e.g. "Hassidic Judaism hurts queers!") and rather often with religious-people-as-outgroup-members (e.g. "Sally says abortion is murder because she's trying to manipulate me!"), which interferes with their beef with religion-as-reasoning-mistake (e.g. "Sadi thinks he can derive knowledge in ways that violate thermodynamics!").
The reading-too-much-HPMOR explanation is that Laoch is an altruistic Slytherin, who wants Knitter to think: "This is a good bunch. Not only are most people nice, but they can swiftly punish jerks. And there are such occasional jerks -- I don't have to feel silly about expecting a completely different reaction than I got; it was because bad apples are noisier."
Replies from: MarkusRamikin↑ comment by MarkusRamikin · 2011-12-19T19:04:40.585Z · LW(p) · GW(p)
I would have thought there ain't no such critter as "too much MoR", but after seeing that theory... ;)
↑ comment by AspiringKnitter · 2011-12-19T22:43:08.960Z · LW(p) · GW(p)
It stands for evaporative cooling and I'm not offended. It's a pretty valid point.
(Laoch: I expect God not to abuse his power, hence I wouldn't classify him as a whimsical tyrant. And part of my issue is with being turned into a computer, which sounds even worse than making a computer that acts like me and thinks it is me.)
I can't decide which of MixedNuts's hypotheses is more awesome.
Replies from: TheOtherDave, Laoch, None, NancyLebovitz↑ comment by TheOtherDave · 2011-12-20T01:14:37.583Z · LW(p) · GW(p)
I'd be interested to hear more about your understanding of what a computer is, that drives your confidence that being turned into one is a bad thing.
Relatedly, how confident are you that God will never make a computer that acts like you and thinks it is you? How did you arrive at that confidence?
Replies from: Bugmaster, AspiringKnitter↑ comment by Bugmaster · 2011-12-20T01:23:36.360Z · LW(p) · GW(p)
(This is totally off-topic, but is there a "watch comment" feature hidden somewhere in the LW UI? I am also interested to see AspiringKnitter's opinion on this subject, but I just know I'll end up losing track of it without technological assistance...)
Replies from: jaimeastorga2000, TheOtherDave↑ comment by jaimeastorga2000 · 2011-12-20T01:36:58.316Z · LW(p) · GW(p)
Every LW comment has its own RSS feed. You can find it by going to the comment's permalink URL and then clicking on "Subscribe to RSS Feed" from the right column or by adding "/.rss" to the end of the aforementioned URL, whichever is easier for you. The grandparent's RSS feed is here.
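For anyone who'd rather script this than click around, here's a minimal sketch of the "/.rss" trick in Python, using only the standard library (the permalink below is a made-up placeholder -- substitute the URL of the comment you actually want to watch):

    # Build and poll a LW comment's RSS feed by appending "/.rss" to its
    # permalink, as described above.
    import urllib.request
    import xml.etree.ElementTree as ET

    permalink = "http://lesswrong.com/lw/do/welcome/2abc"  # hypothetical placeholder
    feed_url = permalink.rstrip("/") + "/.rss"

    with urllib.request.urlopen(feed_url) as response:
        tree = ET.parse(response)

    # Print the title of each entry currently in the feed.
    for item in tree.iter("item"):
        print(item.findtext("title"))

Pointing an ordinary feed reader at the same URL gets you the "watch comment" feature Bugmaster was asking about.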
↑ comment by TheOtherDave · 2011-12-20T01:34:36.834Z · LW(p) · GW(p)
Not that I know of, but http://lesswrong.com/user/AspiringKnitter/ is one way to monitor that if you like.
↑ comment by AspiringKnitter · 2011-12-20T02:25:51.289Z · LW(p) · GW(p)
For one thing, I'm skeptical that an em would be me, but aware that almost everyone here thinks it would be. If it thought it was me, and they thought it was me, but I was already dead, that would be really bad. And if I somehow wasn't dead, there could be two of us, both claiming to be the real person. God would never blunder into that by accident, believing he was prolonging my life.
And if it really was me, and I really was a computer, whoever made the computer would have access to all of my brain and could embed whatever they wanted in it. I don't want to be programmed to, just as an implausible example, worship Eliezer Yudkowsky. More plausibly, I don't want to be modified without my consent, which might be even easier if I were a computer. (For God to do it, it would be no different from the current situation, of course. He has as much access to my brain as he wants.)
And if the computer was not me but was sentient (wouldn't it be awful if we created nonsentient ems that emulated everyone and ended up with a world populated entirely by beings with no qualia that pretend to be real people?), then I wouldn't want it to be vulnerable to involuntary modification, either. I'd feel a great deal of responsibility for it if I were alive, and if I were not alive, then it would essentially be the worst of both worlds. God doing this would not expose it to any more risk than all other living beings already face.
Does this seem rational to you, or have I said something that doesn't make sense?
Replies from: Bugmaster, TheOtherDave↑ comment by Bugmaster · 2011-12-20T02:57:28.038Z · LW(p) · GW(p)
I'm going to scoop TheOtherDave on this topic, I hope he doesn't mind :-/
But first of all, what do you mean by "an em"? I think I know the answer, but I want to make sure.
If it thought it was me, and they thought it was me, but I was already dead, that would be really bad.
From my perspective, a machine that thinks it is me, and that behaves identically to myself, would, in fact, be myself. Thus, I could not be "already dead" under that scenario, until someone destroys the machine that comprises my body (which they could do with my biological body, as well).
There are two scenarios I can think of that help illustrate my point.
1). Let's pretend that you and I know each other relatively well, though only through Less Wrong. But tomorrow, aliens abduct me and replace me with a machine that makes the same exact posts as I normally would. If you ask this replica what he ate for breakfast, or how he feels about walks on the beach, or whatever, it will respond exactly as I would have responded. Is there any test you can think of that will tell you whether you're talking to the real Bugmaster, or the replica? If the answer is "no", then how do you know that you aren't talking to the replica at this very moment? More importantly, why does it matter?
2). Let's say that a person gets into an accident, and loses his arm. But, luckily, our prosthetic technology is superb, and we replace his arm with a perfectly functional prosthesis, indistinguishable from the real arm (in reality, our technology isn't nearly as good, but we're getting there). Is the person still human? Now let's say that one of his eyes gets damaged, and similarly replaced. Is the person still human? Now let's say that the person has epilepsy, but we are able to implant a chip in his brain that will stop the epileptic fits (such implants do, in fact, exist). What if part of the person's brain gets damaged -- let's say, the part that's responsible for color perception -- but we are able to replace it with a more sophisticated chip? Is the person still human? At what point do you draw the line from "augmented human" to "inhuman machine", and why do you draw the line just there and not elsewhere?
there could be two of us and both claiming to be the real person.
Two copies of me would both be me, though they would soon begin to diverge, since they would have slightly different perceptions of the world. If you don't believe that two identical twins are the same person, why would you believe that two copies are?
More plausibly, I don't want to be modified without my consent, which might be even easier if I were a computer.
Sure, it might be, or it might not; this depends entirely on implementation. Today, there exist some very sophisticated encryption algorithms that safeguard valuable data from modification by third parties; I would assume that your mind would be secured at least as well. On the flip side, your (and mine, and everyone else's) biological brain is currently highly susceptible to propaganda, brainwashing, indoctrination, and a whole slew of hostile manipulation techniques, and thus switching out your biological brain for an electronic one won't necessarily be a step down.
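(To make "safeguard valuable data from modification" concrete: one standard construction is a keyed hash, or MAC, which lets the data's owner detect any tampering. Here's a minimal sketch in Python -- offered purely as an illustration of the general idea, not as a claim about how an uploaded mind would actually be secured; the key and data are illustrative stand-ins:

    # Tamper detection with an HMAC: anyone who modifies the data without
    # the key cannot produce a matching tag, so the change is caught.
    import hmac
    import hashlib

    key = b"key known only to the mind's owner"   # illustrative secret
    data = b"contents of the stored mind-state"   # illustrative data

    tag = hmac.new(key, data, hashlib.sha256).digest()

    tampered = b"Contents of the stored mind-state"  # one character flipped

    def intact(blob):
        # Recompute the tag and compare in constant time.
        return hmac.compare_digest(tag, hmac.new(key, blob, hashlib.sha256).digest())

    print(intact(data))      # True  -- original verifies
    print(intact(tampered))  # False -- modification detected

Of course, this only shifts the question to who holds the key, which is exactly the objection AspiringKnitter raises below.)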
(For God to do it, it would be no different from the current situation, of course. He has as much access to my brain as he wants.)
So, you don't want your mind to be modified without your consent, but you give unconditional consent to God to do so?
wouldn't it be awful if we created nonsentient ems that emulated everyone and ended up with a world populated entirely by beings with no qualia that pretend to be real people?
I personally would answer "no", because I believe that the concept of qualia is a bit of a red herring. I might be in the minority on this one, though.
Replies from: AspiringKnitter, Dreaded_Anomaly↑ comment by AspiringKnitter · 2011-12-20T04:11:35.428Z · LW(p) · GW(p)
That's a REALLY good response.
An em would be a computer program meant to emulate a person's brain and mind.
From my perspective, a machine that thinks it is me, and that behaves identically to myself, would, in fact, be myself. Thus, I could not be "already dead" under that scenario, until someone destroys the machine that comprises my body (which they could do with my biological body, as well).
If you create such a mind that's just like mine at this very moment, and take both of us and show the construct something, then ask me what you showed the construct, I won't know the answer. In that sense, it isn't me. If you then let us meet each other, it could tell me something.
If you ask this replica what he ate for breakfast, or how he feels about walks on the beach, or whatever, it will respond exactly as I would have responded. Is there any test you can think of that will tell you whether you're talking to the real Bugmaster, or the replica ? If the answer is "no", then how do you know that you aren't talking to the replica at this very moment ? More importantly, why does it matter ?
Because this means I could believe that Bugmaster is comfortable and able to communicate with the world via the internet, but it could actually be true that Bugmaster is in an alien jail being tortured. The machine also doesn't have Bugmaster's soul-- it would be important to ascertain whether or not it did have a soul, though I'd have some trouble figuring out a test for that (but I'm sure I could-- I've already got ideas, pretty much along the lines of "ask God")-- and if it doesn't, then it's useless to worry about preaching the Gospel to the replica. (It's probably useless to preach it to Bugmaster anyway, since Bugmaster is almost certainly a very committed atheist.) This has implications for, e.g., reunions after death. Not to mention that if I'm concerned about the state of Bugmaster's soul, I should worry about Bugmaster in the alien ship. And if both of them (the replica and the real Bugmaster) accept Jesus (a soulless robot couldn't do that), it's two souls saved rather than one.
At what point do you draw the line from "augmented human" to "inhuman machine", and why do you draw the line just there and not elsewhere?
That's a really good question. How many grains of sand do you need to remove from a heap of sand for it to stop being a heap? I suppose what matters is whether the soul stays with the body. I don't know where the line is. I expect there is one, but I don't know where it is.
Of course, what do we mean by "inhuman machine" in this case? If it truly thought like a human brain, and FELT like a human, was really sentient and not just a good imitation, I'd venture to call it a real person.
Sure, it might be, or it might not; this depends entirely on implementation. Today, there exist some very sophisticated encryption algorithms that safeguard valuable data from modification by third parties; I would assume that your mind would be secured at least as well.
And who does the programming and encrypting? That only one person (who has clearly not respected my wishes to begin with since I don't want to be a computer, so why should xe start now?) can alter me at will to be xyr peon does not actually make me feel significantly better about the whole thing than if anyone can do it.
So, you don't want your mind to be modified without your consent, but you give unconditional consent to God to do so ?
I feel like being sarcastic here, but I remembered the inferential distance, so I'll try not to. There's a difference between a human, whose extreme vulnerability to corruption has been extensively demonstrated, and who doesn't know everything, and may or may not love me enough to die for me... and God, who is incorruptible, knows all and has been demonstrated already to love me enough to die and go to hell for me. This bothers me a lot less than an omniscient person without God's character. (God has also demonstrated a respect for human free will that surpasses his desire for humans not to suffer, making it very unlikely he'd modify a human against the human's will.)
On the flip side, your (and mine, and everyone else's) biological brain is currently highly susceptible to propaganda, brainwashing, indoctrination, and a whole slew of hostile manipulation techniques, and thus switching out your biological brain for an electronic one won't necessarily be a step down.
True. I consider the risk unacceptably high. I just think it'd be even worse as a computer. We have to practice our critical thinking as well as we can and avoid mind-altering chemicals like drugs and coffee. (I suppose you don't want to hear me say that we have to pray for discernment, too?) A core tenet of utilitarianism is that we compare possibilities to alternatives. This is bad. The alternatives are worse. Therefore, this is the best.
Replies from: Prismattic, Kaj_Sotala, Bugmaster↑ comment by Prismattic · 2011-12-20T04:34:43.974Z · LW(p) · GW(p)
I feel like being sarcastic here, but I remembered the inferential distance, so I'll try not to. There's a difference between a human, whose extreme vulnerability to corruption has been extensively demonstrated, and who doesn't know everything, and may or may not love me enough to die for me... and God, who is incorruptible, knows all and has been demonstrated already to love me enough to die and go to hell for me. This bothers me a lot less than an omniscient person without God's character.
I realize that theological debate has a pretty tenuous connection to the changing of minds, but sometimes one is just in the mood....
Suppose that tonight I lay a minefield all around your house. In the morning, I tell you the minefield is there. Then I send my child to walk through it. My kid gets blown up, but this shows you a safe path out of your house and allows you to go about your business. If I then suggest that you should express your gratitude to me every day for the rest of your life, would you think that reasonable?.... According to your theology, was hell not created by God?
(God has also demonstrated a respect for human free will that surpasses his desire for humans not to suffer, making it very unlikely he'd modify a human against the human's will.)
I once asked my best friend, who is a devout evangelical, how he could be sure that the words of the Bible as we have it today are correct, given the many iterations of transcription it must have gone through. According to him, God's general policy of noninterference in free will didn't preclude divinely inspiring the writers of the Bible to transcribe it inerrantly. At least according to one theist's account, then, God was willing to interfere as long as it was something really important for man's salvation. And even if you don't agree with that particular interpretation, I'd like to hear your explanation of how the points at which God "hardened Pharaoh's heart", for example, don't amount to interfering with free will.
Replies from: AspiringKnitter, khafra↑ comment by AspiringKnitter · 2011-12-20T05:04:26.685Z · LW(p) · GW(p)
I have nothing to say to your first point because I need to think that over and study the relevant theology (I never considered that God made hell and now I need to ascertain whether he did before I respond or even think about responding, a question complicated by being unsure of what hell is). With regard to your second point, however, I must cordially disagree with anyone who espouses the complete inerrancy of all versions of the Bible. (I must disagree less cordially with anyone who espouses the inerrancy of only the King James Version.) I thought it was common knowledge that the King James Version suffered from poor translation and the Vulgate was corrupt. A quick glance at the disagreements even among ancient manuscripts could tell you that.
I suppose if I complain about people with illogical beliefs making Christianity look bad, you'll think it's a joke...
Replies from: Nornagest, Prismattic, Bugmaster, Oligopsony↑ comment by Nornagest · 2011-12-20T08:31:05.754Z · LW(p) · GW(p)
I never considered that God made hell and now I need to ascertain whether he did before I respond or even think about responding, a question complicated by being unsure of what hell is
I don't really have a dog in this race. That said, Matthew 25:41 seems to point in that direction, although "prepared" is perhaps a little weaker than "made". It does seem to imply control and deliberate choice.
That's the first passage that comes to mind, anyway. There's not a whole lot on Hell in the Bible; most of the traditions associated with it are part of folk as opposed to textual Christianity, or are derived from essentially fanfictional works like Dante's or Milton's.
Replies from: Gust↑ comment by Prismattic · 2011-12-20T05:07:51.369Z · LW(p) · GW(p)
Upvoted for self-awareness.
The more general problem, of course, is that if you don't believe in textual inerrancy (of whatever version of the Bible you happen to prefer), you still aren't relying on God to decide which parts are correct.
↑ comment by Bugmaster · 2011-12-20T05:25:49.665Z · LW(p) · GW(p)
As Prismattic said, if you discard inerrancy, you run into the problem of classification. How do you know which parts of the Bible are literally true, which are metaphorical, and which have been superseded by the newer parts?
I would also add that our material world contains many things that, while they aren't as bad as Hell, are still pretty bad. For example, most animals eat each other alive in order to survive (some insects do so in truly terrifying ways); viruses and bacteria ravage huge swaths of the population, human, animal and plant alike; natural disasters routinely cause death and suffering on the global scale, etc. Did God create all these things, as well?
Replies from: MixedNuts↑ comment by MixedNuts · 2011-12-20T07:30:41.775Z · LW(p) · GW(p)
That's not a very good argument. "If you accept some parts are metaphorical, how do you know which are?" is, but if you only accept transcription and translation errors, you just treat it like any other historical document.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-20T08:23:45.140Z · LW(p) · GW(p)
My bad; for some reason I thought that when AK said,
I must cordially disagree with anyone who espouses the complete inerrancy of all versions of the Bible.
She meant that some parts of the Bible are not meant to be taken literally, but on second reading, it's obvious that she is only referring to transcription and translation errors, like you said. I stand corrected.
↑ comment by Oligopsony · 2011-12-21T15:27:44.710Z · LW(p) · GW(p)
I thought it was common knowledge that the King James Version suffered from poor translation and the Vulgate was corrupt.
Well, that really depends on what your translation criteria are. :) Reading KJV and, say, NIV side-by-side is like hearing Handel in one ear and Creed in the other.
↑ comment by khafra · 2011-12-21T16:17:51.889Z · LW(p) · GW(p)
I realize that theological debate has a pretty tenuous connection to the changing of minds, but sometimes one is just in the mood....
When I feel the urge, I go to r/debatereligion. The standards of debate aren't as high as they are here, of course; but I don't have to feel guilty about lowering them.
↑ comment by Kaj_Sotala · 2011-12-21T10:34:27.753Z · LW(p) · GW(p)
Upvoted for dismissing the inclination to respond sarcastically after remembering the inferential distance.
↑ comment by Bugmaster · 2011-12-20T05:04:18.180Z · LW(p) · GW(p)
An em would be a computer program meant to emulate a person's brain and mind.
That's what I thought, cool.
If you create such a mind that's just like mine at this very moment, and take both of us and show the construct something, then ask me what you showed the construct, I won't know the answer. In that sense, it isn't me.
Agreed; that is similar to what I meant earlier about the copies "diverging". I don't see this as problematic, though -- after all, there currently exists only one version of me (as far as I know), but that version is changing all the time (even as I type this sentence), and that's probably a good thing.
Because this means I could believe that Bugmaster is comfortable and able to communicate with the world via the internet, but it could actually be true that Bugmaster is in an alien jail being tortured.
Ok, that's a very good point; my example was flawed in this regard. I could've made the aliens more obviously benign. For example, maybe the biological Bugmaster got hit by a bus, but the aliens snatched up his brain just in time, and transcribed it into a computer. Then they put that computer inside of a perfectly realistic synthetic body, so that neither Bugmaster nor anyone else knows what happened (Bugmaster just thinks he woke up in a hospital, or something). Under these conditions, would it matter to you whether you were talking to the replica or the biological Bugmaster?
But, in the context of my original example, with the (possibly) evil aliens: why aren't you worried that you are talking to the replica right at this very moment?
The machine also doesn't have Bugmaster's soul-- it would be important to ascertain whether or not it did have a soul, though I'd have some trouble figuring out a test for that (but I'm sure I could-- I've already got ideas, pretty much along the lines of "ask God"
I agree that the issue of the soul would indeed be very important; if I believed in souls, as well as a God who answers specific questions regarding souls, I would probably be in total agreement with you. I don't believe in either of those things, though. So I guess my next two questions would be as follows:
a). Can you think of any non-supernatural reasons why an electronic copy of you wouldn't count as you, and/or
b). Is there anything other than faith that causes you to believe that souls exist?
If the answers to (a) and (b) are both "no", then we will pretty much have to agree to disagree, since I lack faith, and faith is (probably) impossible to communicate.
It's probably useless to preach it to Bugmaster anyway, since Bugmaster is almost certainly a very committed atheist.
Well, yes, preaching to me or to any other atheist is very unlikely to work. However, if you manage to find some independently verifiable and faith-independent evidence of God's (or any god's) existence, I'd convert in a heartbeat. I confess that I can't imagine what such evidence would look like, but just because I can't imagine it doesn't mean it can't exist.
If it truly thought like a human brain, and FELT like a human, was really sentient and not just a good imitation, I'd venture to call it a real person.
Do you believe that a machine could, in principle, "feel like a human" without having a soul? Also, when you say "feel", are you implying some sort of a supernatural communication channel, or would it be sufficient to observe the subject's behavior by purely material means (f.ex. by talking to him/it, reading his/its posts, etc.) in order to obtain this feeling?
And who does the programming and encrypting?
That's a good point: if you are trusting someone with your mind, how do you know they won't abuse that trust? But this question applies to your biological brain, as well, I think. Presumably, there exist people whom you currently trust; couldn't the person who operates the mind transfer device earn your trust in a similar way?
That only one person (who has clearly not respected my wishes to begin with since I don't want to be a computer, so why should xe start now?)
Oh, in that scenario, obviously you shouldn't trust anyone who wants to upload your mind against your will. I am more interested in finding out why you don't want to "be a computer" in the first place.
and God, who is incorruptible, knows all and has been demonstrated already to love me enough to die and go to hell for me. ... (God has also demonstrated a respect for human free will that surpasses his desire for humans not to suffer, making it very unlikely he'd modify a human against the human's will.)
You're probably aware of this already, but just in case: atheists (myself included) would say (at the very minimum) that your first sentence contains logical contradictions, and that your second sentence is contradicted by evidence and most religious literature, even if we assume that God does exist. That is probably a topic for a separate thread, though; I acknowledge that, if I believed what you do about God's existence and his character, I'd agree with you.
...and avoid mind-altering chemicals like drugs and coffee
Guilty as charged; I'm drinking some coffee right now :-/
I suppose you don't want to hear me say that we have to pray for discernment, too?
I only want to hear you say things that you actually believe...
That said, let's assume that your electronic brain would be at least as resistant to outright hacking as your biological one. IMO this is a reasonable assumption, given what we currently know about encryption, and assuming that the person who transferred your brain into the computer is trustworthy. If your computerized mind under this scenario was able to think faster, and remember more, than your biological mind, wouldn't that mean that your critical skills would greatly improve? If so, you would be more resistant to persuasion and indoctrination, not less.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T06:19:38.496Z · LW(p) · GW(p)
Agreed; that is similar to what I meant earlier about the copies "diverging". I don't see this as problematic, though -- after all, there currently exists only one version of me (as far as I know), but that version is changing all the time (even as I type this sentence), and that's probably a good thing.
Okay, but if both start out as me, how do we determine which one ceases to be me when they diverge? My answer would be that the one who was here first is me, which is problematic because I could be a replica, but only conditional on machines having souls or many of my religious beliefs being wrong. (If I learn that I am a replica, I must update on one of those.)
a). Can you think of any non-supernatural reasons why an electronic copy of you wouldn't count as you, and/or
Besides being electronic and the fact that I might also be currently existing (can there be two ships of Theseus?), no. Oh, wait, yes; it SHOULDN'T count as me if we live in a country which uses deontological morality in its justice system. Which isn't really the best idea for a justice system anyway, but if so, then it's hardly fair to treat the construct as me in that case because it can't take credit or blame for my past actions. For instance, if I commit a crime, it shouldn't be blamed if it didn't commit the crime. (If we live in a sensible, consequentialist society, we might still want not to punish it, but if everyone believes it's me, including it, then I suppose it would make sense to do so. And my behavior would be evidence about what it is likely to do in the future.)
b). Is there anything other than faith that causes you to believe that souls exist ?
If by "faith" you mean "things that follow logically from beliefs about God, the afterlife and the Bible" then no.
Do you believe that a machine could, in principle, "feel like a human" without having a soul ?
No, but it could act like one.
Also, when you say "feel", are you implying some sort of a supernatural communication channel, or would it be sufficient to observe the subject's behavior by purely material means (f.ex. by talking to him/it, reading his/its posts, etc.) in order to obtain this feeling ?
When I say "feel like a human" I mean "feel" in the same way that I feel tired, not in the same way that you would be able to perceive that I feel soft. I feel like a human; if you touch me, you'll notice that I feel a little like bread dough. I cannot perceive this directly, but I can observe things which raise the probability of it.
But something acting like a person is sufficient reason to treat it like one. We should err on the side of extending kindness where it's not needed, because the alternative is to err on the side of treating people like unfeeling automata.
Presumably, there exist people whom you currently trust;
Since I can think of none that I trust enough to, for instance, let them chain me to the wall of a soundproof cell in their basement, I feel no compulsion to trust anyone in a situation where I would be even more vulnerable. Trust has limits.
I only want to hear you say things that you actually believe...
I'm past underestimating you enough not to know that. I'm aware that believing something is a necessary condition for saying it; I just don't know if it's a sufficient condition.
That said, let's assume that your electronic brain would be at least as resistant to outright hacking as your biological one. IMO this is a reasonable assumption, given what we currently know about encryption, and assuming that the person who transferred your brain into the computer is trustworthy.
Those are some huge ifs, but okay.
If your computerized mind under this scenario was able to think faster, and remember more, than your biological mind, wouldn't that mean that your critical skills would greatly improve? If so, you would be more resistant to persuasion and indoctrination, not less.
Yes, and if we can prove that my soul would stay with this computer (as opposed to a scenario where it doesn't but my body and physical brain are killed, sending the real me to heaven about ten decades sooner than I'd like, or a scenario where a computer is made that thinks like me only smarter), and if we assume all the unlikely things stated already, and if I can stay in a corporeal body where I can smell and taste and hear and see and feel (and while we're at it, can I see and hear and smell better?) and otherwise continue being the normal me in a normal life and normal body (preferably my body; I'm especially partial to my hands), then hey, it sounds neat. That's just too implausible for real life.
EDIT: oh, and regarding why I'm not worried now, it's because I think it's unlikely for it to happen right now.
Replies from: TheOtherDave, Bugmaster, Prismattic↑ comment by TheOtherDave · 2011-12-20T16:27:01.621Z · LW(p) · GW(p)
So... hm.
So if I'm parsing you correctly, you are assuming that if an upload of me is created, Upload_Dave necessarily differs from me in the following ways:
- it doesn't have a soul, and consequently is denied the possibility of heaven,
- it doesn't have a sense of smell, taste, hearing, sight, or touch,
- it doesn't have my hands, or perhaps hands at all, and
- it is easier to hack (that is, to modify without its consent) than my brain is.
Yes?
Yeah, I think if I believed all of that, I also wouldn't be particularly excited by the notion of uploading.
For my own part, though, those strike me as implausible beliefs.
I'm not exactly sure what your reasons for believing all of that are... they seem to come down to a combination of incredulity (roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties) and that they contradict your existing religious beliefs. Have I understood you?
I can see where, if I had more faith than I do in the idea that computer programs will always be more or less like they are now, and in the idea that what my rabbis taught me when I was a child was a reliable description of the world as it is, those beliefs about computer programs would seem more plausible.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T21:47:15.098Z · LW(p) · GW(p)
Mostly.
it doesn't have a soul, and consequently is denied the possibility of heaven
More like "it doesn't have a soul, therefore there's nothing to send to heaven".
(roughly speaking, no computer program in your experience has ever had those properties, so it feels ridiculous to assume that a computer program can ever have those properties)
I have a great deal of faith in the ability of computer programs to surprise me by using ever-more-sophisticated algorithms for parsing data. I don't expect them to feel. If I asked a philosopher what it's like for a bat to be a bat, they'd understand the allusion I'd like to make here, but that's awfully jargony. Here's an explanation of the concept I'm trying to convey.
I don't know whether that's something you've overlooked or whether I'm asking a wrong question.
Replies from: TheOtherDave, lessdazed↑ comment by TheOtherDave · 2011-12-20T22:07:22.953Z · LW(p) · GW(p)
If it helps, I've read Nagel, and would have gotten the bat allusion. (Dan Dennett does a very entertaining riff on "What is it like to bat a bee?" in response.)
But I consider the physics of qualia to be kind of irrelevant to the conversation we're having.
I mean, I'm willing to concede that in order for a computer program to be a person, it must be able to feel things in italics, and I'm happy to posit that there's some kind of constraint -- label it X for now -- such that only X-possessing systems are capable of feeling things in italics.
Now, maybe the physics underlying X is such that only systems made of protoplasm can possess X. This seems an utterly unjustified speculation to me, and no more plausible than speculating that only systems weighing less than a thousand pounds can possess X, or only systems born from wombs can possess X, or any number of similar speculations. But, OK, sure, it's possible.
So what? If it turns out that a computer has to be made of protoplasm in order to possess X, then it follows that for an upload to be able to feel things in italics, it has to be an upload running on a computer made of protoplasm. OK, that's fine. It's just an engineering constraint. It strikes me as a profoundly unlikely one, as I say, but even if it turns out to be true, it doesn't matter very much.
That's why I started out by asking you what you thought a computer was. IF people have to be made of protoplasm, AND IF computers can't be made of protoplasm, THEN people can't run on computers... but not only do I reject the first premise, I reject the second one as well.
Replies from: xxd, AspiringKnitter↑ comment by xxd · 2011-12-20T23:38:58.350Z · LW(p) · GW(p)
"IF people have to be made of protoplasm, AND IF computers can't be made of protoplasm, THEN people can't run on computers... but not only do I reject the first premise, I reject the second one as well."
Does it matter?
What if we can run a bunch of algorithms on a computer that pass the Turing test but are provably non-sentient? When it comes down to it we're looking for something that can solve generalized problems willingly and won't deliberately try to kill us.
It's like the argument against catgirls. Some people would prefer to have human girls/boys but trust me sometimes a catgirl/boy would be better.
Replies from: dlthomas↑ comment by dlthomas · 2011-12-20T23:44:32.591Z · LW(p) · GW(p)
It matters for two things:
1) If we are trying to upload (the context here, if you follow the thread up a bit), then we want the emulations to be alive in whatever senses it is important to us that we are presently alive.
2) If we are building a really powerful optimization process, we want it not to be alive in whatever senses make alive things morally relevant, or we have to consider its desires as well.
Replies from: xxd↑ comment by xxd · 2011-12-20T23:53:05.021Z · LW(p) · GW(p)
OK, fair enough, if you're looking for uploads. Personally I don't care, as I take the position that the upload concept isn't really me; it's a simulated me, in the same way that a "spirit version of me", i.e. a soul, isn't really me either.
Please correct my logic if I'm wrong here: in order to take the position that an upload is provably you, the only feasible way to do the test is to have other people verify that it's you. The upload saying it's you doesn't cut it, and neither does the upload just acting exactly like you. In other words, the test for whether an upload is really you doesn't even require it to be really you, just to simulate you exactly. Which means that the upload doesn't need to be sentient.
Please fill in the blanks in my understanding so I can get where you're coming from (this is a request for information not sarcastic).
Replies from: TheOtherDave, dlthomas↑ comment by TheOtherDave · 2011-12-21T00:40:40.844Z · LW(p) · GW(p)
I endorse dlthomas's answer in the grandparent; we were talking about uploads.
I have no idea what to do with word "provably" here. It's not clear to me that I'm provably me right now, or that I'll be provably me when I wake up tomorrow morning. I don't know how I would go about proving that I was me, as opposed to being someone else who used my body and acted just like me. I'm not sure the question even makes any sense.
To say that other people's judgments on the matter define the issue is clearly insufficient. If you put X in a dark cave with no observers for a year, then if X is me then I've experienced a year of isolation and if X isn't me then I haven't experienced it and if X isn't anyone then no one has experienced it. The difference between those scenarios does not depend on external observers; if you put me in a dark cave for a year with no observers, I have spent a year in a dark cave.
Mostly, I think that identity is a conceptual node that we attach to certain kinds of complex systems, because our brains are wired that way, but we can in principle decompose identity to component parts -- shared memory, continuity of experience, various sorts of physical similarity, etc. -- without anything left over. If a system has all those component parts -- it remembers what I remember, it remembers being me, it looks and acts like me, etc. -- then our brains will attach that conceptual node to that system, and we'll agree that that system is me, and that's all there is to say about that.
And if a system shares some but not all of those component parts, we may not agree whether that system is me, or we may not be sure if that system is me, or we may decide that it's mostly me.
Personal identity is similar in this sense to national identity. We all agree that a child born to Spaniards and raised in Spain is Spanish, but is the child of a Spaniard and an Italian who was born in Barcelona and raised in Venice Spanish, or Italian, or neither, or both? There's no way to study the child to answer that question, because the child's national identity was never an attribute of the child in the first place.
↑ comment by dlthomas · 2011-12-21T00:04:30.105Z · LW(p) · GW(p)
While I do take the position that there is unlikely to be any theoretical personhood-related reason uploads would be impossible, I certainly don't take the position that verifying an upload is a solved problem, or even that it's necessarily ever going to be feasible.
That said, consider the following hypothetical process:
- You are hooked up to sensors monitoring all of your sensory input.
- We scan you thoroughly.
- You walk around for a year, interacting with the world normally, and we log data.
- We scan you thoroughly.
- We run your first scan through our simulation software, feeding it the year's worth of data, and find everything matches up exactly (to some ridiculous tolerance) with your second scan.
Do you expect that there is a way in which you are sentient, in which your simulation could not be if you plugged it into (say) a robot body or virtual environment that would feed it new sensory data?
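For concreteness, here's a toy sketch of the comparison step in that process. The state representation, the step function, and the exact tolerance are all illustrative stand-ins, not a claim about how real scans or emulations would work:

    # Replay the logged year of sensory input through the first scan and
    # check the result against the second scan, within some tolerance.
    TOLERANCE = 1e-9  # stand-in for the "ridiculous tolerance" above

    def simulate_step(state, sensory_input):
        # Hypothetical emulation step: advance the state by one tick.
        return [s + 0.001 * x for s, x in zip(state, sensory_input)]

    def verify_upload(first_scan, sensory_log, second_scan):
        state = list(first_scan)
        for sensory_input in sensory_log:
            state = simulate_step(state, sensory_input)
        # Everything must match up exactly, to within tolerance.
        return all(abs(a - b) <= TOLERANCE
                   for a, b in zip(state, second_scan))

If verify_upload(scan_at_start, year_of_logs, scan_at_end) comes back True, the simulation software has matched you tick for tick over the whole year.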
Replies from: xxd↑ comment by xxd · 2011-12-21T00:12:25.847Z · LW(p) · GW(p)
That is a very good response and my answer to you is:
- I don't know AND
- To me it doesn't matter, as I'm not for any kind of destructive scanning upload ever, though I may consider slow augmentation as parts wear out.
But I'm not saying you're wrong. I just don't know and I don't think it's knowable.
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it's sentient or not)? Definitely.
Replies from: dlthomas↑ comment by dlthomas · 2011-12-21T00:18:41.329Z · LW(p) · GW(p)
That said, would I consent to being non-destructively scanned in order to be able to converse with a fast-running simulation of myself (regardless of whether it's sentient or not)? Definitely.
What about being non-destructively scanned so you can converse with something that may be a fast running simulation of yourself, or may be something using a fast-running simulation of you to determine what to say to manipulate you?
Replies from: xxd↑ comment by AspiringKnitter · 2011-12-20T23:24:56.698Z · LW(p) · GW(p)
You make sense. I'm starting to think a computer could potentially be sentient. Isn't a computer a machine, generally made of circuits, that runs programs somebody put on it in a constructed non-context-dependent language?
Replies from: Bugmaster, TheOtherDave, Laoch↑ comment by Bugmaster · 2011-12-21T00:43:58.273Z · LW(p) · GW(p)
Isn't a computer a machine, generally made of circuits, that runs programs somebody put on it in a constructed non-context-dependent language?
I personally believe that humans are likewise machines, generally made of meat, that run "programs". I put the word "programs" in scare-quotes because our programs are very different in structure from computer programs, though the basic concept is the same.
What we have in common with computers, though, is that our programs are self-modifying. We can learn, and thus change our own code. Thus, I see no categorical difference between humans and computers, though obviously our current computers are far inferior to humans in many (though not all) areas.
↑ comment by TheOtherDave · 2011-12-20T23:37:58.499Z · LW(p) · GW(p)
That's a perfectly workable model of a computer for our purposes, though if we were really going to get into this we'd have to further explore what a circuit is.
Personally, I've pretty much given up on the word "sentient"... in my experience it connotes far more than it denotes, such that discussions that involve it end up quickly reaching the point where nobody quite knows what they're talking about, or what talking about it entails. I have the same problem with "qualia" and "soul." (Then again, I talk comfortably about something being or not being a person, which is just as problematic, so it's not like I'm consistent about this.)
But that aside, yeah, if any physical thing can be sentient, then I don't see any principled reason why a computer can't be. And if I can be implemented in a physical thing at all, then I don't see any principled reason why I can't be implemented in a computer.
Also (getting back to an earlier concern you expressed), if I can be implemented in a physical thing, I don't see any principled reason why I can't be implemented in two different physical things at the same time.
Replies from: xxd↑ comment by xxd · 2011-12-20T23:40:57.407Z · LW(p) · GW(p)
I agree, Dave. Also I'll go further: for my own personal purposes I care not a whit whether a powerful piece of software that passes the Turing test, can do cool stuff, and won't kill me is basically an automaton.
Replies from: Bugmaster, APMason↑ comment by Bugmaster · 2011-12-21T00:45:28.228Z · LW(p) · GW(p)
I would go one step further, and claim that if a piece of software passes the general Turing test -- i.e., if it acts exactly like a human would act in its place -- then it is not an automaton.
Replies from: dlthomas, xxd↑ comment by xxd · 2011-12-21T00:49:18.860Z · LW(p) · GW(p)
And I'd say that taking that step is a point of philosophy.
Consider this: I have a Dodge Durango sitting in my garage.
If I sell that Dodge Durango and buy an identical one (it passes all the same tests in exactly the same way), then is it the same Dodge Durango? I'd say no, but the point is irrelevant.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-21T00:53:14.020Z · LW(p) · GW(p)
I'd say no, but the point is irrelevant.
Why not, and why is it irrelevant? For example, if your car gets stolen, and later returned to you, wouldn't you want to know whether you actually got your own car back?
I have to admit, your response kind of mystified me, so now I'm intrigued.
Replies from: xxd↑ comment by xxd · 2011-12-21T00:59:31.080Z · LW(p) · GW(p)
Very good questions.
No, I wouldn't particularly care if it was my car that was returned to me, because it gives me utility and it's just a thing.
I'd care if my wife was kidnapped and some simulacrum was given back in her stead, but I doubt I would be able to tell if it was such an accurate copy. If I knew the fake wife was fake I'd probably be creeped out, but if I didn't know, I'd just be so glad to have my "wife" back.
In the case of the simulated porn actress, I wouldn't really care if she was real because her utility for me would be similar to watching a movie. Once done with the simulation she would be shut off.
That said, the struggle would be with whether or not she (the catgirl version of the porn actress) was truly sentient. If she was truly sentient, then I'd be evil in the first place, because I'd be coercing her to do evil stuff in my personal simulation. But I think there's no viable way to determine sentience other than "if it walks like a duck and talks like a duck", so we're back to the beginning again, and THUS I say "it's irrelevant".
Replies from: Oligopsony, APMason, Bugmaster↑ comment by Oligopsony · 2011-12-21T01:36:36.059Z · LW(p) · GW(p)
I'd care if my wife was kidnapped and some simulacrum was given back in her stead, but I doubt I would be able to tell if it was such an accurate copy. If I knew the fake wife was fake I'd probably be creeped out, but if I didn't know, I'd just be so glad to have my "wife" back.
My primary concern in a situation like this is that she'd be kidnapped and presumably extremely not happy about that.
If my partner were vaporized in her sleep and then replaced with a perfect simulacrum, well, that's just teleporting (with less savings on airfare.) If it were a known fact that sometimes people died and were replaced by cylons, finding out someone had been cyloned recently, or that I had, wouldn't particularly bother me. (I suppose this sounds bold, but I'm almost entirely certain that after teleporters or perfect destructive uploads or whatever were introduced, interaction with early adopters people had known before their "deaths" would rapidly swing intuitions towards personal identity being preserved. I have no idea how human psychology would react to there being multiple copies of people.)
Replies from: TheOtherDave, xxd↑ comment by TheOtherDave · 2011-12-21T01:52:12.480Z · LW(p) · GW(p)
I expect we'd adapt pretty quickly to the idea that there exists a new possible degree of relationship between people, namely the relationship between two people who used to be the same person.
The closest analogy I can think of is if I lived in a culture where families only had one child each, and was suddenly introduced to brothers. It would be strange to find two people who shared parents, a childhood environment, and so forth -- attributes I was accustomed to treating as uniquely associated with a person, but it turned out I was wrong to do so. It would be disconcerting, but I expect I'd get used to it.
Replies from: army1987↑ comment by A1987dM (army1987) · 2011-12-21T22:50:40.167Z · LW(p) · GW(p)
I expect we'd adapt pretty quickly to the idea that there exists a new possible degree of relationship between people, namely the relationship between two people who used to be the same person.
If you count a fertilized egg as a person, then two identical twins did use to be the same person. :-)
Replies from: Alicorn↑ comment by xxd · 2011-12-21T02:12:49.694Z · LW(p) · GW(p)
While I don't doubt that many people would be OK with this I wouldn't because of the lack of certainty and provability.
My difficulty with this concept goes further. Since it's not verifiable that the copy is you, even though it presents the same outputs to any verifiable test, what is to prevent an AI from getting around the restriction on not destroying humanity?
"Oh but the copies running in a simulation are the same thing as the originals really", protests the AI after all the humans have been destructively scanned and copied into a simulation...
Replies from: APMason↑ comment by APMason · 2011-12-21T02:19:00.902Z · LW(p) · GW(p)
That shouldn't happen as long as the AI is friendly - it doesn't want to destroy people.
Replies from: xxd↑ comment by xxd · 2011-12-21T02:22:31.865Z · LW(p) · GW(p)
But is it destroying people if the simulations are the same as the original?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T02:39:31.864Z · LW(p) · GW(p)
There are a few interesting possibilities here:
1) The AI and I agree on what constitutes a person. In that case, the AI doesn't destroy anything I consider a person.
2) The AI considers X a person, and I don't. In that case, I'm OK with deleting X, but the AI isn't.
3) I consider X a person, and the AI doesn't. In that case, the AI is OK with deleting X, but I'm not.
You're concerned about scenario #3, but not scenario #2. Yes?
But in scenario #2, if the AI had control, a person's existence would be preserved, which is the goal you seem to want to achieve.
This only makes sense to me if we assume that I am always better at detecting people than the AI is.
But why would we assume that? It seems implausible to me.
↑ comment by xxd · 2011-12-21T02:57:28.468Z · LW(p) · GW(p)
Ha Ha. You're right. Thanks for reflecting that back to me.
Yes if you break apart my argument I'm saying exactly that though I hadn't broken it down to that extent before.
The last part I disagree with, which is that I assume that I'm always better at detecting people than the AI is. Clearly I'm not, but in my own personal case I don't trust it if it disagrees with me, because of simple risk management. If it's wrong and it kills me, then resurrects a copy, then I have experienced total loss. If it's right, then I'm still alive.
But I don't know the answer. And thus I would have to say that it would be necessary to only allow scenario #1 if I were designing the AI because though I could be wrong I'd prefer not to take the risk of personal destruction.
That said, if someone chose to destructively scan themselves to upload, that would be their personal choice.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T03:25:57.966Z · LW(p) · GW(p)
Well, I certainly agree that all else being equal we ought not kill X if there's a doubt about whether X is a person or not, and I support building AIs in such a way that they also agreed with that.
But if for whatever reason I'm in a scenario where only one of X and Y can survive, and I believe X is a person and Y is not, and the AI says that Y is a person and X is not, and I'm the one who has to decide which of X and Y to destroy, then I need to decide whether I trust my own judgment more than the AI's judgment, or less.
And obviously that's going to depend on the particulars of X, Y, me, and the AI... but it's certainly possible that I might in that situation update my beliefs and destroy X instead of Y.
Replies from: xxd, ochopelotas↑ comment by xxd · 2011-12-21T18:04:33.306Z · LW(p) · GW(p)
I think we're on the same page from a logical perspective.
My guess is the perspective taken is that of physical science vs compsci.
My guess is a compsci perspective would tend to view the two individuals as being two instances of the class of individual X. The two class instances are logically equivalent except for position.
The physical science perspective is that there are two bunches of matter near each other with the only thing differing being the position. Basically the same scenario as two electrons with the same spin state, momentum, energy etc but different positions. There's no way to distinguish the two of them from physical properties but there are two of them not one.
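To put the compsci half in concrete terms, here's a tiny Python sketch; the Person class is a made-up stand-in for "individual X". Two instances can be equal in every intrinsic property while still being two distinct objects, which is the physics point about the two electrons:

    # Value equality ("logically equivalent") versus object identity
    # ("two separate bunches of matter").
    class Person:
        def __init__(self, memories, position):
            self.memories = memories
            self.position = position

        def __eq__(self, other):
            # "Logically equivalent except for position": compare the
            # intrinsic properties (here just memories), ignore position.
            return self.memories == other.memories

    original = Person(memories=["childhood", "this argument"], position=(0, 0))
    copy = Person(memories=["childhood", "this argument"], position=(5, 0))

    print(original == copy)   # True  -- indistinguishable by their properties
    print(original is copy)   # False -- nonetheless two distinct objects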
Regardless, if you believe they are the same person then you go first through the teleportation device... ;->
Replies from: None, dlthomas, TheOtherDave↑ comment by [deleted] · 2011-12-21T18:44:32.695Z · LW(p) · GW(p)
In Identity Isn't In Specific Atoms, Eliezer argued that even from what you called the "physical science perspective," the two electrons are ontologically the same entity. What do you make of his argument?
Replies from: xxd↑ comment by xxd · 2011-12-21T19:12:50.410Z · LW(p) · GW(p)
What do I make of his argument? Well I'm not a PHD in Physics though I do have a Bachelors in Physics/Math so my position would be the following:
Quantum physics doesn't scale up to macro. While swapping the two helium atoms in two billiard balls results in you not being able to tell which helium atom was which, the two billiard balls certainly can be distinguished from each other. Even "teleporting" one from one place to another will not result in an identical copy since the quantum states will all have changed just by dint of having been read by the scanning device. Each time you measure, quantum state changes so the reason why you cannot distinguish two identical copies from each other is not because they are identical it's just that you cannot even distinguish the original from itself because the states change each time you measure them.
A macro scale object composed of multiple atoms A, B and C could not distinguish the atoms from another macro scale object composed of multiple atoms of type A, B and C in exactly the same configuration.
That said, we're talking about a single object here. As soon as you go to comparing more than one single object it's not the same: there is position, momentum et cetera of the macro scale objects to distinguish them even though they are the same type of object.
I strongly believe that the disagreement around this topic comes from looking at things as classes from a comp sci perspective.
From a physics perspective it makes sense to say two objects of the same type are different even though the properties are the same except for minor differences such as position and momentum.
From a compsci perspective, talking about the position and momentum of instances of classes doesn't make any sense. The two instances of the classes ARE the same because they are logically the same.
Anyways, I've segued here: take the two putative electrons in a previous post above: there is no way to distinguish between the two of them except by position, but they ARE two separate electrons; they're not a single electron. If one of them is part of e.g. my brain and then it's swapped out for the other, then there's no longer any way to tell which is which. It's impossible. And my guess is this is what's causing the confusion. From a point of view of usefulness, neither of the two objects is different from the other. But they are separate from each other, and destroying one doesn't mean that there are still two of them; there is now only one, and one has been destroyed.
Dave seems to take the position that that is fine: the position and number of copies are irrelevant for him, because it's the information content that's important.
For me, sure if my information content lived on that would be better than nothing but it wouldn't be me.
↑ comment by dlthomas · 2011-12-21T18:14:57.127Z · LW(p) · GW(p)
I wouldn't take a destructive upload if I didn't know that I would survive it (in the senses I care about), in roughly the same sense that I wouldn't cross the street if I didn't know I wasn't going to be killed by a passing car. In both cases, I require reasonable assurance. In neither case does it have to be absolute.
Replies from: xxd↑ comment by xxd · 2011-12-21T19:17:36.390Z · LW(p) · GW(p)
Exactly. Reasonable assurance is good enough; absolute isn't necessary. I'm not willing to be destructively scanned even if a copy of me thinks it's me, looks like me, and acts like me.
That said, I'm willing to accept the other stance that others take: they are reasonably convinced that destructive scanning just means they will appear somewhere else a fraction of a second later (or however long it takes). Just don't ask me to do it. And expect a bullet if you try to force me!
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T19:27:28.745Z · LW(p) · GW(p)
Well, sure. But if we create an economy around you where people who insist on carrying a sack of atoms around with them wherever they go are increasingly a minority... for example, if we stop maintaining roads for you to drive a car on, stop flying airplanes to carry your atoms from place to place, etc. ... what then?
Replies from: xxd↑ comment by xxd · 2011-12-21T19:36:23.949Z · LW(p) · GW(p)
This is a different point entirely. Sure, it's more efficient to just work with instances of similar objects, and I've already said elsewhere that I'm OK with that when it's objects.
And if everyone else is OK with being destructively scanned then I guess I'll have to eke out an existence as a savage. The economy can have my atoms after I'm dead.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T20:02:11.018Z · LW(p) · GW(p)
Sorry I wasn't clear -- the sack of atoms I had in mind was the one comprising your body, not other objects.
Also, my point is that it's not just a case of live and let live. Presumably, if the rest of us giving up the habit of carrying our bodies wherever we go means you are reduced to eking out your existence as a savage, then you will be prepared to devote quite a lot of resources to preventing us from giving up that habit... yes?
Replies from: xxd↑ comment by xxd · 2011-12-21T20:57:17.335Z · LW(p) · GW(p)
Yes that's right.
I will not consent to being involuntarily destructively scanned, and yes, I will devote all of my resources to preventing myself from being involuntarily destructively scanned.
That said, if you or anyone else wants to do it to themselves voluntarily, it's none of my business.
If what you're really asking, however, is whether I will attempt to intervene if I notice a group of individuals or an organization forcing destructive scanning on individuals, I suspect that I might, but we're not there yet.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T21:17:28.120Z · LW(p) · GW(p)
I understand that you won't consent to being destructively scanned, and that you might intervene to prevent others from being destructively scanned without their consent. That isn't what I asked.
I encourage you to re-read my question. If, after doing so, you still think your reply answers it, then I think we do best to leave it at that.
Replies from: xxd↑ comment by xxd · 2011-12-21T21:55:10.393Z · LW(p) · GW(p)
I thought I had answered but perhaps I answered what I read into it.
If you are asking "will I prevent you from gradually moving everything to digital perhaps including yourselves" then the answer is no.
I just wanted to clarify that we were talking about with consent vs without consent.
↑ comment by TheOtherDave · 2011-12-21T19:00:36.390Z · LW(p) · GW(p)
I agree completely that there are two bunches of matter in this scenario. There are also (from what you're labeling the compsci perspective) two data structures. This is true.
My question is, why should I care? What value does the one on the left have, that the one on the right doesn't have, such that having them both is more valuable than having just one of them? Why is destroying one of them a bad thing? What you seem to be saying is that they are valuable because they are different people... but what makes that a source of value?
For example: to my way of thinking, what's valuable about a person is the data associated with them, and the patterns of interaction between that data and its surroundings. Therefore, I conclude that if I have that data and those interactions then I have preserved what's valuable about the person. There are other things associated with them -- for example, a particular set of atoms -- but from my perspective that's pretty valueless. If I lose the atoms while preserving the data, I don't care. I can always find more atoms; I can always construct a new body. But if I lose the data, that's the ball game -- I can't reconstruct it.
In the same sense, what I care about in a book is the data, not the individual pieces of paper. If I shred the paper while digitizing the book, I don't care... I've kept what's valuable. If I keep the paper while allowing the patterns of ink on the pages to be randomized, I do care... I've lost what's valuable.
So when I look at a system to determine how many people are present in that system, what I'm counting is unique patterns of data, not pounds of biomass, or digestive systems, or bodies. All of those things are certainly present, but they aren't what's valuable to me. And if the system comprises two bodies, or five, or fifty, or a million, and they all embody precisely the same data, then I can preserve what's valuable about them with one copy of that data... I don't need to lug a million bundles of atoms around.
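(A minimal sketch of this counting rule, assuming Python and hypothetical names: people are tallied by unique data pattern, not by number of bodies.)

```python
import hashlib
import json

def count_people(mind_states):
    """Count unique data patterns, not bodies.

    `mind_states` is a list of serializable snapshots; bodies that
    embody precisely the same data count as a single person.
    """
    fingerprints = {
        hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()
        for state in mind_states
    }
    return len(fingerprints)

# A million identical copies still count as one person on this view:
snapshot = {"memories": ["..."], "personality": "..."}
print(count_people([snapshot] * 1_000_000))  # 1
```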
So, as I say, that's me... that's what I value, and consequently what I think is important to preserve. You think it's important to preserve the individual bundles, so I assume you value something different.
What do you value?
Replies from: dlthomas, xxd↑ comment by dlthomas · 2011-12-21T19:06:59.791Z · LW(p) · GW(p)
More particularly, you regularly change out your atoms.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T19:11:09.981Z · LW(p) · GW(p)
That turns out to be true, but I suspect everything I say above would be just as true if I kept the same set of atoms in perpetuity.
Replies from: dlthomas↑ comment by xxd · 2011-12-21T19:31:24.572Z · LW(p) · GW(p)
I understand that you value the information content and I'm OK with your position.
Let's do another thought experiment, then. Say we're some unknown X number of years in the future, and some foreign entity/government/whatever decided it wanted the territory of the United States (could be any country, just using the USA as an example) but didn't want the people. It did, however, value the ideas, opinions, memories, etc. of the American people. If said entity then destructively scanned the landmass but painstakingly copied all of the ideas, opinions, memories, etc. into some kind of data store which it could access at its leisure later, would that be the same thing as the original living people?
I'd argue that from a comp-sci perspective what you have just done is build a static class which describes the people, their ideas, memories, etc., but this is not the original people; it's just a model of them.
Now don't get me wrong, a model like that would be very valuable, it just wouldn't be the original.
And yes, of course some people value originals; otherwise you wouldn't have to pay millions of dollars for postage stamps printed in the 1800s, even though I'd guess that scanning such a stamp and printing out a copy of it should, to all intents and purposes, be the same.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T19:56:41.881Z · LW(p) · GW(p)
In the thought experiment you describe, they've preserved the data and not the patterns of interaction (that is, they've replaced a dynamic system with a static snapshot of that system), and something of value is therefore missing, although they have preserved the ability to restore the missing component at their will.
If they execute the model and allow the resulting patterns of interaction to evolve in an artificial environment they control, then yes, that would be just as valuable to me as taking the original living people and putting them into an artificial environment they control.
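(A toy sketch of the snapshot-versus-execution distinction, in Python with hypothetical names: the stored data is static, and only running it restores the patterns of interaction.)

```python
class Mind:
    """Toy dynamic system: stored state plus an update rule."""
    def __init__(self, state):
        self.state = state

    def step(self, observation):
        # Patterns of interaction: state evolves in response to input.
        self.state = self.state + [observation]

# Static snapshot: the data is preserved, but nothing interacts.
snapshot = ["memory-1", "memory-2"]

# Executing the model restores the dynamic component:
restored = Mind(list(snapshot))
restored.step("new experience in the controlled environment")
print(restored.state)  # ['memory-1', 'memory-2', 'new experience...']
```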
I understand that there's something else in the original that you value, which I don't... or at least, which I haven't thought about. I'm trying to understand what it is. Is it the atoms? Is it the uninterrupted continuous existence (e.g., if you were displaced forward in time by two seconds, such that for a two-second period you didn't exist, would that be better or worse or the same as destroying you and creating an identical copy two seconds later?) Is it something else?
Similarly, if you valued a postage stamp printed in the 1800s more than the result of destructively scanning such a stamp and creating an atom-by-atom replica of it, I would want to understand what about the original stamp you valued, such that the value was lost in that process.
Thus far, the only answer I can infer from your responses is that you value the original... or perhaps being the original, if that's different... and the value of that doesn't derive from anything; it's just a primitive. Is that it?
If so, a thought experiment for you in return: if I convince you that last night I scanned xxd and created an identical duplicate, and that you are that duplicate, do you consequently become convinced that your existence is less valuable than you'd previously thought?
Replies from: xxd↑ comment by xxd · 2011-12-22T16:25:00.941Z · LW(p) · GW(p)
I guess from your perspective you could say that the value of being the original doesn't derive from anything and it's just a primitive, because the macro information is the same except for position (though the quantum states are all different, even at the point of copy). But yes, I value the original more than the copy, because I consider the original to be me and the others to be just copies, even if they would legally, and in fact, be sentient beings in their own right.
Yes, if I woke up tomorrow and you could convince me I was just a copy: this is something I have already modeled/daydreamed about, and my answer would be that I'd be disappointed that I wasn't the original, but glad that I had existence.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-22T17:21:01.061Z · LW(p) · GW(p)
OK.
↑ comment by ochopelotas · 2011-12-21T17:57:01.969Z · LW(p) · GW(p)
Hmm
↑ comment by APMason · 2011-12-21T01:08:08.364Z · LW(p) · GW(p)
I find "if it walks like a duck and talks like a duck" to be a really good way of identifying ducks.
Replies from: xxd↑ comment by xxd · 2011-12-21T02:15:13.827Z · LW(p) · GW(p)
Agreed. It's the only way we have of verifying that it's a duck.
But is the destructively scanned duck the original duck? It appears to be the same to all intents and purposes, even though you can see the mulch that used to be the body of the original lying there beside the new copy.
Replies from: APMason↑ comment by APMason · 2011-12-21T02:28:17.164Z · LW(p) · GW(p)
I'm not sure that duck identity works like personal identity. If I destroy a rock but make an exact copy of it ten feet to the east, whether or not the two rocks share identity just depends on how you want to define identity - the rock doesn't care, and I'm not convinced a duck would care either. Personal identity, however, is a whole other thing - there's this bunch of stuff we care about to do with having the right memories and the correct personality and utility function etc., and if these things aren't right it's not the same person. If you make a perfect copy of a person and destroy the original, then it's the same person. You've just teleported them - even if you can see the left over dust from the destruction. Being made of the "same" atoms, after all, has nothing to do with identity - atoms don't have individual identities.
Replies from: xxd↑ comment by xxd · 2011-12-21T02:35:07.081Z · LW(p) · GW(p)
That's a point of philosophical disagreement between us. Here's why:
Take an individual.
Then take a cell from that individual. Grow it in a nutrient bath. Force it to divide. Rinse, wash, repeat.
You create a clone of that person.
Now is that clone the same as the original? No it is not. It is a copy. Or in a natural version of this, a twin.
Now let's say technology exists to transfer memories and mind states.
After you create the clone-that-is-not-you, you then put your memories into it.
If we keep the original alive the clone is still not you. How does killing the original QUICKLY make the clone you?
Replies from: TheOtherDave, APMason, None↑ comment by TheOtherDave · 2011-12-21T02:49:42.004Z · LW(p) · GW(p)
(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label "me"? What conceivable difference does it make whether we label both of those people "me"?
If there is some X that differs between those people, such that the label "me" applies to one value of X but not the other value, then talking about which one is "me" makes sense. We might not be able to detect the difference, but there is a difference; if we improved the quality of our X-detectors we would be able to detect it.
But if there is no such X, then for as long as we continue talking about which of those people is "me," we are not talking about anything in the world. Under those circumstances it's best to set aside the question of which is "me."
Replies from: xxd↑ comment by xxd · 2011-12-21T02:59:52.766Z · LW(p) · GW(p)
"(shrug) After the process you describe, there exist two people in identical bodies with identical memories. What conceivable difference does it make which of those people we label "me"? What conceivable difference does it make whether we label both of those people "me""
Because we already have a legal precedent: twins. Though their memories are very limited, they are legally different people. My position is that this is rightly so.
Replies from: Nornagest, TheOtherDave↑ comment by Nornagest · 2011-12-21T03:16:12.722Z · LW(p) · GW(p)
Identical twins, even at birth, are different people: they're genetically identical and shared a very close prenatal environment, but the actual fork happened sometime during the zygote stage of development, when neither twin had a nervous system let alone a mind-state. But I'm not sure why you're bringing this up in the first place: legalities don't help us settle philosophical questions. At best they point to a formalization of the folk solution.
As best I can tell, you're trying to suggest that individual personhood is bound to a particular physical instance of a human being (albeit without actually saying so). Fair enough, but I'm not sure I know of any evidence for that proposition other than vague and usually implicitly dualist intuitions. I'm not a specialist in this area, though. What's your reasoning?
Replies from: xxd↑ comment by xxd · 2011-12-21T03:19:50.558Z · LW(p) · GW(p)
Risk avoidance. I'm uncomfortable with taking the position that the copy created when you destroy the original is the original, simply because if it isn't, then the original is now dead.
Replies from: Nornagest↑ comment by Nornagest · 2011-12-21T03:27:37.830Z · LW(p) · GW(p)
Yes, but how do you conclude that a risk exists? Two philosophical positions don't mean fifty-fifty chances that one is correct; intuition is literally the only evidence for one of the alternatives here to the best of my knowledge, and we already know that human intuitions can go badly off the rails when confronted with problems related to anthropomorphism.
Granted, we can't yet trace down human thoughts and motivations to the neuron level, but we'll certainly be able to by the time we're able to destructively scan people into simulations; if there's any secret sauce involved, we'll by then know it's there if not exactly what it is. If dualism turns out to win by then I'll gladly admit I was wrong; but if any evidence hasn't shown up by that time, it sounds an awful lot like all there is to fall back on is the failure mode in "But There's Still A Chance, Right?".
Replies from: xxd↑ comment by xxd · 2011-12-21T03:34:21.853Z · LW(p) · GW(p)
Here's why I conclude a risk exists: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
Replies from: Nornagest↑ comment by Nornagest · 2011-12-21T03:40:22.826Z · LW(p) · GW(p)
I read that earlier, and it doesn't answer the question. If you believe that the second copy in your scenario is different from the first copy in some deep existential sense at the time of division (equivalently, that personhood corresponds to something other than unique brain state), you've already assumed a conclusion to all questions along these lines -- and in fact gone past all questions of risk of death and into certainty.
But you haven't provided any reasoning for that belief: you've just outlined the consequences of it from several different angles.
↑ comment by TheOtherDave · 2011-12-21T03:31:44.933Z · LW(p) · GW(p)
Yes, we have two people after this process has completed... I said that in the first place. What follows from that?
EDIT: Reading your other comments, I think I now understand what you're getting at.
No, if we're talking about only the instant of duplication and not any other instant, then I would say that in that instant we have one person in two locations.
But as soon as the person at those locations starts to accumulate independent experiences, then we have two people.
Similarly, if I create a static backup of a snapshot of myself, and create a dozen duplicates of that backup, I haven't created a dozen new people, and if I delete all of those duplicates I haven't destroyed any people.
The uniqueness of experience is important.
Replies from: xxd↑ comment by xxd · 2011-12-21T03:33:56.657Z · LW(p) · GW(p)
this follows: http://lesswrong.com/lw/b9/welcome_to_less_wrong/5huo?context=1#5huo
↑ comment by APMason · 2011-12-21T02:42:51.630Z · LW(p) · GW(p)
I agree that the clone is not me until you write my brain-states onto his brain (poor clone). At that point it is me - it has my brain states. Both the clone and the original are identical to the one who existed before my brain-states were copied - but they're not identical to each other, since they would start to have different experiences immediately. "Identical" here meaning "that same person as" - not exact isomorphic copies. It seems obvious to me that personal identity cannot be a matter of isomorphism, since I'm not an exact copy of myself from five seconds ago anyway. So the answer to the question is killing the original quickly doesn't make a difference to the identity of a clone, but if you allow the original to live a while, it becomes a unique person, and killing him is immoral. Tell me if I'm not being clear.
↑ comment by [deleted] · 2011-12-21T02:41:11.385Z · LW(p) · GW(p)
Regardless of what you believe you're avoiding the interesting question: if you overwrite your clone's memories and personality with your own, is that clone the same person as you? If not, what is still different?
I don't think anyone doubts that a clone of me without my memories is a different person.
↑ comment by Bugmaster · 2011-12-21T01:07:48.750Z · LW(p) · GW(p)
"No I'd not particularly care if it was my car that was returned to me because it gives me utility and it's just a thing."
Right, but presumably, you would be unhappy if your Ferrari got stolen and you got a Yaris back. In fact, you might be unhappy even if your Yaris got stolen and you got a Ferrari back -- wouldn't you be?
"I'd care if my wife was kidnapped and some simulacrum was given back in her stead but I doubt I would be able to tell if it was such an accurate copy and though if I knew the fake-wife was fake I'd probably be creeped out but if I didn't know I'd just be so glad to have my "wife" back."
If the copy was so perfect that you couldn't tell that it wasn't your wife, no matter what tests you ran, then would you believe anyone who told you that this being was in fact a copy, and not your wife at all?
"I think there's no viable way to determine sentience other than "if it walks like a duck and talks like a duck""
I agree (I think), but then I am tempted to conclude that creating fully sentient beings merely for my own amusement is, at best, ethically questionable.
Replies from: xxd↑ comment by xxd · 2011-12-21T02:18:57.594Z · LW(p) · GW(p)
Really good discussion.
Would I believe? I think the answer would depend on whether I could find the original or not. I would, however, find it disturbing to be told that the copy was a copy.
And yes, if the beings are fully sentient then I agree it's ethically questionable. But since we cannot tell, it comes down to the conscience of the individual, so I guess I'm evil then.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-21T02:38:17.693Z · LW(p) · GW(p)
"Would I believe? I think the answer would depend on whether I could find the original or not."
Finding the original, and determining that it is, in fact, the original, would constitute a test you could run to determine whether your current wife is a replica or not. Thus, under our scenario, finding the original would be impossible.
"I would, however, find it disturbing to be told that the copy was a copy."
Disturbing how? Wouldn't you automatically dismiss the person who tells you this as a crazy person? If not, why not?
"But since we cannot tell, it comes down to the conscience of the individual, so I guess I'm evil then."
Er... ok, that's good to know. *edges away slowly*
Personally, if I encountered some beings who appeared to be sentient, I'd find it very difficult to force them to do my bidding (through brute force, or by overwriting their minds, or by any other means). Sure, it's possible that they're not really sentient, but why risk it, when the probability of this being the case is so low?
Replies from: xxd↑ comment by xxd · 2011-12-21T02:51:08.950Z · LW(p) · GW(p)
You're right. It is impossible to determine that the current copy is the original or not.
"Disturbing how?" Yes I would dismiss the person as being a fruitbar of course. But if the technology existed to destructively scan an individual and copy them into a simulation or even reconstitute them from different atoms after being destructively scanned I'd be really uncomfortable with it. I personally would strenously object to ever teleporting myself or copying myself by this method into a simulation.
"edges away slowly" lol. Not any more evil than I believe it was Phil who explicitly stated he would kill others who would seek to prevent the building of an AI based on his utility function. I would fight to prevent the construction of an AI based on anything but the average utility function of humanity even if it excluded my own maximized utility function because I'm honest enough to say that maximizing my own personal utility function is not in the best interests of humanity. Even then I believe that producing an AI whose utility function is maximizing the best interests of humanity is incredibly difficult and thus have concluded that created an AI whose definition is just NOT(Unfriendly) and attempting to trade with it is probably far easier. Though I have not read Eliezer's CEV paper so I require further input.
"difficult to force them to do my bidding".
I don't know if you enjoy video games or not. Right now there's a 1st person shooter called Modern Warfare 3. It's pretty damn realistic though the non-player-characters [NPCs] - which you shoot and kill - are automatons and we know for sure that they're automatons. Now fast forward 20 years and we have NPCs which are so realistic that to all intents and purposes they pass the turing test. Is killing these NPCs in Modern Warfare 25 murder?
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-21T03:12:18.163Z · LW(p) · GW(p)
"But if the technology existed to destructively scan an individual and copy them into a simulation, or even reconstitute them from different atoms after being destructively scanned, I'd be really uncomfortable with it."
What if the reconstitution process was so flawless that there was no possible test your wife could run to determine whether or not you'd been teleported in this manner? Would you still be uncomfortable with the process? If so, why, and how does it differ from the reversed situation that we discussed previously?
"Not any more evil than Phil (I believe it was), who explicitly stated he would kill others who would seek to prevent the building of an AI based on his utility function."
Whoever that Phil guy is, I'm going to walk away briskly from him, as well. Walking backwards. So as not to break the line of sight.
"Right now there's a 1st person shooter called Modern Warfare 3. It's pretty damn realistic, though the non-player-characters [NPCs] - which you shoot and kill - are automatons, and we know for sure that they're automatons."
I haven't played that particular shooter, but I am reasonably certain that these NPCs wouldn't come anywhere close to passing the Turing Test. Not even the dog version of the Turing Test.
"Now fast forward 20 years and we have NPCs which are so realistic that to all intents and purposes they pass the Turing test. Is killing these NPCs in Modern Warfare 25 murder?"
I would say that, most likely, yes, it is murder.
Replies from: xxd↑ comment by xxd · 2011-12-21T03:28:48.019Z · LW(p) · GW(p)
I'm talking about exactly such a process: one so flawless you can't tell the difference. My concern comes from this: if you don't destroy the original, you now have two of them. One is the original (although you can't tell the difference between the copy and the original) and the other is the copy.
Now, where I'm uncomfortable is this: if we then kill the original by letting Freddie Krueger or Jason do his evil thing, then even though the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis (which I oppose) states that the original is still alive, and yet I can see the dead body there.
Simply speeding the process up, perhaps by vaporizing the original, doesn't make the outcome any different: the original is still dead.
It gets murkier if the original is destructively scanned and then rebuilt from the same atoms, but I'd still be reluctant to do this myself.
That said, I'd be willing to become a hybrid organism slowly by replacing parts of me, and although it wouldn't be the original me at the end of the total replacement process, it would still be the hybrid "me".
Interesting position on the killing of the NPCs. In terms of usefulness, that's why it doesn't matter to me whether a being is sentient or not in order to meet my definition of AI.
Replies from: TheOtherDave, APMason↑ comment by TheOtherDave · 2011-12-21T03:51:29.863Z · LW(p) · GW(p)
If I make a perfect copy of myself, then at the instant of duplication there exists one person at two locations. A moment later, the entities at those two locations start having non-identical experiences and entering different mental states, and thereby become different people (who aren't one another, although both of them are me). If prior to duplication I program a device to kill me once and only once, then I die, and I have killed myself, and I continue to live.
I agree that this is a somewhat confusing way of talking, because we're not used to life and death and identity working that way, but we have a long history of technological innovations changing the way we talk about things.
Replies from: xxd↑ comment by xxd · 2011-12-21T03:57:50.810Z · LW(p) · GW(p)
I understand your logic completely, but I don't buy it, because I do not agree that at the instant of the copying you have one person at two locations. They are two different people: one being the original and the other being an exact copy.
Replies from: Bugmaster, TheOtherDave↑ comment by TheOtherDave · 2011-12-21T04:08:20.408Z · LW(p) · GW(p)
OK, cool... I understand you, then.
Can you clarify what, if anything, is uniquely valuable about a person who is an exact copy of another person?
Or is this a case where we have two different people, neither of whom have any unique value?
↑ comment by APMason · 2011-12-21T03:42:42.744Z · LW(p) · GW(p)
"I'm talking about exactly such a process: one so flawless you can't tell the difference. My concern comes from this: if you don't destroy the original, you now have two of them. One is the original (although you can't tell the difference between the copy and the original) and the other is the copy.
Now, where I'm uncomfortable is this: if we then kill the original by letting Freddie Krueger or Jason do his evil thing, then even though the copy is still alive AND is/was indistinguishable from the original, the alternative hypothesis (which I oppose) states that the original is still alive, and yet I can see the dead body there."
Well, think of it this way: Copy A and Copy B are both Person X. Copy A is then executed. Person X is still alive because Copy B is Person X. Copy A is dead. Nothing inconsistent there - and you have a perfectly fine explanation for the presence of a dead body.
"It gets murkier if the original is destructively scanned and then rebuilt from the same atoms, but I'd still be reluctant to do this myself."
There is no such thing as "the same atoms" - atoms do not have individual identities.
"Interesting position on the killing of the NPCs. In terms of usefulness, that's why it doesn't matter to me whether a being is sentient or not in order to meet my definition of AI."
I don't think anyone was arguing that the AI needed to be conscious - intelligence and consciousness are orthogonal.
Replies from: xxd↑ comment by xxd · 2011-12-21T03:55:26.405Z · LW(p) · GW(p)
K here's where we disagree:
The original copy A and the new copy B are indeed instances of person X, but it's not a class with two instances as in CompSci 101. The class is the original A, and it's B that is the instance. They are different people.
In order to make them the same person you'd need to do something like this: put some kind of high-bandwidth wifi in their heads which synchronizes memories. Then they'd be part of the same hybrid entity. But at no point are they the same person.
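(A toy illustration of that synchronization idea, in Python with hypothetical names: two instances sharing one memory store behave as parts of a single hybrid entity, while an unsynchronized copy diverges immediately.)

```python
class Instance:
    def __init__(self, memory_store):
        # Each instance holds a reference to a memory store;
        # synchronized instances share the same one.
        self.memory_store = memory_store

    def experience(self, event):
        self.memory_store.append(event)

shared = []
a = Instance(shared)
b = Instance(shared)

a.experience("saw a duck")
print(b.memory_store)  # ['saw a duck'] -- b "remembers" a's experience

# Without the shared store, a copy is on its own timeline:
c = Instance(list(shared))
a.experience("heard a song")
print(c.memory_store)  # ['saw a duck'] -- c missed the new event
```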
Replies from: APMason↑ comment by APMason · 2011-12-21T04:04:39.527Z · LW(p) · GW(p)
"The original copy A and the new copy B are indeed instances of person X, but it's not a class with two instances as in CompSci 101. The class is the original A, and it's B that is the instance. They are different people."
I don't know why it matters which is the original - the only difference between the original and the copy is location. A moment after the copy happens, their mental states begin to diverge because they have different experiences, and they become different people to each other - but they're both still Person X.
Replies from: xxd↑ comment by xxd · 2011-12-21T05:18:01.017Z · LW(p) · GW(p)
It matters to you if you're the original and then you are killed.
You are right that they are both an instance of person X, but my argument is that this is not equivalent to them being the same person in fact, or even in law (whatever that means).
Also, when/if this comes about, I bet the law will side with me and define them as two different people. (And I'm not using this to fallaciously argue from authority, just pointing out that I strongly believe I am correct - though I'm willing to concede if there is ultimately some logical way to prove they are the same person.)
The reason is obvious. If they are the same person and one of them kills someone, are both of them guilty? If one fathers a child, is the child the offspring of both of them?
Because of this, I cannot agree beyond saying that the two different people are copies of person X. Even you are prepared to concede that they are different people from each other after their mental states begin to diverge, so I can't close the logical gap: why do you say they are the same person, and not copies of the same person, one being the original? You come partway to saying they are different people. Why not come all the way?
Replies from: APMason, TheOtherDave↑ comment by APMason · 2011-12-21T11:52:57.644Z · LW(p) · GW(p)
I agree with TheOtherDave. If you imagine that we scan someone's brain and then run a thousand simulations of them walking around the same environment, all having exactly the same experiences, it doesn't matter if we turn one of those simulations off. Nobody's died. What I'm saying is that the person is the mental states, and what it means for two people to be different people is that they have different mental states. I'm not really sure about the morality of punishing them both for the crimes of one of them, though. On one hand, the one who didn't do it isn't the same person as the one who did - they didn't actually experience committing the murder or whatever. On the other hand, they're also someone who would have done it in the same circumstances - so they're dangerous. I don't know.
Replies from: twanvl↑ comment by twanvl · 2011-12-21T12:22:58.889Z · LW(p) · GW(p)
"it doesn't matter if we turn one of those simulations off. Nobody's died."
You are decreasing the amount of that person that exists.
Suppose the many-worlds interpretation is true. Now I flip a fair quantum coin, and kill you if it comes up heads. Then in 50% of the worlds you still live, so by your reasoning, nobody has died. All that changes is the amplitude of your existence.
Replies from: APMason, TheOtherDave, ArisKatsaris↑ comment by APMason · 2011-12-21T13:17:40.538Z · LW(p) · GW(p)
"Suppose the many-worlds interpretation is true. Now I flip a fair quantum coin, and kill you if it comes up heads. Then in 50% of the worlds you still live, so by your reasoning, nobody has died. All that changes is the amplitude of your existence."
Well, maybe. But there is a whole universe full of people who will never speak to you again and are left to grieve over your body.
Replies from: twanvl↑ comment by TheOtherDave · 2011-12-21T16:49:57.443Z · LW(p) · GW(p)
"You are decreasing the amount of that person that exists."
Yes, there is a measure of that person's existence (number of perfect copies) which I'm reducing by deleting a perfect copy of that person. What I'm saying is precisely that I don't care, because that is not a measure of people I value.
Similarly, if I gain 10 pounds, there's a measure of my existence (mass) which I thereby increase. I don't care, because that's not a measure of people I value.
Neither of those statements is quite true, admittedly. For example, I care about gaining 10 pounds because of knock-on effects -- health, vanity, comfort, etc. I care about gaining an identical backup because of knock-on effects -- reduced risk of my total destruction, for example. Similarly, I care about gaining a million dollars, I care about gaining the ability to fly, there's all kinds of things that I care about. But I assume that your point here is not that identical copies are valuable in some sense, but that they are valuable in some special sense, and I just don't see it.
As far as MWI goes, yes... if you posit a version of many-worlds where the various branches are identical, then I don't care if you delete half of those identical branches. I do care if you delete me from half of them, because that causes my loved ones in those branches to suffer... or half-suffer, if you like. Also, because the fact that those branches have suddenly become non-identical (since I'm in some and not the others) makes me question the premise that they are identical branches.
↑ comment by ArisKatsaris · 2011-12-21T12:47:29.709Z · LW(p) · GW(p)
"You are decreasing the amount of that person that exists."
And this "amount" is measured by the number of simulations? What if one simulation is using double the amount of atoms (e.g. by having thicker transistors), does it count twice as much? What if one simulation double checks each result, and another does not, does it count as two?
"All that changes is the amplitude of your existence."
The equivalence between copies spread across the many worlds and identical simulations running in the same world is yet to be proven or disproven -- and I expect it won't be proven or disproven until we have some better understanding of the hard problem of consciousness.
↑ comment by TheOtherDave · 2011-12-21T05:29:27.297Z · LW(p) · GW(p)
Can't speak for APMason, but I say it because what matters to me is the information.
If the information is different, and the information constitutes people, then it constitutes different people. If the information is the same, then it's the same person. If a person doesn't contain any unique information, whether they live or die doesn't matter nearly as much to me as if they do.
And to my mind, what the law decides to do is an unrelated issue. The law might decide to hold me accountable for the actions of my 6-month-old, but that doesn't make us the same person. The law might decide not to hold me accountable for what I did ten years ago, but that doesn't mean I'm a different person than I was. The law might decide to hold me accountable for what I did ten years ago, but that doesn't mean I'm the same person I was.
Replies from: xxd↑ comment by xxd · 2011-12-21T19:45:22.971Z · LW(p) · GW(p)
"If the information is different, and the information constitutes people, then it constitutes different people."
True, and therein lies the problem. Let's do two comparisons. You have two copies: one the original, the other the copy.
Compare them on the macro scale (i.e. non-quantum). They are identical except for position and momentum.
Now let's compare them on the quantum scale. Even at the point where they are identical on the macro scale, they are not identical on the quantum scale. All the quantum states are different. The simple act of observing the states (either by scanning or by rebuilding) changes them, and thus on the quantum scale we have two different entities, even though they are identical on the macro scale except for position and momentum.
Using your argument that it's the information content that's important: they don't really have any useful differences in information content, especially not on the macro scale, but they differ in all of their non-useful quantum states. They are physically different entities.
Basically what you're talking about is using a lossy algorithm to copy the individuals. At the level of detail you care about they are the same. At a higher level of detail they are distinct.
I'm thus uncomfortable with killing one of them and then saying the person still exists.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T20:07:35.075Z · LW(p) · GW(p)
So, what you value is the information lost during the copy process? That is, we've been saying "a perfect copy," but your concern is that no copy that actually exists could actually be a perfect copy, and the imperfect copies we could actually create aren't good enough?
Again, just to be clear, what I'm trying to understand is what you value that I don't. If data at these high levels of granularity is what you value, then I understand your objection. Is it?
Replies from: xxd, xxd↑ comment by xxd · 2011-12-22T16:33:32.763Z · LW(p) · GW(p)
"Again, just to be clear, what I'm trying to understand is what you value that I don't. If data at these high levels of granularity is what you value, then I understand your objection. Is it?"
OK, I've mulled your question over, and I think I now have the subtlety of what you are asking down, as distinct from the slight variation I answered.
Since I value my own life, I want to be sure that it's actually me that's alive if you plan to kill me. Because we're basically creating an additional copy really quickly and then disposing of the original, I have a hard time believing that we're doing something equivalent to a single copy walking through a gate.
I don't believe that just the information by itself is enough to answer the question "Is it the original me?" in the affirmative. And given that it's not even all of the information (though it is all of the information on the macro scale), I know for a fact we're doing a lossy copy. The quantum states are possibly irrelevant on a macro scale for determining whether (A == B), but since I know from physics that they're not exactly equivalent once you go down to the quantum level, I just can't buy into it, though things would be murkier if the quantum states were provably identical.
Does that answer your question?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-22T17:56:47.501Z · LW(p) · GW(p)
Maybe?
Here's what I've understood; let me know if I've misunderstood anything.
Suppose P is a person who was created and preserved in the ordinary way, with no funky hypothetical copy/delete operations involved. There is consequently something about P that you value... call that "something" X for convenience.
If P' is a duplicate of P, then P' does not possess X, or at least cannot be demonstrated to possess X.
This only applies to people; non-person objects either do not possess X in the first place, or if they do, it is possible in principle for a duplication process to create a duplicate that also possesses X.
X is preserved for P from one moment/day/year to the next, even though P's information content -- at a macroscopic level, let alone a quantum one -- changes over time. I conclude that X does not depend on P's information content at all, even on a macroscopic level, and all this discussion of preserving quantum states is a red herring.
By similar reasoning, I conclude that X doesn't depend on atoms, since the atoms of which P is comprised change over time. The same is true of energy levels.
I don't have any idea what that X might actually be, since we've eliminated from consideration everything about people I'm aware of.
I'm still interested in more details about X, beyond the definitional attribute of "X is that thing P has that P' doesn't", but I no longer believe I can elicit those details through further discussion.
Replies from: xxd↑ comment by xxd · 2011-12-22T18:17:54.908Z · LW(p) · GW(p)
EDIT: Yes, you did understand, though I can't personally say that I'm willing to come out and say definitively that the X is a red herring; it sounds like you are willing to do this.
I think it's an axiomatic difference, Dave.
It appears from my side of the table that you're starting from the axiom that all that's important is the information, and that originality and/or the physical existence embodying that information means nothing.
And you're dismissing the quantum states as if they are irrelevant. They may be irrelevant, but since there is some difference between the two copies below the macro scale (the position is different and the atoms are different - though unidentifiably so, other than saying that the count is 2x rather than x atoms), it's impossible to dismiss the question "Am I dying when I do this?", because you are making a lossy copy even from your standpoint. The only get-out clause is to say "it's a close enough copy, because the quantum states and position are irrelevant, because we can't measure the difference between atoms in two identical copies on the macro scale other than saying we've now got 2x the same atoms whereas before we had 1x."
It's exactly analogous to a bacterium budding. The original cell dies, and something close to an exact copy is budded off. If the daughter bacterium were an exact copy of the information content of the original, then you'd have to say from your position that it's the same bacterium and the original is not dead, right? Or maybe you'd say that it doesn't matter that the original died.
My response to that argument (if it were the line of reasoning you took - is it?) would be that it matters volitionally: if the original didn't want to die and it was forced to bud, then it's been killed.
Replies from: TheOtherDave, dlthomas↑ comment by TheOtherDave · 2011-12-22T18:58:00.763Z · LW(p) · GW(p)
"I can't personally say that I'm willing to come out and say definitively that the X is a red herring; it sounds like you are willing to do this."
I did not say the X is a red herring. If you believe I did, I recommend re-reading my comment.
The X is far from being a red herring; rather, the X is precisely what I was trying to elicit details about for a while. (As I said above, I no longer believe I can do so through further discussion.)
But I did say that identity of quantum states is a red herring.
As I said before, I conclude this from the fact that you believe you are the same person you were last year, even though your quantum states aren't identical. If you believe that X can remain unchanged while Y changes, then you don't believe that X depends on Y; if you believe that identity can remain unchanged while quantum states change, then you don't believe that identity depends on quantum states.
To put this another way: if changes in my quantum states are equivalent to my death, then I die constantly and am constantly replaced by new people who aren't me. This has happened many times in the course of writing this comment. If this is already happening anyway, I don't see any particular reason to avoid having the new person appear instantaneously in my mom's house, rather than having it appear in an airplane seat an incremental distance closer to my mom's house.
Other stuff:
Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn't matter that the parent cell died at the instant of budding.
I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then.
I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)
I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)
A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is "no," since the duplicate isn't them; they stopped existing just as they desired.
↑ comment by xxd · 2011-12-22T20:16:19.525Z · LW(p) · GW(p)
Other stuff:
"Yes, I would say that if the daughter cell is identical to the parent cell, then it doesn't matter that the parent cell died at the instant of budding."
OK good to know. I'll have other questions but I need to mull it over.
"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then." I agree with this but I don't think it supports your line of reasoning. I'll explain why after my meeting this afternoon.
"I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)" Interesting. I have a contrary line of argument which I'll explain this afternoon.
"I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)" Disagree. Again I'll explain why later.
"A question for you: if someone wants to stop existing, and they destructively scan themselves, am I violating their wishes if I construct a perfect duplicate from the scan? I assume your answer is "no," since the duplicate isn't them; they stopped existing just as they desired." Maybe. If you have destructively scanned them then you have killed them so they now no longer exist so that part you have complied perfectly with their wishes from my point of view. But in order to then make a copy, have you asked their permission? Have they signed a contract saying they have given you the right to make copies? Do they even own this right to make copies? I don't know.
What I can say is that our differences in opinion here would make a superb science fiction story.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-22T20:24:58.330Z · LW(p) · GW(p)
There's a lot of decent SF on this theme. If you haven't read John Varley's Eight Worlds stuff, I recommend it; he has a lot of fun with this. His short stories are better than his novels, IMHO, but harder to find. "Steel Beach" isn't a bad place to start.
Replies from: xxd↑ comment by xxd · 2011-12-27T18:31:32.983Z · LW(p) · GW(p)
Thanks for the suggestion. Yes, I have already read it (Steel Beach). It was OK, but it didn't really touch much on our points of contention as such. In fact, I'd say it steered clear of them, since there wasn't really the concept of uploads, etc. Interestingly, I haven't read anything that really examines closely whether the copied upload really is you. Anyways.
"I would also say that it doesn't matter that the vast majority of the cells comprising me twenty years ago are dead, even though the cells currently comprising me aren't identical to the cells that comprised me then."
OK, I have to say that now I've thought it through, I think this is a straw-man argument: "you're not the same as you were yesterday" used as a pretext for saying that a copy is exactly the same as continuing from one moment to the next. It misses the point entirely.
Although you are legally the same person, it's true that you're not exactly physically the same person today as you were yesterday, and it's also true that you have almost none of the physical matter or cells in you today that you had when you were a child.
That this is true in no way negates the main point: human physical existence at any one point in time does have continuity. I have some of the same cells I had up to about seven to ten years ago. I have some inert matter in me from the time I was born, AND I have continuous memories to a greater or lesser extent. This is directly analogous to the position I posted before about a slow hybridizing transition to machine form, before I had even clearly thought this out consciously.
Building a copy of yourself and then destroying the original has no continuity. It's directly analogous to asexually budding a new copy of yourself and then imprinting it with your memories, and is patently not the same concept as normal human existence. Not even close.
That you and some others might dismiss the differences is fine, and if you hypothetically wanted to take the position that killing yourself so that a copy of your mind-state could exist indefinitely is acceptable, then I have no problem with that; but it's patently not the same as the process you, I, and everyone else go through on a day-to-day basis. It's a new thing. (Although it's already been tried in nature, as the asexual budding process of bacteria.)
I would appreciate it, however, that if that choice is being offered to others, it is clearly explained to them what is happening: i.e., physical body death and a copy being resurrected, not that they themselves continue living, because they do not. Whether you consider it irrelevant is beside the point. Volition is very important, but I'll get to that later.
"I agree with you that if a person is perfectly duplicated and the original killed, then the original has been killed. (I would also say that the person was killed, which I think you would agree with. I would also say that the person survived, which I think you would not agree with.)"
That's directly analogous to the many-worlds interpretation of quantum physics, which has multiple timelines. You could argue from that perspective that death is irrelevant, because in an infinitude of possibilities, if one of your instances dies then you go on existing. Fine, but it's not me. I'm mortal and always will be, even if some virtual copy of me might not be. So you guessed correctly: unless we're using some different definition of "person" (which is likely, I think), the person did not survive.
"I agree that volition is important for its own sake, but I don't understand what volition has to do with what we've thus far been discussing. If forcing the original to bud kills the original, then it does so whether the original wants to die or not. If it doesn't kill the original, then it doesn't, whether the original wants to die or not. It might be valuable to respect people's volition, but if so, it's for some reason independent of their survival. (For example, if they want to die, then respecting their volition is opposed to their survival.)"
Volition has everything to do with it. While it's true that volition is independent of whether they have died or not (agreed), the reason it's important is that some people will likely take your position to justify forced destructive scanning at some point because it's "less wasteful of resources" or some other pretext.
It's also particularly important in the case of an AI over which humanity would have no control. If the AI decides that uploads via destructive scanning are exactly the same thing as the original, and it needs the space for its purposes, then there is nothing to stop it from just going ahead, unless volition is considered to be important.
Here's a question for you: Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?
So here's a scenario for you, given that you think information is the only important thing: do you consider a person who has lost much of their memory to be the same person? What if such a person (who has lost much of their memory) then has a backed-up copy of their memories from six months ago imprinted over the top? Did they just die? What if it's someone else's memories: did they just die?
Here's yet another scenario. I wonder if you have thought about this one: scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using "identical" to mean functionally identical, because we can't get exactly identical, as discussed before). Copy the contents of the mind-states into that clone.
Ask yourself this question: How many deaths have taken place here?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-27T20:35:27.024Z · LW(p) · GW(p)
I agree that there is physical continuity from moment to moment in typical human existence, and that there is similar continuity with a slow transition to a nonhuman form. I agree that there is no such continuity with an instantaneous copy-and-destroy operation.
I understand that you consider that difference uniquely important, such that I continue living in the first case, and I don't continue living in the second case.
I infer that you believe in some uniquely important attribute to my self that is preserved by the first process, and not preserved by the second process.
I agree that if a person is being offered a choice, it is important for that person to understand the choice. I'm perfectly content to describe the choice as between the death of one body and the creation of another, on the one hand, and the continued survival of a single body, on the other. I'm perfectly content not to describe the latter process as the continuation of an existing life.
I endorse individuals getting to make informed choices about their continued life, and their continued existence as people, and the parameters of that existence. I endorse respecting both their stated wishes, and (insofar as possible) their volition, and I acknowledge that these can conflict given imperfect information about the world.
"Do you have a problem with involuntary forced destructive scanning in order to upload individuals into some other substrate (or even a copied clone)?"
Yes. As I say, I endorse respecting individuals' stated wishes, and I endorse them getting to make informed choices about their continued existence and the parameters of that existence; involuntary destructive scanning interferes with those things. (So does denying people access to destructive scanning.)
"Do you consider a person who has lost much of their memory to be the same person?"
It depends on what 'much of' means. If my body continues to live, but my memories and patterns of interaction cease to exist, I have ceased to exist and I've left a living body behind. Partial destruction of those memories and patterns is trickier, though; at some point I cease to exist, but it's hard to say where that point is.
"What if such a person (who has lost much of their memory) then has a backed-up copy of their memories from six months ago imprinted over the top?"
I am content to say I'm the same person now that I was six months ago, so if I am replaced by a backed-up copy of myself from six months ago, I'm content to say that the same person continues to exist (though I have lost potentially valuable experience). That said, I don't think there's any real fact of the matter here; it's not wrong to say that I'm a different person than I was six months ago and that replacing me with my six-month-old memories involves destroying a person.
What if it's someone else's memories: did they just die?
If I am replaced by a different person's memories and patterns of interaction, I cease to exist.
Scan a person destructively (with their permission). Keep their scan in storage on some static substrate. Then grow a perfectly identical clone of them (using "identical" to mean functionally identical because we can't get exactly identical as discussed before). Copy the contents of the mindstates into that clone. How many deaths have taken place here?
Several trillion: each cell in my current body died. I continue to exist. If my clone ever existed, then it has ceased to exist.
Incidentally, I think you're being a lot more adversarial here than this discussion actually calls for.
Replies from: xxd↑ comment by xxd · 2011-12-27T21:31:01.782Z · LW(p) · GW(p)
Very good response. I can't think of anything to disagree with, and I don't think I have anything more to add to the discussion.
My apologies if you read anything adversarial into my message. My intention was to be pointed in my line of questioning but you responded admirably without evading any questions.
Thanks for the discussion.
↑ comment by dlthomas · 2011-12-22T18:28:30.345Z · LW(p) · GW(p)
What if you were in a situation where you had a near 100% chance of a seemingly successful destructive upload on the one hand, and a 5% chance of survival without upload on the other? Which would you pick, and how does your answer generalize as the 5% goes up or down?
Replies from: xxd↑ comment by xxd · 2011-12-22T18:40:28.617Z · LW(p) · GW(p)
Of course I would do it because it would be better than nothing. My memories would survive. But I would still be dead.
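One way to make dlthomas's question concrete is as a bare expected-value comparison. The sketch below is only illustrative: `v_upload`, the value placed on surviving as an upload (on a scale where ordinary survival is 1 and death is 0), is an invented parameter, not anything either commenter specified.

```python
# Toy expected-value sketch of the upload-or-gamble choice.
# Assumed scale: ordinary survival = 1, death = 0, and v_upload is
# what a "seemingly successful" destructive upload is worth to you.
def prefer_upload(p_survival: float, v_upload: float) -> bool:
    # Take the (near-certain) upload exactly when its value to you
    # beats the expected value of gambling on ordinary survival.
    return v_upload > p_survival

# An answer like "better than nothing, but still dead" suggests a small
# positive v_upload; sweeping p_survival shows where such a person flips.
for p in (0.05, 0.25, 0.50, 0.90):
    print(f"p_survival={p:.2f} -> upload? {prefer_upload(p, v_upload=0.3)}")
```

On this toy model the answer generalizes at a single crossover: you decline the upload as soon as the survival chance exceeds your v_upload.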
Here's a thought experiment to outline the difference (whether or not you think it makes sense from your position of valuing only the information): let's say you could slowly transfer a person into an upload by the following method. You cut out a part of the brain. That part of the brain is now dead. You replace it with a new part, a silicon part (or some other computational substrate) that can interface directly with the remaining neurons.
Am I dead? Yes, but not all of me is, and we're now left with a hybrid being. It's not completely me, but I've not yet been killed by the process and I get to continue to live and think thoughts (even though part of my thoughts are now happening inside something that isn't me).
Gradually over a process of time (let's say years rather than days or minutes or seconds) all of the parts of the brain are replaced.
At the end of it I'm still dead, but my memories live on. I did not survive but some part of the hybrid entity I became is alive and I got the chance to be part of that.
Now I know the position you'd take is that speeding that process up is mathematically equivalent.
It isn't from my perspective. I'm dead instantly, and I don't get the chance to transition my existence in a way that's meaningful to me.
Sidetracking a little: I suspect you were comparing your unknown quantity X to some kind of "soul". I don't believe in souls. I value being alive, having experiences, and being able to think. To me, dying and then being resurrected on the last day by some superbeing who has rebuilt my atoms using other atoms and then copies my information content into some kind of magical "spirit being" is exactly identical to deconstructing me - killing me - and making a copy, even if I took the position that the reconstructed being on "the last day" was me. Which I don't. As soon as I die that's me gone, regardless of whether some superbeing reconstructs me later using the same or different atoms (if that were possible).
↑ comment by xxd · 2011-12-21T20:49:41.111Z · LW(p) · GW(p)
You're basically asking why I should value myself over a spatially separate exact copy of myself (and by exact copy we mean as close as you can get) and then superimposing another question of "isn't it the information that's important?"
Not exactly.
I'm concerned that I will die and I'm examining the hypotheses as to why it's not me that dies. As best I can come up with, the response is "you will die but it doesn't matter because there's another identical (or as close as possible) copy still around."
As to what you value that I don't I don't have an answer. Perhaps a way to elicit the answer would be to ask you the question of why you only value the information and not the physical object also?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-21T21:10:46.850Z · LW(p) · GW(p)
I'm not asking why you should value yourself over an exact copy, I'm asking why you do. I'm asking you (over and over) what you value. Which is a different question from why you value whatever that is.
I've told you what I value, in this context. I don't know why I value it, particularly... I could tell various narratives, but I'm not sure I endorse any of them.
As to what you value that I don't I don't have an answer.
Is that a typo? What I've been trying to elicit is what xxd values here that TheOtherDave doesn't, not the other way around. But evidently I've failed at that... ah well.
Replies from: xxd↑ comment by xxd · 2011-12-21T21:59:40.827Z · LW(p) · GW(p)
Thanks Dave. This has been a very interesting discussion and although I think we can't close the gap on our positions I've really enjoyed it.
To answer your question "what do I value?": I think I answered it already; I value not being killed.
The difference in our positions appears to be some version of "but your information is still around", and my response is "but it's not me", and your response is "how is it not you?"
I don't know.
"What is it I value that you don't?" I don't know. Maybe I consider myself to be a higher resolution copy or a less lossy copy or something. I can't put my finger on it because when it comes down to it why do just random quantum states make a difference to me when all the macro information is the same apart from position and perhaps momentum. I don't really have an answer for that.
↑ comment by APMason · 2011-12-20T23:48:47.590Z · LW(p) · GW(p)
But you want the things you think are people to really be people, right?
Replies from: xxd↑ comment by xxd · 2011-12-20T23:53:34.282Z · LW(p) · GW(p)
I'm not sure I care. For example, if I had my evil way and went FOOM, then part of my optimization process would involve mind control and somewhat deviant roleplay with certain porno actresses. Would I want those actresses to be controlled against their will? Probably not. But at the same time it would be good enough if simulations of the actresses were so faithful that I could not tell the difference between the original and the simulation.
Others may have different opinions.
Replies from: APMason↑ comment by APMason · 2011-12-21T00:00:58.882Z · LW(p) · GW(p)
You wouldn't prefer to forego the deviant roleplay for the sake of, y'know, not being evil?
But that's not the point, I suppose. It sounds like you'd take the Experience Machine offer. I don't really know what to say to that except that it seems like a wacky utility function.
Replies from: xxd↑ comment by xxd · 2011-12-21T00:07:22.664Z · LW(p) · GW(p)
How is the deviant roleplay being evil if the participants are not being coerced or are catgirls? And if it's not being evil then how would I be defined as evil just because I (sometimes - not always) like deviant roleplay?
That's the crux of my point. I don't reckon that optimizing humanity's utility function (or any individual's, for that matter) is the opposite of unfriendly AI, and I furthermore reckon that trying to seek that goal is much, much harder than trying to create an AI that at a minimum won't kill us all AND might trade with us if it wants to.
Replies from: APMason↑ comment by APMason · 2011-12-21T00:19:02.721Z · LW(p) · GW(p)
Oh, sorry, I interpreted the comment incorrectly - for some reason I assumed your plan was to replace the actual porn actresses with compliant simulations. I wasn't saying the deviancy itself was evil. Remember that the AI doesn't need to negotiate with you - it's superintelligent and you're not. And while creating an AI that just ignores us but still optimises other things, well, it's possible, but I don't think it would be easier than creating FAI, and it would be pretty pointless - we want the AI to do something, after all.
Replies from: xxd↑ comment by xxd · 2011-12-21T00:22:39.048Z · LW(p) · GW(p)
A-Ha!
Therein lies the crux: you want the AI to do stuff for you.
EDIT: Oh yeah I get you. So it's by definition evil if I coerce the catgirls by mind control. I suppose logically I can't have my cake and eat it since I wouldn't want my own non-sentient simulation controlled by an evil AI either.
So I guess that makes me evil. Who would have thunk it. Well, I guess strike my utility function off the list of friendly AIs. But then again, I've already said elsewhere that I wouldn't trust my own function to be optimal.
I doubt, however, that we'd easily find a candidate function from a single individual for similar reasons.
Replies from: APMason↑ comment by APMason · 2011-12-21T00:42:36.614Z · LW(p) · GW(p)
I think we've slightly misunderstood each other. I originally thought you were saying that you wanted to destructively upload porn actresses and then remove sentience so they did as they were told - which is obviously evil. But I now realise you only want to make catgirl copies of porn actresses while leaving the originals intact (?) - the moral character of which depends on things like whether you get the consent of the actresses involved.
But yes! Of course I want the AGI to do something. If it doesn't do anything, it's not an AI. It's not possible to write code that does absolutely nothing. And while building AGI might be a fun albeit stupidly dangerous project to pursue just for the heck of it, the main motivator behind wanting the thing to be created (speaking for myself) is so that it can solve problems, like, say, death and scarcity.
Replies from: Bugmaster, xxd↑ comment by Bugmaster · 2011-12-21T00:56:21.466Z · LW(p) · GW(p)
If it doesn't do anything, it's not an AI.
Technically, it's still an AI, it's just a really useless one.
Replies from: xxd↑ comment by xxd · 2011-12-21T00:54:44.333Z · LW(p) · GW(p)
Correct. I (unlike some others) don't hold the position that a destructive upload and then a simulated being is exactly the same being; therefore destructively scanning the porn actresses would be killing them, in my mind. Non-destructively scanning them and then using the simulated versions for "evil purposes", however, is not killing the originals. Whether using the copies for evil purposes even against their simulated will is actually evil or not is debatable. I know some will take the position that the simulations could theoretically be sentient. If they are sentient, then I am therefore de facto evil.
And I get the point that we want to get the AGI to do something, just that I think it will be incredibly difficult to get it to do something if it's recursively self-improving, and it becomes progressively more difficult to do the further away you go from defining friendly as NOT(unfriendly).
Replies from: TimS, APMason↑ comment by APMason · 2011-12-21T01:05:44.210Z · LW(p) · GW(p)
Well, I would argue that if the computer is running a perfect simulation of a person, then the simulation is sentient - it's simulating the brain and is therefore simulating consciousness, and for the life of me I can't imagine any way in which "simulated consciousness" is different from just "consciousness".
I think it will be incredibly difficult to get it to do something if it's recursively self-improving and it becomes progressively more difficult to do the further away you go from defining friendly as NOT(unfriendly).
I disagree. Creating a not-friendly-but-harmless AGI shouldn't be any easier than creating a full-blown FAI. You've already had to do all the hard work of making it consistent while self-improving, and you've also had to do the hard work of programming the AI to recognise humans and to not do harm to them, while also acting on other things in the world. Here's Eliezer's paper.
Replies from: xxd↑ comment by Bugmaster · 2011-12-20T07:10:30.315Z · LW(p) · GW(p)
Okay, but if both start out as me, how do we determine which one ceases to be me when they diverge?
I would say that they both cease to be you, just as the current, singular "you" ceases to be that specific "you" the instant you see some new sight or think some new thought.
For instance, if I commit a crime, [my copy] shouldn't be blamed if it didn't commit the crime.
Agreed, though I would put it something like, "if a person diverged into two separate versions who then became two separate people, then one version shouldn't be blamed for the crimes of the other version".
On a separate note, I'm rather surprised to hear that you prefer consequentialist morality to deontological morality; I was under the impression that most Christians followed the Divine Command model, but it looks like I was wrong.
If by "faith" you mean "things that follow logically from beliefs about God, the afterlife and the Bible" then no.
I mean something like, "whatever it is that causes you to believe in God, the afterlife, and the Bible in the first place", but point taken.
When I say "feel like a human" I mean "feel" in the same way that I feel tired...
Ooh, I see, I totally misunderstood what you meant. By feel, you mean "experience feelings", thus something akin to qualia, right? But in this case, your next statement is problematic:
But something acting like a person is sufficient reason to treat it like one.
In this case, wouldn't it make sense to conclude that mind uploading is a perfectly reasonable procedure for anyone (possibly other than yourself) to undergo? Imagine that Less Wrong were a community where mind uploading was common. Thus, at any given point, you could be talking to a mix of uploaded minds and biological humans; but you'd strive to treat them all the same way, as human, since you don't know which is which (and it's considered extremely rude to ask).
This makes sense to me, but this would seem to contradict your earlier statement that you could, in fact, detect whether any particular entity had a soul (by asking God), in which case it might make sense for you to treat soulless people differently regardless of what they acted like.
On the other hand, if you're willing to treat all people the same way, even if their ensoulment status is in doubt, then why would you not treat yourself the same way, regardless of whether you were using a biological body or an electronic one?
Since I can think of none that I trust enough to, for instance, let them chain me to the wall of a soundproof cell in their basement.
Good point. I should point out that some people do trust select individuals to do just that, and many more people trust psychiatrists and neurosurgeons enough to give them at least some control over their minds and brains. That said, the hypothetical technician in charge of uploading your mind would have a much greater degree of access than any modern doctor, so your objection makes sense. I personally would likely undergo the procedure anyway, assuming the technician had some way of proving that he had a good track record, but it's possible I'm just being uncommonly brave (or, more likely, uncommonly foolish).
I'm aware that believing something is a necessary condition for saying it; I just don't know if it's a sufficient condition.
Haha yes, that's a good point, you should probably stick to saying things that are actually relevant to the topic, otherwise we'd never get anywhere :-)
and while we're at it, can I see and hear and smell better?
FWIW, this is one of the main goals of transhumanists, if I understand them correctly: to be able to experience the world much more fully than their current bodies would allow.
That's just too implausible for real life.
Oh, I agree (well, except for that whole soul thing, obviously). As I said before, I don't believe that anything like full mental uploading, not to mention the Singularity, will occur during my lifetime; and I'm not entirely convinced that such things are possible (the Singularity seems especially unlikely). Still, it's an interesting intellectual exercise.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T23:05:05.423Z · LW(p) · GW(p)
I typed up a response to this. It wasn't a great one, but it was okay. Then I hit the wrong button and lost it and I'm not in the mood to write it over again because I woke up early this morning to get fresh milk. (By "fresh" I mean "under a minute from the cow to me", if you're wondering why I can't go shopping at reasonable hours.) It turns out that four hours of sleep will leave you too tired to argue the same point twice.
That said,
On the other hand, if you're willing to treat all people the same way, even if their ensoulment status is in doubt, then why would you not treat yourself the same way, regardless of whether you were using a biological body or an electronic one?
Deciding whether or not to get uploaded is a choice I make trying to minimize the risk of dying by accident or creating multiple copies of me. Reacting to other people is a choice I make trying to minimize the risk of accidentally being cruel to someone. No need to act needlessly cruel anyway. Plus it's good practice, since our justice system won't decide personhood by asking God...
Replies from: khafra, Bugmaster, APMason↑ comment by Bugmaster · 2011-12-21T00:50:45.684Z · LW(p) · GW(p)
By "fresh" I mean "under a minute from the cow to me", if you're wondering why I can't go shopping at reasonable hours.
That sounds ecolicious to a city-slicker such as myself, but all right :-)
Deciding whether or not to get uploaded is a choice I make trying to minimize the risk of dying by accident or creating multiple copies of me.
Fair enough, though I would say that if we assume that souls do not exist, then creating copies is not a problem (other than that it might be a drain on resources, etc.), and uploading may actually dramatically decrease your risk of dying. But if we assume that souls do exist, then your objections are perfectly reasonable.
Reacting to other people is a choice I make trying to minimize the risk of accidentally being cruel to someone.
That makes sense, but couldn't you ask God somehow whether the person you're talking to has a soul or not, and then act accordingly? Earlier you indicated that you could do this, but it's possible I misunderstood.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T02:04:04.403Z · LW(p) · GW(p)
I apologize; earlier I deliberately glossed over a complicated thought process just to give the conclusion that maybe you could know, as opposed to explaining in full.
God has been known to speak to people through dreams, visions and gut feelings. That doesn't mean God always answers when I ask questions, which probably has something to do with the weakness of my faith. You could ask and you could try to listen, and if God is willing to answer, and if you don't ignore obvious evidence due to your own biases*, you could get an answer. But God has for whatever reason chosen to be rather taciturn (I can only think of one person I know who's been sent a vision from God), so you also might not, and God might speak to one person about it but not everyone, leaving others to wonder if they can trust people's claims, or to study the Bible and other relevant information to try to figure it out for themselves. And then there are people who just get stuff wrong and won't listen, but insist they're right, and insist God agrees with them, confusing anyone God hasn't spoken to. Hence, if you receive an answer and listen (something that's happened to me, though not nearly every time I ask a question-- at least, not unless we count finding the answer, after asking, by running into it in a book or something), you'll know; but there's also a possibility of just not finding out.
*There's a joke I can't find about some Talmudic scholars who are arguing. They ask God, a voice booms out from the heavens which one is right, and the others fail to update.
Replies from: APMason, Bugmaster↑ comment by APMason · 2011-12-21T02:09:44.186Z · LW(p) · GW(p)
God has been known to speak to people through dreams, visions and gut feelings.
But schizophrenics have been known to experience those things too. How do you tell the difference - even if you're the one it's happening to?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T03:21:41.408Z · LW(p) · GW(p)
I had to confront that one. Upvoted for being an objection a reasonable person should make.
- Be familiar with how mental illnesses and other disorders that can affect thinking actually present. (Not just the DSM. Read what people with those conditions say about them.)
- Be familiar with what messages from God are supposed to be like. (From Old Testament examples or Paul's heuristic. I suppose it's also reasonable to ascertain whether or not they fit the pattern for some other religion.)
Essentially, look at what your experiences best fit. That can be hard. But if your "visions" are highly disturbing and you become paranoid about your neighbors trying to kill you, it's more likely schizophrenia than divine inspiration. This applies to other things as well.
- Does it actually make sense? Is it a message saying something, and then another one of the same sort, proclaiming the opposite, so that to believe one requires disbelieving the other?
- Is there anything you can do to increase the probability that you're mentally healthy? Is your thyroid okay? How are your adrenals? Either could get sick in a way that mimics a mood disorder. Also consider whether your lifestyle's conducive to mental health: sleep problems? Poor nutrition?
- Run it by other people who know you well, people you would trust to know if you were mentally ill.
No certainties. Just ways to be a little more sure. And that leads into the next one.
- Pick the most likely interpretation and go with it and see if your quality of life improves. See if you're becoming a better person.
↑ comment by juliawise · 2011-12-21T20:43:28.784Z · LW(p) · GW(p)
But if your "visions" are highly disturbing and you become paranoid about your neighbors trying to kill you, it's more likely schizophrenia than divine inspiration.
"The angel of the Lord appeareth to Joseph in a dream, saying, Arise, and take the young child and his mother, and flee into Egypt, and be thou there until I bring thee word: for Herod will seek the young child to destroy him. When he arose, he took the young child and his mother by night, and departed into Egypt."
- Does it actually make sense?
I work in a psych hospital, and the delusional patients there uniformly believe that their delusions make sense.
- Run it by other people
This is the most likely to work. The delusional people I know are aware that other people disagree with their delusions. Relatedly, there is great disagreement on the topic of religion.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T21:47:57.140Z · LW(p) · GW(p)
"The angel of the Lord appeareth to Joseph in a dream, saying, Arise, and take the young child and his mother, and flee into Egypt, and be thou there until I bring thee word: for Herod will seek the young child to destroy him. When he arose, he took the young child and his mother by night, and departed into Egypt."
Good point. Of course, this one does make a testable prediction; and, as opposed to what might be more characteristic of a mental illness, the angel tells him there's trouble, he avoids it, and we have no further evidence of his getting any more such messages. That at least makes schizophrenia a much less likely explanation than just having a weird dream, so that's what to try ruling out.
↑ comment by Bugmaster · 2011-12-21T04:02:28.040Z · LW(p) · GW(p)
Be familiar with what messages from God are supposed to be like. (From Old Testament examples or Paul's heuristic. I suppose it's also reasonable to ascertain whether or not they fit the pattern for some other religion.)
I have to admit that I'm not familiar with Paul's heuristic -- what is it?
As for the Old Testament, God gives out some pretty frightening messages in there, from "sacrifice your son to me" to "wipe out every man, woman, and child who lives in this general area". I am reasonably sure you wouldn't listen to a message like that, but why wouldn't you ?
Pick the most likely interpretation and go with it and see if your quality of life improves. See if you're becoming a better person.
I have heard this sentiment from other theists, but I still understand it rather poorly, I'm ashamed to admit... maybe it's because I've never been religious, and thus I'm missing some context.
So, what do you mean by "a better person"; how do you judge what is "better" ? In addition, let's imagine that you discovered that believing in, say, Buddhism made you an even better person. Would you listen to messages that appear to be Buddhist, and discard those that appear to be Christian but contradict Buddhism -- even though you're pretty sure that Christianity is right and Buddhism is wrong ?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T04:44:52.308Z · LW(p) · GW(p)
I think I might be too tired to give this the response it deserves. If this post isn't a good enough answer, ask me again in the morning.
I have to admit that I'm not familiar with Paul's heuristic -- what is it?
That you can tell whether a spirit is good or evil by whether or not it says Jesus is Lord.
I have heard this sentiment from other theists, but I still understand it rather poorly, I'm ashamed to admit... maybe it's because I've never been religious, and thus I'm missing some context.
Well, right here I mean that if you've narrowed it down to either schizophrenia or "Christianity is true and God is speaking to you": if it's the former and goes untreated, you expect to feel more miserable. If it's the latter, by embracing God, you expect your quality of life to improve. "Better person" here means "person who maximizes average utility better".
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-21T08:24:52.072Z · LW(p) · GW(p)
That you can tell whether a spirit is good or evil by whether or not it says Jesus is Lord.
Oh, I see, and the idea here is that the evil spirit would not be able to actually say "Jesus is Lord" without self-destructing, right? Thanks, I get it now; but wouldn't this heuristic merely help you to determine whether the message is coming from a good spirit or an evil one, not whether the message is coming from a spirit or from inside your own head?
if it's the former, untreated, you expect to feel more miserable.
I haven't studied schizophrenia in any detail, but wouldn't a person suffering from it also have a skewed subjective perception of what "being miserable" is?
If it's the latter, by embracing God, you expect it'll make your quality of life improve.
Some atheists claim that their life was greatly improved after their deconversion from Christianity, and some former Christians report the same thing after converting to Islam. Does this mean that the Christian God didn't really talk to them while they were religious, after all -- or am I overanalyzing your last bullet point?
"Better person" here means "person who maximizes average utility better".
Understood, though I was confused for a moment there. When other people say "better person", they usually mean something like "a person who is more helpful and kinder to others", not merely "a happier person", though obviously those categories do overlap.
Replies from: AspiringKnitter, ArisKatsaris, juliawise↑ comment by AspiringKnitter · 2011-12-21T20:08:11.808Z · LW(p) · GW(p)
I just lost my comment by hitting the wrong button. Not being too tired today, though, here's what I think in new words:
Oh, I see, and the idea here is that the evil spirit would not be able to actually say "Jesus is Lord" without self-destructing, right? Thanks, I get it now; but wouldn't this heuristic merely help you to determine whether the message is coming from a good spirit or an evil one, not whether the message is coming from a spirit or from inside your own head?
Yes. That's why we have to look into all sorts of possibilities.
I haven't studied schizophrenia in any detail, but wouldn't a person suffering from it also have a skewed subjective perception of what "being miserable" is?
Speaking here only as a layperson who's done a lot of research, I can't think of any indication of that. Rather, they tend to be pretty miserable if their psychosis is out of control (with occasional exceptions). One biography I read recounts the author's schizophrenia being mistaken for depression at first, and her believing that herself since it fit. That said, conventional approaches to treating schizophrenia don't help much, if at all, with half of it: the half that most impairs quality of life. (Not that psychosis doesn't impair it, but as a quick explanation: sufferers also experience the "negative symptoms", which include things like apathy and poor grooming, while the "positive symptoms" are things like hearing voices and being delusional. In the rare* cases where medication works, it only treats positive symptoms and usually exacerbates negative symptoms; just run down a list of side-effects against a list of negative symptoms, which helps if you know the jargon. Hence, poor quality of life.) So it's also possible that receiving treatment for a mental illness you actually have would fail to increase quality of life. Add in abuses by the system and it could even decrease it, so this is definitely a problem.
Understood, though I was confused for a moment there. When other people say "better person", they usually mean something like "a person who is more helpful and kinder to others", not merely "a happier person", though obviously those categories do overlap.
Aris understood correctly.
*About a third of schizophrenics are helped by medication. Not rare, certainly, but that's less than half. Guidelines for treating schizophrenia are irrational. I will elaborate if asked, with the caveat that it's irrelevant and I'm not a doctor.
Replies from: AspiringKnitter, Bugmaster↑ comment by AspiringKnitter · 2011-12-21T21:18:38.378Z · LW(p) · GW(p)
And I left stuff out here that was in the first.
Some atheists claim that their life was greatly improved after their deconversion from Christianity,
Short version: unsurprising because of things like this. People can identify as Christian while being confused about what that means.
some former Christians report the same thing after converting to Islam.
Surprising. My model takes a hit here. Do you have links to firsthand accounts of this?
Replies from: TheOtherDave, lavalamp↑ comment by TheOtherDave · 2011-12-21T21:27:57.908Z · LW(p) · GW(p)
I'm surprised by your surprise.
I generally expect that people who make an effort to be X will subsequently report that being X improves their life, whether we're talking about "convert to Christianity" or "convert to Islam" or "deconvert from Christianity" or "deconvert from Islam."
Replies from: dlthomas↑ comment by lavalamp · 2011-12-21T21:34:40.694Z · LW(p) · GW(p)
People can identify as Christian while being confused about what that means.
Can you clarify? Is it your claim that these "confused" Christians are the only ones who experience improved lives upon deconversion? Or did you mean something else?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T22:31:40.024Z · LW(p) · GW(p)
I'm saying people can believe that they are Christians, go to church, pray, believe in the existence of God and still be wrong about fundamental points of doctrine like "I require mercy, not sacrifice" or the two most important commands, leading to people who think being Christian means they should hate certain people. There are also people who conflate tradition and divine command, leading to groups that believe being Christian means following specific rules which are impractical in modern culture and not beneficial. I expect anyone like that to have an improved quality of life after they stop hating people and doing pointless things. I expect a quality of life even better than that if they stop doing the bad stuff but really study the Bible and be good people, with the caveat that quality of life for those people could be lowered by persecution in some times and places. (They could also end up persecuted for rejecting it entirely in other times and places. Or even the same ones.)
Basically, yeah, only if they've done something wrong in their interpretation of Scripture will they like being atheists better than being Christians.
Replies from: lavalamp↑ comment by lavalamp · 2011-12-21T23:30:24.938Z · LW(p) · GW(p)
My brain is interpreting that as "well, TRUE Christians wouldn't be happier/better if they deconverted." How is this not "No True Scotsman"?
Would you say you are some variety of Calvinist? I'm guessing not, since you don't sound quite emphatic enough on this point. (For the Calvinist, it's a point of doctrine that no one can cease being a Christian-- they must not have been elect in the first place. I expect you already know this; I'm saying it for the benefit of anyone following the conversation who is lucky enough not to have heard of Calvinism. Also, lots of fundamentalist-leaning groups (e.g., Baptists) have a "once saved always saved" doctrine.)
I hope I'm not coming off confrontational; I had someone IRL tell me I must never have been a real Christian not too long ago, and I found it very annoying-- so I may be being a bit overly sensitive.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T01:10:06.783Z · LW(p) · GW(p)
My brain is interpreting that as "well, TRUE Christians wouldn't be happier/better if they deconverted." How is this not "No True Scotsman"?
Explained here. Tell me if that's not clear.
Would you say you are some variety of Calvinist?
Um... not exactly?
I expect you already know this,
I was familiar with the concept, but not its name.
I hope I'm not coming off confrontational;
You're not, but I live by Crocker's Rules anyway.
↑ comment by Bugmaster · 2011-12-21T22:55:46.860Z · LW(p) · GW(p)
In the rare* cases where medication works, it only treats positive symptoms and usually exacerbates negative symptoms. ... So it's also possible that receiving treatment for a mental illness you actually have would fail to increase quality of life.
Could you elaborate on this point a bit? As far as I understand, at least some of the positive symptoms may pose significant risks to the patient's life (and possibly those around him, depending on severity). For example, a person may see a car coming straight at him, and desperately try to dodge it, when in reality there's no car. Or a person may fail to notice a car that actually exists. Or, in extreme cases, the person may believe that his neighbour is trying to kill him, take preemptive action, and murder an innocent. If I had symptoms like that, I personally would rather live with the negatives for the rest of my life than live with the vastly increased risk that I might accidentally kill myself or harm others -- even knowing that I might feel subjectively happier until that happens.
Aris understood correctly.
Ok, that makes sense: by "becoming a better person", you don't just mean "a happier person", but also "a person who's more helpful and nicer to others"; and you choose to believe things that make you such a person.
I have to admit, this mode of thought is rather alien to me, and thus I have a tough time understanding it. To me, this sounds perilously close to wishful thinking. To use an exaggerated example, I would definitely feel happier if I knew that I had a million dollars in the bank. Having a million dollars would also empower me to be a better person, since I could donate at least some of it to charity, or invest it in a school, etc. However, I am not going to go ahead and believe that I have a million dollars, because... well... I don't.
In addition, there's a question of what one sees as being "better". As we'd talked about earlier, at least some theists do honestly believe that persecuting gay people and forcing women to wear burqas is a good thing to do (and a moral imperative). Thus, they will (presumably) interpret any gut feelings that prompt them to enforce the burqa ordinances even harder as being good and therefore godly and true. You (and I), however, would do just the opposite. So, we both use the same method but arrive at diametrically opposed conclusions; doesn't this mean that the method may be flawed?
Short version: unsurprising because of things like this. People can identify as Christian while being confused about what that means.
My main objection to this line of reasoning is that it involves the "No True Scotsman" fallacy. Who is to say (other than the Pope, perhaps) what being a Christian "really means"? The more conservative Christians believe that feminism is a sin, whereas you do not; but how would you convince an impartial observer that you are right and they are wrong? You could say, "clearly such attitudes harm women, and we shouldn't be hurting people", but they'd just retort with, "yes, and incarcerating criminals harms the criminals too, but it must be done for the greater good, because that's what God wants; He told me so".
In addition, it is not the case that all people who leave Christianity (be it for another religion, or for no religion at all) come from such extreme sects as the one you linked to. For example, Julia Sweeney (*), a prominent atheist, came from a relatively moderate background, IIRC. More on this below:
Surprising. My model takes a hit here. Do you have links to firsthand accounts of this?
I don't have any specific links right now (I will try to find some later), but apparently there is a whole website dedicated to the subject. Wikipedia also has a list. I personally know at least two people who converted from relatively moderate versions of Christianity to Wicca and Neo-Paganism, and report being much happier as the result, though obviously this is just anecdotal information and not hard data. In general, though, my impression was that religious conversions are relatively common, though I haven't done any hard research on the topic. There's an interesting-looking paper on the topic that I don't have access to... maybe someone else here does?
(*) I just happened to remember her name off the top of my head, because her comedy routine is really funny.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T00:11:52.278Z · LW(p) · GW(p)
Could you elaborate on this point a bit?
Yeah. You could feel unhappy a lot more if you take the pills usually prescribed to schizophrenics, because side-effects of those pills include mental fog and weight gain. You could also be a less helpful person to others, because you would be less able to do things if you're on a high enough dose to "zombify" you. Also, Erving Goffman's work shows that situations where people are in an institution, as he defines the term, cause people to become stupider and less capable. (Kudos to the mental health system for trying to get people out of those places faster-- most people who go in get out after a little while now, as opposed to the months it usually took when he was studying. However, the problems aren't eliminated and his research is still applicable.) Hence, it could make you a worse and unhappier person to undergo treatment.
(and possibly those around him, depending on severity)
NO. That takes a BIG NO. Severity of mental illness is NOT correlated with violence. It's correlated with self-harm, but not hurting other people.
Mental illness is correlated (no surprise here) with being abused and with substance abuse. Both of those are correlated with violence, leading to higher rates of violence among the mentally ill. Even when not corrected for, the rate isn't that high and the mentally ill are more likely to be victims of violent crime than perpetrators of it. But when those effects ARE corrected for, mental illness does not, by itself, cause violence.
At all. End of story. Axe-crazy villains in the movies are unrealistic and offensive portrayals of mental illness. /rant
I have to admit, this mode of thought is rather alien to me, and thus I have a tough time understanding it. To me, this sounds perilously close to wishful thinking. To use an exaggerated example, I would definitely feel happier if I knew that I had a million dollars in the bank. Having a million dollars would also empower me to be a better person, since I could donate at least some of it to charity, or invest it in a school, etc. However, I am not going to go ahead and believe that I have a million dollars, because... well... I don't.
This mode of thought is alien to me too, since I wasn't advocating it. I'm confused about how you could come to that conclusion. I have been unclear, it seems.
(Seriously, what?)
Okay, here's what I mean: suppose you think you only want to fulfill your own selfish desires, and then you become a Christian, and, even though you don't want to, you decide it's right to be nice to other people and to spend time praying, and then after a while you learn that it makes you really happy to be nice, and happier than you've ever been before to pray. That's what I meant.
In addition, there's a question of what one sees as being "better". As we'd talked about earlier, at least some theists do honestly believe that persecuting gay people and forcing women to wear burqas is a good thing to do (and a moral imperative). Thus, they will (presumably) interpret any gut feelings that prompt them to enforce the burqa ordinances even harder as being good and therefore godly and true. You (and I), however, would do just the opposite. So, we both use the same method but arrive at diametrically opposed conclusions; doesn't this mean that the method may be flawed?
Yes. It's only to be used as an adjunct to thinking things through, not the end-all-be-all of your strategy for deciding what to do in life.
The more conservative Christians believe that feminism is a sin, whereas you do not; but how would you convince an impartial observer that you are right and they are wrong?
My argument isn't with people who think feminism is sinful (would you like links to sane, godly people espousing the idea without being hateful?) but with the general tenor of the piece. See below.
My main objection to this line of reasoning is that it involves the "No True Scotsman" fallacy. Who is to say (other than the Pope, perhaps) what being a Christian "really means"?
Well, not the Pope, certainly. He's a Catholic. But I thought a workable definition of "Christian" was "person who believes in the divinity of Jesus Christ and tries to follow his teachings", in which case we have a pretty objective test. Jesus taught us to love our neighbors and be merciful. He repeatedly behaved politely toward women of poor morals, converting them with love and specifically avoiding condemnation. Hence, people who are hateful or condemn others are not following his teachings. If that was a mistake, that's different, just like a rationalist could be overconfident-- but to systematically do it and espouse the idea that you should be hateful clearly goes against what Jesus taught as recorded in the Bible. Here's a quote from the link:
If I were a king, I'd make a law that any woman who wore a miniskirt would go to jail. I'm not kidding!
Compare it with a relevant quote from the Bible, which has been placed in different places in different versions, but the NIVUK (New International Version UK) puts it at the beginning of John 8:
3 The teachers of the law and the Pharisees brought in a woman caught in adultery. They made her stand before the group
4 and said to Jesus, "Teacher, this woman was caught in the act of adultery.
5 In the Law Moses commanded us to stone such women. Now what do you say?"
6 They were using this question as a trap, in order to have a basis for accusing him. But Jesus bent down and started to write on the ground with his finger.
7 When they kept on questioning him, he straightened up and said to them, "If any one of you is without sin, let him be the first to throw a stone at her."
8 Again he stooped down and wrote on the ground.
9 At this, those who heard began to go away one at a time, the older ones first, until only Jesus was left, with the woman still standing there.
10 Jesus straightened up and asked her, "Woman, where are they? Has no-one condemned you?"
11 "No-one, sir," she said. "Then neither do I condemn you," Jesus declared. "Go now and leave your life of sin."
So, it's not unreasonable to conclude that, whether or not Christianity is correct and whether or not it's right to lock people up for wearing miniskirts, that attitude is unChristian.
I don't have any specific links right now (I will try to find some later), but apparently there is a whole website dedicated to the subject. Wikipedia also has a list. I personally know at least two people who converted from relatively moderate versions of Christianity to Wicca and Neo-Paganism, and report being much happier as the result, though obviously this is just anecdotal information and not hard data.
Thank you! I'll look that over.
Replies from: lavalamp, CronoDAS, wedrifid, dlthomas, Bugmaster↑ comment by lavalamp · 2011-12-22T02:34:10.586Z · LW(p) · GW(p)
... I thought a workable definition of "Christian" was "person who believes in the divinity of Jesus Christ and tries to follow his teachings", in which case we have a pretty objective test. Jesus taught us to love our neighbors and be merciful. He repeatedly behaved politely toward women of poor morals, converting them with love and specifically avoiding condemnation. Hence, people who are hateful or condemn others are not following his teachings. If that was a mistake, that's different, just like a rationalist could be overconfident-- but to systematically do it and espouse the idea that you should be hateful clearly goes against what Jesus taught as recorded in the Bible.
I seem to be collecting downvotes, so I'll shut up about this shortly. But to me, anyway, this still sounds like No True Scotsman. I suspect that nearly all Christians will agree with your definition (excepting Mormons and JW's, but I assume you added "divinity" in there to intentionally exclude them). However, I seriously doubt many of them will agree with your adjudication. Fundamentalists sincerely believe that the things they do are loving and following the teachings of Jesus. They think you are the one putting the emphasis on the wrong passages. I personally happen to think you probably are much more correct than they are; but the point is neither one of us gets to do the adjudication.
Replies from: AspiringKnitter, wedrifid↑ comment by AspiringKnitter · 2011-12-22T02:58:14.360Z · LW(p) · GW(p)
I think this is missing the point: they believe that, but they're wrong. The fact that they're wrong is what causes them distress. If you'd like, we can taboo the word "Christian" (or just end the conversation, as you suggest).
Replies from: None, lavalamp↑ comment by [deleted] · 2011-12-22T04:26:52.581Z · LW(p) · GW(p)
.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T04:36:29.358Z · LW(p) · GW(p)
I have never before had someone disagree with me on the grounds that I'm both morally superior to other people and a genius.
Replies from: None, Bugmaster↑ comment by [deleted] · 2011-12-22T04:38:36.681Z · LW(p) · GW(p)
.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T04:46:01.985Z · LW(p) · GW(p)
I wouldn't go disagreeing with him; I'd try performing a double-blind test of his athletic ability while wearing different pairs of socks. It just seems like the sort of thing that's so simple to design and test that I don't know if I could resist. I'd need three people and a stopwatch...
Replies from: wedrifid, TheOtherDave, None↑ comment by wedrifid · 2011-12-22T04:58:18.343Z · LW(p) · GW(p)
I'd need three people and a stopwatch...
Don't forget the spare pairs of socks!
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T05:35:49.806Z · LW(p) · GW(p)
Yes, thanks for reminding me. I'd also need pencil and paper.
↑ comment by TheOtherDave · 2011-12-22T14:44:03.321Z · LW(p) · GW(p)
And a nontrivial amount of time and attention.
I suspect that after the third or fifth such athlete, you'd develop the ability to resist, and simply have your opinion about his or her belief about socks, which you might or might not share depending on the circumstances.
↑ comment by [deleted] · 2011-12-22T05:03:11.168Z · LW(p) · GW(p)
.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T05:50:12.872Z · LW(p) · GW(p)
Uh-oh, that's a bad sign. If someone on LessWrong thinks something like that, I'd better give it credence. But now I'm confused because I can't think what has given you that idea. Ergo, there appears to be evidence that I've not only made a mistake in thinking, but made one unknowingly, and failed to realize afterward or even see that something was wrong.
So, this gives me two questions and I feel like an idiot for asking them, and if this site had heretofore been behaving like other internet sites this would be the point where the name-calling would start, but you guys seem more willing than average to help people straighten things out when they're confused, so I'm actually going to bother asking:
What do you mean by "basic premise" and "can't question" in this context? Do you mean that I can't consider his nonexistence as a counterfactual? Or is there a logical impossibility in my conception of God that I've failed to notice?
Can I have specific quotes, or at least a general description, of when I've been evasive? Since I'm unaware of it, it's probably a really bad thinking mistake, not actual evasiveness-- that or I have a very inaccurate self-concept.
Actually, no possibility seems good here (in the sense that I should revise my estimate of my own intelligence and/or honesty and/or self-awareness down in almost every case), except that something I said yesterday while in need of more sleep came out really wrong. Or that someone else made a mistake, but given that I've gotten several downvotes (over seventeen, I think) in the last couple of hours, that's either the work of someone determined to downvote everything I say or evidence that multiple people think I'm being stupid.
(You know, I do want to point out that the comment about testing his lucky socks was mostly a joke. I do assign a really low prior probability to the existence of lucky socks anywhere, in case someone voted me down for being an idiot instead of for missing the point and derailing the analogy. But testing it really is what I would do in real life if given the chance.)
This isn't a general objection to my religion, is it? (I'm guessing no, but I want to make sure.)
Replies from: None, TimS, Will_Newsome↑ comment by [deleted] · 2011-12-22T10:07:45.014Z · LW(p) · GW(p)
.
Replies from: AspiringKnitter, lavalamp↑ comment by AspiringKnitter · 2011-12-22T18:34:14.106Z · LW(p) · GW(p)
There is a man in the sky who created everything and loves all of us, even the 12-year-old girl getting gang-raped to death right now. His seeming contradictions are part of a grander plan that we cannot fathom.
Not how I would have put that, but mostly ADBOC this. (I wouldn't have called him a man, nor would I have singled out the sky as a place to put him. But yes, I do believe in a god who created everything and loves all, and ADBOC the bit about the 12-year-old-- would you like to get into the Problem of Evil or just agree to disagree on the implied point even though that's a Bayesian abomination? And agree with the last sentence.)
Can't, won't, unwilling to. Yes, it's possible for you to question it, but you aren't doing so.
I'd ask you what would look different if I did, but I think you've answered this below.
Sure you can. How is a universe not set in motion by God notably different from one that is?
You think I'm one of those people. Let me begin by saying that God's existence is an empirical fact which one could either prove or disprove.
I worry about telling people why I converted because I fear ridicule or accusations of lying. However, I'll tell you this much: I suddenly became capable of feeling two new sensations, neither of which I'd felt before and neither of which, so far as I know, has words in English to describe it. Sensation A felt like there was something on my skin, like dirt or mud, and something squeezing my heart, and was sometimes accompanied by a strange scent and almost always by feelings of distress. Sensation B never co-occurred with Sensation A. I could be feeling one, the other or neither, and could feel them to varying degrees. Sensation B felt relaxing, but also very happy and content and jubilant in a way and to a degree I'd never quite been before, and a little like there was a spring of water inside me, and like the water was gold-colored, and like this was all I really wanted forever, and a bit like love. After becoming able to feel these sensations, I felt them in certain situations and not in others. If one assumed that Sensation A was Bad and Sensation B was Good, then they were consistent with Christianity being true. Sometimes they didn't surprise me. Sometimes they did-- I could get the feeling that something was Bad even if I hadn't thought so (had even been interested in doing it) and then later learn that Christian doctrine considered it Bad as well.
I do not think a universe without God would look the same. I can't see any reason why a universe without God would behave as if it had an innate morality that seems, possibly, somewhat arbitrary. I would expect a universe without God to work just like I thought it did when I was an atheist. I would expect there to be nothing wrong (no signal saying Bad) with... well, anything, really. A universe without God has no innate morality. The only thing that could make morality would be human preference, which changes an awful lot. And I certainly wouldn't expect to get a Good signal on the Bible but a Bad signal on other holy books.
So. That's the better part of my evidence, such as it is.
Replies from: Kaj_Sotala, DSimon, TimS, None, Bugmaster, juliawise, Prismattic, None, TheOtherDave, lavalamp, None, None, Will_Newsome, lessdazed, Will_Newsome↑ comment by Kaj_Sotala · 2011-12-22T23:51:47.498Z · LW(p) · GW(p)
If one assumed that Sensation A was Bad and Sensation B was Good, then they were consistent with Christianity being true. Sometimes they didn't surprise me. Sometimes they did-- I could get the feeling that something was Bad even if I hadn't thought so (had even been interested in doing it) and then later learn that Christian doctrine considered it Bad as well.
This would be considerably more convincing if Christianity were a unified movement.
Suppose there existed only three religions in the world, all of which had a unified dogma and only one interpretation of it. Each of them had a long list of pretty specific doctrinal points, like one religion considering Tarot cards bad and another thinking that they were fine. If your Good and Bad sensations happened to precisely correspond to the recommendations of one particular religion, even in the cases where you didn't actually know what the recommendations were beforehand, then that would be some evidence for the religion being true.
However, in practice there are a lot of religions, and a lot of different Christian sects and interpretations. You've said that you've chosen certain interpretations instead of others because that's the interpretation that your sensations favored. Consider now that even if your sensations were just a quirk of your brain and mostly random, there are just so many different Christian sects and varying interpretations that it would be hard not to find some sect or interpretation of Christian doctrine that happened to prescribe the same things as your sensations do.
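Kaj's point here is quantitative, and a toy simulation makes it vivid. Every number below is invented for illustration: 20 yes-or-no doctrinal questions, 200 sects whose positions are independent coin flips, and a person whose Good/Bad sensations are coin flips too.

```python
import random

random.seed(0)

QUESTIONS, SECTS, TRIALS = 20, 200, 1000
AGREEMENT_BAR = 16  # call it a "match" if sect and sensations agree on 16/20

hits = 0
for _ in range(TRIALS):
    sensations = [random.randint(0, 1) for _ in range(QUESTIONS)]
    # Does at least one randomly-positioned sect clear the agreement bar?
    if any(
        sum(random.randint(0, 1) == s for s in sensations) >= AGREEMENT_BAR
        for _ in range(SECTS)
    ):
        hits += 1

# Even purely random sensations usually find *some* sect that fits them.
print(f"P(at least one 'matching' sect) ~ {hits / TRIALS:.2f}")
```

With these made-up numbers the probability comes out at roughly 0.7, which is the shape of the argument: given enough sects, a match is likely whether or not the sensations track anything real.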
Then you need to additionally take into account ordinary cognitive flaws like confirmation bias: once you begin to believe in the hypothesis that your sensations reflect Christianity's teachings, you're likely to take relatively neutral passages and read into them doctrinal support for your position, and ignore passages which say contrary things.
In fact, if I've read you correctly, you've explicitly said that you choose the correct interpretation of Biblical passages based on your sensations, and the Biblical passages which are correct are the ones that give you a Good feeling. But you can't then say that Christianity is true because it's the Christian bits that give you the good feeling - you've defined "Christian doctrine" as "the bits that give a good feeling", so "the bits that give a good feeling" can't not be "Christian doctrine"!
Furthermore, our subconscious models are often accurate but badly understood by our conscious minds. For many skills, we're able to say what's the right or wrong way of doing something, but be completely unable to verbalize the reason. Likewise, you probably have a better subconscious model of what would be "typical" Christian dogma than you are consciously aware of. It is not implausible that you'd have a subconscious process making guesses on what would be a typical Christian response to something, giving you good or bad sensation based on that, and often guessing right (especially since, as noted before, there's quite a lot of leeway in how a "Christian response" is defined).
For instance, you say that you hadn't thought of Tarot cards being Bad before. But the traditional image of Christianity is that of being strongly opposed to witchcraft, and Tarot cards are used for divination, which is strongly related to witchcraft. Even if you hadn't consciously made that connection, it's obvious enough that your subconscious very well could have.
↑ comment by DSimon · 2011-12-25T21:33:53.256Z · LW(p) · GW(p)
I don't think the conclusion that the morality described by sensations A/B is a property of the universe at large has been justified. You mention that the sensations predict in advance what Christian doctrine describes as moral or immoral before you know directly what that doctrine says, but that strikes me as being an investigation method that is not useful, for two reasons:
- Christian culture very heavily permeates most English-speaking cultures. A person who grows up in such a culture will have a high likelihood of correctly guessing Christianity's opinion on any given moral question, even if they haven't personally read the relevant text.
- More generally, introspection is a very problematic way of gathering data. Many biases, both obvious and subtle, come into play and make your job much more difficult. For example: did you take notes on each instance of feeling A or B when it occurred, and use those notes (and only those notes) later when validating them against Christian doctrine? If not, you are much more likely to remember hits than misses, or even to readjust misses into hits after the fact; human memory is notorious for such things.
↑ comment by TimS · 2011-12-22T19:00:42.657Z · LW(p) · GW(p)
A universe without God has no innate morality. The only thing that could make morality would be human preference, which changes an awful lot.
In a world entirely without morality, we constantly face situations where mutual trust would be mutually beneficial, but trusting when the other person betrays is much worse than mutual betrayal. Decision theory has a name for this type of problem: the Prisoner's Dilemma. In a one-shot game, the rational strategy is to defect, which makes for a pretty terrible world.
But when playing an indefinite number of games, it turns out that cooperating first and then punishing defection (the strategy known as tit-for-tat) is a strong strategy in an environment of many distinct strategies. That looks a lot like "turn the other cheek" combined with a little bit of "eye for an eye." Doesn't real-world behavior consistent with that strategy vaguely resemble morality?
In short, decision theory suggests that material considerations can justify a substantial amount of "moral" behavior.
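For concreteness, here is a minimal round-robin tournament in this spirit (a sketch using the standard textbook payoffs; the strategy set is chosen for illustration, not taken from the comment above):

from itertools import product

# Row player's payoff for (my move, their move); standard PD values.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(my_hist, their_hist):
    return "C"

def always_defect(my_hist, their_hist):
    return "D"

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move: "turn the
    # other cheek" on round one, "eye for an eye" after a betrayal.
    return their_hist[-1] if their_hist else "C"

def grim_trigger(my_hist, their_hist):
    # Cooperate until betrayed once, then defect forever.
    return "D" if "D" in their_hist else "C"

STRATEGIES = {"cooperate": always_cooperate, "defect": always_defect,
              "tit-for-tat": tit_for_tat, "grim": grim_trigger}

def play(s1, s2, rounds=200):
    h1, h2, score = [], [], 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        score += PAYOFF[(m1, m2)]
        h1.append(m1)
        h2.append(m2)
    return score  # total payoff to the first strategy

# Round-robin: every strategy plays every strategy, including itself.
totals = {name: 0 for name in STRATEGIES}
for (n1, f1), (n2, f2) in product(STRATEGIES.items(), repeat=2):
    totals[n1] += play(f1, f2)

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} {total}")

With this particular population, the retaliating cooperators (tit-for-tat and grim trigger) finish on top, the unconditional cooperator is exploited into the middle, and the unconditional defector finishes last. Axelrod's tournaments found the same qualitative pattern over much larger strategy pools, though the exact ranking always depends on the population mix.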
Regarding your sensations A and B, from the outside perspective it seems like you've been awfully lucky that your sense of right and wrong matches your religious commitments. If you believed Westboro Baptist doctrine but still felt sensations A and B at the same times you feel them now, then you'd be doing sensation A behavior substantially more frequently. In other words, I could posit that you have a built-in morality oracle, but why should I believe that the oracle should be labelled Christian? If I had the same moral sensations you do, why shouldn't I call it rationalist morality?
Replies from: dlthomas, AspiringKnitter
↑ comment by dlthomas · 2011-12-22T19:05:35.504Z · LW(p) · GW(p)
I would say tit-for-tat looks very much like "eye for an eye" but very little like "turn the other cheek", which seems much more like a cooperatebot.
Replies from: None
↑ comment by [deleted] · 2011-12-22T19:08:14.406Z · LW(p) · GW(p)
It's "turn the other cheek" in the sense that you immediately forgive as soon as you figure out that your partner is willing to cooperate.
Replies from: dlthomas
↑ comment by AspiringKnitter · 2011-12-22T19:15:33.923Z · LW(p) · GW(p)
If you believed Westboro Baptist doctrine but still felt sensations A and B at the same times you feel them now,
...I became a Christian and determined my religious beliefs based on sensations A and B. Why would I believe in unsupported doctrine that went against what I could determine of the world? I just can't see myself doing that. My sense of right and wrong match my religious commitments because I chose my religious commitments so they would fit with my sense of right and wrong.
but why should I believe that the oracle should be labelled Christian?
Because my built-in morality oracle likes the Christian Bible.
Doesn't the real world behavior consistent with that strategy vaguely resemble morality?
It's sufficient to explain some, but not all, morality. Take tarot cards, for example. What was there in the ancestral environment to make those harmful? That just doesn't make any sense with your theory of morality-as-iterated-Prisoner's-Dilemma.
Replies from: TimS
↑ comment by TimS · 2011-12-22T19:22:57.507Z · LW(p) · GW(p)
If you picked a sect based on your moral beliefs, then that is evidence that your Christianity is moral. It is not evidence that morality is your Christianity (i.e. "A implies B" is not equivalent to "B implies A").
And if playing with tarot cards could open a doorway for demons to enter the world (or whatever wrong they cause), it seems perfectly rational to morally condemn tarot cards. I don't morally condemn tarot cards because I think they have the same mystical powers as regular playing cards (i.e. none). Also, I'm not intending to invoke "ancestral environment" when I invoke decision theory.
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-22T19:44:04.929Z · LW(p) · GW(p)
And if playing with tarot cards could open a doorway for demons to enter the world (or whatever wrong they cause), it seems perfectly rational to morally condemn tarot cards.
But that's already conditional on a universe that looks different from what most atheists would say exists. If you see proof that tarot cards-- or anything else-- summon demons, your model of reality takes a hit.
If you picked a sect based on your moral beliefs, then that is evidence that your Christianity is moral. It is not evidence that morality is your Christianity (i.e. "A implies B" is not equivalent to "B implies A").
I don't understand. Can you clarify?
Replies from: TimS
↑ comment by TimS · 2011-12-22T20:02:02.759Z · LW(p) · GW(p)
If tarot cards have mystical powers, I absolutely need to adjust my beliefs about the supernatural. But you seemed to assert that decision theory can't say that tarot cards are immoral in the universes where they are actually dangerous.
If you picked a sect based on your moral beliefs, then that is evidence that your Christianity is moral. It is not evidence that morality is your Christianity (i.e. "A implies B" is not equivalent to "B implies A").
I don't understand. Can you clarify?
Alice has a moral belief that divorce is immoral. This moral belief is supported by objective evidence. She is given a choice to live in Dystopia, where divorce is permissible by law, and Utopia, where divorce is legally impossible. For the most part, Dystopia and Utopia are very similar places to live. Predictably, Alice chooses to live in Utopia. The consistency between Alice's (objectively true) morality and Utopian law is evidence that Utopia is moral. It is not evidence that Utopia is the cause of Alice's morality (i.e. it is not evidence that morality is Utopian - the grammatical ordering of phrases does not help me make my point).
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-22T20:23:19.335Z · LW(p) · GW(p)
But you seemed to assert that decision theory can't say that tarot cards are immoral in the universes where they are actually dangerous.
Oh, I'm sorry. Yes, that does make sense. Decision theory WOULD assert it, but to believe they're immoral requires belief in some amount of supernatural something, right? Hence it makes no sense under what my prior assumptions were (namely, that there was nothing supernatural).
Alice has a moral belief that divorce is immoral. This moral belief is supported by objective evidence. She is given a choice to live in Dystopia, where divorce is permissible by law, and Utopia, where divorce is legally impossible. For the most part, Dystopia and Utopia are very similar places to live. Predictably, Alice chooses to live in Utopia. The consistency between Alice's (objectively true) morality and Utopian law is evidence that Utopia is moral. It is not evidence that Utopia is the cause of Alice's morality (i.e. it is not evidence that morality is Utopian - the grammatical ordering of phrases does not help me make my point).
Oh, now I understand. That makes sense.
Replies from: None, TimS
↑ comment by [deleted] · 2011-12-22T20:37:06.701Z · LW(p) · GW(p)
Oh, I'm sorry. Yes, that does make sense. Decision theory WOULD assert it, but to believe they're immoral requires belief in some amount of supernatural something, right? Hence it makes no sense under what my prior assumptions were (namely, that there was nothing supernatural).
Accepting the existence of the demon portal should not impact your disbelief in a supernatural morality.
Anyways, the demons don't even have to be supernatural. First hypothesis would be hallucination, second would be aliens.
↑ comment by TimS · 2011-12-22T20:33:45.189Z · LW(p) · GW(p)
I don't see that decision theory cares why an activity is dangerous. Decision theory seems quite capable of imposing disincentives for poisoning (chemical danger) and cursing (supernatural danger) in proportion to their dangerousness and without regard to why they are dangerous.
The whole reason I'm invoking decision theory is to suggest that supernatural morality is not necessary to explain a substantial amount of human "moral" behavior.
↑ comment by [deleted] · 2011-12-22T19:55:03.843Z · LW(p) · GW(p)
sensation A and sensation B
You were not entirely clear, but you seem to be taking these as signals of things being Bad or Good in the morality sense, right? OK, so it feels like there is an objective morality. Let's come up with hypotheses:
You have a morality that is the thousand shards of desire left over by an alien god. Things that it was a good idea (for game-theoretic and similar reasons) to do in the ancestral environment tend to feel Good, so that you would do them; things that feel Bad are things you would have wanted to avoid. As we know, an objective morality is what a personal morality feels like from the inside. That is, you are feeling the totally natural feelings of morality that we all feel. As for why you attached special affect to the Bible, I suppose that's the affect heuristic: you feel like the Bible is true and it is the center of your belief or something, and that goodness gets confused with a moral goodness. This is all hindsight, but it seems pretty sound.
Or it could be Jesus-is-Son-of-a-Benevolent-Love-Agent-That-Created-the-Universe. I guess God is sending you signals to say what sort of things he likes/doesn't like? Is that the proposed mechanism for morality? I don't know enough about the theory to say much more.
OK, now let's consider the prior. The complex loving god hypothesis is incredibly complicated. Minds are so complex we can't even build one yet. It would take a hell of a lot more than your feeling-of-morality evidence even to raise this hypothesis to our attention-- a lot more than any scientific hypothesis has ever collected, I would say. You must have other evidence, not only to overcome the prior, but also to overcome all the evidence against a loving god who intelligently arranged anything.
Anyways, it sounds like you were primarily a moral nihilist before your encounter with the god-prescribes-a-morality hypothesis. Have you read Eliezer's metaethics stuff? It deals with the subject of morality in a neutral universe quite well.
I'm afraid I don't see why you call your reward-signal-from-god an "objective morality". It sounds like the best course of action would be to learn the mechanism and seize control of it, like AIXI would.
I (as a human) already have a strong morality, so if I figured out that the agent responsible for all of the evil in the universe were directly attempting to steer me with a subtle reward signal, I'd be pissed. It's interesting that you didn't have that reaction. I guess that's the moral nihilism thing. You didn't know you had your own morality.
Replies from: cousin_it, AspiringKnitter
↑ comment by cousin_it · 2011-12-27T13:52:38.333Z · LW(p) · GW(p)
The complex loving god hypothesis is incredibly complicated. Minds are so complex we can't even build one yet.
There are two problems with this argument. First, each individual god might be very improbable, but that could be counterbalanced by the astronomical number of possible gods (e.g. consider all possible tweaks to the holy book), so you can argue a priori against specific flavors of theism but not against theism in general. Second, if Eliezer is right and AI can develop from a simple seed someone can code up in their garage, that means powerful minds don't need high K-complexity. A powerful mind (or a program that blossoms into one) could even be simpler than physics as we currently know it, which is already quite complex and seems to have even more complexity waiting in store.
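The first point can be made quantitative with a back-of-the-envelope counting argument (my gloss, assuming a 2^(-description length) prior; K and n are illustrative symbols, not from the comment). If a "base" god hypothesis costs K bits to specify and each variant adds n freely tweakable bits (say, edits to the holy book), then

P(\text{some god}) = \sum_i P(g_i) \approx 2^{n} \cdot 2^{-(K+n)} = 2^{-K}

so each individual god is exponentially improbable, but the class as a whole is only penalized for K: the tweak bits cancel, which is why the argument bites against specific flavors of theism rather than theism in general.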
IMO a correct argument against theism should focus on the "loving" part rather than the "mind" part, and focus on evidence rather than complexity priors. The observed moral neutrality of physics is more probable if there's no moral deity. Given what we know about evolution etc., it's hard to name any true fact that makes a moral deity more likely.
I'm not sure that everything in my comment is correct. But I guess LW could benefit from developing an updated argument against (or for) theism?
Replies from: Will_Newsome, Estarlio, None, Will_Newsome
↑ comment by Will_Newsome · 2011-12-27T14:19:17.239Z · LW(p) · GW(p)
Your argument about K-complexity is a decent shorthand but causes people to think that this "simplicity" thing is baked into the universe (universal prior) as if we had direct access to the universe (universal prior, reference machine language) and isn't just another way of saying it's more probable after having updated on a ton of evidence. As you said it should be about evidence not priors. No one's ever seen a prior, at best a brain's frequentist judgment about what "priors" are good to use when.
↑ comment by Estarlio · 2012-01-07T15:57:44.609Z · LW(p) · GW(p)
Second, if Eliezer is right and AI can develop from a simple seed someone can code up in their garage, that means powerful minds don't need high K-complexity.
That may be somewhat misleading. A seed AI, denied access to external information, will be a moron. Yet the more information it takes into memory, the higher the K-complexity of the thing taken as a whole.
You might be able to code a relatively simple AI in your garage, but if it's going to be useful it can't stay simple.
ETA: Also, if you take the computer system as a whole, with all of the programming libraries and hardware arrangements, even 'hello world' would have high K-complexity. If you're talking about whatsoever produces a given output on the screen in terms of probability mass, I'm not sure it's reasonable to separate the two out and treat K-complexity as simply a manifestation of high-level APIs.
Replies from: orthonormal
↑ comment by orthonormal · 2012-01-07T16:48:40.047Z · LW(p) · GW(p)
↑ comment by [deleted] · 2011-12-31T20:32:57.878Z · LW(p) · GW(p)
For every program that could be called a mind, there are very, very many that are not.
Eliezer's "simple" seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
As long as we continue to accept Occam's razor, there's no reason to postulate fundamental gods.
Given that a god exists by other means (an alien singularity, say), I would expect it to appear approximately moral, because it would have created me (or modified me) with approximately its own morality. I assume that god would understand the importance of friendly intelligence. So yeah, the apparent neutrality is evidence against the existence of anything like a god.
Replies from: cousin_it, TheOtherDave
↑ comment by cousin_it · 2012-01-01T10:46:58.255Z · LW(p) · GW(p)
Eliezer's "simple" seed AI is simple compared to an operating system (which people code up in their garages), not compared to laws of physics.
Fair point, but I think you need lots of code only if you want the AI to run fast, and K-complexity doesn't care about speed. A slow naive implementation of "perfect AI" should be about the size of the math required to define a "perfect AI". I'd be surprised if it were bigger than the laws of physics.
Replies from: None
↑ comment by [deleted] · 2012-01-01T22:27:58.394Z · LW(p) · GW(p)
You're right; AIXI or whatever is probably around the same complexity as physics. I bet physics is a lot simpler than it appears right now tho.
Now I'm unsure that a fundamental intelligence even means anything. AIXI, for example, is IIRC based on Bayes and Occam induction, whose domain is cognitive engines within universes more or less like ours. What would a physics god optimising some morality even be able to see and do? It sure wouldn't be constrained by Bayes and such. Why not just replace it with a universe that is whatever the morality maximises? max(morality) is simpler than god(morality) almost no matter how simple god is. Assuming a physics god is even a coherent concept.
In our case, assuming a fundamental god is coherent, the "god did it" hypothesis is strictly defeated (same predictions, less theory) by the "god did physics" hypothesis, which is strictly defeated by the "physics" hypothesis (because physics is a simpler morality than anything else that would produce our world, and if we use physics, god doesn't have to exist).
That leaves us with only alien singularity gods, which are totally possible, but don't exist here by the reasoning I gave in parent.
What did I miss?
Replies from: cousin_it
↑ comment by cousin_it · 2012-01-02T15:46:36.914Z · LW(p) · GW(p)
I bet physics is a lot simpler than it appears right now tho.
That's a reasonable bet. Another reasonable bet is that "laws of physics are about as complex as minds, but small details have too little measure to matter".
Why not just replace it with a universe that is whatever the morality maximises? max(morality) is simpler than god(morality) almost no matter how simple god is.
Well, yeah. Then I guess the question is whether our universe is a byproduct of computing max(morality) for some simple enough "morality" that's still recognizable as such. Will_Newsome seems to think so, or at least that's the most sense I could extract from his comments...
↑ comment by TheOtherDave · 2011-12-31T21:08:12.722Z · LW(p) · GW(p)
Friendly intelligence is not particularly important when the intelligence in question is significantly less powerful an optimizer than its creator. I'm not really sure what would motivate a superintelligence to create entities like me, but given the assumption that one did so, it doesn't seem more likely that it created me with (approximately) its own morality than that it created me with some different morality.
Replies from: None
↑ comment by [deleted] · 2012-01-01T20:52:00.554Z · LW(p) · GW(p)
I take it you don't think we have a chance of creating a superpowerful AI with our own morality?
We don't have to be very intelligent to be a threat if we can create something that is.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2012-01-01T20:56:22.084Z · LW(p) · GW(p)
I don't think we have a chance of doing so if we have a superintelligent creator who has taken steps to prevent us from doing so, no. (I also don't think it likely that we have such a creator.)
↑ comment by Will_Newsome · 2011-12-27T14:13:11.291Z · LW(p) · GW(p)
focus on evidence rather than priors
Bayesians don't believe in evidence silly goose, you know that. Anyway, User:cousin_it, you're essentially right, though I think that LW would benefit less from developing updated arguments and more from reading Aquinas, at least in the counterfactual universe where LW knew how to read. Anyway. In the real world Less Wrong is hopeless. You're not hopeless. As a decision theorist you're trying to find God, so you have to believe in him in a sense, right? And if you're not trying to find God you should probably stay the hell away from FAI projects. Just sayin'.
↑ comment by AspiringKnitter · 2011-12-22T20:11:52.451Z · LW(p) · GW(p)
A really intelligent response, so I upvoted you, even though, as I said, the feeling surprised me by telling me that, just as one example, tarot cards are Bad when I had not even considered the possibility-- so I doubt this came from inside me.
Replies from: None, Will_Newsome
↑ comment by [deleted] · 2011-12-22T20:15:57.195Z · LW(p) · GW(p)
Well, you are obviously not able to predict the output of your own brain; that's the whole point of the brain. If morality is in the brain and still too complex to understand, you would expect to encounter moral feelings that you had not anticipated.
↑ comment by Will_Newsome · 2011-12-27T10:24:15.116Z · LW(p) · GW(p)
A really intelligent response,
Er, I thought it was overall pretty lame, e.g. the whole question-begging w.r.t. the 'prior probability of omnibenevolent omnipowerful thingy' thingy (nothing annoys me more than abuses of probability theory these days, especially abuses of algorithmic probability theory). Perhaps you are conceding too much in order to appear reasonable. Jesus wasn't very polite.
By the way, in case you're not overly familiar with the heuristics and biases literature, let me give you a hint: it sucks. At least the results that most folk around here cite have basically nothing to do with rationality. There's some quite good stuff with tons of citations, e.g. Gigerenzer's, but Eliezer barely mentioned it to Less Wrong (as fastandfrugal.com, which he endorsed) and therefore, as expected, Less Wrong doesn't know about it. (Same with interpretations of quantum mechanics, as Mitchell Porter often points out. I really hope that Eliezer is pulling some elaborate prank on humanity. Maybe he's doing it unwittingly.)
Anyway the upshot is that when people tell you about 'confirmation bias' as if it existed in the sense they think it does then they probably don't know what the hell they're talking about and you should ignore them. At the very least don't believe them until you've investigated the literature yourself. I did so and was shocked at how downright anti-informative the field is, and less shocked but still shocked at how incredibly useless statistics is (both Bayesianism as a theoretical normative measure and frequentism as a practical toolset for knowledge acquisition). The opposite happened with the parapsychology literature, i.e. low prior, high posterior. Let's just say that it clearly did not confirm my preconceptions; lolol.
Lastly, towards the esoteric end: All roads lead to Rome, if you'll pardon a Catholicism. If they don't, it's not because the world is mad qua mad; it is because it is, alas, sinful. An easy way to get to hell is to fall into a fully-general-counterargument black hole, or a literal black hole maybe. Those things freak me out.
(P.S. My totally obnoxious arrogance is mostly just a passive aggressive way of trolling LW. I'm not actually a total douchebag IRL. /recursive-compulsive-self-justification)
Replies from: Bongo, Will_Newsome
↑ comment by Will_Newsome · 2011-12-27T13:18:03.326Z · LW(p) · GW(p)
I love how Less Wrong basically thinks that all evidence that doesn't support its favored conclusion is bad because it just leads to confirmation bias. "The evidence is on your side, granted, but I have a fully general counterargument called 'confirmation bias' that explains why it's not actually evidence!" Yeah, confirmation bias, one of the many claimed cognitive biases that arguably doesn't actually exist. (Eliezer knew about the controversy, which is why his post is titled "Positive Bias", which arguably also doesn't exist, especially not in a cognitively relevant way.) Then they talk about Occam's razor while completely failing to understand what algorithmic probability is actually saying. Hint: It definitely does not say that naturalistic mechanistic universes are a priori more probable! It's like they're trolling and I'm not supposed to feed them but they look sort of like a very hungry, incredibly stupid puppy.
Replies from: Bongo, Jack, Bongo, lessdazed
↑ comment by Bongo · 2011-12-27T20:19:45.130Z · LW(p) · GW(p)
confirmation bias ... doesn't actually exist.
Explain?
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2011-12-29T00:50:13.557Z · LW(p) · GW(p)
http://library.mpib-berlin.mpg.de/ft/gg/gg_how_1991.pdf is exemplary of the stuff I'm thinking of. Note that that paper has about 560 citations. If you want to learn more then dig into the literature. I really like Gigerenzer's papers as they're well-cited and well-reasoned, and he's a statistician. He even has a few papers about how to improve rationality, e.g. http://library.mpib-berlin.mpg.de/ft/gg/GG_How_1995.pdf has over 1,000 citations.
Replies from: dlthomas
↑ comment by dlthomas · 2011-12-29T01:09:13.635Z · LW(p) · GW(p)
Searching and skimming, the first link does not seem to actually say that confirmation bias does not exist. It says that it does not appear to be the cause of "overconfidence bias" - it seems to take no position on whether it exists otherwise.
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2011-12-29T01:14:37.698Z · LW(p) · GW(p)
Okay, yeah, I was taking a guess. There are other papers that talk about confirmation/positive bias specifically, a lot of it in the vein of this kind of stuff. Maybe Kaj's posts called 'Heuristics and Biases Biases?' from here on LW reference some relevant papers too. Sorry, I have limited cognitive resources at the moment; I'm mostly trying to point in the general direction of the relevant literature because there's quite a lot of it.
↑ comment by Jack · 2011-12-29T00:32:57.837Z · LW(p) · GW(p)
Hint: It definitely does not say that naturalistic mechanistic universes are a priori more probable!
Hard to know whether to agree or disagree without knowing "more probable than what?"
Replies from: Will_Newsome, dlthomas
↑ comment by Will_Newsome · 2011-12-29T00:41:40.907Z · LW(p) · GW(p)
Sorry. More probable than supernaturalistic universes of the sort that the majority of humans finds more likely (where e.g. psi phenomena exist).
Replies from: Jack
↑ comment by Jack · 2011-12-29T04:56:29.220Z · LW(p) · GW(p)
So I think you're quite right in that "supernatural" and "natural" are sets that contain possible universes of very different complexity, and that those two adjectives are not obviously relevant to the complexity of the universes they describe. I support tabooing those terms. But if you compare two universes, one of which is described most simply by the wave function and an initial state, and another which is described by the wave function, an initial state, and another section of code describing the psychic powers of certain agents, then the latter universe is a priori more unlikely (bracketing for the moment the simulation issue). Obviously, if psi phenomena can be incorporated into the physical model without adding additional lines of code, that's another matter entirely.
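In prior-odds terms (my gloss, reusing the same description-length prior as above; L and b are illustrative symbols, not from the comment): if the bare-physics program is L bits and the psi module costs b extra bits, then

P(\text{physics} + \text{psi}) \,/\, P(\text{physics}) \approx 2^{-(L+b)} / 2^{-L} = 2^{-b}

and the penalty vanishes only in the case just noted, where the psi code adds no net length to the description.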
Returning to the simulation issue, I take your position to be that there are conceivable "meta-physics" (meant literally; not necessarily referring to the branch of philosophy) which can make local complexities more common? Is that a fair restatement? I have a suspicion that this is not possible without paying the complexity back at the other end, though I'm not sure.
↑ comment by lessdazed · 2011-12-27T15:58:36.433Z · LW(p) · GW(p)
Anyway the upshot is that when people tell you about 'confirmation bias' as if it existed in the sense they think it does then they probably don't know what the hell they're talking about and you should ignore them.
...
I love how Less Wrong basically thinks that all evidence that doesn't support its favored conclusion is bad because it just leads to confirmation bias. "The evidence is on your side, granted, but I have a fully general counterargument called 'confirmation bias' that explains why it's not actually evidence!" Yeah, confirmation bias, one of the many claimed cognitive biases that arguably doesn't actually exist.
What was said that's a synonym for or otherwise invoked the confirmation bias?
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2011-12-29T00:45:57.184Z · LW(p) · GW(p)
It's mentioned a few times in this thread re AspiringKnitter's evidence for Christianity. I'm too lazy to link to them, especially as it'd be so easy to get the answer to your question with control+f "confirmation" that I'm not sure I interpreted it correctly?
↑ comment by Bugmaster · 2011-12-23T16:38:49.728Z · LW(p) · GW(p)
Just to echo the others that brought this up, I applaud your courage; few people have the guts to jump into the lions' den, as it were. That said, I'm going to play the part of the lion (*) on this topic.
I suddenly became capable of feeling two new sensations, neither of which I'd felt before and neither of which, so far as I know, has words in English to describe it.
How do you know that these sensations come from a supernatural entity, and not from your own brain? I know that if I started experiencing odd physical sensations, no matter how pleasant, this would be my first hypothesis (especially since, in my personal case, the risk of stroke is higher than average). In fact, if I experienced anything that radically contradicted my understanding of the world, I'd probably consider the following explanations, in order of decreasing likelihood:
- I am experiencing some well-known cognitive bias.
- My brain is functioning abnormally and thus I am experiencing hallucinations.
- Someone is playing a prank on me.
- Shadowy human agencies are testing a new chemical/biological/emissive device on me.
- A powerful (yet entirely material) alien is inducing these sensations, for some reason.
- A trickster spirit (such as a Kami, or the Coyote, etc.) is doing the same by supernatural means.
- A localized god is to blame (Athena, Kali, the Earth Mother, etc.)
- An omniscient, omnipotent, and generally all-everything entity is responsible.
This list is not exhaustive, obviously; it's just some stuff I came up with off the top of my head. Each bullet point is less probable than the one before it, and thus I'd have to reject pretty much every other explanation before arriving at "the Christian God exists".
(*) Or a bobcat, at least.
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-23T20:27:42.746Z · LW(p) · GW(p)
I am experiencing some well-known cognitive bias.
Is either of those well-known? What about the pattern with which they're felt? Sound like anything you know? Me neither.
My brain is functioning abnormally and thus I am experiencing hallucinations.
That don't have any other effect? That remain stable for years? With no other sign of mental illness? Besides, if I set out by assuming that I can't tell anything because I'm crazy anyway, what good does that do me? It doesn't tell me what to predict. It doesn't tell me what to do. All it tells me is "expect nothing and believe nothing". If I assume it's just these hallucinations and everything else is normal, then I run into "my brain is functioning abnormally and I am experiencing hallucinations that tell me Christian doctrine is true even when I don't know the doctrine in question", which is the original problem you're trying to explain.
A trickster spirit (such as a Kami, or the Coyote, etc.) is doing the same by supernatural means.
And instead of messing with me like a real trickster, it convinces me to worship something other than it and in so doing increases my quality of life?
(*) Or a bobcat, at least.
You've read xkcd?
Replies from: Bugmaster, None, dlthomas
↑ comment by Bugmaster · 2011-12-24T03:59:39.185Z · LW(p) · GW(p)
Is either of those well-known? What about the pattern with which they're felt? Sound like anything you know? Me neither.
In addition to dlthomas's suggestion of the affect heuristic, I'd suggest something like the ideomotor effect amplified by confirmation bias.
However, there's a reason I put "cognitive bias" as the first item on my list: I believe that it is overwhelmingly more likely than any alternatives. Thus, it would take a significant amount of evidence to convince me that I'm not laboring under such a bias, even if the bias does not yet have a catchy name.
That don't have any other effect? That remain stable for years? With no other sign of mental illness?
AFAIK some brain cancers can present this way. In any case, if I started experiencing unusual physical symptoms all of a sudden, I'd consult a medical professional. Then I'd write down the results of his tests, and consult a different medical professional, just in case. Better safe than sorry.
And instead of messing with me like a real trickster, it convinces me to worship something other than it and in so doing increases my quality of life?
Trickster spirits (especially Tanuki or Kitsune) rarely demand worship; messing with people is enough for them. Some such spirits are more or less benign; the Tanuki and Raven both would probably be on board with the idea of tricking a human into improving his or her life.
That said, you skipped over human agents and aliens, both of which are IMO overwhelmingly more likely to exist than spirits (though that doesn't make them likely to exist in absolute terms).
You've read xkcd?
Hasn't everyone? :-)
↑ comment by dlthomas · 2011-12-23T20:30:10.793Z · LW(p) · GW(p)
Is either of those well-known? What about the pattern with which they're felt? Sound like anything you know? Me neither.
It sounds a little like the affect heuristic.
↑ comment by juliawise · 2012-01-04T23:03:23.819Z · LW(p) · GW(p)
AspiringKnitter, what do you think about people who have sensory experiences that indicate that some other religion or text is correct?
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2012-01-05T00:01:40.678Z · LW(p) · GW(p)
Do they actually exist?
Replies from: Nornagest, juliawise
↑ comment by Nornagest · 2012-01-05T00:15:53.949Z · LW(p) · GW(p)
Well, as best I can tell my maintainer didn't install the religion patch, so all I'm working with is the testaments of others; but I have seen quite a variety of such testaments. Buddhism and Hinduism have a typology of religious experience much more complex than anything I've seen systematically laid down in mainline Christianity; it's usually expressed in terms unique to the Dharmic religions, but vipassanā for example certainly seems to qualify as an experiential pointer to Buddhist ontology.
If you'd prefer Western traditions, a phrase I've heard kicked around in the neopagan, reconstructionist, and ceremonial magic communities is "unsubstantiated personal gnosis". While that's a rather flippant way of putting it, it also seems to point to something similar to your experiences.
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2012-01-05T00:47:29.952Z · LW(p) · GW(p)
Huh, interesting. I should study that in more depth, then.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-01-05T00:56:55.691Z · LW(p) · GW(p)
Careful, you may end up like Draco in HPMoR chapter 23, without a way to gom jabbar the guilty parties (sorry about the formatting):
"You should have warned me," Draco said. His voice rose. "You should have warned me!"
"I... I did... every time I told you about the power, I told you about the price. I said, you have to admit you're wrong. I said this would be the hardest path for you. That this was the sacrifice anyone had to make to become a scientist. I said, what if the experiment says one thing and your family and friends say another -"
"You call that a warning?" Draco was screaming now. "You call that a warning? When we're doing a ritual that calls for a permanent sacrifice?"
"I... I..." The boy on the floor swallowed. "I guess maybe it wasn't clear. I'm sorry. But that which can be destroyed by the truth should be."
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2012-01-05T01:56:20.650Z · LW(p) · GW(p)
Nah, false beliefs are worthless. That which is true is already so; owning up to it doesn't make it worse. If I turned out to actually be wrong-- well, I have experience being wrong about religion. I'd probably react just like I did before.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-01-05T02:49:15.645Z · LW(p) · GW(p)
I have experience being wrong about religion. I'd probably react just like I did before.
Feel free to elaborate or link if you have talked about it before.
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2012-01-05T03:33:07.050Z · LW(p) · GW(p)
I used to be an atheist before realizing that was incorrect. I wasn't upset about that; I had been wrong, I stopped being wrong. Is that enough?
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-01-05T03:38:36.823Z · LW(p) · GW(p)
Intriguing. I wonder what made you see the light.
↑ comment by Prismattic · 2011-12-29T01:39:01.803Z · LW(p) · GW(p)
A universe without God has no innate morality. The only thing that could make morality would be human preference, which changes an awful lot.
God does not solve this problem.
Replies from: ESRogs
↑ comment by ESRogs · 2012-01-04T22:29:37.180Z · LW(p) · GW(p)
It sounded like she was already coming down on the side of the good being good because it is commanded by God when she said, "an innate morality that seems, possibly, somewhat arbitrary."
So maybe the dilemma is not such a problem for her.
↑ comment by TheOtherDave · 2011-12-22T19:16:00.806Z · LW(p) · GW(p)
I can understand your hesitation about telling that story. Thanks for sharing it.
Some questions, if you feel like answering them:
Can you give me some examples of things you hadn't known Christian doctrine considered Bad before you sensed them as A?
If you were advising someone who lacks the ability to sense Good and Bad directly on how to have accurate beliefs about what's Good and Bad, what advice would you give? (It seems to follow from what you've said elsewhere that simply telling them to believe Christianity isn't sufficient, since lots of people sincerely believe they are following the directive to "believe Christianity" and yet end up believing Bad things. It seems something similar applies to "believe the New Testament". Or does it?)
If you woke up tomorrow and you experienced sensation A in situations that were consistent with Christianity being true, and experienced sensation B in situations that were consistent with Islam being true, what would you conclude about the world based on those experiences?
EDIT: My original comment got A and B reversed. Fixed.
↑ comment by [deleted] · 2011-12-22T20:09:41.124Z · LW(p) · GW(p)
.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-12-22T21:50:18.586Z · LW(p) · GW(p)
I think that should probably be AspiringKnitter's call. (I don't think you're pushing too hard, given the general norms of this community, but I'm not sure of what our norms concerning religious discussions are.)
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-22T22:16:26.127Z · LW(p) · GW(p)
If you want it to be my call, then I say go ahead.
↑ comment by [deleted] · 2011-12-22T19:07:10.933Z · LW(p) · GW(p)
And I certainly wouldn't expect to get a Good signal on the Bible but a Bad signal on other holy books.
Do you currently get a "Bad" signal on other holy books?
Replies from: TimS
↑ comment by TimS · 2011-12-22T19:10:21.362Z · LW(p) · GW(p)
Do you get it when you don't know it's another holy book?
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-22T19:34:13.303Z · LW(p) · GW(p)
Let's try that! I got a Bad signal on the Koran and a website explaining the precepts of Wicca, but I knew what both of those were. I would be up for trying a test where you give me quotes from the Christian Bible (warning: I might recognize them; if so, I'll tell you-- for what it's worth, I've only read part of Ezekiel but might recognize the story anyway, and I've read a lot of the Bible, actually), from other holy books, and from neutral sources like novels (though I might have read those, too; I'll tell you if I recognize them), without telling me where they're from. If it's too difficult to find Biblical quotes, other Christian writings might serve, as could similar writings from other religions. I should declare up front that I know next to nothing about Hinduism but once got a weak Good reading from what someone said about it. Also, I would prefer longer quotes; the feelings build up from unnoticeable, rather than hitting full-force instantly. If they could be at least as long as a chapter of the Bible, that would be good.
That is, if you're actually proposing that we test this. If you didn't really want to, sorry. It just seems cool.
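For what it's worth, a test like this is easy to score cleanly if all verdicts are written down before any sources are revealed (DSimon's note-taking point above). A minimal scoring harness, as a sketch-- the passages, labels, and expected verdicts below are made-up placeholders, not anything agreed on in the thread:

import random

# (passage text, true source) pairs; placeholders for real quotes.
PASSAGES = [
    ("quote text 1...", "Bible"),
    ("quote text 2...", "Koran"),
    ("quote text 3...", "novel"),
]
# What the hypothesis under test predicts for each source.
EXPECTED = {"Bible": "Good", "Koran": "Bad", "novel": "Neutral"}

random.shuffle(PASSAGES)  # hide any ordering cue

results = []
for text, source in PASSAGES:
    print(text)
    verdict = input("Good / Bad / Neutral? ").strip()
    recognized = input("Recognized it? (y/n) ").strip().lower() == "y"
    results.append((source, verdict, recognized))

# Score only unrecognized passages, and only after every verdict is
# recorded, so misses can't be adjusted into hits in hindsight.
scored = [(s, v) for s, v, r in results if not r]
hits = sum(EXPECTED[s] == v for s, v in scored)
print(f"{hits}/{len(scored)} verdicts matched the hypothesis")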
Replies from: TheOtherDave, lavalamp, lavalamp, TimS, TimS, dlthomas, Bugmaster, lavalamp
↑ comment by TheOtherDave · 2011-12-22T20:31:43.634Z · LW(p) · GW(p)
Upvoted for the willingness to test, and in general for being a good sport.
↑ comment by lavalamp · 2011-12-22T20:20:05.166Z · LW(p) · GW(p)
Try this one:
The preparatory prayer is made according to custom.
The first prelude will be a certain historical consideration of ___ on the one part, and __ on the other, each of whom is calling all men to him, to be gathered together under his standard.
The second is, for the construction of the place, that there be represented to us a most extensive plain around Jerusalem, in which ___ stands as the Chief-General of all good people. Again, another plain in the country of Babylon, where ___ presents himself as the captain of the wicked and [God's] enemies.
The third, for asking grace, will be this, that we ask to explore and see through the deceits of the evil captain, invoking at the same time the Divine help in order to avoid them; and to know, and by grace be able to imitate, the sincere ways of the true and most excellent General, ___ .
The first point is, to imagine before my eyes, in the Babylonian plain, the captain of the wicked, sitting in a chair of fire and smoke, horrible in figure, and terrible in countenance.
The second, to consider how, having assembled a countless number of demons, he disperses them through the whole world in order to do mischief; no cities or places, no kinds of persons, being left free.
The third, to consider what kind of address he makes to his servants, whom he stirs up to seize, and secure in snares and chains, and so draw men (as commonly happens) to the desire of riches, whence afterwards they may the more easily be forced down into the ambition of worldly honour, and thence into the abyss of pride.
Thus, then, there are three chief degrees of temptation, founded in riches, honours, and pride; from which three to all other kinds of vices the downward course is headlong.
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-22T20:41:11.684Z · LW(p) · GW(p)
If I had more of the quote, it would be easier. I get a weak Bad feeling, and the textual cues suggest it probably comes from either the Talmud or the Koran. While I think it is Bad, I'm not getting a strong feeling on this quote, so I worry that I could be confused by my guess as to where it comes from.
But I'm going to stick my neck out anyway; I feel like it's Bad.
Replies from: lavalamp, TimS
↑ comment by TimS · 2011-12-22T20:44:01.717Z · LW(p) · GW(p)
If I had more of the quote, it would be easier. I get a weak Bad feeling, and the textual cues suggest it probably comes from either the Talmud or the Koran. While I think it is Bad, I'm not getting a strong feeling on this quote, so I worry that I could be confused by my guess as to where it comes from. But I'm going to stick my neck out anyway; I feel like it's Bad.
I think it's here
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-22T20:45:31.787Z · LW(p) · GW(p)
I admit to being surprised that this is a Christian writing.
↑ comment by lavalamp · 2011-12-22T21:27:45.063Z · LW(p) · GW(p)
What do you think of this; it's a little less obscure:
Your wickedness makes you as it were heavy as lead, and to tend downwards with great weight and pressure towards hell; and if [God] should let you go, you would immediately sink and swiftly descend and plunge into the bottomless gulf, and your healthy constitution, and your own care and prudence, and best contrivance, and all your righteousness, would have no more influence to uphold you and keep you out of hell, than a spider's web would have to stop a falling rock. Were it not that so is the sovereign pleasure of [God], the earth would not bear you one moment; for you are a burden to it; the creation groans with you; the creature is made subject to the bondage of your corruption, not willingly; the sun don't willingly shine upon you to give you light to serve sin and [the evil one]; the earth don't willingly yield her increase to satisfy your lusts; nor is it willingly a stage for your wickedness to be acted upon; the air don't willingly serve you for breath to maintain the flame of life in your vitals, while you spend your life in the service of [God]'s enemies. [God]'s creatures are good, and were made for men to serve [God] with, and don't willingly subserve to any other purpose, and groan when they are abused to purposes so directly contrary to their nature and end. And the world would spew you out, were it not for the sovereign hand of him who hath subjected it in hope. There are the black clouds of [God]'s wrath now hanging directly over your heads, full of the dreadful storm, and big with thunder; and were it not for the restraining hand of [God] it would immediately burst forth upon you. The sovereign pleasure of [God] for the present stays his rough wind; otherwise it would come with fury, and your destruction would come like a whirlwind, and you would be like the chaff of the summer threshing floor.
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-22T21:41:09.104Z · LW(p) · GW(p)
Bad? I think Bad, but wish I had more of the quote.
Replies from: lavalamp, TimS, hairyfigment
↑ comment by hairyfigment · 2011-12-23T00:39:12.258Z · LW(p) · GW(p)
Huh! How about this:
…the mysterious (tablet)… is surrounded by an innumerable company of angels; these angels are of all kinds, — some brilliant and flashing…, down to…. The light comes and goes on the tablet; and now it is steady...
And now there comes an Angel, to hide the tablet with his mighty wing. This Angel has all the colours mingled in his dress; his head is proud and beautiful; his headdress is of silver and red and blue and gold and black, like cascades of water, and in his left hand he has a pan-pipe of the seven holy metals, upon which he plays. I cannot tell you how wonderful the music is, but it is so wonderful that one only lives in one's ears; one cannot see anything any more.
Now he stops playing and moves with his finger in the air. His finger leaves a trail of fire of every colour, so that the whole Aire is become like a web of mingled lights. But through it all drops dew.
(I can't describe these things at all. Dew doesn't represent what I mean in the least. For instance, these drops of dew are enormous globes, shining like the full moon, only perfectly transparent, as well as perfectly luminous.) ... All this while the dewdrops have turned into cascades of gold finer than the eyelashes of a little child. And though the extent of the Aethyr is so enormous, one perceives each hair separately, as well as the whole thing at once. And now there is a mighty concourse of angels rushing toward me from every side, and they melt upon the surface of the egg in which I am standing __, so that the surface of the egg is all one dazzling blaze of liquid light.
Now I move up against the tablet, — I cannot tell you with what rapture. And all the names of __, that are not known even to the angels, clothe me about. All the seven senses are transmuted into one sense, and that sense is dissolved in itself ...
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-23T01:46:53.389Z · LW(p) · GW(p)
Neutral/no idea.
Replies from: TimS
↑ comment by TimS · 2011-12-23T03:25:38.324Z · LW(p) · GW(p)
Neutral/no idea.
This is it
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-23T03:29:25.016Z · LW(p) · GW(p)
Huh. Odd.
Replies from: hairyfigment
↑ comment by hairyfigment · 2011-12-23T03:48:14.302Z · LW(p) · GW(p)
Yes, I was trying to figure out how much of the feeling had to do with lack of Hell (answer: not all of it). The Tarot does fit the pattern.
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-23T04:25:19.236Z · LW(p) · GW(p)
? I'm confused.
Replies from: hairyfigment
↑ comment by hairyfigment · 2011-12-23T05:12:47.893Z · LW(p) · GW(p)
Good for you. ^_^
You had a Bad feeling about two Christian quotes that mentioned Hell or demons/hellfire. You also got a Good feeling about a quote from Nietzsche that didn't mention Hell. I don't know the context of your reactions to the Tarot and Wicca, but obviously people have linked those both to Hell. (See also Horned God, "Devil" trump.) So I wanted to get your reaction to a passage with no mention of Hell from an indeterminate religion, in case that sufficed to make it seem Good.
The author designed a famous Tarot deck, and inspired a big chunk (at minimum) of Wicca.
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-23T06:39:55.298Z · LW(p) · GW(p)
I hadn't considered that hypothesis. I'd upvote for the novel theory, but now that you've told me, you'll never be able to trust further reactions that could confirm or deny it, which seems like it's worth a downvote, so I'm not voting your post up or down. That said, I think this fails to explain having a Bad reaction to this page and the entire site it's on, despite thinking before reading it that Wicca was foofy nonsense and completely not expecting to find evil of that magnitude (a really, really strong feeling-- none of the quotes you guys have asked me about have been even a quarter that bad). It wasn't slow, either; unlike most other things, it was almost immediately obvious. (The fact that this has applied to everything else I've ever read about Wicca since-- at least, everything written by Wiccans about their own religion-- could have to do with expectation, so I can see where you wouldn't regard subsequent reactions as evidence... but the first one, at least, caught me totally off-guard.)
I know who Crowley is. (It was his tarot deck that someone gave me as a gift-- and I was almost happy about it, because I'd actually been intending to research tarot because it seemed cool and I meant to use the information for a story I was writing. But then I felt like, you know, Bad, so I didn't end up using it.) That's why I was surprised not to have a bad feeling about his writings.
↑ comment by TimS · 2011-12-22T21:25:31.326Z · LW(p) · GW(p)
One more, then I'll stop.
Man is a rope tied between beast and [superior man] - a rope over an abyss. A dangerous across, a dangerous on-the-way, a dangerous looking-back, a dangerous shuddering and stopping.
What is great in man is that he is a bridge and not a goal: what is lovable in man is that he is an overture and a going under.
I love those that know not how to live except by going under, for they are those who cross over.
I love the great despisers, because they are the great reverers, and arrows of longing for the other shore.
I love those who do not first seek a reason beyond the stars for going under and being sacrifices, but sacrifice themselves to the earth, that the earth may some day become the [superior man’s].
I love him who lives to know, and wants to know so that the [superior man] may live some day. Thus he wants to go under.
I love him who works and invents to build a house for the [superior man] and to prepare earth, animal, and plant for him: for thus he wants to go under.
I love him who loves his virtue: for virtue is the will to go under, and an arrow of longing.
I love him who does not hold back one drop of spirit for himself, but wants to be entirely the spirit of his virtue: thus he strides over the bridge as spirit.
I love him who makes his virtue his addiction and catastrophe: for his virtue’s sake he wants to live on and to live no longer.
I love him who does not want to have too many virtues. One virtue is more virtue than two, because it is more of a noose on which his catastrophe may hang.
I love him whose soul squanders itself, who wants no thanks and returns none: for he always gives away, and does not want to preserve himself.
I love him who is abashed when the dice fall to make his fortune, and who asks: "Am I a crooked gambler?” For he wants to perish.
I love him who casts golden words before his deed, and always does more than he promises: for he wants to go under.
I love him who justifies future and redeems past generations: for he wants to perish of the present.
I love him who chastens his God, because he loves his God: for he must perish of the wrath of his God.
I love him whose soul is deep even in being wounded, and who can perish of a small experience: thus he gladly goes over the bridge.
I love him whose soul is so overfull that he forgets himself, and all things are in him: thus all things spell his going under.
I love him who has a free spirit and a free heart: thus his head is only the entrails of his heart, but his heart causes him to go under.
I love all who are as heavy drops, falling one by one out of the dark cloud that hangs over men: they herald the advent of lightning, and, as heralds, they perish.
Behold, I am a herald of the lightning, and a heavy drop from the cloud: but this lightning is called [superior man].
Replies from: Kaj_Sotala, AspiringKnitter, Multiheaded, soreff, dlthomas
↑ comment by Kaj_Sotala · 2011-12-25T20:30:06.940Z · LW(p) · GW(p)
I know very little about Nietzsche, but I recognized this instantly because the first three lines were quoted in Sid Meier's Alpha Centauri. :-)
↑ comment by AspiringKnitter · 2011-12-22T21:34:04.814Z · LW(p) · GW(p)
I get a moderate Good reading (?!) and I'm confused to get it because the morality the person is espousing seems wrong. I'm guessing this comes from someone's writings about their religion, possibly an Eastern religion?
Replies from: TimS
↑ comment by TimS · 2011-12-22T21:40:39.087Z · LW(p) · GW(p)
I get a moderate Good reading (?!) and I'm confused to get it because the morality the person is espousing seems wrong. I'm guessing this comes from someone's writings about their religion, possibly an Eastern religion?
Walter Kaufmann (Nietzsche's translator here) prefers "overman" as the best translation of Übermensch.
ETA: This is some interesting commentary on the work
Replies from: AspiringKnitter
↑ comment by AspiringKnitter · 2011-12-22T21:45:35.307Z · LW(p) · GW(p)
I'm surprised. I'd heard Nietzsche was not a nice person, but had also heard good things about him... huh. I'll have to read his work, now. I wonder if the library has some.
Replies from: TimS, Nornagest
↑ comment by TimS · 2011-12-22T21:53:22.600Z · LW(p) · GW(p)
Nietzsche's sister was an anti-semite and a German nationalist. After Nietzsche's death, she edited his works into something that became an intellectual foundation for Nazism. Thus, he got a terrible reputation in the English-speaking world.
It's tolerably clear from a reading of his unabridged works that Nietzsche would have hated Nazism. But he would not have identified himself as Christian (at least as measured by a typical American today). He went mad before he died, and the apocryphal tale is that the last thing he did before being institutionalized was to see a horse being beaten on the street and move to protect it.
To see his moral thought, you could read Thus Spake Zarathustra. To see why he isn't exactly Christian, you can look at The Genealogy of Morals. Actually, you might also like Kierkegaard, because he expresses somewhat similar thoughts, but within a Christian framework.
Replies from: pragmatist↑ comment by pragmatist · 2011-12-22T22:00:08.428Z · LW(p) · GW(p)
To really see why he isn't Christian, read The Antichrist.
The Christian conception of God -- God as god of the sick, God as a spider, God as spirit -- is one of the most corrupt conceptions of the divine ever attained on earth... God as the declaration of war against life, against nature, against the will to live! God -- the formula for every slander against "this world," for every lie about the "beyond"! God -- the deification of nothingness, the will to nothingness pronounced holy!
Replies from: TimS
↑ comment by TimS · 2011-12-22T22:09:09.565Z · LW(p) · GW(p)
As with what he wrote in Genealogy of Morals, it is unclear how tongue-in-cheek or intentionally provocative Nietzsche is being. I'm honestly not sure whether Nietzsche thought the "master morality" was better or worse than the "slave morality."
Replies from: Nornagest↑ comment by Nornagest · 2012-01-05T00:04:23.031Z · LW(p) · GW(p)
The sense I get -- but note that it's been a couple of years since I've read any substantial amount of Nietzsche -- is that he treats master morality as more honest, and perhaps what we could call psychologically healthier, than slave morality, but does not advocate that the former be adopted over the latter by people living now; the transition between the two is usually explained in terms of historical changes. The morality embodied by his superior man is neither, or a synthesis of the two, and while he says a good deal about what it's not I don't have a clear picture of many positive traits attached to it.
Replies from: Oligopsony↑ comment by Oligopsony · 2012-06-08T01:13:36.851Z · LW(p) · GW(p)
The morality embodied by his superior man is neither, or a synthesis of the two, and while he says a good deal about what it's not I don't have a clear picture of many positive traits attached to it.
That's because the superman, by definition, invents his own morality. If you read a book telling you the positive content of morality and implement it because the eminent philosopher says so, you ain't superman.
↑ comment by Nornagest · 2012-01-04T23:32:32.683Z · LW(p) · GW(p)
I wouldn't call him a fully sane person, especially in his later work (he suffered in later life from mental problems most often attributed to neurosyphilis, and it shows), but he has a much worse reputation than I think he really deserves. I'd recommend Genealogy of Morals and The Gay Science; they're both laid out a bit more clearly than the works he's most famous for, which tend to be heavily aphoristic and a little scattershot.
↑ comment by Multiheaded · 2012-01-07T02:12:17.981Z · LW(p) · GW(p)
It's easy to find an equally forceful bit by Nietzsche that's not been quoted to death, really. Had AK recognized it, you would've botched a perfectly good test.
↑ comment by TimS · 2011-12-22T19:52:46.685Z · LW(p) · GW(p)
Because I'm curious:
Fairly read as a whole and in the context of the trial, the instructions required the jury to find that Chiarella obtained his trading advantage by misappropriating the property of his employer's customers. The jury was charged that,
"[i]n simple terms, the charge is that Chiarella wrongfully took advantage of information he acquired in the course of his confidential position at Pandick Press and secretly used that information when he knew other people trading in the securities market did not have access to the same information that he had at a time when he knew that that information was material to the value of the stock."
Record 677 (emphasis added). The language parallels that in the indictment, and the jury had that indictment during its deliberations; it charged that Chiarella had traded "without disclosing the material non-public information he had obtained in connection with his employment." It is underscored by the clarity which the prosecutor exhibited in his opening statement to the jury. No juror could possibly have failed to understand what the case was about after the prosecutor said:
"In sum, what the indictment charges is that Chiarella misused material nonpublic information for personal gain and that he took unfair advantage of his position of trust with the full knowledge that it was wrong to do so. That is what the case is about. It is that simple."
Id. at 46. Moreover, experienced defense counsel took no exception and uttered no complaint that the instructions were inadequate in this regard. [Therefore, the conviction is due to be affirmed].
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T20:09:19.864Z · LW(p) · GW(p)
I get no reading here. My guess is that this is some sort of legal document, in which case I'm not really surprised to get no reading. Is that correct?
Replies from: TimS↑ comment by TimS · 2011-12-22T20:17:34.909Z · LW(p) · GW(p)
Yes, it is a legal document. Specifically, a dissent from the reversal of a criminal conviction. In particular, I think the quoted text is an incredibly immoral and wrong-headed understanding of American criminal law. Which makes it particularly depressing that the writer was Chief Justice when he wrote it.
↑ comment by dlthomas · 2011-12-22T19:36:13.538Z · LW(p) · GW(p)
With, I assume, the names changed? Otherwise it seems too easy :-P
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T19:49:15.058Z · LW(p) · GW(p)
Yes, where names need to be changed. [God] will be sufficient to confuse me as to whether it's "the LORD" or "Allah" in the original source material. There might be a problem with substance in very different holy books where I might be able to guess the religion just by what they're saying (like if they talk about reincarnation or castes, I'll know they're Hindu or Buddhist). I hope anyone finding quotes will avoid those, of course.
↑ comment by Bugmaster · 2012-01-07T02:39:41.481Z · LW(p) · GW(p)
This is a bit off-topic, but, out of curiosity, is there anything in particular that you find objectionable about Wicca on a purely analytical level ? I'm not saying that you must have such a reason, I'm just curious.
Just in the interests of pure disclosure, the reason I ask is because I found Wicca to be the least harmful religion among all the religions I'd personally encountered. I realize that, coming from an atheist, this doesn't mean much, of course...
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T02:44:11.904Z · LW(p) · GW(p)
Assuming you mean besides the fact that it's wrong (by both meanings-- incorrect and sinful), then no, nothing at all.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-01-07T02:54:10.032Z · LW(p) · GW(p)
I'm actually not entirely sure what you mean by "incorrect", and how it differs from "sinful". As an atheist, I would say that Wicca is "incorrect" in the same way that every other religion is incorrect, but presumably you'd disagree, since you're religious.
Some Christians would say that Wicca is both "incorrect" and "sinful" because its followers pray to the wrong gods, since a). YHVH/Jesus is the only God who exists, thus worshiping other (nonexistent) gods is incorrect, and b). he had expressly commanded his followers to worship him alone, and disobeying God is sinful. In this case, though, the "sinful" part seems a bit redundant (since Wiccans would presumably worship Jesus if they were convinced that he existed and their own gods did not). But perhaps you meant something else ?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T03:09:06.990Z · LW(p) · GW(p)
I mean incorrect in that they believe things that are wrong, yes; they believe in, for instance, a goddess who doesn't really exist. And sinful because witchcraft is forbidden.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-01-07T03:20:16.796Z · LW(p) · GW(p)
And sinful because witchcraft is forbidden.
Wouldn't this imply that witchcraft is effective, though ? Otherwise it wouldn't be forbidden; after all, God never said (AFAIK), "you shouldn't pretend to cast spells even though they don't really work", nor did he forbid a bunch of other stuff that is merely silly and a waste of time. But if witchcraft is effective, it would imply that it's more or less "correct", which is why I was originally confused about what you meant.
FWIW, I feel compelled to point out that some Wiccans believe in multiple gods or none at all, even though this is off-topic -- since I can practically hear my Wiccan acquaintances yelling at me in the back of my head... metaphorically speaking, that is.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T03:49:30.006Z · LW(p) · GW(p)
Wouldn't this imply that witchcraft is effective, though ?
Yes.
some Wiccans believe in multiple gods or none at all
Which is still wrong.
Replies from: Bugmaster↑ comment by Bugmaster · 2012-01-07T03:57:43.463Z · LW(p) · GW(p)
Wouldn't this imply that witchcraft is effective, though ? Yes.
Ok, but in that case, isn't witchcraft at least partially "correct" ? Otherwise, how can they cast all those spells and make them actually work (assuming, that is, that their spells actually do work) ?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T04:04:20.092Z · LW(p) · GW(p)
By consorting with demons.
Replies from: Bugmaster, taelor↑ comment by Bugmaster · 2012-01-07T04:24:38.730Z · LW(p) · GW(p)
Ah, right, so you believe that the entities that Wiccans worship do in some way exist, but that they are actually demons, not benign gods.
I should probably point out at this point that Wiccans (well, at least those whom I'd met), consider this point of view utterly misguided and incredibly offensive. No one likes to be called a "demon-worshiper", especially when one is generally a nice person whose main tenet in life is a version of "do no harm". You probably meant no disrespect, but flat-out calling a whole group of people "demon-worshipers" tends to inflame passions rather quickly, and not in a good way.
Replies from: AspiringKnitter, wedrifid, MixedNuts, None↑ comment by AspiringKnitter · 2012-01-07T04:43:08.266Z · LW(p) · GW(p)
I should probably point out at this point that Wiccans (well, at least those whom I'd met), consider this point of view utterly misguided and incredibly offensive.
That's a bizarre thing to say. Is their offense evidence that I'm wrong? I don't think so; I'd expect it whether or not they worship demons. Or should I believe something falsely because the truth is offensive? That would go against my values-- and, dare I say it, the suggestion is offensive. ;) Or do you want me to lie so I'll sound less offensive? That risks harm to me (it's forbidden by the New Testament) and to them (if no one ever tells them the truth, they can't learn), as well as not being any fun.
No one likes to be called a "demon-worshiper",
What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away.
especially when one is generally a nice person whose main tenet in life is a version of "do no harm".
Nice people like that deserve truth, not lies, especially when eternity is at stake.
flat-out calling a whole group of people "demon-worshipers" tends to inflame passions rather quickly,
So does calling people Cthulhu-worshipers. But when you read that article, you agreed that it was apt, right? Because you think it's true. You guys sure seem quick to tell me that my beliefs are offensive, but if I said the same to you, you'd understand why that's beside the point. If Wiccans worship demons, I desire to believe that Wiccans worship demons; if Wiccans don't worship demons, I desire to believe that Wiccans don't worship demons. Sure, it's offensive and un-PC. If you want me to stop believing it, tell me why you think it's wrong.
Replies from: Yvain, occlude, Bugmaster↑ comment by Scott Alexander (Yvain) · 2012-01-07T07:23:08.975Z · LW(p) · GW(p)
I like your post (and totally agree with the first paragraph), but have some concerns that are a little different from Bugmaster's.
What's the exact difference between a god and a demon? Suppose Wicca is run by a supernatural being (let's call her Astarte) who asks her followers to follow commendable moral rules, grants their petitions when expressed in the ritualistic form of spells, and insists she will reward the righteous and punish the wicked. You worship a different supernatural being who also asks His followers to follow commendable moral rules, grants their petitions when expressed in the ritualistic form of prayer, and insists He will reward the righteous and punish the wicked. If both Jehovah and Astarte exist and act similarly, why name one "a god" and the other "a demon"? Really, the only asymmetry seems to be that Jehovah tries to inflict eternal torture on people who prefer Astarte, whereas Astarte has made no such threats against people who prefer Jehovah, which is honestly advantage Astarte. So why not just say "Of all the supernatural beings out there, some people prefer this one and other people prefer that one"?
I mean, one obvious answer is certainly to list the ways Jehovah is superior to Astarte - the one created the Universe, the other merely lives in it; the one is all-powerful, the other merely has some magic; the one is wise and compassionate, the other evil and twisted. But all of these are Jehovah's assertions. One imagines Astarte makes different assertions to her followers. The question is whose claims to believe.
Jehovah has a record of making claims which seem to contradict the evidence from other sources - the seven-day creation story, for example. And He has a history of doing things which, when assessed independently of their divine origin, we would consider immoral - the Massacre of the Firstborn in Exodus, or sanctioning the rape, enslavement, infanticide, and genocide of the Canaanites. So it doesn't seem obvious at all that we should trust His word over Astarte's, especially since you seem to think that Astarte's main testable claim - that she does magic for her followers - is true.
Now, you've already said that you believe in Christianity because of direct personal revelation - a sense of serenity and rightness when you hear its doctrines, and a sense of repulsion from competing religions, and that this worked even when you didn't know what religion you were encountering and so could not bias the result. I upvoted you when you first posted this because I agree that such feelings could provide some support for religious belief. But that was before you said you believed in competing supernatural beings. Surely you realize how difficult a situation that puts you in?
Giving someone a weak feeling of serenity or repulsion is, as miracles go, not a very flashy one. One imagines it would take only simple magic, and should be well within the repertoire of even a minor demon or spirit. And you agree that Astarte performs minor miracles of the same caliber all the time to try to convince her own worshippers. So all that your feelings indicate is that some supernatural being is trying to push you toward Christianity. If you already believe that there are multiple factions of supernatural beings, some of whom push true religions and others of whom push false ones, then noticing that some supernatural being is trying to push you toward Christianity provides zero extra evidence that Christianity is true.
Why should you trust the supernatural beings who have taken an interest in your case, as opposed to the supernatural beings apparently from a different faction who caused the seemingly miraculous revelations in this person and this person's lives?
Replies from: AspiringKnitter, Alejandro1↑ comment by AspiringKnitter · 2012-01-07T20:51:02.944Z · LW(p) · GW(p)
Since you use the names Jehovah and Astarte, I'll follow suit, though they're not the names I prefer.
The difference would be that if worship of Jehovah gets you eternal life in heaven, and worship of Astarte gets you eternal torture and damnation, then you should worship Jehovah and not Astarte. Also, if Astarte knows this, but pretends otherwise, then Astarte's a liar.
If you already believe that there are multiple factions of supernatural beings, some of whom push true religions and others of whom push false ones, then noticing that some supernatural being is trying to push you toward Christianity provides zero extra evidence that Christianity is true.
Not quite. I only believe in "multiple factions of supernatural beings" (actually only two) because it's implied by Christianity being true. It's not a prior belief. If Christianity is false, one or two or fifteen or zero omnipotent or slightly-powerful or once-human or monstrous gods could exist, but if Christianity is false I'd default to atheism, since if my evidence for Christianity proved false (say, I hallucinated it all because of some undiagnosed mental illness that doesn't resemble any currently-known mental illness and only causes that one symptom) without my gaining additional evidence for some other religion or non-atheist cosmology, I'd have no evidence for anything spiritual. Or do I misunderstand? I'm confused.
Why should you trust the supernatural beings who have taken an interest in your case, as opposed to the supernatural beings apparently from a different faction who caused the seemingly miraculous revelations in this person and this person's lives?
Being, singular, first of all.
I already know myself, what kind of a person I am. I know how rational I am. I know how non-crazy I am. I know exactly the extent to which I've considered illness affecting my thoughts as a possible explanation.
I know I'm not lying.
The first person became an apostate, something I've never done, and is still confused years later. The second person records only the initial conversion, while I know how it's played out in my own life for several years.
The second person is irrationally turned off by even the mere appearance of Catholicism and Christianity in general because of terrible experiences with Catholics.
I discount all miracle stories from people I don't know, including Christian and Jewish miracle stories, which could at least plausibly be true. I discount them ALL when I don't know the person. In fact, that means MOST of the stories I hear and consider unlikely (without passing judgment when I have so little info) are stories that, if true, essentially imply Christianity, while others would provide evidence for it.
And knowing how my life has gone, I know how I've changed as a person since accepting Jesus, or Jehovah if that's the word you prefer. They don't mention drastic changes to their whole personalities to the point of near-unrecognizability even to themselves. In brief: I was unbelievably awful. I was cruel, hateful, spiteful, vengeful and not a nice person. I was actively hurtful toward everyone, including immediate family. After finding Jesus, I slowly became a less horrible person, until I got to where I am now. Self-evaluation may be somewhat unreliable, but I think the lack of any physical violence recently is a good sign. Also, rather than escalating arguments as far as possible, when I realize I've lashed out, I deliberately make an effort not to fall prey to consistency bias and defend my actions, but to stop and apologize and calm down. That's something I would not have done-- would not have WANTED to do, would not have thought was a good idea, before.
And you agree that Astarte performs minor miracles of the same caliber all the time to try to convince her own worshippers.
I don't know (I only guess) what Astarte does to xyr worshipers. I'm conjecturing; I've never prayed to xem, nor have I ever been a Wiccan or any other type of non-Christian religion. But I think I ADBOC this statement; if said by me, it would have sounded more like "Satan makes xyrself look very appealing".
(I'm used to a masculine form for this being. You're using a feminine form. Rather than argue, I've simply shifted my pronoun usage to an accurate-- possibly more accurate-- and less loaded set of pronouns.)
Also, my experience suggests that if something is good or evil, and you're open to the knowledge, you'll see through any lies or illusions with time. It might be a lot of time-- I'll confess I recently got suckered into something for, I think, a couple of years, when I really ought to have known better much sooner, and no, I don't want to talk about it-- but to miss it forever requires deluding yourself.
(Not, as we all know, that self-delusion is particularly rare...)
So all that your feelings indicate is that some supernatural being is trying to push you toward Christianity.
That someone is trying to convince me to be a Christian or that I perceive the nature of things using an extra sense.
Giving someone a weak feeling of serenity or repulsion is, as miracles go, not a very flashy one.
Strength varies. Around the time I got to the fourth Surah of the Koran, it was much flashier than anything I've seen since, including everything previously described (on the negative side) at incredible strength plus an olfactory hallucination. And the result of, I think, two days straight of Bible study and prayer at all times constantly... well, that was more than a weak feeling of serenity. But on its own it'd be pretty weak evidence, because I was only devoting so much time to prayer because my state of mind was so volatile and my thoughts and feelings were unreliable. It's only repetitions of that effect that let me conclude that it means what I've already listed, after controlling for other possibilities that are personal so I don't want to talk about it. Those are rare extremes, though; normally it's not as flashy as those.
you seem to think that Astarte's main testable claim - that she does magic for her followers - is true.
I consider it way likelier than you do, anyway. I'm only around fiftyish percent confidence here. But that's only one aspect of it. Their religion also claims to cause changes in its followers along the lines of "more in tune with the Divine" or something, right? So if there are any overlapping claims about morality, that would also be testable-- NOT absolute morality of the followers, but change in morality on mutually-believed-in traits, measuring before and after conversion, then a year on, then a few years on, then several years on. Of course, I'm not sure how you'll ever get the truth about how moral people are when they think no one's watching...
Replies from: Yvain, TheOtherDave, soreff, DSimon, Anubhav, wedrifid↑ comment by Scott Alexander (Yvain) · 2012-01-08T17:36:17.055Z · LW(p) · GW(p)
Sorry - I used "Astarte" and the female pronoun because the Wiccans claim to worship a Goddess, and Astarte was the first female demon I could think of. If we're going to go gender-neutral, I recommend "eir", just because I think it's the most common gender neutral pronoun on this site and there are advantages to standardizing this sort of thing.
The difference would be that if worship of Jehovah gets you eternal life in heaven, and worship of Astarte gets you eternal torture and damnation, then you should worship Jehovah and not Astarte.
Well, okay, but this seems to be an argument from force, sort of "Jehovah is a god and Astarte a demon because if I say anything else, Jehovah will torture me". It seems to have the same form as "Stalin is not a tyrant, because if I call Stalin a tyrant, he will kill me, and I don't want that!"
Not quite. I only believe in "multiple factions of supernatural beings" (actually only two) because it's implied by Christianity being true.
It sounds like you're saying the causal history of your belief should affect the probability of it being true.
Suppose before you had any mystical experience, you had non-zero probabilities X of atheism, Y of Christianity (in which God promotes Christianity and demons promote non-Christian religions like Wicca), and Z of any non-Christian religion (in which God promotes that religion and demons promote Christianity).
Then you experience an event which you interpret as evidence for a supernatural being promoting Christianity. This should raise the probabilities of Y and Z by the same factor, leaving their ratio unchanged, since both theories seem to predict this equally well.
You could still end up a Christian if you started off with a higher probability Y than Z, but it sounds like you weren't especially interested in Christianity before your mystical experience, and the prior for Z is higher than Y since there are so many more non-Christian than Christian religions.
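A minimal worked version of this update, with made-up numbers for the priors and likelihoods (nothing here is claimed in the thread; it only shows the structure of the argument):
```python
# Toy Bayes update: X = atheism, Y = Christianity, Z = a non-Christian
# religion whose god deceptively pushes people toward Christianity.
# All numbers are hypothetical, chosen only to illustrate the point.
priors = {"X": 0.90, "Y": 0.04, "Z": 0.06}

# Likelihood of "something supernatural pushes me toward Christianity"
# under each hypothesis: ~0 under atheism, equal under Y and Z.
likelihood = {"X": 0.0, "Y": 0.5, "Z": 0.5}

p_evidence = sum(priors[h] * likelihood[h] for h in priors)
posterior = {h: priors[h] * likelihood[h] / p_evidence for h in priors}

print(posterior)  # {'X': 0.0, 'Y': 0.4, 'Z': 0.6}
# Y and Z were each multiplied by the same factor (0.5 / 0.05 = 10),
# so their ratio is unchanged: the experience alone cannot favor Y over Z.
```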
Being, singular, first of all...
I understand you as having two categories of objections: first, objections that the specific people in the Islamic conversion stories are untrustworthy or their stories uninteresting (3,4,6). Second, that you find mystical experiences by other people inherently hard to believe but you believe your own because you are a normal sane person (1,2,5).
The first category of objections apply only to those specific people's stories. That's fair enough since those were the ones I presented, but they were the ones I presented because they were the first few good ones I found in the vast vast vast vast VAST Islamic conversion story literature. I assume that if you were to list your criteria for believability, we could eventually find some Muslim who experienced a seemingly miraculous conversion who fit all of those criteria (including changing as a person) - if it's important to you to test this, we can try.
The second category of objections is more interesting. Different studies show somewhere from a third to half of Americans having mystical experiences, including about a third of non-religious people who have less incentive to lie. Five percent of people experience them "regularly". Even granted that some of these people are lying and other people categorize "I felt really good" as a mystical experience, I don't think denying that these occur is really an option.
The typical view that people need to be crazy, or on the brink of death, or uneducated, or something other than a normal middle class college-educated WASP adult in order to have mystical experiences also breaks down before the evidence. According to Greeley 1975 and Hay and Morisy 1976, well-educated upper class people are more likely to have mystical experiences, and Hay and Morisy 1978 found that people with mystical experiences are more likely to be mentally well-balanced.
Since these experiences occur with equal frequency among people of all religion and even atheists, I continue to think this supports either the "natural mental process" idea or the "different factions of demons" idea - you can probably guess which one I prefer :)
Also, my experience suggests that if something is good or evil, and you're open to the knowledge, you'll see through any lies or illusions with time.
There are 1.57 billion Muslims and 2.2 billion Christians in the world. Barring something very New-Agey going on, at least one of those groups believes an evil lie. The number of Muslims who convert to Christianity at some point in their lives, or vice versa, is only a tiny fraction of a percent. So either only a tiny fraction of a percent of people are open to the knowledge - so tiny that you could not reasonably expect yourself to be among them - or your experience has just been empirically disproven.
(PS: You're in a lot of conversations at once - let me know if you want me to drop this discussion, or postpone it for later)
Replies from: Multiheaded, AspiringKnitter↑ comment by Multiheaded · 2012-01-08T18:52:07.934Z · LW(p) · GW(p)
Speaking of mystical experiences, my religion tutor at the university (an amazing woman, Christian but pretty rational and liberal) had one, as she told us, on public transport one day, and that's when she converted, despite growing up in an atheistic middle-class Soviet family.
Oh, and the closest thing I ever had to one was when I tried sensory deprivation + dissociatives (getting high on cough syrup, then submerging myself in a warm bath with lights out and ears plugged; I had a timer set to 40 minutes and a thin ray of light falling where I could see it by turning my head, as a precaution against, y'know, losing myself). That experiment was both euphoric and interesting, but I wouldn't really want to repeat it. I experienced blissful ego death and a feeling of the universe spinning round and round in cycles, around where I should have been, but where there was now nothing. It's hard to describe.
And then, well, I saw the tiny, shining shape of Rei Ayanami. She was standing in her white plugsuit amidst the blasted ruins on a dead alien world, and I got the feeling that she was there to restore it to life. She didn't look at me, but I knew she knew I saw her. Then it was over.
Fret not, I didn't really make any more bullshit out of that, but it's certainly an awesome moment to remember.
↑ comment by AspiringKnitter · 2012-01-08T22:57:04.642Z · LW(p) · GW(p)
Second, that you find mystical experiences by other people inherently hard to believe but you believe your own because you are a normal sane person (1,2,5).
Unless I know them already. Once I already know people for honest, normal, sane people ("normal" isn't actually required and I object to the typicalist language), their miracle stories have the same weight as my own. Also, miracles of more empirically-verifiable sorts are believable when vetted by snopes.com.
If we're going to go gender-neutral, I recommend "eir", just because I think it's the most common gender neutral pronoun on this site and there are advantages to standardizing this sort of thing.
Xe is poetic and awesome. I'm hoping it'll become standard English. To that end, I use it often.
(including changing as a person)
I read your first link and I'm very surprised because I didn't expect something like that. It would be interesting to talk to that person about this.
So either only a tiny fraction of a percent of people are open to the knowledge - so tiny that you could not reasonably expect yourself to be among them -
Is that surprising? First of all, I know that I already converted to Christianity, rather than just having assumed it always, so I'm already more likely to be open to new facts. And second, I thought it was common knowledge around these parts that most people are really, really bad at finding the truth. How many people know Bayes? How many know what confirmation bias is? Anchoring? The Litany of Tarski? Don't people on this site rail against how low the sanity waterline is? I mean, you don't disagree that I'm more rational than most Christians and Muslims, right?
Different studies show somewhere from a third to half of Americans having mystical experiences, including about a third of non-religious people who have less incentive to lie. Five percent of people experience them "regularly".
Do they do this by using tricks like Multiheaded described? Or by using mystical plants or meditation? (I know there are Christians who think repeating a certain prayer as a mantra and meditating on it for a long time is supposed to work... and isn't there, or wasn't there, some Islamic sect where people try to find God by spinning around?) If so, that really doesn't count. Is there another study where that question was asked? Because if you're asserting that mystical experiences can be artificially induced by such means in most if not all people, then we're in agreement.
Well, okay, but this seems to be an argument from force, sort of "Jehovah is a god and Astarte a demon because if I say anything else, Jehovah will torture me". It seems to have the same form as "Stalin is not a tyrant, because if I call Stalin a tyrant, he will kill me, and I don't want that!"
I was thinking more along the lines of "going to hell is a natural consequence of worshiping Astarte", analogous to "if I listen to my peers and smoke pot, I won't be able to sing, whereas if I listen to my mother and drink lots of water, I will; therefore, my mother is right and listening to my peers is bad". I hadn't even considered it from that point of view before.
Replies from: Yvain, Prismattic, Estarlio, TheOtherDave, fortyeridania, soreff, None↑ comment by Scott Alexander (Yvain) · 2012-01-10T04:54:35.038Z · LW(p) · GW(p)
Is that surprising? ... Don't people on this site rail against how low the sanity waterline is? I mean, you don't disagree that I'm more rational than most Christians and Muslims, right?
No, I suppose it's not surprising. I guess I misread the connotations of your claim. Although I am still not certain I agree: I know some very rational and intelligent Christians, and some very rational and intelligent atheists (I don't really know many Muslims, so I can't say anything about them). At some point I guess this statement is true by definition, since we can define open-minded as "open-minded enough to convert religion if you have good enough evidence to do so." But I can't remember where we were going with this one so I'll shut up about it.
Do they do this by using tricks like Multiheaded described? Or by using mystical plants or meditation? (I know there are Christians who think repeating a certain prayer as a mantra and meditating on it for a long time is supposed to work... and isn't there, or wasn't there, some Islamic sect where people try to find God by spinning around?) If so, that really doesn't count. Is there another study where that question was asked? Because if you're asserting that mystical experiences can be artificially induced by such means in most if not all people, then we're in agreement.
I was unable to find numerical data on this. I did find some assertions in the surveys that some of the mystical experiences were untriggered; I found one study comparing 31 people with triggered mystical experiences to 31 people with untriggered ones (suggesting it's not too hard to get a sample of the latter), and I have heard anecdotes from people I know about having untriggered mystical experiences.
Honestly I had never really thought of that as an important difference. Keep in mind that it's really weird that the brain responds to relatively normal stressors, like fasting or twirling or staying still for too long, by producing this incredible feeling of union with God. Think of how surprising this would be if you weren't previously aware of it, how complex a behavior this is, as opposed to something simpler like falling unconscious. The brain seems to have this built-in, surprising tendency to have mystical experiences, which can be triggered by a lot of different things.
As someone in the field of medicine, I'm reminded of the case of seizures, another unusual mental event which can be triggered under similar conditions. Doctors have this concept called the "seizure threshold". Some people have low seizure thresholds, other people high seizure thresholds. Various events - taking certain drugs, getting certain diseases, being very stressed, even seeing flashing lights in certain patterns - increase your chance of having a seizure, until it passes your personal seizure threshold and you have one. And then there are some people - your epileptics - who can just have seizures seemingly out of nowhere in the course of everyday life (another example is that some lucky people can induce orgasm at will, whereas most of us only achieve orgasm after certain triggers).
I see mystical experiences as working a lot like seizures - anyone can have one if they experience enough triggers, and some people experience them without any triggers at all. It wouldn't be at all parsimonious to say that some people have this reaction when they skip a few meals, or stay in the dark, or sit very still, and other people have this reaction when they haven't done any of these things, but these are caused by two completely different processes.
I mean, if we already know that dreaming up mystical experiences is the sort of thing the brain does in some conditions, it's a lot easier to expand that to "and it also does that in other conditions" than to say "but if it happens in other conditions, it is proof of God and angels and demons and an entire structure of supernatural entities."
I was thinking more along the lines of "going to hell is a natural consequence of worshiping Astarte", analogous to "if I listen to my peers and smoke pot, I won't be able to sing, whereas if I listen to my mother and drink lots of water, I will; therefore, my mother is right and listening to my peers is bad". I hadn't even considered it from that point of view before.
The (relatively sparse) Biblical evidence suggests an active role of God in creating Hell and damning people to it. For example:
"This is how it will be at the end of the age. The angels will come and separate the wicked from the righteous and throw them into the blazing furnace, where there will be weeping and gnashing of teeth." (Matthew 13:49)
"Depart from me, you accursed, into the eternal fire that has been prepared for the devil and his angels!" (Matthew 25:41)
"If anyone’s name was not found written in the book of life, that person was thrown into the lake of fire." (Revelations 20:15)
"God did not spare angels when they sinned, but sent them to hell, putting them into gloomy dungeons to be held for judgment" (2 Peter 2:4)
"Fear him who, after the killing of the body, has power to throw you into hell. Yes, I tell you, fear him." (Luke 12:5)
That last one is particularly, um, pleasant. And it's part of why it is difficult for me to see a moral superiority of Jehovah over Astarte: of the one who's torturing people eternally, over the one who fails to inform you that her rival is torturing people eternally.
↑ comment by Prismattic · 2012-01-08T23:45:51.458Z · LW(p) · GW(p)
I was thinking more along the lines of "going to hell is a natural consequence of worshiping Astarte", analogous to "if I listen to my peers and smoke pot, I won't be able to sing, whereas if I listen to my mother and drink lots of water, I will; therefore, my mother is right and listening to my peers is bad". I hadn't even considered it from that point of view before.
To return to something I pointed out far, far back in this thread, this is not analogous. Your mother does not cause you to lose your voice for doing the things she advises you not to do. On the other hand, you presumably believe that god created hell, or at a minimum, that he tolerates its existence (unless you don't think God is omnipotent).
(As an aside, another point against the homogeneity you mistakenly assumed you would find on Lesswrong when you first showed up is that not everyone here is a complete moral anti-realist. For me, that one cannot hold the following three premises without contradiction is sufficient to discount any deeper argument for Christianity:
- Inflicting suffering is immoral, and inflicting it on an infinite number of people or for an infinite duration is infinitely immoral
- The Christian God is benevolent.
- The Christian God allows the existence of Hell.
Resorting to, "Well, I don't actually know what hell is" is blatant rationalization.)
Replies from: Nornagest↑ comment by Nornagest · 2012-01-09T18:23:55.325Z · LW(p) · GW(p)
You don't actually need to be a moral realist to make that argument; you just need to notice the tension between the set of behavior implied by the Christian God's traditional attributes and the set of behavior Christian tradition claims for him directly. That in itself implies either a contradiction or some very sketchy use of language (i.e. saying that divine justice allows for infinitely disproportionate retribution).
I think it's a weakish argument against anything less than a strictly literalist interpretation of the traditions concerning Hell, though. There are versions of the redemption narrative central to Christianity that don't necessarily involve torturing people for eternity: the simplest one that I know of says that those who die absent a state of grace simply cease to exist ("everlasting life" is used interchangeably with "heaven" in the Bible), although there are interpretations less problematic than that as well.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-09T20:19:55.857Z · LW(p) · GW(p)
The (modern) Orthodox opinion that my tutor relayed to us is that Hell isn't a place at all, but a condition of the soul where it refuses to perceive/accept God's grace at all and therefore shuts itself out from everything true and meaningful that can be, just wallowing in despair; it exists in literally no-where, as all creation is God's, and the refusal of God is the very essence of this state. She dismissed all suggestions of sinners' "torture" in hell - especially by demonic entities - as folk religion.
(Wait, what's that, looks like either I misquoted her a little or she didn't quite give the official opinion...)
http://en.wikipedia.org/wiki/Hell_in_Christian_beliefs#Eastern_Orthodox_concepts_of_hell
One expression of the Eastern teaching is that hell and heaven are being in God's presence, as this presence is punishment and paradise depending on the person's spiritual state in that presence.[29][32] For one who hates God, to be in the presence of God eternally would be the gravest suffering... ...Some Eastern Orthodox express personal opinions that appear to run counter to official church statements, in teaching hell is separation from God.
I has a confused.
Replies from: Nornagest↑ comment by Nornagest · 2012-01-09T20:28:53.267Z · LW(p) · GW(p)
I've heard that one too, but I'm not sure how functionally different from pitchforks and brimstone I'd consider it to be, especially in light of the idea of a Last Judgment common to Christianity and Islam.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-09T20:36:19.235Z · LW(p) · GW(p)
Oh, there's a difference alright, one that could be cynically interpreted as an attempt to dodge the issue of cruel and disproportionate punishment by theologians. The version above suggests that God doesn't ever actively punish anyone at all, He simply refuses to force His way to someone who rejects him, even if they suffer as a result. That's sometimes assumed to be due to God's respect for free will.
Replies from: Nornagest, None↑ comment by Nornagest · 2012-01-09T20:49:37.846Z · LW(p) · GW(p)
Yeah. Thing is, we're dealing with an entity who created the system and has unbounded power within it. Respect for free will is a pretty good excuse, but given that it's conceivable for a soul to be created that wouldn't respond with permanent and unspeakable despair to separation from the Christian God (or to the presence of a God whom the soul has rejected, in the other scenario), making souls that way looks, at best, rather irresponsible.
If I remember right the standard response to that is to say that human souls were created to be part of a system with God at its center, but that just raises further questions.
↑ comment by [deleted] · 2012-01-09T21:56:30.746Z · LW(p) · GW(p)
What, so god judges that eternal torture is somehow preferable to violating someone's free will by inviting them to eutopia?
I am so tired of theists making their god so unable to be falsified that he becomes useless. Let's assume for a moment that some form of god actually exists. I don't care how much he loves us in his own twisted little way, I can think of 100 ways to improve the world and he isn't doing any of them. It seems to me that we ought to be able to do better than what god has done, and in fact we have.
The standard response to theists postulating a god should be "so what?".
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-10T09:04:23.581Z · LW(p) · GW(p)
's cool, bro, relax. I agree completely with that, I'm just explaining what the other side claims.
↑ comment by Estarlio · 2012-01-09T03:01:52.630Z · LW(p) · GW(p)
I mean, you don't disagree that I'm more rational than most Christians and Muslims, right?
Actually, I do. You use the language that rationalists use. However, you don't seem to have considered very many alternative hypotheses. And you don't seem to have performed any of the obvious tests to make sure you're actually getting information out of your evidence.
For instance, you could have just cut up a bunch of similarly formatted stories from different sources, (or even better, have had a third party do it for you, so you don't see it,) stuck them in a box and pulled them out at random - sorting them into Bible and non-Bible piles according to your feelings. If you were getting the sort of information out that would go some way towards justifying your beliefs, you should easily beat random people of equal familiarity with the Bible.
Rationality is a tool, and if someone doesn't use it, then it doesn't matter how good a tool they have; they're not a rationalist any more than someone who owns a gun is a soldier. Rationalists have to actually go out and gather/analyse the data.
(Edit to change you to someone for clarity's sake.)
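A minimal sketch of how such a sorting test might be scored, assuming each quote is an independent 50/50 Bible/non-Bible draw (a simplification; the baseline Estarlio actually proposes is other people of equal familiarity with the Bible, which would shift the chance level above one half):
```python
from math import comb

def p_value_at_least(hits: int, n: int) -> float:
    """Chance of sorting >= hits of n quotes correctly by pure guessing."""
    return sum(comb(n, k) for k in range(hits, n + 1)) / 2 ** n

# Hypothetical result: 32 of 40 quotes sorted correctly.
print(p_value_at_least(32, 40))  # ~9e-05 -- far beyond coin-flip guessing
```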
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-09T04:10:00.535Z · LW(p) · GW(p)
For instance, you could have just cut up a bunch of similarly formatted stories from different sources, (or even better, have had a third party do it for you, so you don't see it,) stuck them in a box and pulled them out at random - sorting them into Bible and non-Bible piles according to your feelings. If you were getting the sort of information out that would go some way towards justifying your beliefs, you should easily beat random people of equal familiarity with the Bible.
No, I couldn't have for two reasons. By the time I could have thought of it I would have recognized nearly all the Bible passages as Biblical and to obscure meaning would require such short quotes I'd never be able to tell. Those are things I already explained-- you know, in the post where I said we should totally test this, using a similar experiment.
Replies from: Estarlio↑ comment by Estarlio · 2012-01-09T04:47:20.478Z · LW(p) · GW(p)
No, I couldn't have for two reasons. By the time I could have thought of it I would have recognized nearly all the Bible passages as Biblical and to obscure meaning would require such short quotes I'd never be able to tell. Those are things I already explained-- you know, in the post where I said we should totally test this, using a similar experiment.
If that's the stance you're going to take, it seems destructive to the idea that I should consider you rational. You proposed a test to verify your belief that could not be performed; in the knowledge that, if it was, it would give misleading results.
Minor points: There's more than just one bible out there. Unless you're a biblical scholar, the odds that there's nothing from a bible that you haven't read are fairly slim.
'nearly all' does leave you with some testable evidence. The odds that it just happens to be too short a test for your truth-sensing faculty to work are, I think, fairly slim.
People tend not to have perfect memories. Even if you are a biblical scholar the odds are that you will make mistakes in this, as you would in anything else, and information gained from the intuitive faculty would show up as a lower error rate than that of like-qualified people.
ETA quote.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-09T04:53:15.064Z · LW(p) · GW(p)
If that's the stance you're going to take, it seems destructive to the idea that I should consider you rational. You proposed a test to verify your belief that could not be performed; in the knowledge that, if it was, it would give misleading results.
Similar test. Not the same test. It was a test that, though still flawed, fixed those two things I could see immediately (and in doing so created other problems).
People tend not to have perfect memories. Even if you are a biblical scholar the odds are that you will make mistakes in this, as you would in anything else, and information gained from the intuitive faculty would be expressed as a lower error rate than like-qualified people.
Want to test this?
Replies from: Estarlio↑ comment by Estarlio · 2012-01-09T16:04:47.594Z · LW(p) · GW(p)
Similar test. Not the same test. It was a test that, though still flawed, fixed those two things I could see immediately (and in doing so created other problems).
I don't see that it would have fixed those things. We could, perhaps, come up with a more useful test if we discussed it on a less hostile footing. But, at the moment, I'm not getting a whole lot of info out of the exchange and don't think it worth arguing with you over quite why your test wouldn't work, since we both agree that it wouldn't.
Want to test this?
Not really. It's not that sort of thing where the outputs of the test would have much value for me. I could easily get 100% of the quotes correct by sticking them into google, as could you. The only answers we could accept with any significant confidence would be the ones we didn't think the other person was likely to lie about.
My beliefs in respect to claims about the supernatural are held with a high degree of confidence, and pushing them some tiny distance towards the false end of the spectrum is not worth the hours I would have to invest.
↑ comment by TheOtherDave · 2012-01-09T01:44:01.808Z · LW(p) · GW(p)
If so, that really doesn't count.
If you can say more about why deliberately induced mystical experiences don't count, but other kinds do, I'd be interested.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-09T02:14:30.833Z · LW(p) · GW(p)
For the same reason that if I had a see-an-image-of-Grandpa button, and pushed it, I wouldn't count the fact that I saw him as evidence that he's somehow still alive, but if I saw him right now spontaneously, I would.
Replies from: occlude, TheOtherDave, TimS↑ comment by occlude · 2012-01-09T03:26:56.568Z · LW(p) · GW(p)
For the same reason that if I had a see-an-image-of-Grandpa button, and pushed it, I wouldn't count the fact that I saw him as evidence that he's somehow still alive, but if I saw him right now spontaneously, I would.
Imagine that you have a switch in your home which responds to your touch by turning on a lamp (this probably won't take much imagination). One day this lamp, which was off, suddenly and for no apparent reason turns on. Would you assign supernatural or mundane causes to this event?
Now this isn't absolute proof that the switch wasn't turned on by something otherworldly; perhaps it responds to both mundane and supernatural causes. But, well, if I may be blunt, Occam's Razor. If your best explanations are "the Hand of Zeus" and "Mittens, my cat," then ...
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-09T04:25:11.978Z · LW(p) · GW(p)
I assume much the same things about this as any other sense: it's there to give information about the world, but trickable. I mean, how tired you feel is a good measure of how long it's been since you've slept, but you can drink coffee and end up feeling more energetic than is merited. So if I want to be able to tell how much sleep I really need, I should avoid caffeine. That doesn't mean the existence of caffeine makes your subjective feelings of your own energy level arbitrary or worthless.
Replies from: occlude↑ comment by occlude · 2012-01-09T06:35:52.143Z · LW(p) · GW(p)
I assume much the same things about this as any other sense: it's there to give information about the world, but trickable.
Interestingly, this sounds like the way that I used to view my own spiritual experiences. While I can't claim to have ever had a full-blown vision, I have had powerful, spontaneous feelings associated with prayer and other internal and external religious stimuli. I assumed that God was trying to tell me something. Later, I started to wonder why I was also having these same powerful feelings at odd times clearly not associated with religious experiences, and in situations where there was no message for me as far as I could tell.
On introspection, I realized that I associated this with God because I'd been taught by people at church to identify this "frisson" with spirituality. At the time, it was the most accessible explanation. But there was no other reason for me to believe that explanation over a natural one. That I was getting data that seemed to contradict the "God's spirit" hypothesis eventually led to an update.
↑ comment by TheOtherDave · 2012-01-09T16:40:41.339Z · LW(p) · GW(p)
Unfortunately, the example you're drawing the analogy to is just as unclear to me as the original example I'd requested an explanation of.
I mean, I agree that seeing an image of my dead grandfather isn't particularly strong evidence that he's alive. Indeed, I see images of dead relatives on a fairly regular basis, and I continue to believe that they're dead. But I think that's equally true whether I deliberately invoked such an image, or didn't.
I get that you think it is evidence that he's alive when the image isn't deliberately invoked, and I can understand how the reason for that would be the same as the reason for thinking that a mystical experience "counts" when it isn't deliberately invoked, but I am just as unclear about what that reason is as I was to start with.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-09T19:15:39.673Z · LW(p) · GW(p)
If I suddenly saw my dead grandpa standing in front of me, that would be sufficiently surprising that I'd want an explanation. It's not sufficiently strong to make me believe by itself, but I'd say hello and see if he answered, and if he sounded like my grandpa, and then tell him he looks like someone I know and see the reaction, and if he reacts like Grandpa, I touch him to ascertain that he's corporeal, then invite him to come chat with me until I wake up, and assuming that everything else seems non-dream-like (I'll eventually have to read something, providing an opportunity to test whether or not I'm dreaming, plus I can try comparing physics to how they should be, perhaps by trying to fly), I'd tell my mom he's here.
Whereas if I had such a button, I'd ignore the image, because it wouldn't be surprising. I suppose looking at photographs is kind of like the button.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-01-09T19:26:05.545Z · LW(p) · GW(p)
Well, wait up. Now you're comparing two conditions with two variables, rather than one.
That is, not only is grandpa spontaneous in case A and button-initiated in case B, but also grandpa is a convincing corporeal facsimile of your grandpa in case A and not any of those things in case B. I totally get how a convincing facsimile of grandpa would "count" where an unconvincing image wouldn't (and, by analogy, how a convincing mystical experience would count where an unconvincing one wouldn't), but that wasn't the claim you started out making.
Suppose you discovered a button that, when pressed, created something standing in front of you that looked like your dead grandpa, sounded and reacted like your grandpa, chatted with you like you believe your grandpa would, etc. Would you ignore that?
It seems like you're claiming that you would, because it wouldn't be surprising... from which I infer that mystical experiences have to be surprising to count (which had been my original question, after all). But I'm not sure I properly understood you.
For my own part, if I'm willing to believe that my dead grandpa can come back to life at all, I can't see why the existence of a button that does this routinely should make me less willing to believe it.
↑ comment by TimS · 2012-01-09T03:18:32.995Z · LW(p) · GW(p)
The issue is that there is not a reliable "see-an-image-of-Grandpa button" in existence for mystical experiences. In other words, I'm unaware of any techniques that reliably induce mystical experiences. Since there are no techniques for reliably inducing mystical experiences, there is no basis for rejecting some examples of mystical experience as "unnatural/artificial mystical experiences."
As an aside, if you are still interested in evaluating readings, I would be interested in your take on this one.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-09T04:19:37.578Z · LW(p) · GW(p)
The issue is that there is not a reliable "see-an-image-of-Grandpa button" in existence for mystical experiences. In other words, I'm unaware of any techniques that reliably induce mystical experiences.
↑ comment by fortyeridania · 2012-01-09T17:00:48.774Z · LW(p) · GW(p)
isn't there, or wasn't there, some Islamic sect where people try to find God by spinning around?
Yes: Dervishes.
↑ comment by [deleted] · 2012-01-09T17:16:08.595Z · LW(p) · GW(p)
You've stated that you judge morality on a consequentialist basis. Now you state that going to hell is somehow not equivalent to god torturing you for eternity. What gives?
Also: You believe in god because your belief in god implies that you really ought to believe in god? What? Is that circular or recursively justified? If the latter, please explain.
↑ comment by TheOtherDave · 2012-01-08T00:18:27.634Z · LW(p) · GW(p)
Of course, I'm not sure how you'll ever get the truth about how moral people are when they think no one's watching...
Hidden cameras help. So do setups like "leave a dollar, take a bagel" left in the office kitchen.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-08T00:59:53.672Z · LW(p) · GW(p)
That's a great idea! Now if only we could randomly assign people to convert to either Wicca or Christianity, we'd be all set. Unfortunately...
Replies from: Nornagest↑ comment by Nornagest · 2012-01-08T05:55:05.280Z · LW(p) · GW(p)
It's not exactly rigorous, but you could try leaving bagels at Christian and Wiccan gatherings of approximately the same size and see how many dollars you get back.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-08T05:59:22.029Z · LW(p) · GW(p)
That's an idea, but you'd need to know how they started out. If generally nice people joined one religion and stayed the same, and generally horrible people joined the other and became better people, they might look the same on the bagel test.
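(To make the confound concrete, here is a minimal simulation sketch -- not anyone's actual proposal in this thread, and with all numbers invented for illustration -- showing that a group of nice people who stayed the same and a group of horrible people who improved can produce identical bagel-payment rates:)

```python
import random

def payment_probability(honesty):
    """Chance a person leaves a dollar for a bagel, clamped to [0, 1]."""
    return min(max(honesty, 0.0), 1.0)

def bagel_test(baseline_honesty, change_after_conversion, n=100_000, seed=0):
    """Fraction of n converts who pay; honesty varies a little per person."""
    rng = random.Random(seed)
    paid = 0
    for _ in range(n):
        honesty = baseline_honesty + change_after_conversion + rng.gauss(0, 0.1)
        if rng.random() < payment_probability(honesty):
            paid += 1
    return paid / n

# Group A: nice people (baseline 0.8) whose religion changed nothing (+0.0).
# Group B: horrible people (baseline 0.3) whose religion improved them (+0.5).
print(f"Group A pays {bagel_test(0.8, 0.0, seed=1):.1%} of the time")
print(f"Group B pays {bagel_test(0.3, 0.5, seed=2):.1%} of the time")
# Both print roughly 80%: a single cross-sectional bagel test cannot
# distinguish "started nice" from "became nice" without baseline data.
```

The longitudinal designs proposed in the following comments get around this by measuring the same people before and after conversion.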
Replies from: Nornagest↑ comment by Nornagest · 2012-01-08T06:02:19.268Z · LW(p) · GW(p)
True. You could control for that by seeing if established communities are more or less prone to stealing bagels than younger ones, but that would take a lot more data points.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-08T06:10:30.388Z · LW(p) · GW(p)
Indeed. Or you could test the people themselves individually. What if you got a bunch of very new converts to various religions, possibly more than just Christianity and Wicca, and tested them on the bagels and gave them a questionnaire containing some questions about morals and some about their conversion and some decoys to throw them off, then called them back again every year for the same tests, repeating for several years?
Replies from: Nornagest↑ comment by Nornagest · 2012-01-08T06:16:51.686Z · LW(p) · GW(p)
I don't really trust self-evaluation for questions like this, unfortunately -- it's too likely to be confounded by people's moral self-image, which is exactly the sort of thing I'd expect to be affected by a religious conversion. Bagels would still work, though.
Actually, if I were designing a study like this, I think I'd sign a bunch of people up, ostensibly for longitudinal evaluation on a completely different topic -- and leave a basket of bagels in the waiting room.
Replies from: AspiringKnitter, DSimon↑ comment by AspiringKnitter · 2012-01-08T06:33:45.683Z · LW(p) · GW(p)
What about a study ostensibly of the health of people who convert to new religions? Bagels in the waiting room, new converts, random not-too-unpleasant medical tests for no real reason? Repeat yearly?
The moral questionnaire would be interesting because people's own conscious ethics might reflect something cool and if you're gonna test it anyway... but on the other hand, yeah. I don't trust them to evaluate how moral they are, either. But if people signal what they believe is right, then that means you do know what they think is good. You could use that to see a shift from no morals at all to believing morals are right and good to have. And just out of curiosity, I'd like to see if they shifted from deontologist to consequentialist ethics, or vice versa.
Replies from: Nornagest, TheOtherDave↑ comment by TheOtherDave · 2012-01-08T15:24:52.209Z · LW(p) · GW(p)
People don't necessarily signal what they think is right; sometimes they signal attitudes they think other people want them to possess. Admittedly, in a homogeneous environment that can cause people to eventually endorse what they've been signaling.
↑ comment by DSimon · 2012-01-08T06:43:54.343Z · LW(p) · GW(p)
Hm, you'd probably want the bagels to be off in a small side room so that the patients can feel alone while considering whether or not to steal one.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-08T08:05:38.804Z · LW(p) · GW(p)
Yes, definitely. Or in a waiting room. "Oops, sorry, we're running a little late. Wait here in this deserted waiting room till five minutes from now, bye. :)" Otherwise, they might not see them.
↑ comment by soreff · 2012-01-08T03:03:41.814Z · LW(p) · GW(p)
The difference would be that if worship of Jehovah gets you eternal life in heaven, and worship of Astarte gets you eternal torture and damnation, then you should worship Jehovah and not Astarte. Also, if Astarte knows this, but pretends otherwise, then Astarte's a liar.
Or perhaps neither Jehovah nor Astarte knows now who will dominate in the end, and any promises either makes to any followers are, ahem, over-confident? :-) There was a line I read somewhere about how all generals tell their troops that their side will be victorious...
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-08T03:09:23.069Z · LW(p) · GW(p)
So you're assuming both sides are in a duel, and that the winner will send xyr worshipers to heaven and the loser's worshipers to hell? Because I was not.
Replies from: MixedNuts, soreff↑ comment by MixedNuts · 2012-01-08T03:17:15.626Z · LW(p) · GW(p)
Only Jehovah. He says that he's going to send his worshipers to heaven and Astarte's to hell. Astarte says neither Jehovah nor she will send anyone anywhere. Either one could be a liar, or they could be in a duel and each describing what happens if xe wins.
↑ comment by soreff · 2012-01-08T03:31:42.479Z · LW(p) · GW(p)
Only as a hypothetical possibility. (From such evidence as I've seen I don't think either really exists. And I have seen a fair number of Wiccan ceremonies - which seem like reasonably decent theater, but that's all.) One could construe some biblical passages as predicting some sort of duel - and if one believed those passages, and that interpretation, then the question of whether one side was overstating its chances would be relevant.
↑ comment by DSimon · 2012-01-08T00:46:42.148Z · LW(p) · GW(p)
I know how non-crazy I am. I know exactly the extent to which I've considered illness affecting my thoughts as a possible explanation.
Maybe I'm lacking context, but I'm not sure why you bring this up. Has anyone here described religious beliefs as being characteristically caused by mental illness? I'd be concerned if they had, since such a statement would be (a) incorrect and (b) stigmatizing.
Replies from: katydee, TheOtherDave, AspiringKnitter↑ comment by katydee · 2012-01-08T01:28:58.722Z · LW(p) · GW(p)
Has anyone here described religious beliefs as being characteristically caused by mental illness? I'd be concerned if they had, since such a statement would be (a) incorrect and (b) stigmatizing.
In this post, Eliezer characterized John C. Wright's conversion to Catholicism as the result of a temporal lobe epileptic fit and said that at least some (not sure if he meant all) religious experiences were "brain malfunctions."
Replies from: katydee↑ comment by TheOtherDave · 2012-01-08T02:45:24.961Z · LW(p) · GW(p)
The relevant category is probably not explanations for religious beliefs, but rather explanations of the experiences AK has reported, which, for lack of a better term, I will call extrasensory perception. Most of the people I know who have religious beliefs don't report extrasensory perception, and most of the people I know who report extrasensory perception don't have religious beliefs. (Though of the people I know who do both, a reasonable number ascribe a causal relationship between them. The direction varies.)
↑ comment by AspiringKnitter · 2012-01-08T01:12:17.974Z · LW(p) · GW(p)
Maybe I'm lacking context,
You are. That's the main alternate explanation I can think of.
Replies from: DSimon↑ comment by DSimon · 2012-01-08T02:03:26.526Z · LW(p) · GW(p)
But, mental illness is not required to experience strong, odd feelings or even to "hear voices". Fully-functional human brains can easily generate such things.
Replies from: Nornagest↑ comment by Nornagest · 2012-01-08T05:53:38.137Z · LW(p) · GW(p)
Religious experience isn't usually pathologized in the mainstream (academically or by laypeople) unless it makes up part of a larger pattern of experience that's disruptive to normal life, but that doesn't say much one way or another about LW's attitude toward it.
Replies from: DSimon↑ comment by DSimon · 2012-01-08T06:05:44.204Z · LW(p) · GW(p)
My experience with LW's attitude has been similar, though owing to a different reason. Religion generally seems to be treated here as the result of cognitive bias, same as any number of other poorly setup beliefs.
Though LW does tend to use the word "insane" in a way that includes any kind of irrational cognition, I so far have interpreted that to mostly be slang, not meant to literally imply that all irrational cognition is mental illness (although the symptoms of many mental illnesses can be seen as a subset of irrational cognition).
Replies from: wedrifid↑ comment by wedrifid · 2012-01-08T06:08:11.960Z · LW(p) · GW(p)
Though LW does tend to use the word "insane" in a way that includes any kind of irrational cognition, I so far have interpreted that to mostly be slang, not meant to literally imply mental illness (although the symptoms of many mental illnesses can be seen as a subset of irrational cognition).
Not having certain irrational biases can be said to be a subset of mental illness.
Replies from: DSimon↑ comment by DSimon · 2012-01-08T06:41:11.392Z · LW(p) · GW(p)
How so? I can only think of Straw Vulcan examples. (Or, by "can be said", do you mean to imply that you disagree with the statement?)
Replies from: wedrifid, MixedNuts↑ comment by wedrifid · 2012-01-08T07:34:57.157Z · LW(p) · GW(p)
How so? I can only think of Straw Vulcan examples.
A subset of those diagnosed or diagnosable with high functioning autism and a subset of the features that constitute that label fit this category. Being rational is not normal.
(Or, by "can be said", do you mean to imply that you disagree with the statement?)
I don't affiliate myself with the DSM, nor is it always representative of an optimal way of carving reality. In this case I didn't want to specify one way or the other.
↑ comment by Anubhav · 2012-01-08T06:04:04.237Z · LW(p) · GW(p)
tl;dr for the last two comments (Just to help me understand this; if I misrepresent anyone, please call me out on it.)
Yvain: So you believe in multiple factions of supernatural beings; why do you think Jehovah is the benevolent side? Other gods have done awesome stuff too, and Jehovah's known to do downright evil stuff.
AK: Not multiple factions, just two. As to why I think Jehovah's the good guy.....
And knowing how my life has gone, I know how I've changed as a person since accepting Jesus, or Jehovah if that's the word you prefer. They don't mention drastic changes to their whole personalities to the point of near-unrecognizability even to themselves.
Don't you think that's an unjustified nitpick? Absolutely awful people are rare, people who have revelations are rarer, so obviously absolutely awful people who had revelations have to be extremely difficult to find. So it's not really surprising that two links someone gave you don't mention a story like that.
But I think you're assuming that the hallmark of a true religion is that it drastically increases the morality of its adherents. And that's an assumption you have no grounds for-- all that happened in your case was that the needle of your moral compass swerved from 'absolute scumbag' to 'reasonably nice person'. There's no reason to generalise that and believe that the moral compass of a reasonably nice person would swerve further to 'absolute saint'.
Anyhow, your testable prediction is 'converts to false religions won't show moral improvement'. I doubt there's any data on stuff like that right now (if there is, my apologies), so we have to rely on anecdotal evidence. The problem with that, of course, is that it's notoriously unreliable... If it doesn't show what you want it to show, you can just dismiss it all as lies or outliers or whatever. Doesn't really answer any questions.
And if you're willing to consider that kind of anecdotal evidence, why not other kinds of anecdotal evidence that sound just as convincing?
I discount all miracle stories from people I don't know, including Christian and Jewish miracle stories, which could at least plausibly be true. I discount them ALL when I don't know the person.
How convenient. When it happens to someone else it's a lie/delusion/hallucination, when it happens to you it's a miracle.
And yet... Back to your premise. Even if your personality changed for the better... How does this show in any way that Jehovah's a good guy? Surely even an evil daemon has no use for social outcasts with a propensity for random acts of violence; a normal person would probably serve them better. And how do you answer Yvain's point about all the evil Jehovah has done? How do you know he's the good guy?
....
Everyone else: Why are we playing the "let's assume everything you say is true" game anyway? Surely it'd be more honest to try and establish that her mystical experiences were all hallucinations?
↑ comment by wedrifid · 2012-01-08T02:29:04.821Z · LW(p) · GW(p)
Of course, I'm not sure how you'll ever get the truth about how moral people are when they think no one's watching...
We'll have to ask how God and Santa Claus manage to pull it off.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-08T02:31:05.924Z · LW(p) · GW(p)
I prefer TheOtherDave's idea. Unlike God, we're not omniscient or capable of reading minds. And unlike Santa Claus, we exist.
Replies from: TheOtherDave, wedrifid↑ comment by TheOtherDave · 2012-01-08T02:36:18.000Z · LW(p) · GW(p)
Well, now that you mention it... I infer that if you read someone's user page and got sensation A or B off of it, you would consider that evidence about the user's morality. Yes? No?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-08T02:50:02.782Z · LW(p) · GW(p)
Yes. But it would be more credible to other people, and make for a publishable study, if we used some other measure. It'd also be more certain that we'd actually get information.
↑ comment by Alejandro1 · 2012-01-07T08:03:05.060Z · LW(p) · GW(p)
Obviously I can't speak for AK, but maybe she believes that she has been epistemically lucky. Compare the religious case:
"I had this experience which gave me evidence for divinity X, so I am going to believe in X. Others have had analogous experiences for divinities Y and Z, but according to the X religion I adopted those are demonic, so Y and Z believers are wrong. I was lucky though, since if I had had a Y experience I would have become a Y believer".
with philosophical cases like the ones Alicorn discusses there:
"I accept philosophical position X because of compelling arguments I have been exposed to. Others have been exposed to seemingly compelling arguments for positions Y and Z, but according to X these arguments are flawed, so Y and Z believers are wrong. I was lucky though, since if I had gone to a university with Y teachers I would have become a Y believer".
It may be that the philosopher is also being irrational here and that she could strive more to transcend her education and assess X vs. Y impartially, but in the end it is impossible to escape this kind of irrationality at all levels at once and assess beliefs from a perfect vacuum. We all find some things compelling and not others because of the kind of people we are and the kind of lives we have lived, and the best we can get is reflective equilibrium. Recursive justification hitting bottom and all that.
The question is whether AK is already in reflective equilibrium or if she can still profit from some meta-examination and reassess this part of her belief system. (I believe that some religious believers have reflected enough about their beliefs and the counterarguments to them that they are in this kind of equilibrium and there is no further argument from an atheist that can rationally move them - though these are a minority and not representative of typical religious folks.)
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2012-01-08T00:55:23.178Z · LW(p) · GW(p)
See my response here - if Alicorn is saying she knows the other side has arguments exactly as convincing as those which led her to her side, but she is still justified to continue believing her side more likely than the other, I disagree with her.
↑ comment by occlude · 2012-01-07T05:50:54.263Z · LW(p) · GW(p)
What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away.
You're doing it wrong. The power of the Litany comes from evidence. Every time you apply the Litany of Gendlin to an unsubstantiated assertion, a faerie drops dead.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T07:16:18.990Z · LW(p) · GW(p)
Every time you apply the Litany of Gendlin to an unsubstantiated assertion, a faerie drops dead.
I think this is a joke, ish, right? Because it's quite witty. /tangent
You're doing it wrong.
I mentioned some evidence elsewhere in the thread.
Replies from: occlude↑ comment by occlude · 2012-01-07T17:57:14.584Z · LW(p) · GW(p)
"Ish," yes. I have to admit I've had a hard time navigating this enormous thread, and haven't read all of it, including the evidence of demonic influence you're referring to. However, I predict in advance that 1) this evidence is based on words that a man wrote in an ancient book, and that 2) I will find this evidence dubious.
Two equally unlikely propositions should require equally strong evidence to be believed. Neither dragons nor demons exist, yet you assert that demons are real. Where, then, is the chain of entangled events leading from the state of the universe to the state of your mind? Honest truth-seeking is about dispassionately scrutinizing that chain, as an outsider would, and allowing others to scrutinize, evaluate, and verify it.
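(The odds form of Bayes' theorem makes this precise; this is a standard identity, stated here only to unpack "equally strong evidence":

$$\frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(H)}{P(\lnot H)} \times \frac{P(E \mid H)}{P(E \mid \lnot H)}$$

Two hypotheses with equally low prior odds -- dragons and demons, say -- need likelihood ratios of the same magnitude to reach the same posterior confidence.)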
I was a Mormon missionary at 19. I used to give people copies of the Book of Mormon, testify of my conviction that it was true, and invite them to read it and pray about it. A few did (Most people in Iowa and Illinois aren't particularly vulnerable to Mormonism). A few of those people eventually (usually after meeting with us several times) came to feel as I did, that the book was true. I told those people that the feeling they felt was the Holy Spirit, manifesting the truth to them. And if that book is true, I told them, then Joseph Smith must have been a true prophet. And as a true prophet, the church that he established must be the Only True Church, according to Joseph's revelations and teachings. I would then invite them to be baptized (which was the most important metric in the mission), and to become a member of the LDS church. One of the church's teachings is that a person can become as God after death (omniscience and omnipotence included). Did the chain of reasoning leading from "I have a feeling that this book is true" justify the belief that "I can become like God"?
You are intelligent and capable of making good rhetorical arguments (from what I have read of your posts in the last week or two). I see you wielding Gendlin, for example, in support of your views. At some level, you're getting it. But the point of Gendlin is to encourage truth-seekers desiring to cast off comforting false beliefs. It works properly only if you are also willing to invoke Tarski:
Let me not become attached to beliefs I may not want.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T21:05:19.455Z · LW(p) · GW(p)
Upvoted for being a completely reasonable comment given that you haven't read through the entirety of a thread that's gotten totally monstrous.
However, I predict in advance that 1) this evidence is based on words that a man wrote in an ancient book,
Only partly right.
2) I will find this evidence dubious.
Of course you will. If I told you that God himself appeared to me personally and told me everything in the Bible was true, you'd find that dubious, too. Perhaps even more dubious.
Where, then, is the chain of entangled events leading from the state of the universe to the state of your mind?
Already partly in other posts on this thread (actually largely in other posts on this thread), buried somewhere, among something. You'll forgive me for not wanting to retype multiple pages, I hope.
Replies from: TheOtherDave, occlude↑ comment by TheOtherDave · 2012-01-08T00:14:06.312Z · LW(p) · GW(p)
If I told you that God himself appeared to me personally and told me everything in the Bible was true, you'd find that dubious, too.
Certainly. I'm now curious though: if I told you that God appeared to me personally and told me everything in the Bible was true (either for some specific meaning of "the Bible," which is of course an ambiguous phrase, or leaving it not further specified), roughly how much confidence would you have that I was telling you the truth?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-08T00:32:46.420Z · LW(p) · GW(p)
It would depend on how you said it-- as a joke, or as an explanation for why you suddenly believed in God and had decided to convert to Christianity, or as a puzzling experience that you were trying to figure out, or something else-- and whether it was April 1 or not, and what you meant by "the Bible" (whether you specified it or not), and how you described God and the vision and your plans for the future.
But I'd take it with a grain of salt. I'd probably investigate further and continue correspondence with you for some time, both to help you as well as I could and to ascertain with more certainty the source of your belief that God came to you (whether he really did or it was a drug-induced hallucination or something). It would not be something I'd bet on either way, at least not just from hearing it said.
↑ comment by Bugmaster · 2012-01-07T05:10:24.817Z · LW(p) · GW(p)
That's a bizarre thing to say. Is their offense evidence that I'm wrong?
No, but generally, applying a derogatory epithet to an entire group of people is seen as rude, unless you back it up with evidence, which in this case you did not do. You just stated it.
So does calling people Cthulhu-worshipers.
In his afterword, EY seems to be saying that the benign actions of his friends and family are inconsistent with the malicious actions of YHVH, as he is depicted in Exodus. This is different from flat-out stating, "all theists are evil" and leaving it at that. EY is offering evidence for his position, and he is also giving credit to theists for being good people despite their religion (as he sees it).
You guys sure seem quick to tell me that my beliefs are offensive, but if I said the same to you, you'd understand why that's beside the point.
I can't speak for "you guys", only for myself; and I personally don't think that your beliefs are particularly offensive, just the manner in which you're stating them. It's kind of like the difference between saying, "Christianity is wrong because Jesus is a fairytale and all Christians are idiots for believing it", versus, "I believe that Christians are mistaken because of reasons X, Y and Z".
If you want me to stop believing it, tell me why you think it's wrong.
Well, personally, I believe it's wrong because no gods or demons of any kind exist.
Wiccans, on the other hand, would probably tell you that you're wrong because Wicca had made them better people, who are more loving, selfless, and considerate of others, which is inconsistent with the expected result of worshiping evil demons. I can't speak for all Wiccans, obviously; this is just what I'd personally heard some Wiccans say.
↑ comment by wedrifid · 2012-01-07T05:42:07.222Z · LW(p) · GW(p)
I should probably point out at this point that Wiccans (well, at least those whom I'd met), consider this point of view utterly misguided and incredibly offensive.
I object to the use of social politics to overwhelm assertions of fact. Christians and Wiccans obviously find each other offensive rather frequently. Both groups (particularly the former) probably also find me offensive. In all cases I say that is their problem.
Now if the Christians were burning the witches I might consider it appropriate to intervene forcefully...
Incidentally I wouldn't have objected if you responded to "They consort with demons" with "What a load of bullshit. Get a clue!"
Replies from: Bugmaster↑ comment by Bugmaster · 2012-01-07T22:21:44.107Z · LW(p) · GW(p)
I was really objecting to the unsupported assertion; I wouldn't have minded if AK said, "they consort with demons, and here's the evidence".
Incidentally I wouldn't have objected if you responded to "They consort with demons" with "What a load of bullshit. Get a clue!"
Well, I personally do fully endorse that statement, but the existence of gods and demons is a matter of faith, or of personal experience, and thus whatever evidence or reason I can bring to bear in support of my statement is bound to be unpersuasive.
↑ comment by MixedNuts · 2012-01-07T19:25:43.428Z · LW(p) · GW(p)
Off-topic nitpick: I like to be called a demon-worshiper.
Replies from: TheOtherDave, cousin_it, soreff, Bugmaster↑ comment by TheOtherDave · 2012-01-08T00:20:52.639Z · LW(p) · GW(p)
You're a demon-worshipper!
↑ comment by taelor · 2012-01-07T06:36:10.258Z · LW(p) · GW(p)
Okay, I'll bite. On what basis do you conclude that the entities that modern day wiccans worship are demonic, rather than simply imaginary?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2012-01-07T07:13:00.648Z · LW(p) · GW(p)
Because the religion is evil rather than misguided. Whereas, say, Hinduism, for instance, is just really misguided. See other conversation. Also see Exodus 22:18 and Deuteronomy 18:10.
(I wish I had predicted that this would end this way before I answered that post... then I might not have done so.)
↑ comment by lavalamp · 2011-12-23T15:08:01.321Z · LW(p) · GW(p)
OK, last one from me, if you're still up for it.
There is nothing that you can claim, nothing that you can demand, nothing that you can take. And as soon as you try to take something as if it were your own-- you lose your [innocence]. The angel with the flaming sword stands armed against all selfhood that is small and particular, against the "I" that can say "I want..." "I need..." "I demand..." No individual enters Paradise, only the integrity of the Person.
Only the greatest humility can give us the instinctive delicacy and caution that will prevent us from reaching out for pleasures and satisfactions that we can understand and savor in this darkness. The moment we demand anything for ourselves or even trust in any action of our own to procure a deeper intensification of this pure and serene rest in [God], we defile and dissipate the perfect gift that [He] desires to communicate to us in the silence and repose of our own powers.
If there is one thing we must do it is this: we must realize to the very depths of our being that this is a pure gift of [God] which no desire, no effort and no heroism of ours can do anything to deserve or obtain. There is nothing we can do directly either to procure it or to preserve it or to increase it. Our own activity is for the most part an obstacle to the infusion of this peaceful and pacifying light, with the exception that [God] may demand certain acts and works of us by charity or obedience, and maintain us in deep experimental union with [Him] through them all, by [His] own good pleasure, not by any fidelity of ours.
At best we can dispose ourselves for the reception of this great gift by resting in the heart of our own poverty, keeping our soul as far as possible empty of desires for all the things that please and preoccupy our nature, no matter how pure or sublime they may be in themselves.
And when [God] reveals [Himself] to us in contemplation we must accept [Him] as [He] comes to us, in [His] own obscurity, in [His] own silence, not interrupting [Him] with arguments or words, conceptions or activities that belong to the level of our own tedious and labored existence.
We must respond to [God]'s gifts gladly and freely with thanksgiving, happiness and joy; but in contemplation we thank [Him] less by words than by the serene happiness of silent acceptance. ... It is our emptiness in the presence of the abyss of [His] reality, our silence in the presence of [His] infinitely rich silence, our joy in the bosom of the serene darkness in which [His] light holds us absorbed, it is all this that praises [Him]. It is this that causes love of [God] and wonder and adoration to swim up into us like tidal waves out of the depths of that peace, and break upon the shores of our consciousness in a vast, hushed surf of inarticulate praise, praise and glory!
↑ comment by Will_Newsome · 2011-12-28T23:33:17.208Z · LW(p) · GW(p)
(I might fail to communicate clearly with this comment; if so, my apologies, it's not purposeful. E.g. normally if I said "Thomistic metaphysical God" I would assume the reader either knew what I meant (were willing to Google "Thomism", say) or wasn't worth talking to. I'll try not to do that kind of thing in this comment as badly as I normally do. I'm also honestly somewhat confused about a lot of Catholic doctrine and so my comment will likely be confused as a result. To make things worse I only feel as if I'm thinking clearly if I can think about things in terms of theoretical computer science, particularly algorithmic probability theory; unfortunately not only is it difficult to translate ideas into those conceptual schemes, those conceptual schemes are themselves flawed (e.g. due to possibilities of hypercomputation and fundamental problems with probability that've been unearthed by decision theory). So again, my apologies if the following is unclear.)
I'm going to accept your interpretation at face value, i.e. accept that you're blessed with a supernatural charisma or something like that. That said, I'm not yet sure I buy the idea that the Thomistic metaphysical God, the sole optimal decision theory, the Form of the Good, the Logos-y thing, has much to do with transhumanly intelligent angels and demons of roughly the sort that folk around here would call superintelligences. (I haven't yet read the literature on that subject.) In my current state of knowledge if I was getting supernatural signals (which I do, but not as regularly as you do) then I would treat them the same way I'd treat a source of information that claimed to be Chaitin's constant: skeptically.
In fact it might not be a surface-level analogy to say that God is Chaitin's omega (and is thus a Turing oracle), for they would seem to share a surprising number of properties. Of course Chaitin's constant isn't computable, so there's no algorithmic way to check if the signals you're getting come from God or from a demon that wants you to think it's God (at least for claimed bits of Chaitin's omega that you don't already know). I believe the Christians have various arguments about states of mind that protect you from demonic influences like that; I haven't read this article on infallibility yet but I suspect it's informative.
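(For readers unfamiliar with the reference: Chaitin's constant is the halting probability of a prefix-free universal Turing machine U,

$$\Omega = \sum_{p\,:\,U(p)\ \text{halts}} 2^{-|p|}$$

where |p| is the bit-length of program p. Ω is well-defined but uncomputable, and knowing its first n bits would settle the halting problem for every program of length at most n -- which is the sense in which claimed bits of Ω cannot be algorithmically verified.)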
Because there doesn't seem to be an algorithmic way of checking if God is really God rather than any other agent that has more bits of Chaitin's constant than you do, you're left in a situation where you have to have what is called faith, I think. (I do not understand Aquinas's arguments about faith yet; I'm not entirely sure I know what it is. I find the ideas counter-intuitive.) I believe that Catholics and maybe other Christians say that conscience is something like a gift from God and that you can trust it, so if your conscience objects to the signals you're getting then that is at least a red flag that you might be being influenced by self-delusion or demons or what have you. But this "conscience" thing seems to be algorithmic in nature (though that's admittedly quite a contentious point), so if it can check the truth value of the moral information you're getting supernaturally then you already had those bits of Chaitin's constant. If your conscience doesn't say anything about it then it would seem you're dealing with a situation where you're supposed/have to have faith. That's the only way you can do better than an algorithmic approach.
Note that part of the reason I think about these things is 'cuz I want my FAI to be able to use bits of Chaitin's constant that it finds in its environment so as to do uncomputable things it otherwise couldn't. It is an extension of this same personal problem of what to do with information whose origin you can't algorithmically verify.
Anyway it's a sort of awkward situation to be in. It seems natural to assume that this agent is God but I'm not sure if that is acceptable by the standard of (Kant's weirdly naive version of) the categorical imperative. I notice that I am very confused about counterfactual states of knowledge and various other things that make thinking about this very difficult.
So um, how do you approach the problem? Er did I even describe the problem in such a way that it's understandable?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-29T01:14:24.327Z · LW(p) · GW(p)
I don't think I'm smart enough to follow this comment. Edit: but I think you're wrong about me having some sort of supernatural charisma... I'm pretty sure I haven't said I'm special, because if I did, I'd be wrong.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-29T01:20:04.665Z · LW(p) · GW(p)
Hm, so how would you describe the mechanism behind your sensations then? (Sorry, I'd been primed to interpret your description in light of similar things I'd seen before which I would describe as "supernatural" for lack of a better word.)
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-29T01:31:16.319Z · LW(p) · GW(p)
...I wasn't going to come back to say anything, but fine. I'd say it's God's doing. Not my own specialness. And I'm not going to continue this conversation further.
Replies from: Will_Newsome, dlthomas↑ comment by Will_Newsome · 2011-12-29T01:37:47.134Z · LW(p) · GW(p)
Okay, thanks. I didn't mean to imply 'twas your own "specialness" as such; apologies for being unclear. ETA: Also I'm sorry for anything else? I get the impression I did/said something wrong. So yeah, sorry.
↑ comment by lessdazed · 2011-12-27T16:48:36.058Z · LW(p) · GW(p)
Sensation A felt like there was something on my skin, like dirt or mud, and something squeezing my heart
The dirt just sits there? It doesn't also squeeze your skin? Or instead throb as if it had been squeezed for a while, but uniformly, not with a tourniquet, and was just released?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T19:48:59.228Z · LW(p) · GW(p)
Just sits there. Anyway, dirt is a bad metaphor.
↑ comment by Will_Newsome · 2011-12-29T00:28:58.418Z · LW(p) · GW(p)
Oh and also you should definitely look into using this to help build/invoke FAI/God. E.g. my prospective team has a slot open which you might be perfect for. I'm currently affiliated with Leverage Research who recently received a large donation from Jaan Tallinn, who also supports the Singularity Institute.
↑ comment by lavalamp · 2011-12-22T14:53:09.233Z · LW(p) · GW(p)
I'm not convinced that this is an accurate perception of AspiringKnitter's comments here so far.
E.g., I don't think she's yet claimed both omnipotence and omnibenevolence as attributes of god, so you may be criticizing views she doesn't hold. If there's a comment I missed, then ignore me. :)
But at a minimum, I think you misunderstood what she was asking by, "Do you mean that I can't consider his nonexistence as a counterfactual?" She was asking, by my reading, if you thought she had displayed an actual incapability of thinking that thought.
Replies from: None↑ comment by [deleted] · 2011-12-22T16:07:46.147Z · LW(p) · GW(p)
.
Replies from: thomblake↑ comment by thomblake · 2011-12-27T19:05:35.022Z · LW(p) · GW(p)
I don't think my correct characterization of a fictional being has any bearing on whether or not it exists.
If you're granted "fictional", then no. But if you don't believe in unicorns, you'd better mean "magical horse with a horn" and not "narwhal" or "rhinoceros".
↑ comment by TimS · 2011-12-22T19:08:28.590Z · LW(p) · GW(p)
given that I've gotten several downvotes (over seventeen, I think) in the last couple of hours, that's either the work of someone determined to downvote everything I say or evidence that multiple people think I'm being stupid.
For what it's worth, the downvotes appear to be correlated with anyone discussing theology. Not directed at you in particular. At least, that's my impression.
↑ comment by Will_Newsome · 2011-12-27T14:37:39.624Z · LW(p) · GW(p)
I do assign a really low prior probability to the existence of lucky socks anywhere
You do realize it might very well mean death to your Bayes score to say or think things like that around an omnipotent being who has a sense of humor, right? This is the sort of Dude Who wrestles with a mortal then names a nation to honor the match just to taunt future wannabe-Platonist Jews about how totally crazy their God is. He is perfectly capable of engineering some lucky socks just so He can make fun of you about it later. He's that type of Guy. And you do realize that the generalization of Bayes score to decision theoretic contexts with objective morality is actually a direct measure of sinfulness? And that the only reason you're getting off the hook is that Jesus allegedly managed to have a generalized Bayes score of zero despite being unable to tell a live fig tree from a dead one at a moderate distance and getting all pissed off about it for no immediately discernible reason? Just sayin', count your blessings.
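(The "Bayes score" invoked here is presumably the standard logarithmic score: if you assigned probability p(o_i) to each outcome o_i that actually occurred, your score is

$$S = \sum_{i=1}^{n} \log p(o_i) \le 0$$

which equals zero exactly when every observed outcome was assigned probability 1. A "generalized Bayes score of zero" would therefore require never having been surprised by anything that happened, fig trees included.)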
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T19:59:49.170Z · LW(p) · GW(p)
He is perfectly capable of engineering some lucky socks just so He can make fun of you about it later.
Yes, of course. Though why he'd do that, instead of all the other things he could be doing, like creating a lucky hat or sending a prophet to explain the difference between "please don't be an idiot and quibble over whether it might hurt my feelings if you tell me the truth" and "please be as insulting as possible in your dealings with me", is beyond me.
And you do realize that the generalization of Bayes score to decision theoretic contexts with objective morality is actually a direct measure of sinfulness?
No, largely because I have no idea what that would even mean. However, if you mean that using good epistemic hygiene is a sin because there's objective morality, or if you think the objective morality only applies in certain situations which require special epistemology to handle, you're wrong.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T20:24:05.971Z · LW(p) · GW(p)
It's just that now "lucky socks" is the local Schelling point. It's possible I don't understand God very well, but I personally am modally afraid of jinxing stuff or setting myself up for dramatic irony. It has to do with how my personal history's played out. I was mostly just using the socks thing as an example of this larger problem of how epistemology gets harder when there's a very powerful entity around. I know I have a really hard time predicting the future because I'm used to... "miracles" occurring and helping me out, but I don't want to take them for granted, but I want to make accurate predictions... And so on. Maybe I'm over-complicating things.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T21:21:58.260Z · LW(p) · GW(p)
Okay, I can understand that. It can be annoying. However, the standard framework does still apply; you can still use Bayes. It's like anything else confusing you.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T21:39:47.738Z · LW(p) · GW(p)
I see what you're saying and it's a sensible approximation but I'm not actually sure you can use Bayes in situations with "mutual simulation" like that. Are you familiar with updateless/ambient decision theory perchance?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-27T21:50:37.168Z · LW(p) · GW(p)
No, I'm not. Should I be? Do you have a link to offer?
Replies from: Will_Newsome↑ comment by Will_Newsome · 2011-12-27T22:46:30.535Z · LW(p) · GW(p)
This post combined with all the comments is perhaps the best place to start, or this post might be an easier introduction to the sorts of problems that Bayes has trouble with. This is the LW wiki hub for decision theory. That said it would take me awhile to explain why I think it'd particularly interest you and how it's related to things like lucky socks, especially as a lot of the most interesting ideas are still highly speculative. I'd like to write such an explanation at some point but can't at the moment.
↑ comment by lavalamp · 2011-12-22T04:17:34.668Z · LW(p) · GW(p)
I think this is missing the point: they believe that, but they're wrong.
...and they can say exactly the same thing about you. It's exactly that symmetry that defines No True Scotsman. You think you are reading and applying the text correctly, they think they are. It doesn't help to insist that you're really right and they're really wrong, because they can do the same thing.
Replies from: thomblake, AspiringKnitter↑ comment by thomblake · 2011-12-27T18:55:38.824Z · LW(p) · GW(p)
...and they can say exactly the same thing about you. It's exactly that symmetry that defines No True Scotsman.
No, No True Scotsman is characterized by moveable goalposts. If you actually do have a definition of True Scotsman that you can point to and won't change, then you're not going to fall under this fallacy.
↑ comment by AspiringKnitter · 2011-12-22T04:38:19.392Z · LW(p) · GW(p)
Okay, I'm confused here. Do you believe there are potentially correct and incorrect answers to the question "what does the Bible say that Jesus taught while alive?"
Replies from: lavalamp↑ comment by lavalamp · 2011-12-22T14:29:01.356Z · LW(p) · GW(p)
IMO, most Christians unconsciously concentrate on the passages that match their preconceptions, and ignore or explain away the rest. This behavior is ridiculously easy to notice in others, and equally difficult to notice in oneself.
For example, I expect you to ignore or explain away Matthew 10:34: "Do not think that I have come to bring peace to the earth. I have not come to bring peace, but a sword."
I expect you find Mark 11:12-14 rather bewildering: "On the following day, when they came from Bethany, he was hungry. And seeing in the distance a fig tree in leaf, he went to see if he could find anything on it. When he came to it, he found nothing but leaves, for it was not the season for figs. And he said to it, “May no one ever eat fruit from you again.”"
I still think Luke 14:26 has a moderately good explanation behind it, but there's also a good chance that this is a verse I'm still explaining away, even though I'm not a Christian any more and don't need to: "If anyone comes to me and does not hate his own father and mother and wife and children and brothers and sisters, yes, and even his own life, he cannot be my disciple."
The Bible was authored by different individuals over the course of time. That's pretty well established. Those individuals had different motives and goals. IMO, this causes there to actually be competing strains of thought in the Bible. People pick out the strains of thought that speak to their preconceived notions. For one last example, I expect you'll explain James in light of Ephesians, arguing that grace is the main theme. But I think it's equally valid for someone to explain Ephesians in light of James, arguing that changed behavior is the main theme. These are both valid approaches, in my mind, because contrary to the expectations of Christians (who believe that deep down, James and Ephesians must be saying the same thing), James and Ephesians are actually opposing viewpoints.
Finally, I'll answer your question: probably not. Not every collection of words has an objective meaning. Restricting yourself to the gospels helps a lot, but I still think they are ambiguous enough to support multiple interpretations.
↑ comment by wedrifid · 2011-12-22T05:39:39.145Z · LW(p) · GW(p)
I suspect that nearly all Christians will agree with your definition (excepting Mormons and JW's, but I assume you added "divinity" in there to intentionally exclude them)
That isn't a tacked on addition. It's the core principle of the entire faith!
Replies from: lavalamp↑ comment by CronoDAS · 2011-12-22T07:21:41.538Z · LW(p) · GW(p)
The way I see it, there appear to be enough contradictions and ambiguities in the Bible and associated fan work that it's possible to use it to justify almost anything. (Including slavery.) So it's hard to tell a priori what's un-Christian and what isn't.
Replies from: Mass_Driver, wedrifid, AspiringKnitter↑ comment by Mass_Driver · 2011-12-22T09:49:55.531Z · LW(p) · GW(p)
Against a Biblical literalist, this would probably be a pretty good attack -- if you think a plausible implication of a single verse in the Bible, taken out of context, is an absolute moral justification for a proposed action, then, yes, you can justify pretty much any behavior.
However, this does not seem to be the thrust of AspiringKnitter's point, nor, even if it were, should we be content to argue against such a rhetorically weak position.
Rather, I think AspiringKnitter is arguing that certain emotions, attitudes, dispositions, etc. are repeated often enough and forcefully enough in the Bible so as to carve out an identifiable cluster in thing-space. A kind, gentle, egalitarian pacifist is (among other things) acting more consistently with the teachings of the literary character of Jesus than a judgmental, aggressive, elitist warrior. Assessing whether someone is acting consistently with the literary character of Jesus's teachings is an inherently subjective enterprise, but that doesn't mean that all opinions on the subject are equally valid -- there is some content there.
Replies from: CronoDAS↑ comment by CronoDAS · 2011-12-22T10:11:57.241Z · LW(p) · GW(p)
Rather, I think AspiringKnitter is arguing that certain emotions, attitudes, dispositions, etc. are repeated often enough and forcefully enough in the Bible so as to carve out an identifiable cluster in thing-space. A kind, gentle, egalitarian pacifist is (among other things) acting more consistently with the teachings of the literary character of Jesus than a judgmental, aggressive, elitist warrior. Assessing whether someone is acting consistently with the literary character of Jesus's teachings is an inherently subjective enterprise, but that doesn't mean that all opinions on the subject are equally valid -- there is some content there.
You have a good point there.
Then again, there are plenty of times that Jesus says things to the effect of "Repent sinners, because the end is coming, and God and I are gonna kick your ass if you don't!"
That is Jesus in half his moods speaking that way. But there’s another Jesus in there. There’s a Jesus who’s just paradoxical and difficult to interpret, a Jesus who tells people to hate their parents. And then there is the Jesus — while he may not be as plausible given how we want to think about Jesus — but he’s there in scripture, coming back amid a host of angels, destined to deal out justice to the sinners of the world. That is the Jesus that fully half of the American electorate is most enamored of at this moment.
-- Sam Harris
↑ comment by wedrifid · 2011-12-22T08:07:58.443Z · LW(p) · GW(p)
The way I see it, there appear to be enough contradictions and ambiguities in the Bible and associated fan work that it's possible to use it to justify almost anything.
Sacrifice other people's wives to the devil. That's almost certainly out.
(Including slavery.)
Yes, that's a significant moral absurdity to us but not a big deal to the cultures who created the religion or to the texts themselves. (Fairly ambivalent -- mostly just supports following whatever is the status quo on the subject.)
So it's hard to tell a priori what's un-Christian and what isn't.
No, it's really not. There is plenty of grey but there are a whole lot of clear cut rules too. Murdering. Stealing. Grabbing guys by the testicles when they are fighting. All sorts of things.
↑ comment by AspiringKnitter · 2011-12-22T08:24:48.266Z · LW(p) · GW(p)
Your comment seems to be about a general trend and doesn't rest on slavery itself, correct?
Because if not, I just want to point out that the Bible never says "slavery is good". It regulates it, ensuring minimal rights for slaves, and assumes it will happen, which is kind of like the rationale behind legalizing drugs. Slaves are commanded in the New Testament to obey their masters, which those telling them to do so explain as being so that the faith doesn't get a bad reputation. The only time anyone's told to practice slavery is as punishment for a crime, which is surely no worse than incarceration. At least you're getting some extra work done.
I assume this doesn't change your mind because you have other examples in mind?
Replies from: Bugmaster, lavalamp, wedrifid↑ comment by Bugmaster · 2011-12-23T16:22:04.743Z · LW(p) · GW(p)
One thing that struck me about the Bible when I first read it was that Jesus never flat-out said, "look guys, owning people is wrong, don't do it". Instead, he (as you pointed out) treats slavery as a basic fact of life, sort of like breathing or language or agriculture. There are a lot of parables in the New Testament which use slavery as a plot device, or as an analogy to illustrate a point, but none that imagine a world without it.
Contrast this to the modern world we live in. To most of us, slavery is almost unthinkable, and we condemn it whenever we see it. As imperfect as we are, we've come a long way in the past 2000 years -- all of us, even Christians. That's something to be proud of, IMO.
↑ comment by lavalamp · 2011-12-22T15:03:49.081Z · LW(p) · GW(p)
... I just want to point out that the Bible never says "slavery is good". It regulates it, ensuring minimal rights for slaves, and assumes it will happen, which is kind of like the rationale behind legalizing drugs.
Hrm, I support legalizing-and-regulating (at least some) drugs and am not in favor of legalizing-and-regulating slavery. I just thought about it for 5 minutes and I still really don't think they are analogous.
Deciding factor: sane, controlled drug use does not harm anyone (with the possible exception of the user, but they do so willingly). "sane, controlled" slavery would still harm someone against their will (with the exception of voluntary BDSM type relationships, but I'm pretty sure that's not what we're talking about).
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-22T18:09:21.364Z · LW(p) · GW(p)
Do you support legalizing and regulating the imprisonment of people against their will?
Replies from: lavalamp↑ comment by lavalamp · 2011-12-22T20:13:00.244Z · LW(p) · GW(p)
Haha, I did think of that before making my last comment :)
Answer: in cases where said people are likely to harm others, yes. IMO, society gains more utilons from incarcerating them than the individuals lose from being incarcerated. Otherwise, I'd much rather see more constructive forms of punishment.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-22T20:21:07.470Z · LW(p) · GW(p)
OK. So, consider a proposal to force prisoners to perform involuntary labor, in such a way that society gains more utilons from that labor than the individuals lose from being forced to perform it.
Would you support that proposal?
Would you label that proposal "slavery"?
If not (to either or both), why not?
↑ comment by lavalamp · 2011-12-22T20:51:05.579Z · LW(p) · GW(p)
Would you support that proposal?
It would probably depend on the specific proposal. I'd lean more towards "no" the more involuntary and demeaning the task. (I'm not certain my values are consistent here; I haven't put huge amounts of thought into it.)
Would you label that proposal "slavery"?
Not in the sense I thought we were talking about, which (at least in my mind) included the concept of one individual "owning" another. In a more general sense, I guess yes.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-23T17:36:58.532Z · LW(p) · GW(p)
Well, for my own part I would consider a system of involuntary forced labor as good an example of slavery as I can think of... to be told "yes, you have to work at what I tell you to work at, and you have no choice in the matter, but at least I don't own you" would be bewildering.
That said, I don't care about the semantics very much. But if the deciding factor in your opposition to legalizing and regulating slavery is that slavery harms someone against their will, then it seems strange to me that who owns whom is relevant here. Is ownership in and of itself a form of harm?
Replies from: lavalamp, dlthomas↑ comment by lavalamp · 2011-12-23T18:12:11.676Z · LW(p) · GW(p)
Tabooing "slavery": "You committed crimes and society has deemed that you will perform task X for Y years as a repayment" seems significantly different (to me) from "You were kidnapped from country Z, sold to plantation owner W and must perform task X for the rest of your life". I can see arguments for and against the former, but the latter is just plain evil.
Replies from: Prismattic, TheOtherDave↑ comment by Prismattic · 2011-12-24T02:25:52.356Z · LW(p) · GW(p)
This actually understates the degree of difference. Chattel slavery isn't simply about involuntary labor. It also involves, for example, lacking the autonomy to marry without the consent of one's master, the arbitrary separation of families and the selling of slaves' children, etc.
↑ comment by TheOtherDave · 2011-12-23T20:28:36.622Z · LW(p) · GW(p)
Sure, I agree. But unless the latter is what's being referred to Biblically, we do seem to have shifted the topic of conversation somewhere along the line.
Replies from: lavalamp↑ comment by lavalamp · 2011-12-23T22:32:01.882Z · LW(p) · GW(p)
It's been a while since I read it last, but IIRC, the laws regarding slavery in the OT cover individuals captured in war as well as those sold into slavery to pay a debt.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-24T01:40:19.834Z · LW(p) · GW(p)
That's consistent with my recollection as well.
↑ comment by dlthomas · 2011-12-23T17:39:09.773Z · LW(p) · GW(p)
Does each and every feature of slavery need to contribute to its awfulness?
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-12-23T20:29:33.312Z · LW(p) · GW(p)
Certainly not.
↑ comment by wedrifid · 2011-12-22T10:29:35.975Z · LW(p) · GW(p)
The only time anyone's told to practice slavery is as punishment for a crime, which is surely no worse than incarceration. At least you're getting some extra work done.
In fact, often taking slaves is outright sinful. (Because you're supposed to genocide them instead! :P)
Replies from: TimS↑ comment by TimS · 2011-12-22T14:07:49.121Z · LW(p) · GW(p)
That's certainly the Old Testament position (i.e. the Amalekites). But I don't think it's fair to say that's an inherent part of Christian thought.
Replies from: wedrifid↑ comment by wedrifid · 2011-12-22T05:41:57.759Z · LW(p) · GW(p)
NO. That takes a BIG NO. Severity of mental illness is NOT correlated with violence. It's correlated with self-harm, but not hurting other people.
I would confirm this, with a particular emphasis on schizophrenia. Actually, not quite: as I understand it, there is a negative correlation.
↑ comment by dlthomas · 2011-12-22T02:41:08.747Z · LW(p) · GW(p)
Well, not the Pope, certainly. He's a Catholic. But I thought a workable definition of "Christian" was "person who believes in the divinity of Jesus Christ and tries to follow his teachings", in which case we have a pretty objective test.
Is this a "Catholics aren't Christian" thing, or just drawing attention to the point that not all Christians are Catholic?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T02:49:28.291Z · LW(p) · GW(p)
The latter.
Replies from: dlthomas↑ comment by Bugmaster · 2011-12-23T16:13:35.382Z · LW(p) · GW(p)
Hmm, so apparently, looking up religious conversion testimonies on the intertubes is more difficult than I thought, because all the top search results lead to sites that basically say, "here's why religion X is wrong and my own religion Y is the best thing since sliced bread". That said, here's a random compilation of Christianity-to-Islam conversion testimonials. You can also check out the daily "Why am I an Atheist" feature on Pharyngula, but be advised that this site is quite a bit more angry than Less Wrong, so the posts may not be representative.
BTW, I'm not endorsing any of these testimonials, I'm just pointing out that they do exist.
NO. That takes a BIG NO. Severity of mental illness is NOT correlated with violence
Well, I brought that up because I know of at least one mental illness-related violent incident in my own extended family. That said, you are probably right in saying that schizophrenia and violence are not strongly correlated. However, note that violence against others was just one of the negative effects I'd brought up; existential risk to one's self was another.
I think the key disagreement we're having is along the following lines: is it better to believe in something that's true, or in something that's probably false, but has a positive effect on you as a person ? I believe that the second choice will actually result in a lower utility. Am I correct in thinking that you disagree ? If so, I can elaborate on my position.
Okay, so I mean, if you think you only want to fulfill your own selfish desires...
I don't think there are many people (outside of upper management, maybe, heh), of any religious denomination or lack thereof, who wake up every morning and say to themselves, "man, I really want to fulfill some selfish desires today, and other people can go suck it". Though, in a trivial sense, I suppose that one can interpret wanting to be nice to people as a selfish desire, as well...
Well, not the Pope, certainly. He's a Catholic.
You keep asserting things like this, but to an atheist, or an adherent of any faith other than yours, these assertions are pretty close to null statements -- unless you can back them up with some evidence that is independent of faith.
But I thought a workable definition of "Christian" was "person who believes in the divinity of Jesus Christ and tries to follow his teachings"
Every single person (plus or minus epsilon) who calls themselves "Christian" claims to "follow Jesus's teachings"; but all Christians disagree on what "following Jesus's teachings" actually means, so your test is not objective. All those Christians who want to persecute gay people, ban abortion, teach Creationism in schools, or even merely follow the Pope and venerate Mary -- all of them believe that they are doing what Jesus would've wanted them to do, and they can quote Bible verses to prove it.
Compare it with a relevant quote from the Bible, which has been placed in different places in different versions...
Some Christians claim that this story is a later addition to the Bible and therefore non-authoritative. I should also mention that both YHVH and, to a lesser extent, Jesus, did some pretty intolerant things, such as committing wholesale genocide, whipping people, condemning people, authorizing slavery, etc. The Bible is quite a large book...
Replies from: AspiringKnitter, dlthomas↑ comment by AspiringKnitter · 2011-12-23T20:11:49.064Z · LW(p) · GW(p)
That said, here's a random compilation of Christianity-to-Islam conversion testimonials. You can also check out the daily "Why am I an Atheist" feature on Pharyngula, but be advised that this site is quite a bit more angry than Less Wrong, so the posts may not be representative.
Thank you.
Well, I brought that up because I know of at least one mental illness-related violent incident in my own extended family.
I'm sorry.
I think the key disagreement we're having is along the following lines: is it better to believe in something that's true, or in something that's probably false, but has a positive effect on you as a person ?
No, I don't think that's true, because it's better to believe what's true.
I believe that the second choice will actually result in a lower utility.
So do I, because of the utility I assign to being right.
Am I correct in thinking that you disagree ?
No.
Every single person (plus or minus epsilon) who calls themselves "Christian" claims to "follow Jesus's teachings"; but all Christians disagree on what "following Jesus's teachings" actually means, so your test is not objective. All those Christians who want to persecute gay people, ban abortion, teach Creationism in schools, or even merely follow the Pope and venerate Mary -- all of them believe that they are doing what Jesus would've wanted them to do, and they can quote Bible verses to prove it.
Suppose, hypothetically, that current LessWrong trends of adding rituals and treating EY as to some extent above others continue. And then suppose that decades or centuries down the line, we haven't got transhumanism, but we HAVE got LessWrongians who now argue about what EY really meant. And some of them disagree with each other, and others outside their community just raise their eyebrows and think, "man, LessWrongians are such a weird cult." Would it be correct, at least, to say that there's a correct answer to the question "who is following Eliezer Yudkowsky's teachings?" That there's a yes or no answer to the question "did EY advocate prisons just because he failed to speak out against them?" Or to the question "would he have disapproved of people being irrational?" If not, I'll admit you're being self-consistent, at least.
Some Christians claim that this story is a later addition to the Bible and therefore non-authoritative.
And that claim should be settled by studying the relevant history.
EDIT: oh, and I forgot to mention that one doesn't have to actually think "I want to go around fulfilling my selfish desires" so much as just have a utility function that values only one's own comfort and not other people's.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-24T04:18:44.073Z · LW(p) · GW(p)
No, I don't think that's true, because it's better to believe what's true.
This statement appears to contradict your earlier statements that
a). It's better to live with the perception-altering symptoms of schizophrenia, than to replace those symptoms with depression and other side-effects, and
b). You determine the nature of every "gut feeling" (i.e., whether it is divine or internal) by using multiple criteria, one of which is, "would I be better off as a person if this feeling was, in fact, divine".
Suppose, hypothetically, that current LessWrong trends of adding rituals and treating EY as to some extent above others continue.
I hope not, I think people are engaging in more than enough EY-worship as it is, but that's beside the point...
And then suppose that decades or centuries down the line, we haven't got transhumanism, but we HAVE got LessWrongians who now argue about what EY really meant... Would it be correct, at least, to say that there's a correct answer to the question "who is following Eliezer Yudkowsky's teachings?"
Since we know today that EY actually existed, and what he talked about, then yes. However, this won't be terribly relevant in the distant future, for several reasons:
- Even though everyone would have an answer to this question, it is far from guaranteed that more than zero answers would be correct, because it's entirely possible that no Yudkowskian sect would have the right answer.
- Our descendants likely won't have access to EY's original texts, but to Swahili translations from garbled Chinese transcriptions, or something; it's possible that the translations would reflect the translators' preferences more than EY's original intent. In this case, EY's original teachings would be rendered effectively inaccessible, and thus the question would become unanswerable.
- Unlike us here in the past, our future descendants won't have any direct evidence of EY's existence. They may have so little evidence, in fact, that they may be entirely justified in concluding that EY was a fictional character, like James Bond or Harry Potter. I'm not sure if fictional characters can have "teachings" or not.
That there's a yes or no answer to the question "did EY advocate prisons just because he failed to speak out against them?"
This question is not analogous, because, unlike the characters in the OT and NT, EY does not make a habit of frequently using prisons as the basis for his parables, nor does EY claim to be any kind of moral authority. That said, if EY did say these things, and if prisons were found to be extremely immoral in the future -- then our descendants would be entirely justified in saying that EY's morality was far inferior to their own.
And that claim should be settled by studying the relevant history.
I doubt whether there exist any reasonably fresh first-hand accounts of Jesus's daily life (assuming, of course, that Jesus existed at all). If such accounts did exist, they did not survive the millennia that passed since then. Thus, it would be very difficult to determine what Jesus did and did not do -- especially given the fact that we don't have enough secular evidence to even conclude that he existed with any kind of certainty.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-24T05:13:12.073Z · LW(p) · GW(p)
This statement appears to contradict your earlier statements that a). It's better to live with the perception-altering symptoms of schizophrenia, than to replace those symptoms with depression and other side-effects,
I want to say I don't know why you think I made that statement, but I do know, and it's because you don't understand what I said. I said that given that those drugs fix the psychosis less than half the time, that almost ten percent of cases spontaneously recover anyway, that the entire rest of the utility function might take overwhelming amounts of disutility from side-effects including permanent disfiguring tics, a type of unfixable restlessness that isn't helped by fidgeting and usually causes great suffering, a greater risk of disease, lack of caring about anything, mental fog (which will definitely impair your ability to find the truth), and psychosis (not even kidding, that's one of the side-effects of antipsychotics), and given that being diagnosed can lead to a curtailing of one's civil liberties, it might not be worth it. Look, there's this moral theory called utilitarianism where you can have one bad thing happen and still think it's worth it because the alternative is worse, and it doesn't just have to work for morals. It works for anything; you can't just say "X is bad, fix X at all cost". You have to be sure it's not actually the best state of affairs first. Something can be both appalling and the best possible choice, and my utility function isn't as simple as you seem to think it is. I think there are things of value besides just having perfectly clear perception.
Our descendants likely won't have access to EY's original texts, but to Swahili translations from garbled Chinese transcriptions, or something;
This is the internet. Nothing anyone says on the internet is ever going away, even if some of us really wish it could. /nitpick
b). You determine the nature of every "gut feeling" (i.e., whether it is divine or internal) by using multiple criteria, one of which is, "would I be better off as a person if this feeling was, in fact, divine".
I really want to throw up my hands here and say "but I've explained this MULTIPLE TIMES, you are BEING AN IDIOT" but I remember the illusion of transparency. And that you haven't understood. And that you didn't make a deliberate decision to annoy me. But I'm still annoyed. I STILL want to call you an idiot, even though I know I haven't phrased something correctly and I should explain again. That doesn't even sound like what I believe or what I (thought I) said. (Maybe that's how it came out. Ugh.)
Why is communication so difficult? Why doesn't knowing that someone's not doing it on purpose matter? It's the sort of thing that you'd think would actually affect my feelings.
Replies from: Incorrect, soreff, NancyLebovitz, Bugmaster↑ comment by Incorrect · 2011-12-27T03:17:58.125Z · LW(p) · GW(p)
This is the internet. Nothing anyone says on the internet is ever going away, even if some of us really wish it could. /nitpick
You would be surprised... If it weren't for the Internet Archive, much information would have already been lost. Some modern websites are starting to use web design techniques (AJAX-loaded content) that break such archive services.
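To make the failure mode concrete -- a minimal sketch in TypeScript, with a hypothetical /api/comments endpoint and #thread element (not any real site's code): the server's initial HTML is an empty shell, and the actual discussion is only fetched and injected after page load, so an archiver that stores just the served HTML preserves none of it.

    // Hypothetical sketch: content injected after load defeats static archiving.
    window.addEventListener("load", async () => {
      const res = await fetch("/api/comments"); // comments arrive via AJAX, not in the HTML
      const comments: { body: string }[] = await res.json();
      const thread = document.getElementById("thread");
      if (thread) {
        // An archived copy of the original page never contains these paragraphs.
        thread.innerHTML = comments.map((c) => "<p>" + c.body + "</p>").join("");
      }
    });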
↑ comment by soreff · 2011-12-24T05:40:18.653Z · LW(p) · GW(p)
I really want to throw up my hands here and say "but I've explained this MULTIPLE TIMES, you are BEING AN IDIOT" but I remember the illusion of transparency.
One option would be to reply with a pointer to your previous comment. I see you've used the link syntax within a comment - this web site supports permalinks to comments as well. At least you wouldn't be forced to repeat yourself.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-24T07:04:57.546Z · LW(p) · GW(p)
But since I obviously explained it wrong, what good does it do to remind him of where I explained it? I've used the wrong words, I need to find new ones. Ugh.
Replies from: soreff↑ comment by soreff · 2011-12-24T17:47:56.506Z · LW(p) · GW(p)
Best wishes. Was your previous explanation earlier in your interchange with Bugmaster? If so, I agree that Bugmaster would have read your explanation, and that pointing to it wouldn't help (I sympathize). If, however, your previous explanation was in response to another lesswrongian, it is possible that Bugmaster missed it, in which case a pointer might help. I've been following your comments, but I'm sure I've missed some of them.
Replies from: dlthomas↑ comment by NancyLebovitz · 2011-12-27T18:45:18.049Z · LW(p) · GW(p)
It's conceivable that English could drift enough that EY's meaning would be unclear even if the texts remain.
↑ comment by Bugmaster · 2012-01-03T23:53:26.269Z · LW(p) · GW(p)
(I just came back from vacation, sorry for the late reply, and happy New Year ! Also, Merry Christmas if you are so inclined :-) )
Firstly, I operate by Crocker's Rules, so you can call me anything you want and I won't mind.
It works for anything; you can't just say "X is bad, fix X at all cost". You have to be sure it's not actually the best state of affairs first.
I agree with you completely regarding utilitarianism (although in this case we're not talking about the moral theory, just the approach in general). All I was saying is that the utility one places on believing things that are likely to be actually true should, IMO, be extremely high -- and possibly higher than the utility you assign to this feature. But "extremely high" does not mean "infinite", of course, and it's entirely possible that, in some cases, the disutility from all the side-effects will not be worth the utility gain -- especially if the side-effects are preventing you from believing true things anyway (f.ex. "mental fog", psychosis, depression, etc.).
That said, if I personally was seeing visions or hearing voices, I would be willing (assuming I remained reasonably rational, of course) to risk a very large disutility even for a less than 50% chance of fixing the problem. If I can't trust my senses (or, indeed, my thoughts), then my ability to correctly evaluate my utility is greatly diminished. I could be thinking that everything is just great, while in reality I was hurting myself or others, and I'd be none the wiser. Of course, I could also be just great in reality, as well; but given the way this universe works, this is unlikely.
This is the internet. Nothing anyone says on the internet is ever going away, even if some of us really wish it could.
Data on the Internet is less permanent than many people think, IMO, but this is probably beside the point; I was making an analogy to the Bible, which was written in the days before the Internet, but (sadly) after the days of giant stone steles. Besides, the way things are going, it's not out of the question that future versions of the Internet would all be written in Chinese...
Why is communication so difficult? Why doesn't knowing that someone's not doing it on purpose matter?
I think this is because you possess religious faith, which I have never experienced, and thus I am unable to evaluate what you say in the same frame of reference. Or it could be because I'm just obtuse. Or a bit of both.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-01-04T00:23:51.146Z · LW(p) · GW(p)
Besides, the way things are going, it's not out of the question that future versions of the Internet would all be written in Chinese...
I don't think so. The popularity of the English language has gained momentum such that even if its original causes (the economic status of the US) ceased, it would go on for quite a while. Chinese hasn't. See http://www.andaman.org/BOOK/reprints/weber/rep-weber.htm (It was written a decade and a half ago, but I don't think the situation is significantly qualitatively different for English and Chinese in ways which couldn't have been predicted back then.) I think English is going to remain the main international language for at least 30 more years, unless some major catastrophe occurs (where by major I mean ‘killing at least 5% of the world human population’).
↑ comment by dlthomas · 2011-12-23T16:40:54.442Z · LW(p) · GW(p)
Well, not the Pope, certainly. He's a Catholic.
You keep asserting things like this, but to an atheist, or an adherent of any faith other than yours, these assertions are pretty close to null statements -- unless you can back them up with some evidence that is independent of faith.
There is a bit of ambiguity here, but I asked after it and apparently the more strident interpretation was not intended. The position that the Pope doesn't determine who is Christian because the Pope is Catholic and therefore doesn't speak with authority regarding those Christians who are not Catholic seems uncontroversial, internally consistent, and not privileging any particular view.
Replies from: Bugmaster↑ comment by ArisKatsaris · 2011-12-21T08:48:19.079Z · LW(p) · GW(p)
"Better person" here means "person who maximizes average utility better". Understood, though I was confused for a moment there. When other people say "better person", they usually mean something like "a person who is more helpful and kinder to others", not merely "a happier person", though obviously those categories do overlap.
I think that by "maximizes average utility" AspiringKnitter meant utility averaged over every human being -- so helpfulness and kindness to others is by necessity included.
Replies from: army1987, Bugmaster↑ comment by A1987dM (army1987) · 2011-12-21T22:42:23.159Z · LW(p) · GW(p)
Since a utility function is only defined up to affine transformations with positive scale factor, what does it mean to sum several utility functions together? (Surely someone has already thought about that, but I can't think of anything sensible.)
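To make the problem concrete -- a minimal sketch, not anything canonical: take two agents and two outcomes A and B, with u_1(A) = 1, u_1(B) = 0 and u_2(A) = 0, u_2(B) = 1. The naive sum u_1 + u_2 ties A with B. But u_2' = 3u_2 represents exactly the same preferences for agent 2, and under u_1 + u_2' we get 1 < 3, so B strictly beats A. The aggregate ordering depends on an arbitrary choice of scale factors, which is why the bare sum is ill-defined.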
Replies from: CronoDAS, dlthomas↑ comment by CronoDAS · 2011-12-22T00:10:32.382Z · LW(p) · GW(p)
Yeah, that's a problem with many formulations of utilitarianism.
Replies from: army1987↑ comment by A1987dM (army1987) · 2011-12-22T18:06:23.145Z · LW(p) · GW(p)
Surely someone must have proposed some solution(s)?
↑ comment by juliawise · 2011-12-21T20:53:23.792Z · LW(p) · GW(p)
I haven't studied schizophrenia in any detail, but wouldn't a person suffering from it also have a skewed subjective perception of what "being miserable" is ?
Misery is a subjective experience. The schizophrenic patients I work with describe feeling a lot of distress because of their symptoms, and their voices usually tell them frightening things. So I would expect a person hearing voices due to psychosis to be more distressed than someone hearing God.
That said, I was less happy when I believed in God because I felt constantly that I had unmet obligations to him.
↑ comment by lavalamp · 2011-12-21T17:41:01.638Z · LW(p) · GW(p)
If the goal is to arrive at the truth no matter one's background or extenuating circumstances, I don't think this list quite does the trick. You want a list of steps such that, if a Muslim generated a list using the same cognitive algorithm, it would lead them to the same conclusion your list will lead you to.
From this perspective, #2 is extremely problematic; it assumes the thing you're trying to establish from the spiritual experience (the veracity of Christianity). If a Muslim wrote this step, it'd look totally different, as it would for any religion. (You do hint at this, props for that.) This step will only get you to the truth if you start out already having the truth.
#7 is problematic from a different perspective; well-being and truth-knowledge are not connected on a fundamental level, most noticeably when people around you don't know the same things you know. For reference, see Galileo.
Also, my own thought: if we both agree that your brain can generate surprisingly coherent stuff while dreaming, then it seems reasonable to suppose the brain has machinery capable of the process. So my own null hypothesis is that that machinery can get triggered in ways which produce the content of spiritual experiences.
↑ comment by Bugmaster · 2011-12-21T03:51:13.313Z · LW(p) · GW(p)
God has been known to speak to people through dreams, visions and gut feelings.
In addition to your discussion with APMason:
When you have a gut feeling, how do you know whether this is (most likely) a regular gut feeling, or whether this is (most likely) God speaking to you ? Gut feelings are different from visions (and possibly dreams), since even perfectly sane and healthy people have them all the time.
*There's a joke I can't find about some Talmudic scholars who are arguing. They ask God; a voice booms out from the heavens declaring which one is right, and the others fail to update.
I can't find the source right now, but AFAIK this isn't merely a joke, but a parable from somewhere in the Talmud. One of the rabbis wants to build an oven in a way that's proscribed by the Law (because it'd be more convenient for some engineering reason that I forget), and the other rabbis are citing the Law at him to explain why this is wrong. The point of the parable is that the Law is paramount; not even God has the power to break it (to say nothing of mere mortal rabbis). The theme of rules and laws being ironclad is a trope of Judaism that does not, AFAIK, exist in Christianity.
Replies from: Nisan↑ comment by Nisan · 2011-12-21T04:28:23.901Z · LW(p) · GW(p)
In the Talmudic story, the voice of God makes a claim about the proper interpretation of the Law, but it is dismissed because the interpretation of the Law lies in the domain of Men, where it is bound by certain peculiar hermeneutics. The point is that Halacha does not flow from a single divine authority, but is produced by a legal tradition.
Replies from: wedrifid, AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T04:40:18.004Z · LW(p) · GW(p)
And that's not what I'm thinking of. It's probably a joke about the parable, though. But I distinctly recall it NOT having a moral and being on the internet on a site of Jewish jokes.
Bugmaster: Well, go with your gut either way, since it's probably right.
It could be something really surprising to you that you don't think makes sense or is true, just as one example. Of course, if not, I can't think of a good way off the top of my head.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-21T08:49:04.935Z · LW(p) · GW(p)
Well, go with your gut either way, since it's probably right.
Hmm, are you saying that going with your gut is most often the right choice ? Perhaps your gut is smarter than mine, since I can recall many examples from my own life when trusting my intuitions turned out to be a bad idea. Research likewise shows that human intuition often produces wrong answers to important questions; what we call "critical thinking" today is largely a collection of techniques that help people overcome their intuitive biases. Nowadays, whenever I get a gut feeling about something, I try to make the effort to double-check it in a more systematic fashion, just to make sure (excluding exceptional situations such as "I feel like there might be a tiger in that bush", of course).
Replies from: AspiringKnitter, army1987↑ comment by AspiringKnitter · 2011-12-21T22:44:18.187Z · LW(p) · GW(p)
I'm claiming that going with your gut instinct usually produces good results, and when time is limited produces the best results available unless there's a very simple bias involved and an equally simple correction to fix it.
↑ comment by A1987dM (army1987) · 2011-12-21T22:36:42.867Z · LW(p) · GW(p)
Sometimes I feel my gut is smarter than my explicit reasoning: sometimes, when I have to make a decision in a very limited time, I make a choice which, five seconds later, I can't fully make sense of, but on further reflection I realize it was indeed the most reasonable possible choice after all. (There might be some kind of bias I fail to fully correct for, though.)
↑ comment by APMason · 2011-12-20T23:13:51.025Z · LW(p) · GW(p)
If you'll allow me to butt into this conversation, I have to say that on the assumption that consciousness and identity depend not on algorithms executed by the brain (and which could be executed just as well by transistors), but on a certain special identity attached to your body which cannot be transferred to another - granting that premise - it seems perfectly rational to not want to change hardware. But when you say:
Plus it's good practice, since our justice system won't decide personhood by asking God...
do you mean that you would like the justice system to decide personhood by asking God?
Replies from: dlthomas, AspiringKnitter↑ comment by dlthomas · 2011-12-20T23:20:16.282Z · LW(p) · GW(p)
do you mean that you would like the justice system to decide personhood by asking God?
FWIW, I didn't read it that way. I think it's just "Also, I'll follow the laws of secular society, obviously."
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-20T23:22:34.379Z · LW(p) · GW(p)
Yeah, mostly that. Am I unclear right now? Maybe I should go take a nap...
Replies from: APMason↑ comment by AspiringKnitter · 2011-12-20T23:20:15.116Z · LW(p) · GW(p)
Our justice system should put in safeguards against what happens if we accidentally appoint ungodly people. That's the intuition behind deontological morality (some people will cheat or not understand, so we have bureaucracy instead) and it's the idea behind most laws. The reasoning here is that judges are human. This would of course be different in a theocracy ruled by Jesus, which some Christians (I'm literally so tired right now I can't remember if this is true or just something some believe, or where it comes from) believe will happen for a thousand years between the tribulation and the end of the world.
Replies from: NancyLebovitz, Bugmaster, lavalamp, Nornagest↑ comment by NancyLebovitz · 2011-12-27T19:44:55.994Z · LW(p) · GW(p)
What do you have in mind when you say "godly people"?
The qualifications I want for judges are honest, intelligent, benevolent, commonsensical, and conscientious. (Knowing the law is implied by the other qualities since an intelligent, benevolent, conscientious person wouldn't take a job as a judge without knowing the law.)
Godly isn't on the list because I wouldn't trust judges who were chosen for godliness to be fair to non-godly people.
Replies from: wedrifid, AspiringKnitter↑ comment by wedrifid · 2011-12-27T19:50:00.733Z · LW(p) · GW(p)
Godly isn't on the list because I wouldn't trust judges who were chosen for godliness to be fair to non-godly people.
To be fair, many people who consider "godliness" to be a virtue include "benevolent and conscientious" in the definition.
↑ comment by AspiringKnitter · 2011-12-27T20:18:51.654Z · LW(p) · GW(p)
Godly isn't on the list because I wouldn't trust judges who were chosen for godliness to be fair to non-godly people.
Then you're using a different definition of "godly" from the one I use.
The qualifications I want for judges are honest, intelligent, benevolent, commonsensical, and conscientious.
Part but not all of my definition of "godly". (Actually, intelligent and commonsensical aren't part of it. So maybe judges should be godly, intelligent and commonsensical.)
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2011-12-27T20:22:39.049Z · LW(p) · GW(p)
How would you identify godliness for the purpose of choosing judges?
↑ comment by Bugmaster · 2011-12-21T00:38:45.103Z · LW(p) · GW(p)
Our justice system should put in safeguards against what happens if we accidentally appoint ungodly people.
Currently, we still have some safeguards in place that ensure that we don't accidentally appoint godly people. Our First Amendment, for example, is one of such safeguards, and I believe it to be a very good thing.
The problem with using religion as a basis for public policy is that there's no way to know (or even estimate), objectively, which religion is right. For example, would you be comfortable if our country officially adopted Sharia law, put Muslim clerics in all the key government positions, and mandated that Islam be taught in schools (*) ? Most Christians would answer "no", but why not ? Is it because Christianity is the one true religion, whereas Islam is not ? But Muslims say the exact same thing, only in reverse; and so does every other major religion, and there's no way to know whether any of them are right (other than after death, I suppose, which isn't very useful). Meanwhile, there are atheists such as myself who believe that the very idea of religion is deeply flawed; where do we fit into this proposed theocracy ?
This is why I believe that decoupling religion from government was an excellent move. If the government is entirely secular, then every person is free to worship the god or gods they believe in, and no person has the right to impose their faith onto others. This system of government protects everyone, Christians included.
(*) I realize that the chances of this actually happening are pretty much nonexistent, but it's still a useful hypothetical example.
Replies from: lessdazed, AspiringKnitter↑ comment by lessdazed · 2011-12-28T00:19:25.920Z · LW(p) · GW(p)
If the government is entirely secular, then every person is free to worship the god or gods they believe in, and no person has the right to impose their faith onto others.
I don't think that one can say a government is entirely secular, nor can it reasonably be an ideal endlessly striven for. A political apparatus would have to determine what is and isn't permissible, and any line drawn would be arbitrary.
Suppose a law is passed by a coalition of theist and environmentalist politicians banning eating whales, where the theists think it is wrong for people (in that country) to eat whales as a matter of religious law. A court deciding whether or not the law was impermissibly religiously motivated not only has to try and divine the motives of those involved in passing the law, it would have to decide what probability the law would have had of passing, what to counterfactually replace the theists' values with, etc., and then compare that to some standard.
↑ comment by AspiringKnitter · 2011-12-21T01:28:05.237Z · LW(p) · GW(p)
Currently, we still have some safeguards in place that ensure that we don't accidentally appoint godly people. Our First Amendment, for example, is one of such safeguards, and I believe it to be a very good thing.
Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.
Which part of this is intended to prevent the appointment of godly judges? The guarantee that we won't go killing people for heresy? Or the guarantee that you have freedom of speech and the freedom to tell the government you'd like it to do a better job on something?
Unless by "godly" you mean "fanatical extremists who approve of terrorism and/or fail to understand why theocracies only work in theory and not in practice". In which case I agree, but that wasn't my definition of that word.
For example, would you be comfortable if our country officially adopted Sharia law, put Muslim clerics in all the key government positions, and mandated that Islam be taught in schools (*) ?
No. You predict correctly.
Most Christians would answer "no", but why not ? Is it because Christianity is the one true religion, whereas Islam is not ?
Yes. And because I expect Sharia law to directly impinge on the freedoms that I rightly enjoy in secular society and would also enjoy if godly and sensible people (here meaning moral Christians who have a basic grasp of history, human nature, politics and rationality) were running things. And because I disapprove of female circumcision and the death penalty for gays. And because I think all the clothing I'd have to wear would be uncomfortable, I don't like gloves, black is nice but summer in California calls for something other than head-to-toe covering in all black, I prefer to dress practically and I have a male friend I'd like to not be separated from.
Some of the general nature of these issues showed up in medieval Europe. That's because they're humans-with-authority issues, not just issues with Islam. (At least, not with Islam alone.)
But Muslims say the exact same thing, only in reverse; and so does every other major religion,
Yes, but they're wrong.
and there's no way to know whether any of them are right (other than after death, I suppose, which isn't very useful)
We can test what they claim is true. For instance, Jehovah's Witnesses think it'll be only a very short time until the end of the world, too short for political involvement to be useful (I think). So if we wait, and the world doesn't end, and we ascertain that more or fewer people being politically involved in whatever ways could have produced better or worse outcomes, we can disprove a tenet of that sect.
Meanwhile, there are atheists such as myself who believe that the very idea of religion is deeply flawed; where do we fit into this proposed theocracy ?
The one with the Muslims? Probably as corpses. Are you under the impression that I've suggested a Christian theocracy instead?
This is why I believe that decoupling religion from government was an excellent move.
Concur. I don't want our country hobbled by Baptists and Catholics arguing with each other.
If the government is entirely secular, then every person is free to worship the god or gods they believe in,
Of course, the government could mandate atheism, or allow people to identify as whatever while prohibiting them from doing everything their religion calls for (distributing Gideon Bibles at schools, wearing a hijab in public, whatever). Social pressure is also a factor, one which made for an oppressive, theocraticish early America even though we had the First Amendment.
and no person has the right to impose their faith onto others. This system of government protects everyone, Christians included.
When it works, it really works. You'll find no disagreement from anyone with a modicum of sense.
Replies from: Bugmaster, TheOtherDave, TimS↑ comment by Bugmaster · 2011-12-21T02:31:36.908Z · LW(p) · GW(p)
Unless by "godly" you mean "fanatical extremists who approve of terrorism and/or fail to understand why theocracies only work in theory and not in practice".
Understood. When most Christians say things like, "I wish our elected official were more godly", they usually mean, "I really wish we lived in a Christian theocracy", but I see now that you're not one of these people. In this case, would you vote for an atheist and thus against a Christian, if you thought that the atheist candidate's policies were more beneficial to society than his Christian rival's ?
Yes, but they're wrong.
Funny, that's what they say about you...
We can test what they claim is true.
This is an excellent idea, but it's not always practical; otherwise, most people would be following the same religion by now. For example, you mentioned that you don't want to wear uncomfortable clothing or be separated from your male friend (to use some of the milder examples). Some Muslims, however (as well as some Christians), believe that doing these things is not merely a bad idea, but a mortal sin, a direct affront to their god (who, according to them, is the one true god), which condemns the sinner to a fiery hell after death. How would you test whether this claim was true or not ?
Of course, the government could mandate atheism
Even though I'm an atheist, I believe this would be a terrible idea.
When it works, it really works. You'll find no disagreement from anyone with a modicum of sense.
Well, this all depends on what you believe in. For example, some theists believe (or at least claim to believe) that certain actions -- such as wearing the wrong kind of clothes, or marrying the wrong kinds of people, etc. -- are mortal sins that provoke God's wrath. And when God's wrath is made manifest, it affects the entire nation, not just the individual sinners (there are plenty of Bible verses that seem to be saying the same thing).
If this belief is true, then stopping people from wearing sinful clothing or marrying in a sinful way or whatever is not merely a sensible thing to do, but pretty much a moral imperative. This is why (as far as I understand) some Christians are trying to turn our government into a Christian theocracy: they genuinely believe that it is their moral duty to do so. Since their beliefs are ultimately based on faith, they are not open to persuasion; and this is why I personally love the idea of a secular government.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T03:44:47.253Z · LW(p) · GW(p)
In this case, would you vote for an atheist and thus against a Christian, if you thought that the atheist candidate's policies were more beneficial to society than his Christian rival's ?
Possibly. Depends on how much better, how I expected both candidates' policies to change and how electable I considered them both.
For example, you mentioned that you don't want to wear uncomfortable clothing or be separated from your male friend (to use some of the milder examples). Some Muslims, however (as well as some Christians), believe that doing these things is not merely a bad idea, but a mortal sin, a direct affront to their god (who, according to them, is the one true god), which condemns the sinner to a fiery hell after death. How would you test whether this claim was true or not ?
I wouldn't. But I would test accompanying claims. For this particular example, I can't rule out the possibility of ending up getting sent to hell for this until I die. However, having heard what supporters of those policies say, I know that most Muslims who support this sort of idea of modest clothing claim that it causes women to be more respected, causes men exposed only to this kind of woman to be less lustful and some even claim it lowers the prevalence of rape. As I receive an optimal level of respect at the moment, I find the first claim implausible. Men in countries where it happens are more sexually frustrated and more likely to end up blowing themselves up. Countries imposing these sorts of standards harm women even more than they harm men. So that's implausible. And rape occurs less in cultures with more unsexualized nudity, which would indicate only a modest protective effect or none at all, or could even indicate that more covering up causes more rape.
It's not 100% out of the question that the universe has an evil god who orders people to do stupid things for his own amusement.
Funny, that's what they say about you...
I say you're wrong about atheism, but you don't consider that strong evidence in favor of Christianity.
For example, some theists believe (or at least claim to believe) that certain actions -- such as wearing the wrong kind of clothes, or marrying the wrong kinds of people, etc. -- are mortal sins that provoke God's wrath. And when God's wrath is made manifest, it affects the entire nation, not just the individual sinners (there are plenty of Bible verses that seem to be saying the same thing).
Ah. I see. Sounds plausible... ish... sort of.
Replies from: Bugmaster↑ comment by Bugmaster · 2011-12-21T08:15:14.396Z · LW(p) · GW(p)
Possibly. Depends on how much better, how I expected both candidates' policies to change and how electable I considered them both.
That's perfectly reasonable, but see my comments below.
For this particular example, I can't rule out the possibility of ending up getting sent to hell for this until I die. However, having heard what supporters of those policies say, I know that most Muslims who support this sort of idea of modest clothing claim that it causes women to be more respected...
Ok, so you've listed a bunch of empirically verifiable criteria, and evaluated them. This approach makes sense to me... but... it sounds to me like you're making your political ("atheist politician vs. Christian politician") and moral ("should I wear a burqa") choices based primarily (or perhaps even entirely) on secular reasoning. You would support the politician who will implement the best policies (and who stands a chance of being elected at all), regardless of his religion; and you would oppose social polices that demonstrably make people unhappy -- in this life, not the next. So, where does "godliness" come in ?
It's not 100% out of the question that the universe has an evil god who orders people to do stupid things for his own amusement.
I agree, but then, I don't have faith to inform me of any competing gods' existence. I imagine that if I had faith in a non-evil Christian god, who is also the only god, I'd peg the probability of the evil god's existence at exactly 0%. But it's possible that I'm misunderstanding what faith feels like "from the inside".
Ah. I see. Sounds plausible... ish... sort of.
Uh oh. :-)
↑ comment by TheOtherDave · 2011-12-21T02:07:25.432Z · LW(p) · GW(p)
I'm under the impression that you've just endorsed a legal system which safeguards against the consequences of appointing judges who don't agree with Christianity's model of right and wrong, but which doesn't safeguard against the consequences of appointing judges who don't agree with other religions' models of right and wrong.
Am I mistaken?
If you are endorsing that, then yes, I think you've endorsed a violation of the Establishment Clause of the First Amendment as generally interpreted.
Regardless, I absolutely do endorse testing the claims of various religions (and non-religions), and only acting on the basis of a claim insofar as we have demonstrable evidence for that claim.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T03:23:01.757Z · LW(p) · GW(p)
It might be because it's late, but I'm confused about your first paragraph. Can you clarify?
↑ comment by TimS · 2011-12-21T01:56:50.191Z · LW(p) · GW(p)
But Muslims say the exact same thing, only in reverse; and so does every other major religion,
Yes, but they're wrong.
and no person has the right to impose their faith onto others. This system of government protects everyone, Christians included.
When it works, it really works. You'll find no disagreement from anyone with a modicum of sense.
These two quotes are an interesting contrast to me. I think the Enlightenment concept of tolerance is an essential principle of just government. But you believe that there is a right answer on the religion question. Why does tolerance make any sense to you?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T02:09:02.922Z · LW(p) · GW(p)
Just to be clear, abandoning tolerance does not logically imply bringing back the Inquisition (or its Protestant equivalent),
How not? Hasn't it basically always resulted in either cruelty or separatism? The former is harmful to others, the latter dangerous to those who practice it. Are we defining tolerance differently? Tolerance makes sense to me for the same reason that if someone came up to me and said that the moon was made of green cheese because Omega said so, and then I ended up running into a whole bunch of people who said so and rarely listened to sense, I would not favor laws facilitating killing them. And if they said that it would be morally wrong for them to say otherwise, I would not favor causing them distress by forcing them to say things they think are wrong. Even though it makes no sense, I would avoid antagonizing them because I generally believe in not harming or antagonizing people.
But you believe that there is a right answer on the religion question.
Don't you? If you're an atheist, don't you believe that's the right answer?
Replies from: TimS↑ comment by TimS · 2011-12-21T02:29:54.778Z · LW(p) · GW(p)
It seems logically possible to me that government could favor a particular sect without necessarily engaging in immoral acts. For the favored sect, the government could pay the salary of pastors and the construction costs of churches. Education standards (even for home-schooled children) could include knowledge of particular theological positions of the sect. Membership could be a plus-factor in applying for government licenses or government employment.
As you note, human history strongly suggests government favoritism wouldn't stop there and would proceed to immoral acts. But it is conceivable, right? (And if we could edit out in-group bias, I think that government favoritism is the rational response to the existence of an objectively true moral proposition).
And you are correct that I used imprecise language about knowing the right answer on religion.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-21T03:31:46.718Z · LW(p) · GW(p)
It is conceivable. I consider it unlikely. It would probably be the beginning of a slippery slope, so I reject it on the grounds that it will lead to bad things.
Plus I wouldn't know which sect it should be, but we can rule out Catholicism, which will really make them angry, and all unfavored sects will grumble. (Some Baptists believe all Catholics are a prophesied evil. Try compromising between THEM.) And, you know, this very idea is what prompted one of the two genocides that brought part of my family to the New World.
And the government could ask favors of the sect in return for these favors, corrupting its theology.
Replies from: TimS↑ comment by lavalamp · 2011-12-20T23:47:44.842Z · LW(p) · GW(p)
... a theocracy ruled by Jesus, which some Christians (I'm literally so tired right now I can't remember if this is true or just something some believe, or where it comes from) believe will happen for a thousand years between the tribulation and the end of the world.
You are correct, some Christians believe that.
↑ comment by Nornagest · 2011-12-27T20:08:03.518Z · LW(p) · GW(p)
I'm literally so tired right now I can't remember if this is true or just something some believe, or where it comes from
You are probably thinking of premillennialism, which is a fairly common belief among Protestant denominations (particularly evangelical ones), but not a universal one. Catholic and Orthodox churches both reject it. As best I can tell it's fundamentally a Christian descendant of the Jewish messianic teachings, which are pretty weakly supported textually but tend to imply a messiah as temporal ruler; since Christianity already has its messiah, this in turn implies a second coming well before the final judgment and the destruction of the world. Eschatology in general tends to be pretty varied and speculative as theology goes, though.
↑ comment by Prismattic · 2011-12-20T06:33:59.304Z · LW(p) · GW(p)
Please define "soul".
↑ comment by Dreaded_Anomaly · 2011-12-20T03:07:27.171Z · LW(p) · GW(p)
On the flip side, your (and mine, and everyone else's) biological brain is currently highly susceptible to propaganda, brainwashing, indoctrination, and a whole slew of hostile manipulation techniques, and thus switching out your biological brain for an electronic one won't necessarily be a step down.
Also: transcranial magnetic stimulation, pharmaceuticals and other chemicals, physical damage...
↑ comment by TheOtherDave · 2011-12-20T03:39:03.166Z · LW(p) · GW(p)
Makes sense enough.
For my own part, two things:
I entirely agree with you that various forms of mistaken and fraudulent identity, where entities falsely claim to be me or are falsely believed to be me, are problematic. Indeed, there are versions of that happening right now in the real world, and they are a problem. (That last part doesn't have much to do with AI, of course.)
I agree that people being modified without their consent is problematic. That said, it's not clear to me that I would necessarily be more subject to being modified without my consent as a computer than I am as whatever I am now -- I mean, there's already a near-infinite assortment of things that can modify me without my consent, and there do exist techniques for making accidental/malicious modification of computers difficult, or at least reversible. (I would really have appreciated error-correction algorithms after my stroke, for example, or at least the ability to restore my mind from backup afterwards. So the idea that the kind of thing I am right now is the ne plus ultra of unmodifiability rings false for me.)
↑ comment by Laoch · 2011-12-22T00:06:24.861Z · LW(p) · GW(p)
Who wants to turn you into a computer? I'm confused. I don't want to turn anybody into anything, I have no sovereignty there nor would I expect it.
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T01:11:51.404Z · LW(p) · GW(p)
EY and Robin Hanson approve of emulating people's brains on computers.
Replies from: Nornagest, Laoch↑ comment by Nornagest · 2011-12-22T01:50:38.355Z · LW(p) · GW(p)
Approving of something in principle doesn't necessarily translate into believing it should be mandatory regardless of the subject's feelings on the matter, or even into advocating it in any particular case. I'd be surprised if EY in particular ever made such an argument, given the attitude toward self-determination expressed in his Metaethics and Fun Theory sequences; I am admittedly extrapolating from only tangentially related data, though. Not sure I've ever read anything of his dealing with the ethics of brain simulation, aside from the specific and rather unusual case given in Nonperson Predicates and related articles.
Robin Hanson's stance is a little different; his emverse is well-known, but as best I can tell he's founding it on grounds of economic determinism rather than ethics. I'm hardly an expert on the subject, nor an unbiased observer (from what I've read I think he's privileging the hypothesis, among other things), but everything of his that I've read on the subject parses much better as a Cold Equations sort of deal than as an ethical imperative.
↑ comment by Laoch · 2011-12-22T09:48:12.784Z · LW(p) · GW(p)
And? Does that mean forcing you to be emulated?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-22T18:34:37.511Z · LW(p) · GW(p)
Good point.
Replies from: Laoch↑ comment by Laoch · 2011-12-22T22:56:19.536Z · LW(p) · GW(p)
I'm sure you're pro self-determination, right? Or are you? One of the things that pushed me away from religion in the beginning was that there was no space for self-determination (not that there is much from a natural perspective); the idea of being owned is not a nice one to me. Some of us don't want to watch ourselves rot in a very short space of time.
↑ comment by [deleted] · 2011-12-22T23:52:07.855Z · LW(p) · GW(p)
Um, according to the Bible, the Abrahamic God's supposed to have done some pretty awful things to people on purpose, or directed humans to do such things. It's hard to imagine anything more like the definition of a petty tyrant than wiping out nearly all of humanity because they didn't act as expected; exhorting people to go wipe out other cultures, legislating victim blame into ethics around rape, sending actual fragging bears to mutilate and kill irreverent children?
I'm not the sort of person who assumes Christians are inherently bad people, but it's a serious point of discomfort with me that some nontrivial portion of humanity believes that a being answering to that description and those actions a) exists and b) is any kind of moral authority.
If a human did that stuff, they'd be described as a whimsical tyrant at the most charitable. Why's God supposed to be different?
Replies from: None↑ comment by [deleted] · 2012-01-20T22:56:21.855Z · LW(p) · GW(p)
While I agree with some of your other points, I'm not sure about this:
It's hard to imagine anything more like the definition of a petty tyrant than wiping out nearly all of humanity because they didn't act as expected
We shouldn't be too harsh until we are faced with either deleting a potentially self-improving AI that is not provably friendly or risking the destruction of not just our species but the destruction of all that we value in the universe.
Replies from: Raemon, Multiheaded↑ comment by Multiheaded · 2012-01-24T19:05:46.983Z · LW(p) · GW(p)
I don't understand the analogy. I see how deleting a superhuman AI with untold potential is a lot like killing many humans, but isn't it a point of God's omnipotence that humans can never even theoretically present a threat to Him or His creation (a threat that he doesn't approve of, anyway)?
Replies from: TheOtherDave, None↑ comment by TheOtherDave · 2012-01-24T19:43:55.196Z · LW(p) · GW(p)
Within the fictional universe of the Old and New Testaments, it seems clear that God has certain preferences about the state of the world, and that for some unspecified reason God does not directly impose those preferences on the world. Instead, God created humans and gave them certain instructions which presumably reflect or are otherwise associated with God's preferences, then let them go do what they would do, even when their doing so destroys things God values. And then every once in a while, God interferes with their doing those things, for reasons that are unclear.
None of that presupposes omnipotence in the sense that you mean it here, although admittedly many fans of the books have posited the notion that God possesses such omnipotence.
That said, I agree that the analogy is poor. Then again, all analogies will be poor. A superhumanly powerful entity doing and refraining from doing various things for undeclared and seemingly pointless and arbitrary motives is difficult to map to much of anything.
Replies from: Multiheaded↑ comment by Multiheaded · 2012-01-24T19:56:22.112Z · LW(p) · GW(p)
Yeah, I kind of realize that the problems of omnipotence, making rocks that one can't lift and all that, only really became part of the religious discourse in a more mature and reflection-prone culture, the ways of which would already have felt alien to the OT's authors.
↑ comment by [deleted] · 2012-03-06T08:27:33.537Z · LW(p) · GW(p)
Taking the Old Testament God as he is in the book of Genesis, this isn't clear at all. At least when talking about the long-term threat potential of humans.
Then the LORD God said, "Behold, the man has become like one of Us, knowing good and evil; and now, he might stretch out his hand, and take also from the tree of life, and eat, and live forever "--
or
And they said, Go to, let us build us a city and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth.
And the Lord came down to see the city and the tower, which the children of men builded.
And the Lord said, Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do.
Go to, let us go down, and there confound their language, that they may not understand one another's speech.
The whole idea of what exactly God is varied during the long centuries in which the stories were written.
↑ comment by NancyLebovitz · 2011-12-23T18:17:03.398Z · LW(p) · GW(p)
Do you have an opinion about whether an AI that wasn't an em could have a soul?
Replies from: AspiringKnitter↑ comment by AspiringKnitter · 2011-12-23T20:31:00.439Z · LW(p) · GW(p)
No. I haven't tested it. I haven't ever seen an AI or anything like that. I don't know what basis I'd have for theorizing.
↑ comment by NancyLebovitz · 2011-12-23T12:25:39.947Z · LW(p) · GW(p)
Comment score below threshold, 306 replies. (Now 307). Is this a record?
Replies from: JoachimSchipper↑ comment by JoachimSchipper · 2011-12-23T12:46:33.853Z · LW(p) · GW(p)
It does suggest that the "newest comment" section is sufficient to sustain a discussion.
comment by helm · 2011-01-25T16:39:11.856Z · LW(p) · GW(p)
Hello. I'm helm and I come from LW's "parent", reddit. I'm a rationalist by birth, although I grew up in a nondenominational Christian family.
Replies from: wedrifid↑ comment by wedrifid · 2011-01-25T16:49:19.095Z · LW(p) · GW(p)
I'm a rationalist by birth
You are? What species? (It couldn't be human!)
Replies from: helm↑ comment by helm · 2011-01-25T16:56:37.937Z · LW(p) · GW(p)
Slight exaggeration, of course. I know that by 14 my ideas were very mature, of the type "humans invented gods to explain the mysteries of the world" or "A conscious mind will likely find the thought of nonexistence abhorrent, thus the idea of eternal life".
But I might be wrong about what this forum is about! I haven't lurked very much.
Replies from: wedrifid↑ comment by wedrifid · 2011-01-25T23:45:45.939Z · LW(p) · GW(p)
Slight exaggeration, of course. I know that by 14 my ideas were very mature, of the type "humans invented gods to explain the mysteries of the world" or "A conscious mind will likely find the thought of nonexistence abhorrent, thus the idea of eternal life".
That's fairly impressive... for a human! ;)
But I might be wrong about what this forum is about!
Nope, you've got it spot on. Welcome! :)
comment by edgar · 2010-08-12T13:12:22.618Z · LW(p) · GW(p)
Hello. I am a professional composer and composition teacher, and an adjunct instructor teaching music aesthetics to motion graphic artists at the Fashion Institute of Technology and in the graduate computer arts department at the School of Visual Arts. I have a master's from the Juilliard School in composition and have been recorded on Newport Classics with Kurt Vonnegut and Michael Brecker. I live and work in New York City. I spend my life composing and explaining music to students who are not musicians, connecting the language of music to the principles of the visual medium. Saying the accurate thing, getting others to question me, letting them find their way, and admitting often that I am wrong is a lifelong journey.
comment by Larks · 2009-08-11T23:19:54.781Z · LW(p) · GW(p)
* Handle: Larks (also commonly Larklight, OxfordLark, Artrix)
* Name: Ben
* Sex: Male
* Location: Eastbourne, UK (a town about two hours from London and 1.5 from Cambridge).
* Age: at 17 I suspect I may be the baby of the group?
* Education: results permitting (to which I assign a probability in excess of 0.99) I'll be reading Mathematics and Philosophy at Oxford
* Occupation: As yet, none. Currently applying for night-shift work at a local supermarket
I came to LW through OB, which I found as a result of Bryan Caplan's writing on Econlog (or should it be at Econlog?). I fit much of the standard pattern: atheist, materialist, economist, reductionist, etc. Probably my only departure is being a Conservative Liberal rather than a libertarian; an issue of some concern to me is the disconnect between the US/Econlog/OB/LW/Rationalist group and the UK/Classical Liberal/Conservative Party group, both of which I am interested in. Though Hayek, of course, pervades all.
In an impressive display, I suppose, of cognitive dissonance, I realised that the Bible and Evolution were contradictory in year 4 (age 8), and so came to the conclusion that the continents had originally been separated into islands on opposite sides of the planet. Eden was on one side, evolution on the other, and then continental drift occurred. I have since rejected this hypothesis. I came to Rationalism partly as a result of debating on the NAGTY website.
There are probably two notable influences OB/LW have had on my life. Firstly, I've begun to reflexively refer to what would or would not be empirically the case under different policies, states of affairs, etc., thus making discourse notably more efficient (or at least, it makes it harder for other people to argue back. Hard to tell the difference.)
Secondly, I've given up trying to out-argue my irrational Marxist friend, and instead make money off him by making bets about political and economic matters. This does not seem to have affected his beliefs, but it is profitable.
comment by spriteless · 2009-07-20T02:15:21.923Z · LW(p) · GW(p)
I have not joined too recently, but I have started actually participating recently. My handle is no typo. Google tells me I'm the only one to use it, and my gender. Since I just drew attention to my gender, you can guess what it is without asking Google.
In real life I am a student and have to draw attention away from my gender instead.
Replies from: thomblake↑ comment by thomblake · 2009-07-20T02:21:47.641Z · LW(p) · GW(p)
I can hardly parse what you've written here. Are you trying to be mysterious, or is this my fault?
ETA: Thanks! That clears it up.
Replies from: spriteless↑ comment by spriteless · 2009-07-20T03:37:22.846Z · LW(p) · GW(p)
Completely my fault. I don't think verbally. I converted my thoughts into the first grammatically correct words to come to mind; it seems I did not actually convert them into usable language.
I joined awhile ago, however, I only started commenting recently. My handle is not a typo, although on many boards it is assumed a misspelling of 'spiritless.' When I run a Google search for my handle, I find mostly profiles I have created on social sites and wikis, some of which state my gender. Since I just drew attention to my gender, you can guess what it is without bothering to search.
In real life I am a student who is retaking English Composition 101.
Replies from: CronoDAS↑ comment by CronoDAS · 2009-07-20T06:30:05.145Z · LW(p) · GW(p)
In real life I am a student who is retaking English Composition 101.
You know, I consider myself to be a good writer, but I never managed to pass the first-year Expository Writing course at Rutgers University. I just couldn't get my head around the subject matter I had to write about, and I was left with nothing to say. Eventually, that damn English course was the only thing standing between me and graduation. I got special permission from the Dean to take a course titled "Scientific and Technical Writing" instead of that damn Expos class, and I ended up with an A in the course.
Anyway, welcome to LessWrong!
comment by RHollerith (rhollerith_dot_com) · 2009-07-03T20:20:35.961Z · LW(p) · GW(p)
Handle: rhollerith_dot_com
Name: Richard Hollerith.
Location: just north of San Francisco, California.
Suppose you are reading this because you are reading every comment under the user name rhollerith_dot_com from newest to oldest. Well, you have almost finished with that: there are only 4 comments older than the comment you are reading. But there are more comments written by me under a different user name, namely, rhollerith.
comment by evtujo · 2009-04-21T05:02:31.938Z · LW(p) · GW(p)
* Handle: evtujo
* Location: Montana
* Age: 40
* Gender: Male
* Education: Physics BS, CompSci Masters
* Occupation: Programmer
I've been following OB pretty much since the first couple of months. I was trying to think when I would have started calling myself a rationalist. I can't think of any time in my life when I wouldn't have thought of myself that way. Even 20+ years ago when I thought the world was 6000 years old. I just wasn't a relentless rationalist. I even tried using all the rationalism I could muster to try to develop evangelistic witnessing "scripts". It was during that process that I talked myself out of religion.
Reading OB/LW has made me aware that my rationality skills aren't as sophisticated as I once thought. But one of my current strongest interests in the rationality game is to learn ways to help my children develop their rationality muscles. Also wouldn't hate helping my friends/family/co-workers on this path. I guess I'm just an evangelist at heart.
comment by XFrequentist · 2009-04-18T20:33:42.384Z · LW(p) · GW(p)
- Name: Alex Demarsh
- Age: 26
- Education: MSc Epidemiology/Biostatistics
- Occupation: Epidemiologist
- Location: Ottawa, Canada
- Hobbies: Reading, travel, learning, sport.
I found OB/LW through Eliezer's Bayes tutorial, and was immediately taken in. It's the perfect mix of several themes that are always running through my head (rationality, atheism, Bayes, etc.) and a great primer on lots of other interesting stuff (QM, AI, ev. psych., etc). The emphasis on improving decision making and clear thinking plus the steady influx of interesting new areas to investigate makes for an intoxicating ambrosia. Very nice change from many other rationality blogs, which seem to mostly devote themselves to the fun-but-eventually-tiresome game of bashing X for being stupid/illogical/evil (clearly, X is all of these things and more, but that's not the point). Generally very nice writing, too.
As for real-life impact, LW has:
- grown my reading list exponentially,
- made me want to become a better writer,
- forced me to admit that my math is nowhere near where it needs to be,
- made my unstated ultimate goal of understanding the world as a coherent whole seem less silly, and
- altered my list of possible/probable PhD topics.
I'll put some thought into my rationalist origins story, but it may have been that while passing several (mostly enjoyable) summers as a door-to-door salesman, I encountered the absolutely horrible decision making mechanisms of lots and lots of people. It kind of made me despair for the world, and probably made me aspire to do better. But that could be a false narrative.
comment by John_Maxwell (John_Maxwell_IV) · 2009-04-16T18:11:02.841Z · LW(p) · GW(p)
Many of the people sharing their info in this thread seem to have been around for a while (like me). It's not that I mind reading about y'all, but MBlume was asking for people who've recently joined, right?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2009-04-16T18:15:09.267Z · LW(p) · GW(p)
Blame me :-)
comment by cousin_it · 2009-04-16T13:45:23.467Z · LW(p) · GW(p)
Uh... ciphergoth, I'll take your cue.
- Handle: cousin_it
- Name: Vladimir Slepnev
- Location: Moscow
- Age: 26
- Occupation: Programmer
Have math degree, interested in computation theory and game theory. Speak Russian, English and Italian. My work project, my open source project, my music.
comment by teme · 2009-04-16T18:07:05.155Z · LW(p) · GW(p)
I'm Paul Tiffany, the Executive Director of a new non-profit organization called the Teme Foundation. Teme leverages undergraduates to advance multidisciplinary interactions between information technologies and new IT disciplines. Our pilot project, designed by Dr. Goertzel, is a computational biological study of CR gene expression datasets using openbiomind.
Teme is trying to bridge the gap between the memetic influence of existing brands and academia. To a large degree, Teme's mission is the same as that of a recently founded executive education program, centered instead on the magnitude of work undergraduates can contribute to this end.
Teme's main focus is to create an undergraduate research initiative, where two classes of members (undergraduates and advisors) can create, volunteer for, finance, and promote worthwhile projects. While the site is currently in pre-alpha, we hope to bring a flexible incentive structure for grants and scholarships, including Diamandis-style prizes, microphilanthropy, and other wikinomic innovations.
My own work centers on the Friendliness Problem, which I address through a theoretical model for heredity in economic systems (heredity has been postulated extensively, but not modeled). Whilst attempting to facilitate other undergraduate research, I am utilizing Teme to find advisors for what is, in principle, dangerous knowledge.
Our website is http://temetics.org. There you'll find a splash page (we're hoping to have a coordinated launch) with my contact info (also here). We're getting a lot of support from the Immortality Institute in initializing Teme, but since the Foundation's mission is more in line with the interests of those here, I'm spamming you all for any help you can give.
We're having a meeting with Justin Loew, the Executive Director of the Immortality Institute, today, April 16 at 5pm EST: http://www.ustream.tv/channel/sunday-evening-update
We're trying to coordinate a launch for TemeUVa on Friday, April 24 at 5pm EST. The agenda isn't set in stone yet, but we're looking for guest speakers (and lurkers) if you're interested in participating or know any celebrities.
Please email me to join the pre-alpha team, learn about our meetings, join an email list, submit research project ideas, etc. etc. There's so much work to be done, and I personally would love the sober enthusiasm this site's denizens can bring.